socketio / engine.io

The engine used in the Socket.IO JavaScript server, which manages the low-level transports such as HTTP long-polling and WebSocket.

Home Page: https://socket.io

License: MIT License

JavaScript 61.16% TypeScript 38.84%
javascript nodejs real-time websocket

engine.io's People

Contributors

afshinm, albertyfwu, alexlmeow, amiuhle, binlain, cadorn, darrachequesne, defunctzombie, dependabot[bot], digawp, e3dio, einaros, eugendueck, jphire, jxck, kapouer, lpinca, mjgil, mokesmokes2, nicokaiser, nkzawa, rase-, rauchg, raynos, roamm, ruxkor, tootallnate, xixixao, yields, yosiat


engine.io's Issues

Keep `path` configurable

Defaulting path to /engine.io will not work in our setup after all. We want to keep a prefix that mounts at the root.

If we make the path prefix a constant on the module it will apply to all engine.io instances in the app which is probably not desirable. It seems that it needs to stay as an option.

It could be an undocumented option or have a different name than path but we will continue to need it.

What do you think?
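The request above can be sketched as follows. This is a hypothetical illustration, not the actual engine.io code: the path prefix stays a per-instance option with a default, rather than a module-level constant shared by every instance in the app.

```javascript
// Hypothetical sketch: resolve the request path per instance from an
// option, falling back to a default, instead of using a module-level
// constant that would apply to all engine.io instances at once.
function resolvePath(opts) {
  opts = opts || {};
  return typeof opts.path === 'string' ? opts.path : '/engine.io';
}
```

With this shape, one instance can mount at the root (`{ path: '/' }`) while another keeps the default prefix.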

Squid blocking.IO requests after upgrade

I stumbled upon a network that was behind a Squid 3.1.6 transparent proxy. With default engine.IO settings, the connection from engine.IO-client to engine.IO server failed somewhere right after the handshake.

Basically, here's what happens:

  1. The initial handshake GET-request gets made correctly from the engine.IO-client to the server
  2. Engine.IO server responds with the SID, etc (response: 86:0{"sid":"2762152803199324","upgrades":["websocket","flashsocket"],"pingTimeout":60000})
  3. Engine.IO-client starts making POST-requests, and also tries to upgrade to a websocket connection.
  4. Upgrade to Websocket fails
  5. None of the polling requests go through correctly from a few seconds after the failed upgrade onwards. That is, the client gets a 500 Internal Server Error response (killed by Squid?)

It is worth mentioning that if an actual request gets made before the client tries to upgrade to websocket, everything seems fine. Problem is that all of the requests that are made slightly after trying to upgrade to websocket get killed.

I don't know if it's standard behaviour for Squid to block HTTP requests to a certain host for a few seconds if the client tried to do something "illegal" (in my case, upgrading to websocket)?

If it is, perhaps it's worth thinking about a workaround. Right now, if the client receives a 500 response from the server, it never retries sending the last message with a new request, so the original request is never delivered to the server. In my case that stalls the client application. Obviously, I could implement timeouts myself, check whether every request got a response, and re-make the ones that didn't, but this would be a nice feature built into the engine.IO client itself.
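The retry behavior the report asks for can be sketched like this. This is a hedged illustration, not the engine.IO client API: `send` stands in for a hypothetical transport function that returns true on success.

```javascript
// Sketch: when a request fails (e.g. a 500 from a proxy), re-send the
// same payload a bounded number of times instead of dropping it.
function sendWithRetry(send, payload, maxRetries) {
  let attempts = 0;
  let delivered = false;
  while (!delivered && attempts <= maxRetries) {
    attempts++;
    delivered = send(payload); // true means the server acknowledged it
  }
  return { delivered, attempts };
}
```

A real client would of course back off between attempts rather than loop synchronously; the point is only that the payload is kept until it is acknowledged or the retry budget runs out.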


Here's a sample of a POST request being made by the client to the server right after trying to upgrade to websocket, and you can see Squid killing the request in the middle by answering with error 500 to the client:

Request
POST http://listhings.com/engine.io/default/?uid=61636229604482789757082425058&transport=polling&sid=7449753105742354

Request Headers:
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Content-Length:58
Content-type:text/plain;charset=UTF-8
Cookie:sessionId=[secret]; secretKey=[secret]; __utma=111177113.1031802160.1335278203.1335357532.1335420851.7; __utmb=111177113.5.10.1335420851; __utmc=111177113; __utmz=111177113.1335278203.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _chartbeat2=vc94hngfdl7uozv1.1334935107546.1335422009418.00000000000001
Host:listhings.com
Origin:http://listhings.com
Referer:http://listhings.com/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.163 Safari/535.19

Query String Parameters:
uid:61636229604482789757082425058
transport:polling
sid:7449753105742354

Request Payload:
55:4{"type":"subscribe","version":1,"channels":["u.p.ud"]}

Response Status Code: 500 Internal Server Error

Response Headers:
Connection:close
Date:Thu, 26 Apr 2012 06:33:20 GMT
Via:1.0 localhost (squid/3.1.6)
X-Cache:MISS from localhost
X-Cache-Lookup:MISS from localhost:8080

CORS problems in Firefox

For various reasons our server is running multiple websocket servers, meaning that one of these servers is running on a different port than our main HTTP server. This works fine in Google Chrome, but in Firefox the engine.io instance running on the different port receives an 'xhr poll error' on the client and fails. This seems to be a CORS issue: the origin of our script has a different port than the port it's polling.

The engine.io spec mentions that CORS should be supported, and I see that Access-Control-Allow-Origin is being packed in the xhr polling transport, but I can't find a way to configure what origins are allowed-- it's always set to the value of the request's 'Origin' header. Is there a way to specify a list of allowed origins?
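The requested option can be sketched as below. The function name and shape are assumptions for illustration, not the real engine.io API: the idea is to check the request's Origin header against a configured whitelist instead of always reflecting it.

```javascript
// Sketch: decide the Access-Control-Allow-Origin value for a request.
// Returns the origin to echo back, or null to omit the header (deny).
function accessControlAllowOrigin(requestOrigin, allowedOrigins) {
  if (!allowedOrigins) return requestOrigin; // current behavior: reflect Origin
  return allowedOrigins.indexOf(requestOrigin) !== -1
    ? requestOrigin
    : null;
}
```

Note that the port is part of the origin, so `http://example.com` and `http://example.com:8080` are distinct entries in such a whitelist, which is exactly the situation described above.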

JSONP causes a download dialog on some mobile browsers

As the engine.io server responds to form posts with `Content-Type: text/plain` (in polling.js onDataRequest), the Nokia N8 will show a download dialog on submit. Setting `Content-Type: text/html` instead fixes this. I hope the change wouldn't cause any problems elsewhere.

Building fails; missing dist-directory

The dist-directory is not included. Therefore make fails with

$ make
/bin/sh: dist/engine.io.js: No such file or directory
make: *** [build] Error 1

Quickfix:

$ mkdir dist

or

--- a/Makefile  Sun Mar 25 11:20:22 2012 -0300
+++ b/Makefile  Wed Mar 28 16:37:54 2012 +0200
@@ -5,12 +5,14 @@
 all: build build-dev

 build:
+   @mkdir -p dist
    @./node_modules/.bin/browserbuild \
        -g eio \
        -m engine.io-client -b lib/ \
        lib > dist/engine.io.js

 build-dev:
+   @mkdir -p dist
    @./node_modules/.bin/browserbuild \
        -g eio \
        -d -m engine.io-client -b lib/ \

Set cache headers for GET responses or implement forceBust

What is the preferred way to set cache headers on the Engine.IO server's GET responses, to prevent mobile clients such as Android from racing rapidly when trying to connect (the xhr will otherwise be served from the browser cache)? The client's forceBust option (timestamping the query) is not yet finalized, I think (it is not carried over from the constructor options).

To test this, send a message from the server on connect, and initialize the client with `{upgrade: false, transports: ['polling']}` to see it clearly. The client's socket.onmessage (on Android, for example) will start firing in a rapid loop with the initial message. Desktop browsers work OK; they are not as eager to cache requests.
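Both mitigations mentioned above can be sketched side by side. These are hypothetical helpers, not existing engine.io options: server-side no-cache headers for polling GET responses, and a client-side "forceBust" that timestamps the query string.

```javascript
// Server side sketch: headers that tell browsers not to cache
// polling GET responses.
function noCacheHeaders() {
  return {
    'Cache-Control': 'no-store, no-cache, must-revalidate, max-age=0',
    'Pragma': 'no-cache',
    'Expires': '0'
  };
}

// Client side sketch: append a timestamp parameter so every poll URL
// is unique and cannot be served from the cache.
function bustUrl(url, now) {
  const sep = url.indexOf('?') !== -1 ? '&' : '?';
  return url + sep + 't=' + now;
}
```

Either approach alone should break the cached-response loop; doing both is belt and suspenders.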

polling socket dies in first ~5 seconds

With the server set up like this:

var io = engine.attach(server, {allowUpgrades: false, transports: ['polling']});

And client like:

var socket = new eio.Socket({port: this.port, transports: ['polling'], upgrade: false});

Data can be sent over the socket, but as soon as the ~5 seconds is up, socket closes with an error:

  engine:socket sending packet "open" ({"sid":"9wriKgPAqJzd6UqGAAAA","upgrades":[],"pingTimeout":60000}) +0ms
  engine:socket flushing buffer to transport +0ms
  engine:socket transport error +5s

[Error: poll connection closed prematurely]

Using Chrome 21.0.1180.75 / Mac OS X 10.8

Connection fails using engine.io-client

Polling

This is from the browser console. No errors are thrown on the server. The onError messages are my logging.

GET http://chanjs.aws.af.cm/vein/default/?uid=5662024393677787497326917946&transport=polling&sid=725856132894867 500 (Internal Server Error)

onError: {description: 500, message: "xhr poll error", type: "TransportError"}

WebSocket is closed before the connection is established.

POST http://chanjs.aws.af.cm/vein/default/?uid=5662024393677787497326917946&transport=polling&sid=725856132894867 500 (Internal Server Error)

onError: {description: 500, message: "xhr post error", type: "TransportError"}
WebSocket

This is from the server console.

/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/transports/websocket.js:69
    this.socket.send(data);
                ^
TypeError: Object #<IncomingMessage> has no method 'send'
    at WebSocket.send (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/transports/websocket.js:69:17)
    at Socket.flush (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/socket.js:232:20)
    at Socket.sendPacket (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/socket.js:218:10)
    at Socket.onOpen (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/socket.js:49:8)
    at new Socket (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/socket.js:30:8)
    at Server.handshake (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/server.js:180:16)
    at Server.handleRequest (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/server.js:161:10)
    at Server.<anonymous> (/mnt/var/vcap.local/dea/apps/chanjs-0-9dcc0f3a6cee0cf7093994fa5a92ee97/app/node_modules/vein/node_modules/engine.io/lib/engine.io.js:129:14)
    at Server.emit (events.js:70:17)
    at HTTPParser.onIncoming (http.js:1572:12)

Browser is Chromium 19.0.1084.52 Arch x64. App is hosted on AppFog (which uses CloudFoundry). Apparently CloudFoundry doesn't support websockets but shouldn't polling still work?
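The stack trace above shows `send` being called on an IncomingMessage rather than a real WebSocket. A defensive sketch of that call site (an assumption about a possible fix, not the actual patch) would check the object before writing:

```javascript
// Sketch: only call `send` when the underlying object actually looks
// like a WebSocket; an http IncomingMessage has no such method.
function safeSend(socket, data) {
  if (socket && typeof socket.send === 'function') {
    socket.send(data);
    return true;
  }
  return false; // caller can fall back to closing the transport
}
```

The deeper fix is presumably not to construct a websocket transport at all when the proxy (here, CloudFoundry) never completed a real upgrade.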

Unhandled 'error' event

I'm seeing this error taking down engine.io in a real-world load test of ~20k clients. I can't reproduce it with a simple stress test - there seems to be something about clients in the wild that cause it. Is it possible that the transport is throwing two errors before clearTransport() is called?

events.js:66
throw arguments[1]; // Unhandled 'error' event
^
Error: reserved fields must be empty
at WebSocket.Transport.onError (/home/ec2-user/node_modules/engine.io/lib/transport.js:77:13)
at WebSocket.EventEmitter.emit (events.js:91:17)
at Receiver.self._receiver.onerror (/home/ec2-user/node_modules/engine.io/node_modules/ws/lib/WebSocket.js:542:10)
at Receiver.error (/home/ec2-user/node_modules/engine.io/node_modules/ws/lib/Receiver.js:301:8)
at Receiver.processPacket (/home/ec2-user/node_modules/engine.io/node_modules/ws/lib/Receiver.js:187:10)
at Receiver.add (/home/ec2-user/node_modules/engine.io/node_modules/ws/lib/Receiver.js:93:24)
at Socket.firstHandler (/home/ec2-user/node_modules/engine.io/node_modules/ws/lib/WebSocket.js:500:22)
at Socket.EventEmitter.emit (events.js:88:17)
at TCP.onread (net.js:403:14)
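If the transport really does emit 'error' twice before clearTransport() runs, one possible guard (sketched here as an assumption, not the project's actual fix) is to make the error handler fire at most once:

```javascript
// Sketch: wrap an error handler so a second 'error' emitted before
// the transport is cleared cannot escape as an unhandled 'error' event.
function onceError(handler) {
  let fired = false;
  return function (err) {
    if (fired) return false; // swallow repeats
    fired = true;
    handler(err);
    return true;
  };
}
```

This mirrors what `emitter.once('error', ...)` gives you, but keeps later errors swallowed instead of unhandled.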

npm ERR! 404 You should bug the author to publish it

As Isaacs says, I believe this is not possible in npm (yet)? Is that because this is still alpha?

alfred@alfred-AMILO-Pi-2515:~$ npm info engine.io
npm http GET https://registry.npmjs.org/engine.io
npm http 404 https://registry.npmjs.org/engine.io

npm ERR! 404 'engine.io' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it
npm ERR! 404 
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, or http url, or git url.
npm ERR! 
npm ERR! System Linux 3.0.0-15-generic
npm ERR! command "node" "/home/alfred/.nvm/v0.6.8/bin/npm" "info" "engine.io"
npm ERR! cwd /home/alfred
npm ERR! node -v v0.6.8
npm ERR! npm -v 1.1.0-2
npm ERR! code E404
npm ERR! message 404 Not Found: engine.io
npm ERR! errno {}
npm ERR! 
npm ERR! Additional logging details can be found in:
npm ERR!     /home/alfred/npm-debug.log
npm not ok

FlashSocket transport broken?

The FlashSocket transport is most likely broken. It contains an "init" function that is never called, and there is no "manager".

Maybe this needs to be initialized (with slightly different config handling) in the Server constructor like websocket? (There would be no chance to change the policyfile port and origins afterwards)

test-acceptance does not exist

$ make test-acceptance
make: *** No rule to make target `test-acceptance'. Stop.

Not sure if test-acceptance is supposed to be there (a grep for that word doesn't yield results) and if Readme.md is stale (or fresher than the tests), so rather than editing Readme, I put it here as an issue.

Conflation

I'd like to see engine.io support a feature called conflation, i.e. the removal of some messages based on criteria like

  • the current performance of the client targeted to receive the message
  • the frequency of messages
  • a newer message making stale all older messages of the same "topic" (a term used in messaging software, somewhat related to how socket.io uses "rooms")

So a more general version of the volatile feature in socket.io.

Conflation is especially beneficial to the performance of both server and client when broadcasting and multicasting (e.g. rooms) are frequently used, and it protects the server from getting bogged down memory-wise by a single slow consumer whose buffer would otherwise grow to heaven (or until the heartbeat kills the connection, which could still be a lot of memory depending on the message frequency / heartbeat rate and the number of slow consumers). [1]

In case my description above failed to convey the usefulness of conflation, http://magmasystems.blogspot.jp/2006/08/conflation.html has a brief description of the feature and its application to the distribution of price quotes in finance. IBM, too, uses conflation for the same purpose: http://publib.boulder.ibm.com/infocenter/imds/v3r0/index.jsp?topic=/com.ibm.imds.doc/welcome.html .

If engine.io wants to enable conflation feature based on the client's performance consuming messages, it has to get support from the engine.io layer, because that feature depends on the client's state (connection open and drained?), which is - understandably - hidden from the application layer. Conflation based on message frequency alone can obviously be done completely in the application layer, as the application has control over how often it calls emit and can throttle it without the help of engine.io.

There's a rather straightforward way to implement it so that it is both flexible in terms of the conflation logic, yet does not require complex logic inside engine.io itself, and I've actually already implemented it in socket.io v0.9.8.

Here's a simplified pseudo code diff, leaving out a couple of intermediate steps:

before

  • in myApp
    • io.emit(myJavascriptObject);
  • in socket.io/transport
    • transport.write(encodePacket(myJavascriptObject))

after

  • in myApp, configuration
    • io.set('conflater', function(messages) { /* for example: */ return [messages[messages.length - 1]]; });
  • in myApp, runtime
    • io.emit(myJavascriptObject);
  • in socket.io/transport
    • conflationBuffer.push(myJavascriptObject);
    • .onDrained(function() { transport.actualWriteMessages(encodePackets(io.get('conflater')(conflationBuffer))); }

If no conflater function has been configured during initialization, no buffering or calling conflater will be done.

The conflater function can

  • just return that buffer unchanged, in which case no conflation is performed
  • simply remove elements from the array, performing conflation
  • remove elements and replace them with fewer or different elements, i.e. performing aggregation
  • even add elements, for whatever unknown reason I don't know and don't currently care about

The above is a simplification of the algorithm, not the whole truth, however: in reality, the functions in socket.io/transport are NOT given the myJavascriptObject as provided by the client, but the already-encoded version of it. They only get to see encodePacket(myJavascriptObject). There are 2 good reasons for this:

  • myJavascriptObject will be encoded only once (in SocketNamespace.packet(..))
  • the transports can be given serialized packet versions straight out of the RedisStore or whatever other store there might be that has the need to serialize messages

Now, I would not want to hand the encoded message into the application layer for the following 2 reasons:

  • the app layer shouldn't know about how stuff gets encoded
  • the app layer will have a hard time working on encoded strings, rather than on proper JS objects

I have an idea how to solve this in a way that

  • hides the lower level encoding internals from the app layer
  • avoids multiple encodings / decodings of the same message (i.e. caching of results)
  • avoids having to maintain a 'cache hash table' or similar with all the related problems (when to garbage collect the cache?)
  • works for both scenarios without the need to serialize up until the point data is sent to the client (in socket.io: MemoryStore) and with that need (RedisStore)

The question is: is the conflation feature deemed important enough, and is it, as I believe, impossible to implement without changing engine.io? In that case, I would actually like to prepare a pull request.
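The pseudo code above can be made runnable as a small sketch. The names (`flushConflated`, the conflater signature) follow the issue's own proposal and are not an existing engine.io API:

```javascript
// Sketch of the proposed hook: messages accumulate in a buffer, and a
// user-supplied conflater reduces the buffer right before flushing.
function flushConflated(buffer, conflater, write) {
  const out = conflater ? conflater(buffer) : buffer.slice();
  out.forEach(write);      // actually commit to the transport
  buffer.length = 0;       // the buffer is consumed on flush
  return out;
}
```

A "keep only the latest" conflater, the price-quote case from the linked article, is then a one-liner: `msgs => [msgs[msgs.length - 1]]`.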

Introduce `flush` and `drain` events

`flush` is emitted just before the writeBuffer is committed to the transport.
`drain` is emitted when the writeBuffer has been reset to `[]`.

Both are emitted on the server (with the socket as the first parameter) and on the socket itself.
Needs docs.

Authentication

I see that there is no way to authenticate a connection. One could argue that authentication should be done by a higher-level API on top of engine.io, e.g. by overriding the Server.prototype.verify method, but that method is called synchronously, so that wouldn't be possible either.

So what I would suggest is either:

  • Adding support for authentication during the handshaking dance
  • Make the verify method async so it can be overridden by a third party.
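The second suggestion can be sketched as below. This is a hypothetical shape for an async `verify`, not the current engine.io signature, and the token check is purely illustrative (a real implementation could call back asynchronously, e.g. after a database lookup; here it is synchronous for brevity):

```javascript
// Sketch: a callback-style `verify` a third party could override.
// Calls back with (err, ok) instead of returning a boolean.
function verify(req, fn) {
  const ok = !!(req.query && req.query.token === 'expected-token'); // illustrative check
  fn(ok ? null : new Error('unauthorized'), ok);
}
```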

Upon `close`, clear `writeBuffer` on `nextTick`

This allows us to free memory but still give the opportunity for the developer to subscribe to error or close events to create a new reference to the object and avoid its garbage collection prior to us setting it to null.

Test fail: `server send callback should execute in multipart packet`

The `server send callback should execute in multipart packet` test fails about 20% of the time for me.

I think it is too sensitive.

1) server send callback should execute in multipart packet:
 Error: expected 1 to equal 2
  at Assertion.assert (/pinf/workspaces/github.com/c9/smith.io/node_modules/engine.io/node_modules/expect.js/expect.js:99:13)
  at Assertion.equal (/pinf/workspaces/github.com/c9/smith.io/node_modules/engine.io/node_modules/expect.js/expect.js:200:10)
  at Assertion.be (/pinf/workspaces/github.com/c9/smith.io/node_modules/engine.io/node_modules/expect.js/expect.js:73:24)
  at Object._onTimeout (/pinf/workspaces/github.com/c9/smith.io/node_modules/engine.io/test/server.js:815:26)

clarity on engine.io vs. socket.io

It'd be great if you could explain a little about the differences right now. I think it's clear that engine.io will focus on the transport layer, but is it ready for use?

I noticed that socket.io doesn't list engine.io as a dependency so that makes me wonder what stage this project is in or if the transport layer has been fully extracted from socket.io. Some clarity on this issue would be awesome, thanks!

Consider getting rid of client tracking

With #22, we would need to either:

a) Provide abstract interfaces for talking to other clients, since they could be in other processes or computers.
b) Get rid of .clients altogether, and let messaging between clients be handled by the developer.

Feature request: Make messaging reliable across reconnects (no lost messages)

There is a WebSocket sub-protocol called MBWS which allows exactly that, by using sequence numbers and a 'resync' protocol when reconnecting a failed connection:

http://tools.ietf.org/html/draft-hapner-hybi-messagebroker-subprotocol-01

Currently, if I'm not mistaken, if there is any error doing a send / doWrite(), the packet is lost.

This feature does not require the WebSocket protocol, but can be generalized to cover the polling / streaming cases as well. And it doesn't necessarily need to be done in the format specified by MBWS, it won't be in the non-WebSocket cases anyway.

This would have to be in the protocol right from v1.0, otherwise it'll be hard to add later.

What's missing from MBWS, though, is the possibility to send volatile packets: packets that are not buffered and will not be re-sent (and, as in socket.io, will not be sent at all if the connection is not open and drained).
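The core of the MBWS-style mechanism can be sketched like this. All names are hypothetical; the point is only the bookkeeping: packets get sequence numbers, stay buffered until acknowledged, and the pending set is what a resync would re-send after a reconnect, while volatile packets skip the buffer entirely.

```javascript
// Sketch: sequence-numbered, ack-based reliable delivery.
function makeReliableSender() {
  let seq = 0;
  const unacked = [];
  return {
    send(msg, volatile) {
      if (volatile) return null;        // never buffered, never re-sent
      unacked.push({ seq: ++seq, msg });
      return seq;
    },
    ack(upTo) {                         // receiver confirms everything <= upTo
      while (unacked.length && unacked[0].seq <= upTo) unacked.shift();
    },
    pending() {                         // re-send these on resync
      return unacked.map(p => p.msg);
    }
  };
}
```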

unhandled transport error

I'm seeing some exceptions (but not a lot) :

UNCAUGHT EXCEPTION
[Inside 'uncaughtException' event] Error: poll connection closed prematurely
at XHR.onError (/home/test/project/node_modules/engine.io/lib/transport.js:71:13)
at IncomingMessage.onClose (/home/test/project/node_modules/engine.io/lib/transports/polling.js:82:12)
at IncomingMessage.emit (events.js:64:17)
at abortIncoming (http.js:1386:11)
at Socket.<anonymous> (http.js:1459:5)
at Socket.emit (events.js:88:20)
at Array.0 (net.js:335:10)
at EventEmitter._tickCallback (node.js:192:41)

The error event is emitted by lib/transports/polling.js#Polling.prototype.onPollRequest's inner onClose() and is not handled by the Socket's `this.transport.once('error', this.onError.bind(this));`

Also the same onPollRequest inner onClose() function doesn't call cleanup() as onDataRequest's onClose() does.

Regards

Server crashes if transport-parameter is wrong

While developing engine.io for Java, I discovered that I can crash the server by submitting a wrong transport as a GET parameter.

This happens when switching from the polling to the websocket transport while the websocket URL still contains `transport=polling`.

/Users/tox/src/websockettest/engine/engine.io/lib/transports/index.js:32
  if ('string' == typeof req.query.j) {
                                  ^
TypeError: Cannot read property 'j' of undefined
    at new polling (/Users/tox/src/websockettest/engine/engine.io/lib/transports/index.js:32:35)
    at Server.onWebSocket (/Users/tox/src/websockettest/engine/engine.io/lib/server.js:228:23)
    at Server.<anonymous> (native)
    at Server.emit (events.js:67:17)
    at Server.handleUpgrade (/Users/tox/src/websockettest/engine/engine.io/node_modules/websocket.io/lib/server.js:71:10)
    at Server.handleUpgrade (/Users/tox/src/websockettest/engine/engine.io/lib/server.js:208:11)
    at Server.<anonymous> (/Users/tox/src/websockettest/engine/engine.io/lib/engine.io.js:134:16)
    at Server.emit (events.js:88:20)
    at Socket.<anonymous> (http.js:1425:14)
    at TCP.onread (net.js:354:27)
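The missing guard can be sketched as follows (an assumption about a possible fix, not the actual patch): validate the `transport` query parameter on upgrade requests before constructing a transport, so a mismatched value gets a clean error instead of a crash.

```javascript
// Sketch: an upgrade request is only valid when its query names an
// upgradeable transport (e.g. 'websocket'), not 'polling'.
function upgradeTransportValid(query, upgradeable) {
  return !!(query && upgradeable.indexOf(query.transport) !== -1);
}
```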

make bench fails (missing colors dependency)

$ make bench

module.js:340
throw err;
^
Error: Cannot find module 'colors'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:362:17)
at require (module.js:378:17)
at Object. (engine.io/bench/runner.js:6:14)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:492:10)
make: *** [bench] Error 1

Adding

"colors": "*"

to the dev-dependencies and another npm install did not do the trick:

$ make bench

engine.io/bench/runner.js:56
console.log('\n Starting benchmark '.grey + first.yellow);
^
TypeError: Cannot read property 'yellow' of undefined
at Object. (engine.io/bench/runner.js:56:51)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:492:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
make: *** [bench] Error 1

Make buffer consumption asynchronous

Right now, transports assume the buffers are present in the process where the connections originated.
We need to turn this into a pub/sub interface to allow those buffers to be placed in external message queues.

some sockets are left open

I get approximately 1,000 sockets left open (netstat count) after two or three days running engine.io 0.1.0 on node 0.6.19. Restarting engine.io frees them.
The ratio of open sockets to connected clients oscillates between 1.3 and 2.0, especially when there are thousands of clients.
When clients leave, and only connect to the engine.io server for a very short time, the ratio slowly climbs (to 15 after a couple of days).
I can also see two exceptions in the logs :

This one happens quite often :

Error: not opened
    at WebSocket.send (/xxx/node_modules/engine.io/node_modules/websocket.io/node_modules/ws/lib/WebSocket.js:175:16)
    at WebSocket.write (/xxx/node_modules/engine.io/node_modules/websocket.io/lib/protocols/hybi.js:101:13)
    at WebSocket.send (/xxx/node_modules/engine.io/node_modules/websocket.io/lib/socket.js:106:8)
    at WebSocket.send (/xxx/node_modules/engine.io/lib/transports/websocket.js:73:17)
    at Socket.flush (/xxx/node_modules/engine.io/lib/socket.js:255:20)
    at Socket.sendPacket (/xxx/node_modules/engine.io/lib/socket.js:241:10)
    at Socket.send (/xxx/node_modules/engine.io/lib/socket.js:225:8)

Error: not opened
    at WebSocket.send (/xxx/node_modules/engine.io/node_modules/websocket.io/node_modules/ws/lib/WebSocket.js:175:16)
    at WebSocket.write (/xxx/node_modules/engine.io/node_modules/websocket.io/lib/protocols/hybi.js:101:13)
    at WebSocket.send (/xxx/node_modules/engine.io/node_modules/websocket.io/lib/socket.js:106:8)
    at WebSocket.send (/xxx/node_modules/engine.io/lib/transports/websocket.js:73:17)
    at Socket.flush (/xxx/node_modules/engine.io/lib/socket.js:255:20)
    at Socket.sendPacket (/xxx/node_modules/engine.io/lib/socket.js:241:10)
    at Object._onTimeout (/xxx/node_modules/engine.io/lib/socket.js:114:10)
    at Timer.ontimeout (timers.js:94:19)

And this one less often:

Error: Error: read ETIMEDOUT
    at WebSocket.onError (/xxx/node_modules/engine.io/lib/transport.js:77:13)
    at WebSocket.emit (events.js:67:17)
    at Socket. (/xxx/node_modules/engine.io/node_modules/websocket.io/lib/socket.js:29:12)
    at Socket.emit (events.js:88:20)
    at Array.0 (net.js:301:14)
    at EventEmitter._tickCallback (node.js:190:39)

EDIT: show whole lines of stack traces
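The "not opened" traces above come from writing to a websocket that is no longer open. One defensive sketch (an assumption, not the project's fix, using the standard ws readyState convention where OPEN is 1):

```javascript
// Sketch: check readyState before writing, and report failure so the
// caller can buffer the packet or close the transport instead of
// letting the send throw.
const OPEN = 1; // standard WebSocket readyState for an open connection

function writeIfOpen(ws, data) {
  if (ws && ws.readyState === OPEN) {
    ws.send(data);
    return true;
  }
  return false;
}
```

This does not explain the socket leak itself, but it would keep the heartbeat timer's send from throwing on half-dead connections.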

Test files are missing

The Makefile is pointing to a test folder, which is not available in the current source tree.
Is it missing or is it a todo?
