If a worker spawned a new subprocess with process.env, NODE_UNIQUE_ID would
be part of that env, making the new subprocess believe it was a worker. This
would cause confusion if the subprocess were to listen on a port, since the
server handle request would then be relayed to the worker.
This patch removes the NODE_UNIQUE_ID variable from process.env on startup, so
any subprocess spawned by a worker is a normal process with no cluster
behavior.
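A minimal sketch of the behaviour after this patch, from inside a worker
('child.js' is a hypothetical script):

    var cluster = require('cluster');
    var spawn = require('child_process').spawn;

    if (cluster.isWorker) {
      // NODE_UNIQUE_ID has already been stripped from process.env at startup,
      // so this child inherits a clean environment and behaves like a plain
      // Node process, not another cluster worker.
      spawn(process.execPath, ['child.js'], { env: process.env });
    }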
request.end() would sometimes try to write a zero-length buffer to the socket.
Don't do that; it triggers an unnecessary EPIPE when the other end has closed
the connection.
Fixes #3257.
child_process.fork() supports sending native handle objects; this patch adds
support for sending net.Server and net.Socket objects by converting them to a
native handle object and back to a useful object again.
Note that when sending a Socket that was emitted by a net.Server, the
server.connections property becomes null, because it is no longer possible to
know when the socket is destroyed.
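A minimal usage sketch of what this enables (illustrative, not the actual
implementation or tests; the child script path and port are hypothetical):

    // parent.js
    var child_process = require('child_process');
    var net = require('net');

    var child = child_process.fork(__dirname + '/child.js');
    var server = net.createServer();
    server.listen(1337, function() {
      child.send('server', server);   // converted to a native handle on the way over
    });

    // child.js
    process.on('message', function(msg, server) {
      if (msg === 'server') {
        // ...and converted back into a usable net.Server here.
        server.on('connection', function(socket) {
          socket.end('handled by the child');
        });
      }
    });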
This frees us from manually having to copy over functions to SlowBuffer's
prototype (which has bitten us multiple times in the past).
As an added bonus, the `inspect()` function is now shared between Buffer
and SlowBuffer, removing some duplicate code.
Closes #3228.
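The general idea, as a tiny illustrative sketch with generic names (not the
actual lib/buffer.js change):

    var util = require('util');

    function Fast(length) { this.length = length; }
    Fast.prototype.inspect = function() { return '<Fast ' + this.length + '>'; };

    function Slow(length) { this.length = length; }
    util.inherits(Slow, Fast);   // methods are shared via the prototype chain,
                                 // nothing needs to be copied over by hand

    console.log(new Slow(4).inspect());   // '<Fast 4>'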
So instead of:
node.js:201
throw e; // process.nextTick error, or 'error' event on first tick
^
You will now see:
path/to/foo.js:1
throw new Error('bar');
^
This is a subset of isaacs' patch here:
https://github.com/joyent/node/issues/3235
The difference is that this patch purely addresses the exception output,
but does not try to make any behavior changes / improvements.
Regarding discussion in #3198. Passing the worker as an argument
to an event emitted on the worker is redundant, and an unnecessary
break in consistency with the events on the ChildProcess objects.
It was removed from 'exit', but 'listening' and others were
overlooked. This corrects that oversight.
test: fixes due to new cluster api.
- changed worker `death` to `exit`.
- corrected argument type expected by worker `exit` handler.
test: more tests of cluster.worker death
cluster: fixed arguments on worker 'exit' event
The worker 'exit' event now emits arguments consistent with the
corresponding event in the child_process module.
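A minimal sketch of the resulting API:

    var cluster = require('cluster');

    if (cluster.isMaster) {
      var worker = cluster.fork();

      // Same argument shape as child_process: (code, signal), no worker argument.
      worker.on('exit', function(code, signal) {
        console.log('worker exited with code %d, signal %s', code, signal);
      });
    } else {
      process.exit(0);
    }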
Move parsers.free(parser) to a single function, which also
nulls all of the various references we hang on them.
Also, move the parser.on* methods out of the closure, so that
there's one shared definition of each, instead of re-defining
for each parser in a spot where they can close over references
to other request-specific objects.
Conflicts:
lib/http.js
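The shape of the refactor, as an illustrative sketch with generic names (not
the actual lib/http.js code):

    var pool = [];

    // One shared handler definition; it closes over nothing request-specific.
    function onHeadersComplete(info) {
      this.incoming = info;
    }

    function allocParser() {
      var parser = pool.pop() || {};
      parser.onHeadersComplete = onHeadersComplete;   // shared, not re-defined
      return parser;
    }

    // A single place that frees a parser: null every reference hung on it so
    // request-specific objects can be garbage collected, then return it to
    // the pool.
    function freeParser(parser) {
      parser.incoming = null;
      parser.socket = null;
      parser.onHeadersComplete = null;
      pool.push(parser);
    }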
* Calling fs.ReadStream.destroy() or fs.WriteStream.destroy() twice would close
the file descriptor twice. That's bad because the file descriptor may have
been repurposed in the meantime.
* A bad value check in fs.ReadStream.prototype.destroy() would prevent a stream
created with fs.createReadStream({fd:0}) from getting closed.
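A minimal illustration of the first point:

    var fs = require('fs');

    var rs = fs.createReadStream(__filename);
    rs.on('open', function() {
      rs.destroy();
      rs.destroy();   // now a no-op instead of closing the (possibly re-used) fd again
    });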
This is a squashed commit of the main work done on the domains-wip branch.
The original commit messages are preserved for posterity:
* Implicitly add EventEmitters to active domain
* Implicitly add timers to active domain
* domain: add members, remove ctor cb
* Don't hijack bound callbacks for Domain error events
* Add dispose method
* Add domain.remove(ee) method
* A test of multiple domains in process at once
* Put the active domain on the process object
* Only intercept error arg if explicitly requested
* Typo
* Don't auto-add new domains to the current domain
While an automatic parent/child relationship is sort of neat,
and leads to some nice error-bubbling characteristics, it also
results in keeping a reference to every EE and timer created,
unless domains are explicitly disposed of.
* Explicitly adding one domain to another is still fine, of course.
* Don't allow circular domain->domain memberships
* Disposing of a domain removes it from its parent
* Domain disposal turns functions into no-ops
* More documentation of domains
* More thorough dispose() semantics
* An example using domains in an HTTP server
* Don't handle errors on a disposed domain
* Need to push, even if the same domain is entered multiple times
* Array.push is too slow for the EE Ctor
* lint domain
* domain: docs
* Also call abort and destroySoon to clean up event emitters
* domain: Wrap destroy methods in a try/catch
* Attach tick callbacks to active domain
* domain: Only implicitly bind timers, not explicitly
* domain: Don't fire timers when disposed.
* domain: Simplify naming so that MakeCallback works on Timers
* Add setInterval and nextTick to domain test
* domain: Make stack private
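A minimal usage sketch of the resulting API (an HTTP server wrapped in a
domain, as mentioned above; the port is illustrative):

    var domain = require('domain');
    var http = require('http');

    http.createServer(function(req, res) {
      var d = domain.create();
      d.add(req);
      d.add(res);

      d.on('error', function(err) {
        // Errors from anything bound to this domain land here instead of
        // becoming an uncaught exception that kills the process.
        res.statusCode = 500;
        res.end('internal error\n');
        d.dispose();
      });

      d.run(function() {
        process.nextTick(function() {
          throw new Error('boom');   // caught by the domain's 'error' handler
        });
      });
    }).listen(8000);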
The idea here is to reduce the number of times that `setRawMode()` is called
on the `input` stream, since it is expensive, and simply pause()/resume()
should not call it.
So now `setRawMode()` only gets called when the Interface instance is created,
and then when `Interface#close()` is called.
Test case included.
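For reference, the calls involved (a minimal sketch; assumes a TTY stdin):

    var readline = require('readline');

    var rl = readline.createInterface({
      input: process.stdin,
      output: process.stdout
    });

    rl.pause();    // no longer toggles raw mode on the input stream
    rl.resume();   // ditto
    rl.close();    // raw mode is only restored here, when the interface closes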
If the fs.open method is modified via AOP-style extension, in between
the creation of an fs.WriteStream and the processing of its action
queue, then the test of whether or not the method === fs.open will fail,
because fs.open has been replaced.
The solution is to save a reference to fs.open on the stream itself when
the action is placed in the queue.
This fixes isaacs/node-graceful-fs#6.
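For context, an illustrative sketch of the kind of AOP-style wrapping that
triggers this (a generic logging wrapper, not graceful-fs itself):

    var fs = require('fs');

    var originalOpen = fs.open;
    fs.open = function(path, flags, mode, callback) {
      console.log('open:', path);
      return originalOpen.apply(fs, arguments);
    };

    // Any check of the form `method === fs.open` made against a reference
    // saved before this reassignment now fails, which is why the stream keeps
    // its own reference to the fs.open it was created with.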
Technically saying `tty.ReadStream#setRawMode()` is correct,
but since a typical user cannot instantiate `tty.ReadStream` themselves,
and 99% of the time the only instance is `process.stdin`,
then a little clarification seemed necessary.
Instead of allocating a new 64KB buffer each time when checking if there is
something to transform, continue to use the same buffer. Once the buffer is
exhausted, allocate a new buffer. This solves the problem of huge allocations
when small fragments of data are processed, but will also continue to work
well with big pieces of data.
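The allocation pattern, as a simplified illustrative sketch (not the actual
zlib code; assumes each fragment fits in one 64KB chunk):

    var CHUNK = 64 * 1024;

    var pool = new Buffer(CHUNK);   // one shared buffer, reused across calls
    var offset = 0;

    // Hand out slices of the shared buffer; only allocate a fresh 64KB chunk
    // once the current one is exhausted, instead of 64KB per small fragment.
    function claim(bytesNeeded) {
      if (offset + bytesNeeded > pool.length) {
        pool = new Buffer(CHUNK);
        offset = 0;
      }
      var slice = pool.slice(offset, offset + bytesNeeded);
      offset += bytesNeeded;
      return slice;
    }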
Prevents alignment issues when people create a typed array from a buffer.
Unaligned loads or stores are less efficient and (on some architectures) unsafe.
This should only be minimally used, since the `terminal` value will usually be
what you are expecting. This option is specifically for the case where `terminal`
is false, but you still want colors to be output (or vice-versa).
Previously this was a module-level setting, meaning that all REPL instances
had to share the same writer function. Turning it into one of the options
allows individual REPL instances to use their own writer function.
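A sketch of the per-instance options, assuming the color option described
above is the repl `useColors` option:

    var repl = require('repl');
    var util = require('util');

    // Each instance gets its own writer now; nothing is shared at module level.
    repl.start({
      prompt: 'custom> ',
      writer: function(obj) {
        return util.inspect(obj, false, 2, true);   // per-instance formatting
      },
      useColors: true   // force colors even when `terminal` is false
    });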
The overall goal here is to make readline more interoperable with other node
Streams like say a net.Socket instance, in "terminal" mode.
See #2922 for all the details.
Closes #2922.
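A minimal sketch of the interoperability this enables, assuming a
telnet-style line server (the port is illustrative):

    var net = require('net');
    var readline = require('readline');

    net.createServer(function(socket) {
      var rl = readline.createInterface({
        input: socket,
        output: socket,
        terminal: true   // treat the socket like a TTY: prompt, cursor movement, etc.
      });

      rl.setPrompt('> ');
      rl.prompt();
      rl.on('line', function(line) {
        socket.write('you typed: ' + line + '\n');
        rl.prompt();
      });
    }).listen(1337);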
This reverts commit 443071db57.
Patch was overly complicated and made some incorrect assumptions about the
position of the cursor being at the bottom of the screen. @rlidwka and I are
working on getting a proper implementation written.
This patch kills the worker once it has lost its connection with the parent.
However, if the worker is committing suicide, other measures are used.
This patch adds a worker.disconnect() method that will stop the worker from accepting
new connections and then stop the IPC. This allows the worker to die gracefully.
When the IPC channel has been disconnected, a 'disconnect' event is emitted.
The patch also adds a cluster.disconnect() method; this will call worker.disconnect() on
all connected workers. When the workers are disconnected it will then close all server
handles. This allows the cluster itself to terminate gracefully.
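A minimal sketch of the new methods and event (the port is illustrative):

    var cluster = require('cluster');
    var http = require('http');

    if (cluster.isMaster) {
      var worker = cluster.fork();

      worker.on('disconnect', function() {
        console.log('worker IPC channel closed');
      });

      // Stop every worker gracefully (each one gets worker.disconnect()) and
      // let the master close its server handles and exit on its own.
      setTimeout(function() {
        cluster.disconnect();
      }, 1000);
    } else {
      http.createServer(function(req, res) {
        res.end('ok');
      }).listen(8000);
    }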
Currently, a child process does not emit the 'exit' event until 'close' events
have been received on all three of the child's stdio streams. This change makes
the child object emit 'exit' when the child exits, and a new 'close' event when
all stdio streams are closed.
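A minimal sketch of the distinction (assumes a Unix-like system with cat):

    var spawn = require('child_process').spawn;

    var child = spawn('cat', [__filename]);
    child.stdout.pipe(process.stdout);

    child.on('exit', function(code, signal) {
      // The process itself has terminated; stdio may still hold buffered data.
      console.log('exit:', code, signal);
    });

    child.on('close', function() {
      // All of the child's stdio streams have closed as well.
      console.log('close');
    });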