Commit a804347 makes fs functions rethrow errors when the callback is
omitted. While that is the right thing to do, it's a change from the old
v0.8 behavior, where such errors were silently ignored.
To give users time to upgrade, temporarily disable the rethrow and replace
it with a function that warns once about the deprecated behavior.
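As a rough sketch of that fallback (the helper name, flag, and warning text
here are illustrative, not the actual lib/fs.js code), it could look
something like this:

    // Warn once, then swallow errors silently, as v0.8 did.
    var warnedMissingCallback = false;

    function maybeCallback(cb) {
      if (typeof cb === 'function') return cb;
      return function(err) {
        if (err && !warnedMissingCallback) {
          warnedMissingCallback = true;
          console.error('fs: calling an asynchronous function without a ' +
                        'callback is deprecated; the error was ignored');
        }
      };
    }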
Close #5005
This solves the problem of calling `readable.pipe(writable)` after the
readable stream has already emitted 'end', as is often the case when
writing simple HTTP proxies.
The spirit of streams2 is that things will work properly, even if you
don't set them up right away on the first tick.
This approach breaks down, however, because pipe()ing from an ended
readable will just do nothing. No more data will ever arrive, and the
writable will hang open forever, never being ended.
However, that does not solve the case of adding an `on('end')` listener
after the stream has received the EOF chunk, if it was the first chunk
received (and thus the length was 0, and 'end' got emitted). So, with
this change, we defer the 'end' event emission until the read() function
is called.
Also, in pipe(), if the source has emitted 'end' already, we call the
cleanup/onend function on nextTick. Piping from an already-ended stream
is thus the same as piping from a stream that is in the process of
ending.
Updates many tests that were relying on 'end' coming immediately, even
though they never read() from the req.
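A hypothetical illustration of the case this fixes, in the spirit of a
simple HTTP proxy (the upstream host and port are placeholders):

    var http = require('http');

    http.createServer(function(req, res) {
      // By the time this timer fires, a bodyless GET request has typically
      // already reached EOF. With this change the pipe still completes and
      // the upstream request gets ended instead of hanging open.
      setTimeout(function() {
        var upstream = http.request({
          host: 'upstream.example.org',
          method: req.method,
          path: req.url
        });
        req.pipe(upstream);
        upstream.on('response', function(upstreamRes) {
          upstreamRes.pipe(res);
        });
      }, 100);
    }).listen(8000);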
Fix #4942
The function that pre-emptively fills the Readable queue relies on
recursion through:
stream.push(chunk) ->
maybeReadMore(stream, state) ->
if (not reading more and < hwm) stream.read(0) ->
stream._read() ->
stream.push(chunk) -> repeat.
Since this was only calling read() a single time and then relying on a
future nextTick to collect more data, it ended up causing a nextTick
recursion error (and potentially even a RangeError) if you have a very
high highWaterMark and are getting very small chunks pushed
synchronously in _read (as happens with TLS, or many simple test
streams).
This change implements a new approach, so that read(0) is called
repeatedly as long as it is effective (that is, the length keeps
increasing), and thus quickly fills up the buffer for streams such as
these, without any stacks overflowing.
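A rough sketch of the looping strategy (the real code in
lib/_stream_readable.js differs in detail):

    function maybeReadMore(stream, state) {
      var len = -1;
      // Keep calling read(0) while it is effective, i.e. while we are below
      // the high water mark and the buffered length keeps increasing.
      while (!state.reading &&
             state.length < state.highWaterMark &&
             state.length !== len) {
        len = state.length;
        stream.read(0);
      }
    }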
Make `ee.emit('error')` not throw when domains are active.
Create an empty error only when the event is handled by a domain.
Add a test for when no error is provided to an 'error' event.
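For illustration only, the behavior being tested looks roughly like this:
an emitter created inside an active domain, emitting 'error' with no
listener and no error argument:

    var domain = require('domain');
    var EventEmitter = require('events').EventEmitter;

    var d = domain.create();
    d.on('error', function(err) {
      // A placeholder error object is created because none was provided.
      console.log('handled by domain:', err);
    });

    d.run(function() {
      var ee = new EventEmitter();
      ee.emit('error');   // does not throw
    });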
Fix #4948
This adds a check before setting the incoming parser to null. Under
certain circumstances it will already have been set to null by
freeParser(). Without the check, node crashes because it tries to set a
property on a parser that is already null.
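A sketch of the kind of guard described, with illustrative names that do
not match lib/http.js exactly:

    function detachIncoming(socket) {
      // freeParser() may already have nulled out the parser, so guard the
      // assignment instead of blindly setting a property on null.
      if (socket.parser) {
        socket.parser.incoming = null;
      }
    }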
When calling setImmediate with extra arguments, the `this` keyword in the
callback would refer to the global object, but when calling setImmediate
without extra arguments, `this` would refer to the returned handle object.
This commit fixes that inconsistency so that `this` is always set to the
handle object. The handle object was chosen for performance reasons.
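For example (illustrative; regular functions are used so `this` is not
lexically bound):

    var withArgs = setImmediate(function(arg) {
      console.log(this === withArgs, arg);   // true extra
    }, 'extra');

    var withoutArgs = setImmediate(function() {
      console.log(this === withoutArgs);     // true
    });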
Also, this seems to occasionally cause some annoying file-locking
errors on Windows. Not sure if this is the best fix, but it seems
to make the warnings go away in that spot.
If you call `z.flush(); z.write('foo');` then it would try to write 'foo'
before the flush was done, triggering an assertion in the zlib binding.
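A small reproduction of the previously failing pattern (the compressed
output is discarded; only the ordering matters):

    var zlib = require('zlib');

    var z = zlib.createGzip();
    z.resume();      // drain the readable side
    z.flush();
    z.write('foo');  // must now wait until the flush has completed
    z.end();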
Closes #4950
Consider the following example:

    console.log(Buffer('ú').toString('ascii'));

Before this commit, the contents of the buffer were used as-is, and hence
it printed 'ú'.
Now, it prints 'C:'. Perhaps not much of an improvement, but it conforms to
what the documentation says it does: strip off the high bits.
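To spell out the byte math: 'ú' encodes to the UTF-8 bytes 0xC3 0xBA, and
clearing the high bit of each byte (& 0x7F) leaves 0x43 0x3A, which are
'C' and ':'.

    var buf = Buffer('ú');               // <Buffer c3 ba>
    console.log(buf.toString('ascii'));  // prints 'C:'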
Fixes #4371.