# How to Write and Run Benchmarks in Node.js Core

## Table of Contents

* [Prerequisites](#prerequisites)
  * [HTTP Benchmark Requirements](#http-benchmark-requirements)
  * [Benchmark Analysis Requirements](#benchmark-analysis-requirements)
* [Running benchmarks](#running-benchmarks)
  * [Running individual benchmarks](#running-individual-benchmarks)
  * [Running all benchmarks](#running-all-benchmarks)
  * [Comparing Node.js versions](#comparing-nodejs-versions)
  * [Comparing parameters](#comparing-parameters)
  * [Running Benchmarks on the CI](#running-benchmarks-on-the-ci)
* [Creating a benchmark](#creating-a-benchmark)
  * [Basics of a benchmark](#basics-of-a-benchmark)
  * [Creating an HTTP benchmark](#creating-an-http-benchmark)

## Prerequisites
|
2015-01-28 00:28:41 +01:00
|
|
|
|
2017-02-07 18:10:09 +01:00
|
|
|
Basic Unix tools are required for some benchmarks.
|
|
|
|
[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
|
|
|
|
which need to be included in the global Windows `PATH`.
|
|
|
|
|
|
|
|
### HTTP Benchmark Requirements

Most of the HTTP benchmarks require a benchmarker to be installed. This can be
either [`wrk`][wrk] or [`autocannon`][autocannon].

`Autocannon` is a Node.js script that can be installed using
`npm install -g autocannon`. It will use the Node.js executable that is in the
path. In order to compare two HTTP benchmark runs, make sure that the
Node.js version in the path is not altered.
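
Since the executable resolved from the `PATH` is what gets benchmarked, a
quick sanity check with standard shell tools before each run can help
(illustrative commands):

```console
$ which node    # confirm which executable is first in the PATH
$ node --version
```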
|
2016-08-05 11:34:50 +02:00
|
|
|
|
2017-05-17 00:53:27 +02:00
|
|
|
`wrk` may be available through one of the available package managers. If not, it can
|
|
|
|
be easily built [from source][wrk] via `make`.
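
As a sketch, a from-source build might look like the following, assuming `git`
and a C toolchain are present (the install step and destination are
conventions, not requirements):

```console
$ git clone https://github.com/wg/wrk.git
$ cd wrk
$ make
$ sudo cp wrk /usr/local/bin/
```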
|
2016-08-05 11:34:50 +02:00
|
|
|
|
2017-03-25 19:01:06 +01:00
|
|
|
By default, `wrk` will be used as the benchmarker. If it is not available,
|
2017-05-17 00:53:27 +02:00
|
|
|
`autocannon` will be used in its place. When creating an HTTP benchmark, the
|
|
|
|
benchmarker to be used should be specified by providing it as an argument:
|
2016-08-05 11:34:50 +02:00
|
|
|
|
|
|
|
`node benchmark/run.js --set benchmarker=autocannon http`
|
|
|
|
|
|
|
|
`node benchmark/http/simple.js benchmarker=autocannon`
|
2015-06-23 05:27:17 +02:00
|
|
|
|
http2: introducing HTTP/2
At long last: The initial *experimental* implementation of HTTP/2.
This is an accumulation of the work that has been done in the nodejs/http2
repository, squashed down to a couple of commits. The original commit
history has been preserved in the nodejs/http2 repository.
This PR introduces the nghttp2 C library as a new dependency. This library
provides the majority of the HTTP/2 protocol implementation, with the rest
of the code here providing the mapping of the library into a usable JS API.
Within src, a handful of new node_http2_*.c and node_http2_*.h files are
introduced. These provide the internal mechanisms that interface with nghttp
and define the `process.binding('http2')` interface.
The JS API is defined within `internal/http2/*.js`.
There are two APIs provided: Core and Compat.
The Core API is HTTP/2 specific and is designed to be as minimal and as
efficient as possible.
The Compat API is intended to be as close to the existing HTTP/1 API as
possible, with some exceptions.
Tests, documentation and initial benchmarks are included.
The `http2` module is gated by a new `--expose-http2` command line flag.
When used, `require('http2')` will be exposed to users. Note that there
is an existing `http2` module on npm that would be impacted by the introduction
of this module, which is the main reason for gating this behind a flag.
When using `require('http2')` the first time, a process warning will be
emitted indicating that an experimental feature is being used.
To run the benchmarks, the `h2load` tool (part of the nghttp project) is
required: `./node benchmarks/http2/simple.js benchmarker=h2load`. Only
two benchmarks are currently available.
Additional configuration options to enable verbose debugging are provided:
```
$ ./configure --debug-http2 --debug-nghttp2
$ NODE_DEBUG=http2 ./node
```
The `--debug-http2` configuration option enables verbose debug statements
from the `src/node_http2_*` files. The `--debug-nghttp2` enables the nghttp
library's own verbose debug output. The `NODE_DEBUG=http2` enables JS-level
debug output.
The following illustrates as simple HTTP/2 server and client interaction:
(The HTTP/2 client and server support both plain text and TLS connections)
```jt client = http2.connect('http://localhost:80');
const req = client.request({ ':path': '/some/path' });
req.on('data', (chunk) => { /* do something with the data */ });
req.on('end', () => {
client.destroy();
});
// Plain text (non-TLS server)
const server = http2.createServer();
server.on('stream', (stream, requestHeaders) => {
stream.respond({ ':status': 200 });
stream.write('hello ');
stream.end('world');
});
server.listen(80);
```
```js
const http2 = require('http2');
const client = http2.connect('http://localhost');
```
Author: Anna Henningsen <anna@addaleax.net>
Author: Colin Ihrig <cjihrig@gmail.com>
Author: Daniel Bevenius <daniel.bevenius@gmail.com>
Author: James M Snell <jasnell@gmail.com>
Author: Jun Mukai
Author: Kelvin Jin
Author: Matteo Collina <matteo.collina@gmail.com>
Author: Robert Kowalski <rok@kowalski.gd>
Author: Santiago Gimeno <santiago.gimeno@gmail.com>
Author: Sebastiaan Deckers <sebdeckers83@gmail.com>
Author: Yosuke Furukawa <yosuke.furukawa@gmail.com>
PR-URL: https://github.com/nodejs/node/pull/14239
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
2017-07-17 19:17:16 +02:00
|
|
|
#### HTTP/2 Benchmark Requirements

To run the `http2` benchmarks, the `h2load` benchmarker must be used. The
`h2load` tool is a component of the `nghttp2` project and may be installed
from [nghttp2.org][] or built from source.
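
Package names vary by platform; as an illustration, two common possibilities
are:

```console
$ sudo apt-get install nghttp2-client  # Debian/Ubuntu
$ brew install nghttp2                 # macOS (Homebrew)
```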

`node benchmark/http2/simple.js benchmarker=h2load`

### Benchmark Analysis Requirements

To analyze the results, `R` should be installed. Use one of the available
package managers or download it from https://www.r-project.org/.

The R packages `ggplot2` and `plyr` are also used and can be installed using
the R REPL.

```R
$ R
install.packages("ggplot2")
install.packages("plyr")
```

In the event that a message is reported stating that a CRAN mirror must be
selected first, specify a mirror by adding the `repo` parameter.

If we used the "http://cran.us.r-project.org" mirror, it could look something
like this:

```R
install.packages("ggplot2", repo="http://cran.us.r-project.org")
```

Of course, use an appropriate mirror based on location.
A list of mirrors is [located here](https://cran.r-project.org/mirrors.html).
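
Alternatively, a default mirror can be set once per R session so that
subsequent `install.packages` calls need no `repo` argument (a sketch using
base R's `options`):

```R
options(repos = c(CRAN = "http://cran.us.r-project.org"))
install.packages("ggplot2")
install.packages("plyr")
```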

## Running benchmarks

### Running individual benchmarks

This can be useful for debugging a benchmark or doing a quick performance
measure. But it does not provide the statistical information to make any
conclusions about the performance.

Individual benchmarks can be executed by simply executing the benchmark script
with node.

```console
$ node benchmark/buffers/buffer-tostring.js

buffers/buffer-tostring.js n=10000000 len=0 arg=true: 62710590.393305704
buffers/buffer-tostring.js n=10000000 len=1 arg=true: 9178624.591787899
buffers/buffer-tostring.js n=10000000 len=64 arg=true: 7658962.8891432695
buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 4136904.4060201733
buffers/buffer-tostring.js n=10000000 len=0 arg=false: 22974354.231509723
buffers/buffer-tostring.js n=10000000 len=1 arg=false: 11485945.656765845
buffers/buffer-tostring.js n=10000000 len=64 arg=false: 8718280.70650129
buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 4103857.0726124765
```

Each line represents a single benchmark with parameters specified as
`${variable}=${value}`. Each configuration combination is executed in a
separate process. This ensures that benchmark results aren't affected by the
execution order due to V8 optimizations. **The last number is the rate of
operations measured in ops/sec (higher is better).**

Furthermore, a subset of the configurations can be specified by setting them
in the process arguments:

```console
$ node benchmark/buffers/buffer-tostring.js len=1024

buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
```
|
2016-02-21 13:14:39 +01:00
|
|
|
|
|
|
|
### Running all benchmarks

Similar to running individual benchmarks, a group of benchmarks can be executed
by using the `run.js` tool. To see how to use this script,
run `node benchmark/run.js`. Again this does not provide the statistical
information to make any conclusions.

```console
$ node benchmark/run.js arrays

arrays/var-int.js
arrays/var-int.js n=25 type=Array: 71.90148040747789
arrays/var-int.js n=25 type=Buffer: 92.89648382795582
...

arrays/zero-float.js
arrays/zero-float.js n=25 type=Array: 75.46208316171496
arrays/zero-float.js n=25 type=Buffer: 101.62785630273159
...

arrays/zero-int.js
arrays/zero-int.js n=25 type=Array: 72.31023859816062
arrays/zero-int.js n=25 type=Buffer: 90.49906662339653
...
```

It is possible to execute more groups by adding extra process arguments.

```console
$ node benchmark/run.js arrays buffers
```

### Comparing Node.js versions

To compare the effect of a new Node.js version use the `compare.js` tool. This
will run each benchmark multiple times, making it possible to calculate
statistics on the performance measures. To see how to use this script,
run `node benchmark/compare.js`.

As an example of how to check for a possible performance improvement, the
[#5134](https://github.com/nodejs/node/pull/5134) pull request will be used.
This pull request _claims_ to improve the performance of the
`string_decoder` module.

First build two versions of Node.js, one from the master branch (here called
`./node-master`) and another with the pull request applied (here called
`./node-pr-5134`).

To run multiple compiled versions in parallel you need to copy the output of
the build: `cp ./out/Release/node ./node-master`. Check out the following
example:

```console
$ git checkout master
$ ./configure && make -j4
$ cp ./out/Release/node ./node-master

$ git checkout pr-5134
$ ./configure && make -j4
$ cp ./out/Release/node ./node-pr-5134
```

The `compare.js` tool will then produce a csv file with the benchmark results.

```console
$ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
```

For analysing the benchmark results use the `compare.R` tool.

```console
$ cat compare-pr-5134.csv | Rscript benchmark/compare.R

                                                                                      improvement confidence      p.value
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=1024 encoding=ascii            12.46 %        *** 1.165345e-04
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=1024 encoding=base64-ascii     24.70 %        *** 1.820615e-15
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=1024 encoding=base64-utf8      23.60 %        *** 2.105625e-12
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=1024 encoding=utf8             14.04 %        *** 1.291105e-07
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=128 encoding=ascii              6.70 %          * 2.928003e-02
...
```

In the output, _improvement_ is the relative improvement of the new version;
hopefully this is positive. _confidence_ tells if there is enough
statistical evidence to validate the _improvement_. If there is enough evidence
then there will be at least one star (`*`), more stars is just better. **However
if there are no stars, then don't make any conclusions based on the
_improvement_.** Sometimes this is fine, for example if no improvements are
expected, then there shouldn't be any stars.

**A word of caution:** Statistics is not a foolproof tool. If a benchmark shows
a statistically significant difference, there is a 5% risk that this
difference doesn't actually exist. For a single benchmark this is not an
issue. But when considering 20 benchmarks it's normal that one of them
will show significance, when it shouldn't. A possible solution is to instead
consider at least two stars (`**`) as the threshold, in that case the risk
is 1%. If three stars (`***`) is considered the risk is 0.1%. However this
may require more runs to obtain (can be set with `--runs`).
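
For example, to raise the number of runs per benchmark (a sketch; run
`node benchmark/compare.js` without arguments to see the accepted options):

```console
$ node benchmark/compare.js --runs 60 --old ./node-master --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
```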

_For the statistically minded, the R script performs an [independent/unpaired
2-group t-test][t-test], with the null hypothesis that the performance is the
same for both versions. The confidence field will show a star if the p-value
is less than `0.05`._
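
For intuition, the test is conceptually equivalent to the following R snippet,
where the vectors hold hypothetical measured rates for one benchmark
configuration (a sketch, not the actual `compare.R` code):

```R
# Hypothetical ops/sec measurements from repeated runs of one configuration.
rates_old <- c(1023, 998, 1011, 1005, 987)
rates_new <- c(1102, 1095, 1086, 1110, 1091)

# Unpaired two-group t-test; the null hypothesis is that both means are equal.
t.test(rates_new, rates_old, paired = FALSE)
```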

The `compare.R` tool can also produce a box plot by using the `--plot filename`
option. In this case there are 48 different benchmark combinations, and there
may be a need to filter the csv file. This can be done while benchmarking
using the `--set` parameter (e.g. `--set encoding=ascii`) or by filtering
results afterwards using tools such as `sed` or `grep`. In the `sed` case be
sure to keep the first line since that contains the header information.

```console
$ cat compare-pr-5134.csv | sed '1p;/encoding=ascii/!d' | Rscript benchmark/compare.R --plot compare-plot.png

                                                                               improvement confidence      p.value
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=1024 encoding=ascii     12.46 %        *** 1.165345e-04
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=128 encoding=ascii       6.70 %          * 2.928003e-02
string_decoder/string-decoder.js n=250000 chunk=1024 inlen=32 encoding=ascii        7.47 %        *** 5.780583e-04
string_decoder/string-decoder.js n=250000 chunk=16 inlen=1024 encoding=ascii        8.94 %        *** 1.788579e-04
string_decoder/string-decoder.js n=250000 chunk=16 inlen=128 encoding=ascii        10.54 %        *** 4.016172e-05
...
```

![compare tool boxplot](doc_img/compare-boxplot.png)

### Comparing parameters

It can be useful to compare the performance for different parameters, for
example to analyze the time complexity.

To do this use the `scatter.js` tool; this will run a benchmark multiple times
and generate a csv with the results. To see how to use this script,
run `node benchmark/scatter.js`.
|
2016-02-21 13:14:39 +01:00
|
|
|
|
2016-07-14 12:46:01 +02:00
|
|
|
```console
|
2016-02-21 13:14:39 +01:00
|
|
|
$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
|
|
|
|
```

After generating the csv, a comparison table can be created using the
`scatter.R` tool. Even more useful, it creates an actual scatter plot when
using the `--plot filename` option.

```console
$ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log

aggregating variable: inlen

chunk     encoding      mean confidence.interval
   16        ascii 1111933.3           221502.48
   16 base64-ascii  167508.4            33116.09
   16  base64-utf8  122666.6            25037.65
   16         utf8  783254.8           159601.79
   64        ascii 2623462.9           399791.36
   64 base64-ascii  462008.3            85369.45
   64  base64-utf8  420108.4            85612.05
   64         utf8 1358327.5           235152.03
  256        ascii 3730343.4           371530.47
  256 base64-ascii  663281.2            80302.73
  256  base64-utf8  632911.7            81393.07
  256         utf8 1554216.9           236066.53
 1024        ascii 4399282.0           186436.46
 1024 base64-ascii  730426.6            63806.12
 1024  base64-utf8  680954.3            68076.33
 1024         utf8 1554832.5           237532.07
```

Because the scatter plot can only show two variables (in this case _chunk_ and
_encoding_) the rest is aggregated. Sometimes aggregating is a problem; this
can be solved by filtering. This can be done while benchmarking using the
`--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
afterwards using tools such as `sed` or `grep`. In the `sed` case be
sure to keep the first line since that contains the header information.

```console
$ cat scatter.csv | sed -E '1p;/([^,]+, ){3}128,/!d' | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log

chunk     encoding       mean confidence.interval
   16        ascii  701285.96           21233.982
   16 base64-ascii  107719.07            3339.439
   16  base64-utf8   72966.95            2438.448
   16         utf8  475340.84           17685.450
   64        ascii 2554105.08           87067.132
   64 base64-ascii  330120.32            8551.707
   64  base64-utf8  249693.19            8990.493
   64         utf8 1128671.90           48433.862
  256        ascii 4841070.04          181620.768
  256 base64-ascii  849545.53           29931.656
  256  base64-utf8  809629.89           33773.496
  256         utf8 1489525.15           49616.334
 1024        ascii 4931512.12          165402.805
 1024 base64-ascii  863933.22           27766.982
 1024  base64-utf8  827093.97           24376.522
 1024         utf8 1487176.43           50128.721
```

![scatter plot](doc_img/scatter-plot.png)

### Running Benchmarks on the CI

To see the performance impact of a Pull Request by running benchmarks on
the CI, check out [How to: Running core benchmarks on Node.js CI][benchmark-ci].

## Creating a benchmark
|
2014-05-23 05:57:31 +02:00
|
|
|
|
2017-02-07 18:10:09 +01:00
|
|
|
### Basics of a benchmark

All benchmarks use the `require('../common.js')` module. This contains the
`createBenchmark(main, configs[, options])` method which will set up the
benchmark.

The arguments of `createBenchmark` are:

* `main` {Function} The benchmark function,
  where the code running operations and controlling timers should go
* `configs` {Object} The benchmark parameters. `createBenchmark` will run all
  possible combinations of these parameters, unless specified otherwise.
  Each configuration is a property with an array of possible values.
  Note that the configuration values can only be strings or numbers.
* `options` {Object} The benchmark options. At the moment only the `flags`
  option for specifying command line flags is supported.

`createBenchmark` returns a `bench` object, which is used for timing
the runtime of the benchmark. Run `bench.start()` after the initialization
and `bench.end(n)` when the benchmark is done. `n` is the number of operations
performed in the benchmark.
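
In its smallest form, this is all a benchmark needs (a minimal sketch; the
complete, annotated example follows below):

```js
'use strict';
const common = require('../common.js');

const bench = common.createBenchmark(main, { n: [1e6] });

function main(conf) {
  bench.start();
  // The operation being measured goes inside this loop.
  for (let i = 0; i < conf.n; i++) {}
  bench.end(conf.n);
}
```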

The benchmark script will be run twice:

The first pass will configure the benchmark with the combination of
parameters specified in `configs`, and WILL NOT run the `main` function.
In this pass, no flags except the ones directly passed via commands
when running the benchmarks will be used.

In the second pass, the `main` function will be run, and the process
will be launched with:

* The flags passed into `createBenchmark` (the third argument)
* The flags in the command passed when the benchmark was run

Beware that any code outside the `main` function will be run twice
in different processes. This could be troublesome if the code
outside the `main` function has side effects. In general, prefer putting
the code inside the `main` function if it's more than just declaration.

```js
'use strict';
const common = require('../common.js');
const { SlowBuffer } = require('buffer');

const configs = {
  // Number of operations, specified here so they show up in the report.
  // Most benchmarks just use one value for all runs.
  n: [1024],
  type: ['fast', 'slow'], // Custom configurations
  size: [16, 128, 1024] // Custom configurations
};

const options = {
  // Add --expose-internals in order to require internal modules in main
  flags: ['--zero-fill-buffers']
};

// main and configs are required, options is optional.
const bench = common.createBenchmark(main, configs, options);

// Note that any code outside main will be run twice,
// in different processes, with different command line arguments.

function main(conf) {
  // Only flags that have been passed to createBenchmark
  // earlier when main is run will be in effect.
  // In order to benchmark the internal modules, require them here. For example:
  // const URL = require('internal/url').URL

  // Start the timer
  bench.start();

  // Do operations here
  const BufferConstructor = conf.type === 'fast' ? Buffer : SlowBuffer;

  for (let i = 0; i < conf.n; i++) {
    new BufferConstructor(conf.size);
  }

  // End the timer, pass in the number of operations
  bench.end(conf.n);
}
```
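
If the example above were saved as, say, `benchmark/buffers/slowbuffer-example.js`
(a hypothetical path), it could be run and narrowed to a single configuration
like any other benchmark:

```console
$ node benchmark/buffers/slowbuffer-example.js type=fast size=1024
```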

### Creating an HTTP benchmark

The `bench` object returned by `createBenchmark` implements the
`http(options, callback)` method. It can be used to run an external tool to
benchmark HTTP servers.

```js
'use strict';

const common = require('../common.js');

const bench = common.createBenchmark(main, {
  kb: [64, 128, 256, 1024],
  connections: [100, 500]
});

function main(conf) {
  const http = require('http');
  const len = conf.kb * 1024;
  const chunk = Buffer.alloc(len, 'x');
  const server = http.createServer(function(req, res) {
    res.end(chunk);
  });

  server.listen(common.PORT, function() {
    bench.http({
      connections: conf.connections,
    }, function() {
      server.close();
    });
  });
}
```

Supported options keys are:

* `port` - defaults to `common.PORT`
* `path` - defaults to `/`
* `connections` - number of concurrent connections to use, defaults to 100
* `duration` - duration of the benchmark in seconds, defaults to 10
* `benchmarker` - benchmarker to use, defaults to
  `common.default_http_benchmarker`
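
As an illustration, the `bench.http` call from the example above could pass
more of these keys explicitly (hypothetical values; same `server` and `conf`
as before):

```js
bench.http({
  path: '/',
  connections: conf.connections,
  duration: 10
}, function() {
  server.close();
});
```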

[autocannon]: https://github.com/mcollina/autocannon
[wrk]: https://github.com/wg/wrk
[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances
[git-for-windows]: http://git-scm.com/download/win
[nghttp2.org]: http://nghttp2.org
[benchmark-ci]: https://github.com/nodejs/benchmarking/blob/master/docs/core_benchmarks.md