# io.js core benchmark tests

This folder contains benchmark tests to measure the performance of certain io.js APIs.
## Prerequisites

Most of the http benchmarks require `wrk` to be compiled beforehand:

```
make wrk
```
## How to run tests

There are two ways to run benchmark tests:

- Run all tests of a given type, for example, `buffers`:

```
iojs benchmark/common.js buffers
```

The above command finds all scripts under the `buffers` directory and `require()`s each of them as a module. When a test script is required, it creates an instance of `Benchmark` (a class defined in `common.js`). On the next tick, the `Benchmark` constructor iterates through the configuration object's property values and runs the test function with each combination of arguments in spawned processes. For example, `buffers/buffer-read.js` has the following configuration:
```js
var bench = common.createBenchmark(main, {
  noAssert: [false, true],
  buffer: ['fast', 'slow'],
  type: ['UInt8', 'UInt16LE', 'UInt16BE',
         'UInt32LE', 'UInt32BE',
         'Int8', 'Int16LE', 'Int16BE',
         'Int32LE', 'Int32BE',
         'FloatLE', 'FloatBE',
         'DoubleLE', 'DoubleBE'],
  millions: [1]
});
```
The runner takes one item from each property's array value to build the list of arguments for the `main` function. The `main` function will receive the `conf` object as follows:
- first run:

```js
{
  noAssert: false,
  buffer: 'fast',
  type: 'UInt8',
  millions: 1
}
```

- second run:

```js
{
  noAssert: false,
  buffer: 'fast',
  type: 'UInt16LE',
  millions: 1
}
```

...
In this case, the `main` function will run 2 * 2 * 14 * 1 = 56 times. The console output looks like the following:
```
buffers//buffer-read.js
buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 271.83
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 239.43
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 244.57
...
```
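The expansion of the configuration object into per-run argument sets can be sketched as follows. `expandConfig` is a hypothetical helper written for illustration, not the actual code in `common.js`:

```javascript
// Minimal sketch (an assumption, not the real common.js implementation):
// build the Cartesian product of all configuration property values.
function expandConfig(config) {
  var combinations = [{}];
  Object.keys(config).forEach(function (key) {
    var next = [];
    combinations.forEach(function (partial) {
      config[key].forEach(function (value) {
        var copy = {};
        Object.keys(partial).forEach(function (k) { copy[k] = partial[k]; });
        copy[key] = value;
        next.push(copy);
      });
    });
    combinations = next;
  });
  return combinations;
}

var runs = expandConfig({
  noAssert: [false, true],
  buffer: ['fast', 'slow'],
  millions: [1]
});
console.log(runs.length); // 2 * 2 * 1 = 4 combinations
console.log(runs[0]);     // { noAssert: false, buffer: 'fast', millions: 1 }
```

With the full `buffer-read.js` configuration, the same product yields the 56 runs counted above.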
- Run an individual test, for example, `buffer-read.js`:

```
iojs benchmark/buffers/buffer-read.js
```

The output:

```
buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 246.79
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 240.11
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 245.91
...
```
- Run tests with options:

This example runs only the first `type` of the url test, with one iteration. (Note: benchmarks require many iterations to be statistically accurate.)

```
iojs benchmark/url/url-parse.js type=one n=1
```

Output:

```
url/url-parse.js type=one n=1: 1663.74402
```
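The `key=value` options could be parsed along these lines; `parseArgs` is a hypothetical sketch, and the real argument handling in `common.js` may differ:

```javascript
// Hypothetical sketch: split each "key=value" command-line argument into an
// options object that can override the benchmark's configuration defaults.
function parseArgs(argv) {
  var options = {};
  argv.forEach(function (arg) {
    var match = /^(\w+)=(.+)$/.exec(arg);
    if (match) options[match[1]] = match[2];
  });
  return options;
}

console.log(parseArgs(['type=one', 'n=1'])); // { type: 'one', n: '1' }
```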
## How to write a benchmark test

The benchmark tests are grouped by type. Each type corresponds to a subdirectory, such as `arrays`, `buffers`, or `fs`.
Let's add a benchmark test for the `Buffer.slice` function. We first create a file `buffers/buffer-slice.js`. The code snippet:
```js
var common = require('../common.js'); // Load the test runner
var SlowBuffer = require('buffer').SlowBuffer;

// Create a benchmark test for function `main` and the configuration variants
var bench = common.createBenchmark(main, {
  type: ['fast', 'slow'], // Two types of buffer
  n: [512] // Number of times (each unit is 1024) to call the slice API
});

function main(conf) {
  // Read the parameters from the configuration
  var n = +conf.n;
  var buf = new Buffer(1024);
  var slowBuf = new SlowBuffer(1024);
  var b = conf.type === 'fast' ? buf : slowBuf;
  bench.start(); // Start benchmarking
  for (var i = 0; i < n * 1024; i++) {
    // Add your test here
    b.slice(10, 256);
  }
  bench.end(n); // End benchmarking
}
```