Node.js core benchmark tests

This folder contains benchmark tests to measure the performance of certain Node.js APIs.

Prerequisites

Most of the http benchmarks require wrk and ab (ApacheBench) to be installed. These may be available through your preferred package manager.

If they are not available:

  • wrk may easily be built from source via make.
  • ab is sometimes bundled in a package called apache2-utils.

How to run tests

There are three ways to run benchmark tests:

Run all tests of a given type

For example, buffers:

node benchmark/common.js buffers

The above command will find all scripts under the buffers directory and require each of them as a module. When a test script is required, it creates an instance of Benchmark (a class defined in common.js). On the next tick, the Benchmark constructor iterates through the values of the configuration object's properties and runs the test function with each combination of arguments in spawned processes. For example, buffers/buffer-read.js has the following configuration:

var bench = common.createBenchmark(main, {
  noAssert: [false, true],
  buffer: ['fast', 'slow'],
  type: ['UInt8', 'UInt16LE', 'UInt16BE',
         'UInt32LE', 'UInt32BE',
         'Int8', 'Int16LE', 'Int16BE',
         'Int32LE', 'Int32BE',
         'FloatLE', 'FloatBE',
         'DoubleLE', 'DoubleBE'],
  millions: [1]
});

The runner takes one item from each property's array of values to build the set of argument combinations for the main function. The main function receives the conf object as follows:

  • first run:
    {
        noAssert: false,
        buffer: 'fast',
        type: 'UInt8',
        millions: 1
    }
  • second run:
    {
        noAssert: false,
        buffer: 'fast',
        type: 'UInt16LE',
        millions: 1
    }

...
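Conceptually, the runner builds the cartesian product of the configuration arrays and spawns one run per combination. A minimal sketch of that idea (a hypothetical illustration, not the actual common.js implementation) could look like this:

// Hypothetical helper: expands { key: [values, ...] } into every combination.
function combinations(config) {
  var result = [{}];
  Object.keys(config).forEach(function(key) {
    var expanded = [];
    result.forEach(function(partial) {
      config[key].forEach(function(value) {
        var combo = {};
        Object.keys(partial).forEach(function(k) { combo[k] = partial[k]; });
        combo[key] = value;
        expanded.push(combo);
      });
    });
    result = expanded;
  });
  return result;
}

// combinations({ noAssert: [false, true], buffer: ['fast', 'slow'] })
// => [ { noAssert: false, buffer: 'fast' }, { noAssert: false, buffer: 'slow' },
//      { noAssert: true, buffer: 'fast' }, { noAssert: true, buffer: 'slow' } ]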

In this case, the main function will run 2*2*14*1 = 56 times. The console output looks like the following:

buffers/buffer-read.js
buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 271.83
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 239.43
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 244.57
...

The last number is the rate of operations. Higher is better.
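The rate is the number of operations completed divided by the elapsed time. A minimal sketch of that idea (common.js does its own timing and reporting; this is only an illustration) could be:

// Conceptual sketch only, assuming "rate" simply means operations per second.
function measureRate(operations, fn) {
  var start = process.hrtime();            // [seconds, nanoseconds]
  for (var i = 0; i < operations; i++)
    fn();
  var diff = process.hrtime(start);
  var elapsedSeconds = diff[0] + diff[1] / 1e9;
  return operations / elapsedSeconds;      // higher is better
}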

Run an individual test

For example, buffer-read.js:

node benchmark/buffers/buffer-read.js

The output:

buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 246.79
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 240.11
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 245.91
...

Run tests with options

This example will run only the first type of url test, with one iteration. (Note: benchmarks require many iterations to be statistically accurate.)

node benchmark/url/url-parse.js type=one n=1

Output:

url/url-parse.js type=one n=1: 1663.74402
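The same key=value syntax should work for any property defined in a benchmark's createBenchmark configuration. For example, a hypothetical invocation that restricts the buffer-read.js benchmark shown above to a single combination:

node benchmark/buffers/buffer-read.js noAssert=true buffer=fast type=UInt8 millions=1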

How to write a benchmark test

The benchmark tests are grouped by types. Each type corresponds to a subdirectory, such as arrays, buffers, or fs.

Let's add a benchmark test for the Buffer.slice function. First, create a file named buffers/buffer-slice.js.

The code snippet

var common = require('../common.js'); // Load the test runner

var SlowBuffer = require('buffer').SlowBuffer;

// Pre-allocate the two buffers the benchmark slices from
var buf = new Buffer(1024);
var slowBuf = new SlowBuffer(1024);

// Create a benchmark test for function `main` and the configuration variants
var bench = common.createBenchmark(main, {
  type: ['fast', 'slow'], // Two types of buffer
  n: [512] // Number of times (each unit is 1024) to call the slice API
});

function main(conf) {
  // Read the parameters from the configuration
  var n = +conf.n;
  var b = conf.type === 'fast' ? buf : slowBuf;
  bench.start(); // Start benchmarking
  for (var i = 0; i < n * 1024; i++) {
    // Add your test here
    b.slice(10, 256);
  }
  bench.end(n); // End benchmarking
}
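The new benchmark can then be run on its own, or together with the rest of the buffers group, using the same commands shown earlier:

node benchmark/buffers/buffer-slice.js
node benchmark/common.js buffers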