Node.js Core Benchmarks

This folder contains code and data used to measure performance of different Node.js implementations and different ways of writing JavaScript run by the built-in JavaScript engine.

For a detailed guide on how to write and run benchmarks in this directory, see the guide on benchmarks.

Table of Contents

  • Benchmark Directories
  • Other Top-level files
  • Common API

Benchmark Directories

Directory        Purpose
arrays           Benchmarks for various operations on array-like objects, including Array, Buffer, and typed arrays.
assert           Benchmarks for the assert subsystem.
buffers          Benchmarks for the buffer subsystem.
child_process    Benchmarks for the child_process subsystem.
crypto           Benchmarks for the crypto subsystem.
dgram            Benchmarks for the dgram subsystem.
domain           Benchmarks for the domain subsystem.
es               Benchmarks for various new ECMAScript features and their pre-ES2015 counterparts.
events           Benchmarks for the events subsystem.
fixtures         Benchmark fixtures used throughout the benchmark suite.
fs               Benchmarks for the fs subsystem.
http             Benchmarks for the http subsystem.
http2            Benchmarks for the http2 subsystem.
misc             Miscellaneous benchmarks and benchmarks for shared internal modules.
module           Benchmarks for the module subsystem.
net              Benchmarks for the net subsystem.
path             Benchmarks for the path subsystem.
process          Benchmarks for the process subsystem.
querystring      Benchmarks for the querystring subsystem.
streams          Benchmarks for the streams subsystem.
string_decoder   Benchmarks for the string_decoder subsystem.
timers           Benchmarks for the timers subsystem, including setTimeout, setInterval, etc.
tls              Benchmarks for the tls subsystem.
url              Benchmarks for the url subsystem, including the legacy url implementation and the WHATWG URL implementation.
util             Benchmarks for the util subsystem.
vm               Benchmarks for the vm subsystem.

Other Top-level files

The top-level files include common dependencies of the benchmarks and the tools for launching benchmarks and visualizing their output. The actual benchmark scripts should be placed in their corresponding directories.

  • _benchmark_progress.js: implements the progress bar displayed when running compare.js.
  • _cli.js: parses the command line arguments passed to compare.js, run.js and scatter.js.
  • _cli.R: parses the command line arguments passed to compare.R.
  • _http-benchmarkers.js: selects and runs external tools for benchmarking the http subsystem.
  • common.js: see Common API.
  • compare.js: command line tool for comparing performance between different Node.js binaries.
  • compare.R: R script for statistically analyzing the output of compare.js.
  • run.js: command line tool for running individual benchmark suite(s).
  • scatter.js: command line tool for comparing performance between different parameters in benchmark configurations, for example to analyze time complexity.
  • scatter.R: R script for visualizing the output of scatter.js with scatter plots.

Common API

The common.js module is used by benchmarks for consistency across repeated tasks. It has a number of helpful functions and properties to help with writing benchmarks.

createBenchmark(fn, configs[, options])

See the guide on writing benchmarks.
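For illustration, a minimal benchmark built on createBenchmark usually looks like the sketch below. This is not a file from this directory; the parameter n and the timed work are arbitrary examples. The configs object defines the parameter matrix (each combination is run separately), and the measured region is bracketed by bench.start() and bench.end(n).

```js
'use strict';

// Hedged sketch of a benchmark using common.createBenchmark().
// The parameter n and the loop being timed are arbitrary examples.
const common = require('../common.js');

const bench = common.createBenchmark(main, {
  n: [1e5, 1e6],  // each value becomes its own benchmark run
});

function main({ n }) {
  bench.start();
  let sum = 0;
  for (let i = 0; i < n; i++)
    sum += i;
  bench.end(n);  // reports n operations over the elapsed time
}
```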

default_http_benchmarker

The default benchmarker used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

PORT

The default port used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.
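To make the two HTTP helpers above concrete, the sketch below shows the usual shape of an HTTP benchmark: the server listens on common.PORT and bench.http() drives it with the default benchmarker. The connections values and the response body are illustrative assumptions, not taken from this README.

```js
'use strict';

// Hedged sketch of an HTTP benchmark: the server listens on common.PORT
// and bench.http() runs the default HTTP benchmarker against it.
// The connections values and the response body are arbitrary examples.
const common = require('../common.js');
const http = require('http');

const bench = common.createBenchmark(main, {
  connections: [10, 100],
});

function main({ connections }) {
  const server = http.createServer((req, res) => {
    res.end('hello world');
  });

  server.listen(common.PORT, () => {
    bench.http({
      path: '/',
      connections,
    }, () => {
      server.close();
    });
  });
}
```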

sendResult(data)

Used in special benchmarks that can't use createBenchmark and the object it returns to accomplish what they need. This function reports timing data to the parent process (usually created by running compare.js, run.js or scatter.js).
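As a rough illustration, a benchmark that does its own timing might report through sendResult as sketched below. The field names in the reported object (name, conf, rate, time) are assumptions about the expected shape, not a schema documented in this README.

```js
'use strict';

// Hypothetical sketch: a self-timed benchmark reporting via
// common.sendResult() instead of createBenchmark().
// The fields of the reported object are assumptions, not documented here.
const common = require('../common.js');

const n = 1e6;

const start = process.hrtime();
let sum = 0;
for (let i = 0; i < n; i++)
  sum += i;
const [sec, nsec] = process.hrtime(start);
const time = sec + nsec / 1e9;  // elapsed seconds

common.sendResult({
  name: 'misc/hypothetical-example',  // hypothetical benchmark name
  conf: { n },                        // configuration that was run
  rate: n / time,                     // operations per second
  time,                               // elapsed time in seconds
});
```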