nodejs/test/code-cache/test-code-cache.js


build: speed up startup with V8 code cache

This patch speeds up the startup time and reduces the startup memory footprint by using the V8 code cache when compiling builtin modules.

The current approach is demonstrated in the `with-code-cache` Makefile target (no corresponding Windows target at the moment).

1. Build the binary normally (`src/node_code_cache_stub.cc` is used); at this point `internalBinding('code_cache')` is an empty object.
2. Run `tools/generate_code_cache.js` with the binary, which generates the code caches by reading the source code of builtin modules off `require('internal/bootstrap/cache').builtinSource`, and then generates a C++ file containing static char arrays of the code cache, using a format similar to `node_javascript.cc`.
3. Run `configure` with the `--code-cache-path` option so that the newly generated C++ file is used when compiling the new binary. The generated C++ file puts the cache into the `internalBinding('code_cache')` object with the module ids as keys.
4. The new binary tries to read the code cache from `internalBinding('code_cache')` and uses it to compile builtin modules. If the cache is used, the module id is put into `require('internal/bootstrap/cache').compiledWithCache` for bookkeeping; otherwise the id is pushed into `require('internal/bootstrap/cache').compiledWithoutCache`.

This patch also adds tests that verify the code cache is generated and used when compiling builtin modules.

The binary with code cache:
- Is ~1MB bigger than the binary without code cache
- Consumes ~1MB less memory during startup
- Starts up about 60% faster

PR-URL: https://github.com/nodejs/node/pull/21405
Reviewed-By: John-David Dalton <john.david.dalton@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Gus Caplan <me@gus.host>

2018-06-18 19:02:57 +02:00
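The V8 code cache described above is the same mechanism that Node.js exposes to user code through the `vm` module, where the result of compiling a script can be serialized and handed back to V8 for a later compilation of the same source. A minimal standalone sketch of that mechanism (not part of the test below; it assumes a Node.js version that has `script.createCachedData()`, added in v10.6.0):

const vm = require('vm');

const source = 'function add(a, b) { return a + b; } add(1, 2);';

// First compilation: no cache exists yet, so V8 parses and compiles from source.
const first = new vm.Script(source);
const cache = first.createCachedData();  // Buffer holding the serialized code cache

// Second compilation: hand the cache back so V8 can skip most of the work.
const second = new vm.Script(source, { cachedData: cache });
console.log(second.cachedDataRejected);  // false if V8 accepted and reused the cache

The builtin-module cache exercised by the test below works analogously, except that the cached data is baked into the binary as static char arrays and looked up by module id at startup.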
'use strict';
// Flags: --expose-internals
// This test verifies that, if the binary is compiled with code cache,
// the cache is used when builtin modules are compiled.
// Otherwise, it verifies that no cache is used when compiling builtins.
const { isMainThread } = require('../common');
const assert = require('assert');
const {
  cachableBuiltins,
  cannotBeRequired
} = require('internal/bootstrap/cache');
const {
  internalBinding
} = require('internal/test/binding');
const {
  getCacheUsage
} = internalBinding('native_module');
for (const key of cachableBuiltins) {
  if (!isMainThread && key === 'trace_events') {
    continue;  // Cannot load trace_events in workers
  }
  require(key);
}
// The computation has to be delayed until we have finished loading the modules
const {
  compiledWithoutCache,
  compiledWithCache
} = getCacheUsage();
const loadedModules = process.moduleLoadList
  .filter((m) => m.startsWith('NativeModule'))
  .map((m) => m.replace('NativeModule ', ''));
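// Note: process.moduleLoadList entries for builtin JS modules have the form
// 'NativeModule <id>' (e.g. 'NativeModule fs'), so the filter/map above
// reduces the list to bare builtin module ids.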
// If the binary is not configured with code cache, verify that the builtins
// are all compiled without cache and that we are doing the bookkeeping right.
if (process.config.variables.node_code_cache_path === undefined) {
  console.log('The binary is not configured with code cache');
  if (isMainThread) {
    assert.deepStrictEqual(compiledWithCache, new Set());
    for (const key of loadedModules) {
      assert(compiledWithoutCache.has(key),
             `"${key}" should've been compiled without code cache`);
    }
  } else {
    // TODO(joyeecheung): create a list of modules whose cache can be shared
    // from the main thread to the worker thread and check that their
    // caches are hit
    assert.notDeepStrictEqual(compiledWithCache, new Set());
  }
} else {
  console.log('The binary is configured with code cache');
  assert.strictEqual(
    typeof process.config.variables.node_code_cache_path,
    'string'
  );
  for (const key of loadedModules) {
    if (cannotBeRequired.includes(key)) {
      assert(compiledWithoutCache.has(key),
             `"${key}" should've been compiled without code cache`);
    } else {
      assert(compiledWithCache.has(key),
             `"${key}" should've been compiled with code cache`);
    }
  }
}