nodejs/test/code-cache/test-code-cache.js

build: speed up startup with V8 code cache

This patch speeds up the startup time and reduces the startup memory
footprint by using the V8 code cache when compiling builtin modules.

The current approach is demonstrated in the `with-code-cache` Makefile
target (no corresponding Windows target at the moment).

1. Build the binary normally (`src/node_code_cache_stub.cc` is used); at
   this point `internalBinding('code_cache')` is an empty object.
2. Run `tools/generate_code_cache.js` with the binary, which generates the
   code caches by reading the source code of builtin modules exposed by
   `require('internal/bootstrap/cache').builtinSource`, and then generates
   a C++ file containing static char arrays of the code cache, using a
   format similar to `node_javascript.cc`.
3. Run `configure` with the `--code-cache-path` option so that the newly
   generated C++ file is used when compiling the new binary. The generated
   C++ file puts the cache into the `internalBinding('code_cache')` object
   with the module ids as keys.
4. The new binary tries to read the code cache from
   `internalBinding('code_cache')` and uses it to compile builtin modules.
   If the cache is used, the module id is put into
   `require('internal/bootstrap/cache').compiledWithCache` for bookkeeping;
   otherwise the id is pushed into
   `require('internal/bootstrap/cache').compiledWithoutCache`.

This patch also adds tests that verify the code cache is generated and used
when compiling builtin modules. The binary with code cache:

- Is ~1MB bigger than the binary without code cache
- Consumes ~1MB less memory during startup
- Starts up about 60% faster

PR-URL: https://github.com/nodejs/node/pull/21405
Reviewed-By: John-David Dalton <john.david.dalton@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Gus Caplan <me@gus.host>
2018-06-18 19:02:57 +02:00
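
For background, here is a minimal sketch of the code-cache idea using Node's
public `vm` API: compile a script once to produce V8 cache data, then feed
that data back into a second compilation. This only illustrates the
underlying V8 feature; it is not what `tools/generate_code_cache.js` or the
builtin loader actually does, and the file name and source string are made
up for the example.

const vm = require('vm');

// Stand-in source; in the real pipeline this would be a builtin module's code.
const source = 'module.exports = 6 * 7;';

// First compilation: ask V8 to serialize its compilation output.
const producer = new vm.Script(source, { filename: 'example.js' });
const cacheData = producer.createCachedData(); // Buffer containing V8 code cache

// Second compilation: pass the cache back in. If V8 accepts it, the script is
// deserialized from the cache instead of being parsed and compiled from scratch.
const consumer = new vm.Script(source, {
  filename: 'example.js',
  cachedData: cacheData
});
console.log('cache rejected?', consumer.cachedDataRejected); // false when the cache is usable

The test file itself follows.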
'use strict';
// Flags: --expose-internals
// This test verifies that if the binary is compiled with code cache,
// the cache is used when builtin modules are compiled.
// Otherwise, it verifies that no cache is used when compiling builtins.
const { isMainThread } = require('../common');
const assert = require('assert');
const {
cachableBuiltins,
cannotUseCache
} = require('internal/bootstrap/cache');
const {
internalBinding
} = require('internal/test/binding');
// getCacheUsage() comes from the native_module binding; the static builtin
// module data (sources and code cache) is kept in per-process C++ containers
// (see https://github.com/nodejs/node/pull/24384).
const {
getCacheUsage
} = internalBinding('native_module');
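// Load every cachable builtin up front so that each of them shows up in one
// of the sets returned by getCacheUsage() below.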
for (const key of cachableBuiltins) {
if (!isMainThread && key === 'trace_events') {
continue; // Cannot load trace_events in workers
}
require(key);
}
// The computation has to be delayed until we are done loading modules
const {
compiledWithoutCache,
compiledWithCache
} = getCacheUsage();
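// Builtin modules are recorded in process.moduleLoadList as
// 'NativeModule <id>'; keep those entries and strip the prefix to get the ids.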
const loadedModules = process.moduleLoadList
.filter((m) => m.startsWith('NativeModule'))
.map((m) => m.replace('NativeModule ', ''));
// If the binary is not configured with code cache, verify that the builtins
// are all compiled without cache and that we are doing the bookkeeping right.
if (process.config.variables.node_code_cache_path === undefined) {
console.log('The binary is not configured with code cache');
if (isMainThread) {
assert.deepStrictEqual(compiledWithCache, new Set());
for (const key of loadedModules) {
assert(compiledWithoutCache.has(key),
`"${key}" should've been compiled without code cache`);
}
} else {
// TODO(joyeecheung): create a list of modules whose cache can be shared
// from the main thread to the worker thread and check that their
// caches are hit
assert.notDeepStrictEqual(compiledWithCache, new Set());
}
} else {
console.log('The binary is configured with code cache');
assert.strictEqual(
typeof process.config.variables.node_code_cache_path,
'string'
);
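// Every loaded builtin should have hit the embedded cache, except for the
// ones listed in cannotUseCache.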
for (const key of loadedModules) {
if (cannotUseCache.includes(key)) {
assert(compiledWithoutCache.has(key),
`"${key}" should've been compiled without code cache`);
} else {
assert(compiledWithCache.has(key),
`"${key}" should've been compiled with code cache`);
}
}
}