Prior to this point, a dynamic build might have resulted in some runtime
libraries being statically linked into shared objects and executables in
cases where "shared" runtime libraries were actually linker scripts that
linked static versions. This was the case with the MongoDB toolchain and
some distro toolchains, including those installed as updated compiler
versions in RHEL.
The effect of having runtime libraries statically linked was that
symbols from those libraries ended up duplicated across the resulting
shared objects and executables, increasing their size and slowing down
server startup. Now, whenever a dynamic build is selected, the user can
choose whether to create "shim" runtime libraries that wrap the static
ones. The default behavior is unchanged; the dynamic runtime must be
explicitly enabled in order to get the shim libraries.
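For illustration only (this snippet is not part of the build system), here is one way to tell whether a given "shared" runtime library on disk is a real ELF shared object or a text linker script that pulls in static archives:

```python
#!/usr/bin/env python3
"""Standalone illustration: classify a library file as an ELF shared
object or as a text linker script (e.g. one using GROUP(...)/INPUT(...)
to pull in static archives)."""

import sys

ELF_MAGIC = b"\x7fELF"

def is_linker_script(path):
    """Return True if the file does not start with the ELF magic bytes,
    which is the case for linker-script 'shared' libraries."""
    with open(path, "rb") as f:
        return f.read(4) != ELF_MAGIC

if __name__ == "__main__":
    for lib in sys.argv[1:]:
        kind = "linker script" if is_linker_script(lib) else "ELF shared object"
        print(f"{lib}: {kind}")
```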
We sometimes have dependencies that apply at a very large scope, such
as tcmalloc, which can apply everywhere. Previously we hacked these
dependencies into the LIBDEPS environment variable by injecting them
into every node produced by a builder that can yield a compiled result.
That approach is hackish and hard to control, and it adds a Public
dependency to all of those nodes.
What we do now instead is define LIBDEPS_GLOBAL on the *build
environment* (not on the Builder node), listing the targets we would
like to push down to all other nodes below that point. This has the
effect of adding those targets as Private dependencies on all Builder
nodes from that point downward, which means a common Public dependency
that used to be repeated everywhere can be converted to a Private
dependency stated only once.
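As a rough sketch of the difference (the target name "allocator" is a hypothetical example, and the conventions assume MongoDB's libdeps SCons tooling):

```python
# SConscript sketch; 'env' is the build environment handed down by the
# parent SConscript.
Import('env')

env = env.Clone()

# Previously, 'allocator' would have been listed in LIBDEPS on every
# Library/Program below this point. Now it is stated once on the build
# environment and pushed down as a Private dependency to every Builder
# node created from this environment.
env.AppendUnique(LIBDEPS_GLOBAL=['allocator'])

env.Library(
    target='some_component',
    source=['some_component.cpp'],
    # No explicit allocator entry needed here; LIBDEPS_GLOBAL supplies it
    # as a Private dependency.
)
```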
We discovered in SERVER-50852 that unit test binary names that do not
end with "_test" can cause problems with the hang check analyzer. To
prevent such occurrences in the future, all CppUnitTest targets are now
checked to ensure they are named correctly.
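A minimal sketch of what that naming check amounts to, written here as a standalone helper rather than the actual builder hook:

```python
def check_unittest_name(target_name):
    """Standalone illustration: reject CppUnitTest targets whose names do
    not end with the '_test' suffix expected by the hang check analyzer."""
    if not target_name.endswith('_test'):
        raise ValueError(
            f"CppUnitTest target '{target_name}' must end with '_test'"
        )

# A conforming (hypothetical) name passes silently...
check_unittest_name('repl_coordinator_test')
# ...while a non-conforming one would raise ValueError:
# check_unittest_name('repl_coordinator')
```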
Allow Ninja to rebuild build.ninja any time a SCons tool or the compiler
is changed between iterative rebuilds. This change allows us to ensure
that we don't have any stale object files lying around that may have
been produced by an incompatible toolchain.
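Conceptually, the generated build.ninja just gains extra regeneration dependencies. A hedged sketch of gathering them (the paths and names here are illustrative assumptions, not the actual implementation):

```python
import glob
import shutil

def extra_ninja_regen_deps(env):
    """Illustrative only: collect files whose changes should force
    build.ninja to be regenerated, namely the repository's SCons tool
    modules and the resolved compiler binaries."""
    deps = []

    # Custom SCons tools shipped with the repository (the glob pattern is
    # an assumption about the layout).
    deps.extend(glob.glob('site_scons/site_tools/*.py'))

    # The resolved compiler executables, so switching toolchains triggers
    # regeneration instead of reusing stale object files.
    for var in ('CC', 'CXX'):
        tool = env.get(var)
        if tool:
            resolved = shutil.which(tool)
            if resolved:
                deps.append(resolved)

    return deps
```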
Some versions of ccache do not know how to handle clang's
-fsanitize-blacklist flag. Some versions do not handle it at all, while
others handle only one instance, even though the flag can appear
multiple times on the command line. Because the argument can change the resulting
compiled object, not taking the flags into account properly can cause
ccache to pull an incorrect object file from its cache. The exact
behavior depends on the ccache version and how the arguments are changed
on the command line. We implement a workaround suggested by the ccache
developers until a newer version of ccache with all the required fixes
is in common use.
* Workaround ref: https://github.com/ccache/ccache/issues/174
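One way such a workaround can be expressed (our reading of the suggestion; relying on ccache's CCACHE_EXTRAFILES setting is an assumption about the exact mechanism) is to make ccache hash the blacklist file contents directly:

```python
import os

def add_blacklists_to_ccache_hash(env, blacklist_files):
    """Illustrative sketch: list the sanitizer blacklist files in
    CCACHE_EXTRAFILES so their contents become part of ccache's cache key,
    since some ccache versions ignore -fsanitize-blacklist arguments.
    env['ENV'] is the subprocess environment as used by SCons."""
    extra = env['ENV'].get('CCACHE_EXTRAFILES', '')
    paths = [p for p in extra.split(os.pathsep) if p]
    for blacklist in blacklist_files:
        path = os.path.abspath(blacklist)
        if path not in paths:
            paths.append(path)
    env['ENV']['CCACHE_EXTRAFILES'] = os.pathsep.join(paths)
```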
Previously, if CCACHE or ICECC was specified on the SCons command line
but the given path did not exist, the associated tool was simply
skipped. This caused confusion when users expected the tool to run and
the compile proceeded without it. Now specifying an incorrect path to
the tool will cause a configure failure.
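A hedged sketch of the new behavior (the helper name and error handling below are illustrative, not the real configure check):

```python
import os

def validate_tool_path(env, var_name):
    """Illustrative sketch: if the user passed e.g. CCACHE=<path> or
    ICECC=<path> on the SCons command line, fail loudly when the path
    cannot be resolved instead of silently skipping the tool."""
    requested = env.get(var_name)
    if not requested:
        return None  # tool was not requested; nothing to validate

    # Accept either an existing path or a program name resolvable on PATH
    # (env.WhereIs is SCons' PATH lookup).
    resolved = requested if os.path.exists(requested) else env.WhereIs(requested)
    if not resolved:
        raise SystemExit(
            f"Error: {var_name}={requested} was specified but could not be found"
        )
    return resolved
```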
Before this point, remote compilation of sanitizer builds did not work
because Icecream did not copy the sanitizer blacklist files to the
remote hosts. We had a check in
place that silently turned Icecream builds with sanitizers into local
builds. Now we build the sanitizer blacklist files into the environment
tarball that Icecream uses for remote builds.
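A hedged sketch of the idea, assuming icecc-create-env accepts an --addfile option for bundling extra files (the real integration lives in the Icecream SCons tool):

```python
import subprocess

def create_icecc_env(compiler_path, blacklist_files, output_dir="."):
    """Illustrative sketch: create the Icecream environment tarball and
    bundle the sanitizer blacklist files so remote hosts can read them.
    The --addfile option is an assumption; available options vary between
    icecream versions."""
    cmd = ["icecc-create-env", compiler_path]
    for blacklist in blacklist_files:
        cmd += ["--addfile", blacklist]
    subprocess.run(cmd, cwd=output_dir, check=True)
```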