We need to consider CONFIGURE_SUBST_FILES as generated files when
processing FinalTargetFiles and related variables. Additionally, we have
to make sure that we actually recurse into such directories during the
export phase.
MozReview-Commit-ID: 6ZwHMzjoT6t
The Windows content indexing service has been known to scan the objdir.
This can add significant system overhead and slow down builds or
subsequent operations. The objdir is meant to be a short-lived black box
and there really isn't a major benefit to indexing it.
There is a file attribute on Windows that disables content indexing.
This commit adds a utility function for creating a directory and
optionally disabling indexing on it. We call it at the top of
`mach build` to ensure the objdir has content indexing disabled.
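For reference, a minimal sketch of the idea (the helper name and exact behavior here are illustrative, not the in-tree API):

    import ctypes
    import os
    import sys

    # Win32 attribute telling the content indexing service to skip a path.
    FILE_ATTRIBUTE_NOT_CONTENT_INDEXED = 0x2000


    def ensure_dir_not_indexed(path):
        """Create `path` if needed and, on Windows, mark it as not indexed.

        Illustrative sketch only; the in-tree helper may differ.
        """
        os.makedirs(path, exist_ok=True)
        if sys.platform.startswith('win'):
            # Best effort: failing to set the attribute should not break the build.
            ctypes.windll.kernel32.SetFileAttributesW(
                path, FILE_ATTRIBUTE_NOT_CONTENT_INDEXED)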
MozReview-Commit-ID: 68gxCxbkVAN
We don't process IPDL and WebIDL files during artifact builds. The
backend shouldn't need to care about them.
Before: 2001 total backend files; 2001 created; 0 updated; 0 unchanged; 0 deleted
After: 1945 total backend files; 1945 created; 0 updated; 0 unchanged; 0 deleted
After this change, we no longer write any .cpp files to the objdir
during config.status for artifact builds.
As part of this, we stopped generating webidlsrcs.mk, and the build broke
because of an unguarded use in a Makefile.
After this change, only 102 of the 3366 files in the objdir after
`mach configure` are not a) in _virtualenv, b) named backend.mk, or
c) named Makefile (~950 each of backend.mk and Makefile).
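The shape of the change is roughly the following sketch; the class and the object kinds named here are hypothetical stand-ins, not the real backend code:

    class ArtifactAwareBackend(object):
        """Illustrative sketch only; real backend classes and object types differ."""

        # Stand-ins for the moz.build object kinds an artifact build can ignore.
        CODEGEN_ONLY_KINDS = ('WebIDLFile', 'IPDLFile')

        def __init__(self, is_artifact_build):
            self.is_artifact_build = is_artifact_build

        def consume_object(self, obj):
            # Artifact builds get compiled code from prebuilt artifacts, so
            # objects that exist only to drive compilation need no backend output.
            if self.is_artifact_build and type(obj).__name__ in self.CODEGEN_ONLY_KINDS:
                return True  # handled: nothing to write to the objdir
            return False     # fall through to normal processing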
MozReview-Commit-ID: 11AIn1i4x4f
I can't think of a good reason why unified C++ files need to exist
during artifact builds. Let's not write them in this build config.
Before: 2517 total backend files; 2517 created; 0 updated; 0 unchanged; 0 deleted
After: 2001 total backend files; 2001 created; 0 updated; 0 unchanged; 0 deleted
No measurable change in performance on Linux. But on Windows, where file
creation is in the ~1ms range, this should result in a speedup of
several hundred milliseconds.
We are still generating some .cpp files during artifact builds. This will be
addressed in subsequent commits.
MozReview-Commit-ID: Lqx36YE8qZZ
As previous measurements have shown, creating/appending files
on Windows/NTFS is slow because the CloseHandle() Win32 API takes
1-3ms to complete. This is apparently due to a fundamental issue
with NTFS extents. A way to work around this slowness is to use
multiple threads for I/O so file closing doesn't block execution
as much.
This commit updates the file copier to use a thread pool of 4
threads when processing file copies. Additional threads appear
to have diminishing returns.
On my i7-6700K, this reduces the time for processing the tests install
manifest (24,572 files) on Windows from ~22.0s to ~12.5s in the best
case.
Using the thread pool globally resulted in a performance regression
on Linux. Given the performance sensitivity of manifest copying,
I thought it best to implement a slightly redundant non-Windows
branch to preserve performance. For the record, that same machine
running Linux is capable of processing nearly the same install
manifest (24,616 files) in ~2.2s in the best case.
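A rough sketch of the approach, with illustrative names rather than the real FileCopier internals:

    import sys
    from concurrent.futures import ThreadPoolExecutor


    def copy_files(copies, copy_one):
        """Apply copy_one(source, dest) to every (source, dest) pair in copies.

        On Windows, overlap the slow CloseHandle() calls with a small thread
        pool; elsewhere sequential copying is faster, so keep a plain loop.
        """
        if sys.platform.startswith('win'):
            # 4 workers: additional threads showed diminishing returns.
            with ThreadPoolExecutor(max_workers=4) as pool:
                # list() drains the iterator so any exception propagates here.
                list(pool.map(lambda pair: copy_one(*pair), copies))
        else:
            for source, dest in copies:
                copy_one(source, dest)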
MozReview-Commit-ID: B9LbKaOoO1u
This adds a simple schema for build telemetry data. We can make it more
restrictive once we have a better feeling for what kind of data we want
to submit.
This also moves more common data about the system to the telemetry handler.
We leave psutil-derived information in the resource usage data, as not every
system will have psutil installed.
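For illustration only, a permissive schema could look something like this (a voluptuous-style sketch; the field names are hypothetical, not the actual schema):

    from voluptuous import ALLOW_EXTRA, Optional, Required, Schema

    # A deliberately loose sketch; the field names are hypothetical and not
    # the actual schema used by the telemetry handler.
    BUILD_TELEMETRY_SCHEMA = Schema({
        Required('client_id'): str,
        Required('time'): str,       # ISO 8601 timestamp
        Required('command'): str,    # e.g. 'build'
        Required('argv'): [str],
        Required('success'): bool,
        Optional('system'): {
            Optional('os'): str,
            Optional('logical_cores'): int,
            # psutil-derived numbers stay in the resource usage data, since
            # psutil is not installed on every system.
        },
    }, extra=ALLOW_EXTRA)  # stay permissive until the data model settles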
MozReview-Commit-ID: CFRq1Ow6AOf
self._pushhead_cache no longer exists. But self._tree_cache does!
This was causing AttributeError when running `mach artifact clear-cache`
and other misc `mach artifact` sub-commands.
MozReview-Commit-ID: CP8NL6eCfhD
Currently, config.status runs `mach artifact install`. mach commands prefix
output lines with elapsed time by default. When running from `mach build`,
there will be 2 sets of times in `mach artifact install` output lines.
When config.status is run directly, there will be no times printed
except for `mach artifact install`. It is weird both ways.
Fix it by not prefixing output lines with times when running
`mach artifact install` from config.status.
MozReview-Commit-ID: GVinyI4Z0qr
Previously, we were running version 2.5.1 of requests. Newer
versions have several bug fixes and even address a CVE.
Source was obtained from
https://pypi.python.org/packages/source/r/requests/requests-2.9.1.tar.gz
and checked in without modification. This should be a rubber stamp
review.
MozReview-Commit-ID: 9tFSVJFfwGh
Opt-in by adding --enable-gradle-mobile-android-builds.
Gradle dependencies (including the Android-Gradle plugin) are assumed
to be present. Local developers will fetch them from the JCenter
repository.
Android-specific Maven dependencies are shipped as "extras" with the
Android SDK, and should be found automatically by the Android-Gradle
plugin.
MozReview-Commit-ID: 966XgddWgEu
The moz.build data is now sufficient to, with some convolution, generate
the same compilation database that recursing the tree with the showbuild
target does.
The resulting code is not the prettiest, and exposes the shortcomings of
the current moz.build data model. It is however a first step towards
fixing those shortcomings, because they are now more clearly identified.
This was validated on all platforms on try by checking that the output of
`mach build-backend -b CompileDB -d -n`
is empty when backing out the patch after running
`mach build-backend -b CompileDB`
once.
There are a few difficult directories to handle, with limited usefulness
compared to having the CompileDB properly filled for everything else
in a timely manner, so skip them for now.
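For reference, each CompileDB record is a standard clang compilation-database entry; the paths and command below are placeholders, not real build output:

    import json

    # compile_commands.json is a JSON array of records like this one.
    entry = {
        'directory': '/objdir/dom/base',
        'command': 'c++ -o nsDocument.o -c -I/objdir/dist/include /srcdir/dom/base/nsDocument.cpp',
        'file': '/srcdir/dom/base/nsDocument.cpp',
    }

    with open('compile_commands.json', 'w') as fh:
        json.dump([entry], fh, indent=2)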
The most common issue I'm hearing with eslint is people who have an outdated
node installed. This does a quick check to verify the version is high enough
before linting.
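Conceptually the check is just this; the minimum version below is illustrative, not the real requirement:

    import subprocess

    MIN_NODE_VERSION = (4, 2, 0)  # illustrative minimum


    def node_is_new_enough(node='node'):
        """Return True if the installed node is at least MIN_NODE_VERSION."""
        try:
            out = subprocess.check_output([node, '--version'])
        except (OSError, subprocess.CalledProcessError):
            return False
        # `node --version` prints something like "v4.2.6".
        version = tuple(int(p) for p in out.decode().strip().lstrip('v').split('.')[:3])
        return version >= MIN_NODE_VERSION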
MozReview-Commit-ID: Em0jn18OUYo
This adds a substs field and cherry-picks the MOZ_ARTIFACT_BUILDS field so
we can determine whether or not an artifact build occurred.
MozReview-Commit-ID: 8aio8mP8pmR
Because the add-ons manager hasn't started up yet, we can replace the certificate
database in xpcshell tests with one that claims add-ons are signed by valid
certificates even when they aren't. This allows us to run tests even in builds
where signing cannot be disabled for the normal application.
This adds an override for all tests except those that are explicitly testing
signing.
This fails under Docker otherwise. platform.architecture() uses the
architecture of the Python executable, which should be fine assuming the
system-installed Python is being used. Even if the user installed their
own Python, running a 32-bit Python on a 64-bit system feels like
something that would be extremely rare.
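In sketch form, the kind of check this enables:

    import platform


    def is_64bit_build_host():
        """Report whether the build host looks 64-bit.

        platform.architecture() reflects the bitness of the Python executable
        itself, which matches the system as long as the system-installed
        Python is used; it inspects the binary rather than asking the kernel,
        so it behaves the same inside a Docker container.
        """
        bits, _linkage = platform.architecture()
        return bits == '64bit'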
DONTBUILD (NPOTB)
We need this package to build psutil and other Python packages with C
extensions.
Fedora 23 offers Python 3 as the default package and the package name
changed from python-devel to python2-devel. Fedora 22 does appear to
support python2-devel as an alias. So we use the same name everywhere.
DONTBUILD (NPOTB)
This will perform a single HTTP request that completes in <1s. Contrast
with before when we performed multiple HTTP requests and the process
took several seconds.
DONTBUILD (NPOTB)
MozReview-Commit-ID: DjX3LBdSOIk
We have very few directories where we have SOURCES declared that are not
part of a library or program in some way. In fact, there is only one
where it is legitimate because we only use the object file
(build/unix/elfhack/inject). Others are the result of moz.build control
flow (see e.g. netwerk/standalone), and we end up building more objects
than we need to.
There are other cases where we need objects without actually linking
them anywhere, but there are other sources in the same directory, and a
corresponding Linkable is emitted. And in fact, the only case I knew
about (media/libvpx) hasn't used such objects since bug 1151175.
Since bug 1239217, artifact builds are using a hybrid build system that
uses the fastermake rules to copy files to dist/bin, which means
artifact files are not removed by the processing of the dist/bin install
manifest. This means we don't need to add them to the recursivemake
install manifest anymore.
Currently, --enable-artifact builds will take artifacts from fx-team regardless of the
state of the current working directory. This can lead to broken builds if someone
updates to a tree other than fx-team.
This commit changes the default behavior from tracking fx-team to finding all recent
pushheads and choosing the one closest to the working parent that has artifacts available.
This also fixes a mismatch between tree names according to mozext and branch names in
the taskcluster index that prevented artifact downloads from common integration branches.
clang-based tools typically choke when they encounter a lot of -Xclang
arguments, and since those args are passed to the cc1 driver, they're
unnecessary for such tools anyway.
This helps people who have things like --enable-clang-plugin in their
mozconfig use clang-based tools without having to remove these arguments
manually.
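The filtering amounts to dropping each -Xclang together with the argument that follows it; a standalone sketch, not the in-tree implementation:

    def strip_xclang_args(args):
        """Remove every `-Xclang <arg>` pair from a compiler command line.

        Plugin-related flags reach the cc1 driver via -Xclang; clang-based
        tooling invokes the driver differently, so the pairs can be dropped.
        """
        out = []
        skip_next = False
        for arg in args:
            if skip_next:
                skip_next = False
                continue
            if arg == '-Xclang':
                skip_next = True
                continue
            out.append(arg)
        return out


    # Example: the plugin flags disappear, the rest of the command is untouched.
    cmd = ['clang++', '-Xclang', '-load', '-Xclang', 'libclang-plugin.so', '-c', 'foo.cpp']
    assert strip_xclang_args(cmd) == ['clang++', '-c', 'foo.cpp']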
Downloaded from https://pypi.python.org/packages/source/v/virtualenv/virtualenv-14.0.1.tar.gz
and extracted without any modifications.
This appears to speed up `mach configure` for artifact builds by ~3.3s on
my Linux machine (~16.3s to 13.0s). A significant part of this appears
to be the newer version of pip caching and reusing wheels after the first
install.
This should be a rubber stamp review, as all changes are from trusted
upstream packages.
Calling error() in moz.build files to indicate unsupported configurations is
useful, but readers using EmptyConfig currently trigger those errors. This patch
allows a Config to have an `error_is_fatal` attribute which, when False,
makes error() emit a warning instead.
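In sketch form, heavily simplified relative to the real moz.build sandbox plumbing:

    class SandboxError(Exception):
        pass


    def make_error_handler(config):
        """Build the error() function exposed to moz.build sandboxes.

        When config.error_is_fatal is False (e.g. on an EmptyConfig used by a
        reader), error() degrades to a warning so reading can continue.
        """
        def error(message):
            if getattr(config, 'error_is_fatal', True):
                raise SandboxError(message)
            print('WARNING: %s' % message)
        return error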
This makes two straightforward optimizations in FileCopier: avoiding a redundant
iteration over the directory structure to find destination files (which includes a
call to normpath), and avoiding redundant calls to determine the directories to
preserve when remove_unaccounted is not specified (which include a call to dirname).
Running a no-op install of _tests with this patch removes about 25,000 calls to
normpath and about 220,000 calls to dirname, resulting in an overall speedup of
10-20%.
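The second optimization essentially makes the preserved-directory computation conditional, roughly like this (heavily simplified relative to the real FileCopier):

    import os


    def directories_to_preserve(dest_files):
        """Collect every ancestor directory of the files being installed."""
        dirs = set()
        for path in dest_files:
            parent = os.path.dirname(path)
            while parent and parent not in dirs:
                dirs.add(parent)
                parent = os.path.dirname(parent)
        return dirs


    def plan_copy(dest_files, remove_unaccounted=True):
        """Return the set of directories that must be kept after the copy.

        Only pay for the dirname-heavy walk when unaccounted files are going
        to be pruned; otherwise nothing is removed and the answer is unused.
        """
        return directories_to_preserve(dest_files) if remove_unaccounted else set()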
Listing the tests in all-tests.json is prerequisite to adding support
for Marionette tests to `./mach test FILE`.
Marionette tests will be run from the source directory, as they do not
currently need packaging.
The current rule is only for "backend.RecursiveMakeBackend", but, with
the current default of generating both the RecursiveMake and FasterMake
backends, the command creates/refreshes both backends. This is, in fact,
how the FasterMake backend is refreshed in most cases.
Moreover, with hybrid backends, the generated file is not
"backend.RecursiveMakeBackend" anymore, so we need a more generic way to
handle this.
Furthermore, it's not necessarily desirable for all backends to have a
dependency file to handle the dependencies to refresh the backend, so
generate a plain list instead. This has the side effect of making `mach
build-backend --diff` more readable for changes to that file.
Finally, make the backend.* files be created like any other backend files,
such that their diffs appear in the `mach build-backend --diff` output.
The FasterMake build system is meant to be invoked through `mach build
faster`, which already does this, or, in the near future, as part of a
hybrid build system, which will deal with it as well. People doing
`make -C objdir/faster` won't have the backend automatically refreshed,
but that's not a supported way to use it anyway.
Install manifests are not empty in normal conditions, apart from a few
exceptions where they are only used for a "magic" `rm -rf`.
However, we're going to introduce changes that will empty some of
the install manifests and make their work happen from a different
backend, in which case we don't want them to correspond to a `rm -rf`.
When doing build system changes affecting backend-generated files, I
often use `mach build-backend --diff`. But most of the time I end up
wanting to look at the full diff again when doing further changes, which
leads me to stash my changes away, run `mach build-backend` to get the
initial state again, unstash and rerun `mach build-backend --diff`.
This has been a time drain for long enough :)
The code doing this was added before we had install manifests, back when
they were purge manifests, and before we tracked all files created by
the backend. Nowadays, if an install manifest is removed, it will be
removed in BuildBackend.consume.
In fact, purging the install manifests in the backend itself breaks the
deleted files count displayed after reticulating splines.
XPI_NAME affects no tier on its own, as it merely changes paths where
things end up that are defined by other variables.
FILES_PER_UNIFIED_FILE had an erroneous value.