This won't impact performance much. But fewer `make foo` invocations will make porting the C++ unit tests (which are the largest remaining tests) to the Python archiver easier to grok.
This conversion did change behavior slightly. Previously, startup
cache files weren't being packaged if the startup cache was disabled. Now,
we always package them, since their presence in the test archive should
be harmless. The original change to guard their inclusion in
ee82e0ae5488 was probably unnecessary.
This is slightly more involved than earlier changes because reftests
have a one-off mechanism for finding files. Essentially, the master
reftest manifest is loaded, directories are discovered, and every file
in those directories is packaged.
We add support to our test archive generation tool for reading sources
from reftest manifests, and we tell it where the reftest manifests are.
print-manifest-dirs.py was only being used for staging reftest files.
Since we don't do that any more, the functionality doesn't need to exist
in a standalone file, so it has been moved inline into test_archive.py.
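As a rough illustration of that one-off discovery mechanism, a minimal sketch (the helper name is hypothetical and annotations on include lines are ignored; the real logic now lives in test_archive.py):

```python
import os

def collect_reftest_dirs(manifest_path, seen=None):
    # Walk a reftest manifest, following `include` directives, and
    # return every directory containing a referenced manifest. Every
    # file under these directories is then packaged.
    if seen is None:
        seen = set()
    seen.add(os.path.dirname(manifest_path))
    with open(manifest_path) as fh:
        for line in fh:
            line = line.split('#', 1)[0].strip()  # drop comments
            if line.startswith('include '):
                sub = os.path.join(os.path.dirname(manifest_path),
                                   line.split(None, 1)[1])
                collect_reftest_dirs(sub, seen)
    return seen
```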
This change avoids copying ~26,000 tests consuming 131 MB during test
packaging. This is a majority of the file count that was remaining in
the stage directory at this point. On my machine (which hasn't typically
seen major wall time wins from not staging files due to its fast SSD),
this change made test packaging ~20% faster, reducing wall time from
~50s to ~40s!
A Try push seemed to indicate drastic improvements with the series up to
this point. Including the already-landed changes to generate test
archives concurrently, test packaging times on OS X builders dropped
from ~18:40 to 6:29! Times on Linux x64 remained about the same (~2:46),
possibly because these machines already have SSDs and because of normal
variance in the performance of builders and EC2 instances.
With this change, all test ZIP archives are now generated via Python and
mozpack.
This change does not alter I/O or file copy behavior at all. There is
still a lot of room for eliminating extra file copies.
The web-platform test archive now builds without any staging at all.
This saves ~103 MB of file copies on my machine.
The testing/web-platform/Makefile.in serves no purpose after this
change, so it and all references to it have been removed.
This is very similar to what we did for xpcshell. Like xpcshell, there
are still some staged files. However, about 73MB of copies are
eliminated with this change. On my machine, overall execution time of
test packaging appears to decrease, although CPU usage is up slightly.
This commit produces the xpcshell test archive without staging 5000+
xpcshell test files first.
We teach the archiver to ignore .mkdir.done files.
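A hedged sketch of that exclusion, assuming mozpack's FileFinder accepts ignore patterns (the base path is illustrative):

```python
from mozpack.files import FileFinder

# Exclude .mkdir.done sentinel files at find time so they never
# reach the archive writer.
finder = FileFinder('_tests/xpcshell', ignore=['**/.mkdir.done'])
entries = sorted(path for path, f in finder.find('**'))
```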
The xpcshell Makefile.in still stages some files. This is less than
ideal. However, it is a small handful of files and shouldn't add too
much overhead.
This appears to not impact overall CPU usage significantly on my
machine, despite using Python instead of `zip`. It does reduce I/O
by ~25MB by avoiding the staging copy.
Test archive generation currently copies a bunch of files into a staging
area then runs `zip` to produce ZIP files. There are 2 concerns with
this approach:
1) We incur a lot of extra I/O copying files so that everything is
rooted in a single tree, which keeps the `zip` invocation and its
paths simple.
2) ZIP files inherit properties from the local filesystem (including
mtime), making ZIP files non-deterministic.
This commit introduces a new mozbuild action for producing test
archives. It does so using the mozpack file finder and JAR writer,
which are used throughout the build to deterministically
produce ZIP/JAR files from files in multiple source directories.
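A minimal sketch of the approach, assuming mozpack's FileFinder and JarWriter APIs (the real action also handles ignore patterns, duplicates, and per-archive configuration):

```python
from mozpack.files import FileFinder
from mozpack.mozjar import JarWriter

def write_archive(out_path, sources):
    # sources: iterable of (base_dir, pattern, dest_prefix) tuples,
    # allowing inputs to span multiple source directories with no
    # staging copy in between.
    with JarWriter(file=out_path, compress=True) as writer:
        for base, pattern, prefix in sources:
            finder = FileFinder(base)
            # find() yields (relative path, file) pairs; sorting keeps
            # archive contents deterministic regardless of FS order.
            for path, f in sorted(finder.find(pattern)):
                writer.add(prefix + path, f.open().read())
```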
We implement support for producing the mozharness archive. This archive
does not involve files that are staged, so no I/O is saved. In fact,
the switch from `zip` to Python likely makes this slightly slower.
However, we do have deterministic archives now.
Additional archives will be ported over in subsequent commits.
Add get_pref(), set_pref(), and set_prefs() to make manipulating
preferences easier.
enforce_gecko_prefs() did a similar job to set_prefs(), but it restarts
the browser if a preference to be set differs from what is already set
in the system. Not all Gecko preferences require a restart to take
effect, so using set_prefs() should make testing faster. See bug 1048554.
Because the aws command line tool call is piped, its exit status is
lost, but the net result is an empty variable assignment. We take
advantage of this to detect errors in the aws tool.
Add new tasks for the "Linux" platform. These run on the same docker image as
the Linux64 builds, but that image has been modified to contain a bunch of
*.i686 packages required to cross-compile for i686. Due to yum's propensity
for resolving dependencies without regard to architecture, with this patch the
system-setup.sh script lists both architectures of each package explicitly.
This also leaves `gcc` installed for user convenience in installing Python
extensions, NPM modules, etc.
It also includes 'subversion' for clang builds (bug 1208029).
Previously, we had a single make target and rule for generating all test
archives. These tasks can be performed in parallel. This commit
refactors the makefile to add a separate target for each archive, thus
enabling test archives to be generated concurrently.
On my MacBook Pro, this reduces `make package-tests -j8` from ~78s to
~50s, a reduction of ~28s, or ~36%. The reduction on machines without
SSDs (like many builders in automation) will likely be smaller. However,
the page cache should service most file reads during archiving since
these files were just staged, so hopefully the gains are in the same
ballpark.
Upcoming work will introduce multiple targets for building test
archives. To prepare for this, we introduce a phony target that
tracks the staging of all test files so each target can gate on a common
prerequisite.
If a Buildbot test job is scheduled through TaskCluster (the Buildbot Bridge supports this),
then the generated Buildbot Change associated with the test job does not have the installer and
test URLs that Mozharness needs to run the test job.
Since we can't modify how a test job is called on Buildbot (we can't switch from
--read-buildbot-config to --installer-url and --test-url), we have to detect that there is
a 'taskId' defined for the test job (this indicates that the job was scheduled through the BBB),
and based on such a 'taskId' we can determine the parent task and the artifacts it uploaded.
Changes to ScriptMixin:
* Refactor _retry_download_file() to _retry_download()
* If no file is specified when calling _retry_download(), we call _urlopen() instead of _download_file()
* Add load_json_url() method to fetch the contents of a JSON file without writing to disk (see the sketch after these lists)
Changes to TestingMixin:
* If the job is triggered through Buildbot, we look for the Changes object; otherwise, we look for artifacts of the parent task
* Added functions find_artifacts_from_buildbot_changes() (original behaviour) and find_artifacts_from_taskcluster() (functionality via TaskClusterArtifactsFinderMixin)
* Call self.exception() instead of raising exceptions, plus minor fixes
New TaskClusterArtifactsFinderMixin:
* It allows any inheriting class to find the artifacts of the build job that triggered this test job
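A minimal sketch of what load_json_url() might look like (Python 2-era Mozharness assumed; the real method presumably reuses the mixin's retry logic):

```python
import json
import urllib2

def load_json_url(url):
    # Fetch and parse a JSON document over HTTP without ever
    # writing the payload to disk.
    return json.load(urllib2.urlopen(url))
```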
This stops exposing ANDROID_BUILD_TOOLS and ANDROID_PLATFORM_TOOLS via
AC_SUBST. We already expose most tools; this adds EMULATOR and consumes
it (and ADB) where appropriate.
This performs the update check for system add-ons. It runs as part of the daily
add-on update checks, similar to hotfix checks. Currently no URL is set, so builds
won't actually start checking yet.
I've taken a few shortcuts here: updates are only staged and need a restart to
install, and updates are always downloaded rather than reusing existing local
copies. At least the latter probably needs fixing before turning this on, but
it makes more sense to iterate on those in tree.
When workers shut down, we discard the event queue rather than running it to completion. Originally, workers managed their event queue themselves and would simply iterate through the array of events and cancel them all. After bug 914762 this was done by setting a (thread-)global "canceling" flag and then calling NS_ProcessPendingEvents. But this neglects that a shutdown request can be received while the worker is in a sync queue. In this case, calling NS_ProcessPendingEvents will process any events pending in the sync queue, which is *not* the queue we need to cancel.
The fix is, if we are in a sync queue when NotifyInternal is called, to defer clearing the queue until the top-most sync queue is destroyed and we are about to return to the regular event queue. Only then can we call NS_ProcessPendingEvents to clear out the queue. Because we can never process any events from this queue while sync queues are active, the timing of the mass cancellation is unchanged from the perspective of events in the regular queue.
This cleans up some redundant keys in `branches/try/job_flags.yml`, spells
the platform correctly (`linux`, not `linux32`), and defines the platform in
`base_job_flags.yml`.
This commit is us getting out of our own way. We were specifying
-classpath twice, once in $(JAVAC) and once in java-build.mk. Only
the latter of these is active. This is a problem for ANDROID_EXTRA_JARS
-- those JARs should be on the classpath and input to $(DX) -- and
JARs that should be on the classpath but *not* input to $(DX). This
commit removes the global flags to $(JAVAC) and adds
JAVA_{BOOT}CLASSPATH_JARS. This required some hijinkery moving
wildcards to moz.build files, but everything seems to work.
As well as clarifying some parts of the build, part 2 uses this work
to modify the classpath.
In a following patch, all DevTools moz.build files will use DevToolsModules to
install JS modules at a path that corresponds directly to their source tree
location. Here we rewrite all require and import calls to match the new
location that these files are installed to.
* Moves the test to .https so it actually works.
* Switches to using promise_test like the current Blink test, since sequential promise support is not implemented and not needed.
* Compares registrations by scope, since getRegistrations() produces new objects every time.
* Changes the with_iframe() call for the remote frame registration to actually wait until the remote register() finishes, so there is a valid registration to unregister() when the frame receives a message.
Always use an in-process webserver, removing the need for Apache, and
hopefully providing better accuracy for numbers.
This means that we now have to copy the pagesets into the talos dir on
the harness.
On Windows, some pageset paths were too long due to that, so the
solution is to replace "page_load_test" with "tests".
In Mozharness we support a developer mode which is capable of downloading
artifacts protected by Release Engineering's LDAP credentials.
The credentials are stored, for developers' convenience, unencrypted in
plain text. This is not wanted by most developers.
In this patch we make sure that the user is prompted for the password once,
but we do not store it on disk.
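A minimal sketch of the prompt-but-don't-persist approach, using Python's getpass (Python 2-era Mozharness assumed; the function name is hypothetical):

```python
import getpass

def prompt_ldap_credentials():
    # Ask once per run; the password lives only in memory and is
    # never written to disk.
    user = raw_input('LDAP username: ')
    password = getpass.getpass('LDAP password: ')
    return user, password
```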
This adds support for web-platform-tests to mach try. It changes the implementation
so that instead of passing paths to manifests, the user passes arbitrary paths in the
source tree, and tests under that path are run, with test discovery mainly left to
the harness.
This test sends keys to the urlbar causing a page navigation, then waits on
the current url to confirm the navigation is reflected. Because the navigation
changes remoteness, the url check and loading the content listener in the
new process race. When the url check wins, it causes a hang by sending a
message before the frame script that should receive it has loaded.
This is a very specific scenario that only impacts tests that need to cause
navigation to in-process pages with key events. If these sorts of tests
become a priority, this will need to be revisited.