With this chunking strategy, the runtimes of tests are taken into account, such that each chunk
takes roughly the same amount of time to finish. Tests belonging to the same manifest will not get
split up.
The algorithm works by sorting every manifest from slowest to fastest. Each manifest is popped off
and its tests are added to the fastest chunk to date until no manifests are left. Total runtimes of
the chunks are re-calculated after every addition.
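The assignment is a simple greedy scheme. A rough standalone sketch (illustrative only; the manifest and runtime structures here are hypothetical, not the actual patch):

def chunk_by_runtime(manifests, total_chunks):
    # manifests: list of (manifest_path, total_runtime) pairs.
    # Sort from slowest to fastest, then always hand the next manifest
    # to whichever chunk is currently fastest (lowest total runtime).
    chunks = [{'manifests': [], 'runtime': 0} for _ in range(total_chunks)]
    for path, runtime in sorted(manifests, key=lambda m: m[1], reverse=True):
        fastest = min(chunks, key=lambda c: c['runtime'])
        fastest['manifests'].append(path)
        fastest['runtime'] += runtime
    return chunks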
This patch detects when breaking away from the parent job isn't strictly necessary in order to manage processes in a new job, which allows nested job objects to be used to manage processes and their children. Loss of job object functionality is handled as non-fatal in mozprocess; however, mozrunner and other consumers that do things like restarting Firefox require it.
A new parameter called 'processStderrLine' is added. When specified, stdout is processed by the
'processOutputLine' callbacks and stderr is processed by the 'processStderrLine' callbacks. When
not specified, stderr is redirected to stdout, matching the previous default behaviour.
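For example, the new parameter might be used like this (a hedged sketch; the command and callbacks are placeholders, assuming callbacks are passed as lists the way processOutputLine already is):

from mozprocess import ProcessHandler

def log_stdout(line):
    print('stdout: %s' % line)

def log_stderr(line):
    print('stderr: %s' % line)

proc = ProcessHandler(['python', '--version'],
                      processOutputLine=[log_stdout],
                      processStderrLine=[log_stderr])
proc.run()
proc.wait()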
A side effect of this is that mozprocess now uses three threads to process output: one thread each
for stdout and stderr, which read output lines and store them in a Queue as fast as possible so the
child process never blocks in stdout.write(), and a third thread that executes the callbacks.
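The split between reader threads and the callback thread looks roughly like the following simplified sketch (not the mozprocess implementation itself):

import subprocess
import threading
from queue import Queue

def start_reader(name, stream, queue):
    # Reader thread: drain the pipe as fast as possible so the child
    # process never blocks writing to stdout/stderr.
    def read():
        for line in iter(stream.readline, b''):
            queue.put((name, line.rstrip(b'\r\n')))
        queue.put((name, None))  # sentinel: this stream is closed
    thread = threading.Thread(target=read)
    thread.daemon = True
    thread.start()
    return thread

def start_callback_worker(queue, callbacks, streams=2):
    # Callback thread: pull lines off the queue and run the matching
    # callbacks, keeping slow callbacks off the reader threads.
    def process():
        open_streams = streams
        while open_streams:
            name, line = queue.get()
            if line is None:
                open_streams -= 1
                continue
            for callback in callbacks.get(name, []):
                callback(line)
    thread = threading.Thread(target=process)
    thread.daemon = True
    thread.start()
    return thread

# Example wiring: one reader per stream plus one callback worker.
proc = subprocess.Popen(['python', '--version'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
lines = Queue()
callbacks = {'stdout': [lambda l: print('OUT', l)],
             'stderr': [lambda l: print('ERR', l)]}
readers = [start_reader('stdout', proc.stdout, lines),
           start_reader('stderr', proc.stderr, lines)]
worker = start_callback_worker(lines, callbacks)
worker.join()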
A filter is a callable that accepts an iterable of tests and a dictionary of values (e.g. mozinfo.info) and returns an iterable of tests. Note that filtering can mean modifying tests in addition to removing them. For example, this implements a "timeout-if" tag in the manifest:
from manifestparser import expression
import mozinfo

def timeout_if(tests, values):
    for test in tests:
        if 'timeout-if' in test:
            timeout, condition = test['timeout-if'].split(',', 1)
            if expression.parse(condition, **values):
                test['timeout'] = timeout
        yield test

tests = mp.active_tests(filters=[timeout_if], **mozinfo.info)
The build system / mach currently has a very hacky virtualenv setup.
Essentially, it resorts to sys.path munging instead of a proper,
isolated environment.
During initialization, mach installs python/psutil in sys.path. Later
on, some code does an |import psutil|. This fails iff the psutil C
extension can't be found.
If there is a psutil C extension installed outside of mach and
python/psutil, |import psutil| may load it. The version mismatch isn't
detected until an extension-using psutil API is called. This has
manifested inside |mach build| via the resource monitor as an
|AttributeError: 'module' object has no attribute 'linux_sysinfo'|
exception during psutil.virtual_memory().
The proper fix for this is for the Python environment to ensure the
psutil C extension is built before attempting to import and use psutil.
Arguably, psutil itself should perform some kind of version check when
it imports the C extension to ensure things are in sync and fail at
import time.
Fixing mach and the build system Python environment to build psutil
earlier/properly is a long outstanding bug. It needs to be addressed.
But it is considerable effort. This patch continues the long history of
wallpapering over psutil import/run failures because using a proper
virtualenv from mach/the build system is a lot of work. Sad panda.
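For reference, the wallpapering amounts to something like the following guarded usage (an illustrative sketch, not the exact change):

try:
    import psutil
except ImportError:
    psutil = None

def virtual_memory_or_none():
    # A missing or mismatched C extension can surface as an ImportError
    # above or as an AttributeError here; treat both as "no psutil data".
    if psutil is None:
        return None
    try:
        return psutil.virtual_memory()
    except AttributeError:
        return None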
--HG--
extra : rebase_source : 5c449d69c0fd907ea8359ac721ef6287baa4f10e