The inputs to scripts for GENERATED_FILES are restricted to filenames
only. We have several examples in the tree, however, where a script
takes non-filename arguments. To convert those cases to use
GENERATED_FILES, we first need some way of "injecting" non-filename
arguments into the script.
This commit adds a method for doing that, by extending the .script flag
on GENERATED_FILES to include an optional method name:
f = GENERATED_FILES['foo']
f.script = 'script.py:make_foo'
will invoke the make_foo function found in script.py instead of the
function named main.
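For illustration, a minimal sketch of what such a script could look like. The calling convention is an assumption here (each function receiving an open, writable handle for the generated file); only the script.py:make_foo selection syntax comes from this commit.

    # script.py (sketch); the signature is assumed, not specified above.
    def main(output):
        output.write('// default output\n')

    def make_foo(output):
        # Selected via f.script = 'script.py:make_foo' instead of main().
        output.write('// output produced by make_foo\n')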
As content in moz.build files has grown, it has become clear that
storing everything in one global namespace (the "context") per moz.build
file will not scale. This approach (which is carried over from
Makefile.in patterns) limits our ability to do things like declare
multiple instances of things (like libraries) per file.
A few months ago, templates were introduced to moz.build files. These
started the process of introducing separate contexts / containers in
each moz.build file. But it stopped short of actually emitting multiple
contexts per container. Instead, results were merged with the main
context.
This patch takes sub-contexts to the next level.
Introduced is the "SubContext" class. It is a Context derived from
another context. SubContexts are special in that they are context
managers. When the context manager is entered, the SubContext becomes
the main context associated with the executing sandbox, temporarily
masking the existence of the main context. This means that UPPERCASE
variable accesses and writes will be handled by the active SubContext.
This allows SubContext instances to define different sets of variables.
When a SubContext is spawned, it is attached to the sandbox executing
it. The moz.build reader will now emit not only the main context, but
also every SubContext that was derived from it.
To aid with the creation and declaration of sub-contexts, we introduce
the SUBCONTEXTS variable. This variable holds a list of classes that
define sub-contexts.
Sub-contexts behave a lot like templates. Their class names become the
symbol names in the sandbox.
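As illustration, a hedged sketch of declaring and then using a sub-context. The ExtraFlags class, its FLAGS variable, and the VARIABLES tuple shape are invented for this example; only the mechanism (SUBCONTEXTS registration, class name exposed as a sandbox symbol, context-manager usage) comes from the description above.

    # Declaration side (sketch); ExtraFlags, FLAGS and the VARIABLES layout
    # are hypothetical.
    class ExtraFlags(SubContext):
        VARIABLES = {
            'FLAGS': (list, list, 'Illustrative list variable.'),
        }

    SUBCONTEXTS = [ExtraFlags]

    # Usage side, in a moz.build file: while the context manager is active,
    # UPPERCASE writes land in the sub-context instead of the main context.
    with ExtraFlags():
        FLAGS += ['-DILLUSTRATIVE']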
The regular expression cache for mozpack.path.match was looked up by the
original pattern. However, the pattern variable was mutated inside the
function, and the mutated value was what subsequently got stored as the
cache key. This effectively resulted in a 0% cache hit rate.
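The fix is to key the cache on the pattern before it is rewritten. A minimal sketch of that shape, with fnmatch.translate standing in for mozpack's own glob translation (this is not the actual mozpack.path code):

    import fnmatch
    import re

    _cache = {}

    def match(path, pattern):
        # Key the cache on the caller's pattern *before* it is rewritten. The
        # buggy version reassigned `pattern` to the translated regex text
        # first, so entries were stored under a key no caller ever looked up.
        if pattern not in _cache:
            _cache[pattern] = re.compile(fnmatch.translate(pattern))
        return bool(_cache[pattern].match(path))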
In some tests being written for bug 1132111, which involve a full
filesystem traversal for moz.build files and subsequent execution of
those files, the following timings are indicative of the impact of this
patch.
Before:
real 16.082s
user 14.760s
sys 1.318s
After:
real 6.345s
user 5.085s
sys 1.257s
Support for a callback to be executed post sandbox evaluation was added
in 24b43ecb4cad (bug 949906) to unbust Sphinx as a result of some GYP
processing changes. e93c40d4344f and bug 1071012 subsequently changed
how Sphinx variables are extracted from moz.build, removing the only
consumer of this feature.
Since there are no consumers of this feature left, remove it and make
the code simpler.
The number of cache files or the cache size might decrease under these
circumstances:
* The user manually changes the max cache size to a value smaller than
the current cache size.
* The cache size reaches the max cache size.
We should not assume both values will have increased after the build
finishes.
The Android ARchive contains the compiled Gecko libraries that Firefox
for Android interfaces with. It does not contain the Gecko resources
(the omnijar, omni.ja) nor the compiled Java code (classes.dex).
This also uploads metadata and sha1 hashes for future consumption by
Maven and/or Ivy dependency managers. In some brave future world,
we'll work out exactly what that looks like; for now, this solves a
storage problem (each .aar file is ~20MB) and it's possible to point
Gradle directly at the uploaded Ivy metadata and artifacts.
--HG--
extra : rebase_source : 0c12b44f587d4a027ca5258bae8fcbb6f6028c24
Now that we have proper moz.build objects for GENERATED_FILES, we can
add 'script' flags and 'args' flags in moz.build for select
GENERATED_FILES. We restrict 'args' to being filenames for ease of
implementing checks for file existence, and many (all?) of the examples
of file generation throughout the tree don't need arbitrary strings or
Python data.
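A sketch of the resulting moz.build syntax; the 'script' and 'args' flag names come from the text above, while the file names are hypothetical.

    GENERATED_FILES += ['data_table.h']

    table = GENERATED_FILES['data_table.h']
    table.script = 'gen_data_table.py'
    table.args = ['data_table.in']  # filenames only, per the restriction above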
This patch is mostly useful for being able to see these changes
independently of the major changes to GENERATED_FILES. We are going to
need proper moz.build objects for GENERATED_FILES when we add the
ability to define scripts and arguments for them, so we might as well do
that first.
With the previous changes, we can now tell Visual Studio about the
actual unified files, rather than the files that are #include'd in them.
I believe this is much closer to what Visual Studio wants to see, and
enables things like Intellisense to work properly.
CommonBackend is where the writing of the unified files belongs, since
that's an operation common to all backends. A special hook into
subclasses is used to enable subclass-specific processing of
UnifiedSources.
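A hedged sketch of that hook shape; apart from the CommonBackend name, the method and attribute names below are illustrative stand-ins rather than the real backend API.

    import os

    class CommonBackend(object):
        def _write_unified_files(self, unified_mapping, output_dir):
            for unified_file, sources in unified_mapping:
                with open(os.path.join(output_dir, unified_file), 'w') as fh:
                    # Common to every backend: a unified file just #includes
                    # its constituent sources.
                    fh.writelines('#include "%s"\n' % s for s in sources)
            # Subclass hook: derived backends override this to emit their own
            # output (e.g. make rules) for the unified files.
            self._handle_unified_sources(unified_mapping)

        def _handle_unified_sources(self, unified_mapping):
            pass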
UnifiedSources will be processed outside of consume_object, so we need
some way of accessing the backend file for an object outside of
consume_object as well.
UnifiedSources, not the recursivemake backend, should be responsible for
figuring out what unified files to generate, and what those unified
files should contain.
Generating the list of idls needed to build an xpt from its dependency list
makes us pass all _previous_ dependencies, inherited from the .deps makefiles.
This leads to removed files being listed on the xpidl-process.py command line,
and the command subsequently failing.
Instead, use generated lists of idl dependencies. At the same time, lighten the
generated Makefile further by not emitting xpt dependencies on their containing
directory, and instead generating those dependencies from the $xpt_files list.
This brings down the Makefile size from 100k to 38k.
The previous debugger was setting a breakpoint in the mach dispatcher.
This required users to manually step until the command's main function
was called.
Using pdb.runcall(), the debugger starts at the first line in the
executed command, which is far more useful.
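A sketch of the difference, with `handler` and `args` as hypothetical stand-ins for the mach command handler and its arguments:

    import pdb

    def dispatch(handler, args, debug=False):
        if debug:
            # pdb.runcall() drops into the debugger on the first line of the
            # command itself rather than at a breakpoint in the dispatcher.
            return pdb.runcall(handler, *args)
        return handler(*args)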
--HG--
extra : rebase_source : 10734805ad40ec85eedbb089b11d618dc32398e7
extra : amend_source : da89da1af1f137db2f5902bcb5a69312f4636a4b
Similar to the changes made for IPDL files, this commit moves all of the
non-makefile related logic for WebIDL files out of the recursive make
backend and into the common build backend. Derivative backends that
would like to do interesting things with WebIDL files now need to
implement _handle_webidl_build, which takes more parameters, but should
ideally require less duplication of logic.
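A hedged sketch of such an override; only the _handle_webidl_build name comes from the text, and the parameter list is a guess at the "more parameters" it mentions.

    class MyWebIDLAwareBackend(CommonBackend):
        def _handle_webidl_build(self, bindings_dir, unified_source_mapping,
                                 webidls, expected_build_output_files,
                                 global_define_files):
            # Emit this backend's own rules for compiling the generated
            # bindings; the common backend has already done the shared work.
            pass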
After a bunch of tiny changes, we're finally ready to make real
progress. We can now move the grouping of the generated IPDL C++ files
and the actual writing of the unified files for them into the common
build backend. Derivative backends now only have to concern themselves
with adding the particular logic that compiling those files requires.
We'll need to write out unified files for multiple backends, not just
the recursive make one. Put that logic someplace where all build
backends can access it.
_add_unified_build_rules shouldn't be in the business of determining how
to group files into their unified files. That logic belongs in the
caller of _add_unified_build_rules. Once that's done, the logic for
determining how to group files can migrate out of the recursive make
backend.
Nothing about writing unified files is specific to the recursive make
backend, and if we want to write the unified files for IPDL and WebIDL
files, we'll need this functionality available in the common build
backend.
RecursiveMakeBackend._group_unified_files doesn't contain any
functionality specific to the recursive make backend. We would also
like to move the unification of generated IPDL and WebIDL source files
into the common build backend. The common build backend would therefore
be the logical place for _group_unified_files, but the frontend should
also be able to handle unifying files so that backends don't have to
duplicate logic for UNIFIED_FILES. Therefore, we choose to move it to
mozbuild.util as its final resting place.
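For reference, a minimal sketch in the spirit of that helper; the name, arguments, and chunk size below are illustrative rather than the real mozbuild.util API.

    def group_unified_files(files, unified_prefix, unified_suffix,
                            files_per_unified_file):
        # Chunk the (sorted) sources into fixed-size groups and name each
        # group's unified file after its position.
        files = sorted(files)
        for i in range(0, len(files), files_per_unified_file):
            name = '%s%d.%s' % (unified_prefix,
                                i // files_per_unified_file,
                                unified_suffix)
            yield name, files[i:i + files_per_unified_file]

    # e.g. group_unified_files(sources, 'Unified_cpp', 'cpp', 16) yields
    # ('Unified_cpp0.cpp', <first 16 sources>), ('Unified_cpp1.cpp', ...), ...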
Pushing on a CLOSED TREE since Android build only.
--HG--
extra : rebase_source : cc99efa734d1f4738d3f026c930a4d1955723783
extra : amend_source : 52c07807a77f263d2eb2593826dc0285928d9be4
This handles:
list.0=A
list.1=B
list.sublist.0=C
so that
list=>[A, B]
list.sublist=>[C]
and
dict=default
dict.key1=A
dict.key2=B
so that
dict=>{key1:A, key2:B}
dict=>default
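An illustrative Python sketch of that behaviour; the helper names and the flat `props` dict are hypothetical, not the real reader API.

    def get_list(props, prefix):
        # Collect prefix.0, prefix.1, ... in numeric order.
        indexed = {}
        for key, value in props.items():
            if key.startswith(prefix + '.'):
                rest = key[len(prefix) + 1:]
                if rest.isdigit():
                    indexed[int(rest)] = value
        return [indexed[i] for i in sorted(indexed)]

    def get_dict(props, prefix):
        # Collect prefix.key1, prefix.key2, ...; a bare `prefix=value` entry
        # acts as the default for keys that are missing.
        return dict((key[len(prefix) + 1:], value)
                    for key, value in props.items()
                    if key.startswith(prefix + '.')
                    and '.' not in key[len(prefix) + 1:])

    props = {'list.0': 'A', 'list.1': 'B', 'list.sublist.0': 'C',
             'dict': 'default', 'dict.key1': 'A', 'dict.key2': 'B'}
    assert get_list(props, 'list') == ['A', 'B']
    assert get_list(props, 'list.sublist') == ['C']
    assert get_dict(props, 'dict') == {'key1': 'A', 'key2': 'B'}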
--HG--
extra : rebase_source : 2c8f5941d2fca9c56b3858d3e98078a43fa69090
DONTBUILD NPOTB on a CLOSED TREE
This was tested by ally and mfinkle in #mobile.
--HG--
extra : amend_source : 5dd061d6ef98f49cb232baa7221ec514a32eeeee
This does two things. First, it aligns the brew formula name
(AndroidNdk) with the brew file name (android-ndk.rb). Second, it
makes sure that we actually find the android-ndk.rb file. I think
|mach bootstrap| always worked, due to a felicity where the working
directory always contained android-ndk.rb; but |python bootstrap.py|
failed because android-ndk.rb was downloaded to a temporary location
that was not included in the |brew install| command.
--HG--
rename : python/mozboot/mozboot/android-ndk-r8e.rb => python/mozboot/mozboot/android-ndk.rb
extra : rebase_source : 44f164d3d5916bc2faf4c64285e232047daea35e