This commit produces an "install bouncer" APK which is a "hollow
shell" that looks like the main Fennec APK. In particular, both APKs have:
* the same Android package name (application id); and
* the same set of <permission>, <uses-permission>, and <uses-feature>
blocks in their manifests.
The bouncer APK must always have an android:versionCode smaller than
that of the main Fennec APK; for now, we will just bump it manually in
mobile/android/bouncer/moz.build.
Call a distribution in /data/data/$PACKAGE/distribution a "data
distribution". Right now we read data distributions only in response
to writing them via another code path (extracting from APK, or
downloading). We don't recognize a data distribution in the same way
that we recognize a system distribution (in /system/.../distribution)
in the Java code, simply because we don't look for it; and I haven't
investigated, but I think that Gecko may in fact recognize a data
distribution in this case.
This patch simply recognizes data distributions after looking for
other distributions. That way data distributions written by the
bouncer APK are recognized and initialized, but not given precedence
over other distribution channels.
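A conceptual sketch of that lookup order, in JavaScript purely for
illustration (the actual change lives in Fennec's Java distribution code;
the parameter and helper names here are assumptions):

    // Illustrative only: return the first distribution directory that
    // exists, so a data distribution is recognized but never takes
    // precedence over the other distribution channels.
    const fs = require("fs");
    const path = require("path");

    function findDistributionDir(systemDistributionDir, appDataDir) {
      const candidates = [
        systemDistributionDir,                 // system distribution, checked first
        path.join(appDataDir, "distribution"), // "data distribution", checked last
      ];
      return candidates.find(dir => fs.existsSync(dir)) || null;
    }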
This reads from "assets/distribution/**" in the APK and writes to
"distribution/**" in the data directory. That output is the same, but
the input used to read from "distribution/**", which is not really
supported by modern build tooling (Gradle), which doesn't allow to
write files directly into the APK root.
I manually tested this without issue. I see no way to add meaningful
tests to our current Robocop test suite; the long-term testing
approach is to develop a new test for this functionality and only run
it against the "distribution" build type that was added in Bug
1163080. However, that's a larger project than I have time for now.
This simply packs the assets/ subdirectory of the distribution
directory into the assets/ directory of the Android APK using existing
mechanisms. It also removes the older method of manually pushing
files into dist/bin/distribution, from where they would be packaged
into the APK under distribution/.
While waiting for a fix at the test harness level, this adds a helper that
waits for MozAfterPaint when running in e10s mode, used by all the
browser_profiler_tree-abstract tests.
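A minimal sketch of such a helper (the name and exact shape are
assumptions, not necessarily what landed):

    // Resolves once the next MozAfterPaint event fires on the given window.
    function waitForMozAfterPaint(win) {
      return new Promise(resolve => {
        win.addEventListener("MozAfterPaint", function onPaint() {
          win.removeEventListener("MozAfterPaint", onPaint);
          resolve();
        });
      });
    }

    // In a test, before asserting on the rendered widget:
    //   yield waitForMozAfterPaint(window);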
While investigating bug 1243549, we encountered several instances of the following error message during each startup:
*************************
A coding exception was thrown and uncaught in a Task.
Full message: TypeError: this.Paths is null
Full stack: Agent.wipe@resource:///modules/sessionstore/SessionWorker.js:296:7
worker.dispatch@resource:///modules/sessionstore/SessionWorker.js:21:24
anonymous/AbstractWorker.prototype.handleMessage@resource://gre/modules/workers/PromiseWorker.js:122:16
@resource:///modules/sessionstore/SessionWorker.js:30:41
*************************
These messages can be explained as follows:
* If sanitization has failed during shutdown, we attempt it again during
startup. This happens more often than it used to, because of 1/ startup
bug fixes in bug 1089695; 2/ new shutdown bugs most likely also introduced
by or around bug 1089695.
* Sanitization during startup doesn't wait until Session Restore has
properly started before it sanitizes the session, so sanitization of the
Session Restore file fails. This has probably always been the case; we
just never noticed.
* For some reason I do not understand, sanitization is attempted several
times.
* I suspect that this can cause problems during startup, as
sanitization and Session Restore race to use/remove the files of
Session Restore.
This patch makes sure that SessionFile.wipe() waits until
initialization of SessionFile is complete before proceeding.
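A minimal sketch of the idea (illustrative names, not the actual
SessionFile code), assuming SessionFile keeps a promise that resolves once
its worker has been initialized:

    const Cu = Components.utils;
    Cu.import("resource://gre/modules/PromiseUtils.jsm");

    var SessionFileInternal = {
      // Resolved once the SessionWorker has been handed its Paths.
      _deferredInitialized: PromiseUtils.defer(),

      wipe() {
        // Waiting here is what prevents the "this.Paths is null" TypeError
        // thrown by the worker during early-startup sanitization.
        // SessionWorker is the PromiseWorker wrapper from the stack above.
        return this._deferredInitialized.promise.then(() =>
          SessionWorker.post("wipe"));
      },
    };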
gDevTools.jsm isn't properly reloadable, as JSMs are meant to be long-lived singletons.
It also mixes browser-related code (gDevToolsBrowser) with more generic code (gDevTools).
This move will help with hot reloading of the devtools codebase while improving the
readability of one of our core pieces of code (devtools startup and browser hooks).
Extracted a shared helper to open the browser context menu and choose
the 'inspect element' item. This helper works with e10s.
Adapted it a little bit so it waits for the right events in order to
make sure the inspector is ready.
This also involved modifying inspectNode in nsContextMenu.js to make it
wait until the node was selected and the node was ready.
Used this in browser_inspector_initialization.js,
browser_rules_content_02.js and browser_markup_keybindings_04.js.
Also removed a now useless inspector-updated event that was triggered from
the animation-inspector panel in some situations. It was left over from a
long time ago and no longer served any purpose.
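A rough sketch of the shared helper (assumed names; the real one
additionally waits for the inspector-ready and node-selected events
mentioned above):

    // Opens the content-area context menu on the node matching `selector`
    // inside `browser` (the click is synthesized in the content process, so
    // this also works under e10s), then picks "Inspect Element".
    function* openContextMenuAndInspect(browser, selector) {
      let contextMenu = document.getElementById("contentAreaContextMenu");
      let popupShown = BrowserTestUtils.waitForEvent(contextMenu, "popupshown");
      yield BrowserTestUtils.synthesizeMouseAtCenter(
        selector, {type: "contextmenu", button: 2}, browser);
      yield popupShown;

      let popupHidden = BrowserTestUtils.waitForEvent(contextMenu, "popuphidden");
      document.getElementById("context-inspect").doCommand();
      contextMenu.hidePopup();
      yield popupHidden;
    }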
We assume that the total number of cycles spent executing JS code
during an event is equal to the number of cycles in the "top group",
i.e. a group to which everything belongs. While this is true in
theory, RDTSC is actually non-monotonic, so we can end up with fewer
cycles reported for the top group than for some groups whose execution
was actually shorter. When we end up in this situation, groups with
more cycles than the top group will be reported as using more CPU than
was actually used.
This patch fixes the situation by proxying RDTSC behind a trivial API
that ensures that values are monotonic during each tick.
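A conceptual sketch of that clamp, in JavaScript for illustration (the
actual patch wraps the C++ RDTSC reads; the names are assumptions):

    // Wrap a raw counter so that, within a tick, returned values never
    // decrease: a read that goes backwards is clamped to the largest value
    // already handed out.
    function makeMonotonicCounter(readRaw) {
      let floor = 0;
      return {
        startTick() {
          floor = readRaw();
        },
        read() {
          let raw = readRaw();
          if (raw < floor) {
            raw = floor; // clamp non-monotonic reads
          }
          floor = raw;
          return raw;
        },
      };
    }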
The current API of AddonWatcher only supports a single callback. That's pretty unfriendly to testing, debugging, add-ons, etc.
This patch replaces the mechanism with a notification through the nsIObserverService.
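A hedged sketch of what a consumer could look like after this change (the
topic string and payload are assumptions):

    // Any number of consumers (tests, debugging tools, add-ons) can listen
    // via the observer service instead of competing for the single callback.
    const Cu = Components.utils;
    Cu.import("resource://gre/modules/Services.jsm");

    const TOPIC = "addon-watcher-detected-slow-addon"; // assumed topic name

    Services.obs.addObserver({
      observe(subject, topic, data) {
        // `data` is expected to carry the offending add-on's id.
        dump(`Slow add-on reported: ${data}\n`);
      },
    }, TOPIC, false);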