Save the time at which the tracker adds a new changed ID and use it to compare the age of the record on the server with the age of the local change when deciding whether the server or the client wins. Fix up various direct uses of changedIDs to go through the API, and make the save-to-disk lazy to avoid excessive writes. Add a test to make sure the timestamp addChangedID records only ever increases.
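A minimal sketch of the idea, assuming a tracker whose changedIDs map stores a timestamp per GUID (the helper names are illustrative, not the actual Weave API):

    // Tracker: remember *when* each ID was marked as changed.
    let tracker = {
      changedIDs: {},   // guid -> timestamp in seconds
      addChangedID: function (id, when) {
        when = when || Date.now() / 1000;
        // Timestamps only ever move forward, matching the new test.
        if (!(id in this.changedIDs) || this.changedIDs[id] < when)
          this.changedIDs[id] = when;
        this._lazySave();   // coalesce writes instead of hitting disk every time
      },
      _lazySave: function () { /* schedule one delayed write-to-disk */ },
    };

    // Conflict resolution: compare the server record's age to the local change's age.
    function serverWins(serverModified, guid) {
      let localChanged = tracker.changedIDs[guid] || 0;
      return serverModified > localChanged;   // whichever side changed later wins
    }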
Write a FormWrapper that knows about GUIDs and gets/sets them in moz_formhistory as needed. It lazily adds the columns on failure and lazily generates GUIDs for entries that are missing one. Don't eagerly create a sha1-to-formItem mapping -- in fact, don't create it at all, so empty syncs will be much faster too.
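A sketch of the lazy-column behavior against a generic SQLite handle; queryOne, execute, and makeGUID are hypothetical helpers standing in for the real storage calls:

    function getGUID(db, name, value) {
      let sql = "SELECT guid FROM moz_formhistory WHERE fieldname = :name AND value = :value";
      let guid;
      try {
        guid = db.queryOne(sql, { name: name, value: value });
      } catch (ex) {
        // Lazily add the column on first failure, then retry.
        db.execute("ALTER TABLE moz_formhistory ADD COLUMN guid TEXT");
        guid = db.queryOne(sql, { name: name, value: value });
      }
      if (!guid) {
        // Lazily generate a GUID for an entry that doesn't have one yet.
        guid = makeGUID();
        db.execute("UPDATE moz_formhistory SET guid = :guid " +
                   "WHERE fieldname = :name AND value = :value",
                   { guid: guid, name: name, value: value });
      }
      return guid;
    }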
Add an engines object to meta/global to track the version and syncID for each engine. If the server is outdated, wipe the data and set a new version and syncID. If the client is outdated, ask for an upgrade. Differing syncIDs cause a reupload. All engines currently default to version 1.
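Roughly, the handshake against meta/global looks like this (the method names on the engine are illustrative):

    const ENGINE_VERSION = 1;   // all engines start at the default version 1

    function checkEngineMeta(engine, metaEngines) {
      let meta = metaEngines[engine.name];
      if (!meta || meta.version < ENGINE_VERSION) {
        // Server is outdated: wipe its data and record a fresh version/syncID.
        engine.wipeServer();
        metaEngines[engine.name] = { version: ENGINE_VERSION, syncID: engine.resetSyncID() };
      } else if (meta.version > ENGINE_VERSION) {
        // Client is outdated: bail out and ask the user to upgrade.
        throw "needs-upgrade";
      } else if (meta.syncID != engine.syncID) {
        // Same version, different syncID: adopt the server's and reupload everything.
        engine.syncID = meta.syncID;
        engine.resetLastSync();
      }
    }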
Inline various _init calls and invoke the parent's init with <Super>.call(this, args...). Add get/set sugar where it was missing, e.g., meta.keyring. Also simplify crypto record creation by setting cleartext in the parent.
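The shape of the change, with stub parent types standing in for the real record modules:

    // Stub parent for illustration; the real base type lives in the records module.
    function WBORecord(uri) { this.uri = uri; this.payload = {}; }

    // Inline the old _init: just call the parent's constructor directly.
    function MetaRecord(uri) { WBORecord.call(this, uri); }
    MetaRecord.prototype = Object.create(WBORecord.prototype);

    // get/set sugar for a payload property, e.g. meta.keyring:
    Object.defineProperty(MetaRecord.prototype, "keyring", {
      get: function () { return this.payload.keyring; },
      set: function (value) { this.payload.keyring = value; },
    });

    // Crypto records: set cleartext once in the parent instead of in every subclass.
    function CryptoWrapper(uri) {
      WBORecord.call(this, uri);   // <Super>.call(this, args...)
      this.cleartext = {};
    }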
Include the URI on success/fail requests and log onStartRequest only at trace level. Switch various debug messages to trace and remove the Log4Moz import from fx-weave-overlay and generic-change. Drop the rootLogger to Debug so it doesn't log trace messages from loggers that haven't been explicitly preffed.
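In log4moz terms the level setup amounts to something like this sketch (the logger name and URI are placeholders):

    let root = Log4Moz.repository.rootLogger;
    root.level = Log4Moz.Level.Debug;   // unpreffed loggers inherit Debug, so trace is dropped

    let log = Log4Moz.repository.getLogger("Net.Resource");
    log.level = Log4Moz.Level.Trace;    // an explicitly preffed logger still traces

    let uri = "https://example.com/path";   // placeholder
    log.trace("onStartRequest: " + uri);    // per-request noise stays at trace
    log.debug("request success: " + uri);   // success/fail messages carry the URI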
Limit the first fetch of new items to a total fetch count and subtract the number of items processed. Use the difference to keep fetching more items from the backlog in chunks.
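A sketch of the budgeting, with illustrative numbers and hypothetical fetch callbacks:

    const FETCH_BUDGET = 500, CHUNK = 50;   // illustrative limits

    function fetchPhase(fetchNew, fetchByIDs, backlog) {
      let processed = fetchNew(FETCH_BUDGET);    // capped first fetch of new items
      let remaining = FETCH_BUDGET - processed;  // leftover budget...
      while (remaining > 0 && backlog.length > 0) {
        // ...is spent on backlog chunks until it runs out.
        let ids = backlog.splice(0, Math.min(CHUNK, remaining));
        remaining -= fetchByIDs(ids);
      }
    }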
Allow each engine to provide a custom Collection object, and have History provide a collection that filters out certain data. This is inefficient because we have to create and encrypt each record before we can filter it out.
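One way to picture the custom collection, with an abstract predicate standing in for History's actual filter:

    // A collection whose record hook drops unwanted records (names illustrative).
    let historyCollection = {
      _records: [],
      pushRecord: function (record) {
        // By the time we get here the record has already been created and
        // encrypted, which is exactly why this layer is a wasteful place to filter.
        if (!this._shouldSkip(record))
          this._records.push(record);
      },
      _shouldSkip: function (record) {
        return false;   // engine-specific predicate goes here
      },
    };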
Rework server/user/misc prefs to allow both relative paths and full URLs when generating API paths. Cache string properties of generated URLs on the storageAPI object instead of using dynamic getters.
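A sketch of the resolution rule plus the caching, with made-up pref values:

    // A pref may hold a full URL or a path relative to the server pref.
    function resolveURL(serverURL, value) {
      if (/^https?:/.test(value))
        return value;   // full URLs pass through untouched
      return serverURL.replace(/\/+$/, "") + "/" + value.replace(/^\/+/, "");
    }

    // Compute the strings once and hang them off storageAPI, no dynamic getters:
    let storageAPI = {};
    storageAPI.userBaseURL = resolveURL("https://example.com", "1.0/username/");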
Only fetch a limited number of items on first/update syncs, and if we get back that many, ask the server for the IDs to fetch later. Also, on every download, process some of the backlog and save the list of GUIDs to disk as JSON for cross-session support.
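The persistence side might look like this sketch; readJSONFile/writeJSONFile are hypothetical file helpers:

    function saveToFetch(engine) {
      writeJSONFile("toFetch/" + engine.name + ".json", engine.toFetch);
    }
    function loadToFetch(engine) {
      engine.toFetch = readJSONFile("toFetch/" + engine.name + ".json") || [];
    }

    // After a capped fetch: a full batch means there may be more on the server,
    // so grab just the IDs and remember them for later downloads.
    function afterFetch(gotCount, limit, fetchAllIDs, engine) {
      if (gotCount == limit) {
        engine.toFetch = fetchAllIDs();
        saveToFetch(engine);
      }
    }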
Get rid of Filters and automatically JSON.stringify PUT/POST data that isn't already a string, so plain Records can be passed to PUT and POST. This leverages a Record's toJSON to provide an object that can be serialized. Fix up client record serialize/deserialize to still escape/unescape non-ASCII.
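The mechanism is plain JSON.stringify semantics, sketched here with a hypothetical transport method:

    function put(resource, data) {
      if (typeof data != "string")
        data = JSON.stringify(data);   // calls data.toJSON() when it exists
      return resource._request("PUT", data);   // assumed transport method
    }

    // A plain Record can now be passed straight in:
    let record = {
      id: "abc123",
      payload: "ciphertext goes here",
      toJSON: function () { return { id: this.id, payload: this.payload }; },
    };
    // put(someResource, record) sends {"id":"abc123","payload":"ciphertext goes here"}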
Instrument all functions that are part of the sync engine (except some constructors, etc.) and generate statistics (min/max/sum/num/avg) for each. For now, with the default appender, implement toString to report just the total time.
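A sketch of the wrapper; the names are illustrative rather than the actual instrumentation module:

    let stats = {};

    function instrument(obj, name) {
      let orig = obj[name];
      obj[name] = function () {
        let start = Date.now();
        try {
          return orig.apply(this, arguments);
        } finally {
          let ms = Date.now() - start;
          let s = stats[name] || (stats[name] = { min: Infinity, max: 0, sum: 0, num: 0 });
          s.min = Math.min(s.min, ms);
          s.max = Math.max(s.max, ms);
          s.sum += ms;
          s.num++;   // avg is sum / num
        }
      };
    }

    // With the default appender, toString just reports the total time.
    stats.toString = function () {
      let total = 0;
      for (let s of Object.values(stats))
        total += s.sum || 0;   // skips this toString itself (no .sum)
      return "total: " + total + "ms";
    };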