#rb Phillip.Kavan
Add missing generate public hash entries to ArchiveStackTrace
#fyi Francis.Hurteau
#preflight 626c692cb046e6ecc338080e
#lockdown JeanFrancois.Dube
[CL 19988606 by Marc Audy in ue5-main branch]
#rb trivial
#rnx
#preflight 626b9cbb53253f874b90efd3
- Although we don't really want to write out the virtualization flag, removing it during ::BuildFlagsForSerialization breaks the later check of UpdatedFlags that determines whether the payload is virtualized and whether it should be written to disk.
- Removing the flag means we re-hydrate virtualized payloads when saving to disk, or, in the case of the editor domain, try to store them as local payloads in the editor domain trailer, which is not supported (if we did have a newly generated payload in an editor domain save, it should go down the legacy path for now).
- The fix is probably quite simple, but given the testing it would require, it is better to pull out the line of code and allow packages to serialize the virtualization flag for now, until a better fix is ready.
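A minimal sketch of the interaction described above, using illustrative names rather than the real UE types: if the flag is stripped while building the serialization flags, the later check on the updated flags concludes the payload is not virtualized and writes it to disk, re-hydrating data that should have stayed virtualized.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flags; the real bulkdata flag enum is larger.
enum EPayloadFlags : uint32_t {
    None          = 0,
    IsVirtualized = 1u << 0,
};

// If the serialization step strips IsVirtualized here...
inline uint32_t BuildFlagsForSerialization(uint32_t Flags, bool bStripVirtualized) {
    return bStripVirtualized ? (Flags & ~IsVirtualized) : Flags;
}

// ...then this later check sees a non-virtualized payload and decides the
// payload data must be written out to disk.
inline bool ShouldWritePayloadToDisk(uint32_t UpdatedFlags) {
    return (UpdatedFlags & IsVirtualized) == 0;
}
```

This only models the control flow; the real code paths also involve the package trailer and the editor domain save path.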
[CL 19976045 by paul chipchase in ue5-main branch]
#rb PJ.Kack
#rnx
#preflight 626a7ef2485a2fed3a2e04d7
- Note that these problems were not causing crashes, but they did cause errors to be logged about invalid formats, requesting that the packages be re-saved.
- Most of the problem comes from the code paths being too complicated; now that things are locked down, hopefully we can start pruning this soon and make it easier to maintain in the future.
### Problem
- The problem comes from certain package setups that allow duplicate payload entries to exist, where one entry is local or referenced and another version is virtualized. If a trailer is created from this data, the virtualized entry is discarded (the lookup table is a map) and we revert to using the non-virtualized payload instead. Ideally we'd discard the local/referenced payloads in favour of the virtualized one to save space.
-- By itself this bug would waste disk space but cause no other problems. However, we were also writing out the virtualized flag, so when the bulkdata is next loaded it carries the virtualized flag, but the trailer then reports that it is not virtualized, which triggers some asserts I added to find badly formed data.
- How a package gets duplicate entries like this is somewhat complicated, involving duplicating objects from other (already virtualized) packages during load and then saving them (found in packages with Niagara assets at least).
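The duplicate-entry problem can be modelled with a hypothetical lookup table keyed by payload id (the type names below are illustrative, not the real UE types): map insertion keeps the first entry for a key, so if the local/referenced entries are inserted first, the virtualized duplicate is silently dropped.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative model of the trailer lookup table: entries are keyed by payload
// id, so a local entry and a virtualized entry with the same id cannot coexist.
enum class EPayloadStorage { Local, Referenced, Virtualized };
using FPayloadId = std::string;

// std::map::insert keeps the first entry for a given key; any later entry with
// a duplicate id is discarded, which is how the virtualized copy gets lost.
inline std::map<FPayloadId, EPayloadStorage> BuildLookupTable(
    const std::vector<std::pair<FPayloadId, EPayloadStorage>>& Entries) {
    std::map<FPayloadId, EPayloadStorage> Table;
    for (const auto& Entry : Entries) {
        Table.insert(Entry); // duplicate ids are silently dropped here
    }
    return Table;
}
```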
### Fix
- When creating a trailer from a trailer build, we first find any local and/or referenced entries and check whether duplicates of them exist among the virtualized payload entries. If so, we remove the local/referenced version in favour of the virtualized one.
-- Ideally we'd do this during the serialization of exports, as it might let us skip loading some payloads for save entirely; however, we don't know all the entries until export serialization has completed, so this is the only point where we can safely search for duplicates.
- Added a way to get the full name of the owning asset, including package name + asset, when asking the user to re-save. This makes debugging the problems easier.
- Add a new check when loading to find payloads that were saved to a trailer with the virtualization flag set. If we find one we warn the user that they should re-save the package, and we remove the flag so that no further problems occur.
- Demoted the old check for bad formats to a warning as we don't need to fail builds because of it.
-- Additionally we now only give the warning if we detect that we are loading from a package, as being virtualized and not in a package trailer would be valid when loading from other sources.
- Added comments suggesting that both checks asking the user to re-save the package be removed after 5.2; in theory we should only ever see these problems internally at Epic, as it is unlikely that external users will have enabled the system.
- When saving a virtualized payload to the trailer, make sure to remove the virtualized flag; the virtualized state should be picked up on load by polling the trailer for the payload status and should not be baked into the export data.
-- Hypothetically this will allow us to re-hydrate the package without re-saving it, by manipulating the trailer.
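The dedup step described in the fix can be sketched as follows (illustrative types, not the real trailer builder API): any local/referenced entry whose payload id also appears among the virtualized entries is removed, so the virtualized copy is the one that survives into the trailer.

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>
#include <vector>

using FPayloadId = std::string; // placeholder for the real payload identifier

// Drop any local/referenced entry whose id also appears in the virtualized
// set, preferring the virtualized copy to save space in the package.
inline std::vector<FPayloadId> RemoveVirtualizedDuplicates(
    std::vector<FPayloadId> LocalOrReferenced,
    const std::set<FPayloadId>& Virtualized) {
    LocalOrReferenced.erase(
        std::remove_if(LocalOrReferenced.begin(), LocalOrReferenced.end(),
                       [&Virtualized](const FPayloadId& Id) {
                           return Virtualized.count(Id) != 0;
                       }),
        LocalOrReferenced.end());
    return LocalOrReferenced;
}
```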
[CL 19963043 by paul chipchase in ue5-main branch]
#rb Matt.Peters
#rnx
#preflight 626a9f386461dd769fea25df
- If a payload was virtualized it was incorrectly being marked as requiring the legacy serialization path.
- This would technically work but was triggering a log warning about the bulkdata being stored in the wrong format (since it was in the legacy format) as we do not expect bulkdata with virtualized payloads to use the legacy path.
- We need to change the editor domain version to clear out any bad entries.
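The corrected routing can be reduced to a tiny predicate (names are illustrative, not the real UE code): a virtualized payload should never be marked as requiring the legacy serialization path; only non-virtualized payloads without trailer support still need it.

```cpp
#include <cassert>

// Hypothetical predicate: virtualized payloads always go through the trailer
// path, so only a non-virtualized payload lacking trailer support falls back
// to legacy serialization.
inline bool RequiresLegacySerialization(bool bIsVirtualized, bool bTrailerSupported) {
    return !bIsVirtualized && !bTrailerSupported;
}
```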
[CL 19961539 by paul chipchase in ue5-main branch]
Remove bValid from FMetaData and FData return structs; it is redundant with the RawHash/Buffer data already present in those structs.
#preflight 62680b42430b9997ebe0a46d
#rb Devin.Doucette
#rnx
[CL 19923039 by Matt Peters in ue5-main branch]
Do not pop when peeking flush requests, they are removed from the GT
#rb CarlMagnus.Nordin, PJ.Kack
#jira UE-148465
#preflight 625d71b848670f31a61e9e95
[CL 19832325 by Francis Hurteau in ue5-main branch]
#rb trivial
#rnx
#preflight 624d76f24e0a8b95d08a4494
- Someone was complaining about seeing a large number of long calls to FEditorBulkData::DetachFromDisk. Although from context he was able to determine that the problem was caused by renaming a package, this was not obvious from the provided trace.
-- Added some trace scopes to UObject::Rename, FLinkerManager::ResetLoaders and around the decompression of payloads when detaching an editorbulkdata. These would have made it very obvious where the time was going.
-- Added two new counters to FEditorBulkData to track the amount of data (in bytes) either loaded from disk or pulled from a virtualized backend. This would have shown how much data was being loaded during the rename (in this case nearly 32 GB...)
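The two counters can be sketched as cumulative atomic byte totals with hypothetical hook points (the real names and trace macros differ): one for payload bytes loaded from disk, one for bytes pulled from a virtualized backend.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative counters: cumulative payload bytes loaded from disk vs. pulled
// from a virtualized backend, so a trace can show how much data an operation
// such as a package rename actually touched.
std::atomic<uint64_t> GBytesLoadedFromDisk{0};
std::atomic<uint64_t> GBytesPulledFromVirtualization{0};

// Hypothetical hook called after a payload is read from the package on disk.
inline void OnPayloadLoadedFromDisk(uint64_t Bytes) {
    GBytesLoadedFromDisk.fetch_add(Bytes, std::memory_order_relaxed);
}

// Hypothetical hook called after a payload is fetched from a virtualized backend.
inline void OnPayloadPulledFromBackend(uint64_t Bytes) {
    GBytesPulledFromVirtualization.fetch_add(Bytes, std::memory_order_relaxed);
}
```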
[CL 19645373 by paul chipchase in ue5-main branch]
Automatically propagate an explicit package load request's association to the package's dependencies, which allows a flush request to process only related packages while skipping others
This should also allow FlushAsyncLoading to become re-entrant down the line
Suppress the transaction system while ticking async loading on the main thread
Added an editor config to enable the async loading thread. Currently non-functional
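The association propagation can be sketched as a transitive tag over the package dependency graph (illustrative types; the real async-loading code is considerably more involved): each package reached from an explicitly requested root is tagged with the request id, so a flush for that request can skip untagged packages.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

using FPackage = std::string; // placeholder for a package name/id

// Tag RequestId onto Root and, transitively, onto all of its dependencies.
// A flush for RequestId then only needs to process packages carrying the tag.
inline void PropagateRequestId(const FPackage& Root, int RequestId,
                               const std::map<FPackage, std::vector<FPackage>>& Deps,
                               std::map<FPackage, std::set<int>>& Association) {
    std::vector<FPackage> Stack{Root};
    while (!Stack.empty()) {
        FPackage Pkg = Stack.back();
        Stack.pop_back();
        if (!Association[Pkg].insert(RequestId).second) {
            continue; // already tagged; its dependencies were handled earlier
        }
        auto It = Deps.find(Pkg);
        if (It != Deps.end()) {
            for (const FPackage& Dep : It->second) {
                Stack.push_back(Dep);
            }
        }
    }
}
```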
#rb CarlMagnus.Nordin
#preflight 623e0081a67e4e1ab70598d7
[CL 19626072 by Francis Hurteau in ue5-main branch]
Add iostore chunk hashes to the asset registry during cook.
Remove the old hashing code, which was extensive. This moves the async writes into UE::Task from the old AsyncWorkSequence().
#rb Matt.Peters
#rb Jeff.Roberts
#preflight 624b24a5f73c316f68303946
[CL 19611812 by Dan Thompson in ue5-main branch]
331s -> 0.28s to consolidate objects when cloning a small shooting range map
#rb Robert.Manuszewski,Chris.Gagnon,Francis.Hurteau,Jamie.Dale,JeanLuc.Corenthin,Phillip.Kavan
#ROBOMERGE-AUTHOR: johan.torp
#ROBOMERGE-SOURCE: CL 19507095 via CL 19507103 via CL 19507108 via CL 19507110
#ROBOMERGE-BOT: UE5 (Release-Engine-Staging -> Main) (v937-19513599)
[CL 19514683 by johan torp in ue5-main branch]