#rb Per.Larsson, Pere.Rifa
#jira UE-222974
- To enable the feature, call FBulkData::SetCookedIndex with an FBulkDataCookedIndex set to a value between 1 and 255. Zero is currently reserved as the default/off state and can be quickly accessed via FBulkDataCookedIndex::Default (see the sketch after this list).
-- Note that we might change the default value in the future; the main reason to keep it as zero for now is that FChunkId values will remain unchanged for bulkdata files not using the feature.
- When a bulkdata object has a cooked index, it will be written to a file named with that value in the format <packagename>.<CookedIndex>.<extension>, so a normal bulkdata payload with a cooked index of 5 would end up writing to <packagename>.005.ubulk.
-- This allows the calling systems to control which bulkdata payloads go to which sub files.
- We currently do not support memory-mapped payloads or payloads with the duplicate non-optional flag. Support and testing for these will be added later.
- Tested: saving/editing/loading packages with bulkdata in the editor (vector fields); build/cook/run of normal builds; build/cook/run with the feature enabled; running the new code on data produced by unmodified code; and running an unmodified exe on data generated with the new code.
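A minimal usage sketch (illustrative only: the notes above name SetCookedIndex and FBulkDataCookedIndex::Default; constructing an FBulkDataCookedIndex directly from a raw value is an assumption here):

```cpp
// Illustrative sketch only; constructing FBulkDataCookedIndex from a raw
// value is an assumption, the notes above only guarantee SetCookedIndex
// and FBulkDataCookedIndex::Default.
#include "Serialization/BulkData.h"

void RoutePayloadToSubFile(FBulkData& BulkData, bool bUseSubFile)
{
    if (bUseSubFile)
    {
        // Any value in 1-255 selects a sub file; an index of 5 cooks to
        // <packagename>.005.ubulk as described above.
        BulkData.SetCookedIndex(FBulkDataCookedIndex(5));
    }
    else
    {
        // Zero/default keeps the payload in the regular <packagename>.ubulk,
        // leaving FChunkId values unchanged for data not using the feature.
        BulkData.SetCookedIndex(FBulkDataCookedIndex::Default);
    }
}
```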
### IPackageResourceManager
- For most methods that take EPackageSegment, added overloads that also take an FBulkDataCookedIndex, and deprecated the older versions.
- Not all methods have been ported over, just the ones I could test; the rest will need the same treatment at some point.
### FLinkerSave
- Now stores each set of bulkdata, optional bulkdata, and memory-mapped payloads in separate archives, one per cooked index.
- Added a method ::HasCookedIndexBulkData that returns whether any of the normal bulkdata payloads contain a non-default cooked index. This is used for some paranoid checks when saving packages to the workspace domain.
[CL 36754477 by paul chipchase in 5.5 branch]
#rb Per.Larsson
#rnx
- The problem occurs because, before we kick off the job that serializes a compressed entry to the DDC (so that we don't have to run compression on the same data each time the container is built), we were also kicking off a job to potentially encrypt the compressed data. It was possible for the encryption job to start modifying the data before we had serialized a copy for the DDC job, meaning that we'd write incorrect data to the DDC. The next time the container was built, that bad data might be retrieved and the container would become corrupted.
- We now make sure that we finish setting up the DDC job before we kick off the encryption job (sketched below).
- NOTE: The key for the compressed DDC entries has been changed to flush out the corrupted data already present.
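A simplified illustration of the race and the fix (the names and std threading here are invented for illustration; this is not the actual IoStore writer code):

```cpp
#include <cstdint>
#include <future>
#include <utility>
#include <vector>

void WriteEntry(std::vector<uint8_t>& CompressedData)
{
    // Fix: snapshot the compressed bytes for the DDC put *before* any job
    // that mutates the buffer is allowed to start.
    std::vector<uint8_t> DdcCopy = CompressedData;
    auto DdcPut = std::async(std::launch::async, [Copy = std::move(DdcCopy)]
    {
        // ... put Copy into the cache ...
    });

    // Only now is it safe to kick off encryption, which modifies the buffer
    // in place; previously both jobs raced on CompressedData.
    auto Encrypt = std::async(std::launch::async, [&CompressedData]
    {
        // ... encrypt CompressedData in place ...
    });

    Encrypt.wait();
    DdcPut.wait();
}
```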
[CL 35132937 by paul chipchase in ue5-main branch]
Remove some functions that were deprecated in 5.1.
In the AssetRegistryState implementation, tweak some function-local variables to work with the upcoming code.
#rnx
#rb Francis.Hurteau
[CL 35122649 by matt peters in ue5-main branch]
The HandleDDCGetResult callback released the FIoStoreWriteQueueEntry too early, allowing it to be completed and deleted before it was used to update the dispatcher queue size tracking.
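A hypothetical sketch of the resulting pattern (all names invented): copy whatever the tracking code needs out of the entry before releasing it, since releasing can let another thread complete and delete the entry.

```cpp
#include <atomic>
#include <cstdint>
#include <memory>

struct FQueueEntry { uint64_t DiskSize = 0; };  // invented stand-in type

std::atomic<int64_t> GQueuedBytes{0};  // dispatcher queue size tracking

void HandleGetResult(std::shared_ptr<FQueueEntry>& Entry)
{
    // Read the value first; after the reference is dropped, the entry can
    // be completed and deleted on another thread at any time.
    const int64_t DiskSize = static_cast<int64_t>(Entry->DiskSize);
    Entry.reset();               // release the entry
    GQueuedBytes -= DiskSize;    // safe: uses the copied value, not Entry
}
```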
#rb per.larsson
#rnx
[CL 34605028 by pj kack in ue5-main branch]
Key changes:
- Removed the use of cookedfiles.manifest
- Changed IoStore mode of UnrealPak to be capable of getting zenserver launch data from either a package store manifest (cbobject, metadata) OR a project store marker file (json); see the sketch after this list
- Ensured that the UAT and IoStore mode of UnrealPak can launch zenserver reliably by passing along the SponsorProcessId when launching zenserver
- Ensured that shader archives (and their accompanying json metadata files) can be read either as loose files on disk or directly from zenserver if the data for them is internal to zenserver (i.e. they have a valid chunk id)
Remaining work:
- Pak mode of UnrealPak must be able to launch zenserver and pull data from it.
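A rough sketch of the launch-data source selection described above (the manifest file name and all helper names are assumptions; only ue.projectstore is named in these notes):

```cpp
#include <filesystem>
#include <optional>
#include <string>

namespace fs = std::filesystem;

struct FZenLaunchDataSource { std::string Kind; fs::path File; };

// Prefer the package store manifest; fall back to the ue.projectstore JSON
// marker file when no manifest is present.
std::optional<FZenLaunchDataSource> FindZenLaunchData(const fs::path& CookedDir)
{
    // Assumed manifest file name, for illustration only.
    if (fs::path Manifest = CookedDir / "packagestore.manifest"; fs::exists(Manifest))
    {
        return FZenLaunchDataSource{"package store manifest (cbobject)", Manifest};
    }
    if (fs::path Marker = CookedDir / "ue.projectstore"; fs::exists(Marker))
    {
        return FZenLaunchDataSource{"project store marker (json)", Marker};
    }
    return std::nullopt;  // neither source present; cannot launch zenserver
}
```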
#rb PJ.Kack
[CL 34565129 by zousar shaker in ue5-main branch]
[FYI] Zousar.Shaker
Original CL Desc
-----------------------------------------------------------------
Incremental step towards being able to stage both a pak build and a nopak (streaming) build from a snapshot stored entirely in zenserver (no loose files on the filesystem except a ue.projectstore file).
Key changes:
- Removed the use of cookedfiles.manifest
- Changed IoStore mode of UnrealPak to be capable of getting zenserver launch data from either a package store manifest (cbobject, metadata) OR a project store marker file (json)
- Ensured that the UAT and IoStore mode of UnrealPak can launch zenserver reliably by passing along the SponsorProcessId when launching zenserver
- Ensured that shader archives (and their accompanying json metadata files) can be read either as loose files on disk or directly from zenserver if the data for them is internal to zenserver (i.e. they have a valid chunk id)
Remaining work:
- Pak mode of UnrealPak must be able to launch zenserver and pull data from it.
#rb PJ.Kack
[CL 34498668 by zousar shaker in ue5-main branch]
Key changes:
- Removed the use of cookedfiles.manifest
- Changed IoStore mode of UnrealPak to be capable of getting zenserver launch data from either a package store manifest (cbobject, metadata) OR a project store marker file (json)
- Ensured that the UAT and IoStore mode of UnrealPak can launch zenserver reliably by passing along the SponsorProcessId when launching zenserver
- Ensured that shader archives (and their accompanying json metadata files) can be read either as loose files on disk or directly from zenserver if the data for them is internal to zenserver (i.e. they have a valid chunk id)
Remaining work:
- Pak mode of UnrealPak must be able to launch zenserver and pull data from it.
#rb PJ.Kack
[CL 34481636 by zousar shaker in ue5-main branch]
BatchGet with max 128 inflight requests (or ~1 GiB in total) in batches of 8 items (or ~16 MiB each); batching sketched after this list.
BatchPut with max 128 inflight requests (or ~256 MiB in total) in batches of 8 items (or ~1 MiB each).
Skip DDC for chunks smaller than CompressionMinBytesSaved (1 KiB by default).
Skip DDC for .umap files to avoid cache churn, since maps are known to cook non-deterministically.
Skip DDC for shaders, which use a different code path in UnrealPak as well as at runtime.
Use a new DDC2 cache key (that includes the CompressionBufferSize) and cache bucket.
Use TArray64/FMemoryWriter64 for serializing the data to support chunks bigger than 2 GiB.
Postpone allocation of compression buffers until the DDC get request completes and the size is known.
Reduce memory buffer limits to 2 GiB and 3 GiB again (earlier temporary bumps to 3 GiB and 4 GiB are not needed after recent task/retraction changes).
Add logging of the number of DDC hits and puts.
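A simplified sketch of the BatchGet batching policy (invented types and helpers, not the actual UnrealPak code): fill a batch until it reaches 8 items or ~16 MiB, and let the dispatcher keep at most 128 such requests (or ~1 GiB) inflight.

```cpp
#include <cstdint>
#include <vector>

struct FChunk { uint64_t Size = 0; };  // invented stand-in type

constexpr size_t   MaxItemsPerBatch = 8;
constexpr uint64_t MaxBytesPerBatch = 16ull << 20;  // ~16 MiB per batch

std::vector<std::vector<FChunk>> MakeBatches(const std::vector<FChunk>& Chunks)
{
    std::vector<std::vector<FChunk>> Batches;
    std::vector<FChunk> Current;
    uint64_t CurrentBytes = 0;
    for (const FChunk& Chunk : Chunks)
    {
        // Close the current batch once it would exceed either limit.
        if (!Current.empty() &&
            (Current.size() == MaxItemsPerBatch ||
             CurrentBytes + Chunk.Size > MaxBytesPerBatch))
        {
            Batches.push_back(std::move(Current));
            Current.clear();
            CurrentBytes = 0;
        }
        Current.push_back(Chunk);
        CurrentBytes += Chunk.Size;
    }
    if (!Current.empty())
    {
        Batches.push_back(std::move(Current));
    }
    return Batches;  // issued with at most 128 requests (~1 GiB) inflight
}
```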
#jira UE-204758
#rb paul.chipchase, Per.Larsson
#tests identical binary output
[CL 34451300 by pj kack in ue5-main branch]
Renamed FChunkBlock::Size to FChunkBlock::DiskSize.
Renamed FIoStoreWriteQueueEntry::CompressedSize to FIoStoreWriteQueueEntry::DiskSize.
#rb pj.kack
#tests identical binary output
#rnx
[CL 34306949 by pj kack in ue5-main branch]