#rb Per.Larsson
#rnx
- Moved the priming code to its own code files.
- Priming no longer uses FOnDemandIoBackend but manages its own requests.
- Bumped up the number of connections to the largest supported by FEventLoop (63) to speed up the priming as much as possible.
- Added additional cmdline options although -PrimeEndPoint is still required:
-- '-PrimeAll' will cause all available CDNs to be primed.
-- '-PrimeUrl=' will override the CDN url with the one provided.
-- '-List' will print all of the available CDN urls to the screen along with the average latency to each.
-- '-Help' prints help on the new commands to the screen.
- Fixed a minor bug where UnrealPak was returning non-zero for success and zero for error cases.
- Automatically call ::ResolveDeferredEndpoints as part of FDistributionEndpoints::Flush if needed. Failure to do so can leave the flush waiting forever on work that was never initiated.
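The resolve-before-flush pattern above can be sketched as follows. This is an illustrative stand-in, not the actual FDistributionEndpoints API; the class and method names are assumptions.

```cpp
#include <cassert>
#include <future>
#include <vector>

// Minimal sketch: a Flush() must kick off any deferred work before waiting
// on it, otherwise it waits forever on work that was never started.
class DeferredResolver
{
public:
    void Defer(int Endpoint) { Pending.push_back(Endpoint); }

    // Hypothetical equivalent of ResolveDeferredEndpoints: start the work.
    void ResolveDeferred()
    {
        for (int Endpoint : Pending)
        {
            Futures.push_back(std::async(std::launch::async, [Endpoint] { return Endpoint * 2; }));
        }
        Pending.clear();
    }

    // Flush waits for all outstanding work. If deferred endpoints were never
    // resolved, the waits below would cover work that was never initiated,
    // so we resolve first -- mirroring the fix described above.
    int Flush()
    {
        if (!Pending.empty())
        {
            ResolveDeferred();
        }
        int Sum = 0;
        for (auto& F : Futures) { Sum += F.get(); }
        Futures.clear();
        return Sum;
    }

private:
    std::vector<int> Pending;
    std::vector<std::future<int>> Futures;
};
```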
[CL 27514497 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
### New Command
- Run UnrealPak.exe with the commandline "-PrimeEndPoint=ABC" where ABC is a path to an IoStoreOnDemand.ini containing the info about the end point to prime.
- The command will mount the correct backend, connect to the end point, download the utoc files there and then attempt to download each chunk present so that they will be cached by the CDN endpoint.
- The command does not use the IAS filecache and should not affect the state of the local machine.
- We also make use of the existing IAS code, so this path can also be used for profiling, e.g. when IAS requests are saturated.
### Code Changes
- The code for parsing the ini file in FIoStoreOnDemandModule::StartupModule has been moved to common utility code so that it can be reused.
[CL 27026561 by paul chipchase in ue5-main branch]
- Added support for uploading containers marked as on demand directly from UnrealPak.exe
- Removed the C# based upload logic from automation scripts
- Removed the on demand I/O store writer, since this change reads chunks directly from container files instead of loose files
Example usage:
UnrealPak.exe -Upload=<ContainerPathOrWildcard> -ServiceUrl=<URL> -Bucket=<BucketName> -AccessKey=<Key> -SecretKey=<Key>
Read credentials from an AWS key chain file with the following command line:
-CredentialsFile=<Path> -CredentialsFileKeyName=<EntryName>
Specify -KeepUploadedContainers to prevent UnrealPak from deleting on demand containers after the upload has completed.
Specify -BucketPrefix=<Path> to upload chunks to a specific subdirectory within the bucket.
#rb none
[CL 26115169 by per larsson in ue5-main branch]
#rb Per.Larsson
#jira UE-186965
#preflight 6470b4a9f3773f755083fa2c
- The compressor stat arrays belonging to FOutputPakFile were being initialized right after async pakfile work has been scheduled. In rare cases the scheduled work could run before the initialization was completed and we would attempt to write out stats into uninitialized memory.
- We now make sure that the stat arrays are initialized before any work can be scheduled.
- I did a quick pass to check for similar patterns but couldn't find any.
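The initialize-before-schedule fix above boils down to the following ordering rule. This is a generic sketch with illustrative names, not the actual FOutputPakFile code: stats storage must be fully initialized before any worker that writes into it can be scheduled.

```cpp
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Stand-in for the pak file's compressor stat arrays.
struct OutputPakFile
{
    std::vector<long> CompressorStats;

    void RecordStat(std::size_t Index, long Value)
    {
        CompressorStats[Index] += Value; // worker-side write into the stats
    }
};

// Fixed pattern: initialize the stat storage first, THEN schedule the
// workers. Scheduling first would let a fast worker write into
// uninitialized (here: empty) storage.
long RunWithSafeOrdering(std::size_t NumStats)
{
    OutputPakFile Pak;
    Pak.CompressorStats.assign(NumStats, 0); // init BEFORE scheduling work

    std::vector<std::thread> Workers;
    for (std::size_t i = 0; i < NumStats; ++i)
    {
        Workers.emplace_back([&Pak, i] { Pak.RecordStat(i, 1); });
    }
    for (auto& W : Workers) { W.join(); }

    long Total = 0;
    for (long S : Pak.CompressorStats) { Total += S; }
    return Total;
}
```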
[CL 25661995 by paul chipchase in ue5-main branch]
Output is the same as when executing the -list command separately.
Running with -csv=<dir> will create one <pakchunkfilename>.pak.csv file per generated pak and one <pakchunkfilename>.utoc.csv file per generated container.
Running with -csv=<file> will create one single <file>.pak.csv for all paks and one <file>.utoc.csv for all containers.
Details:
The container output (the big set of data) is generated based on in-memory writer TOC data once the IoStoreWriters have been finalized.
The pak output (smaller set of data) is generated using the existing list command and loads the newly generated files from disk.
Clean up the remains of three obsolete csv output features for containers (-csvoutput, -writefinalorder and OUTPUT_CHUNKID_DIRECTORY).
Optimize container output generation by extending FIoStoreTocChunkInfo with OffsetOnDisk and NumCompressedBlocks filled directly by EnumerateChunks.
Add a ChunkIdToFileNameMap to FCookedPackageStore.
Use the FCookedPackageStore to output the actual package ids and package names (this is currently broken when running -list as a separate command).
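The -csv=&lt;dir&gt; vs -csv=&lt;file&gt; naming rule described above can be sketched like this. This is an illustrative helper, not the actual UnrealPak implementation; detecting a directory by trailing slash is an assumption for the sketch.

```cpp
#include <cassert>
#include <string>

// A directory argument yields one csv per generated pak; a file argument
// yields a single aggregated csv for all paks.
std::string MakePakCsvPath(const std::string& CsvArg, const std::string& PakChunkFileName)
{
    const bool bIsDirectory = !CsvArg.empty() && (CsvArg.back() == '/' || CsvArg.back() == '\\');
    if (bIsDirectory)
    {
        return CsvArg + PakChunkFileName + ".pak.csv"; // one <pakchunkfilename>.pak.csv per pak
    }
    return CsvArg + ".pak.csv"; // single <file>.pak.csv for all paks
}
```

The same rule applies to the .utoc.csv container output.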
#rb carlmagnus.nordin
#rnx
#preflight 63e5eacb98775169f8dea431
[CL 24118941 by pj kack in ue5-main branch]
Add support for diffing pak directories for pak diff.
Add support for different cryptokeys for pak diff and legacy iostore diff.
#jira UE-175144
#rb carlmagnus.nordin
#rnx
#preflight 63d9259e7a39a18021d4f997
[CL 23945218 by pj kack in ue5-main branch]
Let CollectFilesToAdd fall back to the old logic of using the source file name when FPakInputPair.Dest specifies a directory, i.e. ends with a "/".
Let ExtractFilesFromPak start using destination file names, just as ProcessCommandLine does when reading from response files, so that -Extract with -responsefile generates the same format as UAT with file names.
Remove dead code in ProcessPakFileSpecificCommandLine.
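The directory fallback described above amounts to the following. This is a sketch with illustrative names, not the engine code: if the destination ends with "/" it names a directory, so the source file name is appended.

```cpp
#include <cassert>
#include <string>

// Stand-in for the FPakInputPair.Dest fallback: a trailing "/" means the
// destination is a directory, so fall back to the source file name.
std::string ResolveDestPath(const std::string& Dest, const std::string& SourcePath)
{
    if (!Dest.empty() && Dest.back() == '/')
    {
        // Extract the source file name and append it to the directory.
        const std::size_t Slash = SourcePath.find_last_of("/\\");
        const std::string FileName =
            (Slash == std::string::npos) ? SourcePath : SourcePath.substr(Slash + 1);
        return Dest + FileName;
    }
    return Dest; // Dest already names a file
}
```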
#jira UE-171678
#rb carlmagnus.nordin
#lockdown mark.lintott
#preflight 6390679d1776b8c21cf42bac
[CL 23427029 by pj kack in ue5-main branch]
#rb CarlMagnus.Nordin, Per.Larsson
#rnx
#preflight 63496fcf1f6054a99fe8bd0c
- When parsing the paklist, any file with a -rehydrate flag will be considered for rehydration
- Add the virtualization module to the PakFileUtilities module
- Add the source control module to UnrealPak (the virtualization module should be taking care of this)
- While parsing the files to be included in the pak file, we record whether any of the files require rehydration; if so, this is noted in FPakCommandLineParameters::bRequiresRehydration and used to initialize the virtualization system.
-- We only need to initialize the system once, even if we detect that multiple pak files have files that need rehydration.
-- If no pak file needs rehydration we do not initialize the system and the virtualization module is never loaded.
- FOutputPakFileEntry::CompressedFileBuffer was made private with accessors to make the refactor easier.
- Now when we start to compress a file, we always load it entirely into memory (along with any potential padding needed). If we then detect that the file doesn't require compression, we use the in-memory buffer directly as the output; if compression is needed, we compress the buffer.
-- This means that there is only one place we load the file from disk, meaning only one place we need to insert the rehydration code. This load is where the rehydration occurs.
-- In the previous code we would load the file into memory and then retain this copy during the compression pass, so no additional memory should be used.
- At the moment, the buffer we get back from the rehydration pass is in the form of an FSharedBuffer, so for now we need to memcpy it into FCompressedFileBuffer::UncompressedBuffer, which is a waste of cpu cycles.
-- In a future change we should change UncompressedBuffer from TArray to FSharedBuffer to avoid this.
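The single-load-point pattern and the memcpy the notes call out can be sketched as follows. This is illustrative only: std::shared_ptr&lt;std::vector&gt; stands in for FSharedBuffer, and the rehydration step is a placeholder, not the virtualization system's API.

```cpp
#include <cassert>
#include <cstring>
#include <memory>
#include <vector>

using SharedBuffer = std::shared_ptr<std::vector<unsigned char>>;

// The one place the file contents are read; rehydration happens here,
// mirroring the design described above.
SharedBuffer LoadAndMaybeRehydrate(const std::vector<unsigned char>& RawBytes, bool bRehydrate)
{
    auto Buffer = std::make_shared<std::vector<unsigned char>>(RawBytes);
    if (bRehydrate)
    {
        // Placeholder for the virtualization system's rehydration pass.
    }
    return Buffer;
}

// The memcpy the change notes flag as wasted cycles: copying the shared
// buffer into a separately owned array (TArray in the engine). Switching
// the owned array to a shared buffer would make this copy unnecessary.
std::vector<unsigned char> CopyToUncompressedBuffer(const SharedBuffer& Shared)
{
    std::vector<unsigned char> Uncompressed(Shared->size());
    std::memcpy(Uncompressed.data(), Shared->data(), Shared->size());
    return Uncompressed;
}
```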
[CL 22595708 by paul chipchase in ue5-main branch]
#rb CarlMagnus.Nordin
#rnx
#preflight 6346ab768a0a7b2adc72cce4
### Problem
- The bug was originally introduced in CL 21791765
- Each file to be compressed calls FPakWriterContext::BeginCompress, which creates an FMemoryCompressor and assigns it to the file's FOutputPakFileEntry. The problem is that the work done by the FMemoryCompressor can complete and signal EndCompressionBarrier before the FMemoryCompressor has been assigned. This can happen because the work itself is very small, or because closing a file handle when FCompressedFileBuffer::BeginCompressFileToWorkingBuffer returns (which creates the FMemoryCompressor) stalls against system resources. This potentially allows FPakWriterContext::EndCompress to execute concurrently and incorrectly conclude that the file is not being compressed, since FOutputPakFileEntry::MemoryCompressor is still nullptr.
- Not only does this mean that some files end up in the pak uncompressed when they should be compressed, the results are also non-deterministic, which means we can end up with different binary output when paking the exact same data with the exact same settings.
### Solution
- The constructor of FMemoryCompressor no longer adds its work to the task graph system. Instead the tasks are only added after the compressor has been assigned to FOutputPakFileEntry.
- Replaced auto in ranged for loops with the type, as per recent coding standards discussions.
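The assign-then-start fix above can be sketched as two-phase construction. This is an illustrative stand-in, not the engine's FMemoryCompressor: the constructor no longer schedules the async work, so a completion can never be observed while the owner's pointer is still null.

```cpp
#include <cassert>
#include <future>

class MemoryCompressor
{
public:
    // Constructor deliberately does NOT schedule any work.
    explicit MemoryCompressor(int InPayload) : Payload(InPayload) {}

    // Work is only kicked off here, after the owner has stored the pointer.
    void Start()
    {
        Result = std::async(std::launch::async, [this] { return Payload * 2; });
    }

    int Wait() { return Result.get(); }

private:
    int Payload;
    std::future<int> Result;
};

struct OutputPakFileEntry
{
    MemoryCompressor* Compressor = nullptr;
};

int CompressEntry(OutputPakFileEntry& Entry, int Payload)
{
    MemoryCompressor Local(Payload);
    Entry.Compressor = &Local; // assign first...
    Local.Start();             // ...then schedule, closing the race window
    return Entry.Compressor->Wait();
}
```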
[CL 22505090 by paul chipchase in ue5-main branch]
Thread-safe delegates are not zero-initializable and so can't be used as global variables, because they are vulnerable to the static initialization order fiasco.
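A common workaround for objects that are unsafe as globals is a function-local static, constructed on first use. This is a generic sketch (the delegate type here is a stand-in, not the engine's), not necessarily the fix taken in this CL.

```cpp
#include <cassert>
#include <string>

// Stand-in for a type with a non-trivial constructor: a plain global of
// this type is not zero-initialized at load time, so code running during
// static initialization in another translation unit could touch it before
// it is constructed (the static initialization order fiasco).
struct ThreadSafeDelegate
{
    ThreadSafeDelegate() : State("constructed") {}
    std::string State;
};

// Meyers-singleton accessor: the instance is constructed on first call,
// guaranteeing it is ready whenever it is used.
ThreadSafeDelegate& GetGlobalDelegate()
{
    static ThreadSafeDelegate Instance;
    return Instance;
}
```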
#jira UE-163668
#rb steve.robb
#preflight 632462db3752284a3179ec02
[CL 22094531 by Andriy Tylychko in ue5-main branch]
* Added -cryptofile documentation line when starting unrealpak without parameters
#rb Erik.Knapik
#preflight skip
#ROBOMERGE-AUTHOR: henrik.karlsson
#ROBOMERGE-SOURCE: CL 21183712 via CL 21195199 via CL 21195379 via CL 21195491
#ROBOMERGE-BOT: UE5 (Release-Engine-Staging -> Main) (v972-20964824)
[CL 21196924 by henrik karlsson in ue5-main branch]