Mirror of https://github.com/izzy2lost/UnrealEngineUWP.git (synced 2026-03-26 18:15:20 -07:00)
#lockdown Nick.Penwarden

============================
MAJOR FEATURES & CHANGES
============================

Change 3494741 by Steve.Robb: Generated code size savings. #jira UE-43048
Change 3495484 by Steve.Robb: Fix for generated indices of static arrays when saving configs.
Change 3497926 by Robert.Manuszewski: Removed FPackageFileSummary's CompressedChunks array as it was no longer being used by anything.
Change 3498077 by Robert.Manuszewski: Only use the recursion guard in async loading code when the event driven loader is enabled.
Change 3498112 by Ben.Marsh: UBT: Respect the option to not create debug info in the Android toolchain. This option is already being respected by the compiler, but the linker adds debug info of its own.
Change 3500239 by Robert.Manuszewski: Made sure the Super Class token stream is also locked when assembling the Class token stream with the async loading thread enabled. This is to prevent race conditions when loading BP classes.
Change 3500395 by Steve.Robb: Extra codegen savings when not in hot reload.
Change 3501004 by Steve.Robb: EObjectFlags now have constexpr operators.
Change 3502079 by Ben.Marsh: UBT: Pad multi-line error messages so that they align under the prefix for the first line, and include the timestamp if necessary.
Change 3502527 by Steve.Robb: Fix for zero-sized array compile error in generated code when all functions are editor-only.
Change 3502542 by Ben.Marsh: UAT: Remove the custom source parameter from log functions, and add support for a customizable indent instead.
Change 3502868 by Steve.Robb: Workaround for inefficient generated code with stateless lambdas on Clang.
Change 3503550 by Steve.Robb: Another generated code lambda optimization.
Change 3503582 by Ben.Marsh: BuildGraph: Add support for nullable parameter types.
Change 3504424 by Steve.Robb: New AllOf, AnyOf and NoneOf algorithms.
Change 3504712 by Ben.Marsh: UAT: Less spammy log and error output from UAT.
* Callstacks for AutomationExceptions are suppressed by default but still included in the log (the path to the log is noted in console output with the message from the exception).
* Add a mechanism for any exceptions to be caught and rethrown with additional lines of context (CommandUtils.AddContext()) that will be appended to the error output by UAT. Avoids decaying the exception type or masking the inner exception message while still adding additional information.
* AggregateExceptions resulting from exceptions on child threads are automatically unwrapped (full details are still appended to the log).
* Name of the calling function is not included in console output by default, but still included in the log.
Change 3504808 by Ben.Marsh: UAT: Suppress P4 output when running a recursive instance of UAT.
Change 3505044 by Steve.Robb: Code generation improved for TCppClassType code.
Change 3505485 by Ben.Marsh: Fix deterministic cooking issue; always use a pseudo-random number stream when compiling a module.
Change 3505699 by Ben.Marsh: Plugins: Store the bEnabledByDefault flag exactly as it was read from disk rather than collapsing it to an absolute value based on the default for the location it was read from. This allows loading/saving plugin descriptors without any knowledge of whether they are game or engine plugins.
Change 3506055 by Ben.Marsh: UAT: Add a class to apply a log indent for the lifetime of an object (ScopedLogIndent), and use it to apply an indent to MegaXGE/ParallelExecutor output.
Change 3507745 by Robert.Manuszewski: Moved FSimpleObjectReferenceCollectorArchive and FSimpleObjectReferenceCollectorArchive to be internal archives used only by FReferenceCollector so that they are constructed only once per GC task instead of potentially multiple times per GC (as was the case with UDataTables and BlueprintGeneratedClasses).
Change 3507911 by Ben.Marsh: Plugins: Minor changes to plugin descriptors.
* Add a distinct setting for an unspecified EnabledByDefault setting in plugin descriptors.
* Add a function to IPlugin to determine the effective EnabledByDefault setting, based on where the plugin was loaded from.
Change 3508669 by Ben.Marsh: EC: Parse multi-line messages from UBT and UAT.
Change 3508691 by Ben.Marsh: Fix double-spacing of cook stats.
Change 3509245 by Steve.Robb: UHT makefiles removed. Flag audit removed.
Change 3509275 by Steve.Robb: Fix for mismatched stat categories in AudioMixer. #jira UE-46129
Change 3509289 by Robert.Manuszewski: The Custom Version Container will no longer always be constructed in the FArchive constructor. This reduces the number of Custom Version Container allocations considerably.
Change 3509294 by Robert.Manuszewski: UDataTable::AddReferencedObjects will no longer try to iterate over the RowMap if there are no UObject references in it.
Change 3509312 by Steve.Robb: GitHub #3679: Add TArray constructor that takes a raw pointer and a count. Check improved for Append() to allow nullptr in empty ranges, and added to the new constructor too. #jira UE-46136
Change 3509396 by Steve.Robb: GitHub #3676: Fix TUnion operator<< compile error. #jira UE-46099
Change 3509633 by Steve.Robb: Fix for line numbers on multiline macros.
Change 3509938 by Gil.Gribb: UE4 - Fix rare assert involving cancelled precache requests and non-pak-file loading.
Change 3510593 by Daniel.Lamb: Fixed up unsolicited files getting populated with files which aren't finished being created yet. #test None
Change 3510594 by Daniel.Lamb: Fixed up temp files directory for patching. Thanks David Yerkess @ Milestone. #review @Ben.Marsh
Change 3511628 by Ben.Marsh: PR #3707: Fixed UBT stack size (Contributed by gildor2)
Change 3511808 by Ben.Marsh: Optimize checks for whether the game project contains source code. Now stops as soon as the first file is found and ignores directories beginning with a '.' character (eg. .git). #jira UE-46540
Change 3512017 by Ben.Marsh: Plugins: Deprecate the QueryStatusForAllPlugins() function; the same functionality is available via the IPlugin interface.
Change 3513935 by Steve.Robb: Reverted array iteration in FPropertyNode::PropagatePropertyChange as this is now covered in TProperty::InitializeValueInternal() as of CL# 3293477.
Change 3514142 by Steve.Robb: MemoryProfiler2 added to generated solution.
Change 3516463 by Ben.Marsh: Plugins: Create a manifest for each PAK file containing all the plugin descriptors in one place. Eliminates the need to recurse through directories and read multiple separate files in serial at startup, and allows reading all plugin descriptors with one read. The "Mods" directory is excluded from the manifest, since these are intended to be installed separately by the user.
Change 3517860 by Ben.Marsh: PR #3727: FString Dereference Fixes (Contributed by jovisgCL)
Change 3517967 by Ben.Marsh: Suppress additional system error dialogs when loading DLLs if -unnattended is on the command line.
Change 3518070 by Steve.Robb: Disable Binned2 stats in shipping non-editor builds.
Change 3520079 by Steve.Robb: Fixed bad codegen for TAssetPtrs being passed into BlueprintImplementableEvent functions. #jira UE-24034
Change 3520080 by Robert.Manuszewski: Made max package summary size configurable with an ini setting.
Change 3520083 by Steve.Robb: Force a GC after hot reload to clean up reinstanced objects which may still tick. #jira UE-40421
Change 3520480 by Robert.Manuszewski: Improved assert message when the initial package read request was too small.
Change 3520590 by Graeme.Thornton: SignedArchiveReader optimizations:
- Loads more stats
- Stop chunk cache worker from waking up continuously to poll for work. Only wake up when triggered by the archive reader
- Signed archive reader just yields when waiting for buffers to finish loading, rather than sleeping for some arbitrary amount of time
- Track the number of pending read requests in an atomic counter, to save having to lock the request queue to check for new entries
Change 3521023 by Graeme.Thornton: Remove spin from signed archive reader. Main thread waits on an event triggered by the chunk worker to indicate that new chunks are ready for processing.
Change 3521787 by Ben.Marsh: PR #3736: Small static code analysis fixes (Contributed by jovisgCL)
Change 3521789 by Ben.Marsh: PR #3735: Fix case sensitivity issue in FWindowsPlatformProcess::IsApplicationRunning. (Contributed by samhocevar)
Change 3524721 by Ben.Marsh: Move Linux SDL initialization into FLinuxPlatformApplicationMisc. Attempting to move functionality related to interactive applications (graphics, input, etc...) into a separate place, so it can ultimately be moved out of Core.
Change 3524741 by Ben.Marsh: Move PumpMessages() into FPlatformApplicationMisc.
Change 3525399 by Ben.Marsh: UGS: Use the default Perforce server port when opening P4V if there is not one set in the environment.
Change 3525743 by Ben.Marsh: UAT: Add a parameter to allow updating version files without updating Version.h, to allow faster link times on incremental builds.
Change 3525746 by Ben.Marsh: EC: Include the clobber option on new workspaces, to allow overriding version files when syncing.
Change 3526453 by Ben.Marsh: UGS: Do not generate project files when syncing precompiled binaries.
Change 3527045 by Ben.Marsh: Fix hot reload generating import libraries without DLLs. Now that they are produced by separate actions by default, it was removing DLLs from the action graph due to the bSkipLinkingWhenNothingToCompile setting.
Change 3527420 by Ben.Marsh: UGS: Add additional search paths for UGS config files, and fix a few cosmetic issues (inability to display ampersands in tools menu, showing changelist -1 when running a tool without syncing). Config files are now read from:
	Engine/Programs/UnrealGameSync/UnrealGameSync.ini
	Engine/Programs/UnrealGameSync/NotForLicensees/UnrealGameSync.ini
If a project is selected:
	<ProjectDir>/Build/UnrealGameSync.ini
	<ProjectDir>/Build/NotForLicensees/UnrealGameSync.ini
If the .uprojectdirs file is selected:
	Engine/Programs/UnrealGameSync/DefaultProject.ini
	Engine/Programs/UnrealGameSync/NotForLicensees/DefaultProject.ini
Change 3528063 by Ben.Marsh: Fix non-thread-safe construction of FPluginManager singleton. The length of time spent in the constructor resulted in multiple instances being constructed at startup, making the time to enumerate plugins on slow media significantly worse.
Change 3528415 by Ben.Marsh: UAT: Remove \r characters from the end of multiline log messages.
Change 3528427 by Ben.Marsh: EC: Fix spaces being converted to tabs at start of line in failure emails (by Gmail), and wrap following lines at the same indent.
Change 3528485 by Ben.Marsh: EC: Remove zero-width word break characters from slashes in notification emails; they can cause really hard-to-debug problems when copy-pasted into other places.
Change 3528505 by Steve.Robb: PR #3755: MallocProfiler - Remove subfolder from profiling save directory (Contributed by Josef-CL) #jira UE-46819
Change 3528772 by Robert.Manuszewski: Enabling actor and blueprint clustering in ShooterGame.
Change 3528786 by Robert.Manuszewski: PR #3760: Fix typo (Contributed by jesseyeh)
Change 3528792 by Steve.Robb: PR #3764: MallocProfiler - Refactoring Scopelock (Contributed by Josef-CL) #jira UE-46962
Change 3528941 by Robert.Manuszewski: Fixed lazy object pointers not being updated for streaming sub-levels in PIE. Fixed lazy pointers returning an object that is still being loaded, which could lead to undefined behavior when client code started modifying the returned object. #jira UE-44996
Change 3530241 by Ben.Marsh: UAT: Only pass -submit or -nosubmit to child instances of UAT if they were specified on the original command line. BuildCookRun uses this flag to determine whether to submit, rather than just whether to allow submitting, so we shouldn't pass an inferred value.
Change 3531377 by Ben.Marsh: Plugins: Allow plugins to specify a list of supported target platforms, which is propagated to any .uproject file that enables it. This has several advantages over the per-module platform whitelist/blacklist:
* Platform-specific .uplugin files can now be excluded when staging other platforms. Previously, it was only possible to determine which platforms a plugin supports by reading the plugin descriptor itself. Now that information is copied into the .uproject file, so the runtime knows which plugins to ignore.
* References to dependent plugins from platform-specific plugins can now be eliminated.
* Plugins containing content can now be unambiguously disabled on a per-platform basis (having no modules for a platform does not confer that a plugin doesn't support that platform; now it is possible to specify supported platforms explicitly).
* The editor can load any plugins without having to whitelist supported editor host platforms.
UE4 targets which support loading plugins for target platforms can set TargetRules.bIncludePluginsForTargetPlatforms (true for the editor by default, false for any other target types). This defines the LOAD_PLUGINS_FOR_TARGET_PLATFORMS macro at runtime, which allows the plugin system to filter which plugins to look for at runtime. Any .uproject file will be updated at startup to contain the list of supported platforms for each referenced plugin if necessary.
Change 3531502 by Jin.Zhang: Add support for GPUCrash. #rb
Change 3531664 by Ben.Marsh: UBT: Change output format from the C# JSON writer to match output by the engine.
Change 3531848 by Ben.Marsh: UAT: Add script to resave all project descriptors under a folder, embedding information for any supported platforms for the plugins they enable.
Change 3531869 by Ben.Marsh: UAT: Add parameter to the ResaveProjectDescriptors command to update the engine association field.
Change 3532474 by Ben.Marsh: UBT: Use the same mechanism as UAT for logging exceptions.
Change 3532734 by Graeme.Thornton: Initial VSCode support:
- Tasks generated for building all game/engine/program targets
- Debugging support for targets on Win64
Change 3532789 by Steve.Robb: FScriptSet::Add and TScriptMap::Add now replace the element, matching the behavior of TSet and TMap. Set_Add and Map_Add no longer have a return value. FScriptSet::Find and FScriptMap::Find functions are now FindIndex. FScriptSetHelper::FindElementFromHash is now FindElementIndexFromHash.
Change 3532845 by Steve.Robb: Obsolete UHT settings deleted.
Change 3532875 by Graeme.Thornton: VSCode:
- Add debug targets for different target configurations
- Choose between VS debugger (Windows) and GDB (Mac/Linux)
Change 3532906 by Graeme.Thornton: VSCode:
- Point all builds directly at UBT rather than the batch files
- Adjust Mac build tasks to run through mono
Change 3532924 by Ben.Marsh: UAT: Set the UAT working directory immediately on startup. This ensures that any command line arguments containing paths are resolved consistently to the branch root.
Change 3535234 by Graeme.Thornton: VSCode - Pass the intellisense system a list of paths to use for header resolution.
Change 3535247 by Graeme.Thornton: UBT - Add a ToString to ProjectFile.Source file to help with debugger watch presentation.
Change 3535376 by Graeme.Thornton: VSCode:
- Added build jobs for C# projects
- Linked launch tasks to relevant build task
Change 3537083 by Ben.Marsh: EC: Change P4 swarm links to start at the changelist for a build.
Change 3537368 by Graeme.Thornton: Fix for crash in FSignedArchiveReader when multithreading is disabled.
Change 3537550 by Graeme.Thornton: Fixed a crash in the taskgraph when running single-threaded.
Change 3537922 by Steve.Robb: Missing PF_ATC_RGBA_I added to FOREACH_ENUM_EPIXELFORMAT.
Change 3539691 by Graeme.Thornton: VSCode:
- Various updates to get PC and Mac C++ projects building and debugging.
- Some other changes to C# setup to allow compilation. Debugging doesn't work.
Change 3539775 by Ben.Marsh: Plugins: Various fixes to settings for enabling plugins.
* Fix crash on startup when trying to disable a missing plugin (was keeping pointers to elements in the project's plugin reference array, which may be modified if a plugin is disabled).
* Revert fix to set PluginDescriptor.bRequiresBuildPlatform = true by default. This was the originally intended behavior, but it was accidentally defaulted to false during serialization unless specified in the .uplugin file. Many plugins may rely on this behavior (they may not declare asset classes otherwise, for example, which could result in loss of data), so change the default value to false instead. Also fixes popups to disable platform-specific plugins if platform SDKs are not installed.
* Fix plugins which are referenced but do not exist not showing the appropriate prompt to disable them.
Change 3540788 by Ben.Marsh: UBT: Add support for declaring custom pre-build steps and post-build steps from .target.cs files. Similarly to the custom build steps configurable from .uproject and .uplugin files, these specify commands which will be executed by the host platform's shell before or after a build. The following variables are expanded within the list of commands before execution: $(EngineDir), $(ProjectDir), $(TargetName), $(TargetPlatform), $(TargetConfiguration), $(TargetType), $(ProjectFile). Example usage:

	public class UnrealPakTarget : TargetRules
	{
		public UnrealPakTarget(TargetInfo Target) : base(Target)
		{
			Type = TargetType.Program;
			LinkType = TargetLinkType.Monolithic;
			LaunchModuleName = "UnrealPak";

			if(HostPlatform == UnrealTargetPlatform.Win64)
			{
				PreBuildSteps.Add("echo Before building:");
				PreBuildSteps.Add("echo This is $(TargetName) $(TargetConfiguration) $(TargetPlatform)");
				PostBuildSteps.Add("echo After building!");
				PostBuildSteps.Add("echo This is $(TargetName) $(TargetConfiguration) $(TargetPlatform)");
			}
		}
	}

Change 3541664 by Graeme.Thornton: VSCode - Add problemMatcher tag to cpp build targets.
Change 3541732 by Graeme.Thornton: VSCode - Change UBT command line switch to "-vscode" for simplicity.
Change 3541967 by Graeme.Thornton: VSCode - Fixes for Mac/Linux build steps.
Change 3541968 by Ben.Marsh: CRP: Pass through the EnabledPlugins element in crash context XML files. #jira UE-46912
Change 3542519 by Ben.Marsh: UBT: Add chain of references to error messages when configuring plugins.
Change 3542523 by Ben.Marsh: UBT: Add more useful error message when an attempt to parse a JSON object fails.
Change 3542658 by Ben.Marsh: UBT: Include a chain of references when reporting errors instantiating modules.
Change 3543432 by Ben.Marsh: Plugins: Fix plugins which are enabled by default not being enabled unless a project file is set.
Change 3543436 by Ben.Marsh: UBT: Prevent recursing through the same module more than once when building out the referenced modules. Produces much shorter reference chains when something fails.
Change 3543536 by Ben.Marsh: UBT: Downgrade message about redundant plugin references to a warning.
Change 3543871 by Gil.Gribb: UE4 - Fixed a critical crash bug with non-EDL loading from pak files.
Change 3543924 by Robert.Manuszewski: Fixed a crash on UnrealFrontend startup caused by re-assembling the GC token stream for one of the classes. + Small optimization to token stream generation code.
Change 3544469 by Jin.Zhang: Crashes page displays the list of plugins from the crash context. #rb
Change 3544608 by Steve.Robb: Fix for nativized generated code. #jira UE-47452
Change 3544612 by Ben.Marsh: Add callback into FMacPlatformMisc::PumpMessages() from FMacPlatformApplicationMisc::PumpMessages(). #jira UE-47449
Change 3545954 by Gil.Gribb: Fixed a critical crash bug relating to a race condition in async package summary reading.
Change 3545968 by Ben.Marsh: UAT: Fix incorrect username in BuildGraph <Submit> task. Should use the username from the Perforce environment, not assume the logged-in user name is the same. #jira UE-47419
Change 3545976 by Ben.Marsh: EC: Delete the AutoSDK client if the directory doesn't exist. When we format build machines, we need to force everything to be resynced from scratch.
Change 3546185 by Ben.Marsh: Hacky fix for deployment on IOS/TVOS. Since deployment directly references the NonUFS manifest files that are written out, merge all the SystemNonUFS files back into the NonUFS list after the regular NonUFS files have been remapped.
Change 3547084 by Gil.Gribb: Fixed a critical race condition in the new async loader. This was only reproducible on IOS, but may affect other platforms.
Change 3547968 by Gil.Gribb: Fixed critical race which could potentially cause a crash in the pak precacher.
Change 3504722 by Ben.Marsh: BuildGraph: Improved tracing for error messages. All errors are now propagated as exceptions, and are tagged with additional context information about the task currently being run. For example, throwing new AutomationException("Unable to write foo.txt") from SetVersionTask.Execute is now displayed in the log as:

	ERROR: Unable to write to foo.txt
	while executing <SetVersion Change="0" CompatibleChange="0" Branch="Unknown" Promoted="True" /> at Engine\Build\InstalledEngineBuild.xml(91)
	(see D:\P4 UE4\Engine\Programs\AutomationTool\Saved\Logs\UAT_Log.txt for full exception trace)

Change 3512255 by Ben.Marsh: Rename FPaths functions with a "Game" prefix (GameDir(), GameContentDir(), etc...) to have a "Project" prefix (ProjectDir(), ProjectContentDir(), etc...) for clarity with non-game uses of UE4. Old functions still exist but are deprecated.
Change 3512332 by Ben.Marsh: Rename "Game" functions in FApp to be "Project" functions (FApp::GetGameName() -> FApp::GetProjectName(), etc...) for clarity with non-game uses of UE4.
Change 3512393 by Ben.Marsh: Rename FPaths::GameLogDir() to FPaths::ProjectLogDir().
Change 3513452 by Ben.Marsh: Plugins: Rename EPluginLoadedFrom::GameProject to EPluginLoadedFrom::Project.
Change 3516262 by Ben.Marsh: Add support for a "Mods" folder distinct from the project's "Plugins" folder, instead of using the bIsMod flag on the plugin descriptor.
* Mods are enumerated similarly to regular plugins, but IPlugin::GetType() will return EPluginType::Mod.
* The DLCName parameter to BuildCookRun and the cooker now correctly finds any plugin in the Plugins or Mods directory (or any subfolders).
Change 3517565 by Ben.Marsh: Remove fixed engine version numbers from OSS plugins.
Change 3518005 by Ben.Marsh: UAT: Remove the bUFSFile parameter from DeployLowerCaseFilenames(). Every platform returns false if the argument is false.
Change 3518054 by Ben.Marsh: UAT: Use an enum to direct whether all directories should be searched when finding files to stage, rather than a bool. Having so many optional boolean arguments makes code unreadable and refactoring hard.
Change 3524496 by Ben.Marsh: Start moving GUI application code into a separate static platform class, hopefully ultimately removing it from Core.
Change 3524641 by Ben.Marsh: Move more functionality related to windowed/graphical applications into FPlatformApplicationMisc.
Change 3528723 by Steve.Robb: MoveTemp now static asserts if passed a const reference or rvalue. MoveTempIfPossible still follows the old (std::move) rule, which is useful for templates where the nature of the argument is not obvious. Fixes to violations of these new rules.
Change 3528876 by Ben.Marsh: Move FPlatformMisc::ClipboardCopy and FPlatformMisc::ClipboardPaste to FPlatformApplicationMisc::ClipboardCopy and FPlatformApplicationMisc::ClipboardPaste.
Change 3529073 by Ben.Marsh: Add script to package ShooterGame for any platform.
Change 3531493 by Ben.Marsh: Update platform-specific plugins to declare the target platforms they support.
Change 3531611 by Ben.Marsh: UAT: Add a ResavePluginDescriptors command, which resaves all plugin descriptors under a given folder, removing any outdated fields and rewriting them in a consistent style. Many plugins in the wild contain redundant or no-longer-used fields due to using our plugins as templates.
Change 3531868 by Ben.Marsh: Resaving project descriptors to remove invalid fields.
Change 3531983 by Ben.Marsh: UAT: Simplify logic for staging code, and add validation against shipping files in restricted folders.
* Added a new SystemNonUFS type for staged files, which excludes files from being remapped or renamed by the platform layer.
* Replaced the DeploymentContext.StageFiles() function with simpler overloads for particular use cases (options for remapping are replaced with the SystemNonUFS file type).
* Config entries in the [Staging] category in the DefaultGame.ini file allow remapping one directory to another, so restricted content can be made public in packaged builds (example syntax: +RemapDirectory=(From="Foo/NoRedist", To="Foo")).
* An error is output if any restricted folder names other than the output platform are in the staged output.
Change 3540315 by Ben.Marsh: UAT: Moving StreamCopyDescription command into a NotForLicensees folder, since it's only meant to be used by engine developers.
Change 3542410 by Ben.Marsh: UBT: Deprecate accessing properties through BuildConfiguration.* or UEBuildConfiguration.* from .target.cs files. These have been aliases to the current TargetRules instance for several releases already.
Change 3543018 by Ben.Marsh: UBT: Deprecate the BuildConfiguration and UEBuildConfiguration aliases from the ModuleRules class. These have been implemented as an alias to the ReadOnlyTargetRules instance passed to the constructor for several engine versions.
Change 3544371 by Steve.Robb: Fixes to TSet_Add and TMap_Add BPs. #jira UE-47441

[CL 3548391 by Ben Marsh in Main branch]
4396 lines
130 KiB
C++
// Copyright 1998-2017 Epic Games, Inc. All Rights Reserved.

#include "IPlatformFilePak.h"
#include "HAL/FileManager.h"
#include "Misc/CoreMisc.h"
#include "Misc/CommandLine.h"
#include "Async/AsyncWork.h"
#include "Serialization/MemoryReader.h"
#include "HAL/IConsoleManager.h"
#include "Misc/CoreDelegates.h"
#include "Misc/App.h"
#include "Modules/ModuleManager.h"
#include "Misc/SecureHash.h"
#include "HAL/FileManagerGeneric.h"
#include "HAL/IPlatformFileModule.h"
#include "SignedArchiveReader.h"
#include "Misc/AES.h"
#include "GenericPlatform/GenericPlatformChunkInstall.h"
#include "Async/AsyncFileHandle.h"
#include "Templates/Greater.h"
#include "Serialization/ArchiveProxy.h"

DEFINE_LOG_CATEGORY(LogPakFile);
DEFINE_STAT(STAT_PakFile_Read);
DEFINE_STAT(STAT_PakFile_NumOpenHandles);

TPakChunkHash ComputePakChunkHash(const void* InData, int64 InDataSizeInBytes)
{
#if PAKHASH_USE_CRC
	return FCrc::MemCrc32(InData, InDataSizeInBytes);
#else
	FSHAHash Hash;
	FSHA1::HashBuffer(InData, InDataSizeInBytes, Hash);
	return Hash;
#endif
}
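ComputePakChunkHash selects between a CRC32 and a SHA-1 digest of each chunk at compile time via PAKHASH_USE_CRC. As a standalone illustration of the CRC path (FCrc::MemCrc32 is engine code; the bitwise CRC-32 below is a generic stand-in, not the engine's table-driven implementation):

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// Generic reflected CRC-32 (polynomial 0xEDB88320), processing one bit at a
// time. A table-driven variant, as used by the engine, computes the same value
// faster.
uint32_t Crc32(const void* Data, size_t Size, uint32_t Crc = 0)
{
    const uint8_t* Bytes = static_cast<const uint8_t*>(Data);
    Crc = ~Crc;
    for (size_t i = 0; i < Size; ++i)
    {
        Crc ^= Bytes[i];
        for (int Bit = 0; Bit < 8; ++Bit)
        {
            // Shift right; XOR the polynomial in whenever the low bit was set.
            Crc = (Crc >> 1) ^ (0xEDB88320u & (0u - (Crc & 1u)));
        }
    }
    return ~Crc;
}
```

The standard check value applies: hashing the ASCII string "123456789" yields 0xCBF43926.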
#ifndef EXCLUDE_NONPAK_UE_EXTENSIONS
#define EXCLUDE_NONPAK_UE_EXTENSIONS 1	// Use .Build.cs file to disable this if the game relies on accessing loose files
#endif

FFilenameSecurityDelegate& FPakPlatformFile::GetFilenameSecurityDelegate()
{
	static FFilenameSecurityDelegate Delegate;
	return Delegate;
}
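GetFilenameSecurityDelegate returns a function-local static, so the delegate is constructed on first use and every caller sees the same instance. A minimal sketch of the same pattern in standard C++, with std::function standing in for the engine's delegate type (names hypothetical):

```cpp
#include <functional>
#include <cassert>

// Hypothetical stand-in for FFilenameSecurityDelegate: a callable that
// decides whether a given filename may be accessed.
using FilenameSecurityDelegate = std::function<bool(const char*)>;

// Function-local static ("Meyers singleton"): constructed on the first call,
// and the same instance is returned on every subsequent call.
FilenameSecurityDelegate& GetFilenameSecurityDelegate()
{
    static FilenameSecurityDelegate Delegate;
    return Delegate;
}
```

One caller can bind the delegate and another can invoke it later through the same accessor, without any explicit initialization ordering between them.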
#define USE_PAK_PRECACHE (!IS_PROGRAM && !WITH_EDITOR) // you can turn this off to use the async IO stuff without the precache

/**
 * Precaching
 */
const ANSICHAR* FPakPlatformFile::GetPakEncryptionKey()
{
	FCoreDelegates::FPakEncryptionKeyDelegate& Delegate = FCoreDelegates::GetPakEncryptionKeyDelegate();
	if (Delegate.IsBound())
	{
		return Delegate.Execute();
	}
	else
	{
		return nullptr;
	}
}

void FPakPlatformFile::GetPakSigningKeys(FString& OutExponent, FString& OutModulus)
{
	FCoreDelegates::FPakSigningKeysDelegate& Delegate = FCoreDelegates::GetPakSigningKeysDelegate();
	if (Delegate.IsBound())
	{
		Delegate.Execute(OutExponent, OutModulus);
	}
}
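GetPakEncryptionKey follows a bound-check-then-execute pattern: if the game has registered a key-provider delegate it is invoked, otherwise nullptr is returned (and DecryptData below asserts on it). A standalone sketch of that pattern using std::function in place of the engine delegate (names hypothetical):

```cpp
#include <functional>
#include <cassert>

// Hypothetical stand-in for the engine's pak encryption key delegate.
static std::function<const char*()>& GetKeyDelegate()
{
    static std::function<const char*()> Delegate;
    return Delegate;
}

// Mirrors the shape of FPakPlatformFile::GetPakEncryptionKey: execute the
// delegate if one is bound, otherwise report that no key is available.
const char* GetPakEncryptionKeySketch()
{
    if (GetKeyDelegate())              // IsBound() equivalent
    {
        return GetKeyDelegate()();     // Execute() equivalent
    }
    return nullptr;                    // caller decides how to handle "no key"
}
```

Returning nullptr rather than asserting here lets unencrypted builds run without ever registering a key; only code paths that actually need to decrypt (such as DecryptData) treat a missing key as fatal.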
DECLARE_DWORD_ACCUMULATOR_STAT(TEXT("PakCache Sync Decrypts (Uncompressed Path)"), STAT_PakCache_SyncDecrypts, STATGROUP_PakFile);
DECLARE_FLOAT_ACCUMULATOR_STAT(TEXT("PakCache Decrypt Time"), STAT_PakCache_DecryptTime, STATGROUP_PakFile);
DECLARE_DWORD_ACCUMULATOR_STAT(TEXT("PakCache Async Decrypts (Compressed Path)"), STAT_PakCache_CompressedDecrypts, STATGROUP_PakFile);
DECLARE_DWORD_ACCUMULATOR_STAT(TEXT("PakCache Async Decrypts (Uncompressed Path)"), STAT_PakCache_UncompressedDecrypts, STATGROUP_PakFile);

inline void DecryptData(uint8* InData, uint32 InDataSize)
{
	SCOPE_SECONDS_ACCUMULATOR(STAT_PakCache_DecryptTime);
	const ANSICHAR* Key = FPakPlatformFile::GetPakEncryptionKey();
	checkf(Key, TEXT("AES decryption has been requested, but no valid encryption key was available"));
	FAES::DecryptData(InData, InDataSize, Key);
}
#if USE_PAK_PRECACHE
#include "TaskGraphInterfaces.h"

#define PAK_CACHE_GRANULARITY (64*1024)
static_assert((PAK_CACHE_GRANULARITY % FPakInfo::MaxChunkDataSize) == 0, "PAK_CACHE_GRANULARITY must be set to a multiple of FPakInfo::MaxChunkDataSize");
#define PAK_CACHE_MAX_REQUESTS (8)
#define PAK_CACHE_MAX_PRIORITY_DIFFERENCE_MERGE (AIOP_Normal - AIOP_Precache)
#define PAK_EXTRA_CHECKS DO_CHECK

DECLARE_MEMORY_STAT(TEXT("PakCache Current"), STAT_PakCacheMem, STATGROUP_Memory);
DECLARE_MEMORY_STAT(TEXT("PakCache High Water"), STAT_PakCacheHighWater, STATGROUP_Memory);

DECLARE_FLOAT_ACCUMULATOR_STAT(TEXT("PakCache Signing Chunk Hash Time"), STAT_PakCache_SigningChunkHashTime, STATGROUP_PakFile);
DECLARE_MEMORY_STAT(TEXT("PakCache Signing Chunk Hash Size"), STAT_PakCache_SigningChunkHashSize, STATGROUP_PakFile);
static int32 GPakCache_Enable = 1;
static FAutoConsoleVariableRef CVar_Enable(
	TEXT("pakcache.Enable"),
	GPakCache_Enable,
	TEXT("If > 0, then enable the pak cache.")
);

int32 GPakCache_MaxRequestsToLowerLevel = 2;
static FAutoConsoleVariableRef CVar_MaxRequestsToLowerLevel(
	TEXT("pakcache.MaxRequestsToLowerLevel"),
	GPakCache_MaxRequestsToLowerLevel,
	TEXT("Controls the maximum number of IO requests submitted to the OS filesystem at one time. Limited by PAK_CACHE_MAX_REQUESTS.")
);

int32 GPakCache_MaxRequestSizeToLowerLevelKB = 1024;
static FAutoConsoleVariableRef CVar_MaxRequestSizeToLowerLevelKB(
	TEXT("pakcache.MaxRequestSizeToLowerLevellKB"),
	GPakCache_MaxRequestSizeToLowerLevelKB,
	TEXT("Controls the maximum size (in KB) of IO requests submitted to the OS filesystem.")
);

int32 GPakCache_NumUnreferencedBlocksToCache = 10;
static FAutoConsoleVariableRef CVar_NumUnreferencedBlocksToCache(
	TEXT("pakcache.NumUnreferencedBlocksToCache"),
	GPakCache_NumUnreferencedBlocksToCache,
	TEXT("Controls the maximum number of unreferenced blocks to keep. This is a classic disk cache and the maximum wasted memory is pakcache.MaxRequestSizeToLowerLevellKB * pakcache.NumUnreferencedBlocksToCache.")
);
class FPakPrecacher;

typedef uint64 FJoinedOffsetAndPakIndex;
static FORCEINLINE uint16 GetRequestPakIndexLow(FJoinedOffsetAndPakIndex Joined)
{
	return uint16((Joined >> 48) & 0xffff);
}

static FORCEINLINE int64 GetRequestOffset(FJoinedOffsetAndPakIndex Joined)
{
	return int64(Joined & 0xffffffffffffll);
}

static FORCEINLINE FJoinedOffsetAndPakIndex MakeJoinedRequest(uint16 PakIndex, int64 Offset)
{
	check(Offset >= 0);
	return (FJoinedOffsetAndPakIndex(PakIndex) << 48) | Offset;
}

enum
{
	IntervalTreeInvalidIndex = 0
};

typedef uint32 TIntervalTreeIndex; // this is the arg type of TSparseArray::operator[]

static uint32 GNextSalt = 1;
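MakeJoinedRequest packs a 16-bit pak index into the top bits of a uint64 and a 48-bit file offset into the low bits, so a single integer identifies a read location across all mounted paks; GetRequestPakIndexLow and GetRequestOffset unpack it. A standalone sketch of the same packing with plain fixed-width types:

```cpp
#include <cstdint>
#include <cassert>

// Pack a 16-bit pak index into bits 48..63 and a 48-bit offset into bits 0..47.
uint64_t MakeJoined(uint16_t PakIndex, int64_t Offset)
{
    // The offset must fit in 48 bits (up to 256 TiB), or it would corrupt the
    // pak-index field.
    assert(Offset >= 0 && Offset < (int64_t(1) << 48));
    return (uint64_t(PakIndex) << 48) | uint64_t(Offset);
}

uint16_t GetPakIndex(uint64_t Joined)
{
    return uint16_t((Joined >> 48) & 0xffff);
}

int64_t GetOffset(uint64_t Joined)
{
    return int64_t(Joined & 0xffffffffffffull);
}
```

A useful property of this layout is that comparing two packed values orders them by pak index first and offset second, so sorted containers of requests naturally group reads by pak file.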
// This is like TSparseArray, only a bit safer and I needed some restrictions on resizing.
template<class TItem>
class TIntervalTreeAllocator
{
	TArray<TItem> Items;
	TArray<int32> FreeItems; //@todo make this into a linked list through the existing items
	uint32 Salt;
	uint32 SaltMask;
public:
	TIntervalTreeAllocator()
	{
		check(GNextSalt < 4);
		Salt = (GNextSalt++) << 30;
		SaltMask = MAX_uint32 << 30;
		verify((Alloc() & ~SaltMask) == IntervalTreeInvalidIndex); // we want this to always have element zero so we can figure out an index from a pointer
	}
	inline TIntervalTreeIndex Alloc()
	{
		int32 Result;
		if (FreeItems.Num())
		{
			Result = FreeItems.Pop();
		}
		else
		{
			Result = Items.Num();
			Items.AddUninitialized();
		}
		new ((void*)&Items[Result]) TItem();
		return Result | Salt;
	}
	void EnsureNoRealloc(int32 NeededNewNum)
	{
		if (FreeItems.Num() + Items.GetSlack() < NeededNewNum)
		{
			Items.Reserve(Items.Num() + NeededNewNum);
		}
	}
	FORCEINLINE TItem& Get(TIntervalTreeIndex InIndex)
	{
		TIntervalTreeIndex Index = InIndex & ~SaltMask;
		check((InIndex & SaltMask) == Salt && Index != IntervalTreeInvalidIndex && Index >= 0 && Index < (uint32)Items.Num()); //&& !FreeItems.Contains(Index));
		return Items[Index];
	}
	FORCEINLINE void Free(TIntervalTreeIndex InIndex)
	{
		TIntervalTreeIndex Index = InIndex & ~SaltMask;
		check((InIndex & SaltMask) == Salt && Index != IntervalTreeInvalidIndex && Index >= 0 && Index < (uint32)Items.Num()); //&& !FreeItems.Contains(Index));
		Items[Index].~TItem();
		FreeItems.Push(Index);
		if (FreeItems.Num() + 1 == Items.Num())
		{
			// get rid of everything to restore memory coherence
			Items.Empty();
			FreeItems.Empty();
			verify((Alloc() & ~SaltMask) == IntervalTreeInvalidIndex); // we want this to always have element zero so we can figure out an index from a pointer
		}
	}
	FORCEINLINE void CheckIndex(TIntervalTreeIndex InIndex)
	{
		TIntervalTreeIndex Index = InIndex & ~SaltMask;
		check((InIndex & SaltMask) == Salt && Index != IntervalTreeInvalidIndex && Index >= 0 && Index < (uint32)Items.Num()); // && !FreeItems.Contains(Index));
	}
};
class FIntervalTreeNode
{
public:
	TIntervalTreeIndex LeftChildOrRootOfLeftList;
	TIntervalTreeIndex RootOfOnList;
	TIntervalTreeIndex RightChildOrRootOfRightList;

	FIntervalTreeNode()
		: LeftChildOrRootOfLeftList(IntervalTreeInvalidIndex)
		, RootOfOnList(IntervalTreeInvalidIndex)
		, RightChildOrRootOfRightList(IntervalTreeInvalidIndex)
	{
	}
	~FIntervalTreeNode()
	{
		check(LeftChildOrRootOfLeftList == IntervalTreeInvalidIndex && RootOfOnList == IntervalTreeInvalidIndex && RightChildOrRootOfRightList == IntervalTreeInvalidIndex); // this routine does not handle recursive destruction
	}
};

static TIntervalTreeAllocator<FIntervalTreeNode> GIntervalTreeNodeNodeAllocator;

static FORCEINLINE uint64 HighBit(uint64 x)
{
	return x & (1ull << 63);
}

static FORCEINLINE bool IntervalsIntersect(uint64 Min1, uint64 Max1, uint64 Min2, uint64 Max2)
{
	return !(Max2 < Min1 || Max1 < Min2);
}
// this routine assumes that the pointers remain valid even though we are reallocating
template<typename TItem>
static void AddToIntervalTree_Dangerous(
	TIntervalTreeIndex* RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	TIntervalTreeIndex Index,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint32 CurrentShift,
	uint32 MaxShift
	)
{
	while (true)
	{
		if (*RootNode == IntervalTreeInvalidIndex)
		{
			*RootNode = GIntervalTreeNodeNodeAllocator.Alloc();
		}

		int64 MinShifted = HighBit(MinInterval << CurrentShift);
		int64 MaxShifted = HighBit(MaxInterval << CurrentShift);
		FIntervalTreeNode& Root = GIntervalTreeNodeNodeAllocator.Get(*RootNode);

		if (MinShifted == MaxShifted && CurrentShift < MaxShift)
		{
			CurrentShift++;
			RootNode = (!MinShifted) ? &Root.LeftChildOrRootOfLeftList : &Root.RightChildOrRootOfRightList;
		}
		else
		{
			TItem& Item = Allocator.Get(Index);
			if (MinShifted != MaxShifted) // crosses middle
			{
				Item.Next = Root.RootOfOnList;
				Root.RootOfOnList = Index;
			}
			else // we are at the leaf
			{
				if (!MinShifted)
				{
					Item.Next = Root.LeftChildOrRootOfLeftList;
					Root.LeftChildOrRootOfLeftList = Index;
				}
				else
				{
					Item.Next = Root.RightChildOrRootOfRightList;
					Root.RightChildOrRootOfRightList = Index;
				}
			}
			return;
		}
	}
}

template<typename TItem>
static void AddToIntervalTree(
	TIntervalTreeIndex* RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	TIntervalTreeIndex Index,
	uint32 StartShift,
	uint32 MaxShift
	)
{
	GIntervalTreeNodeNodeAllocator.EnsureNoRealloc(1 + MaxShift - StartShift);
	TItem& Item = Allocator.Get(Index);
	check(Item.Next == IntervalTreeInvalidIndex);
	uint64 MinInterval = GetRequestOffset(Item.OffsetAndPakIndex);
	uint64 MaxInterval = MinInterval + Item.Size - 1;
	AddToIntervalTree_Dangerous(RootNode, Allocator, Index, MinInterval, MaxInterval, StartShift, MaxShift);
}
template<typename TItem>
static FORCEINLINE bool ScanNodeListForRemoval(
	TIntervalTreeIndex* Iter,
	TIntervalTreeAllocator<TItem>& Allocator,
	TIntervalTreeIndex Index,
	uint64 MinInterval,
	uint64 MaxInterval
	)
{
	while (*Iter != IntervalTreeInvalidIndex)
	{
		TItem& Item = Allocator.Get(*Iter);
		if (*Iter == Index)
		{
			*Iter = Item.Next;
			Item.Next = IntervalTreeInvalidIndex;
			return true;
		}
		Iter = &Item.Next;
	}
	return false;
}

template<typename TItem>
static bool RemoveFromIntervalTree(
	TIntervalTreeIndex* RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	TIntervalTreeIndex Index,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint32 CurrentShift,
	uint32 MaxShift
	)
{
	bool bResult = false;
	if (*RootNode != IntervalTreeInvalidIndex)
	{
		int64 MinShifted = HighBit(MinInterval << CurrentShift);
		int64 MaxShifted = HighBit(MaxInterval << CurrentShift);
		FIntervalTreeNode& Root = GIntervalTreeNodeNodeAllocator.Get(*RootNode);

		if (!MinShifted && !MaxShifted)
		{
			if (CurrentShift == MaxShift)
			{
				bResult = ScanNodeListForRemoval(&Root.LeftChildOrRootOfLeftList, Allocator, Index, MinInterval, MaxInterval);
			}
			else
			{
				bResult = RemoveFromIntervalTree(&Root.LeftChildOrRootOfLeftList, Allocator, Index, MinInterval, MaxInterval, CurrentShift + 1, MaxShift);
			}
		}
		else if (!MinShifted && MaxShifted)
		{
			bResult = ScanNodeListForRemoval(&Root.RootOfOnList, Allocator, Index, MinInterval, MaxInterval);
		}
		else
		{
			if (CurrentShift == MaxShift)
			{
				bResult = ScanNodeListForRemoval(&Root.RightChildOrRootOfRightList, Allocator, Index, MinInterval, MaxInterval);
			}
			else
			{
				bResult = RemoveFromIntervalTree(&Root.RightChildOrRootOfRightList, Allocator, Index, MinInterval, MaxInterval, CurrentShift + 1, MaxShift);
			}
		}
		if (bResult)
		{
			if (Root.LeftChildOrRootOfLeftList == IntervalTreeInvalidIndex && Root.RootOfOnList == IntervalTreeInvalidIndex && Root.RightChildOrRootOfRightList == IntervalTreeInvalidIndex)
			{
				check(&Root == &GIntervalTreeNodeNodeAllocator.Get(*RootNode));
				GIntervalTreeNodeNodeAllocator.Free(*RootNode);
				*RootNode = IntervalTreeInvalidIndex;
			}
		}
	}
	return bResult;
}

template<typename TItem>
static bool RemoveFromIntervalTree(
	TIntervalTreeIndex* RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	TIntervalTreeIndex Index,
	uint32 StartShift,
	uint32 MaxShift
	)
{
	TItem& Item = Allocator.Get(Index);
	uint64 MinInterval = GetRequestOffset(Item.OffsetAndPakIndex);
	uint64 MaxInterval = MinInterval + Item.Size - 1;
	return RemoveFromIntervalTree(RootNode, Allocator, Index, MinInterval, MaxInterval, StartShift, MaxShift);
}
template<typename TItem>
static FORCEINLINE void ScanNodeListForRemovalFunc(
	TIntervalTreeIndex* Iter,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	while (*Iter != IntervalTreeInvalidIndex)
	{
		TItem& Item = Allocator.Get(*Iter);
		uint64 Offset = uint64(GetRequestOffset(Item.OffsetAndPakIndex));
		uint64 LastByte = Offset + uint64(Item.Size) - 1;

		// save the value and then clear it.
		TIntervalTreeIndex NextIndex = Item.Next;
		if (IntervalsIntersect(MinInterval, MaxInterval, Offset, LastByte) && Func(*Iter))
		{
			*Iter = NextIndex; // the item may already have been deleted, so we cannot rely on its memory block
		}
		else
		{
			Iter = &Item.Next;
		}
	}
}

template<typename TItem>
static void MaybeRemoveOverlappingNodesInIntervalTree(
	TIntervalTreeIndex* RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint64 MinNode,
	uint64 MaxNode,
	uint32 CurrentShift,
	uint32 MaxShift,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	if (*RootNode != IntervalTreeInvalidIndex)
	{
		int64 MinShifted = HighBit(MinInterval << CurrentShift);
		int64 MaxShifted = HighBit(MaxInterval << CurrentShift);
		FIntervalTreeNode& Root = GIntervalTreeNodeNodeAllocator.Get(*RootNode);
		uint64 Center = (MinNode + MaxNode + 1) >> 1;

		//UE_LOG(LogTemp, Warning, TEXT("Exploring Node %X [%d, %d] %d%d interval %llX %llX node interval %llX %llX center %llX "), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted, MinInterval, MaxInterval, MinNode, MaxNode, Center);

		if (!MinShifted)
		{
			if (CurrentShift == MaxShift)
			{
				//UE_LOG(LogTemp, Warning, TEXT("LeftBottom %X [%d, %d] %d%d"), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted);
				ScanNodeListForRemovalFunc(&Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, MaxInterval, Func);
			}
			else
			{
				//UE_LOG(LogTemp, Warning, TEXT("LeftRecur %X [%d, %d] %d%d"), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted);
				MaybeRemoveOverlappingNodesInIntervalTree(&Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, FMath::Min(MaxInterval, Center - 1), MinNode, Center - 1, CurrentShift + 1, MaxShift, Func);
			}
		}

		//UE_LOG(LogTemp, Warning, TEXT("Center %X [%d, %d] %d%d"), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted);
		ScanNodeListForRemovalFunc(&Root.RootOfOnList, Allocator, MinInterval, MaxInterval, Func);

		if (MaxShifted)
		{
			if (CurrentShift == MaxShift)
			{
				//UE_LOG(LogTemp, Warning, TEXT("RightBottom %X [%d, %d] %d%d"), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted);
				ScanNodeListForRemovalFunc(&Root.RightChildOrRootOfRightList, Allocator, MinInterval, MaxInterval, Func);
			}
			else
			{
				//UE_LOG(LogTemp, Warning, TEXT("RightRecur %X [%d, %d] %d%d"), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted);
				MaybeRemoveOverlappingNodesInIntervalTree(&Root.RightChildOrRootOfRightList, Allocator, FMath::Max(MinInterval, Center), MaxInterval, Center, MaxNode, CurrentShift + 1, MaxShift, Func);
			}
		}

		//UE_LOG(LogTemp, Warning, TEXT("Done Exploring Node %X [%d, %d] %d%d interval %llX %llX node interval %llX %llX center %llX "), *RootNode, CurrentShift, MaxShift, !!MinShifted, !!MaxShifted, MinInterval, MaxInterval, MinNode, MaxNode, Center);

		if (Root.LeftChildOrRootOfLeftList == IntervalTreeInvalidIndex && Root.RootOfOnList == IntervalTreeInvalidIndex && Root.RightChildOrRootOfRightList == IntervalTreeInvalidIndex)
		{
			check(&Root == &GIntervalTreeNodeNodeAllocator.Get(*RootNode));
			GIntervalTreeNodeNodeAllocator.Free(*RootNode);
			*RootNode = IntervalTreeInvalidIndex;
		}
	}
}
template<typename TItem>
static FORCEINLINE bool ScanNodeList(
	TIntervalTreeIndex Iter,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	while (Iter != IntervalTreeInvalidIndex)
	{
		TItem& Item = Allocator.Get(Iter);
		uint64 Offset = uint64(GetRequestOffset(Item.OffsetAndPakIndex));
		uint64 LastByte = Offset + uint64(Item.Size) - 1;
		if (IntervalsIntersect(MinInterval, MaxInterval, Offset, LastByte))
		{
			if (!Func(Iter))
			{
				return false;
			}
		}
		Iter = Item.Next;
	}
	return true;
}

template<typename TItem>
static bool OverlappingNodesInIntervalTree(
	TIntervalTreeIndex RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint64 MinNode,
	uint64 MaxNode,
	uint32 CurrentShift,
	uint32 MaxShift,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	if (RootNode != IntervalTreeInvalidIndex)
	{
		int64 MinShifted = HighBit(MinInterval << CurrentShift);
		int64 MaxShifted = HighBit(MaxInterval << CurrentShift);
		FIntervalTreeNode& Root = GIntervalTreeNodeNodeAllocator.Get(RootNode);
		uint64 Center = (MinNode + MaxNode + 1) >> 1;

		if (!MinShifted)
		{
			if (CurrentShift == MaxShift)
			{
				if (!ScanNodeList(Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, MaxInterval, Func))
				{
					return false;
				}
			}
			else
			{
				if (!OverlappingNodesInIntervalTree(Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, FMath::Min(MaxInterval, Center - 1), MinNode, Center - 1, CurrentShift + 1, MaxShift, Func))
				{
					return false;
				}
			}
		}
		if (!ScanNodeList(Root.RootOfOnList, Allocator, MinInterval, MaxInterval, Func))
		{
			return false;
		}
		if (MaxShifted)
		{
			if (CurrentShift == MaxShift)
			{
				if (!ScanNodeList(Root.RightChildOrRootOfRightList, Allocator, MinInterval, MaxInterval, Func))
				{
					return false;
				}
			}
			else
			{
				if (!OverlappingNodesInIntervalTree(Root.RightChildOrRootOfRightList, Allocator, FMath::Max(MinInterval, Center), MaxInterval, Center, MaxNode, CurrentShift + 1, MaxShift, Func))
				{
					return false;
				}
			}
		}
	}
	return true;
}
template<typename TItem>
static bool ScanNodeListWithShrinkingInterval(
	TIntervalTreeIndex Iter,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64& MaxInterval,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	while (Iter != IntervalTreeInvalidIndex)
	{
		TItem& Item = Allocator.Get(Iter);
		uint64 Offset = uint64(GetRequestOffset(Item.OffsetAndPakIndex));
		uint64 LastByte = Offset + uint64(Item.Size) - 1;
		//UE_LOG(LogTemp, Warning, TEXT("Test Overlap %llu %llu %llu %llu"), MinInterval, MaxInterval, Offset, LastByte);
		if (IntervalsIntersect(MinInterval, MaxInterval, Offset, LastByte))
		{
			//UE_LOG(LogTemp, Warning, TEXT("Overlap %llu %llu %llu %llu"), MinInterval, MaxInterval, Offset, LastByte);
			if (!Func(Iter))
			{
				return false;
			}
		}
		Iter = Item.Next;
	}
	return true;
}

template<typename TItem>
static bool OverlappingNodesInIntervalTreeWithShrinkingInterval(
	TIntervalTreeIndex RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64& MaxInterval,
	uint64 MinNode,
	uint64 MaxNode,
	uint32 CurrentShift,
	uint32 MaxShift,
	TFunctionRef<bool(TIntervalTreeIndex)> Func
	)
{
	if (RootNode != IntervalTreeInvalidIndex)
	{
		int64 MinShifted = HighBit(MinInterval << CurrentShift);
		int64 MaxShifted = HighBit(FMath::Min(MaxInterval, MaxNode) << CurrentShift); // since MaxInterval is changing, we cannot clamp it during recursion.
		FIntervalTreeNode& Root = GIntervalTreeNodeNodeAllocator.Get(RootNode);
		uint64 Center = (MinNode + MaxNode + 1) >> 1;

		if (!MinShifted)
		{
			if (CurrentShift == MaxShift)
			{
				if (!ScanNodeListWithShrinkingInterval(Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, MaxInterval, Func))
				{
					return false;
				}
			}
			else
			{
				if (!OverlappingNodesInIntervalTreeWithShrinkingInterval(Root.LeftChildOrRootOfLeftList, Allocator, MinInterval, MaxInterval, MinNode, Center - 1, CurrentShift + 1, MaxShift, Func)) // since MaxInterval is changing, we cannot clamp it during recursion.
				{
					return false;
				}
			}
		}
		if (!ScanNodeListWithShrinkingInterval(Root.RootOfOnList, Allocator, MinInterval, MaxInterval, Func))
		{
			return false;
		}
		MaxShifted = HighBit(FMath::Min(MaxInterval, MaxNode) << CurrentShift); // since MaxInterval is changing, we cannot clamp it during recursion.
		if (MaxShifted)
		{
			if (CurrentShift == MaxShift)
			{
				if (!ScanNodeListWithShrinkingInterval(Root.RightChildOrRootOfRightList, Allocator, MinInterval, MaxInterval, Func))
				{
					return false;
				}
			}
			else
			{
				if (!OverlappingNodesInIntervalTreeWithShrinkingInterval(Root.RightChildOrRootOfRightList, Allocator, FMath::Max(MinInterval, Center), MaxInterval, Center, MaxNode, CurrentShift + 1, MaxShift, Func))
				{
					return false;
				}
			}
		}
	}
	return true;
}
template<typename TItem>
static void MaskInterval(
	TIntervalTreeIndex Index,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint32 BytesToBitsShift,
	uint64* Bits
	)
{
	TItem& Item = Allocator.Get(Index);
	uint64 Offset = uint64(GetRequestOffset(Item.OffsetAndPakIndex));
	uint64 LastByte = Offset + uint64(Item.Size) - 1;
	uint64 InterMinInterval = FMath::Max(MinInterval, Offset);
	uint64 InterMaxInterval = FMath::Min(MaxInterval, LastByte);
	if (InterMinInterval <= InterMaxInterval)
	{
		uint32 FirstBit = uint32((InterMinInterval - MinInterval) >> BytesToBitsShift);
		uint32 LastBit = uint32((InterMaxInterval - MinInterval) >> BytesToBitsShift);
		uint32 FirstQWord = FirstBit >> 6;
		uint32 LastQWord = LastBit >> 6;
		uint32 FirstBitQWord = FirstBit & 63;
		uint32 LastBitQWord = LastBit & 63;
		if (FirstQWord == LastQWord)
		{
			Bits[FirstQWord] |= ((MAX_uint64 << FirstBitQWord) & (MAX_uint64 >> (63 - LastBitQWord)));
		}
		else
		{
			Bits[FirstQWord] |= (MAX_uint64 << FirstBitQWord);
			for (uint32 QWordIndex = FirstQWord + 1; QWordIndex < LastQWord; QWordIndex++)
			{
				Bits[QWordIndex] = MAX_uint64;
			}
			Bits[LastQWord] |= (MAX_uint64 >> (63 - LastBitQWord));
		}
	}
}

template<typename TItem>
static void OverlappingNodesInIntervalTreeMask(
	TIntervalTreeIndex RootNode,
	TIntervalTreeAllocator<TItem>& Allocator,
	uint64 MinInterval,
	uint64 MaxInterval,
	uint64 MinNode,
	uint64 MaxNode,
	uint32 CurrentShift,
	uint32 MaxShift,
	uint32 BytesToBitsShift,
	uint64* Bits
	)
{
	OverlappingNodesInIntervalTree(
		RootNode,
		Allocator,
		MinInterval,
		MaxInterval,
		MinNode,
		MaxNode,
		CurrentShift,
		MaxShift,
		[&Allocator, MinInterval, MaxInterval, BytesToBitsShift, Bits](TIntervalTreeIndex Index) -> bool
		{
			MaskInterval(Index, Allocator, MinInterval, MaxInterval, BytesToBitsShift, Bits);
			return true;
		}
	);
}
class IPakRequestor
{
	friend class FPakPrecacher;
	FJoinedOffsetAndPakIndex OffsetAndPakIndex; // this is used for searching and filled in when you make the request
	uint64 UniqueID;
	TIntervalTreeIndex InRequestIndex;
public:
	IPakRequestor()
		: OffsetAndPakIndex(MAX_uint64) // invalid value
		, UniqueID(0)
		, InRequestIndex(IntervalTreeInvalidIndex)
	{
	}
	virtual ~IPakRequestor()
	{
	}
	virtual void RequestIsComplete()
	{
	}
};

static FPakPrecacher* PakPrecacherSingleton = nullptr;
class FPakPrecacher
{
	enum class EInRequestStatus
	{
		Complete,
		Waiting,
		InFlight,
		Num
	};

	enum class EBlockStatus
	{
		InFlight,
		Complete,
		Num
	};

	IPlatformFile* LowerLevel;
	FCriticalSection CachedFilesScopeLock;
	FJoinedOffsetAndPakIndex LastReadRequest;
	uint64 NextUniqueID;
	int64 BlockMemory;
	int64 BlockMemoryHighWater;

	struct FCacheBlock
	{
		FJoinedOffsetAndPakIndex OffsetAndPakIndex;
		int64 Size;
		uint8 *Memory;
		uint32 InRequestRefCount;
		TIntervalTreeIndex Index;
		TIntervalTreeIndex Next;
		EBlockStatus Status;

		FCacheBlock()
			: OffsetAndPakIndex(0)
			, Size(0)
			, Memory(nullptr)
			, InRequestRefCount(0)
			, Index(IntervalTreeInvalidIndex)
			, Next(IntervalTreeInvalidIndex)
			, Status(EBlockStatus::InFlight)
		{
		}
	};

	struct FPakInRequest
	{
		FJoinedOffsetAndPakIndex OffsetAndPakIndex;
		int64 Size;
		IPakRequestor* Owner;
		uint64 UniqueID;
		TIntervalTreeIndex Index;
		TIntervalTreeIndex Next;
		EAsyncIOPriority Priority;
		EInRequestStatus Status;

		FPakInRequest()
			: OffsetAndPakIndex(0)
			, Size(0)
			, Owner(nullptr)
			, UniqueID(0)
			, Index(IntervalTreeInvalidIndex)
			, Next(IntervalTreeInvalidIndex)
			, Priority(AIOP_MIN)
			, Status(EInRequestStatus::Waiting)
		{
		}
	};

	struct FPakData
	{
		IAsyncReadFileHandle* Handle;
		int64 TotalSize;
		uint64 MaxNode;
		uint32 StartShift;
		uint32 MaxShift;
		uint32 BytesToBitsShift;
		FName Name;

		TIntervalTreeIndex InRequests[AIOP_NUM][(int32)EInRequestStatus::Num];
		TIntervalTreeIndex CacheBlocks[(int32)EBlockStatus::Num];

		TArray<TPakChunkHash> ChunkHashes;

		FPakData(IAsyncReadFileHandle* InHandle, FName InName, int64 InTotalSize)
			: Handle(InHandle)
			, TotalSize(InTotalSize)
			, StartShift(0)
			, MaxShift(0)
			, BytesToBitsShift(0)
			, Name(InName)
		{
			check(Handle && TotalSize > 0 && Name != NAME_None);
			for (int32 Index = 0; Index < AIOP_NUM; Index++)
			{
				for (int32 IndexInner = 0; IndexInner < (int32)EInRequestStatus::Num; IndexInner++)
				{
					InRequests[Index][IndexInner] = IntervalTreeInvalidIndex;
				}
			}
			for (int32 IndexInner = 0; IndexInner < (int32)EBlockStatus::Num; IndexInner++)
			{
				CacheBlocks[IndexInner] = IntervalTreeInvalidIndex;
			}
			uint64 StartingLastByte = FMath::Max((uint64)TotalSize, uint64(PAK_CACHE_GRANULARITY + 1));
			StartingLastByte--;

			{
				uint64 LastByte = StartingLastByte;
				while (!HighBit(LastByte))
				{
					LastByte <<= 1;
					StartShift++;
				}
			}
			{
				uint64 LastByte = StartingLastByte;
				uint64 Block = (uint64)PAK_CACHE_GRANULARITY;

				while (Block)
				{
					Block >>= 1;
					LastByte >>= 1;
					BytesToBitsShift++;
				}
				BytesToBitsShift--;
				check(1 << BytesToBitsShift == PAK_CACHE_GRANULARITY);
				MaxShift = StartShift;
				while (LastByte)
				{
					LastByte >>= 1;
					MaxShift++;
				}
				MaxNode = MAX_uint64 >> StartShift;
				check(MaxNode >= StartingLastByte && (MaxNode >> 1) < StartingLastByte);
				// UE_LOG(LogTemp, Warning, TEXT("Test %d %llX %llX "), MaxShift, (uint64(PAK_CACHE_GRANULARITY) << (MaxShift + 1)), (uint64(PAK_CACHE_GRANULARITY) << MaxShift));
				check(MaxShift && (uint64(PAK_CACHE_GRANULARITY) << (MaxShift + 1)) == 0 && (uint64(PAK_CACHE_GRANULARITY) << MaxShift) != 0);
			}
		}
	};
	TMap<FName, uint16> CachedPaks;
	TArray<FPakData> CachedPakData;

	TIntervalTreeAllocator<FPakInRequest> InRequestAllocator;
	TIntervalTreeAllocator<FCacheBlock> CacheBlockAllocator;
	TMap<uint64, TIntervalTreeIndex> OutstandingRequests;

	TArray<FJoinedOffsetAndPakIndex> OffsetAndPakIndexOfSavedBlocked;

	struct FRequestToLower
	{
		IAsyncReadRequest* RequestHandle;
		TIntervalTreeIndex BlockIndex;
		int64 RequestSize;
		uint8* Memory;
		FRequestToLower()
			: RequestHandle(nullptr)
			, BlockIndex(IntervalTreeInvalidIndex)
			, RequestSize(0)
			, Memory(nullptr)
		{
		}
	};

	FRequestToLower RequestsToLower[PAK_CACHE_MAX_REQUESTS];
	TArray<IAsyncReadRequest*> RequestsToDelete;
	int32 NotifyRecursion;

	uint32 Loads;
	uint32 Frees;
	uint64 LoadSize;
	FEncryptionKey EncryptionKey;
	bool bSigned;

public:

	static void Init(IPlatformFile* InLowerLevel, const FEncryptionKey& InEncryptionKey)
	{
		if (!PakPrecacherSingleton)
		{
			verify(!FPlatformAtomics::InterlockedCompareExchangePointer((void**)&PakPrecacherSingleton, new FPakPrecacher(InLowerLevel, InEncryptionKey), nullptr));
		}
		check(PakPrecacherSingleton);
	}

	static void Shutdown()
	{
		if (PakPrecacherSingleton)
		{
			FPakPrecacher* LocalPakPrecacherSingleton = PakPrecacherSingleton;
			if (LocalPakPrecacherSingleton && LocalPakPrecacherSingleton == FPlatformAtomics::InterlockedCompareExchangePointer((void**)&PakPrecacherSingleton, nullptr, LocalPakPrecacherSingleton))
			{
				LocalPakPrecacherSingleton->TrimCache(true);
				double StartTime = FPlatformTime::Seconds();
				while (!LocalPakPrecacherSingleton->IsProbablyIdle())
				{
					FPlatformProcess::SleepNoStats(0.001f);
					if (FPlatformTime::Seconds() - StartTime > 10.0)
					{
						UE_LOG(LogPakFile, Error, TEXT("FPakPrecacher was not idle after 10s, exiting anyway and leaking."));
						return;
					}
				}
				// PakPrecacherSingleton was exchanged to nullptr above, so delete the local copy
				delete LocalPakPrecacherSingleton;
			}
		}
		check(!PakPrecacherSingleton);
	}

	static FPakPrecacher& Get()
	{
		check(PakPrecacherSingleton);
		return *PakPrecacherSingleton;
	}

	FPakPrecacher(IPlatformFile* InLowerLevel, const FEncryptionKey& InEncryptionKey)
		: LowerLevel(InLowerLevel)
		, LastReadRequest(0)
		, NextUniqueID(1)
		, BlockMemory(0)
		, BlockMemoryHighWater(0)
		, NotifyRecursion(0)
		, Loads(0)
		, Frees(0)
		, LoadSize(0)
		, EncryptionKey(InEncryptionKey)
		, bSigned(!InEncryptionKey.Exponent.IsZero() && !InEncryptionKey.Modulus.IsZero())
	{
		check(LowerLevel && FPlatformProcess::SupportsMultithreading());
		GPakCache_MaxRequestsToLowerLevel = FMath::Max(FMath::Min(FPlatformMisc::NumberOfIOWorkerThreadsToSpawn(), GPakCache_MaxRequestsToLowerLevel), 1);
		check(GPakCache_MaxRequestsToLowerLevel <= PAK_CACHE_MAX_REQUESTS);
	}
	void StartSignatureCheck(bool bWasCanceled, IAsyncReadRequest* Request, int32 IndexToFill);
	void DoSignatureCheck(bool bWasCanceled, IAsyncReadRequest* Request, int32 IndexToFill);

	IPlatformFile* GetLowerLevelHandle()
	{
		check(LowerLevel);
		return LowerLevel;
	}

	bool HasEnoughRoomForPrecache()
	{
		return GPakCache_AcceptPrecacheRequests;
	}

	uint16* RegisterPakFile(FName File, int64 PakFileSize)
	{
		uint16* PakIndexPtr = CachedPaks.Find(File);
		if (!PakIndexPtr)
		{
			check(CachedPakData.Num() < MAX_uint16);
			IAsyncReadFileHandle* Handle = LowerLevel->OpenAsyncRead(*File.ToString());
			if (!Handle)
			{
				return nullptr;
			}
			CachedPakData.Add(FPakData(Handle, File, PakFileSize));
			PakIndexPtr = &CachedPaks.Add(File, CachedPakData.Num() - 1);
			UE_LOG(LogPakFile, Log, TEXT("New pak file %s added to pak precacher."), *File.ToString());

			FPakData& Pak = CachedPakData[*PakIndexPtr];

			if (bSigned)
			{
				// Load signature data
				FString SignaturesFilename = FPaths::ChangeExtension(File.ToString(), TEXT("sig"));
				IFileHandle* SignaturesFile = LowerLevel->OpenRead(*SignaturesFilename);
				ensure(SignaturesFile);

				FArchiveFileReaderGeneric* Reader = new FArchiveFileReaderGeneric(SignaturesFile, *SignaturesFilename, SignaturesFile->Size());
				FEncryptedSignature MasterSignature;
				*Reader << MasterSignature;
				*Reader << Pak.ChunkHashes;
				delete Reader;

				// Check that we have the correct match between signature and pre-cache granularity
				int64 NumPakChunks = Align(PakFileSize, FPakInfo::MaxChunkDataSize) / FPakInfo::MaxChunkDataSize;
				ensure(NumPakChunks == Pak.ChunkHashes.Num());

				// Decrypt signature hash
				FDecryptedSignature DecryptedSignature;
				FEncryption::DecryptSignature(MasterSignature, DecryptedSignature, EncryptionKey);

				// Check the signatures are still as we expected them
				TPakChunkHash Hash = ComputePakChunkHash(&Pak.ChunkHashes[0], Pak.ChunkHashes.Num() * sizeof(TPakChunkHash));
				ensure(Hash == DecryptedSignature.Data);
			}
		}
		return PakIndexPtr;
	}
private: // below here we assume CachedFilesScopeLock until we get to the next section

	uint16 GetRequestPakIndex(FJoinedOffsetAndPakIndex OffsetAndPakIndex)
	{
		uint16 Result = GetRequestPakIndexLow(OffsetAndPakIndex);
		check(Result < CachedPakData.Num());
		return Result;
	}

	FJoinedOffsetAndPakIndex FirstUnfilledBlockForRequest(TIntervalTreeIndex NewIndex, FJoinedOffsetAndPakIndex ReadHead = 0)
	{
		// CachedFilesScopeLock is locked
		FPakInRequest& Request = InRequestAllocator.Get(NewIndex);
		uint16 PakIndex = GetRequestPakIndex(Request.OffsetAndPakIndex);
		int64 Offset = GetRequestOffset(Request.OffsetAndPakIndex);
		int64 Size = Request.Size;
		FPakData& Pak = CachedPakData[PakIndex];
		check(Offset + Request.Size <= Pak.TotalSize && Size > 0 && Request.Priority >= AIOP_MIN && Request.Priority <= AIOP_MAX && Request.Status != EInRequestStatus::Complete && Request.Owner);
		if (PakIndex != GetRequestPakIndex(ReadHead))
		{
			// this is in a different pak, so we ignore the read head position
			ReadHead = 0;
		}
		if (ReadHead)
		{
			// trim to the right of the read head
			int64 Trim = FMath::Max(Offset, GetRequestOffset(ReadHead)) - Offset;
			Offset += Trim;
			Size -= Trim;
		}

		static TArray<uint64> InFlightOrDone;

		int64 FirstByte = AlignDown(Offset, PAK_CACHE_GRANULARITY);
		int64 LastByte = Align(Offset + Size, PAK_CACHE_GRANULARITY) - 1;
		uint32 NumBits = (PAK_CACHE_GRANULARITY + LastByte - FirstByte) / PAK_CACHE_GRANULARITY;
		uint32 NumQWords = (NumBits + 63) >> 6;
		InFlightOrDone.Reset();
		InFlightOrDone.AddZeroed(NumQWords);
		if (NumBits != NumQWords * 64)
		{
			uint32 Extras = NumQWords * 64 - NumBits;
			InFlightOrDone[NumQWords - 1] = (MAX_uint64 << (64 - Extras));
		}

		if (Pak.CacheBlocks[(int32)EBlockStatus::Complete] != IntervalTreeInvalidIndex)
		{
			OverlappingNodesInIntervalTreeMask<FCacheBlock>(
				Pak.CacheBlocks[(int32)EBlockStatus::Complete],
				CacheBlockAllocator,
				FirstByte,
				LastByte,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				Pak.BytesToBitsShift,
				&InFlightOrDone[0]
				);
		}
		if (Request.Status == EInRequestStatus::Waiting && Pak.CacheBlocks[(int32)EBlockStatus::InFlight] != IntervalTreeInvalidIndex)
		{
			OverlappingNodesInIntervalTreeMask<FCacheBlock>(
				Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
				CacheBlockAllocator,
				FirstByte,
				LastByte,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				Pak.BytesToBitsShift,
				&InFlightOrDone[0]
				);
		}
		for (uint32 Index = 0; Index < NumQWords; Index++)
		{
			if (InFlightOrDone[Index] != MAX_uint64)
			{
				uint64 Mask = InFlightOrDone[Index];
				int64 FinalOffset = FirstByte + PAK_CACHE_GRANULARITY * 64 * Index;
				while (Mask & 1)
				{
					FinalOffset += PAK_CACHE_GRANULARITY;
					Mask >>= 1;
				}
				return MakeJoinedRequest(PakIndex, FinalOffset);
			}
		}
		return MAX_uint64;
	}
bool AddRequest(TIntervalTreeIndex NewIndex)
{
	// CachedFilesScopeLock is locked
	FPakInRequest& Request = InRequestAllocator.Get(NewIndex);
	uint16 PakIndex = GetRequestPakIndex(Request.OffsetAndPakIndex);
	int64 Offset = GetRequestOffset(Request.OffsetAndPakIndex);
	FPakData& Pak = CachedPakData[PakIndex];
	check(Offset + Request.Size <= Pak.TotalSize && Request.Size > 0 && Request.Priority >= AIOP_MIN && Request.Priority <= AIOP_MAX && Request.Status == EInRequestStatus::Waiting && Request.Owner);

	static TArray<uint64> InFlightOrDone;

	int64 FirstByte = AlignDown(Offset, PAK_CACHE_GRANULARITY);
	int64 LastByte = Align(Offset + Request.Size, PAK_CACHE_GRANULARITY) - 1;
	uint32 NumBits = (PAK_CACHE_GRANULARITY + LastByte - FirstByte) / PAK_CACHE_GRANULARITY;
	uint32 NumQWords = (NumBits + 63) >> 6;
	InFlightOrDone.Reset();
	InFlightOrDone.AddZeroed(NumQWords);
	if (NumBits != NumQWords * 64)
	{
		uint32 Extras = NumQWords * 64 - NumBits;
		InFlightOrDone[NumQWords - 1] = (MAX_uint64 << (64 - Extras));
	}

	if (Pak.CacheBlocks[(int32)EBlockStatus::Complete] != IntervalTreeInvalidIndex)
	{
		Request.Status = EInRequestStatus::Complete;
		OverlappingNodesInIntervalTree<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::Complete],
			CacheBlockAllocator,
			FirstByte,
			LastByte,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[this, &Pak, FirstByte, LastByte](TIntervalTreeIndex Index) -> bool
			{
				CacheBlockAllocator.Get(Index).InRequestRefCount++;
				MaskInterval(Index, CacheBlockAllocator, FirstByte, LastByte, Pak.BytesToBitsShift, &InFlightOrDone[0]);
				return true;
			}
		);
		for (uint32 Index = 0; Index < NumQWords; Index++)
		{
			if (InFlightOrDone[Index] != MAX_uint64)
			{
				Request.Status = EInRequestStatus::Waiting;
				break;
			}
		}
	}

	if (Request.Status == EInRequestStatus::Waiting)
	{
		if (Pak.CacheBlocks[(int32)EBlockStatus::InFlight] != IntervalTreeInvalidIndex)
		{
			Request.Status = EInRequestStatus::InFlight;
			OverlappingNodesInIntervalTree<FCacheBlock>(
				Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
				CacheBlockAllocator,
				FirstByte,
				LastByte,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[this, &Pak, FirstByte, LastByte](TIntervalTreeIndex Index) -> bool
				{
					CacheBlockAllocator.Get(Index).InRequestRefCount++;
					MaskInterval(Index, CacheBlockAllocator, FirstByte, LastByte, Pak.BytesToBitsShift, &InFlightOrDone[0]);
					return true;
				}
			);

			for (uint32 Index = 0; Index < NumQWords; Index++)
			{
				if (InFlightOrDone[Index] != MAX_uint64)
				{
					Request.Status = EInRequestStatus::Waiting;
					break;
				}
			}
		}
	}
	else
	{
#if PAK_EXTRA_CHECKS
		OverlappingNodesInIntervalTree<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
			CacheBlockAllocator,
			FirstByte,
			LastByte,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[this, &Pak, FirstByte, LastByte](TIntervalTreeIndex Index) -> bool
			{
				check(0); // if we are complete, then how come there are overlapping in flight blocks?
				return true;
			}
		);
#endif
	}
	{
		AddToIntervalTree<FPakInRequest>(
			&Pak.InRequests[Request.Priority][(int32)Request.Status],
			InRequestAllocator,
			NewIndex,
			Pak.StartShift,
			Pak.MaxShift
		);
	}
	check(&Request == &InRequestAllocator.Get(NewIndex));
	if (Request.Status == EInRequestStatus::Complete)
	{
		NotifyComplete(NewIndex);
		return true;
	}
	else if (Request.Status == EInRequestStatus::Waiting)
	{
		StartNextRequest();
	}
	return false;
}

void ClearBlock(FCacheBlock &Block)
{
	UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) ClearBlock"), Block.OffsetAndPakIndex, Block.OffsetAndPakIndex + Block.Size);

	if (Block.Memory)
	{
		check(Block.Size);
		BlockMemory -= Block.Size;
		DEC_MEMORY_STAT_BY(STAT_PakCacheMem, Block.Size);
		check(BlockMemory >= 0);

		FMemory::Free(Block.Memory);
		Block.Memory = nullptr;
	}
	Block.Next = IntervalTreeInvalidIndex;
	CacheBlockAllocator.Free(Block.Index);
}

void ClearRequest(FPakInRequest& DoneRequest)
{
	uint64 Id = DoneRequest.UniqueID;
	TIntervalTreeIndex Index = DoneRequest.Index;

	DoneRequest.OffsetAndPakIndex = 0;
	DoneRequest.Size = 0;
	DoneRequest.Owner = nullptr;
	DoneRequest.UniqueID = 0;
	DoneRequest.Index = IntervalTreeInvalidIndex;
	DoneRequest.Next = IntervalTreeInvalidIndex;
	DoneRequest.Priority = AIOP_MIN;
	DoneRequest.Status = EInRequestStatus::Num;

	verify(OutstandingRequests.Remove(Id) == 1);
	InRequestAllocator.Free(Index);
}

void TrimCache(bool bDiscardAll = false)
{
	// CachedFilesScopeLock is locked
	int32 NumToKeep = bDiscardAll ? 0 : GPakCache_NumUnreferencedBlocksToCache;
	int32 NumToRemove = FMath::Max<int32>(0, OffsetAndPakIndexOfSavedBlocked.Num() - NumToKeep);
	if (NumToRemove)
	{
		for (int32 Index = 0; Index < NumToRemove; Index++)
		{
			FJoinedOffsetAndPakIndex OffsetAndPakIndex = OffsetAndPakIndexOfSavedBlocked[Index];
			uint16 PakIndex = GetRequestPakIndex(OffsetAndPakIndex);
			int64 Offset = GetRequestOffset(OffsetAndPakIndex);
			FPakData& Pak = CachedPakData[PakIndex];
			MaybeRemoveOverlappingNodesInIntervalTree<FCacheBlock>(
				&Pak.CacheBlocks[(int32)EBlockStatus::Complete],
				CacheBlockAllocator,
				Offset,
				Offset,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[this](TIntervalTreeIndex BlockIndex) -> bool
				{
					FCacheBlock &Block = CacheBlockAllocator.Get(BlockIndex);
					if (!Block.InRequestRefCount)
					{
						UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) Discard Cached"), Block.OffsetAndPakIndex, Block.OffsetAndPakIndex + Block.Size);
						ClearBlock(Block);
						return true;
					}
					return false;
				}
			);
		}
		OffsetAndPakIndexOfSavedBlocked.RemoveAt(0, NumToRemove, false);
	}
}

void RemoveRequest(TIntervalTreeIndex Index)
{
	// CachedFilesScopeLock is locked
	FPakInRequest& Request = InRequestAllocator.Get(Index);
	uint16 PakIndex = GetRequestPakIndex(Request.OffsetAndPakIndex);
	int64 Offset = GetRequestOffset(Request.OffsetAndPakIndex);
	int64 Size = Request.Size;
	FPakData& Pak = CachedPakData[PakIndex];
	check(Offset + Request.Size <= Pak.TotalSize && Request.Size > 0 && Request.Priority >= AIOP_MIN && Request.Priority <= AIOP_MAX && int32(Request.Status) >= 0 && int32(Request.Status) < int32(EInRequestStatus::Num));

	if (RemoveFromIntervalTree<FPakInRequest>(&Pak.InRequests[Request.Priority][(int32)Request.Status], InRequestAllocator, Index, Pak.StartShift, Pak.MaxShift))
	{
		int64 OffsetOfLastByte = Offset + Size - 1;
		MaybeRemoveOverlappingNodesInIntervalTree<FCacheBlock>(
			&Pak.CacheBlocks[(int32)EBlockStatus::Complete],
			CacheBlockAllocator,
			Offset,
			OffsetOfLastByte,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[this, OffsetOfLastByte](TIntervalTreeIndex BlockIndex) -> bool
			{
				FCacheBlock &Block = CacheBlockAllocator.Get(BlockIndex);
				check(Block.InRequestRefCount);
				if (!--Block.InRequestRefCount)
				{
					if (GPakCache_NumUnreferencedBlocksToCache && GetRequestOffset(Block.OffsetAndPakIndex) + Block.Size > OffsetOfLastByte) // last block
					{
						OffsetAndPakIndexOfSavedBlocked.Remove(Block.OffsetAndPakIndex);
						OffsetAndPakIndexOfSavedBlocked.Add(Block.OffsetAndPakIndex);
						return false;
					}
					ClearBlock(Block);
					return true;
				}
				return false;
			}
		);
		TrimCache();
		OverlappingNodesInIntervalTree<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
			CacheBlockAllocator,
			Offset,
			Offset + Size - 1,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[this](TIntervalTreeIndex BlockIndex) -> bool
			{
				FCacheBlock &Block = CacheBlockAllocator.Get(BlockIndex);
				check(Block.InRequestRefCount);
				Block.InRequestRefCount--;
				return true;
			}
		);
	}
	else
	{
		check(0); // not found
	}
	ClearRequest(Request);
}

void NotifyComplete(TIntervalTreeIndex RequestIndex)
{
	// CachedFilesScopeLock is locked
	FPakInRequest& Request = InRequestAllocator.Get(RequestIndex);

	uint16 PakIndex = GetRequestPakIndex(Request.OffsetAndPakIndex);
	int64 Offset = GetRequestOffset(Request.OffsetAndPakIndex);
	FPakData& Pak = CachedPakData[PakIndex];
	check(Offset + Request.Size <= Pak.TotalSize && Request.Size > 0 && Request.Priority >= AIOP_MIN && Request.Priority <= AIOP_MAX && Request.Status == EInRequestStatus::Complete);

	check(Request.Owner && Request.UniqueID);

	if (Request.Status == EInRequestStatus::Complete && Request.UniqueID == Request.Owner->UniqueID && RequestIndex == Request.Owner->InRequestIndex && Request.OffsetAndPakIndex == Request.Owner->OffsetAndPakIndex)
	{
		UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) Notify complete"), Request.OffsetAndPakIndex, Request.OffsetAndPakIndex + Request.Size);
		Request.Owner->RequestIsComplete();
		return;
	}
	else
	{
		check(0); // request should have been found
	}
}

FJoinedOffsetAndPakIndex GetNextBlock(EAsyncIOPriority& OutPriority)
{
	bool bAcceptingPrecacheRequests = HasEnoughRoomForPrecache();

	// CachedFilesScopeLock is locked
	uint16 BestPakIndex = 0;
	FJoinedOffsetAndPakIndex BestNext = MAX_uint64;

	OutPriority = AIOP_MIN;
	bool bAnyOutstanding = false;
	for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
	{
		if (Priority == AIOP_Precache && !bAcceptingPrecacheRequests && bAnyOutstanding)
		{
			break;
		}
		for (int32 Pass = 0; ; Pass++)
		{
			FJoinedOffsetAndPakIndex LocalLastReadRequest = Pass ? 0 : LastReadRequest;

			uint16 PakIndex = GetRequestPakIndex(LocalLastReadRequest);
			int64 Offset = GetRequestOffset(LocalLastReadRequest);
			check(Offset <= CachedPakData[PakIndex].TotalSize);

			for (; BestNext == MAX_uint64 && PakIndex < CachedPakData.Num(); PakIndex++)
			{
				FPakData& Pak = CachedPakData[PakIndex];
				if (Pak.InRequests[Priority][(int32)EInRequestStatus::Complete] != IntervalTreeInvalidIndex)
				{
					bAnyOutstanding = true;
				}
				if (Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting] != IntervalTreeInvalidIndex)
				{
					uint64 Limit = uint64(Pak.TotalSize - 1);
					if (BestNext != MAX_uint64 && GetRequestPakIndex(BestNext) == PakIndex)
					{
						Limit = GetRequestOffset(BestNext) - 1;
					}

					OverlappingNodesInIntervalTreeWithShrinkingInterval<FPakInRequest>(
						Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting],
						InRequestAllocator,
						uint64(Offset),
						Limit,
						0,
						Pak.MaxNode,
						Pak.StartShift,
						Pak.MaxShift,
						[this, &Pak, &BestNext, &BestPakIndex, PakIndex, &Limit, LocalLastReadRequest](TIntervalTreeIndex Index) -> bool
						{
							FJoinedOffsetAndPakIndex First = FirstUnfilledBlockForRequest(Index, LocalLastReadRequest);
							check(LocalLastReadRequest != 0 || First != MAX_uint64); // if there was no trimming, and this thing is in the waiting list, then why was no start block found?
							if (First < BestNext)
							{
								BestNext = First;
								BestPakIndex = PakIndex;
								Limit = GetRequestOffset(BestNext) - 1;
							}
							return true; // always have to keep going because we want the smallest one
						}
					);
				}
			}
			if (!LocalLastReadRequest)
			{
				break; // this was a full pass
			}
		}

		if (Priority == AIOP_MIN || BestNext != MAX_uint64)
		{
			OutPriority = Priority;
			break;
		}
	}
	return BestNext;
}

bool AddNewBlock()
{
	// CachedFilesScopeLock is locked
	EAsyncIOPriority RequestPriority;
	FJoinedOffsetAndPakIndex BestNext = GetNextBlock(RequestPriority);
	if (BestNext == MAX_uint64)
	{
		return false;
	}
	uint16 PakIndex = GetRequestPakIndex(BestNext);
	int64 Offset = GetRequestOffset(BestNext);
	FPakData& Pak = CachedPakData[PakIndex];
	check(Offset < Pak.TotalSize);
	int64 FirstByte = AlignDown(Offset, PAK_CACHE_GRANULARITY);
	int64 LastByte = FMath::Min(Align(FirstByte + (GPakCache_MaxRequestSizeToLowerLevelKB * 1024), PAK_CACHE_GRANULARITY) - 1, Pak.TotalSize - 1);
	check(FirstByte >= 0 && LastByte < Pak.TotalSize && LastByte >= 0 && LastByte >= FirstByte);

	uint32 NumBits = (PAK_CACHE_GRANULARITY + LastByte - FirstByte) / PAK_CACHE_GRANULARITY;
	uint32 NumQWords = (NumBits + 63) >> 6;

	static TArray<uint64> InFlightOrDone;
	InFlightOrDone.Reset();
	InFlightOrDone.AddZeroed(NumQWords);
	if (NumBits != NumQWords * 64)
	{
		uint32 Extras = NumQWords * 64 - NumBits;
		InFlightOrDone[NumQWords - 1] = (MAX_uint64 << (64 - Extras));
	}

	if (Pak.CacheBlocks[(int32)EBlockStatus::Complete] != IntervalTreeInvalidIndex)
	{
		OverlappingNodesInIntervalTreeMask<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::Complete],
			CacheBlockAllocator,
			FirstByte,
			LastByte,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			Pak.BytesToBitsShift,
			&InFlightOrDone[0]
		);
	}
	if (Pak.CacheBlocks[(int32)EBlockStatus::InFlight] != IntervalTreeInvalidIndex)
	{
		OverlappingNodesInIntervalTreeMask<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
			CacheBlockAllocator,
			FirstByte,
			LastByte,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			Pak.BytesToBitsShift,
			&InFlightOrDone[0]
		);
	}

	static TArray<uint64> Requested;
	Requested.Reset();
	Requested.AddZeroed(NumQWords);
	for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
	{
		if (Priority + PAK_CACHE_MAX_PRIORITY_DIFFERENCE_MERGE < RequestPriority)
		{
			break;
		}
		if (Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting] != IntervalTreeInvalidIndex)
		{
			OverlappingNodesInIntervalTreeMask<FPakInRequest>(
				Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting],
				InRequestAllocator,
				FirstByte,
				LastByte,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				Pak.BytesToBitsShift,
				&Requested[0]
			);
		}
		if (Priority == AIOP_MIN)
		{
			break;
		}
	}

	int64 Size = PAK_CACHE_GRANULARITY * 64 * NumQWords;
	for (uint32 Index = 0; Index < NumQWords; Index++)
	{
		uint64 NotAlreadyInFlightAndRequested = ((~InFlightOrDone[Index]) & Requested[Index]);
		if (NotAlreadyInFlightAndRequested != MAX_uint64)
		{
			Size = PAK_CACHE_GRANULARITY * 64 * Index;
			while (NotAlreadyInFlightAndRequested & 1)
			{
				Size += PAK_CACHE_GRANULARITY;
				NotAlreadyInFlightAndRequested >>= 1;
			}
			break;
		}
	}
	check(Size > 0 && Size <= (GPakCache_MaxRequestSizeToLowerLevelKB * 1024));
	Size = FMath::Min(FirstByte + Size, LastByte + 1) - FirstByte;

	TIntervalTreeIndex NewIndex = CacheBlockAllocator.Alloc();

	FCacheBlock& Block = CacheBlockAllocator.Get(NewIndex);
	Block.Index = NewIndex;
	Block.InRequestRefCount = 0;
	Block.Memory = nullptr;
	Block.OffsetAndPakIndex = MakeJoinedRequest(PakIndex, FirstByte);
	Block.Size = Size;
	Block.Status = EBlockStatus::InFlight;

	AddToIntervalTree<FCacheBlock>(
		&Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
		CacheBlockAllocator,
		NewIndex,
		Pak.StartShift,
		Pak.MaxShift
	);

	TArray<TIntervalTreeIndex> Inflights;

	for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
	{
		if (Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting] != IntervalTreeInvalidIndex)
		{
			MaybeRemoveOverlappingNodesInIntervalTree<FPakInRequest>(
				&Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting],
				InRequestAllocator,
				uint64(FirstByte),
				uint64(FirstByte + Size - 1),
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[this, &Block, &Inflights](TIntervalTreeIndex RequestIndex) -> bool
				{
					Block.InRequestRefCount++;
					if (FirstUnfilledBlockForRequest(RequestIndex) == MAX_uint64)
					{
						InRequestAllocator.Get(RequestIndex).Next = IntervalTreeInvalidIndex;
						Inflights.Add(RequestIndex);
						return true;
					}
					return false;
				}
			);
		}
#if PAK_EXTRA_CHECKS
		OverlappingNodesInIntervalTree<FPakInRequest>(
			Pak.InRequests[Priority][(int32)EInRequestStatus::InFlight],
			InRequestAllocator,
			uint64(FirstByte),
			uint64(FirstByte + Size - 1),
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[](TIntervalTreeIndex) -> bool
			{
				check(0); // if this is in flight, then why does it overlap my new block
				return false;
			}
		);
		OverlappingNodesInIntervalTree<FPakInRequest>(
			Pak.InRequests[Priority][(int32)EInRequestStatus::Complete],
			InRequestAllocator,
			uint64(FirstByte),
			uint64(FirstByte + Size - 1),
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[](TIntervalTreeIndex) -> bool
			{
				check(0); // if this is complete, then why does it overlap my new block
				return false;
			}
		);
#endif
		if (Priority == AIOP_MIN)
		{
			break;
		}
	}
	for (TIntervalTreeIndex Fli : Inflights)
	{
		FPakInRequest& CompReq = InRequestAllocator.Get(Fli);
		CompReq.Status = EInRequestStatus::InFlight;
		AddToIntervalTree(&Pak.InRequests[CompReq.Priority][(int32)EInRequestStatus::InFlight], InRequestAllocator, Fli, Pak.StartShift, Pak.MaxShift);
	}

	StartBlockTask(Block);
	return true;
}

int32 OpenTaskSlot()
{
	int32 IndexToFill = -1;
	for (int32 Index = 0; Index < GPakCache_MaxRequestsToLowerLevel; Index++)
	{
		if (!RequestsToLower[Index].RequestHandle)
		{
			IndexToFill = Index;
			break;
		}
	}
	return IndexToFill;
}

bool HasRequestsAtStatus(EInRequestStatus Status)
{
	for (uint16 PakIndex = 0; PakIndex < CachedPakData.Num(); PakIndex++)
	{
		FPakData& Pak = CachedPakData[PakIndex];
		for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
		{
			if (Pak.InRequests[Priority][(int32)Status] != IntervalTreeInvalidIndex)
			{
				return true;
			}
			if (Priority == AIOP_MIN)
			{
				break;
			}
		}
	}
	return false;
}

bool CanStartAnotherTask()
{
	if (OpenTaskSlot() < 0)
	{
		return false;
	}
	return HasRequestsAtStatus(EInRequestStatus::Waiting);
}
void ClearOldBlockTasks()
{
	if (!NotifyRecursion)
	{
		for (IAsyncReadRequest* Elem : RequestsToDelete)
		{
			Elem->WaitCompletion();
			delete Elem;
		}
		RequestsToDelete.Empty();
	}
}
void StartBlockTask(FCacheBlock& Block)
{
	// CachedFilesScopeLock is locked

#define CHECK_REDUNDANT_READS (0)
#if CHECK_REDUNDANT_READS
	static struct FRedundantReadTracker
	{
		TMap<int64, double> LastReadTime;
		int32 NumRedundant;
		FRedundantReadTracker()
			: NumRedundant(0)
		{
		}

		void CheckBlock(int64 Offset, int64 Size)
		{
			double NowTime = FPlatformTime::Seconds();
			int64 StartBlock = Offset / PAK_CACHE_GRANULARITY;
			int64 LastBlock = (Offset + Size - 1) / PAK_CACHE_GRANULARITY;
			for (int64 CurBlock = StartBlock; CurBlock <= LastBlock; CurBlock++)
			{
				double LastTime = LastReadTime.FindRef(CurBlock);
				if (LastTime > 0.0 && NowTime - LastTime < 3.0)
				{
					NumRedundant++;
					FPlatformMisc::LowLevelOutputDebugStringf(TEXT("Redundant read at block %d, %6.1fms ago (%d total redundant blocks)\r\n"), int32(CurBlock), 1000.0f * float(NowTime - LastTime), NumRedundant);
				}
				LastReadTime.Add(CurBlock, NowTime);
			}
		}
	} RedundantReadTracker;
#else
	static struct FRedundantReadTracker
	{
		FORCEINLINE void CheckBlock(int64 Offset, int64 Size)
		{
		}
	} RedundantReadTracker;
#endif

	int32 IndexToFill = OpenTaskSlot();
	if (IndexToFill < 0)
	{
		check(0);
		return;
	}
	EAsyncIOPriority Priority = AIOP_Normal; // the lower level requests are not prioritized at the moment
	check(Block.Status == EBlockStatus::InFlight);
	UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) StartBlockTask"), Block.OffsetAndPakIndex, Block.OffsetAndPakIndex + Block.Size);
	uint16 PakIndex = GetRequestPakIndex(Block.OffsetAndPakIndex);
	FPakData& Pak = CachedPakData[PakIndex];
	RequestsToLower[IndexToFill].BlockIndex = Block.Index;
	RequestsToLower[IndexToFill].RequestSize = Block.Size;
	RequestsToLower[IndexToFill].Memory = nullptr;
	check(&CacheBlockAllocator.Get(RequestsToLower[IndexToFill].BlockIndex) == &Block);

	FAsyncFileCallBack CallbackFromLower =
		[this, IndexToFill](bool bWasCanceled, IAsyncReadRequest* Request)
		{
			if (bSigned)
			{
				StartSignatureCheck(bWasCanceled, Request, IndexToFill);
			}
			else
			{
				NewRequestsToLowerComplete(bWasCanceled, Request, IndexToFill);
			}
		};

	RequestsToLower[IndexToFill].RequestHandle = Pak.Handle->ReadRequest(GetRequestOffset(Block.OffsetAndPakIndex), Block.Size, Priority, &CallbackFromLower);
	RedundantReadTracker.CheckBlock(GetRequestOffset(Block.OffsetAndPakIndex), Block.Size);
	LastReadRequest = Block.OffsetAndPakIndex + Block.Size;
	Loads++;
	LoadSize += Block.Size;
}

void CompleteRequest(bool bWasCanceled, uint8* Memory, TIntervalTreeIndex BlockIndex)
{
	FCacheBlock& Block = CacheBlockAllocator.Get(BlockIndex);
	uint16 PakIndex = GetRequestPakIndex(Block.OffsetAndPakIndex);
	int64 Offset = GetRequestOffset(Block.OffsetAndPakIndex);
	FPakData& Pak = CachedPakData[PakIndex];
	check(!Block.Memory && Block.Size);
	check(!bWasCanceled); // this is doable, but we need to transition requests back to waiting, inflight etc.

	if (!RemoveFromIntervalTree<FCacheBlock>(&Pak.CacheBlocks[(int32)EBlockStatus::InFlight], CacheBlockAllocator, Block.Index, Pak.StartShift, Pak.MaxShift))
	{
		check(0);
	}

	if (Block.InRequestRefCount == 0 || bWasCanceled)
	{
		FMemory::Free(Memory);
		UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) Cancelled"), Block.OffsetAndPakIndex, Block.OffsetAndPakIndex + Block.Size);
		ClearBlock(Block);
	}
	else
	{
		Block.Memory = Memory;
		check(Block.Memory && Block.Size);
		BlockMemory += Block.Size;
		check(BlockMemory > 0);
		DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.Size);
		INC_MEMORY_STAT_BY(STAT_PakCacheMem, Block.Size);

		if (BlockMemory > BlockMemoryHighWater)
		{
			BlockMemoryHighWater = BlockMemory;
			SET_MEMORY_STAT(STAT_PakCacheHighWater, BlockMemoryHighWater);

#if 0
			static int64 LastPrint = 0;
			if (BlockMemoryHighWater / 1024 / 1024 / 16 != LastPrint)
			{
				LastPrint = BlockMemoryHighWater / 1024 / 1024 / 16;
				//FPlatformMisc::LowLevelOutputDebugStringf(TEXT("Precache HighWater %dMB\r\n"), int32(LastPrint));
				UE_LOG(LogPakFile, Log, TEXT("Precache HighWater %dMB\r\n"), int32(LastPrint * 16));
			}
#endif
		}
		Block.Status = EBlockStatus::Complete;
		AddToIntervalTree<FCacheBlock>(
			&Pak.CacheBlocks[(int32)EBlockStatus::Complete],
			CacheBlockAllocator,
			Block.Index,
			Pak.StartShift,
			Pak.MaxShift
		);
		TArray<TIntervalTreeIndex> Completeds;
		for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
		{
			if (Pak.InRequests[Priority][(int32)EInRequestStatus::InFlight] != IntervalTreeInvalidIndex)
			{
				MaybeRemoveOverlappingNodesInIntervalTree<FPakInRequest>(
					&Pak.InRequests[Priority][(int32)EInRequestStatus::InFlight],
					InRequestAllocator,
					uint64(Offset),
					uint64(Offset + Block.Size - 1),
					0,
					Pak.MaxNode,
					Pak.StartShift,
					Pak.MaxShift,
					[this, &Completeds](TIntervalTreeIndex RequestIndex) -> bool
					{
						if (FirstUnfilledBlockForRequest(RequestIndex) == MAX_uint64)
						{
							InRequestAllocator.Get(RequestIndex).Next = IntervalTreeInvalidIndex;
							Completeds.Add(RequestIndex);
							return true;
						}
						return false;
					}
				);
			}
			if (Priority == AIOP_MIN)
			{
				break;
			}
		}
		for (TIntervalTreeIndex Comp : Completeds)
		{
			FPakInRequest& CompReq = InRequestAllocator.Get(Comp);
			CompReq.Status = EInRequestStatus::Complete;
			AddToIntervalTree(&Pak.InRequests[CompReq.Priority][(int32)EInRequestStatus::Complete], InRequestAllocator, Comp, Pak.StartShift, Pak.MaxShift);
			NotifyComplete(Comp); // potentially scary recursion here
		}
	}
}

bool StartNextRequest()
{
	if (CanStartAnotherTask())
	{
		return AddNewBlock();
	}
	return false;
}

bool GetCompletedRequestData(FPakInRequest& DoneRequest, uint8* Result)
{
	// CachedFilesScopeLock is locked
	check(DoneRequest.Status == EInRequestStatus::Complete);
	uint16 PakIndex = GetRequestPakIndex(DoneRequest.OffsetAndPakIndex);
	int64 Offset = GetRequestOffset(DoneRequest.OffsetAndPakIndex);
	int64 Size = DoneRequest.Size;

	FPakData& Pak = CachedPakData[PakIndex];
	check(Offset + DoneRequest.Size <= Pak.TotalSize && DoneRequest.Size > 0 && DoneRequest.Priority >= AIOP_MIN && DoneRequest.Priority <= AIOP_MAX && DoneRequest.Status == EInRequestStatus::Complete);

	int64 BytesCopied = 0;

#if 0 // this path removes the block in one pass; however, that is not what we want because it wrecks precaching. If we change back, GetCompletedRequest needs to maybe start a new request, and the logic of the IAsyncFile read needs to change.
	MaybeRemoveOverlappingNodesInIntervalTree<FCacheBlock>(
		&Pak.CacheBlocks[(int32)EBlockStatus::Complete],
		CacheBlockAllocator,
		Offset,
		Offset + Size - 1,
		0,
		Pak.MaxNode,
		Pak.StartShift,
		Pak.MaxShift,
		[this, Offset, Size, &BytesCopied, Result, &Pak](TIntervalTreeIndex BlockIndex) -> bool
		{
			FCacheBlock &Block = CacheBlockAllocator.Get(BlockIndex);
			int64 BlockOffset = GetRequestOffset(Block.OffsetAndPakIndex);
			check(Block.Memory && Block.Size && BlockOffset >= 0 && BlockOffset + Block.Size <= Pak.TotalSize);

			int64 OverlapStart = FMath::Max(Offset, BlockOffset);
			int64 OverlapEnd = FMath::Min(Offset + Size, BlockOffset + Block.Size);
			check(OverlapEnd > OverlapStart);
			BytesCopied += OverlapEnd - OverlapStart;
			FMemory::Memcpy(Result + OverlapStart - Offset, Block.Memory + OverlapStart - BlockOffset, OverlapEnd - OverlapStart);
			check(Block.InRequestRefCount);
			if (!--Block.InRequestRefCount)
			{
				ClearBlock(Block);
				return true;
			}
			return false;
		}
	);

	if (!RemoveFromIntervalTree<FPakInRequest>(&Pak.InRequests[DoneRequest.Priority][(int32)EInRequestStatus::Complete], InRequestAllocator, DoneRequest.Index, Pak.StartShift, Pak.MaxShift))
	{
		check(0); // not found
	}
	ClearRequest(DoneRequest);
#else
	OverlappingNodesInIntervalTree<FCacheBlock>(
		Pak.CacheBlocks[(int32)EBlockStatus::Complete],
		CacheBlockAllocator,
		Offset,
		Offset + Size - 1,
		0,
		Pak.MaxNode,
		Pak.StartShift,
		Pak.MaxShift,
		[this, Offset, Size, &BytesCopied, Result, &Pak](TIntervalTreeIndex BlockIndex) -> bool
		{
			FCacheBlock &Block = CacheBlockAllocator.Get(BlockIndex);
			int64 BlockOffset = GetRequestOffset(Block.OffsetAndPakIndex);
			check(Block.Memory && Block.Size && BlockOffset >= 0 && BlockOffset + Block.Size <= Pak.TotalSize);

			int64 OverlapStart = FMath::Max(Offset, BlockOffset);
			int64 OverlapEnd = FMath::Min(Offset + Size, BlockOffset + Block.Size);
			check(OverlapEnd > OverlapStart);
			BytesCopied += OverlapEnd - OverlapStart;
			FMemory::Memcpy(Result + OverlapStart - Offset, Block.Memory + OverlapStart - BlockOffset, OverlapEnd - OverlapStart);
			return true;
		}
	);
#endif
	check(BytesCopied == Size);

	return true;
}

///// Below here are the thread entrypoints

public:

void NewRequestsToLowerComplete(bool bWasCanceled, IAsyncReadRequest* Request, int32 Index)
{
	FScopeLock Lock(&CachedFilesScopeLock);
	RequestsToLower[Index].RequestHandle = Request;
	ClearOldBlockTasks();
	NotifyRecursion++;
	if (!RequestsToLower[Index].Memory) // might have already been filled in by the signature check
	{
		RequestsToLower[Index].Memory = Request->GetReadResults();
	}
	CompleteRequest(bWasCanceled, RequestsToLower[Index].Memory, RequestsToLower[Index].BlockIndex);
	RequestsToLower[Index].RequestHandle = nullptr;
	RequestsToDelete.Add(Request);
	RequestsToLower[Index].BlockIndex = IntervalTreeInvalidIndex;
	StartNextRequest();
	NotifyRecursion--;
}

bool QueueRequest(IPakRequestor* Owner, FName File, int64 PakFileSize, int64 Offset, int64 Size, EAsyncIOPriority Priority)
{
	check(Owner && File != NAME_None && Size > 0 && Offset >= 0 && Offset < PakFileSize && Priority >= AIOP_MIN && Priority <= AIOP_MAX);
	FScopeLock Lock(&CachedFilesScopeLock);
	uint16* PakIndexPtr = RegisterPakFile(File, PakFileSize);
	if (PakIndexPtr == nullptr)
	{
		return false;
	}
	uint16 PakIndex = *PakIndexPtr;
	FPakData& Pak = CachedPakData[PakIndex];
	check(Pak.Name == File && Pak.TotalSize == PakFileSize && Pak.Handle);

	TIntervalTreeIndex RequestIndex = InRequestAllocator.Alloc();
	FPakInRequest& Request = InRequestAllocator.Get(RequestIndex);
	FJoinedOffsetAndPakIndex RequestOffsetAndPakIndex = MakeJoinedRequest(PakIndex, Offset);
	Request.OffsetAndPakIndex = RequestOffsetAndPakIndex;
	Request.Size = Size;
	Request.Priority = Priority;
	Request.Status = EInRequestStatus::Waiting;
	Request.Owner = Owner;
	Request.UniqueID = NextUniqueID++;
	Request.Index = RequestIndex;
	check(Request.Next == IntervalTreeInvalidIndex);
	Owner->OffsetAndPakIndex = Request.OffsetAndPakIndex;
	Owner->UniqueID = Request.UniqueID;
	Owner->InRequestIndex = RequestIndex;
	check(!OutstandingRequests.Contains(Request.UniqueID));
	OutstandingRequests.Add(Request.UniqueID, RequestIndex);
	if (AddRequest(RequestIndex))
	{
		UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) QueueRequest HOT"), RequestOffsetAndPakIndex, RequestOffsetAndPakIndex + Request.Size);
	}
	else
	{
		UE_LOG(LogPakFile, Verbose, TEXT("FPakReadRequest[%016llX, %016llX) QueueRequest COLD"), RequestOffsetAndPakIndex, RequestOffsetAndPakIndex + Request.Size);
	}

	return true;
}

bool GetCompletedRequest(IPakRequestor* Owner, uint8* UserSuppliedMemory)
{
	check(Owner);
	FScopeLock Lock(&CachedFilesScopeLock);
	ClearOldBlockTasks();
	TIntervalTreeIndex RequestIndex = OutstandingRequests.FindRef(Owner->UniqueID);
	static_assert(IntervalTreeInvalidIndex == 0, "FindRef will return 0 for something not found");
	if (RequestIndex)
	{
		FPakInRequest& Request = InRequestAllocator.Get(RequestIndex);
		check(Owner == Request.Owner && Request.Status == EInRequestStatus::Complete && Request.UniqueID == Request.Owner->UniqueID && RequestIndex == Request.Owner->InRequestIndex && Request.OffsetAndPakIndex == Request.Owner->OffsetAndPakIndex);
		return GetCompletedRequestData(Request, UserSuppliedMemory);
	}
	return false; // canceled
}

void CancelRequest(IPakRequestor* Owner)
{
	check(Owner);
	FScopeLock Lock(&CachedFilesScopeLock);
	ClearOldBlockTasks();
	TIntervalTreeIndex RequestIndex = OutstandingRequests.FindRef(Owner->UniqueID);
	static_assert(IntervalTreeInvalidIndex == 0, "FindRef will return 0 for something not found");
	if (RequestIndex)
	{
		FPakInRequest& Request = InRequestAllocator.Get(RequestIndex);
		check(Owner == Request.Owner && Request.UniqueID == Request.Owner->UniqueID && RequestIndex == Request.Owner->InRequestIndex && Request.OffsetAndPakIndex == Request.Owner->OffsetAndPakIndex);
		RemoveRequest(RequestIndex);
	}
	StartNextRequest();
}

bool IsProbablyIdle() // nothing prevents new requests from being made before this returns
{
	FScopeLock Lock(&CachedFilesScopeLock);
	return !HasRequestsAtStatus(EInRequestStatus::Waiting) && !HasRequestsAtStatus(EInRequestStatus::InFlight);
}

	void Unmount(FName PakFile)
	{
		FScopeLock Lock(&CachedFilesScopeLock);
		uint16* PakIndexPtr = CachedPaks.Find(PakFile);
		if (!PakIndexPtr)
		{
			UE_LOG(LogPakFile, Log, TEXT("Pak file %s was never used, so nothing to unmount"), *PakFile.ToString());
			return; // never used for anything, nothing to check or clean up
		}
		TrimCache(true);
		uint16 PakIndex = *PakIndexPtr;
		FPakData& Pak = CachedPakData[PakIndex];
		int64 Offset = MakeJoinedRequest(PakIndex, 0);

		bool bHasOutstandingRequests = false;

		OverlappingNodesInIntervalTree<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::Complete],
			CacheBlockAllocator,
			0,
			Offset + Pak.TotalSize - 1,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[&bHasOutstandingRequests](TIntervalTreeIndex BlockIndex) -> bool
			{
				check(!"Pak cannot be unmounted with outstanding requests");
				bHasOutstandingRequests = true;
				return false;
			}
		);
		OverlappingNodesInIntervalTree<FCacheBlock>(
			Pak.CacheBlocks[(int32)EBlockStatus::InFlight],
			CacheBlockAllocator,
			0,
			Offset + Pak.TotalSize - 1,
			0,
			Pak.MaxNode,
			Pak.StartShift,
			Pak.MaxShift,
			[&bHasOutstandingRequests](TIntervalTreeIndex BlockIndex) -> bool
			{
				check(!"Pak cannot be unmounted with outstanding requests");
				bHasOutstandingRequests = true;
				return false;
			}
		);
		for (EAsyncIOPriority Priority = AIOP_MAX;; Priority = EAsyncIOPriority(int32(Priority) - 1))
		{
			OverlappingNodesInIntervalTree<FPakInRequest>(
				Pak.InRequests[Priority][(int32)EInRequestStatus::InFlight],
				InRequestAllocator,
				0,
				Offset + Pak.TotalSize - 1,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[&bHasOutstandingRequests](TIntervalTreeIndex BlockIndex) -> bool
				{
					check(!"Pak cannot be unmounted with outstanding requests");
					bHasOutstandingRequests = true;
					return false;
				}
			);
			OverlappingNodesInIntervalTree<FPakInRequest>(
				Pak.InRequests[Priority][(int32)EInRequestStatus::Complete],
				InRequestAllocator,
				0,
				Offset + Pak.TotalSize - 1,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[&bHasOutstandingRequests](TIntervalTreeIndex BlockIndex) -> bool
				{
					check(!"Pak cannot be unmounted with outstanding requests");
					bHasOutstandingRequests = true;
					return false;
				}
			);
			OverlappingNodesInIntervalTree<FPakInRequest>(
				Pak.InRequests[Priority][(int32)EInRequestStatus::Waiting],
				InRequestAllocator,
				0,
				Offset + Pak.TotalSize - 1,
				0,
				Pak.MaxNode,
				Pak.StartShift,
				Pak.MaxShift,
				[&bHasOutstandingRequests](TIntervalTreeIndex BlockIndex) -> bool
				{
					check(!"Pak cannot be unmounted with outstanding requests");
					bHasOutstandingRequests = true;
					return false;
				}
			);
			if (Priority == AIOP_MIN)
			{
				break;
			}
		}
		if (!bHasOutstandingRequests)
		{
			UE_LOG(LogPakFile, Log, TEXT("Pak file %s removed from pak precacher."), *PakFile.ToString());
			CachedPaks.Remove(PakFile);
			check(Pak.Handle);
			delete Pak.Handle;
			Pak.Handle = nullptr;
			int32 NumToTrim = 0;
			for (int32 Index = CachedPakData.Num() - 1; Index >= 0; Index--)
			{
				if (!CachedPakData[Index].Handle)
				{
					NumToTrim++;
				}
				else
				{
					break;
				}
			}
			if (NumToTrim)
			{
				CachedPakData.RemoveAt(CachedPakData.Num() - NumToTrim, NumToTrim);
			}
		}
		else
		{
			UE_LOG(LogPakFile, Log, TEXT("Pak file %s was NOT removed from pak precacher because it had outstanding requests."), *PakFile.ToString());
		}
	}


	// these are not thread-safe and should only be used for synthetic testing
	uint64 GetLoadSize()
	{
		return LoadSize;
	}
	uint32 GetLoads()
	{
		return Loads;
	}
	uint32 GetFrees()
	{
		return Frees;
	}

	void DumpBlocks()
	{
		while (!FPakPrecacher::Get().IsProbablyIdle())
		{
			QUICK_SCOPE_CYCLE_COUNTER(STAT_WaitDumpBlocks);
			FPlatformProcess::SleepNoStats(0.001f);
		}
		FScopeLock Lock(&CachedFilesScopeLock);
		bool bDone = !HasRequestsAtStatus(EInRequestStatus::Waiting) && !HasRequestsAtStatus(EInRequestStatus::InFlight) && !HasRequestsAtStatus(EInRequestStatus::Complete);

		if (!bDone)
		{
			UE_LOG(LogPakFile, Log, TEXT("PakCache has outstanding requests with %llu total memory."), BlockMemory);
		}
		else
		{
			UE_LOG(LogPakFile, Log, TEXT("PakCache has no outstanding requests with %llu total memory."), BlockMemory);
		}
	}
};

static void WaitPrecache(const TArray<FString>& Args)
{
	uint32 Frees = FPakPrecacher::Get().GetFrees();
	uint32 Loads = FPakPrecacher::Get().GetLoads();
	uint64 LoadSize = FPakPrecacher::Get().GetLoadSize();

	double StartTime = FPlatformTime::Seconds();

	while (!FPakPrecacher::Get().IsProbablyIdle())
	{
		check(Frees == FPakPrecacher::Get().GetFrees()); // otherwise we are discarding things, which is not what we want for this synthetic test
		QUICK_SCOPE_CYCLE_COUNTER(STAT_WaitPrecache);
		FPlatformProcess::SleepNoStats(0.001f);
	}
	Loads = FPakPrecacher::Get().GetLoads() - Loads;
	LoadSize = FPakPrecacher::Get().GetLoadSize() - LoadSize;
	float TimeSpent = FPlatformTime::Seconds() - StartTime;
	float LoadSizeMB = float(LoadSize) / (1024.0f * 1024.0f);
	float MBs = LoadSizeMB / TimeSpent;
	UE_LOG(LogPakFile, Log, TEXT("Loaded %4d blocks (align %4dKB) totalling %7.2fMB in %4.2fs = %6.2fMB/s"), Loads, PAK_CACHE_GRANULARITY / 1024, LoadSizeMB, TimeSpent, MBs);
}

static FAutoConsoleCommand WaitPrecacheCmd(
	TEXT("pak.WaitPrecache"),
	TEXT("Debug command to wait on the pak precache."),
	FConsoleCommandWithArgsDelegate::CreateStatic(&WaitPrecache)
);

static void DumpBlocks(const TArray<FString>& Args)
{
	FPakPrecacher::Get().DumpBlocks();
}

static FAutoConsoleCommand DumpBlocksCmd(
	TEXT("pak.DumpBlocks"),
	TEXT("Debug command to spew the outstanding blocks."),
	FConsoleCommandWithArgsDelegate::CreateStatic(&DumpBlocks)
);

static FCriticalSection FPakReadRequestEvent;

class FPakAsyncReadFileHandle;
// uncompress(decrypt(checksig()))

class FPakReadRequestBase : public IAsyncReadRequest, public IPakRequestor
{
protected:

	int64 Offset;
	int64 BytesToRead;
	FEvent* WaitEvent;
	int32 BlockIndex;
	EAsyncIOPriority Priority;
	bool bRequestOutstanding;
	bool bNeedsRemoval;
	bool bInternalRequest; // used internally to deal with compressed, encrypted and signed data, so we want the memory back from a precache request.

public:
	FPakReadRequestBase(FName InPakFile, int64 PakFileSize, FAsyncFileCallBack* CompleteCallback, int64 InOffset, int64 InBytesToRead, EAsyncIOPriority InPriority, uint8* UserSuppliedMemory, bool bInInternalRequest = false, int32 InBlockIndex = -1)
		: IAsyncReadRequest(CompleteCallback, false, UserSuppliedMemory)
		, Offset(InOffset)
		, BytesToRead(InBytesToRead)
		, WaitEvent(nullptr)
		, BlockIndex(InBlockIndex)
		, Priority(InPriority)
		, bRequestOutstanding(true)
		, bNeedsRemoval(true)
		, bInternalRequest(bInInternalRequest)
	{
	}

	virtual ~FPakReadRequestBase()
	{
		if (bNeedsRemoval)
		{
			FPakPrecacher::Get().CancelRequest(this);
		}
		if (Memory && !bUserSuppliedMemory)
		{
			// this can happen with a race on cancel; it is ok, the caller didn't take the memory, so free it now
			DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, BytesToRead);
			FMemory::Free(Memory);
		}
		Memory = nullptr;
	}

	// IAsyncReadRequest Interface

	virtual void WaitCompletionImpl(float TimeLimitSeconds) override
	{
		if (bRequestOutstanding)
		{
			{
				FScopeLock Lock(&FPakReadRequestEvent);
				if (bRequestOutstanding)
				{
					check(!WaitEvent);
					WaitEvent = FPlatformProcess::GetSynchEventFromPool(true);
				}
			}
			if (WaitEvent)
			{
				if (TimeLimitSeconds == 0.0f)
				{
					WaitEvent->Wait();
					check(!bRequestOutstanding);
				}
				else
				{
					WaitEvent->Wait(TimeLimitSeconds * 1000.0f);
				}
				FScopeLock Lock(&FPakReadRequestEvent);
				FPlatformProcess::ReturnSynchEventToPool(WaitEvent);
				WaitEvent = nullptr;
			}
		}
	}
	virtual void CancelImpl() override
	{
		check(!WaitEvent); // you canceled from a different thread than the one you waited from
		FPakPrecacher::Get().CancelRequest(this);
		bNeedsRemoval = false;
		if (bRequestOutstanding)
		{
			bRequestOutstanding = false;
			SetComplete();
		}
	}

	// IPakRequestor Interface

	int32 GetBlockIndex()
	{
		return BlockIndex;
	}
};

class FPakReadRequest : public FPakReadRequestBase
{
public:

	FPakReadRequest(FName InPakFile, int64 PakFileSize, FAsyncFileCallBack* CompleteCallback, int64 InOffset, int64 InBytesToRead, EAsyncIOPriority InPriority, uint8* UserSuppliedMemory, bool bInInternalRequest = false, int32 InBlockIndex = -1)
		: FPakReadRequestBase(InPakFile, PakFileSize, CompleteCallback, InOffset, InBytesToRead, InPriority, UserSuppliedMemory, bInInternalRequest, InBlockIndex)
	{
		check(Offset >= 0 && BytesToRead > 0);
		check(bInternalRequest || Priority > AIOP_Precache || !bUserSuppliedMemory); // you never get bits back from a precache request, so why supply memory?

		if (!FPakPrecacher::Get().QueueRequest(this, InPakFile, PakFileSize, Offset, BytesToRead, Priority))
		{
			bRequestOutstanding = false;
			SetComplete();
		}
	}

	virtual void RequestIsComplete() override
	{
		check(bRequestOutstanding);
		if (!bCanceled && (bInternalRequest || Priority > AIOP_Precache))
		{
			if (!bUserSuppliedMemory)
			{
				check(!Memory);
				Memory = (uint8*)FMemory::Malloc(BytesToRead);
				INC_MEMORY_STAT_BY(STAT_AsyncFileMemory, BytesToRead);
			}
			else
			{
				check(Memory);
			}
			if (!FPakPrecacher::Get().GetCompletedRequest(this, Memory))
			{
				check(bCanceled);
			}
		}
		SetDataComplete();
		{
			FScopeLock Lock(&FPakReadRequestEvent);
			bRequestOutstanding = false;
			if (WaitEvent)
			{
				WaitEvent->Trigger();
			}
			SetAllComplete();
		}
	}
};

class FPakEncryptedReadRequest : public FPakReadRequestBase
{
	int64 OriginalOffset;
	int64 OriginalSize;

public:

	FPakEncryptedReadRequest(FName InPakFile, int64 PakFileSize, FAsyncFileCallBack* CompleteCallback, int64 InPakFileStartOffset, int64 InFileOffset, int64 InBytesToRead, EAsyncIOPriority InPriority, uint8* UserSuppliedMemory, bool bInInternalRequest = false, int32 InBlockIndex = -1)
		: FPakReadRequestBase(InPakFile, PakFileSize, CompleteCallback, InPakFileStartOffset + InFileOffset, InBytesToRead, InPriority, UserSuppliedMemory, bInInternalRequest, InBlockIndex)
		, OriginalOffset(InPakFileStartOffset + InFileOffset)
		, OriginalSize(InBytesToRead)
	{
		Offset = InPakFileStartOffset + AlignDown(InFileOffset, FAES::AESBlockSize);
		BytesToRead = Align(InBytesToRead, FAES::AESBlockSize);

		if (!FPakPrecacher::Get().QueueRequest(this, InPakFile, PakFileSize, Offset, BytesToRead, Priority))
		{
			bRequestOutstanding = false;
			SetComplete();
		}
	}

	virtual void RequestIsComplete() override
	{
		check(bRequestOutstanding);
		if (!bCanceled && (bInternalRequest || Priority > AIOP_Precache))
		{
			uint8* OversizedBuffer = nullptr;
			if (OriginalOffset != Offset)
			{
				// We've read some bytes from before the requested offset, so we need to grab that larger amount
				// from the read request and then cut out the bit we want!
				OversizedBuffer = (uint8*)FMemory::Malloc(BytesToRead);
			}

			if (!bUserSuppliedMemory)
			{
				check(!Memory);
				Memory = (uint8*)FMemory::Malloc(OriginalSize);
				INC_MEMORY_STAT_BY(STAT_AsyncFileMemory, OriginalSize);
			}
			else
			{
				check(Memory);
			}

			if (!FPakPrecacher::Get().GetCompletedRequest(this, OversizedBuffer != nullptr ? OversizedBuffer : Memory))
			{
				check(bCanceled);
			}

			INC_DWORD_STAT(STAT_PakCache_UncompressedDecrypts);

			if (OversizedBuffer)
			{
				check(IsAligned((void*)BytesToRead, FAES::AESBlockSize));
				DecryptData(OversizedBuffer, BytesToRead);
				FMemory::Memcpy(Memory, OversizedBuffer + (OriginalOffset - Offset), OriginalSize);
				FMemory::Free(OversizedBuffer);
			}
			else
			{
				DecryptData(Memory, Align(OriginalSize, FAES::AESBlockSize));
			}
		}
		SetDataComplete();
		{
			FScopeLock Lock(&FPakReadRequestEvent);
			bRequestOutstanding = false;
			if (WaitEvent)
			{
				WaitEvent->Trigger();
			}
			SetAllComplete();
		}
	}
};

class FPakSizeRequest : public IAsyncReadRequest
{
public:
	FPakSizeRequest(FAsyncFileCallBack* CompleteCallback, int64 InFileSize)
		: IAsyncReadRequest(CompleteCallback, true, nullptr)
	{
		Size = InFileSize;
		SetComplete();
	}
	virtual void WaitCompletionImpl(float TimeLimitSeconds) override
	{
	}
	virtual void CancelImpl() override
	{
	}
};


struct FCachedAsyncBlock
{
	FPakReadRequest* RawRequest;
	uint8* Raw; // compressed, encrypted and/or signature not checked
	uint8* Processed; // decompressed, decrypted and signature checked
	FGraphEventRef CPUWorkGraphEvent;
	int32 RawSize;
	int32 ProcessedSize;
	int32 RefCount;
	bool bInFlight;
	bool bCPUWorkIsComplete;
	FCachedAsyncBlock()
		: RawRequest(nullptr)
		, Raw(nullptr)
		, Processed(nullptr)
		, RawSize(0)
		, ProcessedSize(0)
		, RefCount(0)
		, bInFlight(false)
		, bCPUWorkIsComplete(false)
	{
	}
};


class FPakProcessedReadRequest : public IAsyncReadRequest
{
	FPakAsyncReadFileHandle* Owner;
	int64 Offset;
	int64 BytesToRead;
	FEvent* WaitEvent;
	FThreadSafeCounter CompleteRace; // this is used to resolve races between natural completion and cancel; there can be only one.
	EAsyncIOPriority Priority;
	bool bRequestOutstanding;
	bool bHasCancelled;
	bool bHasCompleted;

public:
	FPakProcessedReadRequest(FPakAsyncReadFileHandle* InOwner, FAsyncFileCallBack* CompleteCallback, int64 InOffset, int64 InBytesToRead, EAsyncIOPriority InPriority, uint8* UserSuppliedMemory)
		: IAsyncReadRequest(CompleteCallback, false, UserSuppliedMemory)
		, Owner(InOwner)
		, Offset(InOffset)
		, BytesToRead(InBytesToRead)
		, WaitEvent(nullptr)
		, Priority(InPriority)
		, bRequestOutstanding(true)
		, bHasCancelled(false)
		, bHasCompleted(false)
	{
		check(Offset >= 0 && BytesToRead > 0);
		check(Priority > AIOP_Precache || !bUserSuppliedMemory); // you never get bits back from a precache request, so why supply memory?
	}

	virtual ~FPakProcessedReadRequest()
	{
		DoneWithRawRequests();
		if (Memory && !bUserSuppliedMemory)
		{
			// this can happen with a race on cancel; it is ok, the caller didn't take the memory, so free it now
			DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, BytesToRead);
			FMemory::Free(Memory);
		}
		Memory = nullptr;
	}

	// IAsyncReadRequest Interface

	virtual void WaitCompletionImpl(float TimeLimitSeconds) override
	{
		if (bRequestOutstanding)
		{
			{
				FScopeLock Lock(&FPakReadRequestEvent);
				if (bRequestOutstanding)
				{
					check(!WaitEvent);
					WaitEvent = FPlatformProcess::GetSynchEventFromPool(true);
				}
			}
			if (WaitEvent)
			{
				if (TimeLimitSeconds == 0.0f)
				{
					WaitEvent->Wait();
					check(!bRequestOutstanding);
				}
				else
				{
					WaitEvent->Wait(TimeLimitSeconds * 1000.0f);
				}
				FScopeLock Lock(&FPakReadRequestEvent);
				FPlatformProcess::ReturnSynchEventToPool(WaitEvent);
				WaitEvent = nullptr;
			}
		}
	}
	virtual void CancelImpl() override
	{
		check(!WaitEvent); // you canceled from a different thread than the one you waited from
		if (bRequestOutstanding)
		{
			CancelRawRequests();
			bRequestOutstanding = false;
			SetComplete();
		}
	}

	void RequestIsComplete()
	{
		check(bRequestOutstanding);
		if (!bCanceled && Priority > AIOP_Precache)
		{
			GatherResults();
		}
		SetDataComplete();
		{
			FScopeLock Lock(&FPakReadRequestEvent);
			bRequestOutstanding = false;
			if (WaitEvent)
			{
				WaitEvent->Trigger();
			}
			SetAllComplete();
		}
	}
	void GatherResults();
	void DoneWithRawRequests();
	bool CheckCompletion(const FPakEntry& FileEntry, int32 BlockIndex, TArray<FCachedAsyncBlock>& Blocks);
	void CancelRawRequests();
};

FAutoConsoleTaskPriority CPrio_AsyncIOCPUWorkTaskPriority(
	TEXT("TaskGraph.TaskPriorities.AsyncIOCPUWork"),
	TEXT("Task and thread priority for decompression, decryption and signature checking of async IO from a pak file."),
	ENamedThreads::BackgroundThreadPriority, // if we have background priority task threads, then use them...
	ENamedThreads::NormalTaskPriority, // .. at normal task priority
	ENamedThreads::NormalTaskPriority // if we don't have background threads, then use normal priority threads at normal task priority instead
);

class FAsyncIOCPUWorkTask
{
	FPakAsyncReadFileHandle& Owner;
	int32 BlockIndex;

public:
	FORCEINLINE FAsyncIOCPUWorkTask(FPakAsyncReadFileHandle& InOwner, int32 InBlockIndex)
		: Owner(InOwner)
		, BlockIndex(InBlockIndex)
	{
	}
	static FORCEINLINE TStatId GetStatId()
	{
		RETURN_QUICK_DECLARE_CYCLE_STAT(FAsyncIOCPUWorkTask, STATGROUP_TaskGraphTasks);
	}
	static FORCEINLINE ENamedThreads::Type GetDesiredThread()
	{
		return CPrio_AsyncIOCPUWorkTaskPriority.Get();
	}
	FORCEINLINE static ESubsequentsMode::Type GetSubsequentsMode()
	{
		return ESubsequentsMode::TrackSubsequents;
	}
	void DoTask(ENamedThreads::Type CurrentThread, const FGraphEventRef& MyCompletionGraphEvent);
};

class FAsyncIOSignatureCheckTask
{
	bool bWasCanceled;
	IAsyncReadRequest* Request;
	int32 IndexToFill;

public:
	FORCEINLINE FAsyncIOSignatureCheckTask(bool bInWasCanceled, IAsyncReadRequest* InRequest, int32 InIndexToFill)
		: bWasCanceled(bInWasCanceled)
		, Request(InRequest)
		, IndexToFill(InIndexToFill)
	{
	}

	static FORCEINLINE TStatId GetStatId()
	{
		RETURN_QUICK_DECLARE_CYCLE_STAT(FAsyncIOSignatureCheckTask, STATGROUP_TaskGraphTasks);
	}
	static FORCEINLINE ENamedThreads::Type GetDesiredThread()
	{
		return CPrio_AsyncIOCPUWorkTaskPriority.Get();
	}
	FORCEINLINE static ESubsequentsMode::Type GetSubsequentsMode()
	{
		return ESubsequentsMode::TrackSubsequents;
	}
	void DoTask(ENamedThreads::Type CurrentThread, const FGraphEventRef& MyCompletionGraphEvent)
	{
		FPakPrecacher::Get().DoSignatureCheck(bWasCanceled, Request, IndexToFill);
	}
};

void FPakPrecacher::StartSignatureCheck(bool bWasCanceled, IAsyncReadRequest* Request, int32 Index)
{
	TGraphTask<FAsyncIOSignatureCheckTask>::CreateTask().ConstructAndDispatchWhenReady(bWasCanceled, Request, Index);
}

void FPakPrecacher::DoSignatureCheck(bool bWasCanceled, IAsyncReadRequest* Request, int32 Index)
{
	int64 SignatureIndex = -1;
	int64 NumSignaturesToCheck = 0;
	const uint8* Data = nullptr;
	int64 RequestSize = 0;
	int64 RequestOffset = 0;
	uint16 PakIndex;

	{
		// Try and keep the lock for as short a time as possible. Find our request and copy out the data we need.
		FScopeLock Lock(&CachedFilesScopeLock);
		FRequestToLower& RequestToLower = RequestsToLower[Index];
		RequestToLower.RequestHandle = Request;
		RequestToLower.Memory = Request->GetReadResults();

		NumSignaturesToCheck = Align(RequestToLower.RequestSize, FPakInfo::MaxChunkDataSize) / FPakInfo::MaxChunkDataSize;
		check(NumSignaturesToCheck >= 1);

		FCacheBlock& Block = CacheBlockAllocator.Get(RequestToLower.BlockIndex);
		RequestOffset = GetRequestOffset(Block.OffsetAndPakIndex);
		check((RequestOffset % FPakInfo::MaxChunkDataSize) == 0);
		RequestSize = RequestToLower.RequestSize;
		PakIndex = GetRequestPakIndex(Block.OffsetAndPakIndex);
		Data = RequestToLower.Memory;
		SignatureIndex = RequestOffset / FPakInfo::MaxChunkDataSize;
	}

	check(Data);
	check(NumSignaturesToCheck > 0);
	check(RequestSize > 0);
	check(RequestOffset >= 0);

	// Hash the contents of the incoming buffer and check that it matches what we expected
	for (int64 SignedChunkIndex = 0; SignedChunkIndex < NumSignaturesToCheck; ++SignedChunkIndex, ++SignatureIndex)
	{
		int64 Size = FMath::Min(RequestSize, (int64)FPakInfo::MaxChunkDataSize);

		{
			SCOPE_SECONDS_ACCUMULATOR(STAT_PakCache_SigningChunkHashTime);

			TPakChunkHash ThisHash = ComputePakChunkHash(Data, Size);
			bool bChunkHashesMatch;
			{
				FScopeLock Lock(&CachedFilesScopeLock);
				FPakData* PakData = &CachedPakData[PakIndex];
				bChunkHashesMatch = ThisHash == PakData->ChunkHashes[SignatureIndex];
			}
			if (!ensure(bChunkHashesMatch))
			{
				UE_LOG(LogPakFile, Warning, TEXT("Pak chunk signing mismatch! Pak file has been corrupted or tampered with!"));
				//FPlatformMisc::RequestExit(true);
			}
		}

		INC_MEMORY_STAT_BY(STAT_PakCache_SigningChunkHashSize, Size);

		RequestOffset += Size;
		Data += Size;
		RequestSize -= Size;
	}

	NewRequestsToLowerComplete(bWasCanceled, Request, Index);
}

class FPakAsyncReadFileHandle final : public IAsyncReadFileHandle
{
	FName PakFile;
	int64 PakFileSize;
	int64 OffsetInPak;
	int64 CompressedFileSize;
	int64 UncompressedFileSize;
	const FPakEntry* FileEntry;
	TSet<FPakProcessedReadRequest*> LiveRequests;
	TArray<FCachedAsyncBlock> Blocks;
	FAsyncFileCallBack ReadCallbackFunction;
	FCriticalSection CriticalSection;
	int32 NumLiveRawRequests;

public:
	FPakAsyncReadFileHandle(const FPakEntry* InFileEntry, FPakFile* InPakFile, const TCHAR* Filename)
		: PakFile(InPakFile->GetFilenameName())
		, PakFileSize(InPakFile->TotalSize())
		, FileEntry(InFileEntry)
		, NumLiveRawRequests(0)
	{
		OffsetInPak = FileEntry->Offset + FileEntry->GetSerializedSize(InPakFile->GetInfo().Version);
		UncompressedFileSize = FileEntry->UncompressedSize;
		CompressedFileSize = FileEntry->UncompressedSize;
		if (FileEntry->CompressionMethod != COMPRESS_None && UncompressedFileSize)
		{
			check(FileEntry->CompressionBlocks.Num());
			CompressedFileSize = FileEntry->CompressionBlocks.Last().CompressedEnd - OffsetInPak;
			check(CompressedFileSize > 0);
			const int32 CompressionBlockSize = FileEntry->CompressionBlockSize;
			check((UncompressedFileSize + CompressionBlockSize - 1) / CompressionBlockSize == FileEntry->CompressionBlocks.Num());
			Blocks.AddDefaulted(FileEntry->CompressionBlocks.Num());
		}
		UE_LOG(LogPakFile, Verbose, TEXT("FPakPlatformFile::OpenAsyncRead[%016llX, %016llX) %s"), OffsetInPak, OffsetInPak + CompressedFileSize, Filename);
		check(PakFileSize > 0 && OffsetInPak + CompressedFileSize <= PakFileSize && OffsetInPak >= 0);

		ReadCallbackFunction = [this](bool bWasCancelled, IAsyncReadRequest* Request)
		{
			RawReadCallback(bWasCancelled, Request);
		};

	}
	~FPakAsyncReadFileHandle()
	{
		check(!LiveRequests.Num()); // must delete all requests before you delete the handle
		check(!NumLiveRawRequests); // must delete all requests before you delete the handle
		for (FCachedAsyncBlock& Block : Blocks)
		{
			check(Block.RefCount == 0);
			ClearBlock(Block, true);
		}
	}

	virtual IAsyncReadRequest* SizeRequest(FAsyncFileCallBack* CompleteCallback = nullptr) override
	{
		return new FPakSizeRequest(CompleteCallback, UncompressedFileSize);
	}
	virtual IAsyncReadRequest* ReadRequest(int64 Offset, int64 BytesToRead, EAsyncIOPriority Priority = AIOP_Normal, FAsyncFileCallBack* CompleteCallback = nullptr, uint8* UserSuppliedMemory = nullptr) override
	{
		if (BytesToRead == MAX_int64)
		{
			BytesToRead = UncompressedFileSize - Offset;
		}
		check(Offset + BytesToRead <= UncompressedFileSize && Offset >= 0);
		if (FileEntry->CompressionMethod == COMPRESS_None)
		{
			check(Offset + BytesToRead + OffsetInPak <= PakFileSize);
			check(!Blocks.Num());

			if (FileEntry->bEncrypted)
			{
				return new FPakEncryptedReadRequest(PakFile, PakFileSize, CompleteCallback, OffsetInPak, Offset, BytesToRead, Priority, UserSuppliedMemory);
			}
			else
			{
				return new FPakReadRequest(PakFile, PakFileSize, CompleteCallback, OffsetInPak + Offset, BytesToRead, Priority, UserSuppliedMemory);
			}
		}
		bool bAnyUnfinished = false;
		FPakProcessedReadRequest* Result;
		{
			FScopeLock ScopedLock(&CriticalSection);
			check(Blocks.Num());
			int32 FirstBlock = Offset / FileEntry->CompressionBlockSize;
			int32 LastBlock = (Offset + BytesToRead - 1) / FileEntry->CompressionBlockSize;

			check(FirstBlock >= 0 && FirstBlock < Blocks.Num() && LastBlock >= 0 && LastBlock < Blocks.Num() && FirstBlock <= LastBlock);

			Result = new FPakProcessedReadRequest(this, CompleteCallback, Offset, BytesToRead, Priority, UserSuppliedMemory);
			for (int32 BlockIndex = FirstBlock; BlockIndex <= LastBlock; BlockIndex++)
			{
				FCachedAsyncBlock& Block = Blocks[BlockIndex];
				Block.RefCount++;
				if (!Block.bInFlight)
				{
					StartBlock(BlockIndex, Priority);
					bAnyUnfinished = true;
				}
				if (!Block.Processed)
				{
					bAnyUnfinished = true;
				}
			}
			if (Result)
			{
				check(!LiveRequests.Contains(Result));
				LiveRequests.Add(Result);
			}
			if (!bAnyUnfinished)
			{
				Result->RequestIsComplete();
			}
		}
		return Result;
	}

	void StartBlock(int32 BlockIndex, EAsyncIOPriority Priority)
	{
		FCachedAsyncBlock& Block = Blocks[BlockIndex];
		Block.bInFlight = true;
		check(!Block.RawRequest && !Block.Processed && !Block.Raw && !Block.CPUWorkGraphEvent.GetReference() && !Block.ProcessedSize && !Block.RawSize && !Block.bCPUWorkIsComplete);
		Block.RawSize = FileEntry->CompressionBlocks[BlockIndex].CompressedEnd - FileEntry->CompressionBlocks[BlockIndex].CompressedStart;
		if (FileEntry->bEncrypted)
		{
			Block.RawSize = Align(Block.RawSize, FAES::AESBlockSize);
		}
		NumLiveRawRequests++;
		Block.RawRequest = new FPakReadRequest(PakFile, PakFileSize, &ReadCallbackFunction, FileEntry->CompressionBlocks[BlockIndex].CompressedStart, Block.RawSize, Priority, nullptr, true, BlockIndex);
	}
	void RawReadCallback(bool bWasCancelled, IAsyncReadRequest* InRequest)
	{
		FPakReadRequest* Request = static_cast<FPakReadRequest*>(InRequest);
		// Taking the lock here causes a deadlock; hopefully it is not needed as we are only referencing the block.
		// The potential problem is with cancel.
		// FScopeLock ScopedLock(&CriticalSection);
		int32 BlockIndex = Request->GetBlockIndex();
		check(BlockIndex >= 0 && BlockIndex < Blocks.Num());
		FCachedAsyncBlock& Block = Blocks[BlockIndex];
		check((Block.RawRequest == Request || (!Block.RawRequest && Block.RawSize)) // we still might be in the constructor so the assignment hasn't happened yet
			&& !Block.Processed && !Block.Raw);
		if (bWasCancelled)
		{
			Block.RawSize = 0;
		}
		else
		{
			Block.Raw = Request->GetReadResults();
			check(Block.Raw);
			Block.ProcessedSize = FileEntry->CompressionBlockSize;
			if (BlockIndex == Blocks.Num() - 1)
			{
				Block.ProcessedSize = FileEntry->UncompressedSize % FileEntry->CompressionBlockSize;
				if (!Block.ProcessedSize)
				{
					Block.ProcessedSize = FileEntry->CompressionBlockSize; // last block was a full block
				}
			}
			check(Block.ProcessedSize && !Block.bCPUWorkIsComplete);
			Block.CPUWorkGraphEvent = TGraphTask<FAsyncIOCPUWorkTask>::CreateTask().ConstructAndDispatchWhenReady(*this, BlockIndex);
		}
	}
	void DoProcessing(int32 BlockIndex)
	{
		check(BlockIndex >= 0 && BlockIndex < Blocks.Num());
		FCachedAsyncBlock& Block = Blocks[BlockIndex];
		check(Block.Raw && Block.RawSize && !Block.Processed);

		if (FileEntry->bEncrypted)
		{
			INC_DWORD_STAT(STAT_PakCache_CompressedDecrypts);
			DecryptData(Block.Raw, Align(Block.RawSize, FAES::AESBlockSize));
		}

		check(Block.ProcessedSize);
		INC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.ProcessedSize);
		uint8* Output = (uint8*)FMemory::Malloc(Block.ProcessedSize);
		FCompression::UncompressMemory((ECompressionFlags)FileEntry->CompressionMethod, Output, Block.ProcessedSize, Block.Raw, Block.RawSize, false, FPlatformMisc::GetPlatformCompression()->GetCompressionBitWindow());
		FMemory::Free(Block.Raw);
		Block.Raw = nullptr;
		DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.RawSize);
		Block.RawSize = 0;

		{
			FScopeLock ScopedLock(&CriticalSection);
			if (Block.RawRequest)
			{
				Block.RawRequest->WaitCompletion();
				delete Block.RawRequest;
				Block.RawRequest = nullptr;
				NumLiveRawRequests--;
			}
			if (Block.RefCount > 0)
			{
				Block.Processed = Output;
				for (FPakProcessedReadRequest* Req : LiveRequests)
				{
					if (Req->CheckCompletion(*FileEntry, BlockIndex, Blocks))
					{
						Req->RequestIsComplete();
					}
				}
			}
			else
			{
				// must have been canceled, clean up
				FMemory::Free(Output);
				Output = nullptr;
				check(Block.ProcessedSize);
				DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.ProcessedSize);
				Block.ProcessedSize = 0;
				Block.CPUWorkGraphEvent = nullptr;
				Block.bInFlight = false;
			}
			Block.bCPUWorkIsComplete = true;
		}
	}
	void ClearBlock(FCachedAsyncBlock& Block, bool bForDestructorShouldAlreadyBeClear = false)
	{
		check(!Block.RawRequest);
		Block.RawRequest = nullptr;
		Block.CPUWorkGraphEvent = nullptr;
		if (Block.Raw)
		{
			check(!bForDestructorShouldAlreadyBeClear);
			// this was a cancel, clean it up now
			FMemory::Free(Block.Raw);
			Block.Raw = nullptr;
			check(Block.RawSize);
			DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.RawSize);
		}
		Block.RawSize = 0;
		if (Block.Processed)
		{
			check(!bForDestructorShouldAlreadyBeClear);
			FMemory::Free(Block.Processed);
			Block.Processed = nullptr;
			check(Block.ProcessedSize);
			DEC_MEMORY_STAT_BY(STAT_AsyncFileMemory, Block.ProcessedSize);
		}
		Block.ProcessedSize = 0;
		Block.bInFlight = false;
		Block.bCPUWorkIsComplete = false;
	}

	void RemoveRequest(FPakProcessedReadRequest* Req, int64 Offset, int64 BytesToRead)
	{
		FScopeLock ScopedLock(&CriticalSection);
		check(LiveRequests.Contains(Req));
		LiveRequests.Remove(Req);
		int32 FirstBlock = Offset / FileEntry->CompressionBlockSize;
		int32 LastBlock = (Offset + BytesToRead - 1) / FileEntry->CompressionBlockSize;
		check(FirstBlock >= 0 && FirstBlock < Blocks.Num() && LastBlock >= 0 && LastBlock < Blocks.Num() && FirstBlock <= LastBlock);

		for (int32 BlockIndex = FirstBlock; BlockIndex <= LastBlock; BlockIndex++)
		{
			FCachedAsyncBlock& Block = Blocks[BlockIndex];
			check(Block.RefCount > 0);
			if (!--Block.RefCount)
			{
				if (Block.RawRequest)
				{
					Block.RawRequest->Cancel();
					Block.RawRequest->WaitCompletion();
					delete Block.RawRequest;
					Block.RawRequest = nullptr;
					NumLiveRawRequests--;
				}
				if (Block.bCPUWorkIsComplete)
				{
					ClearBlock(Block);
				}
			}
		}
	}
	void GatherResults(uint8* Memory, int64 Offset, int64 BytesToRead)
	{
		// no lock here, I don't think it is needed because we have a ref count.
		int32 FirstBlock = Offset / FileEntry->CompressionBlockSize;
		int32 LastBlock = (Offset + BytesToRead - 1) / FileEntry->CompressionBlockSize;
		check(FirstBlock >= 0 && FirstBlock < Blocks.Num() && LastBlock >= 0 && LastBlock < Blocks.Num() && FirstBlock <= LastBlock);

		for (int32 BlockIndex = FirstBlock; BlockIndex <= LastBlock; BlockIndex++)
		{
			FCachedAsyncBlock& Block = Blocks[BlockIndex];
			check(Block.RefCount > 0 && Block.Processed && Block.ProcessedSize);
			int64 BlockStart = int64(BlockIndex) * int64(FileEntry->CompressionBlockSize);
			int64 BlockEnd = BlockStart + Block.ProcessedSize;

			int64 SrcOffset = 0;
			int64 DestOffset = BlockStart - Offset;
			if (DestOffset < 0)
			{
				SrcOffset -= DestOffset;
				DestOffset = 0;
			}
			int64 CopySize = Block.ProcessedSize;
			if (DestOffset + CopySize > BytesToRead)
			{
				CopySize = BytesToRead - DestOffset;
			}
			if (SrcOffset + CopySize > Block.ProcessedSize)
			{
				CopySize = Block.ProcessedSize - SrcOffset;
			}
			check(CopySize > 0 && DestOffset >= 0 && DestOffset + CopySize <= BytesToRead);
			check(SrcOffset >= 0 && SrcOffset + CopySize <= Block.ProcessedSize);
			FMemory::Memcpy(Memory + DestOffset, Block.Processed + SrcOffset, CopySize);

			check(Block.RefCount > 0);
		}
	}
};

void FPakProcessedReadRequest::CancelRawRequests()
{
	if (CompleteRace.Increment() == 1)
	{
		Owner->RemoveRequest(this, Offset, BytesToRead);
		bHasCancelled = true;
	}
}

void FPakProcessedReadRequest::GatherResults()
{
	if (CompleteRace.Increment() == 1)
	{
		if (!bUserSuppliedMemory)
		{
			check(!Memory);
			Memory = (uint8*)FMemory::Malloc(BytesToRead);
			INC_MEMORY_STAT_BY(STAT_AsyncFileMemory, BytesToRead);
		}
		check(Memory);
		Owner->GatherResults(Memory, Offset, BytesToRead);
	}
}

void FPakProcessedReadRequest::DoneWithRawRequests()
{
	if (!bHasCancelled)
	{
		Owner->RemoveRequest(this, Offset, BytesToRead);
	}
}

bool FPakProcessedReadRequest::CheckCompletion(const FPakEntry& FileEntry, int32 BlockIndex, TArray<FCachedAsyncBlock>& Blocks)
{
	if (!bRequestOutstanding || bHasCompleted)
	{
		return false;
	}
	{
		int64 BlockStart = int64(BlockIndex) * int64(FileEntry.CompressionBlockSize);
		int64 BlockEnd = int64(BlockIndex + 1) * int64(FileEntry.CompressionBlockSize);
		if (Offset >= BlockEnd || Offset + BytesToRead <= BlockStart)
		{
			return false;
		}
	}
	int32 FirstBlock = Offset / FileEntry.CompressionBlockSize;
	int32 LastBlock = (Offset + BytesToRead - 1) / FileEntry.CompressionBlockSize;
	check(FirstBlock >= 0 && FirstBlock < Blocks.Num() && LastBlock >= 0 && LastBlock < Blocks.Num() && FirstBlock <= LastBlock);

	for (int32 MyBlockIndex = FirstBlock; MyBlockIndex <= LastBlock; MyBlockIndex++)
	{
		FCachedAsyncBlock& Block = Blocks[MyBlockIndex];
		if (!Block.Processed)
		{
			return false;
		}
	}
	bHasCompleted = true;
	return true;
}

void FAsyncIOCPUWorkTask::DoTask(ENamedThreads::Type CurrentThread, const FGraphEventRef& MyCompletionGraphEvent)
{
	SCOPED_NAMED_EVENT(FAsyncIOCPUWorkTask_DoTask, FColor::Cyan);
	Owner.DoProcessing(BlockIndex);
}

#endif

IAsyncReadFileHandle* FPakPlatformFile::OpenAsyncRead(const TCHAR* Filename)
{
	check(GConfig);
#if USE_PAK_PRECACHE
	if (FPlatformProcess::SupportsMultithreading() && GPakCache_Enable > 0)
	{
		FPakFile* PakFile = NULL;
		const FPakEntry* FileEntry = FindFileInPakFiles(Filename, &PakFile);
		if (FileEntry && PakFile && PakFile->GetFilenameName() != NAME_None)
		{
			return new FPakAsyncReadFileHandle(FileEntry, PakFile, Filename);
		}
	}
#endif

	return IPlatformFile::OpenAsyncRead(Filename);
}

/**
 * Encryption policy for correctly reading encrypted data from a pak file
 */
class FPakSimpleEncryption
{
public:
	enum
	{
		Alignment = FAES::AESBlockSize,
	};

	static FORCEINLINE int64 AlignReadRequest(int64 Size)
	{
		return Align(Size, Alignment);
	}

	static FORCEINLINE void DecryptBlock(void* Data, int64 Size)
	{
		INC_DWORD_STAT(STAT_PakCache_SyncDecrypts);
		DecryptData((uint8*)Data, Size);
	}
};

/**
 * Thread-local class to manage working buffers for file decompression
 */
class FCompressionScratchBuffers : public TThreadSingleton<FCompressionScratchBuffers>
{
public:
	FCompressionScratchBuffers()
		: TempBufferSize(0)
		, ScratchBufferSize(0)
	{}

	int64 TempBufferSize;
	TUniquePtr<uint8[]> TempBuffer;
	int64 ScratchBufferSize;
	TUniquePtr<uint8[]> ScratchBuffer;

	void EnsureBufferSpace(int64 CompressionBlockSize, int64 ScratchSize)
	{
		if (TempBufferSize < CompressionBlockSize)
		{
			TempBufferSize = CompressionBlockSize;
			TempBuffer = MakeUnique<uint8[]>(TempBufferSize);
		}
		if (ScratchBufferSize < ScratchSize)
		{
			ScratchBufferSize = ScratchSize;
			ScratchBuffer = MakeUnique<uint8[]>(ScratchBufferSize);
		}
	}
};

/**
 * Class to handle correctly reading from a compressed file within a pak
 */
template< typename EncryptionPolicy = FPakNoEncryption >
class FPakCompressedReaderPolicy
{
public:
	class FPakUncompressTask : public FNonAbandonableTask
	{
	public:
		uint8* UncompressedBuffer;
		int32 UncompressedSize;
		uint8* CompressedBuffer;
		int32 CompressedSize;
		ECompressionFlags Flags;
		void* CopyOut;
		int64 CopyOffset;
		int64 CopyLength;

		void DoWork()
		{
			// Decrypt and uncompress from memory to memory.
			int64 EncryptionSize = EncryptionPolicy::AlignReadRequest(CompressedSize);
			EncryptionPolicy::DecryptBlock(CompressedBuffer, EncryptionSize);
			FCompression::UncompressMemory(Flags, UncompressedBuffer, UncompressedSize, CompressedBuffer, CompressedSize, false, FPlatformMisc::GetPlatformCompression()->GetCompressionBitWindow());
			if (CopyOut)
			{
				FMemory::Memcpy(CopyOut, UncompressedBuffer + CopyOffset, CopyLength);
			}
		}

		FORCEINLINE TStatId GetStatId() const
		{
			// TODO: This is called too early in engine startup.
			return TStatId();
			//RETURN_QUICK_DECLARE_CYCLE_STAT(FPakUncompressTask, STATGROUP_ThreadPoolAsyncTasks);
		}
	};

	FPakCompressedReaderPolicy(const FPakFile& InPakFile, const FPakEntry& InPakEntry, FArchive* InPakReader)
		: PakFile(InPakFile)
		, PakEntry(InPakEntry)
		, PakReader(InPakReader)
	{
	}

	/** Pak file that owns this file data. */
	const FPakFile& PakFile;
	/** Pak file entry for this file. */
	const FPakEntry& PakEntry;
	/** Pak file archive to read the data from. */
	FArchive* PakReader;

	FORCEINLINE int64 FileSize() const
	{
		return PakEntry.UncompressedSize;
	}

	void Serialize(int64 DesiredPosition, void* V, int64 Length)
	{
		const int32 CompressionBlockSize = PakEntry.CompressionBlockSize;
		uint32 CompressionBlockIndex = DesiredPosition / CompressionBlockSize;
		uint8* WorkingBuffers[2];
		int64 DirectCopyStart = DesiredPosition % PakEntry.CompressionBlockSize;
		FAsyncTask<FPakUncompressTask> UncompressTask;
		FCompressionScratchBuffers& ScratchSpace = FCompressionScratchBuffers::Get();
		bool bStartedUncompress = false;

		int64 WorkingBufferRequiredSize = FCompression::CompressMemoryBound((ECompressionFlags)PakEntry.CompressionMethod, CompressionBlockSize, FPlatformMisc::GetPlatformCompression()->GetCompressionBitWindow());
		WorkingBufferRequiredSize = EncryptionPolicy::AlignReadRequest(WorkingBufferRequiredSize);
		ScratchSpace.EnsureBufferSpace(CompressionBlockSize, WorkingBufferRequiredSize * 2);
		WorkingBuffers[0] = ScratchSpace.ScratchBuffer.Get();
		WorkingBuffers[1] = ScratchSpace.ScratchBuffer.Get() + WorkingBufferRequiredSize;

		while (Length > 0)
		{
			const FPakCompressedBlock& Block = PakEntry.CompressionBlocks[CompressionBlockIndex];
			int64 Pos = CompressionBlockIndex * CompressionBlockSize;
			int64 CompressedBlockSize = Block.CompressedEnd - Block.CompressedStart;
			int64 UncompressedBlockSize = FMath::Min<int64>(PakEntry.UncompressedSize - Pos, PakEntry.CompressionBlockSize);
			int64 ReadSize = EncryptionPolicy::AlignReadRequest(CompressedBlockSize);
			int64 WriteSize = FMath::Min<int64>(UncompressedBlockSize - DirectCopyStart, Length);
			PakReader->Seek(Block.CompressedStart);
			PakReader->Serialize(WorkingBuffers[CompressionBlockIndex & 1], ReadSize);
			if (bStartedUncompress)
			{
				UncompressTask.EnsureCompletion();
				bStartedUncompress = false;
			}

			FPakUncompressTask& TaskDetails = UncompressTask.GetTask();
			if (DirectCopyStart == 0 && Length >= CompressionBlockSize)
			{
				// Block can be decompressed directly into the output buffer.
				TaskDetails.Flags = (ECompressionFlags)PakEntry.CompressionMethod;
				TaskDetails.UncompressedBuffer = (uint8*)V;
				TaskDetails.UncompressedSize = UncompressedBlockSize;
				TaskDetails.CompressedBuffer = WorkingBuffers[CompressionBlockIndex & 1];
				TaskDetails.CompressedSize = CompressedBlockSize;
				TaskDetails.CopyOut = nullptr;
			}
			else
			{
				// Block needs to be decompressed to a temp buffer, then copied out.
				TaskDetails.Flags = (ECompressionFlags)PakEntry.CompressionMethod;
				TaskDetails.UncompressedBuffer = ScratchSpace.TempBuffer.Get();
				TaskDetails.UncompressedSize = UncompressedBlockSize;
				TaskDetails.CompressedBuffer = WorkingBuffers[CompressionBlockIndex & 1];
				TaskDetails.CompressedSize = CompressedBlockSize;
				TaskDetails.CopyOut = V;
				TaskDetails.CopyOffset = DirectCopyStart;
				TaskDetails.CopyLength = WriteSize;
			}

			if (Length == WriteSize)
			{
				UncompressTask.StartSynchronousTask();
			}
			else
			{
				UncompressTask.StartBackgroundTask();
			}
			bStartedUncompress = true;
			V = (void*)((uint8*)V + WriteSize);
			Length -= WriteSize;
			DirectCopyStart = 0;
			++CompressionBlockIndex;
		}

		if (bStartedUncompress)
		{
			UncompressTask.EnsureCompletion();
		}
	}
};

bool FPakEntry::VerifyPakEntriesMatch(const FPakEntry& FileEntryA, const FPakEntry& FileEntryB)
{
	bool bResult = true;
	if (FileEntryA.Size != FileEntryB.Size)
	{
		UE_LOG(LogPakFile, Error, TEXT("Pak header file size mismatch, got: %lld, expected: %lld"), FileEntryB.Size, FileEntryA.Size);
		bResult = false;
	}
	if (FileEntryA.UncompressedSize != FileEntryB.UncompressedSize)
	{
		UE_LOG(LogPakFile, Error, TEXT("Pak header uncompressed file size mismatch, got: %lld, expected: %lld"), FileEntryB.UncompressedSize, FileEntryA.UncompressedSize);
		bResult = false;
	}
	if (FileEntryA.CompressionMethod != FileEntryB.CompressionMethod)
	{
		UE_LOG(LogPakFile, Error, TEXT("Pak header file compression method mismatch, got: %d, expected: %d"), FileEntryB.CompressionMethod, FileEntryA.CompressionMethod);
		bResult = false;
	}
	if (FMemory::Memcmp(FileEntryA.Hash, FileEntryB.Hash, sizeof(FileEntryA.Hash)) != 0)
	{
		UE_LOG(LogPakFile, Error, TEXT("Pak file hash does not match its index entry"));
		bResult = false;
	}
	return bResult;
}

bool FPakPlatformFile::IsNonPakFilenameAllowed(const FString& InFilename)
{
	bool bAllowed = true;

#if EXCLUDE_NONPAK_UE_EXTENSIONS
	if (PakFiles.Num() || UE_BUILD_SHIPPING)
	{
		FName Ext = FName(*FPaths::GetExtension(InFilename));
		bAllowed = !ExcludedNonPakExtensions.Contains(Ext);
	}
#endif

	FFilenameSecurityDelegate& FilenameSecurityDelegate = GetFilenameSecurityDelegate();
	if (bAllowed)
	{
		if (FilenameSecurityDelegate.IsBound())
		{
			bAllowed = FilenameSecurityDelegate.Execute(*InFilename);
		}
	}

	return bAllowed;
}

#if IS_PROGRAM
FPakFile::FPakFile(const TCHAR* Filename, bool bIsSigned)
	: PakFilename(Filename)
	, PakFilenameName(Filename)
	, CachedTotalSize(0)
	, bSigned(bIsSigned)
	, bIsValid(false)
{
	FArchive* Reader = GetSharedReader(NULL);
	if (Reader)
	{
		Timestamp = IFileManager::Get().GetTimeStamp(Filename);
		Initialize(Reader);
	}
}
#endif

FPakFile::FPakFile(IPlatformFile* LowerLevel, const TCHAR* Filename, bool bIsSigned)
	: PakFilename(Filename)
	, PakFilenameName(Filename)
	, CachedTotalSize(0)
	, bSigned(bIsSigned)
	, bIsValid(false)
{
	FArchive* Reader = GetSharedReader(LowerLevel);
	if (Reader)
	{
		Timestamp = LowerLevel->GetTimeStamp(Filename);
		Initialize(Reader);
	}
}

#if WITH_EDITOR
FPakFile::FPakFile(FArchive* Archive)
	: bSigned(false)
	, bIsValid(false)
{
	Initialize(Archive);
}
#endif

FPakFile::~FPakFile()
{
}

FArchive* FPakFile::CreatePakReader(const TCHAR* Filename)
{
	FArchive* ReaderArchive = IFileManager::Get().CreateFileReader(Filename);
	return SetupSignedPakReader(ReaderArchive, Filename);
}

FArchive* FPakFile::CreatePakReader(IFileHandle& InHandle, const TCHAR* Filename)
{
	FArchive* ReaderArchive = new FArchiveFileReaderGeneric(&InHandle, Filename, InHandle.Size());
	return SetupSignedPakReader(ReaderArchive, Filename);
}

FArchive* FPakFile::SetupSignedPakReader(FArchive* ReaderArchive, const TCHAR* Filename)
{
	if (FPlatformProperties::RequiresCookedData())
	{
		if (bSigned || FParse::Param(FCommandLine::Get(), TEXT("signedpak")) || FParse::Param(FCommandLine::Get(), TEXT("signed")))
		{
			if (!Decryptor)
			{
				Decryptor = MakeUnique<FChunkCacheWorker>(ReaderArchive, Filename);
			}
			ReaderArchive = new FSignedArchiveReader(ReaderArchive, Decryptor.Get());
		}
	}
	return ReaderArchive;
}

void FPakFile::Initialize(FArchive* Reader)
{
	CachedTotalSize = Reader->TotalSize();

	if (CachedTotalSize < Info.GetSerializedSize())
	{
		UE_LOG(LogPakFile, Fatal, TEXT("Corrupted pak file '%s' (too short). Verify your installation."), *PakFilename);
	}
	else
	{
		// Serialize trailer and check if everything is as expected.
		Reader->Seek(CachedTotalSize - Info.GetSerializedSize());
		Info.Serialize(*Reader);
		UE_CLOG(Info.Magic != FPakInfo::PakFile_Magic, LogPakFile, Fatal, TEXT("Trailing magic number (%u) in '%s' is different than the expected one. Verify your installation."), Info.Magic, *PakFilename);
		UE_CLOG(!(Info.Version >= FPakInfo::PakFile_Version_Initial && Info.Version <= FPakInfo::PakFile_Version_Latest), LogPakFile, Fatal, TEXT("Invalid pak file version (%d) in '%s'. Verify your installation."), Info.Version, *PakFilename);
		UE_CLOG((Info.bEncryptedIndex == 1) && (FPakPlatformFile::GetPakEncryptionKey() == nullptr), LogPakFile, Fatal, TEXT("Index of pak file '%s' is encrypted, but this executable doesn't have any valid decryption keys"), *PakFilename);

		LoadIndex(Reader);
		// LoadIndex should crash in case of an error, so just assume everything is ok if we got here.
		bIsValid = true;

		if (FParse::Param(FCommandLine::Get(), TEXT("checkpak")))
		{
			ensure(Check());
		}
	}
}

void FPakFile::LoadIndex(FArchive* Reader)
{
	if (CachedTotalSize < (Info.IndexOffset + Info.IndexSize))
	{
		UE_LOG(LogPakFile, Fatal, TEXT("Corrupted index offset in pak file."));
	}
	else
	{
		// Load the index into memory first.
		Reader->Seek(Info.IndexOffset);
		TArray<uint8> IndexData;
		IndexData.AddUninitialized(Info.IndexSize);
		Reader->Serialize(IndexData.GetData(), Info.IndexSize);
		FMemoryReader IndexReader(IndexData);

		// Decrypt if necessary.
		if (Info.bEncryptedIndex)
		{
			DecryptData(IndexData.GetData(), Info.IndexSize);
		}

		// Check the SHA1 value.
		uint8 IndexHash[20];
		FSHA1::HashBuffer(IndexData.GetData(), IndexData.Num(), IndexHash);
		if (FMemory::Memcmp(IndexHash, Info.IndexHash, sizeof(IndexHash)) != 0)
		{
			UE_LOG(LogPakFile, Fatal, TEXT("Corrupted index in pak file (SHA1 mismatch)."));
		}

		// Read the default mount point and all entries.
		int32 NumEntries = 0;
		IndexReader << MountPoint;
		IndexReader << NumEntries;

		MakeDirectoryFromPath(MountPoint);
		// Allocate enough memory to hold all entries (and not reallocate while they're being added to it).
		Files.Empty(NumEntries);

		for (int32 EntryIndex = 0; EntryIndex < NumEntries; EntryIndex++)
		{
			// Serialize from memory.
			FPakEntry Entry;
			FString Filename;
			IndexReader << Filename;
			Entry.Serialize(IndexReader, Info.Version);

			// Add new file info.
			Files.Add(Entry);

			// Construct an index of all directories in the pak file.
			FString Path = FPaths::GetPath(Filename);
			MakeDirectoryFromPath(Path);
			FPakDirectory* Directory = Index.Find(Path);
			if (Directory != NULL)
			{
				Directory->Add(FPaths::GetCleanFilename(Filename), &Files.Last());
			}
			else
			{
				FPakDirectory& NewDirectory = Index.Add(Path);
				NewDirectory.Add(FPaths::GetCleanFilename(Filename), &Files.Last());

				// Add the parent directories up to the mount point.
				while (MountPoint != Path)
				{
					Path = Path.Left(Path.Len() - 1);
					int32 Offset = 0;
					if (Path.FindLastChar('/', Offset))
					{
						Path = Path.Left(Offset);
						MakeDirectoryFromPath(Path);
						if (Index.Find(Path) == NULL)
						{
							Index.Add(Path);
						}
					}
					else
					{
						Path = MountPoint;
					}
				}
			}
		}
	}
}

bool FPakFile::Check()
{
	UE_LOG(LogPakFile, Display, TEXT("Checking pak file \"%s\". This may take a while..."), *PakFilename);
	FArchive& PakReader = *GetSharedReader(NULL);
	int32 ErrorCount = 0;
	int32 FileCount = 0;

	for (FPakFile::FFileIterator It(*this); It; ++It, ++FileCount)
	{
		const FPakEntry& Entry = It.Info();
		void* FileContents = FMemory::Malloc(Entry.Size);
		PakReader.Seek(Entry.Offset);
		FPakEntry EntryInfo;
		EntryInfo.Serialize(PakReader, GetInfo().Version);
		if (EntryInfo != Entry)
		{
			UE_LOG(LogPakFile, Error, TEXT("Serialized hash mismatch for \"%s\"."), *It.Filename());
			ErrorCount++;
		}
		PakReader.Serialize(FileContents, Entry.Size);

		uint8 TestHash[20];
		FSHA1::HashBuffer(FileContents, Entry.Size, TestHash);
		if (FMemory::Memcmp(TestHash, Entry.Hash, sizeof(TestHash)) != 0)
		{
			UE_LOG(LogPakFile, Error, TEXT("Hash mismatch for \"%s\"."), *It.Filename());
			ErrorCount++;
		}
		else
		{
			UE_LOG(LogPakFile, Display, TEXT("\"%s\" OK."), *It.Filename());
		}
		FMemory::Free(FileContents);
	}
	if (ErrorCount == 0)
	{
		UE_LOG(LogPakFile, Display, TEXT("Pak file \"%s\" healthy, %d files checked."), *PakFilename, FileCount);
	}
	else
	{
		UE_LOG(LogPakFile, Display, TEXT("Pak file \"%s\" corrupted (%d errors out of %d files checked)."), *PakFilename, ErrorCount, FileCount);
	}

	return ErrorCount == 0;
}

#if DO_CHECK
/**
 * FThreadCheckingArchiveProxy - checks that the inner archive is only used from the specified thread ID
 */
class FThreadCheckingArchiveProxy : public FArchiveProxy
{
public:

	const uint32 ThreadId;
	FArchive* InnerArchivePtr;

	FThreadCheckingArchiveProxy(FArchive* InReader, uint32 InThreadId)
		: FArchiveProxy(*InReader)
		, ThreadId(InThreadId)
		, InnerArchivePtr(InReader)
	{}

	virtual ~FThreadCheckingArchiveProxy()
	{
		if (InnerArchivePtr)
		{
			delete InnerArchivePtr;
		}
	}

	//~ Begin FArchiveProxy Interface
	virtual void Serialize(void* Data, int64 Length) override
	{
		if (FPlatformTLS::GetCurrentThreadId() != ThreadId)
		{
			UE_LOG(LogPakFile, Error, TEXT("Attempted serialize using thread-specific pak file reader on the wrong thread. Reader for thread %d used by thread %d."), ThreadId, FPlatformTLS::GetCurrentThreadId());
		}
		InnerArchive.Serialize(Data, Length);
	}

	virtual void Seek(int64 InPos) override
	{
		if (FPlatformTLS::GetCurrentThreadId() != ThreadId)
		{
			UE_LOG(LogPakFile, Error, TEXT("Attempted seek using thread-specific pak file reader on the wrong thread. Reader for thread %d used by thread %d."), ThreadId, FPlatformTLS::GetCurrentThreadId());
		}
		InnerArchive.Seek(InPos);
	}
	//~ End FArchiveProxy Interface
};
#endif //DO_CHECK

FArchive* FPakFile::GetSharedReader(IPlatformFile* LowerLevel)
{
	uint32 Thread = FPlatformTLS::GetCurrentThreadId();
	FArchive* PakReader = NULL;
	{
		FScopeLock ScopedLock(&CriticalSection);
		TUniquePtr<FArchive>* ExistingReader = ReaderMap.Find(Thread);
		if (ExistingReader)
		{
			PakReader = ExistingReader->Get();
		}
	}
	if (!PakReader)
	{
		// Create a new FArchive reader and pass it to the new handle.
		if (LowerLevel != NULL)
		{
			IFileHandle* PakHandle = LowerLevel->OpenRead(*GetFilename());
			if (PakHandle)
			{
				PakReader = CreatePakReader(*PakHandle, *GetFilename());
			}
		}
		else
		{
			PakReader = CreatePakReader(*GetFilename());
		}
		if (!PakReader)
		{
			UE_LOG(LogPakFile, Fatal, TEXT("Unable to create pak \"%s\" handle"), *GetFilename());
		}
		{
			FScopeLock ScopedLock(&CriticalSection);
#if DO_CHECK
			ReaderMap.Emplace(Thread, new FThreadCheckingArchiveProxy(PakReader, Thread));
#else //DO_CHECK
			ReaderMap.Emplace(Thread, PakReader);
#endif //DO_CHECK
		}
	}
	return PakReader;
}

#if !UE_BUILD_SHIPPING
class FPakExec : private FSelfRegisteringExec
{
	FPakPlatformFile& PlatformFile;

public:

	FPakExec(FPakPlatformFile& InPlatformFile)
		: PlatformFile(InPlatformFile)
	{}

	/** Console commands **/
	virtual bool Exec(UWorld* InWorld, const TCHAR* Cmd, FOutputDevice& Ar) override
	{
		if (FParse::Command(&Cmd, TEXT("Mount")))
		{
			PlatformFile.HandleMountCommand(Cmd, Ar);
			return true;
		}
		if (FParse::Command(&Cmd, TEXT("Unmount")))
		{
			PlatformFile.HandleUnmountCommand(Cmd, Ar);
			return true;
		}
		else if (FParse::Command(&Cmd, TEXT("PakList")))
		{
			PlatformFile.HandlePakListCommand(Cmd, Ar);
			return true;
		}
		return false;
	}
};
static TUniquePtr<FPakExec> GPakExec;

void FPakPlatformFile::HandleMountCommand(const TCHAR* Cmd, FOutputDevice& Ar)
{
	const FString PakFilename = FParse::Token(Cmd, false);
	if (!PakFilename.IsEmpty())
	{
		const FString MountPoint = FParse::Token(Cmd, false);
		Mount(*PakFilename, 0, MountPoint.IsEmpty() ? NULL : *MountPoint);
	}
}

void FPakPlatformFile::HandleUnmountCommand(const TCHAR* Cmd, FOutputDevice& Ar)
{
	const FString PakFilename = FParse::Token(Cmd, false);
	if (!PakFilename.IsEmpty())
	{
		Unmount(*PakFilename);
	}
}

void FPakPlatformFile::HandlePakListCommand(const TCHAR* Cmd, FOutputDevice& Ar)
{
	TArray<FPakListEntry> Paks;
	GetMountedPaks(Paks);
	for (auto Pak : Paks)
	{
		Ar.Logf(TEXT("%s Mounted to %s"), *Pak.PakFile->GetFilename(), *Pak.PakFile->GetMountPoint());
	}
}
#endif // !UE_BUILD_SHIPPING

FPakPlatformFile::FPakPlatformFile()
	: LowerLevel(NULL)
	, bSigned(false)
{
}

FPakPlatformFile::~FPakPlatformFile()
{
	FCoreDelegates::OnMountPak.Unbind();
	FCoreDelegates::OnUnmountPak.Unbind();

#if USE_PAK_PRECACHE
	FPakPrecacher::Shutdown();
#endif
	{
		FScopeLock ScopedLock(&PakListCritical);
		for (int32 PakFileIndex = 0; PakFileIndex < PakFiles.Num(); PakFileIndex++)
		{
			delete PakFiles[PakFileIndex].PakFile;
			PakFiles[PakFileIndex].PakFile = nullptr;
		}
	}
}

void FPakPlatformFile::FindPakFilesInDirectory(IPlatformFile* LowLevelFile, const TCHAR* Directory, TArray<FString>& OutPakFiles)
{
	// Helper class to find all pak files.
	class FPakSearchVisitor : public IPlatformFile::FDirectoryVisitor
	{
		TArray<FString>& FoundPakFiles;
		IPlatformChunkInstall* ChunkInstall;
	public:
		FPakSearchVisitor(TArray<FString>& InFoundPakFiles, IPlatformChunkInstall* InChunkInstall)
			: FoundPakFiles(InFoundPakFiles)
			, ChunkInstall(InChunkInstall)
		{}
		virtual bool Visit(const TCHAR* FilenameOrDirectory, bool bIsDirectory)
		{
			if (bIsDirectory == false)
			{
				FString Filename(FilenameOrDirectory);
				if (FPaths::GetExtension(Filename) == TEXT("pak"))
				{
					// If a platform supports chunk-style installs, make sure that the chunk a pak file resides in is actually fully installed before accepting pak files from it.
					if (ChunkInstall)
					{
						FString ChunkIdentifier(TEXT("pakchunk"));
						FString BaseFilename = FPaths::GetBaseFilename(Filename);
						if (BaseFilename.StartsWith(ChunkIdentifier))
						{
							int32 DelimiterIndex = 0;
							int32 StartOfChunkIndex = ChunkIdentifier.Len();

							BaseFilename.FindChar(TEXT('-'), DelimiterIndex);
							FString ChunkNumberString = BaseFilename.Mid(StartOfChunkIndex, DelimiterIndex - StartOfChunkIndex);
							int32 ChunkNumber = 0;
							TTypeFromString<int32>::FromString(ChunkNumber, *ChunkNumberString);
							if (ChunkInstall->GetChunkLocation(ChunkNumber) == EChunkLocation::NotAvailable)
							{
								return true;
							}
						}
					}
					FoundPakFiles.Add(Filename);
				}
			}
			return true;
		}
	};
	// Find all pak files.
	FPakSearchVisitor Visitor(OutPakFiles, FPlatformMisc::GetPlatformChunkInstall());
	LowLevelFile->IterateDirectoryRecursively(Directory, Visitor);
}

void FPakPlatformFile::FindAllPakFiles(IPlatformFile* LowLevelFile, const TArray<FString>& PakFolders, TArray<FString>& OutPakFiles)
{
	// Find pak files from the specified directories.
	for (int32 FolderIndex = 0; FolderIndex < PakFolders.Num(); ++FolderIndex)
	{
		FindPakFilesInDirectory(LowLevelFile, *PakFolders[FolderIndex], OutPakFiles);
	}
}

void FPakPlatformFile::GetPakFolders(const TCHAR* CmdLine, TArray<FString>& OutPakFolders)
{
#if !UE_BUILD_SHIPPING
	// Command line folders
	FString PakDirs;
	if (FParse::Value(CmdLine, TEXT("-pakdir="), PakDirs))
	{
		TArray<FString> CmdLineFolders;
		PakDirs.ParseIntoArray(CmdLineFolders, TEXT("*"), true);
		OutPakFolders.Append(CmdLineFolders);
	}
#endif

	// @todo plugin urgent: Needs to handle plugin Pak directories, too
	// Hardcoded locations
	OutPakFolders.Add(FString::Printf(TEXT("%sPaks/"), *FPaths::ProjectContentDir()));
	OutPakFolders.Add(FString::Printf(TEXT("%sPaks/"), *FPaths::ProjectSavedDir()));
	OutPakFolders.Add(FString::Printf(TEXT("%sPaks/"), *FPaths::EngineContentDir()));
}

bool FPakPlatformFile::CheckIfPakFilesExist(IPlatformFile* LowLevelFile, const TArray<FString>& PakFolders)
{
	TArray<FString> FoundPakFiles;
	FindAllPakFiles(LowLevelFile, PakFolders, FoundPakFiles);
	return FoundPakFiles.Num() > 0;
}

bool FPakPlatformFile::ShouldBeUsed(IPlatformFile* Inner, const TCHAR* CmdLine) const
{
	bool Result = false;
	if (FPlatformProperties::RequiresCookedData() && !FParse::Param(CmdLine, TEXT("NoPak")))
	{
		TArray<FString> PakFolders;
		GetPakFolders(CmdLine, PakFolders);
		Result = CheckIfPakFilesExist(Inner, PakFolders);
	}
	return Result;
}

bool FPakPlatformFile::Initialize(IPlatformFile* Inner, const TCHAR* CmdLine)
{
	// Inner is required.
	check(Inner != NULL);
	LowerLevel = Inner;

#if EXCLUDE_NONPAK_UE_EXTENSIONS
	// Extensions for file types that should only ever be in a pak file. Used to stop unnecessary access to the lower level platform file.
	ExcludedNonPakExtensions.Add(TEXT("uasset"));
	ExcludedNonPakExtensions.Add(TEXT("umap"));
	ExcludedNonPakExtensions.Add(TEXT("ubulk"));
	ExcludedNonPakExtensions.Add(TEXT("uexp"));
#endif

	FEncryptionKey DecryptionKey;
	FString PakSigningKeyExponent, PakSigningKeyModulus;
	GetPakSigningKeys(PakSigningKeyExponent, PakSigningKeyModulus);
	DecryptionKey.Exponent.Parse(PakSigningKeyExponent);
	DecryptionKey.Modulus.Parse(PakSigningKeyModulus);

	bSigned = !DecryptionKey.Exponent.IsZero() && !DecryptionKey.Modulus.IsZero();

	bool bMountPaks = true;
	TArray<FString> PaksToLoad;
#if !UE_BUILD_SHIPPING
	// Optionally get a list of pak filenames to load; only these paks will be mounted.
	FString CmdLinePaksToLoad;
	if (FParse::Value(CmdLine, TEXT("-paklist="), CmdLinePaksToLoad))
	{
		CmdLinePaksToLoad.ParseIntoArray(PaksToLoad, TEXT("+"), true);
	}

	// If we are using a fileserver, don't mount paks automatically. We only want to read files from the server.
	FString FileHostIP;
	const bool bCookOnTheFly = FParse::Value(FCommandLine::Get(), TEXT("filehostip"), FileHostIP);
	const bool bPreCookedNetwork = FParse::Param(FCommandLine::Get(), TEXT("precookednetwork"));
	if (bPreCookedNetwork)
	{
		// Precooked network builds are dependent on cook on the fly.
		check(bCookOnTheFly);
	}
	bMountPaks &= (!bCookOnTheFly || bPreCookedNetwork);
#endif

	if (bMountPaks)
	{
		// Find and mount pak files from the specified directories.
		TArray<FString> PakFolders;
		GetPakFolders(CmdLine, PakFolders);
		TArray<FString> FoundPakFiles;
		FindAllPakFiles(LowerLevel, PakFolders, FoundPakFiles);
		// Sort in descending order.
		FoundPakFiles.Sort(TGreater<FString>());
		// Mount all found pak files.
		for (int32 PakFileIndex = 0; PakFileIndex < FoundPakFiles.Num(); PakFileIndex++)
		{
			const FString& PakFilename = FoundPakFiles[PakFileIndex];
			bool bLoadPak = true;
			if (PaksToLoad.Num() && !PaksToLoad.Contains(FPaths::GetBaseFilename(PakFilename)))
			{
				bLoadPak = false;
			}
			if (bLoadPak)
			{
				// Hardcode the default load ordering: game main pak -> game content -> engine content -> saved dir.
				// It would be better to make this configurable, but not even the config system is initialized here, so we can't do that.
				uint32 PakOrder = 0;
				if (PakFilename.StartsWith(FString::Printf(TEXT("%sPaks/%s-"), *FPaths::ProjectContentDir(), FApp::GetProjectName())))
				{
					PakOrder = 4;
				}
				else if (PakFilename.StartsWith(FPaths::ProjectContentDir()))
				{
					PakOrder = 3;
				}
				else if (PakFilename.StartsWith(FPaths::EngineContentDir()))
				{
					PakOrder = 2;
				}
				else if (PakFilename.StartsWith(FPaths::ProjectSavedDir()))
				{
					PakOrder = 1;
				}

				Mount(*PakFilename, PakOrder);
			}
		}
	}

#if !UE_BUILD_SHIPPING
	GPakExec = MakeUnique<FPakExec>(*this);
#endif // !UE_BUILD_SHIPPING

	FCoreDelegates::OnMountPak.BindRaw(this, &FPakPlatformFile::HandleMountPakDelegate);
	FCoreDelegates::OnUnmountPak.BindRaw(this, &FPakPlatformFile::HandleUnmountPakDelegate);

	return !!LowerLevel;
}

void FPakPlatformFile::InitializeNewAsyncIO()
{
#if USE_PAK_PRECACHE
	if (!WITH_EDITOR && FPlatformProcess::SupportsMultithreading() && !FParse::Param(FCommandLine::Get(), TEXT("FileOpenLog")))
	{
		FEncryptionKey DecryptionKey;
		FString PakSigningKeyExponent, PakSigningKeyModulus;
		GetPakSigningKeys(PakSigningKeyExponent, PakSigningKeyModulus);
		DecryptionKey.Exponent.Parse(PakSigningKeyExponent);
		DecryptionKey.Modulus.Parse(PakSigningKeyModulus);

		FPakPrecacher::Init(LowerLevel, DecryptionKey);
	}
	else
	{
		UE_CLOG(FParse::Param(FCommandLine::Get(), TEXT("FileOpenLog")), LogPakFile, Display, TEXT("Disabled pak precacher to get an accurate load order. This should only be used to collect gameopenorder.log, as it is quite slow."));
		GPakCache_Enable = 0;
	}
#endif
}

bool FPakPlatformFile::Mount(const TCHAR* InPakFilename, uint32 PakOrder, const TCHAR* InPath /*= NULL*/)
|
|
{
|
|
bool bSuccess = false;
|
|
TSharedPtr<IFileHandle> PakHandle = MakeShareable(LowerLevel->OpenRead(InPakFilename));
|
|
if (PakHandle.IsValid())
|
|
{
|
|
FPakFile* Pak = new FPakFile(LowerLevel, InPakFilename, bSigned);
|
|
if (Pak->IsValid())
|
|
{
|
|
if (InPath != NULL)
|
|
{
|
|
Pak->SetMountPoint(InPath);
|
|
}
|
|
FString PakFilename = InPakFilename;
|
|
if ( PakFilename.EndsWith(TEXT("_P.pak")) )
|
|
{
|
|
// Prioritize based on the chunk version number
|
|
// Default to version 1 for single patch system
|
|
uint32 ChunkVersionNumber = 1;
|
|
				FString StrippedPakFilename = PakFilename.LeftChop(6);
				// Search the stripped name so the "_P" suffix's own underscore isn't matched
				int32 VersionStartIndex = StrippedPakFilename.Find("_", ESearchCase::CaseSensitive, ESearchDir::FromEnd);
				if (VersionStartIndex != INDEX_NONE)
				{
					// Skip past the underscore itself so only the version digits remain
					FString VersionString = StrippedPakFilename.RightChop(VersionStartIndex + 1);
					if (VersionString.IsNumeric())
					{
						int32 ChunkVersionSigned = FCString::Atoi(*VersionString);
						if (ChunkVersionSigned >= 1)
						{
							// Increment by one so that the first patch file still gets more priority than the base pak file
							ChunkVersionNumber = (uint32)ChunkVersionSigned + 1;
						}
					}
				}
				PakOrder += 100 * ChunkVersionNumber;
			}
			{
				// Add new pak file
				FScopeLock ScopedLock(&PakListCritical);
				FPakListEntry Entry;
				Entry.ReadOrder = PakOrder;
				Entry.PakFile = Pak;
				PakFiles.Add(Entry);
				PakFiles.StableSort();
			}
			bSuccess = true;
		}
		else
		{
			UE_LOG(LogPakFile, Warning, TEXT("Failed to mount pak \"%s\", pak is invalid."), InPakFilename);
			// Don't leak the pak file object when mounting fails
			delete Pak;
		}
	}
	else
	{
		UE_LOG(LogPakFile, Warning, TEXT("Pak \"%s\" does not exist!"), InPakFilename);
	}
	return bSuccess;
}

bool FPakPlatformFile::Unmount(const TCHAR* InPakFilename)
{
#if USE_PAK_PRECACHE
	if (GPakCache_Enable)
	{
		FPakPrecacher::Get().Unmount(InPakFilename);
	}
#endif
	{
		FScopeLock ScopedLock(&PakListCritical);

		for (int32 PakIndex = 0; PakIndex < PakFiles.Num(); PakIndex++)
		{
			if (PakFiles[PakIndex].PakFile->GetFilename() == InPakFilename)
			{
				delete PakFiles[PakIndex].PakFile;
				PakFiles.RemoveAt(PakIndex);
				return true;
			}
		}
	}
	return false;
}

IFileHandle* FPakPlatformFile::CreatePakFileHandle(const TCHAR* Filename, FPakFile* PakFile, const FPakEntry* FileEntry)
{
	IFileHandle* Result = NULL;
	bool bNeedsDelete = true;
	FArchive* PakReader = PakFile->GetSharedReader(LowerLevel);

	// Create the handle.
	if (FileEntry->CompressionMethod != COMPRESS_None && PakFile->GetInfo().Version >= FPakInfo::PakFile_Version_CompressionEncryption)
	{
		if (FileEntry->bEncrypted)
		{
			Result = new FPakFileHandle< FPakCompressedReaderPolicy<FPakSimpleEncryption> >(*PakFile, *FileEntry, PakReader, bNeedsDelete);
		}
		else
		{
			Result = new FPakFileHandle< FPakCompressedReaderPolicy<> >(*PakFile, *FileEntry, PakReader, bNeedsDelete);
		}
	}
	else if (FileEntry->bEncrypted)
	{
		Result = new FPakFileHandle< FPakReaderPolicy<FPakSimpleEncryption> >(*PakFile, *FileEntry, PakReader, bNeedsDelete);
	}
	else
	{
		Result = new FPakFileHandle<>(*PakFile, *FileEntry, PakReader, bNeedsDelete);
	}

	return Result;
}

bool FPakPlatformFile::HandleMountPakDelegate(const FString& PakFilePath, uint32 PakOrder, IPlatformFile::FDirectoryVisitor* Visitor)
{
	bool bReturn = Mount(*PakFilePath, PakOrder);
	if (bReturn && Visitor != nullptr)
	{
		TArray<FPakListEntry> Paks;
		GetMountedPaks(Paks);
		// Find the single pak we just mounted
		for (auto Pak : Paks)
		{
			if (PakFilePath == Pak.PakFile->GetFilename())
			{
				// Get a list of all of the files in the pak
				for (FPakFile::FFileIterator It(*Pak.PakFile); It; ++It)
				{
					Visitor->Visit(*It.Filename(), false);
				}
				return true;
			}
		}
	}
	return bReturn;
}

bool FPakPlatformFile::HandleUnmountPakDelegate(const FString& PakFilePath)
{
	return Unmount(*PakFilePath);
}

IFileHandle* FPakPlatformFile::OpenRead(const TCHAR* Filename, bool bAllowWrite)
{
	IFileHandle* Result = NULL;
	FPakFile* PakFile = NULL;
	const FPakEntry* FileEntry = FindFileInPakFiles(Filename, &PakFile);
	if (FileEntry != NULL)
	{
		Result = CreatePakFileHandle(Filename, PakFile, FileEntry);
	}
	else
	{
		if (IsNonPakFilenameAllowed(Filename))
		{
			// Default to wrapped file
			Result = LowerLevel->OpenRead(Filename, bAllowWrite);
		}
	}
	return Result;
}

bool FPakPlatformFile::BufferedCopyFile(IFileHandle& Dest, IFileHandle& Source, const int64 FileSize, uint8* Buffer, const int64 BufferSize) const
{
	int64 RemainingSizeToCopy = FileSize;
	// Continue copying chunks using the buffer
	while (RemainingSizeToCopy > 0)
	{
		const int64 SizeToCopy = FMath::Min(BufferSize, RemainingSizeToCopy);
		if (Source.Read(Buffer, SizeToCopy) == false)
		{
			return false;
		}
		if (Dest.Write(Buffer, SizeToCopy) == false)
		{
			return false;
		}
		RemainingSizeToCopy -= SizeToCopy;
	}
	return true;
}

bool FPakPlatformFile::CopyFile(const TCHAR* To, const TCHAR* From, EPlatformFileRead ReadFlags, EPlatformFileWrite WriteFlags)
{
	bool Result = false;
	FPakFile* PakFile = NULL;
	const FPakEntry* FileEntry = FindFileInPakFiles(From, &PakFile);
	if (FileEntry != NULL)
	{
		// Copy from the pak file to LowerLevel.
		// Create handles for both files.
		TUniquePtr<IFileHandle> DestHandle(LowerLevel->OpenWrite(To, false, (WriteFlags & EPlatformFileWrite::AllowRead) != EPlatformFileWrite::None));
		TUniquePtr<IFileHandle> SourceHandle(CreatePakFileHandle(From, PakFile, FileEntry));

		if (DestHandle && SourceHandle)
		{
			const int64 BufferSize = 64 * 1024; // Copy in 64K chunks.
			uint8* Buffer = (uint8*)FMemory::Malloc(BufferSize);
			Result = BufferedCopyFile(*DestHandle, *SourceHandle, SourceHandle->Size(), Buffer, BufferSize);
			FMemory::Free(Buffer);
		}
	}
	else
	{
		Result = LowerLevel->CopyFile(To, From, ReadFlags, WriteFlags);
	}
	return Result;
}

/**
 * Module for the pak file
 */
class FPakFileModule : public IPlatformFileModule
{
public:
	virtual IPlatformFile* GetPlatformFile() override
	{
		static TUniquePtr<IPlatformFile> AutoDestroySingleton = MakeUnique<FPakPlatformFile>();
		return AutoDestroySingleton.Get();
	}
};

IMPLEMENT_MODULE(FPakFileModule, PakFile);