#rb Per.Larsson
#rnx
#preflight 62a9bff813004691f9830b8d
- I think when I was first adding it the type was originally a struct, but it was changed to an enum before submission and I failed to update the name.
#ROBOMERGE-AUTHOR: paul.chipchase
#ROBOMERGE-SOURCE: CL 20668490 via CL 20668521 via CL 20668532
#ROBOMERGE-BOT: UE5 (Release-Engine-Staging -> Main) (v955-20579017)
[CL 20669764 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-156312
#preflight 62a99553943e7bb256fe174c
### Problem
- Firstly, this fix is a hack and not intended for shipping in 5.1. The real fix is already planned work: moving the filtering from the push process so that it is applied to each payload when the package is saved, just as asset filtering is done at the moment.
-- The proper fix will take a bit of time and testing, so we need a quicker fix so that the virtualization process can be re-enabled to unblock some workflows.
- All new code is wrapped in a single preprocessor define 'ENABLE_FILTERING_HACK' making the code easy to identify and remove.
- Submitting this hack will cause the priority of the real fix to be bumped in the backlog so that we can remove the hack asap.
- The problem code comes from a pass we do on the payloads to check whether they are already in persistent storage. If a payload is, we simply change it to be virtualized and don't attempt to push it. In addition to fixing the filtering, there is also a work item to investigate whether we really want to keep this logic or remove it entirely. That might be looked into before the filtering fix, as an alternative way to remove the hack.
### Fix
- After we have queried the system to check which of the local payloads are already in persistent storage, BUT before we change them over to be virtualized we now run a new pass, checking which payloads should be filtered.
-- This is a bespoke code path expected only to be used here, so in this case it is not in the IVirtualizationSystem API but requires us to cast to FVirtualizationManager to gain access. This should stop anyone else from using the hack before it is removed.
- The filtering check mimics the filtering that would be run on a normal push operation.
- Once done we can check for any payload that would be filtered, and if so, set the status to not found. This prevents the payload from being changed to be virtualized and means it will be included in the normal push operation where it will be correctly filtered.
- None of this is the best way to do it, but it did seem like the easiest way to add the hack in isolation as no existing code was changed, so it should be simple to just 'delete' the hack when ready.
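The demotion pass described in the Fix section can be sketched as follows; the type and field names here are hypothetical stand-ins, not the engine's actual API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-ins for the real payload status types.
enum class EPayloadStatus { NotFound, FoundInPersistentStorage };

struct FPayloadInfo
{
    EPayloadStatus Status;
    bool bWouldBeFiltered; // result of the new filtering pass
};

// After querying persistent storage, but before marking payloads as
// virtualized, demote any payload that would be filtered to NotFound.
// It then flows through the normal push path, where the filtering is
// applied correctly.
void ApplyFilteringHack(std::vector<FPayloadInfo>& Payloads)
{
    for (FPayloadInfo& Info : Payloads)
    {
        if (Info.bWouldBeFiltered)
        {
            Info.Status = EPayloadStatus::NotFound;
        }
    }
}
```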
#ROBOMERGE-AUTHOR: paul.chipchase
#ROBOMERGE-SOURCE: CL 20667712 via CL 20667730 via CL 20667733
#ROBOMERGE-BOT: UE5 (Release-Engine-Staging -> Main) (v955-20579017)
[CL 20668639 by paul chipchase in ue5-main branch]
#rb trivial
#rnx
#preflight 62a88a39cf54a658ee0de5f7
#jira UE-156319
- Ideally we'd break the reporting down per backend, but at the moment the slow task system does not let us do that easily because it only updates the text every 0.2 seconds. If a very fast backend is followed by a slow one, we'd update the progress bar with the fast backend's name, then the update with the slow backend's name would be skipped, so the user would still see the fast backend's name while waiting on the slow one.
- I would rather show less info than risk showing incorrect info.
#ROBOMERGE-AUTHOR: paul.chipchase
#ROBOMERGE-SOURCE: CL 20647614 via CL 20647634 via CL 20647664
#ROBOMERGE-BOT: UE5 (Release-Engine-Staging -> Main) (v955-20579017)
[CL 20650901 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#jira UE-156189
#rnx
#preflight 62a35285ec7332a25c95d7bb
- Now that our internal projects are updated, we can enable the warning about projects still using the old settings.
- Note that this remains backwards compatible and the legacy settings will be used.
[CL 20595697 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#jira UE-156189
#rnx
#preflight 62a33c245a0ab464fb020897
- The main goal is to split the ini file option that selects the virtualization system to use (SystemName) from the options for our virtualization module.
- The old values should continue to be read and used, but if we detect that ini file values are still under [Core.ContentVirtualization] we will warn about it.
[CL 20594207 by paul chipchase in ue5-main branch]
#rb trivial
#rnx
#preflight 629893b95e715bebb2c2a529
- The DDC2 api is already heavily threaded so we can simply keep throwing new requests onto it and wait once all requests are made and not have to do any work ourselves.
- I did ask and was told that using the batch API provided by DDC2 would not improve performance, so for now I have gone with the simple change. We may still look into switching to the batch API in the future, as then we would have a reason to improve the DDC2 API.
[CL 20468437 by paul chipchase in ue5-main branch]
#rb trivial
#rnx
#preflight 628c9337a24685c36f50ec2f
- When the code was converted from a single batch to support multiple batches, I missed updating the check that returned true if all payloads were already uploaded. So once we were operating on multiple batches, if a batch was 100% already present in the backend we would return true and not process any further batches.
- Changed the 'return true' to a 'continue' so that subsequent batches are processed.
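The fix is a one-line control-flow change, sketched below with hypothetical stand-in types (the real code lives in the virtualization push path):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-ins for the real payload/batch types.
struct FPayload { bool bAlreadyUploaded; };
using FBatch = std::vector<FPayload>;

// Pushes any payload not already in the backend.
// Returns true once every batch has been processed.
bool PushBatches(const std::vector<FBatch>& Batches, int& OutNumPushed)
{
    OutNumPushed = 0;
    for (const FBatch& Batch : Batches)
    {
        bool bAllUploaded = true;
        for (const FPayload& Payload : Batch)
        {
            bAllUploaded &= Payload.bAlreadyUploaded;
        }

        if (bAllUploaded)
        {
            // Bug: 'return true;' here skipped all later batches.
            // Fix: 'continue' so the remaining batches are processed.
            continue;
        }

        for (const FPayload& Payload : Batch)
        {
            if (!Payload.bAlreadyUploaded)
            {
                ++OutNumPushed; // the real code pushes to the backend here
            }
        }
    }
    return true;
}
```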
[CL 20345282 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-151377
#preflight 628364050039ea57a52d6989
### Virtualization
- [Core.ContentVirtualization] in the engine ini file now supports an array called 'DisabledAsset' which can be used to name asset types that should not virtualize their payloads.
-- By default (in BaseEngine.ini) we have disabled the StaticMesh asset, as we know it will crash if a payload is missing, and the SoundWave asset, as it is still pending testing.
- This new way to disable virtualization is data driven. The older hard coded method has not been removed but will likely be reworked in a future submit.
- Now, when an editor bulkdata is adding its payload to the package trailer builder during package save, it will poll the virtualization system with a call to the new method ::IsDisabledForObject, passing in its owner.
-- If the owner is valid and was present in the 'DisabledAsset' array then the method will return true and the EPayloadFlags::DisableVirtualization flag will be applied.
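The new section might look like this in an engine ini file (the `+` array syntax is the usual Unreal ini convention; the two entries match the BaseEngine.ini defaults described above):

```ini
[Core.ContentVirtualization]
; Asset types whose payloads should not be virtualized
+DisabledAsset=StaticMesh
+DisabledAsset=SoundWave
```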
### Package Trailer
- The pre-existing functionality of the enum EPayloadFilter has been moved to a new enum EPayloadStorageType, as it only filters payloads based on their storage type.
- EPayloadFilter has been modified to filter payloads based on functionality. At the moment the only thing it can filter for is payloads that can be virtualized, but it is left open for future expansion.
- EPayloadFlags has been reduced to a uint16, with the remaining two bytes turned into a new member, EPayloadFilterReason.
- This new member allows us to record the exact reason why a payload is excluded from virtualization. If it is zero then the payload can virtualize; otherwise it will contain one or more reasons why the payload is being excluded. For this reason the enum is a bitfield.
- Added overloads of ::GetPayloads and ::GetNumPayloads that take an EPayloadFilter rather than an EPayloadStorageType.
- Added wrappers around all AccessMode types for FLookupTableEntry.
- FPackageTrailerBuilder has been extended to take a EPayloadFilterReason so that the caller can already provide a reason why the payload cannot be virtualized.
-- As a future piece of work this will probably be changed so that we ask the caller to pass in the owner UObject pointer instead; we would then process the filtering when building the package trailer to a) keep all of the filtering code in one place and b) keep the filtering consistent.
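The flag/reason packing described above can be sketched like this. The enum names come from the notes, but the specific values, bit layout, and individual filter reasons are illustrative assumptions, not the actual engine definitions:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative: the flags now fit in 16 bits.
enum class EPayloadFlags : uint16_t
{
    None                  = 0,
    DisableVirtualization = 1 << 0,
};

// Illustrative bitfield of filter reasons; zero means the payload may
// be virtualized, and each set bit records a reason it is excluded.
enum class EPayloadFilterReason : uint16_t
{
    None    = 0,
    Asset   = 1 << 0, // assumed: excluded via the DisabledAsset list
    Path    = 1 << 1, // hypothetical: excluded by path filtering
    MinSize = 1 << 2, // hypothetical: payload below a size threshold
};

inline EPayloadFilterReason operator|(EPayloadFilterReason A, EPayloadFilterReason B)
{
    return static_cast<EPayloadFilterReason>(
        static_cast<uint16_t>(A) | static_cast<uint16_t>(B));
}

// A payload can only be virtualized when no filter reason is recorded.
inline bool CanVirtualize(EPayloadFilterReason Reason)
{
    return Reason == EPayloadFilterReason::None;
}
```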
### PackageSubmissionChecks
- The virtualization process will now request the payloads that can be virtualized from the package trailer, which respects the new payload flag, rather than requesting all locally stored payloads.
### UObjects
- There is no need for the SoundWave or MeshDescription classes to opt out of virtualization on construction. This will be done when the package is saved and is now data driven rather than being hardcoded.
### DumpPackagePayloadInfo
- The command has been updated to also display the filter reasons applied to each payload
[CL 20240971 by paul chipchase in ue5-main branch]
#rb trivial
#rnx
#preflight 62820fb0046b81bf93921c6b
- When adding the check to see if a package file could be written to, I tried to do this as early as possible to avoid virtualizing payloads for packages that can't be modified.
- This didn't take into account submits via the editor, where the package is loaded, which means the editor will have the package file locked for edit.
- For now I have gone with the simple fix: we check whether we can write to the package once we have called ResetLoader on it. This means that we will virtualize the payloads but potentially skip removing them from the package file if another process holds a file lock.
- This approach will minimize impact to the users for now but we should revisit this in the future to try and reduce the error scope further.
[CL 20221653 by paul chipchase in ue5-main branch]
#rb trivial
#jira UE-143675
#rnx
#preflight 6274d9b7732d07cf93f55621
### Problem
- When using the standalone virtualization tool to virtualize packages directly from p4v, it is possible that the editor will have locked the package files. This means we cannot modify them to remove the payloads, making virtualization of the package file impossible.
- At some point in the future we will add the ability for the tool to ask the editor to relinquish the lock, OR we may move away from the editor permanently locking package files.
- As this problem will eventually be solved, we'd rather not present it to end users as an error, as that would cause them to view the tool as annoying and fiddly to use.
- So for now we treat this problem as a warning and allow the end user to continue and submit the package non-virtualized.
### Fix
- At the moment once we detect a package has local payloads that might need virtualization we check to see if the file could be written to.
- This might raise some false positives if the local payloads are filtered out or otherwise not supposed to be virtualized.
- At the moment this sort of filtering is done when pushing the payloads meaning we could end up pushing payloads then being unable to modify the package.
- A future work item will be added to clean this up (depending on when we start work to remove the problem entirely).
[CL 20073356 by paul chipchase in ue5-main branch]
#rb trivial
#rnx
#preflight 6274be518d32cd80d8cfeaa6
- Adding this so that, when debugging unexpected behavior, we can more easily track in a build machine log when a payload gets pulled.
[CL 20072222 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#jira UE-151000
#rnx
#preflight 627291732f6d177be3c292e3
- When submitting large batches, it was found that once a CL description reached a certain length, p4v would lock up attempting to display it in the history tab, so we want to be able to make smaller submits to keep the description to a manageable size.
- In addition, it is better for our server setup to make multiple smaller batch submits than insanely large ones.
- The max size of the batch can be set when creating the virtualization graph in the config file.
[CL 20055325 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-151000
#preflight 627271c83b2ed6f85363f53a
- In the old code, when submitting packages for virtualization we would load all local payloads into memory and then pass them to the virtualization system to be pushed to persistent storage. This works fine for an average user's submits, but when resaving large projects we can have submits with thousands of packages containing gigabytes of payloads.
- FPushRequest has been changed to accept either a payload directly (in the form of FCompressedBuffer) or accept a reference to a IPayloadProvider which will return the payload when asked.
- Most of the existing backends make no use of the batched pushing feature and only push one payload at a time. The default implementation will ask each request for its payload one at a time and then pass it to the backend for a single push. If the IPayloadProvider is being used, we will therefore only ever have one payload loaded at a time, avoiding memory spikes.
- The source control backend does currently implement the batched pushing feature. This backend writes each payload to disk before submitting them to Perforce, so all we have to do is make sure we request one payload, write it to disk, then discard the payload to avoid memory spikes.
- This change required FPushRequest to make its members private and provide accessors, as the caller shouldn't need to care whether the payload is loaded or comes via the provider.
- NOTE that this implementation is less efficient if we have packages containing many different payloads, as we will end up opening the package, parsing the trailer, and loading a payload many times over, whereas the original code would load all payloads from the file in one go. In practice, given the overhead of the source control backend, this didn't make a huge difference and is probably not worth optimizing at this point.
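The two request shapes can be sketched as follows, using minimal stand-ins for FCompressedBuffer, IPayloadProvider, and FPushRequest (the real types live in the virtualization module and are richer than this):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Stand-in for FCompressedBuffer.
using FCompressedBuffer = std::string;

// A provider loads its payload on demand, so only one payload needs
// to be resident at a time when requests are processed serially.
class IPayloadProvider
{
public:
    virtual ~IPayloadProvider() = default;
    virtual FCompressedBuffer RequestPayload() = 0;
};

// Members are private with accessors: callers should not need to care
// whether the payload is already loaded or comes via the provider.
class FPushRequest
{
public:
    explicit FPushRequest(FCompressedBuffer InPayload)
        : Payload(std::move(InPayload)) {}
    explicit FPushRequest(IPayloadProvider& InProvider)
        : Provider(&InProvider) {}

    FCompressedBuffer GetPayload()
    {
        if (Provider != nullptr)
        {
            return Provider->RequestPayload(); // loaded on demand
        }
        return Payload;
    }

private:
    FCompressedBuffer Payload;
    IPayloadProvider* Provider = nullptr;
};
```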
[CL 20040609 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-147948
#preflight 6271209be16e280be60b5a62
- In the worst case scenario (where a machine can only access the source control backend) we will want to limit the potential number of concurrent connections that they can make.
- In testing, not limiting the connections could cause occasional connection failures, and I imagine a full team running in the worst case scenario would quickly overwhelm the server.
- Currently we default to 8 connections (the limit allowed in UGS).
- The semaphore is a quick and dirty implementation similar to std::counting_semaphore. I am not spending much time on it as the worst case scenario is not expected to be common and if it does occur, the system is going to be painfully slow no matter how good the thread throttling.
### Future work
- This work could be taken a lot further by either
-- a) reusing connections rather than creating one for each pull, which will require changes to our source control api
-- b) gathering requests from many requesting threads once the limit has been reached and then sending the requests in batches
-- c) reserve one connection for the game thread to prevent the user waiting?
[CL 20024594 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-148223
#preflight 6268efeb2f53f9169aa9b5c5
- Moving this to use the format that the other command lines do will make it easier to find.
- Moved the parsing code from ::ApplySettingsFromCmdline to ::ApplyDebugSettingsFromCmdline to indicate that it is a debug command and not expected for production use.
- Removed ::ApplySettingsFromCmdline as we no longer have non-debug commandlines.
- There is no console command version because we cannot change the graph mid process.
[CL 19936091 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#rnx
#jira UE-148223
#preflight 6267c4e2853fdb6fddb20be3
- Remove the Core.ContentVirtualizationDebugOptions section from the ini files entirely.
- Remove the ::ApplyDebugSettingsFromConfigFiles method as we no longer load any debug values from the config files.
[CL 19923780 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#jira UE-148223
#rnx
#preflight 62666fc90634d0904cca8756
- It was probably a bad idea to have debug settings like this applied from the config file, as it would be fairly easy for people to accidentally forget about them or submit them.
- The single threaded command can now be set via the command line on start up using the '-VA-XXX' syntax that the system has started to adopt.
- Additionally it has been exposed as a console command so that it can be toggled on and off at runtime.
- Removed 'ForceSingleThreaded' from BaseEngine.ini and removed the parsing code for this from FVirtualizationManager::ApplyDebugSettingsFromConfigFiles.
- Removed 'FailPayloadPullOperations' from BaseEngine.ini; it hasn't been a valid setting for a while.
- Changed ConsoleCommands to ConsoleObjects and use it to store the base class IConsoleObject instead, so it can contain console variables.
- When pushing payloads we now only take the singlethreaded lock after checking if we have any backends/are enabled.
[CL 19900622 by paul chipchase in ue5-main branch]
#rb Per.Larsson
#jira UE-148223
#rnx
#preflight 62629b8fbc24759dc73452dc
- Moved the debug values to their own structure to make it clearer that they are for debug purposes.
- Moved the code registering the console commands to a specific method.
[CL 19898824 by paul chipchase in ue5-main branch]
#rb PJ.Kack
#jira UE-148223
#rnx
#preflight 626252fed2644b9ff7d3167b
- Added verbose logging to show when a payload is skipped due to the miss chance, or when a backend skips a payload due to the 'VA.MissBackends' command
[CL 19862821 by paul chipchase in ue5-main branch]