This allows skipping the basepass when the path tracer is active, reducing overhead.
VT feedback logic is currently identical to the rasterizer's and will not restart path tracer accumulation. This is not noticeable in the editor, as the right VT pages are loaded fairly quickly. In the offline case, this could potentially lead to some textures being blurrier for some samples, but with proper warmup this should not be noticeable either.
Note that the VT feedback logic is only executed for camera rays for now.
#rb Jeremy.Moore
[CL 27872518 by chris kulla in ue5-main branch]
The current logic was limiting the depth write to only happen for rays passing through the pixel square, but in practice it is more desirable for compositing to have the depth output in sync with the alpha channel. Keep the old logic around temporarily until the new behavior has been confirmed to produce the desired results.
#rb trivial
[CL 27768573 by chris kulla in ue5-main branch]
Implement a basic adaptive sampling strategy. This is not yet exposed in the UI, it must be enabled by the cvar: "r.PathTracing.AdaptiveSampling".
When enabled, the path tracer keeps track of a variance buffer. This variance buffer can then be used to predict the number of samples required to reach a prescribed error threshold. This is used to deactivate pixels that are deemed to be converged already, speeding up rendering of mostly converged areas.
The error estimate operates on multiple scales (mip maps) so that we have a more robust estimate of the variance even with very low sample counts. Currently the error estimate runs after each pass, and is used by the next sample to decide which pixels are still active.
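The variance tracking and convergence test described above can be sketched as follows. This is an illustrative model only, assuming a per-pixel running mean and variance maintained with Welford's online algorithm; the names (`PixelStats`, `AddSample`, `IsConverged`) are hypothetical and not the engine's actual code, and the multi-scale (mip map) aspect is omitted:

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Hypothetical per-pixel statistics for adaptive sampling (illustrative only).
struct PixelStats {
    double mean = 0.0, m2 = 0.0;
    int samples = 0;

    void AddSample(double luminance) {
        // Welford's online update for running mean and variance.
        ++samples;
        double delta = luminance - mean;
        mean += delta / samples;
        m2 += delta * (luminance - mean);
    }

    double Variance() const { return samples > 1 ? m2 / (samples - 1) : 0.0; }

    // The standard error of the mean shrinks as 1/sqrt(N), so a pixel can be
    // considered converged (and deactivated for the next pass) once
    // sqrt(Variance / N) falls below the prescribed relative threshold.
    bool IsConverged(double errorThreshold) const {
        if (samples < 2) return false;
        double stdError = std::sqrt(Variance() / samples);
        return stdError <= errorThreshold * std::max(mean, 1e-4);
    }
};
```

A flat pixel converges almost immediately, while a noisy pixel keeps accumulating samples until its standard error drops below the threshold.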
A proper integration with MRQ (in particular the reference motion blur mode) is not yet done, as this will require a different way of stepping through temporal samples. This will be investigated as part of the MRQ 2.0 interface.
This feature (along with a few other path tracer permutations) is now gated behind r.PathTracing.Experimental (off by default) to avoid bloating startup times.
The adaptive metric is relative to the current exposure level. This means the combination of adaptive sampling and auto-exposure has a feedback loop which could lead to unpredictable results, so manual exposure is recommended when using adaptive sampling. In particular, when using the adaptive sampling visualization modes with auto-exposure, a feedback effect can be observed as the visualization influences the exposure level. A proper fix would be to move the visualization of adaptive sampling after tonemapping. This is left as future work to minimize the spread of this feature for now.
#rb Patrick.Kelly
[CL 27697971 by chris kulla in ue5-main branch]
- modify versioning mechanisms to use UEMETADATA pragmas
- update commands which generate version.ush files to generate such pragmas instead of comments
- include the above pragma-driven version(s) in the input hash when preprocessed job cache is enabled (not needed for the disabled case since as before the entire contents of the version files are hashed)
- fix bug in stb_preprocessor that was causing it to stop expanding macros if any diagnostic was encountered (and rename the array field storing diagnostic messages to diagnostics, since it contains more than just errors)
#rb Yuriy.ODonnell
[CL 27515060 by dan elksnitis in ue5-main branch]
Clean up various small issues with heterogeneous volumes and remove unused arguments (PathRoughness and Bounce were passed around inconsistently but not used in the end).
#rb Patrick.Kelly
#jira UE-192885
[CL 27080180 by chris kulla in ue5-main branch]
This is much faster than Sobol and appears to have similar quality. The default is still Sobol for now until we can accurately quantify the convergence differences.
#rb none
[CL 26837412 by chris kulla in ue5-main branch]
The payload encoding for anisotropy did not allow representing 0.0 anisotropy exactly. This in turn could lead to NaNs when the world tangent aligned with the shading normal.
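The root cause can be illustrated with a toy 8-bit packing (the engine's actual payload encoding is not shown here; these functions are purely hypothetical). Mapping [-1,1] onto an even number of quantization levels places 0.0 exactly between two codes, so a round trip cannot return 0.0; reserving an exact midpoint fixes it:

```cpp
#include <cassert>
#include <cmath>

// Naive packing of anisotropy in [-1,1] to 8 bits: 0.0 maps to 127.5, which
// rounds to 128 and decodes to ~0.0039 instead of exactly zero.
int EncodeNaive(double a) { return (int)std::lround((a * 0.5 + 0.5) * 255.0); }
double DecodeNaive(int bits) { return (bits / 255.0) * 2.0 - 1.0; }

// One possible fix: quantize over 254 steps so code 127 decodes to exactly
// 0.0, while the endpoints -1 and +1 still round-trip exactly.
int EncodeFixed(double a) { return (int)std::lround((a * 0.5 + 0.5) * 254.0); }
double DecodeFixed(int bits) { return (bits / 254.0) * 2.0 - 1.0; }
```

With the naive scheme, an anisotropy of 0.0 silently becomes a small nonzero value after decoding, which is enough to trigger the NaN path described above.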
#rb trivial
[CL 26302420 by chris kulla in ue5-main branch]
This was actually caused by the Tangent Space Normal setting not being respected properly.
Also clean up unused code for SLW for the substrate case.
#rb Aleksander.Netzel
[CL 26190217 by chris kulla in ue5-main branch]
Note that the new code accepts a pair of random numbers in the unit square instead of on the unit disk. This required updating all callers (which all passed in the result of UniformSampleDisk anyway).
Also, call the aniso variant, which saves a useless square followed by a square root. The old non-aniso method is removed, as there are no real savings to be had from the non-anisotropic case anymore.
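The new calling convention can be sketched as follows. This is an illustrative stand-alone version, not the engine's shader code: the function takes the two uniform random numbers in the unit square directly and performs the concentric (Shirley-Chiu) disk mapping internally, rather than requiring callers to pass a point already on the unit disk:

```cpp
#include <cassert>
#include <cmath>
#include <array>
#include <algorithm>

// Cosine-weighted hemisphere sample from two uniforms in [0,1)^2 (sketch).
std::array<double, 3> CosineSampleHemisphere(double u1, double u2) {
    const double kPi = 3.14159265358979323846;
    // Concentric mapping of the unit square onto the unit disk.
    double a = 2.0 * u1 - 1.0;
    double b = 2.0 * u2 - 1.0;
    double r = 0.0, phi = 0.0;
    if (a == 0.0 && b == 0.0) {
        // Degenerate center point.
    } else if (a * a > b * b) {
        r = a;
        phi = (kPi / 4.0) * (b / a);
    } else {
        r = b;
        phi = (kPi / 2.0) - (kPi / 4.0) * (a / b);
    }
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    // Projecting the disk point up to the hemisphere yields a pdf
    // proportional to cos(theta).
    double z = std::sqrt(std::max(0.0, 1.0 - x * x - y * y));
    return {x, y, z};
}
```

Callers that previously computed `UniformSampleDisk` themselves now just forward their two random numbers.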
#rb Sebastien.Hillaire
[CL 26031587 by chris kulla in ue5-main branch]
When the subsurface scattering radius becomes small, a fair amount of energy is lost, which darkens the result in an undesirable way. To avoid this, clamp the radius to a safe value and blend the subsurface albedo back towards diffuse instead. This keeps the overall energy contribution while avoiding small radii. It also improves the transition between radius 0 and radius slightly above 0.
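The clamp-and-blend idea can be sketched like this; the function, struct, and minimum-radius constant are hypothetical names chosen for illustration, not the engine's actual code:

```cpp
#include <cassert>

// Result of the clamp: the (possibly raised) radius plus the weight with
// which the subsurface lobe is blended against a plain diffuse lobe.
struct SubsurfaceParams { double radius; double subsurfaceWeight; };

SubsurfaceParams ClampSubsurfaceRadius(double radius, double minRadius) {
    if (radius >= minRadius)
        return {radius, 1.0};  // radius is safe: use the full subsurface lobe
    // Below the safe minimum, raise the radius but fade the subsurface lobe
    // out linearly; radius == 0 becomes pure diffuse, and the transition
    // from 0 to minRadius is continuous, which preserves overall energy.
    double blend = radius / minRadius;
    return {minRadius, blend};
}
```

At radius 0 the result is pure diffuse, and as the radius grows toward the clamp value the subsurface weight rises smoothly to 1.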
#rb none
[CL 25870598 by chris kulla in ue5-main branch]
Current limitations:
* The GlintUV derivatives are incorrect/hardcoded for now.
* Sampling is done as a single lobe, without knowledge of the glints.
#rb chris.kulla
[FYI] sebastien.hillaire
[CL 25837504 by charles derousiers in ue5-main branch]