UnrealEngineUWP/Engine/Shaders/Private/PathTracing/PathTracingBuildAdaptiveError.usf
chris kulla ef412c9670 Path Tracing: Implement Adaptive sampling
Implement a basic adaptive sampling strategy. This is not yet exposed in the UI; it must be enabled via the cvar: "r.PathTracing.AdaptiveSampling".

When enabled, the path tracer keeps track of a variance buffer. This variance buffer can then be used to predict the number of samples required to reach a prescribed error threshold, which in turn is used to deactivate pixels that are deemed to have already converged, speeding up rendering of mostly converged areas.
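The commit does not spell out the prediction formula, but the standard Monte Carlo argument is that error shrinks like 1/sqrt(N). A minimal sketch of that reasoning (hypothetical helper names, not the engine code):

```python
import math

def predicted_samples(current_samples, current_error, target_error):
    """Monte Carlo error falls off ~ 1/sqrt(N), so reducing the error from
    current_error to target_error needs roughly
    N * (current_error / target_error)^2 samples in total."""
    return math.ceil(current_samples * (current_error / target_error) ** 2)

def pixel_is_converged(current_samples, current_error, target_error):
    # A pixel can be deactivated once it already meets its predicted budget.
    return current_samples >= predicted_samples(current_samples, current_error, target_error)

# Halving the error requires ~4x the samples:
print(predicted_samples(64, 0.02, 0.01))  # 256
```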

The error estimate operates on multiple scales (mip maps) so that we have a more robust estimate of the variance even with very low sample counts. Currently the error estimate runs after each pass, and is used by the next sample to decide which pixels are still active.

A proper integration with MRQ (in particular the reference motion blur mode) is not yet done, as this will require a different way of stepping through temporal samples. This will be investigated as part of the MRQ 2.0 interface.

This feature (along with a few other path tracer permutations) is now gated behind r.PathTracing.Experimental (off by default) to avoid bloating startup times.

The adaptive metric is relative to the current exposure level, so combining adaptive sampling with auto-exposure creates a feedback loop that can lead to unpredictable results; manual exposure is therefore recommended when using adaptive sampling. In particular, when using the adaptive sampling visualization modes with auto-exposure, a feedback effect can be observed as the visualization influences the exposure level. A proper fix would be to move the adaptive sampling visualization after tonemapping; this is left as future work to minimize the "spread" of this feature for now.

#rb Patrick.Kelly

[CL 27697971 by chris kulla in ue5-main branch]
2023-09-07 20:33:35 -04:00


// Copyright Epic Games, Inc. All Rights Reserved.
#include "../Common.ush"
int2 InputResolution;
int2 OutputResolution;
SamplerState InputMipSampler;
Texture2D<float4> InputMip;
RWTexture2D<float4> OutputMip;
[numthreads(THREADGROUPSIZE_X, THREADGROUPSIZE_Y, 1)]
void PathTracingBuildAdaptiveErrorTextureCS(uint2 DispatchThreadId : SV_DispatchThreadID)
{
	int2 OutPixelCoord = int2(DispatchThreadId);
	if (any(OutPixelCoord >= OutputResolution))
	{
		return;
	}
	float2 UV = (OutPixelCoord + 0.5) / float2(OutputResolution);
	float2 InvInputResolution = rcp(float2(InputResolution));
#if 0
	// single tap mipmap
	float4 Result = InputMip.SampleLevel(InputMipSampler, UV, 0);
#elif 1
	// 5x5 Gaussian-weighted downsample, accumulated as a running weighted average
	float4 Result = 0.0;
	float FilterSum = 0.0;
	const int R = 2;
	for (int dy = -R; dy <= R; dy++)
	for (int dx = -R; dx <= R; dx++)
	{
		float2 Offset = float2(dx, dy);
		float Filter = exp(-0.5 * length2(Offset));
		float4 Pixel = InputMip.SampleLevel(InputMipSampler, UV + Offset * InvInputResolution, 0);
		FilterSum += Filter;
		Result = lerp(Result, Pixel, Filter / FilterSum);
	}
#endif
	//Result.z *= 4.0; // we are downsampling by 2x in each direction, so the overall normalization in the number of samples should be 4x
	OutputMip[OutPixelCoord] = Result;
}
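The filter loop above never does a final divide by FilterSum: each `lerp(Result, Pixel, Filter / FilterSum)` folds the new tap into a running weighted mean, which stays normalized at every step. A small standalone Python check of that identity (not engine code):

```python
def running_weighted_mean(samples):
    """samples: list of (value, weight) pairs. Reproduces the shader's
    Result = lerp(Result, Pixel, Filter / FilterSum) accumulation."""
    result = 0.0
    weight_sum = 0.0
    for value, weight in samples:
        weight_sum += weight
        # lerp(result, value, weight / weight_sum)
        result += (value - result) * (weight / weight_sum)
    return result

# Compare against the direct normalized weighted sum.
taps = [(1.0, 0.5), (3.0, 2.0), (2.0, 1.5)]
direct = sum(v * w for v, w in taps) / sum(w for _, w in taps)
assert abs(running_weighted_mean(taps) - direct) < 1e-12  # both 2.375
```

Beyond avoiding the divide, this incremental form keeps the partial result normalized throughout the loop, which avoids accumulating large unnormalized sums.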