The expected λ value for the tests in question is about 0.59, which
after linear mipmap interpolation should result in a sampled value of
about 0.41. The quantisation step was added to allow results as high as
0.43, as seen on some implementations.
AMD Radeon Pro Vega 20 on macOS 15.5 returns a sampled value of about
0.39, with both Vulkan/MoltenVK and MSL/Metal. This is not an issue with
the bias calculation; the same behaviour could be reproduced with
SampleLevel(), as used in the sample-level tests, if those tests used
more exciting values for the "level" parameter. It also doesn't appear
to be a general Metal issue; Intel UHD Graphics 630 does return the
expected values on the same setup. Instead, this appears to be a mipmap
interpolation issue on this particular GPU/driver. Mapping the sampled
values as "level" sweeps from 0.0 to 1.0, it seems the interpolation
factor used is "saturate(frac(λ) * 1.25 - 0.125)" instead of the normal
"frac(λ)".
Fascinating as that may be, the test here mainly cares about whether the
bias value was applied correctly, and in that regard a sampled value of
0.39 isn't any worse than the 0.43 we already accept. This commit
adjusts the bias value so that the expected sampled value is 0.45, which
makes the accepted error the same on both the positive and negative
side.
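(Under the same model as the sketch above: the new bias gives
frac(λ) = 0.55, so correct interpolation yields a sampled value of
1.0 - 0.55 = 0.45, while this GPU's apparent factor yields
saturate(0.55 * 1.25 - 0.125) = 0.5625, i.e. a sampled value of about
0.44. A symmetric ±0.02 window around 0.45 then covers both this GPU
and the implementations that previously returned results as high as
0.43 against an expected 0.41.)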
Depending on the cast operand, the generated values can be
ICB, IDXTEMP or GROUPSHAREDMEM.
The cast decoding code is entirely moved to the second pass, so
that we avoid abusing registers to temporarily store other data.
The stored value is never read; the caller will overwrite it with
the SSA register generated by the whole DXIL instruction.
Since the helper is always used for UINT instructions, change and
rename it accordingly, so we no longer need to work out which data
type to use.
Currently structure type descriptions get interleaved with
variable-length string data. The solution is to write all fixed-length
fields first, then append strings.
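As a minimal sketch of the layout change (hypothetical names, not the
actual vkd3d code): each fixed-size record stores an offset into a
string block that is appended after all the records, instead of
embedding the variable-length name inline between them.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct type_desc
    {
        uint32_t field_count;
        uint32_t name_offset; /* offset into the trailing string block */
    };

    int main(void)
    {
        char strings[256];
        size_t strings_size = 0;
        struct type_desc descs[2];
        const char *names[] = {"light", "material"};
        unsigned int i;

        /* Write all fixed-length fields first, recording where each
         * name will land in the string block. */
        for (i = 0; i < 2; ++i)
        {
            descs[i].field_count = i + 1;
            descs[i].name_offset = (uint32_t)strings_size;
            memcpy(strings + strings_size, names[i], strlen(names[i]) + 1);
            strings_size += strlen(names[i]) + 1;
        }

        /* The serialised output is descs[] followed by strings[], so
         * readers can index the records directly without parsing any
         * variable-length data in between. */
        for (i = 0; i < 2; ++i)
            printf("%s: %u fields\n", strings + descs[i].name_offset,
                    descs[i].field_count);
        return 0;
    }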
Signed-off-by: Nikolay Sivov <nsivov@codeweavers.com>
In theory commit 7b07d77396 already
did that, but in practice it ended up picking a commit from the
1.8.2405 branch, where a few of our tests fail. Since I hope to
soon enable DXC for macOS again, it's useful to fix that oversight.
They either use geometry shaders or cull distances, which MoltenVK
doesn't support. However, d3d12 has no way to indicate they're
unsupported, so the problem doesn't surface as a failed draw,
but rather as a draw that doesn't do anything.
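On the Vulkan side the missing capabilities are at least queryable; a
minimal sketch of the check (MoltenVK reports both features as
VK_FALSE, while d3d12 has no equivalent capability bit for the tests
to consult):

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void)
    {
        VkApplicationInfo app_info = {.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                .apiVersion = VK_API_VERSION_1_0};
        VkInstanceCreateInfo create_info = {.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                .pApplicationInfo = &app_info};
        VkInstance instance;
        VkPhysicalDevice devices[8];
        uint32_t count = 8, i;

        if (vkCreateInstance(&create_info, NULL, &instance) != VK_SUCCESS)
            return 1;
        vkEnumeratePhysicalDevices(instance, &count, devices);
        for (i = 0; i < count; ++i)
        {
            VkPhysicalDeviceProperties properties;
            VkPhysicalDeviceFeatures features;

            vkGetPhysicalDeviceProperties(devices[i], &properties);
            vkGetPhysicalDeviceFeatures(devices[i], &features);
            printf("%s: geometryShader %u, shaderCullDistance %u\n",
                    properties.deviceName, features.geometryShader,
                    features.shaderCullDistance);
        }
        vkDestroyInstance(instance, NULL);
        return 0;
    }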
I haven't investigated what's happening in detail. However, vkd3d
emits this message, which makes me think the problem is ours:
vkd3d:62178588:fixme:spirv_compiler_get_descriptor_binding Could not find descriptor binding for type 0, space 0, registers [0:2], shader type 0.