This can save a significant number of temp registers, because it
allows us to avoid referencing the temp (and having to store it) when
it is not needed.
For instance, this patch lowers the number of required temps for the
following ps_2_0 shader from 24 to 19:
int i;
float3x3 mats[4];
float4 main() : sv_target
{
return mul(mats[i], float3(1, 2, 3)).xyzz;
}
Also, this is needed for SM1 vertex shader relative addressing, since
non-constant loads are required to operate directly on the uniform
('c') registers rather than on a temp, and non-constant loads cannot
be transformed by copy propagation.
Since 4a94bfc2f6 we segregate different D3D12 descriptor types into
different Vulkan descriptor sets. This change was introduced to reduce
the number of descriptors wasted when allocating a new descriptor
pool; that can be very useful when using virtual heaps, which often
have to cycle through many descriptors, but it is expected to have
limited impact for Vulkan heaps, given that in that case most
descriptors are allocated through the descriptor heap rather than
through the command allocator.
In practice, however, it has a rather detrimental effect with Vulkan
heaps, because it tends to use many more Vulkan descriptor sets than
necessary, often with just a handful of descriptors each. This causes
a regression on some Vulkan implementations that support too few
descriptor sets.
With this change we revert to a situation similar to before,
stuffing all the descriptors that do not live in a root
descriptor table in as few descriptor sets as possible (at most
one or two, depending on whether push descriptors are used).
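As a rough illustration (a sketch, not vkd3d's actual code), a single
Vulkan descriptor set layout can freely mix descriptor types, which is
what makes this packing possible:

/* Sketch, assuming <vulkan/vulkan.h>: CBVs, SRVs, UAVs and samplers
 * that do not live in a root descriptor table can all share one set
 * layout. */
static const VkDescriptorSetLayoutBinding bindings[] =
{
    {0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1, VK_SHADER_STAGE_ALL, NULL},
    {1, VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE,  1, VK_SHADER_STAGE_ALL, NULL},
    {2, VK_DESCRIPTOR_TYPE_STORAGE_IMAGE,  1, VK_SHADER_STAGE_ALL, NULL},
    {3, VK_DESCRIPTOR_TYPE_SAMPLER,        1, VK_SHADER_STAGE_ALL, NULL},
};
static const VkDescriptorSetLayoutCreateInfo layout_info =
{
    .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
    .bindingCount = sizeof(bindings) / sizeof(*bindings),
    .pBindings = bindings,
};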
We're already implicitly using it for image layouts in which
either depth or stencil is writeable and the other is not.
Correspondingly, add the _KHR suffix in those cases, so the
extension usage is more evident.
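For reference, the two layouts in question, spelled with the explicit
_KHR suffix (illustrative snippet, not the actual vkd3d code; "barrier"
stands for a VkImageMemoryBarrier):

/* Mixed layouts: one aspect writeable, the other read-only. */
barrier.oldLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL_KHR;
barrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL_KHR;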
According to the Vulkan Hardware Database, only four reports without
this extension have been filed since 2023, all of them for
configurations we likely don't target.
This could be useful, since many shaders contain `#include` directives
or use parameter-defined macros, and we can't reproduce bugs from the
source alone.
The current TPF validator enforces that, for each register involved
in a DCL_INDEX_RANGE instruction, there must be a signature element
for that register with the DCL_INDEX_RANGE's write mask. This
requirement is excessively strong, and causes some shaders from The
Falconeer to be incorrectly rejected.
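In sketch form, the check being relaxed amounts to something like
this (hypothetical code, not the actual validator;
signature_find_element() is made up):

/* For each register spanned by the range, require a signature element
 * with exactly that register index and the range's write mask. */
for (unsigned int r = range->start; r < range->start + range->count; ++r)
{
    if (!signature_find_element(signature, r, range->write_mask))
        return VKD3D_ERROR_INVALID_SHADER;
}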
The excessively strong check was needed to avoid triggering a bug
in the I/O normaliser. Since that bug is now fixed, the check can
be relaxed.
A good part of the I/O normaliser's job is to merge together signature
elements that are spanned by DCL_INDEX_RANGE instructions. The
current algorithm assumes that each index range touches exactly
one signature element for each register index spanned by the range.
shader_signature_merge() relies on this assumption by expecting that,
if an index range is N registers long, then, once the first signature
element of the range is found, the elements that have to be merged
with it are exactly the following N - 1, according to the order given
by signature_element_register_compare() or
signature_element_mask_compare(), depending on the signature type.
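In pseudocode, that expectation looks roughly like this (a
hypothetical sketch, not the actual vkd3d implementation):

/* Once the first element of an N-register index range is found at
 * position i in the sorted signature, the elements to merge are taken
 * to be exactly the next N - 1. */
struct signature_element *first = &signature->elements[i];
for (unsigned int j = 1; j < range_length; ++j)
    merge_with(first, &signature->elements[i + j]);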
This doesn't necessarily hold. For example, The Falconeer contains a
few hull shaders like the following:
hs_fork_phase
dcl_hs_fork_phase_instance_count 13
dcl_input vForkInstanceId
dcl_output o4.z
dcl_output o5.z
dcl_output o6.z
dcl_output o7.z
dcl_output o12.z
dcl_output o13.z
dcl_output o14.z
dcl_output o15.z
dcl_output o16.z
dcl_output o17.z
dcl_output o18.z
dcl_output o19.z
dcl_output o20.z
dcl_temps 1
dcl_index_range o4.z 17
iadd r0.x, vForkInstanceId.x, l(4)
ult r0.y, vForkInstanceId.x, l(4)
movc r0.x, r0.y, vForkInstanceId.x, r0.x
mov o[r0.x + 4].z, l(0)
ret
Here the index range "skips" o8.z through o11.z, because those
registers only use mask .xy. The current algorithm fails on such
a shader.
Even depending on the signature element order doesn't look ideal.
I don't have a full counterexample, but it looks fragile,
especially given that the register allocation algorithm in FXC is
notoriously full of unexpected corner cases.
We solve both problems by slightly changing the architecture of
the normaliser: first, we move computing the masks for the merged
signature element from signature_element_range_expand_mask(),
which is executed while merging signatures, to
io_normaliser_add_index_range(), which is executed before merging
signatures. Then, while merging signatures, we can decide for each
individual signature element whether it has to be retained or not,
and how it should be patched. The algorithm becomes independent of
the element order, because each signature element can be processed
individually.
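The resulting merge loop conceptually becomes something like this (a
hypothetical sketch; the helper names are made up):

/* Each element is decided on independently, using the range data
 * precomputed by io_normaliser_add_index_range(). */
for (unsigned int i = 0; i < signature->element_count; ++i)
{
    struct signature_element *e = &signature->elements[i];
    const struct index_range *range = find_containing_range(normaliser, e);

    if (!range)
        keep_element(e);                      /* untouched by any range */
    else if (is_range_start(range, e))
        patch_element(e, range->merged_mask); /* becomes the merged element */
    else
        drop_element(e);                      /* folded into the range start */
}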
The assumptions the I/O normaliser makes about its input program are
rather intricate. In theory the VSIR validator checks should be
strong enough, but the validator isn't run by default anyway.
Whether the TPF parser validation is strong enough is not
completely clear to me, and considering that the I/O normaliser
could end up being used on other programs as well, it's probably
better to revalidate locally, just in case.
Commit 4717775abb removed the code turning
the corresponding DCL_INPUT instructions into NOPs, presumably because
vsir_program_remove_io_decls() now already removes them. That function
only removes the instructions, though, not the declarations as such;
we now need to clear them from the "io_dcls" bitmap instead.
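Conceptually the fix is a one-liner along these lines (a sketch,
assuming "io_dcls" is keyed by register type; bitmap_clear() is
vkd3d's bitmap helper):

/* Drop the declaration from the bitmap instead of NOP'ing the
 * DCL_INPUT instruction. */
bitmap_clear(program->io_dcls, reg->type);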