When the client acquires the Vulkan queue it has to ensure that it is
not submitting work before other work it depends on, which was already
submitted through the Direct3D 12 API but is currently still in the
internal vkd3d queue. Currently we suggest enqueueing a fence signal
and then waiting for it before acquiring the Vulkan queue, which is
correct but excessive: it waits not just for the work currently in the
queue to be submitted, but for it to be executed too, introducing
useless dependencies.
By adding a way to enqueue signalling a fence on the CPU side, we allow
the client to wait for the currently outstanding work to be submitted
to Vulkan, but nothing more.
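For reference, the previously suggested pattern looks roughly like the
following sketch, using the public D3D12 and vkd3d APIs (object
creation and error handling omitted):

/* Correct but excessive: WaitForSingleObject() returns only once the
 * enqueued work has been executed, not merely submitted to Vulkan. */
ID3D12CommandQueue_Signal(queue, fence, value);
ID3D12Fence_SetEventOnCompletion(fence, value, event);
WaitForSingleObject(event, INFINITE);

vk_queue = vkd3d_acquire_vk_queue(queue);
/* ... submit Vulkan work that depends on the D3D12 work ... */
vkd3d_release_vk_queue(queue);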
Commit 1ed8d907b3998b847daa26154ca0261f55f0de23 inadvertently dropped
emitting the tessellator domain for domain shaders. Although Vulkan
environments allow us to write the tessellator domain from the hull
shader, the domain shader, or both, that's not generally true for OpenGL
environments.
We're explicitly replacing zero with one in the only place where the
plane count being zero or one makes a difference. It also makes sense:
UNKNOWN is used for buffers, which for all intents and purposes have one
plane.
Uncovered by the isinf() test in the next commit. This is why we insist
on test coverage; unfortunately this one slipped through in
fd1beedc07becc7e5e49b64273f7bde7f9a8b2a0.
Vertex shaders do not have CMP, so we use SLT and MAD.
For example, this vertex shader:
uniform float4 f;
void main(inout float4 pos : position, out float4 t1 : TEXCOORD1)
{
    t1 = (int4)f;
}
results in:
vs_2_0
dcl_position v0
slt r0, c0, -c0     // r0 = (c0 < 0)
frc r1, c0          // r1 = frac(c0)
add r2, -r1, c0     // r2 = c0 - frac(c0) = floor(c0)
slt r1, -r1, r1     // r1 = (frac(c0) != 0)
mad oT1, r0, r1, r2 // oT1 = floor(c0) + (c0 < 0) * (frac(c0) != 0)
mov oPos, v0
While we have the lower_cmp() pass, it generates many instructions each
time it is applied, so this patch introduces a specialized version of
the cast-to-int lowering for efficiency.
It turns out we are not doing this correctly in SM1, because the
rounding should be towards the number that is closer to zero, and
lower_casts_to_int() doesn't do that; for instance, casting -2.5 to int
must give -2, not -3.
A vertex shader test is added, since in SM1 vertex shaders must rely on
the SLT operation instead of the CMP operation.
fxc compiles this method even without the backcompat option.
Furthermore, it even does so on ps_2_0, despite the fact that it maps
to the texldd instruction, which according to the documentation is not
available on plain ps_2_0 or ps_2_b, but only on ps_2_a and ps_3_0.
It is worth mentioning that the additional offset parameter is not
supported when lowering.
Otherwise ubsan reports these errors on the bitwise.shader_test:
libs/vkd3d-shader/hlsl_constant_ops.c:970:50: runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
libs/vkd3d-shader/hlsl_constant_ops.c:970:50: runtime error: left shift of negative value -12
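The usual fix for this kind of report (a sketch, not necessarily the
exact change made here) is to perform the shift on an unsigned type,
where it is well defined for any value:

#include <stdint.h>

static int32_t shift_left_i32(int32_t value, unsigned int shift)
{
    /* Shifting a uint32_t avoids signed-overflow UB, and masking the
     * shift count mirrors the D3D behaviour of using only its lower
     * five bits. */
    return (int32_t)((uint32_t)value << (shift & 31u));
}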
Otherwise ubsan reports errors such as:
libs/vkd3d-shader/spirv.c:7266:5: runtime error: null pointer passed as argument 1, which is declared to never be null
Otherwise ubsan reports runtime errors such as:
libs/vkd3d-shader/ir.c:4731:5: runtime error: null pointer passed as argument 1, which is declared to never be null
Otherwise, when passing "-fsanitize=undefined" to the compiler, ubsan
reports runtime errors such as:
libs/vkd3d-shader/ir.c:3794:5: runtime error: null pointer passed as argument 1, which is declared to never be null
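In all of these cases the usual fix (a sketch, not necessarily the
exact change made here) is to skip the call when there is nothing to
copy; passing a null pointer to memcpy() is undefined behaviour even
when the size is zero:

#include <string.h>

static void copy_block(void *dst, const void *src, size_t size)
{
    /* memcpy()'s pointer arguments are declared non-null, so guard
     * the call rather than rely on a zero size. */
    if (size)
        memcpy(dst, src, size);
}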
Since we have -Wswitch, this forces the developer to update all relevant
switches when an enum case is added.
Places where the default is just a FIXME are left alone.
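For illustration (a hypothetical enum, not actual vkd3d code), with
-Wswitch the compiler warns as soon as a switch without a default case
no longer covers every enumeration value:

enum channel
{
    CHANNEL_RED,
    CHANNEL_GREEN,
    CHANNEL_BLUE,
    /* Adding CHANNEL_ALPHA here makes every such switch warn until a
     * corresponding case is added. */
};

static const char *channel_name(enum channel channel)
{
    switch (channel)
    {
        case CHANNEL_RED:
            return "red";
        case CHANNEL_GREEN:
            return "green";
        case CHANNEL_BLUE:
            return "blue";
    }
    return "unknown";
}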
We apply distributivity to applicable expressions, specifically with
the following rewrite rules:
(x OPL a) OPR (x OPL b) -> x OPL (a OPR b)
(y OPR (x OPL a)) OPR (x OPL b) -> y OPR (x OPL (a OPR b))
((x OPL a) OPR y) OPR (x OPL b) -> (x OPL (a OPR b)) OPR y
(x OPL a) OPR ((x OPL b) OPR y) -> (x OPL (a OPR b)) OPR y
(x OPL a) OPR (y OPR (x OPL b)) -> (x OPL (a OPR b)) OPR y
where a and b are constants. For instance, with OPL = '*' and
OPR = '+', the expression (x * 2) + (x * 3) is rewritten to
x * (2 + 3), which can then be constant-folded into x * 5.
Like we did before commit 067e6deee4aba447c6627ce02b58c6bdcd90470e.
These skips aren't all that interesting; it's entirely intentional that
e.g. a 2.0-3.0 run wouldn't run 4.0 shaders.