The practical effect this has is that we avoid potential trailing padding at
the end of DXBC blobs. Unfortunately this also means we need to be more
careful about using bytecode_get_size() to find the offset where subsequent
data would get written, although in many cases this follows a put_u32() call.
Normalise the incoming vkd3d_shader_instruction IR to the shader model 6
pattern. This allows generation of a single patch constant function in
SPIR-V.
Since the introduction of instruction arrays, the parser location no
longer matches the location of the current instruction. Ultimately we'll
likely want to add some kind of explicit location information to struct
vkd3d_shader_instruction_array, because we might do transformations that
change the order of the original instructions.
VKD3D_SM4_OP_DCL_RESOURCE currently has 1 for "dst_count", but NULL for
"dst". This is largely harmless because we never attempt to access the
destination register of VKD3DSIH_DCL instructions, but it is nevertheless not
quite proper.
d3d12_command_queue_flush_ops() can re-enter itself while processing signal
events. Since we don't use recursive mutexes, we currently have to check
some of the queue variables without holding the mutex, which is not safe.
This is solved by allowing the queue to release its mutex while it is
processing entries: when flushing, the queue is briefly locked, the
is_flushing flag is set, the queue content is copied away and the
queue is unlocked again. After having processed the entries, the
queue is locked again to check if something else was added in the
meantime. This is repeated until the queue is empty (or a wait operation
is blocking it).
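As a rough sketch of this protocol (with illustrative names and a fixed-size
queue rather than the actual vkd3d structures, and ignoring the wait-blocking
case), the flushing loop looks roughly like this:
```
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_OPS 64

struct op { int type; };

struct queue
{
    pthread_mutex_t mutex;
    struct op ops[MAX_OPS];
    size_t ops_count;
    bool is_flushing;
};

/* Stand-in for executing a single queued operation. */
static void process_op(const struct op *op) { (void)op; }

static void queue_flush_ops(struct queue *queue)
{
    struct op ops[MAX_OPS];
    size_t count, i;

    for (;;)
    {
        pthread_mutex_lock(&queue->mutex);
        if (queue->is_flushing || !queue->ops_count)
        {
            /* Nothing to do, or another thread is already flushing. */
            pthread_mutex_unlock(&queue->mutex);
            return;
        }
        /* Hold the lock only briefly: mark the queue as flushing, copy
         * the entries away and release the lock before processing. */
        queue->is_flushing = true;
        count = queue->ops_count;
        memcpy(ops, queue->ops, count * sizeof(*ops));
        queue->ops_count = 0;
        pthread_mutex_unlock(&queue->mutex);

        for (i = 0; i < count; ++i)
            process_op(&ops[i]);

        /* Lock again to check whether something else was pushed while
         * we were processing; if so, loop and flush once more. */
        pthread_mutex_lock(&queue->mutex);
        queue->is_flushing = false;
        if (!queue->ops_count)
        {
            pthread_mutex_unlock(&queue->mutex);
            return;
        }
        pthread_mutex_unlock(&queue->mutex);
    }
}
```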
This should also remove some latency when a thread pushes to the queue
while another one is processing it, but I didn't try to measure any
impact. While it is expected that with this patch the queue mutex
will be locked and unlocked more frequently, it should also remain
locked for less time, hopefully creating little contention.
The goal is to simplify the CS queue handling: with this and the following
changes operations are always started by d3d12_command_queue_flush_ops(),
in order to make further refactoring easier.
Note that while executing an operation on an empty CS queue becomes a bit
less efficient with this change, it doesn't require more locking. On the other
hand, this change paves the way for executing CS operations without holding
the queue lock.
Otherwise it could be added more than once.
Note that the deleted comment is wrong: between when d3d12_command_queue_flush_ops()
returns and when the queue is added back to the blocked list, the queue
might have been pushed to and flushed an arbitrary number of times.
Without this patch, dp3 and dp4 map src swizzles to the dst writemask,
which is not correct.
Before b84f560bdf, these ops worked
despite this, because the dst register had, incorrectly, the full
writemask.
To solve this problem, write_sm1_binary_op_dot() is introduced,
similarly to write_sm4_binary_op_dot().
But still throw hlsl_fixme() when there is more than one.
Prioritizing among multiple compatible function overloads in the same way
as the native compiler would require systematic testing.
Otherwise we may create nodes of different dimensions than the ones we
are replacing.
"count" is the number of components of the source deref (without
considering the swizzle), while "instr_component_count" is the actual
number of components of the instruction to be replaced.
This was originally left alone in order to allow functions without early return
to succeed, since in that case we would already emit the correct bytecode
despite not handling the HLSL_IR_JUMP_RETURN instruction.
Now that we lower return statements, however, any unhandled instructions are
either definitely going to result in invalid bytecode, or rare enough that it's
not worth returning success anyway.
SM1 dp2add doesn't map src swizzles to the dst writemask; it also
expects the last argument to have a replicate swizzle.
Before this patch we were writing the operation as:
```
dp2add r0.x, r1.x, r0.x, r2.x
```
and now it is:
```
dp2add r0.x, r1.xyxx, r0.xyxx, r2.x
```
dp2add now has its own function, write_sm1_dp2add(), since it seems to
be the only instruction with this structure.
Ideally we would be using the default swizzles for the first two src
arguments:
```
dp2add r0.x, r1, r0, r2.x
```
since, according to native's documentation, these are supported for all
sm < 4.
But this change -- along with following the convention of repeating the
last component of the swizzle when fewer than 4 components are to be
specified -- would require more global changes, probably in
hlsl_swizzle_from_writemask() and hlsl_map_swizzle().
Because of the change introduced in
f21693b2 vkd3d-shader/hlsl: Use reg_size as component count when allocating a single register.
SM1 scalars and vectors were no longer getting the correct writemask
when they were allocated.
This happened because they have to reserve the whole register even if
they only use some of its components, so their reg_size may differ from
the number of components.
This commit fixes that.
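A minimal sketch of the idea behind the fix, with a hypothetical helper name:
the writemask is derived from the component count rather than from reg_size,
which covers the whole reserved register.
```
/* Hypothetical helper: a float2, for instance, reserves a full SM1
 * register (reg_size 4) but should only get writemask .xy (0x3). */
static unsigned int sm1_writemask_from_component_count(unsigned int component_count)
{
    return (1u << component_count) - 1;
}
```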
Prevent them from ever being looked up.
Our naming scheme for synthetic variables already effectively prevents this, but
this is better for clarity. We will also need to be able to move some named
variables into a dummy scope to account for complexities around function
definition and declarations.
In practice they never fail. If they fail, it means that there
is some underlying platform problem and there is little we can do
anyway. Under pthreads the function prototypes allow returning failure,
but that's only used for "error checking" mutexes, which we
don't use.
On the other hand, error handling in vkd3d is rather inconsistent:
sometimes the errors are ignored, sometimes logged, sometimes
passed to the caller. It's hard to handle failures appropriately
if you can't even keep your state consistent, so I think it's
better to avoid trying, assume that synchronization primitives do
not fail and at least have consistent logging if something goes
wrong.
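As a minimal sketch of this approach (the wrapper name is illustrative, not
necessarily the real vkd3d API), lock failures are logged consistently and
otherwise ignored:
```
#include <pthread.h>
#include <stdio.h>

/* Illustrative wrapper: assume locking cannot meaningfully fail, but
 * log consistently if it somehow does. */
static void sketch_mutex_lock(pthread_mutex_t *mutex)
{
    int ret = pthread_mutex_lock(mutex);

    if (ret)
        fprintf(stderr, "Failed to lock mutex, error %d.\n", ret);
}
```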
Co-authored-by: Francisco Casas <fcasas@codeweavers.com>
Co-authored-by: Zebediah Figura <zfigura@codeweavers.com>
Because copy_propagation_transform_object_load() replaces a deref
instead of an instruction, it is currently prone to two problems:
1- It can replace a deref with the same deref, returning true every
time and getting the compilation stuck in an endless loop of
copy-propagation iterations.
2- When performed multiple times in the same deref, the second time it
can replace the deref with a deref from a temp that is only valid at
another point of the program execution, resulting in an incorrect value.
This patch prevents both problems by only replacing derefs when the new
deref points to a uniform variable, since uniform variables cannot be
written to.
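A minimal sketch of that guard, with illustrative stand-ins for the HLSL IR
types involved:
```
#include <stdbool.h>

/* Illustrative stand-ins for the HLSL IR types involved. */
struct hlsl_ir_var { bool is_uniform; };
struct hlsl_deref { struct hlsl_ir_var *var; };

/* Only allow the replacement when the new deref points to a uniform
 * variable; uniforms cannot be written to, so the replacement is
 * valid at any point of the program. */
static bool deref_replacement_is_safe(const struct hlsl_deref *new_deref)
{
    return new_deref->var->is_uniform;
}
```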
Reinterpret min16float, min10float, min16int, min12int, and min16uint
as their regular counterparts: float, float, int, int, uint,
respectively.
A proper implementation would require adding minimum precision
indicators to all the dxbc-tpf instructions that use these types.
Consider the output of fxc 10.1 with the following shader:
```
uniform int i;
float4 main() : sv_target
{
    min16float4 a = {0, 1, 2, i};
    min16int2 b = {4, i};
    min10float3 c = {6.4, 7, i};
    min12int d = 9.4;
    min16uint4x2 e = {14.4, 15, 16, 17, 18, 19, 20, i};
    return mul(e, b) + a + c.xyzx + d;
}
```
However, if the graphics driver doesn't have minimum precision support,
it ignores the minimum precision indicators and runs at 32-bit
precision, which is equivalent to working with regular types.
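A minimal sketch of the reinterpretation described above; the enum values are
illustrative, not the actual vkd3d-shader type enums:
```
enum sketch_base_type
{
    SKETCH_TYPE_FLOAT, SKETCH_TYPE_INT, SKETCH_TYPE_UINT,
    SKETCH_TYPE_MIN16FLOAT, SKETCH_TYPE_MIN10FLOAT,
    SKETCH_TYPE_MIN16INT, SKETCH_TYPE_MIN12INT, SKETCH_TYPE_MIN16UINT,
};

/* Map each minimum-precision type to its regular counterpart and
 * leave every other type unchanged. */
static enum sketch_base_type relax_precision(enum sketch_base_type type)
{
    switch (type)
    {
        case SKETCH_TYPE_MIN16FLOAT:
        case SKETCH_TYPE_MIN10FLOAT:
            return SKETCH_TYPE_FLOAT;
        case SKETCH_TYPE_MIN16INT:
        case SKETCH_TYPE_MIN12INT:
            return SKETCH_TYPE_INT;
        case SKETCH_TYPE_MIN16UINT:
            return SKETCH_TYPE_UINT;
        default:
            return type;
    }
}
```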
A pointer to the containing descriptor heap can be derived from this
information.
The PE build of vkd3d uses Windows critical sections for synchronisation,
and these slow down on the very high lock/unlock rate during multithreaded
descriptor copying in Shadow of the Tomb Raider. This patch speeds up the
demo by about 8%. By comparison, using SRW locks in the allocators and
locking them for read only where applicable is about 4% faster.
If the offset of a gather resource load can be represented as an
aoffimmi (a vector of immediate integers, each in the range [-8, 7]), use one.
This is of particular importance for 4.0 profiles, where this is the only
valid way of representing offsets for this operation.
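A minimal sketch of the representability check, assuming the offset components
are already known to be integer constants:
```
#include <stdbool.h>

/* An aoffimmi can only encode immediate offsets in the range [-8, 7]
 * in each component. */
static bool offset_fits_aoffimmi(const int *offset, unsigned int component_count)
{
    unsigned int i;

    for (i = 0; i < component_count; ++i)
    {
        if (offset[i] < -8 || offset[i] > 7)
            return false;
    }
    return true;
}
```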
If a hlsl_ir_load loads a variable whose components are stored by different
instructions, copy propagation doesn't replace it.
But if all these instructions are constants (which currently is the case
for value constructors), the load could be replaced with a constant value.
Such a constant value is expected by some other instructions, e.g. texel_offsets
when aoffimmi modifiers are used.
For instance, this shader:
```
sampler s;
Texture2D t;
float4 main() : sv_target
{
return t.Gather(s, float2(0.6, 0.6), int2(0, 0));
}
```
results in the following IR before applying the patch:
```
float | 6.00000024e-01
float | 6.00000024e-01
uint | 0
| = (<constructor-2>[@4].x @2)
uint | 1
| = (<constructor-2>[@6].x @3)
float2 | <constructor-2>
int | 0
int | 0
uint | 0
| = (<constructor-5>[@11].x @9)
uint | 1
| = (<constructor-5>[@13].x @10)
int2 | <constructor-5>
float4 | gather_red(resource = t, sampler = s, coords = @8, offset = @15)
| return
| = (<output-sv_target0> @16)
```
and this IR afterwards:
```
float2 | {6.00000024e-01 6.00000024e-01 }
int2 | {0 0 }
float4 | gather_red(resource = t, sampler = s, coords = @2, offset = @3)
| return
| = (<output-sv_target0> @4)
```
Rename it to copy_propagation_replace_with_single_instr() accordingly.
The idea is to introduce a constant vector replacement pass which will do the
same thing.