This patch turns index expressions on resources into hlsl_ir_index nodes
instead of hlsl_ir_resource_load nodes, because it is not yet known
whether they will later be used as the lhs of an hlsl_ir_resource_store.
For now, the only benefit is consistency.
Otherwise, in the added test, we get:
vkd3d-compiler: vkd3d-shader/hlsl.c:452: hlsl_init_deref_from_index_chain: Assertion `chain' failed.
because on the path that triggers the following error:
E5002: Wrong type for argument 1 of 'tex3D': expected 'sampler' or 'sampler3D', but got 'sampler2D'.
a NULL params.resource is passed to hlsl_new_resource_load() and
then to hlsl_init_deref_from_index_chain().
Since in SM1 all vector types use 4 register components, and since SM1
doesn't consider vectors of different dimx incompatible, it is necessary
to ensure that the semantic var is created with dimx=4, and to add a
cast node.
The use of the hlsl_semantic.reported_duplicated_output_next_index field
allows reporting multiple overlapping indexes, such as in the following
vertex shader:
void main(out float1x3 x : OVERLAP0, out float1x3 y : OVERLAP1)
{
    x = float3(1.0, 2.0, 3.2);
    y = float3(5.0, 6.0, 5.0);
}
apple.hlsl:1:41: E5013: Output semantic "OVERLAP1" is used multiple times.
apple.hlsl:1:13: First use of "OVERLAP1" is here.
apple.hlsl:1:41: E5013: Output semantic "OVERLAP2" is used multiple times.
apple.hlsl:1:13: First use of "OVERLAP2" is here.
While at the same time avoiding reporting overlaps more than once for
large arrays:
struct apple
{
    float2 p : sv_position;
};
void main(out apple aps[4])
{
}
apple.hlsl:3:8: E5013: Output semantic "sv_position0" is used multiple times.
apple.hlsl:3:8: First use of "sv_position0" is here.
The descriptor component of struct d3d12_desc is replaced with a union
containing a pointer which can be swapped out using
InterlockedExchangePointer(). To make it safe to increment the refcount
of such an object, it is necessary to cache freed objects. Eliminating
the descriptor mutexes nearly doubles the framerate on recent hardware
in games which use multithreaded descriptor writes.
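A minimal sketch of that scheme, using C11 atomics in place of the
Win32 InterlockedExchangePointer() call; all names here are
illustrative rather than the actual vkd3d declarations:
```
#include <stdatomic.h>
#include <stddef.h>

struct desc_object
{
    atomic_uint refcount;
    struct desc_object *next_free;  /* free-list link for the cache */
};

struct d3d12_desc
{
    _Atomic(struct desc_object *) object;
};

static _Atomic(struct desc_object *) free_list = NULL;

/* Freed objects go to a cache instead of back to the allocator, so a
 * racing reader which already loaded the old pointer still dereferences
 * valid memory while it tries to increment the refcount. */
static void desc_object_cache_push(struct desc_object *object)
{
    struct desc_object *head = atomic_load(&free_list);

    do
        object->next_free = head;
    while (!atomic_compare_exchange_weak(&free_list, &head, object));
}

/* Writing a descriptor is a single atomic pointer swap; no mutex is
 * required, which is what eliminates the lock contention. */
static void d3d12_desc_write(struct d3d12_desc *dst, struct desc_object *src)
{
    struct desc_object *old = atomic_exchange(&dst->object, src);

    if (old && atomic_fetch_sub(&old->refcount, 1) == 1)
        desc_object_cache_push(old);
}
```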
Eliminates vk_sets_mutex. Performance on average may be lower until
the descriptor mutexes are replaced and Vulkan writes are buffered
to reduce thunk calls.
Fix a compile warning:
../vkd3d/libs/vkd3d-shader/hlsl_codegen.c: In function 'allocate_semantic_register':
../vkd3d/libs/vkd3d-shader/hlsl_codegen.c:2947:85: error: passing argument 4 of 'hlsl_sm4_register_from_semantic' from incompatible pointer type [-Werror=incompatible-pointer-types]
2947 | if ((builtin = hlsl_sm4_register_from_semantic(ctx, &var->semantic, output, &type, NULL, &has_idx)))
| ^~~~~
| |
| unsigned int *
In file included from ../vkd3d/libs/vkd3d-shader/hlsl_codegen.c:21:
../vkd3d/libs/vkd3d-shader/hlsl.h:1171:52: note: expected 'enum vkd3d_sm4_register_type *' but argument is of type 'unsigned int *'
1171 | bool output, enum vkd3d_sm4_register_type *type, enum vkd3d_sm4_swizzle_type *swizzle_type, bool *has_idx);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
It is illegal to match a SPIR-V multisampled resource to a Vulkan resource which
is not multisampled. Vulkan considers a resource to be multisampled if its
sample count is greater than 1 (and SPIR-V does not care about the sample count).
This fixes validation errors in the case where the sample count does actually
match the resource. In order to provide correct behaviour when there is a
mismatch, or when the sample count is missing, we will need yet another
interface. In the absence of that, it seems best to provide a best
guess.
This fixes a validation error with the not-yet-committed merge request 135, when
the d3d11 runner is run through Wine with the Vulkan backend.
From this point on, it is no longer true that only hlsl_ir_loads can
return objects, because an object can also come from a chain of
hlsl_ir_indexes that ends in an hlsl_ir_load.
The lower_index_loads pass takes care of lowering all hlsl_ir_indexes
into hlsl_ir_loads.
For this reason, hlsl_resource_load_params now expects both the resource
and the sampler to be just an hlsl_ir_node pointer instead of a pointer
to a more specific hlsl_ir_load.
This node type is intended for use at parse time.
While we parse an indexing expression such as "a[3]", we don't know if
it will end up as part of an expression (in which case it must be folded
into a load) or as the lhs of a store (in which case it must be folded
into the store's deref).
The SPIR-V backend will emit a default control point phase. Inserting
inputs into the IR allows handling of declarations via the usual path
instead of an ad hoc implementation which may not match later changes
to input handling.
In SPIR-V the address must include the invocation id, but in TPF it
is implicit. Move the register index up one slot and insert an
OUTPOINTID relative address.
The practical effect this has is that we avoid potential trailing padding at
the end of DXBC blobs. Unfortunately this also means we need to be more
careful about using bytecode_get_size() to find the offset where subsequent
data would get written, although in many cases this follows a put_u32() call.
Normalise the incoming vkd3d_shader_instruction IR to the shader model 6
pattern. This allows generation of a single patch constant function in
SPIR-V.
Since the introduction of instruction arrays, the parser location no
longer matches the location of the current instruction. Ultimately we'll
likely want to add some kind of explicit location information to struct
vkd3d_shader_instruction_array, because we might do transformations that
change the order of the original instructions.
VKD3D_SM4_OP_DCL_RESOURCE currently has 1 for "dst_count", but NULL for
"dst". This is largely harmless because we never attempt to access the
destination register of VKD3DSIH_DCL instructions, but nevertheless not
quite proper.
d3d12_command_queue_flush_ops() can re-enter itself while processing signal
events. Since we don't use recursive mutexes, we currently have to check
some of the queue variables without holding the mutex, which is not safe.
This is solved by allowing the queue to release its mutex while it is
processing entries: when flushing, the queue is briefly locked, the
is_flushing flag is set, the queue content is copied away and the
queue is unlocked again. After having processed the entries, the
queue is locked again to check whether something else was added in the
meantime. This is repeated until the queue is empty (or a wait operation
is blocking it).
This should also remove some latency when a thread pushes to the queue
while another one is processing it, but I didn't try to measure any
impact. While it is expected that with this patch the queue mutex
will be locked and unlocked more frequently, it should also remain
locked for less time, hopefully creating little contention.
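A simplified sketch of that loop (all names are illustrative, and the
real code also has to handle blocking wait operations):
```
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define QUEUE_OP_MAX 64

struct command_queue
{
    pthread_mutex_t mutex;
    bool is_flushing;
    unsigned int op_count;
    int ops[QUEUE_OP_MAX];
};

static void execute_op(struct command_queue *queue, int op)
{
    (void)queue;
    (void)op;  /* execute the operation; may re-enter flush_ops() */
}

static void command_queue_flush_ops(struct command_queue *queue)
{
    int ops[QUEUE_OP_MAX];
    unsigned int i, count;

    pthread_mutex_lock(&queue->mutex);
    if (queue->is_flushing)
    {
        /* Another invocation is already processing the queue. */
        pthread_mutex_unlock(&queue->mutex);
        return;
    }
    queue->is_flushing = true;
    while (queue->op_count)
    {
        /* Copy the contents away and drop the lock while executing, so
         * that executing an op may safely re-enter this function. */
        count = queue->op_count;
        memcpy(ops, queue->ops, count * sizeof(*ops));
        queue->op_count = 0;
        pthread_mutex_unlock(&queue->mutex);

        for (i = 0; i < count; ++i)
            execute_op(queue, ops[i]);

        /* Re-lock and loop: more ops may have been queued meanwhile. */
        pthread_mutex_lock(&queue->mutex);
    }
    queue->is_flushing = false;
    pthread_mutex_unlock(&queue->mutex);
}
```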
The goal is to simplify the CS queue handling: with this and the following
changes operations are always started by d3d12_command_queue_flush_ops(),
in order to make further refactoring easier.
Notice that, while with this change executing an operation on an empty CS
queue is a bit less efficient, it doesn't require more locking. On the
other hand, this change paves the way for executing CS operations without
holding the queue lock.
Otherwise it could be added more than once.
Note that the deleted comment is wrong: between when d3d12_command_queue_flush_ops()
returns and when the queue is added back to the blocked list, the queue
might have been pushed to and flushed an arbitrary number of times.
Without this patch, dp3 and dp4 map src swizzles to the dst writemask,
which is not correct.
Before b84f560bdf, these ops worked
despite this, because the dst register had, incorrectly, the full
writemask.
To solve this problem, write_sm1_binary_op_dot() is introduced,
similarly to write_sm4_binary_op_dot().
But still throw hlsl_fixme() when there is more than one.
Prioritizing among multiple compatible function overloads in the same way
as the native compiler would require systematic testing.
Otherwise we may create nodes of different dimensions than the ones we
are replacing.
"count" is the number of components of the source deref (without
considering the swizzle), while "instr_component_count" is the actual
number of components of the instruction to be replaced.
This was originally left alone in order to allow functions without early return
to succeed, since in that case we would already emit the correct bytecode
despite not handling the HLSL_IR_JUMP_RETURN instruction.
Now that we lower return statements, however, any unhandled instructions are
either definitely going to result in invalid bytecode, or rare enough that it's
not worth returning success anyway.
SM1 dp2add doesn't map src swizzles to the dst writemask, and it also
expects the last argument to have a replicate swizzle.
Before this patch we were writing the operation as:
```
dp2add r0.x, r1.x, r0.x, r2.x
```
and now it is:
```
dp2add r0.x, r1.xyxx, r0.xyxx, r2.x
```
dp2add now has its own function, write_sm1_dp2add(), since it seems to
be the only instruction with this structure.
Ideally we would be using the default swizzles for the first two src
arguments:
```
dp2add r0.x, r1, r0, r2.x
```
since, according to native's documentation, these are supported for all
sm < 4.
But this change -- along with following the convention of repeating the
last component of the swizzle when fewer than 4 components are to be
specified -- would require more global changes, probably in
hlsl_swizzle_from_writemask() and hlsl_map_swizzle().
Because of the change introduced in
f21693b2 vkd3d-shader/hlsl: Use reg_size as component count when allocating a single register.
SM1 scalars and vectors were no longer getting the correct writemask
when allocated.
This happened because they have to reserve the whole register even if
they only use some of its components, so their reg_size may differ from
their component count.
This commit fixes that.
Prevent them from ever being looked up.
Our naming scheme for synthetic variables already effectively prevents this,
but this is better for clarity. We will also need to be able to move some
named variables into a dummy scope to account for complexities around
function definitions and declarations.
In practice they never fail. If they fail, it means that there
is some underlying platform problem and there is little we can do
anyway. Under pthreads, the function prototypes allow returning failure,
but that's only used for "error checking" mutexes, which we
don't use.
On the other hand, error handling in vkd3d is rather inconsistent:
sometimes the errors are ignored, sometimes logged, sometimes
passed to the caller. It's hard to handle failures appropriately
if you can't even keep your state consistent, so I think it's
better to avoid trying, assume that synchronization primitives do
not fail and at least have consistent logging if something goes
wrong.
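A sketch of what that policy looks like in practice (the wrapper name
is illustrative, not the actual vkd3d API):
```
#include <pthread.h>
#include <stdio.h>

/* Treat failure as an unrecoverable platform problem: log it
 * consistently and carry on, instead of propagating an error the
 * caller cannot meaningfully handle. */
static void mutex_lock(pthread_mutex_t *mutex)
{
    int ret;

    if ((ret = pthread_mutex_lock(mutex)))
        fprintf(stderr, "Failed to lock mutex, error %d.\n", ret);
}
```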
Co-authored-by: Francisco Casas <fcasas@codeweavers.com>
Co-authored-by: Zebediah Figura <zfigura@codeweavers.com>
Because copy_propagation_transform_object_load() replaces a deref
instead of an instruction, it is currently prone to two problems:
1. It can replace a deref with the same deref, returning true every
time and getting the compilation stuck in an endless loop of
copy-propagation iterations.
2. When performed multiple times on the same deref, the second time it
can replace the deref with a deref from a temp that is only valid at
another point of the program execution, resulting in an incorrect value.
This patch avoids both problems by not replacing derefs when the new
deref doesn't point to a uniform variable, since uniform variables
cannot be written to.
Reinterpret min16float, min10float, min16int, min12int, and min16uint
as their regular counterparts: float, float, int, int, uint,
respectively.
A proper implementation would require adding minimum precision
indicators to all the dxbc-tpf instructions that use these types.
Consider the output of fxc 10.1 with the following shader:
uniform int i;
float4 main() : sv_target
{
    min16float4 a = {0, 1, 2, i};
    min16int2 b = {4, i};
    min10float3 c = {6.4, 7, i};
    min12int d = 9.4;
    min16uint4x2 e = {14.4, 15, 16, 17, 18, 19, 20, i};
    return mul(e, b) + a + c.xyzx + d;
}
However, if the graphics driver doesn't have minimum precision support,
it ignores the minimum precision indicators and runs at 32-bit
precision, which is equivalent to working with regular types.
A pointer to the containing descriptor heap can be derived from this
information.
The PE build of vkd3d uses Windows critical sections for synchronisation,
and these slow down on the very high lock/unlock rate during multithreaded
descriptor copying in Shadow of the Tomb Raider. This patch speeds up the
demo by about 8%. By comparison, using SRW locks in the allocators and
locking them for read only where applicable is about 4% faster.
If the offset of a gather resource load can be represented as an
aoffimmi (a vector of ints from -8 to 7), use one.
This is of particular importance for 4.0 profiles, where this is the only
valid way of representing offsets for this operation.
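A sketch of the constraint (function and parameter names are
illustrative): an offset qualifies for an aoffimmi only if every
component is a constant integer fitting a signed 4-bit field.
```
#include <stdbool.h>

static bool offset_fits_aoffimmi(const int *offsets, unsigned int count)
{
    unsigned int i;

    for (i = 0; i < count; ++i)
    {
        if (offsets[i] < -8 || offsets[i] > 7)
            return false;
    }
    return true;
}
```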
If an hlsl_ir_load loads a variable whose components are stored by different
instructions, copy propagation doesn't replace it.
But if all these instructions are constants (which currently is the case
for value constructors), the load can be replaced with a constant value.
This is expected by some other instructions, e.g. texel_offsets when
using aoffimmi modifiers.
For instance, this shader:
```
sampler s;
Texture2D t;
float4 main() : sv_target
{
    return t.Gather(s, float2(0.6, 0.6), int2(0, 0));
}
```
results in the following IR before applying the patch:
```
float | 6.00000024e-01
float | 6.00000024e-01
uint | 0
| = (<constructor-2>[@4].x @2)
uint | 1
| = (<constructor-2>[@6].x @3)
float2 | <constructor-2>
int | 0
int | 0
uint | 0
| = (<constructor-5>[@11].x @9)
uint | 1
| = (<constructor-5>[@13].x @10)
int2 | <constructor-5>
float4 | gather_red(resource = t, sampler = s, coords = @8, offset = @15)
| return
| = (<output-sv_target0> @16)
```
and this IR afterwards:
```
float2 | {6.00000024e-01 6.00000024e-01 }
int2 | {0 0 }
float4 | gather_red(resource = t, sampler = s, coords = @2, offset = @3)
| return
| = (<output-sv_target0> @4)
```
Rename it to copy_propagation_replace_with_single_instr() accordingly.
The idea is to introduce a constant vector replacement pass which will do the
same thing.
copy_propagation_compute_replacement() is not doing very much for us, and
conceptually is a bit of an odd fit anyway, since it's meant to deal with
multi-component types.
validate_static_object_references() validates that uninitialized static
objects are not referenced in the shader.
In case a static variable contains both numeric and object types, the
"Static variables cannot have both numeric and resource components."
error should prevent uninitialized numeric values from reaching further
compilation steps.
Note that in the future we should call
validate_static_object_references() after DCE and pruning branches,
because shaders such as these compile (at least in more modern versions
of the native compiler):
Branch pruning:
```
static RWTexture2D<float> tex;
float4 main() : sv_target
{
    if (0)
    {
        tex[int2(0, 0)] = 2;
    }
    return 0;
}
```
DCE:
```
static Texture2D tex;
uniform uint i;
float4 main() : sv_target
{
    float4 unused = tex.Load(int3(0, 1, 2));
    return 0;
}
```
These are "todo" tests in hlsl-static-initializer.shader_test
that depend on this.
We are currently not initializing static values to zero by default.
Consider the following shader:
```hlsl
static float4 va;
float4 main() : sv_target
{
    return va;
}
```
we get the following output:
```
ps_5_0
dcl_output o0.xyzw
dcl_temps 2
mov r0.xyzw, r1.xyzw
mov o0.xyzw, r0.xyzw
ret
```
where r1.xyzw is not initialized.
This patch solves this by assigning the static variable the value of a
uint 0, thus relying on complex broadcasts.
This seems to be the behaviour of the 9.29.952.3111 version of the native
compiler, since it gives the following error on a shader that lacks
an initializer on a data type with object components:
```
error X3017: cannot convert from 'uint' to 'struct <unnamed>'
```
We have a different system of generating intrinsics, which makes it easier to
deal with "polymorphic" arithmetic functions.
Defining and storing intrinsics as hlsl_ir_function_decls would also require
more space in memory (and more optimization passes to get rid of the parameter
variables), and doesn't really save us any effort in terms of source code.
Using add_unary_arithmetic_expr() instead of hlsl_new_unary_expr()
allows the intrinsic to work with matrices.
Otherwise we get:
E5017: Aborting due to not yet implemented feature: Copying from unsupported node type.
because an HLSL_IR_EXPR reaches split_matrix_copies().
Atomic ops on images with Unknown type will cause SPIR-V validation failure,
and assertion failure in Mesa debug builds. D3D12 allows atomics on typed
buffers, and this requires a distinction to be made between UAV reads and
atomic ops.
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=53874
Unlike compatible_data_types() and implicit_compatible_data_types(),
this function is intended to be symmetrical. So it makes sense to
preserve the names "t1" and "t2" for the arguments.
Otherwise, for instance, the added test results in:
debug_hlsl_writemask: Assertion `!(writemask & ~VKD3DSP_WRITEMASK_ALL)' failed.
This happens in allocate_variable_temp_register() when the variable's
type reg_size is <= 4 but its component count is larger, which may
happen if it contains objects.
We would like to generate SPIR-V for input formats other than DXBC.
The "vkd3d_" prefix is dropped, partly to make names shorter, and partly to help
clarify what is an internal function.
I prefer avoiding the vkd3d_* prefix on all internal functions, for these
reasons. However, I'm open to restoring it.
The function has far too many arguments, including multiple different arguments
with the same type. Use a structure for clarity and to avoid errors.
Merge hlsl_new_sample_lod() into hlsl_new_resource_load() accordingly.
This should silence warnings about some branches not returning any value
without requiring an additional "return 0" statement or similar.
Also, in theory this might enable the compiler to optimize the program
a little bit more, though that's unlikely to have any measurable effect.
Also, TextureCube and TextureCubeArray don't support the offset
argument, so this check is updated here too.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Also, TextureCube and TextureCubeArray don't support the offset
argument, so this check is updated.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
HLSL_ARRAY_ELEMENTS_COUNT_IMPLICIT (zero) is used as a temporary value
for elements_count for implicitly sized arrays.
This value is replaced by the correct one after parsing the initializer.
In case the implicit array is not initialized correctly, hlsl_error()
is called but the array size is kept at 0, so the rest of the code
must handle these cases.
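A sketch of that mechanism (apart from HLSL_ARRAY_ELEMENTS_COUNT_IMPLICIT
itself, all names are illustrative):
```
#include <assert.h>

#define HLSL_ARRAY_ELEMENTS_COUNT_IMPLICIT 0u

struct array_type
{
    unsigned int elements_count;      /* 0 while the size is still implicit */
    unsigned int element_components;  /* components per array element */
};

static void resolve_implicit_size(struct array_type *type,
        unsigned int initializer_components)
{
    assert(type->elements_count == HLSL_ARRAY_ELEMENTS_COUNT_IMPLICIT);

    /* On a malformed initializer the real code calls hlsl_error() and
     * leaves the count at 0, so later passes must tolerate that value. */
    if (!type->element_components
            || initializer_components % type->element_components)
        return;
    type->elements_count = initializer_components / type->element_components;
}
```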
In shader model 5.1, unlike in 5.0, declaring a multi-dimensional
object-type array with the last dimension implicit results in
an error. This happens even in the presence of an initializer.
So, both gen_struct_fields() and declare_vars() first check whether the
shader model is 5.1, the array elements are objects, and there is at
least one implicit array size; if so, the whole type is handled as an
unbounded resource array.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
It is the responsibility of the shader programmer to ensure that
object references can be resolved statically.
Resource arrays for ps_5_1 and vs_5_1 are an exception which is not
properly handled yet. They probably deserve a different object type.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Otherwise we get false in implicit_compatible_data_types() when passing
types that are equal but not convertible according to
convertible_data_type(); e.g. getting:
"Can't implicitly convert from Texture2D<float4> to Texture2D<float4>."
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Don't assume that enums and uint32_t parameters are identical. Clang
16 changes the diagnostic for incompatible function pointer types
from a warning into an error by default.
This fixes the following error, when built (for aarch64, but probably
also for other architectures) in MSVC mode:
../src/libs/vkd3d/libs/vkd3d-shader/spirv.c:1083:13: error: incompatible function pointer types passing 'uint32_t (struct vkd3d_spirv_builder *, uint32_t, SpvDim, uint32_t, uint32_t, uint32_t, uint32_t, SpvImageFormat)' (aka 'unsigned int (struct vkd3d_spirv_builder *, unsigned int, enum SpvDim_, unsigned int, unsigned int, unsigned int, unsigned int, enum SpvImageFormat_)') to parameter of type 'vkd3d_spirv_build7_pfn' (aka 'unsigned int (*)(struct vkd3d_spirv_builder *, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int)') [-Wincompatible-function-pointer-types]
vkd3d_spirv_build_op_type_image);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/libs/vkd3d/libs/vkd3d-shader/spirv.c:612:68: note: passing argument to parameter 'build_pfn' here
SpvOp op, const uint32_t *operands, vkd3d_spirv_build7_pfn build_pfn)
^
Signed-off-by: Martin Storsjö <martin@martin.st>
hlsl_new_store() and hlsl_new_load() are deleted, so now there are no more
direct ways to create derefs with offsets in hlsl.c and hlsl.h.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
This can be done now, to ensure that register offsets are no longer used
in hlsl.c and hlsl.h.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
At this point, the parse code is free of offsets; it only uses index
paths.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
The transform_deref_paths_into_offsets pass turns these index paths back
into register offsets.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
At this point add_load() is split into add_load_component() and
add_load_index(); register offsets are hidden for these functions.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Once an event is signaled, the corresponding struct vkd3d_waiting_event
entry is considered dead and could be overwritten, so it's not safe to
keep a pointer to it in d3d12_fence_SetEventOnCompletion(). Instead,
keep the latch bit in d3d12_fence_SetEventOnCompletion() and put a
pointer to it in struct vkd3d_waiting_event.
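An illustrative sketch of the resulting ownership (declarations are
assumptions, not the actual vkd3d ones): the waiter keeps the latch bit
on its own stack, and the queue entry stores only a pointer to it, so
the entry may be overwritten as soon as it has been signaled.
```
#include <stdbool.h>
#include <stdint.h>

struct waiting_event
{
    uint64_t value;
    bool *latch;  /* points into the waiting thread's stack frame */
};

static void signal_waiting_event(struct waiting_event *event)
{
    /* The entry is dead after this store; only the latch survives. */
    *event->latch = true;
}
```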
Fixes reflections in Control appearing with only their red component.
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=52146
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Specifying R32 for UAVs created with a vector format, e.g. R32G32B32A32_FLOAT,
results in only the red being loaded/stored, potentially causing images to
contain only the red component.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Pthread mandates that a mutex must be unlocked before being destroyed.
In practice I doubt this makes a difference on any platform (certainly
it doesn't on Linux), but let's comply with standards.
An unblocking Signal() on the CPU must be handled after the blocked op
is written, or the op will not be flushed until the next signal.
The device is locked while the fence is already locked, so the fence must
never be locked after locking the device. Currently this never occurs.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Otherwise the following sequence can occur:
1. A command queue is added to the blocked list during a Wait() call.
2. An unblocking Signal() occurs on the CPU in another thread, flushing the
blocked ops, but as no op has been written, the queue is removed from the
blocked list.
3. The blocked op is written.
4. Another op is queued and the queue is not re-added to the blocked
list because this only happens for the first op.
World of Warcraft triggers this issue.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
When using PE vkd3d through Wine, debug output may be swallowed by writing to
Win32 stderr. Avoid this by providing a way to hook up vkd3d log output to Wine
output.
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
When using PE vkd3d through Wine, debug output may be swallowed by writing to
Win32 stderr. Avoid this by providing a way to hook up vkd3d log output to Wine
output.
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
When using PE vkd3d through Wine, debug output may be swallowed by writing to
Win32 stderr. Avoid this by providing a way to hook up vkd3d log output to Wine
output.
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Strictly increasing timeline values must be mapped to fence virtual values
to avoid invalid use of Vulkan timeline semaphores. In particular, non-
increasing values and value jumps of >= 4G are permitted in d3d12.
Different virtual D3D12 command queues may map to the same Vulkan queue.
If a wait of value N is submitted on one command queue, and then a signal
for >= N is submitted on another, but they are sent to the same Vk queue,
the wait will never complete. The solution is to buffer out-of-order waits
and any subsequent queue commands until an unblocking signal value is
submitted to a different D3D12 queue, or signaled on the CPU.
Buffering out-of-order waits also fixes the old fence implementation so it
is fully functional, though a bit less efficient than timeline semaphores.
Based in part on vkd3d-proton patches by Hans-Kristian Arntzen. Unlike the
vkd3d-proton implementation, this patch does not use worker threads for
submissions to the Vulkan queue.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Order does not need to be preserved here, and another function will add
to this array when mapped timeline semaphores are implemented.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Simplifies the handling of GPU waits, and in vkd3d-proton is reported
to increase performance when support for multiple Vulkan queues is
enabled, because it avoids the problem of fences being signaled while
they sit in the pending buffer waiting to be moved to the wait buffer.
Based on a vkd3d-proton patch by Philip Rebohle.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Simplifies the preservation of fence objects until worker threads are
done with them, and will be needed when threaded queue submission is
added.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Otherwise we can get failed assertions:
assert(node->type == HLSL_IR_LOAD);
because broadcasts to these types are not implemented yet.
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
The current workaround is broken for texture cube arrays, which already have 4
components.
Sampling with the components not packed together apparently succeeds with newer
NVidia drivers, so just remove the workaround. Tested with 470.103.01 on a GTX
1060.
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=52886
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Fixes crashes in Shadow of the Tomb Raider, GRID 2019 and probably others.
Signed-off-by: Conor McCarthy <cmccarthy@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
This commit includes work by Francisco Casas.
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
This commit includes work by Francisco Casas.
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
With this change it is possible to store booleans as 0xffffffff,
similarly to what happens at runtime.
Signed-off-by: Giovanni Mascellani <gmascellani@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Francisco Casas <fcasas@codeweavers.com>
Signed-off-by: Matteo Bruni <mbruni@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
This is especially a problem when e.g. it introduces whitespace
before a #pragma directive, breaking shader compilation.
Signed-off-by: Matteo Bruni <mbruni@codeweavers.com>
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Henri Verbeet <hverbeet@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>