If the parameter is HLSL_STORAGE_IN, we add a cast from the arg to the
param type so that it can enter the function; however, this cast should
not be considered part of the lhs on the implicit assignment that happens
if the var is also HLSL_STORAGE_OUT.
While it has been possible to do this at parse time so far, it must now
happen after we know whether the complex cast is on the lhs or not.
The modified tests show that before this patch we miscompile this case,
because a complex lhs cast is transformed into a load, and
add_assignment() just stores to the generated "cast" temp.
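For illustration, a hypothetical case of the kind this affects (the names
and types below are made up, not taken from the modified tests):

    void func(inout int3 v)
    {
        v += 1;
    }

    float4 main() : sv_target
    {
        float3 f = {1.5, 2.5, 3.5};
        /* "f" is cast from float3 to int3 so that it can enter func()
         * (the HLSL_STORAGE_IN cast); because the parameter is also
         * HLSL_STORAGE_OUT, the result must be stored back into "f"
         * itself, not into a temp generated for that cast. */
        func(f);
        return float4(f, 0.0);
    }
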
For example, given two arguments, half3 and float, and two functions,
func(float, float) and func(float3, float3), fxc/d3dcompiler prefers to
widen both arguments to float3.
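A hypothetical shader exercising that preference (the function bodies are
just placeholders):

    float4 func(float a, float b)
    {
        return float4(a, b, 0.0, 0.0);
    }

    float4 func(float3 a, float3 b)
    {
        return float4(a + b, 0.0);
    }

    float4 main() : sv_target
    {
        half3 h = {0.1, 0.2, 0.3};
        float f = 0.5;
        /* Both arguments are widened to float3, selecting the second
         * overload, as fxc/d3dcompiler does. */
        return func(h, f);
    }
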
Pixel shader 1.x constants must be between -1 and 1, or they will be clamped,
even constants defined in the shader.
Also mark 1.x-specific features if any.
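For example, assuming a shader compiled for a 1.x profile such as ps_1_1,
the literals below end up in a constant register and are subject to that
clamping:

    float4 main() : sv_target
    {
        /* 2.0 and -2.0 are constants defined in the shader; on a 1.x
         * profile they are clamped, so this effectively returns
         * (1.0, -1.0, 0.5, 1.0). */
        return float4(2.0, -2.0, 0.5, 1.0);
    }
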
When the client acquires the Vulkan queue it has to ensure that it is
not submitting work before other work it depends on that was already
submitted through the Direct3D 12 API but is still sitting in the
internal vkd3d queue. Currently we suggest enqueueing a fence signal
and then waiting for it before acquiring the Vulkan queue, which is
correct but excessive: it waits not just for the work currently in the
queue to be submitted, but for it to be executed too, introducing
useless dependencies.
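For reference, the currently suggested pattern looks roughly like the
following sketch (illustrative only: error handling is omitted, the helper
name is made up, and a real client would wait on an event instead of
polling):

    #define COBJMACROS
    #include <stdint.h>
    #include <vkd3d.h>

    /* Wait for everything currently in the internal vkd3d queue to be
     * submitted and executed, then grab the Vulkan queue; the wait for
     * execution is the excessive part. */
    static VkQueue acquire_vk_queue_after_pending_work(ID3D12CommandQueue *queue,
            ID3D12Fence *fence, uint64_t value)
    {
        ID3D12CommandQueue_Signal(queue, fence, value);
        while (ID3D12Fence_GetCompletedValue(fence) < value)
            ;
        return vkd3d_acquire_vk_queue(queue);
    }
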
By adding a way to enqueue signalling a fence on the CPU side, we allow
the client to wait for the currently outstanding work to be submitted
to Vulkan, but nothing more.
Commit 1ed8d907b3998b847daa26154ca0261f55f0de23 inadvertently dropped
emitting the tessellator domain for domain shaders. Although Vulkan
environments allow us to write the tessellator domain from the hull
shader, the domain shader, or both, that's not generally true for OpenGL
environments.