Compiling and running with UBSan reported the following errors:

    tests/d3d12.c:31063:5: runtime error: index 4 out of bounds for type 'float [4][8]'
    tests/d3d12.c:31063:5: runtime error: index 8 out of bounds for type 'float [8]'
    tests/d3d12.c:31063:5: runtime error: load of address 0x557ee85a1500 with insufficient space for an object of type 'const float'
    tests/d3d12.c:31248:5: runtime error: index 4 out of bounds for type 'float [4][4]'
    tests/d3d12.c:31248:5: runtime error: index 4 out of bounds for type 'float [4]'
    tests/d3d12.c:31248:5: runtime error: load of address 0x557ee85a10d0 with insufficient space for an object of type 'const float'
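
A minimal, hypothetical reduction of the kind of access UBSan is
flagging: the row index runs one past the end of the array, so the
last load reads memory outside the object (none of the identifiers
below appear in the actual test).

    static const float expected[4][8] = {{0.0f}};

    float get_expected(unsigned int row, unsigned int column)
    {
        /* If row reaches 4 (or column reaches 8), this access is out
         * of bounds, and UBSan reports errors like the ones above. */
        return expected[row][column];
    }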

This used to work when the macOS runner had a Sonoma host system.
Now the host is Sequoia, and even though the guest is still Sonoma,
the test crashes with:

    [mvk-error] VK_TIMEOUT: MTLCommandBuffer "vkQueueSubmit MTLCommandBuffer on Queue 3-0" execution failed (code 2): Caused GPU Hang Error (00000003:kIOGPUCommandBufferCallbackErrorHang)
    vkd3d:56072:err:vkd3d_wait_for_gpu_timeline_semaphore Failed to wait for Vulkan timeline semaphore, vr -4.

Upgrading MoltenVK or the guest to Sequoia doesn't seem to help.
I haven't investigated the problem, but my experience is that
the paravirtualized Metal driver has a number of problems.

We currently check that non-shader-visible heaps have a NULL GPU
handle, but that doesn't seem to be guaranteed: besides WARP,
NVIDIA drivers also return a valid pointer. That check is pretty
useless anyway; instead, check that shader-visible heaps have a
valid pointer, which is the more interesting property.
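
A minimal sketch of the stricter check, in the style of the
existing tests (assuming a shader-visible descriptor heap "heap"
has already been created, and the usual ok() macro):

    D3D12_GPU_DESCRIPTOR_HANDLE gpu_handle;

    /* A shader-visible heap must be addressable by the GPU, so a
     * NULL handle here would be a genuine driver bug. */
    gpu_handle = ID3D12DescriptorHeap_GetGPUDescriptorHandleForHeapStart(heap);
    ok(gpu_handle.ptr, "Got NULL GPU handle for shader-visible heap.\n");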

Currently the tests expect that creating buffers in COMMON or
COPY_SOURCE state on UPLOAD heaps, or in COMMON state on READBACK
heaps, leads to a failure. I tested WARP, AMD and NVIDIA, and in
all cases the operation is successful.

I think the D3D12 runtime used to reject resources created in the
configurations detailed above, but it doesn't any more (both with
the latest Agility SDK and with the runtime distributed with an
up-to-date Windows 11 system). However, the CI still uses an
earlier runtime, so the old behavior is still allowed as broken.
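
A minimal sketch of the updated expectation, assuming vkd3d-style
test helpers and an already-filled buffer resource_desc: creating
the buffer in COMMON state on an UPLOAD heap is expected to
succeed, while the rejection by older runtimes is tolerated as
broken().

    memset(&heap_properties, 0, sizeof(heap_properties));
    heap_properties.Type = D3D12_HEAP_TYPE_UPLOAD;

    hr = ID3D12Device_CreateCommittedResource(device, &heap_properties,
            D3D12_HEAP_FLAG_NONE, &resource_desc, D3D12_RESOURCE_STATE_COMMON,
            NULL, &IID_ID3D12Resource, (void **)&resource);
    ok(hr == S_OK || broken(FAILED(hr)), "Got unexpected hr %#x.\n", hr);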

It seems that the NVIDIA drivers leave VBV bindings untouched
when they are NULL (or the GPU buffer address is NULL), instead of
setting them to a null binding.

Unlike other cases of inconsistent behaviour between AMD and
NVIDIA, here I'm explicitly marking the NVIDIA behaviour as
broken, because the expected behaviour is spelled out explicitly
(at least by the standards of the D3D12 specification).
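
For illustration, these are the two forms of null binding in
question; per the specification both should leave slot 0 with a
null binding, while NVIDIA appears to keep whatever was bound
there before:

    /* A view whose GPU buffer address is NULL... */
    static const D3D12_VERTEX_BUFFER_VIEW null_vbv;
    ID3D12GraphicsCommandList_IASetVertexBuffers(command_list, 0, 1, &null_vbv);
    /* ...or no view array at all. */
    ID3D12GraphicsCommandList_IASetVertexBuffers(command_list, 0, 1, NULL);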

It's hard to pinpoint exactly what's going wrong with these
tests. They seem to be related to atomics and GPU timestamps,
both categories that are known to have problems on MoltenVK in
one way or another. Those failures clearly depend on a few
factors, like the MoltenVK version, the macOS version and whether
we're in a virtual machine or not, but the exact dependency on
those factors is hard to describe (for example, the
paravirtualized device offered inside virtual machines generally
has a lot more problems than real devices, but I've seen tests,
with all other conditions fixed, work on the paravirtualized
device and not on the real device). The only thing all the tests
in this batch have in common is that I've never seen them fail on
a Sequoia system, so I've settled for using just that as the
bug_if() condition, as sketched below. Ultimately, spending a lot
of time getting to the bottom of each individual test failure is
pointless, and being able to mark the CI job as not allowed to
fail gives better regression protection than investigating each
of those failures. Also, I routinely run the tests on a Sequoia
system, so if these tests do break, that will be noticed anyway.
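
A sketch of that condition; is_sequoia_host() is a hypothetical
helper standing in for whatever macOS version detection the test
suite ends up using:

    /* These tests have never been seen to fail on Sequoia, so only
     * allow the flaky result elsewhere. */
    bug_if(!is_sequoia_host())
    ok(value == expected, "Got unexpected value %#x.\n", value);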

On MoltenVK it seems that all draws are always executed,
regardless of the early depth-stencil test. The problem doesn't
seem to lie in vkd3d or MoltenVK, because the generated Metal
commands look correct. I tried looking at a GPU capture with
Xcode, which was not very conclusive, because it doesn't state
clearly whether the early fragment tests were passed or not.
Sometimes it says that a fragment shader execution had no thread
execution data, which I interpret as the early fragment tests
having prevented the fragment shader from running, but it's not
really consistent, and it's never clear which results come from
software simulation and which from the hardware run. However,
taking everything into account, I think the most likely
explanation is some incorrect optimization at the Metal level.
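
For context, this is the kind of shader such a test uses (embedded
vkd3d-test style; the exact code is illustrative, not taken from
the test suite): with early depth-stencil forced, an occluded
fragment must never reach the UAV write, so the counter shows
whether the draw really executed.

    static const char ps_early_depth[] =
        "RWStructuredBuffer<uint> u : register(u1);\n"
        "[earlydepthstencil]\n"
        "float4 main() : SV_Target\n"
        "{\n"
        "    /* Never reached if the early depth test culls the fragment. */\n"
        "    InterlockedAdd(u[0], 1);\n"
        "    return float4(0.0f, 1.0f, 0.0f, 1.0f);\n"
        "}\n";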