This used to work when the macOS runner had a Sonoma host system.
Now the host runs Sequoia, even though the guest is still Sonoma, and
the test crashes with:
[mvk-error] VK_TIMEOUT: MTLCommandBuffer "vkQueueSubmit MTLCommandBuffer on Queue 3-0" execution failed (code 2): Caused GPU Hang Error (00000003:kIOGPUCommandBufferCallbackErrorHang)
vkd3d:56072:err:vkd3d_wait_for_gpu_timeline_semaphore Failed to wait for Vulkan timeline semaphore, vr -4.
Upgrading MoltenVK, or upgrading the guest to Sequoia, doesn't seem to
help. I haven't investigated the failure, but in my experience the
paravirtualized Metal driver has a number of problems.
There is currently no need to make a special case for 16-bit values,
since the SPIR-V backend confuses them with 32-bit values anyway. The
generated VSIR code is not correct, but that will have to be handled
at a different level.
1D read() is specified to support a level/lod parameter. The MSL
specification claims it needs to be 0 because "mipmaps are not supported
for 1D textures", but that restriction isn't documented for the
"mipmapLevelCount" property of MTLTextureDescriptor, and other APIs do
support mipmapped 1D textures. Multi-sample textures aren't supported
by msl_ld(), so we don't need to worry about them.
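
For reference, a minimal MSL sketch of the overload in question; the
helper and its parameter names are invented for illustration, this is
not actual generator output:

    #include <metal_stdlib>
    using namespace metal;

    /* Hypothetical helper. The MSL specification declares read() for
     * 1D textures as
     *     Tv read(uint coord, uint lod = 0) const
     * and requires "lod" to be 0; the coordinate is an unnormalised
     * texel index. */
    float4 load_1d(texture1d<float> tex, uint coord)
    {
        return tex.read(coord, 0u);
    }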
The cube variants are simply disallowed in Direct3D, and currently
mishandled by msl_ld(). It's less clear whether multi-sample fetches
should be allowed, and how they're supposed to behave if they are,
although typically the "ld2dms" instruction would be used for those.
They're not supported on the MSL side either way.
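
For reference, the MSL language does expose a multi-sample read, but
only as a separate overload on a separate texture type, so the read()
call emitted for the other variants couldn't simply be reused. A
minimal sketch, again with invented names rather than generator output:

    #include <metal_stdlib>
    using namespace metal;

    /* Hypothetical helper. MSL declares read() for multi-sample
     * textures as
     *     Tv read(uint2 coord, uint sample) const
     * taking a sample index instead of a level, which is the shape
     * that lowering "ld2dms" would presumably need. */
    float4 load_2dms(texture2d_ms<float> tex, uint2 coord,
            uint sample_index)
    {
        return tex.read(coord, sample_index);
    }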
I'm not specifically interested in that, but since I ran into these
idiosyncrasies while writing other TGSM tests, I decided it might be
useful to keep them.