- Refactored the PC stereo spatialization code to move shared data and calculations to platform-independent code
- Implemented stereo spatialization on PS4 using the non-A3D API
- More work is needed to determine A3D support and to finish stereo spatialization
#codereview marcus.wassmer
[CL 2713854 by Aaron McLeran in Main branch]
CL 2704794 2704931 2704948 2704962 2705238 2706353 2700643 2705458
Uniform Buffer layout name, and more debug info for this and other bugs: OR-7159 CRASH: Client crashed at start of match in D3D11Commands.cpp
Adding OptionalData (key-value pairs) to the shaders (to put the shader name into the shader code for better debugging). Affects all RHIs, invalidates the DDC key for all shaders, and cleans up existing code (which was adding 0/1/5 bytes and had to compensate in many areas)
OptionalData:
the key is a char, the value is up to 255 bytes, and the total is capped at 64K
#platformnotify Josh.Adams
[CL 2706923 by Martin Mittring in Main branch]
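The OptionalData layout described above can be sketched as a simple blob, under the assumption that each entry is stored as a 1-byte key, a 1-byte length (capping values at 255 bytes), then the value bytes, with the whole blob capped at 64K. The struct and method names below are illustrative, not the engine's:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of the OptionalData wire format: [key][len][value...]
struct OptionalDataBlob
{
    std::vector<uint8_t> Bytes;

    // Append a key/value pair; fails if a cap would be exceeded.
    bool Add(char Key, const std::string& Value)
    {
        if (Value.size() > 255 || Bytes.size() + 2 + Value.size() > 0xFFFF)
        {
            return false; // per-value 255-byte cap or 64K total cap exceeded
        }
        Bytes.push_back(static_cast<uint8_t>(Key));
        Bytes.push_back(static_cast<uint8_t>(Value.size()));
        Bytes.insert(Bytes.end(), Value.begin(), Value.end());
        return true;
    }

    // Linear scan for a key; returns an empty string if absent.
    std::string Find(char Key) const
    {
        size_t i = 0;
        while (i + 2 <= Bytes.size())
        {
            const uint8_t Len = Bytes[i + 1];
            if (static_cast<char>(Bytes[i]) == Key)
            {
                return std::string(Bytes.begin() + i + 2, Bytes.begin() + i + 2 + Len);
            }
            i += 2 + Len; // skip to the next entry
        }
        return {};
    }
};
```

For the debugging use case above, the shader name would be stored under some agreed-upon key (the key choice here is hypothetical) and read back when a crash like OR-7159 is being diagnosed.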
- Removing the async DestroyVoice code
- Adding format-based pools
- Created a separate pool for voices that use the spatialization effect (splitting mono to stereo), since XAudio2 defaults to a max per-voice effect output channel count equal to the input channel count unless the voice is created with an effect chain. XAudio2 voices that need the mono-to-stereo effect therefore have to be created with it.
- Put the new pools and related code in FXAudioDeviceProperties, since it's the struct we use for XAudio2-specific handles to resources, e.g. IXAudio2, IXAudio2MasteringVoice, etc.
- Added pool shutdown code and moved other XAudio2 shutdown code into the destructor of FXAudioDeviceProperties
[CL 2701170 by Aaron McLeran in Main branch]
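The format-based pooling above can be sketched as a pool keyed by sample rate, channel count, and whether the voice needs the mono-to-stereo spatialization effect. This is a minimal illustration, not the engine's code: `PooledVoice` stands in for an XAudio2 source voice, and the real code in FXAudioDeviceProperties would hold IXAudio2SourceVoice pointers and create spatialized voices with the effect chain up front.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>
#include <tuple>
#include <vector>

// Hypothetical key: voices are only interchangeable within the same format,
// and spatialized voices pool separately (they carry the effect chain).
struct VoiceFormatKey
{
    uint32_t SampleRate;
    uint32_t NumChannels;
    bool bUsesSpatializationEffect;

    bool operator<(const VoiceFormatKey& Other) const
    {
        return std::tie(SampleRate, NumChannels, bUsesSpatializationEffect)
             < std::tie(Other.SampleRate, Other.NumChannels, Other.bUsesSpatializationEffect);
    }
};

struct PooledVoice
{
    VoiceFormatKey Key;
};

struct VoicePool
{
    std::map<VoiceFormatKey, std::vector<std::unique_ptr<PooledVoice>>> FreeVoices;
    int NumCreated = 0; // how many voices were actually constructed

    std::unique_ptr<PooledVoice> Acquire(const VoiceFormatKey& Key)
    {
        auto& Bucket = FreeVoices[Key];
        if (!Bucket.empty())
        {
            // Reuse a free voice of the exact same format instead of creating one.
            std::unique_ptr<PooledVoice> Voice = std::move(Bucket.back());
            Bucket.pop_back();
            return Voice;
        }
        ++NumCreated; // stand-in for CreateSourceVoice (+ effect chain if needed)
        return std::make_unique<PooledVoice>(PooledVoice{Key});
    }

    void Release(std::unique_ptr<PooledVoice> Voice)
    {
        FreeVoices[Voice->Key].push_back(std::move(Voice));
    }
};
```

Pool shutdown then amounts to clearing `FreeVoices` (destroying the real voices) before the device handles go away, which is why the shutdown code lives in the FXAudioDeviceProperties destructor.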
- Changing the way the Oculus Audio plugin handles parameter passing: rather than using CXAPOParameterBase, we now do our own parameter passing.
- Removing COM ref counting in cases of double-stopping and premature destruction
[CL 2688262 by Aaron McLeran in Main branch]
- New gameplay static function that will set a global pitch value
- An optional parameter can be given which will interpolate the current global pitch value to the new target value over time
- Non-UI sounds will be pitched
- This is a PC-only implementation, other platforms will follow once this feature is confirmed to be sufficient
[CL 2661592 by Aaron McLeran in Main branch]
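The interpolated global pitch above can be sketched as a small state struct. The names here are illustrative, not the engine's: a gameplay static would call `Set()`, the audio device update would tick `Update()`, and non-UI sounds would multiply their pitch by `Current`.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical sketch of a global pitch scale with optional interpolation.
struct GlobalPitchScale
{
    float Current = 1.0f;
    float Start = 1.0f;
    float Target = 1.0f;
    float Duration = 0.0f;
    float Elapsed = 0.0f;

    // Set a new target; with InterpTime <= 0 the change is immediate.
    void Set(float NewTarget, float InterpTime = 0.0f)
    {
        Start = Current;
        Target = NewTarget;
        Duration = InterpTime;
        Elapsed = 0.0f;
        if (Duration <= 0.0f)
        {
            Current = Target;
        }
    }

    // Called once per audio update with the frame's delta time; linearly
    // interpolates from the value at Set() time toward the target.
    void Update(float DeltaTime)
    {
        if (Current == Target)
        {
            return;
        }
        Elapsed += DeltaTime;
        const float Alpha = std::min(Elapsed / Duration, 1.0f);
        Current = Start + (Target - Start) * Alpha;
    }
};
```
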
- Moving the async worker thread declaration and implementation to be inside XAudio2Device.cpp
- Added a threadsafe counter to track the number of in-flight async destroy-voice calls per audio device
- Blocked the call to tear down the audio device until all in-flight destroy-voice calls complete.
- This should rarely actually block, but it prevents XAudio2 from shutting down before an XAudio2 source gets "Destroy" called on it. If a block does occur on shutdown, it will only last a negligible time.
- Wasn't able to repro after rigorous and prolonged audio start/stop spammage in the blueprint editor.
[CL 2660265 by Aaron McLeran in Main branch]
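The shutdown guard above can be sketched with a threadsafe counter: each async destroy-voice task is bracketed by begin/end calls, and device teardown waits until the counter drains. Names are illustrative, not the engine's.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical sketch of the in-flight destroy-voice counter per audio device.
struct AsyncDestroyGuard
{
    std::atomic<int> NumPendingDestroys{0};

    void BeginDestroy() { NumPendingDestroys.fetch_add(1); } // async task queued
    void EndDestroy()   { NumPendingDestroys.fetch_sub(1); } // async task done

    // Called at the start of audio device teardown. Usually returns
    // immediately; only spins for the brief tail of any in-flight destroy.
    void WaitForAllDestroys()
    {
        while (NumPendingDestroys.load() > 0)
        {
            std::this_thread::yield();
        }
    }
};
```

A busy-wait is acceptable here because, as the note above says, a block should be rare and last a negligible time; a condition variable would be the heavier alternative.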
- PC/XAudio2 implementation
- Added a new parameter to the sound attenuation object that specifies stereo spread for 3D spatialized assets that use the attenuation setting
- High-level implementation description: split the stereo file's channels into two mono-source 3D calculations in XAudio2 (though the stereo voice still counts as a single voice).
- Currently, HRTF spatialization only works for mono sources.
- The L/R channels are oriented perpendicular to the listener and spread from each other by the spread parameter.
- Added the ability to visualize 3D audio:
-- the new audio command is "Audio3dVisualize", we can change the name to something else if there's a different name designers would like to use
-- 3D visualization shows the name of the wave instance, a white crosshair at the emitter position, and red/green crosshairs for the left/right channels, so sound designers can visualize the spread parameter. The position used to display the crosshairs is the exact position the audio engine uses to spatialize the channels.
-- We can add more data to this visualization later (current volume, color based on occlusion, etc.)
-- If the sound is mono-spatialized, it will only display a white crosshair (and the name of the wave instance).
-- It only shows actively playing voices that have audio components (so it can render into the component's world), so it can also be used to determine playback activity of audio components.
-- Only implemented for PC at the moment, though this feature should be ported to our other platforms
[CL 2660161 by Aaron McLeran in Main branch]
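The channel placement described above (L/R oriented perpendicular to the listener and separated by the spread parameter) can be sketched as a small bit of vector math. The vector type, function name, and left/right sign convention below are illustrative, not the engine's:

```cpp
#include <cassert>
#include <cmath>

struct Vec3
{
    float X, Y, Z;
};

// Hypothetical sketch: place the left/right channel positions on the axis
// perpendicular (in the horizontal plane) to the listener-to-emitter
// direction, separated from each other by Spread.
void ComputeStereoChannelPositions(const Vec3& Listener, const Vec3& Emitter,
                                   float Spread, Vec3& OutLeft, Vec3& OutRight)
{
    // Direction from the listener to the emitter, projected to the horizontal plane.
    const float DirX = Emitter.X - Listener.X;
    const float DirY = Emitter.Y - Listener.Y;

    // Rotate 90 degrees in the plane to get the perpendicular axis, then normalize.
    float PerpX = -DirY;
    float PerpY = DirX;
    const float Len = std::sqrt(PerpX * PerpX + PerpY * PerpY);
    if (Len > 1e-6f)
    {
        PerpX /= Len;
        PerpY /= Len;
    }

    // Each channel sits half the spread away from the emitter position; these
    // are the positions the visualization would draw its red/green crosshairs at.
    const float Half = Spread * 0.5f;
    OutLeft  = {Emitter.X - PerpX * Half, Emitter.Y - PerpY * Half, Emitter.Z};
    OutRight = {Emitter.X + PerpX * Half, Emitter.Y + PerpY * Half, Emitter.Z};
}
```

Each resulting position would then feed its own mono-source 3D calculation, matching the two-mono-calculation split described above.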