- Checked in a first pass on the CoreAudio (Mac/iOS) implementation of the audio device module of the new UnrealAudio
- Fixed an issue with wrapping phase for sine functions (and thus fixed the unit tests)
- Changed the simple device-out unit test to be simpler (a pure sine tone on each channel, in octaves, changing every second)
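The phase-wrapping fix mentioned above can be sketched roughly as follows. This is an illustrative oscillator, not the actual engine code: keeping the phase accumulator in [0, 2*pi) avoids the precision loss and drift that an unbounded float accumulator eventually causes.

```cpp
#include <cmath>

// Hypothetical sketch of a phase-wrapping sine oscillator. The phase
// accumulator is wrapped back into [0, 2*pi) after each increment so it
// never grows without bound (large float phases lose precision).
struct FSineOsc
{
    float Phase = 0.0f;       // current phase in radians, kept in [0, 2*pi)
    float PhaseDelta = 0.0f;  // per-sample phase increment

    void Init(float FrequencyHz, float SampleRate)
    {
        PhaseDelta = 2.0f * 3.14159265358979f * FrequencyHz / SampleRate;
    }

    float NextSample()
    {
        const float Out = std::sin(Phase);
        Phase += PhaseDelta;
        // Wrap rather than letting the accumulator grow unbounded
        if (Phase >= 2.0f * 3.14159265358979f)
        {
            Phase -= 2.0f * 3.14159265358979f;
        }
        return Out;
    }
};
```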
[CL 2519686 by Aaron McLeran in Main branch]
- Changed the design to not include audio device input in the main user callback, since there were issues with patching XAudio2 (which doesn't support device input) and WASAPI capture devices when the device sample rates were not exactly the same. This could be fixed, but it's probably not worth the time at the moment. We'll add mic input back into the audio engine in the future, likely through a different mechanism, perhaps as an audio generator object.
- Fully implemented unreal audio device module for XAudio2
- Added the ability to switch which device module to test from the commandlet. To optionally test a specific module (e.g. to flip between Wasapi and XAudio2 on PC), type:
-command UnrealEd.AudioTestCommandlet XAudio2 device all
or:
-command UnrealEd.AudioTestCommandlet Wasapi device all
The default as of this check-in is Wasapi, so you can also do the following to test Wasapi:
-command UnrealEd.AudioTestCommandlet device all
Of course, the "in" flavors of the tests have been removed, since device input is no longer supported.
[CL 2510537 by Aaron McLeran in Main branch]
Options: Nop / Read / Write
Added ensure() and check() calls to find bad usage patterns
This fixes DBuffer decal rendering and allows optimizations in the RHI (near-hardware APIs like DX12 or consoles)
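As a rough illustration of why explicit Nop/Read/Write transition states enable validation: with a tracked access state per resource, code can flag reads of a resource still in a writable state. The types and names below are hypothetical, not the engine's RHI code, and the bool returns stand in for the ensure()/check() calls mentioned above.

```cpp
// Hypothetical sketch of tracked Nop/Read/Write resource transitions.
// Explicit states let validation code catch bad usage patterns, such as
// sampling a resource that was never transitioned to a readable state.
enum class ETransitionAccess { Nop, Read, Write };

struct FTrackedResource
{
    ETransitionAccess Access = ETransitionAccess::Nop;

    void Transition(ETransitionAccess NewAccess) { Access = NewAccess; }

    // Stand-ins for ensure()/check(): return false on a bad usage pattern
    bool CanReadAsShaderResource() const { return Access == ETransitionAccess::Read; }
    bool CanWriteAsRenderTarget() const { return Access == ETransitionAccess::Write; }
};
```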
[CL 2505694 by Martin Mittring in Main branch]
- First check in for new audio system
- Creation of new UnrealAudio module interface and implementation
- UnrealAudioDevice module interface (for platform-dependent audio device I/O)
- Initialize/Shutdown of new UnrealAudio module
- Windows WASAPI implementation of audio device interface
- Support for querying audio devices
- Support for streaming audio from input devices
- Support for streaming audio to output devices
- Support for multiple output channels (up to 7.1 for now)
- Beginnings of XAudio2 implementation of audio device interface
- A bunch of unit tests testing audio input/output streaming
- A commandlet to run said unit tests
- To run commandlet use "[UE4Editor executable] -command UnrealEd.AudioTestCommandlet [arguments]"
- Arguments to use (and the tests they run) are:
- "device query" - Prints out information about every connected audio device
- "device out" - Plays a decaying sinusoid in harmonic multiples at different rates on available output channels
- "device out_fm" - Plays an FM synthesizer bank with randomized parameters on all available output channels
- "device out_pan" - Plays a single noise source (with LFO LP filter) "3D" panning clockwise around output channels
- "device in" - Plays mic input into all output channels (simple passthrough)
- "device in_delay" - Plays mic input through a multi-tap delay line with random delays through all output channels
- "device all" - Runs all the above with 10 seconds for each test
[CL 2501316 by Aaron McLeran in Main branch]
* Now we get proper formatting and other GPU profiler functionality that already works for other RHIs (r.profilegpu.root)
[CL 2499048 by Daniel Wright in Main branch]
I have reviewed each change carefully, but it is a large change and I could have missed something! Here is a summary of the types of changes in this CL:
* Made nullptr checks consistent (the plurality of the changes are of this type)
* Completed switch statements (i.e., a switch did not explicitly handle the default case but had unhandled enum entries - this is the second most popular type of fix)
* Removed unused variables
* Removed redundant initializations
* WidgetNavigationCustomization.cpp was fixed by the owner
* Converted integers to floats where the result was stored in a float
* Removed redundant null checks (e.g. before delete statements)
* Renamed variables to prevent non-obvious shadowing
* Fixed use of bitwise & when checking for equality to an enum entry (which is often 0)
* Fixes for some copy-paste errors (e.g. FoliageEdMode.cpp)
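Two of the fix categories above (the bitwise-& bug and the completed switch) can be illustrated with hypothetical snippets; these are not the actual changed code.

```cpp
// Illustrative examples of two fix categories from the list above.
enum class EFlag { None = 0, A = 1, B = 2 };

// Bitwise & against an enumerator whose value is 0 is always 0:
//   if (Flags & EFlag::None)   // bug: never true
// The fix is an equality comparison:
bool IsNone(int Flags) { return Flags == static_cast<int>(EFlag::None); }

// Completed switch: every enum entry handled, plus an explicit default.
int FlagToIndex(EFlag Flag)
{
    switch (Flag)
    {
    case EFlag::None: return 0;
    case EFlag::A:    return 1;
    case EFlag::B:    return 2;
    default:          return -1; // explicitly handle unexpected values
    }
}
```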
[CL 2498053 by Dan Oconnor in Main branch]
- The entire editor can now be compiled using Clang on Windows. It also runs (as long as you use the MSVC linker).
- Use UEBuildWindows.bCompileWithClang=true to help test Clang support
- Most UE4 programs can now be compiled using Clang on Windows, and linked using the Clang linker on Windows
- Many C++ syntax fixes to resolve Clang errors and warnings on Windows
- Clang on Windows now supports PCH files properly (but not "shared" PCHs yet)
- Hacked the DirectX XAudio2 headers slightly to work around a macro-pasting bug in Clang
[CL 2494439 by Mike Fricker in Main branch]
Consistent null checks, fixes for copy-pasted conditions, copy-pasted LOCTEXT identifiers, comparisons against literals of the wrong type, missing return statements, dead code, and some system calls that were ignoring their return value
[CL 2494390 by Dan Oconnor in Main branch]
- Everything but the factory continues to use the 1.0 interfaces.
- This requires Vista SP2+, but we should already be falling back to OpenGL at that point, so hopefully this doesn't have any compatibility regressions.
#codereview nick.penwarden
[CL 2492814 by JJ Hoesing in Main branch]
Original Notes (adapted)
Implementation and Integration of Oculus Audio SDK for VR Audio Spatialization
- Adding a new audio extension which wraps the Oculus Audio SDK and the new XAPO plugin
- Adding a new XAPO (XAudio2 Audio Processing Object) effect for processing mono audio source streams inside the new module
- Added a new enumeration which allows users to select which spatialization algorithm to use for spatialized mono sources
- Refactored the regular sound source spatialization/effect in the XAudio2 device code to support the new HRTF mono-to-stereo effect
- Designed the feature so that if the sound spatialization module isn't used, spatialization defaults to the normal spatialization algorithm
Notes on implementation:
- Because the audio engine doesn't bifurcate spatialized vs. non-spatialized sound sources into separate source pools, I had to create the effects up front for the full number of supported sounds (32). This is because the Oculus SDK requires, at initialization, the total number of sound sources that will be used in the SDK.
- Because the Oculus SDK refers to sound source instances by index (into the pre-allocated array of sources set at init), I had to save each sound source's index so that it can be used on the HRTF XAPO effect, and then piped to the Oculus SDK inside the UE audio extension.
- The audio engine assumes that mono-sources will be treated a certain way (during send-routing and spatialization). Because this new HRTF effect is effectively turning mono-sources into stereo-sources, some code had to be changed to send/route these audio sources as if they were stereo.
- This implementation is slightly different from the original GDC implementation, to better work with multiple audio devices. Each audio device creates an IAudioSpatializationAlgorithm object which contains the Oculus HRTF processing contexts.
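The index bookkeeping described in the notes above can be sketched roughly as follows: a fixed number of slots is allocated up front (since the SDK needs the total source count at init), and each active sound source stores its slot index for the HRTF effect. This is a hypothetical pool, not the actual extension code.

```cpp
#include <vector>

// Hypothetical sketch of per-source index bookkeeping for a spatialization
// SDK that refers to sources by index into a pool sized at initialization.
class FSpatializationPool
{
public:
    explicit FSpatializationPool(int MaxSources)
        : bInUse(MaxSources, false) {}

    // Returns a free slot index, or -1 if all pre-allocated slots are taken
    int AcquireSourceIndex()
    {
        for (int i = 0; i < static_cast<int>(bInUse.size()); ++i)
        {
            if (!bInUse[i]) { bInUse[i] = true; return i; }
        }
        return -1;
    }

    void ReleaseSourceIndex(int Index) { bInUse[Index] = false; }

private:
    std::vector<bool> bInUse;  // which pre-allocated SDK slots are active
};
```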
#codereview Nick.Whiting Marc.Audy
[CL 2488287 by Aaron McLeran in Main branch]