The basic idea is to write out the signal with the same number of channels
it had when it came in. Things get a bit more complicated when one output
block may be derived from more than one input block, each having a different
number of channels. When this happens, the input blocks with fewer channels
are up-mixed, so as not to lose (or distort) any signal in the block with
more channels.
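As a rough illustration of that mixing rule (not the actual Gecko code), a
self-contained sketch follows; UpMix here is a stand-in for the real
channel up-mix rules, which are layout-aware:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Block = std::vector<std::vector<float>>; // [channel][sample]

    // Stand-in up-mix: copy existing channels and zero-fill the rest.
    // (The real rules depend on the speaker layout, e.g. mono -> stereo
    // duplicates the single channel.)
    static Block UpMix(const Block& aBlock, size_t aChannels, size_t aSize) {
      Block up(aChannels, std::vector<float>(aSize, 0.0f));
      for (size_t c = 0; c < aBlock.size(); ++c) {
        up[c] = aBlock[c];
      }
      return up;
    }

    // Sum the input blocks after up-mixing each to the widest channel
    // count, so the block with more channels loses no signal.
    Block MixInputs(const std::vector<Block>& aInputs, size_t aSize) {
      size_t channels = 1;
      for (const Block& b : aInputs) {
        channels = std::max(channels, b.size());
      }
      Block out(channels, std::vector<float>(aSize, 0.0f));
      for (const Block& b : aInputs) {
        Block up = UpMix(b, channels, aSize);
        for (size_t c = 0; c < channels; ++c) {
          for (size_t i = 0; i < aSize; ++i) {
            out[c][i] += up[c][i];
          }
        }
      }
      return out;
    }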
HRTFPanner no longer uses exponential decay (with a 20 ms time constant) for
delay changes; instead it uses a smoother linear transition over the
cross-fade time (~45 ms).
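A minimal sketch of the difference (illustrative only, not the HRTFPanner
source): the old approach smoothed the delay value itself, while the new
one renders both delay paths and cross-fades linearly between them:

    #include <algorithm>
    #include <cmath>

    // Old behaviour (sketch): exponentially approach the target delay
    // with a 20 ms time constant.
    float SmoothedDelay(float aDelay, float aTarget, float aDtSeconds) {
      return aDelay +
             (aTarget - aDelay) * (1.0f - std::exp(-aDtSeconds / 0.020f));
    }

    // New behaviour (sketch): render with both the old and the new delay
    // and ramp linearly from one to the other over the cross-fade (~45 ms).
    float CrossFaded(float aOldPath, float aNewPath, float aTSeconds) {
      const float kFadeSeconds = 0.045f;
      float gain = std::min(aTSeconds / kFadeSeconds, 1.0f); // 0 -> 1 ramp
      return (1.0f - gain) * aOldPath + gain * aNewPath;
    }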
--HG--
rename : content/media/webaudio/DelayProcessor.cpp => content/media/webaudio/DelayBuffer.cpp
rename : content/media/webaudio/DelayProcessor.h => content/media/webaudio/DelayBuffer.h
extra : rebase_source : 18453d631779cd7d0672b5325e110b107ab4237d
The subsample alignment of resampled buffers provides seamless playback even
when buffer durations are not an integer number of track ticks.
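A minimal model of that alignment (not the actual code): the fractional
tick position where one resampled buffer ends is carried into the next
buffer's phase offset, so the splice is seamless:

    #include <cstdint>
    #include <vector>

    struct BufferInfo {
      double durationTicks; // may be non-integral after resampling
    };

    void PlaceBuffers(const std::vector<BufferInfo>& aBuffers) {
      double pos = 0.0; // position in track ticks, fractional part kept
      for (const BufferInfo& b : aBuffers) {
        int64_t wholeTicks = static_cast<int64_t>(pos);
        double subsample = pos - static_cast<double>(wholeTicks);
        // ... render the buffer starting at wholeTicks, feeding
        // `subsample` to the resampler as its phase offset ...
        (void)wholeTicks;
        (void)subsample;
        pos += b.durationTicks; // carry the remainder into the next buffer
      }
    }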
--HG--
extra : rebase_source : 0fcd52e8a9560de881aa73931cf22a02f984d748
This removes the dependence on AllInputsFinished(), which didn't return true
for many input types.
The DelayProcessor is no longer continuously reset (bug 921457), and the
reference is now correctly added again when all inputs are finished and new
inputs are then connected.
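A hypothetical sketch of the lifetime rule being fixed (all names invented
for illustration): the node takes a self-reference while its delayed tail
plays out after inputs finish, and releases it again when new inputs are
connected, so the reference can correctly be taken again later:

    // Hypothetical sketch, not the Gecko implementation.
    class DelayNodeModel {
    public:
      void NotifyAllInputsFinished() {
        if (!mHaveTailRef) {
          mHaveTailRef = true;
          TakeSelfReference(); // stay alive while the tail (up to
        }                      // maxDelayTime) plays out
      }
      void NotifyNewInputConnected() {
        if (mHaveTailRef) {
          mHaveTailRef = false;
          ReleaseSelfReference(); // normal lifetime rules again; the
        }                         // reference can be re-taken later
      }
    private:
      void TakeSelfReference() {}    // stand-ins for the real
      void ReleaseSelfReference() {} // reference counting
      bool mHaveTailRef = false;
    };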
--HG--
extra : rebase_source : b85c62305a6fcfce57bd40a11edaeaaf2a63c188
The ObtainInputBlock API is also changed to create an input block for one
input port at a time. An array of these input blocks is then sent to
ProduceAudioBlock for processing, which generates an array of AudioChunks as
output.
Backwards compatibility with existing engines is achieved by keeping the
existing ProduceAudioBlock API for engines with at most one input and one
output port.
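Roughly, the engine interface now looks like this (an illustrative sketch;
the multi-port method name and types here are invented, not the actual
Gecko signatures):

    #include <vector>

    struct AudioChunk {}; // stand-in for the real chunk type

    class AudioNodeEngineSketch {
    public:
      // Multi-port path: one entry in aInput per input port, one entry
      // in aOutput per output port.
      virtual void ProduceAudioBlocksOnPorts(
          const std::vector<AudioChunk>& aInput,
          std::vector<AudioChunk>& aOutput) {}

      // Legacy single-port path, kept for engines with at most one
      // input and one output port.
      virtual void ProduceAudioBlock(const AudioChunk& aInput,
                                     AudioChunk* aOutput) {}

      virtual ~AudioNodeEngineSketch() = default;
    };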
These MediaStreams are used both to down-mix the input AudioChunks and to
get proper stream processing ordering. The MediaStream of the source
AudioNode is an input to these streams, and these streams in turn are inputs
to the MediaStream of the AudioNode that owns the AudioParam.
This way, the MediaStreamGraph processing code will order the streams so
that by the time the MediaStream for a given node is processed, all of the
MediaStreams belonging to the AudioNode(s) feeding into the AudioParam have
been processed.
This has a tricky side effect: those streams are also considered when
determining the input block for the AudioNodeStream belonging to the
AudioParam's owner AudioNode. In order to fix that, we simply special-case
those streams and make AudioNodeStream::ObtainInputBlock ignore them.
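A minimal model of that special case (not the real ObtainInputBlock):
param-helper inputs exist only for ordering and contribute no audio to the
owner node's input block:

    #include <vector>

    struct InputEdge {
      bool isParamHelper; // true for the hidden AudioParam helper stream
      float sample;       // stand-in for the stream's current block
    };

    float ObtainInputBlockModel(const std::vector<InputEdge>& aInputs) {
      float sum = 0.0f;
      for (const InputEdge& in : aInputs) {
        if (in.isParamHelper) {
          continue; // ordering-only edge: ignored when gathering audio
        }
        sum += in.sample;
      }
      return sum;
    }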
Without this patch, if we have an AudioBufferSourceNode which is finished, the
nodes following it will not see its mLastChunk. This breaks the Web Audio
block processing logic since we rely on the assumption that the buffer for the
last output of AudioBufferSourceNode is correctly passed to the following
engines.
Here is what this patch does:
* Got rid of the JSBindingFinalized stuff
* Made all nodes wrappercached
* Started to hold a self reference while the AudioBufferSourceNode is playing back
* Converted the input references to weak references
* Got rid of all of the SetProduceOwnOutput and UpdateOutputEnded logic
The nodes are now collected by the cycle collector, which calls into
DisconnectFromGraph; that drops the references to other nodes and destroys
the media stream. Note that most of the cycles that are now inherent in the
ownership model are between nodes and their AudioParams (that is, the cycles
not created by content).
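The self-reference-while-playing rule can be modeled with standard C++
(Gecko uses its cycle-collected reference counting rather than shared_ptr;
this is only a sketch of the idea):

    #include <memory>

    class SourceNodeModel
        : public std::enable_shared_from_this<SourceNodeModel> {
    public:
      // Usage: auto n = std::make_shared<SourceNodeModel>(); n->Start();
      void Start() { mPlayingRef = shared_from_this(); } // alive while
                                                         // playing back
      void NotifyFinished() { mPlayingRef.reset(); }     // collectible again
    private:
      std::shared_ptr<SourceNodeModel> mPlayingRef;
    };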
Also removes NotifyHasCurrentData's boolean parameter; we just fire this
once and treat a stream that has once had current data as always
having current data (since we block a stream that would advance
beyond its available data).
Modifies MediaStreamGraph to always advance its time by a multiple of
WEBAUDIO_BLOCK_SIZE.
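For reference, rounding a tick count up to a block boundary looks like this
(WEBAUDIO_BLOCK_SIZE is 128 frames):

    #include <cstdint>

    constexpr int64_t WEBAUDIO_BLOCK_SIZE = 128;

    // Round up so graph time always advances by whole WebAudio blocks.
    int64_t RoundUpToBlockBoundary(int64_t aTicks) {
      return ((aTicks + WEBAUDIO_BLOCK_SIZE - 1) / WEBAUDIO_BLOCK_SIZE) *
             WEBAUDIO_BLOCK_SIZE;
    }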
--HG--
extra : rebase_source : 99524b09edd4ac0e1bc6607f2ba14925bc2f11c2