Backed out changeset e811d4258d45 (bug 849713)
Backed out changeset 26aa58e87d5d (bug 849713)
Backed out changeset 9a6552161eff (bug 849713)
Backed out changeset 3551877d9b92 (bug 849713)
Landing on a CLOSED TREE
The logic in this function mostly revolves around the mLoop variable. When
mLoop is NotLooping, playback of this node will never loop, and the logic is
unchanged from the ProduceAudioBlock function before this patch.
If loop mode is turned on when start() is called, mLoop will initially be
WillLoop. In that case we play back until mLoopEnd, then wrap around to
mLoopStart, set mLoop to IsLooping, and resume playback. From that point on
mLoop stays IsLooping and we keep looping between mLoopStart and mLoopEnd.
Where possible we use BorrowFromInputBuffer to avoid copying the buffer; when
a block straddles the loop boundary, we copy some frames from the end of the
input buffer and some from the beginning in the memcpy loops at the end of
ProduceAudioBlock.
With the move of all codec-specific code to DecoderTraits, there is no
need to pass compiler flags for GStreamer when building the webaudio
library. This patch updates the Makefile template accordingly.
There were merges in configure.in and in some Makefile.in files, none with
conflicts. I spot-checked the Makefile.in changes and confirmed that they did
not touch any DIRS* variables.
We need this in order to update the MediaStreamGraph thread when an
AudioParam changes. It lets each AudioParam be registered with a callback
from its owner node, so that the owner node can run custom processing code
whenever that AudioParam is mutated.
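A rough sketch of that shape, with hypothetical names (AudioParamSketch,
GainNodeSketch, SendGainToStream); the actual Gecko classes differ, but the
callback-supplied-by-the-owner-node idea is the same.

#include <functional>
#include <utility>

// Hypothetical sketch: an AudioParam is constructed with a callback supplied
// by its owner node; every mutation invokes the callback so the owner can push
// the new value to the MediaStreamGraph thread.
class AudioParamSketch {
 public:
  using ChangeCallback = std::function<void(float /*newValue*/)>;

  AudioParamSketch(float aDefaultValue, ChangeCallback aCallback)
    : mValue(aDefaultValue), mCallback(std::move(aCallback)) {}

  void SetValue(float aValue) {
    mValue = aValue;
    if (mCallback) {
      mCallback(mValue);  // let the owner node react to the mutation
    }
  }

 private:
  float mValue;
  ChangeCallback mCallback;
};

class GainNodeSketch {
 public:
  GainNodeSketch()
    : mGain(1.0f, [this](float aNewValue) {
        // Custom per-parameter processing: forward the change to the
        // audio thread (stubbed out here).
        SendGainToStream(aNewValue);
      }) {}

 private:
  void SendGainToStream(float /*aValue*/) { /* post to the graph thread */ }
  AudioParamSketch mGain;
};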
This is a mega-patch that was too hard to disentangle. Here's what it does:
-- Create infrastructure around AudioNode::UpdateOutputEnded to detect
when a node can no longer produce any output. When that becomes true,
disconnect it from the AudioNode graph.
-- Have AudioNode implement JSBindingFinalized to use as input in
UpdateOutputEnded.
-- Give every AudioNode a MediaStream, and give every connection
a MediaInputPort.
-- Actually play the audio that reaches the AudioContext's destination node.
-- Force AudioContext to use the audio sample rate defined by MediaStreamGraph.
-- Fix AudioBufferSourceNode's start and stop methods so that they can throw
and take default 'when' parameters.
-- Create an AudioNodeStream for AudioBufferSourceNode and give it an
AudioBufferSourceNodeEngine that does what's needed. Set parameters for this
engine in the start() and stop() methods (sketched below).
-- Create AudioBuffer::GetThreadSharedChannelsForRate, which is responsible
for stealing the contents of any JS array buffers, and bundling them up
into a thread-shared read-only buffer object which can be used as
part of an AudioChunk. This method will also be responsible for
resampling and caching as necessary.
--HG--
rename : content/media/MediaStreamGraph.cpp => content/media/MediaStreamGraphImpl.h
extra : rebase_source : 9fa0ec0efa304acd6513e427103d6339c78efa53
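To make the start()/stop() point concrete, here is a simplified sketch under
assumed names: exceptions stand in for the WebIDL ErrorResult mechanism, and
a plain struct stands in for the parameters that the real code sends to the
AudioBufferSourceNodeEngine over the AudioNodeStream.

#include <stdexcept>

// Stand-in for the parameters forwarded to the engine on the graph thread.
struct EngineParamsSketch {
  double mStartTime = 0.0;  // 'when' for start()
  double mStopTime = -1.0;  // 'when' for stop(); -1 means "never"
};

class AudioBufferSourceNodeSketch {
 public:
  // 'when' defaults to 0.0, matching the optional WebIDL argument.
  void Start(double aWhen = 0.0) {
    if (mStarted) {
      // Calling start() twice is an error per the Web Audio spec.
      throw std::runtime_error("start() called more than once");
    }
    mStarted = true;
    // In the real code this becomes a message posted to the AudioNodeStream,
    // which hands the value to the engine on the graph thread.
    mEngineParams.mStartTime = aWhen;
  }

  void Stop(double aWhen = 0.0) {
    if (!mStarted) {
      throw std::runtime_error("stop() called before start()");
    }
    mEngineParams.mStopTime = aWhen;
  }

 private:
  bool mStarted = false;
  EngineParamsSketch mEngineParams;  // stands in for the engine's state
};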