The zeroth component is not removed from the BufferComplexMultiply() call so
as not to disrupt alignment.
The mOutputBuffer[halfSize].i assertions are removed because the code no
longer uses these components, and so their values are irrelevant.
This increases the maximum PeriodicWave size to 8192, and adds an
optimization to use 8192 elements only when we receive more than 4096
components. In accordance with the spec, a maximum number of components is
no longer enforced.
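A minimal sketch of that size selection (function and constant names here
are illustrative, not the actual PeriodicWave code):

#include <cstdint>

uint32_t ChoosePeriodicWaveSize(uint32_t aNumberOfComponents)
{
  // Only pay for the 8192-element tables when the extra components can
  // actually contribute; otherwise stay at 4096.
  const uint32_t kSmallSize = 4096;
  const uint32_t kLargeSize = 8192;
  return aNumberOfComponents > kSmallSize ? kLargeSize : kSmallSize;
}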
Fake audio tracks used to rely on an nsITimer firing every 10ms, appending
10ms of data on each fire.
This didn't work in practice, as the observed intervals were larger than
10ms most of the time, e.g.:
> Last Notify() 12,042ms ago
> Last Notify() 11,327ms ago
> Last Notify() 11,097ms ago
> Last Notify() 11,601ms ago
> Last Notify() 11,694ms ago
> Last Notify() 11,593ms ago
> Last Notify() 11,698ms ago
> Last Notify() 12,492ms ago
This patch first appends a slight buffer to the fake audio track, to gain
some resilience against underruns when the timer overshoots its interval
like this. It also measures the actual time between two Notify() calls, so
that exactly the number of consumed audio samples can be appended back.
Should we be under such heavy CPU load that the MediaManager thread is
starved out, we print a warning and avoid appending an excessive amount of
data by appending only the size of the initial buffer.
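A rough sketch of the resulting Notify() logic, assuming hypothetical
names and a std::chrono clock (the actual MediaEngineDefault code
differs):

#include <chrono>
#include <cstdint>
#include <cstdio>

struct FakeAudioTrack {
  std::chrono::steady_clock::time_point mLastNotify;
  uint32_t mSampleRate = 44100;
  uint32_t mInitialBufferSamples = 441;  // the "slight buffer" appended up front

  void Notify() {
    auto now = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = now - mLastNotify;
    mLastNotify = now;

    // Append what was actually consumed since the last fire, not a fixed
    // 10ms worth, so a late timer does not cause an underrun.
    uint32_t samples = uint32_t(elapsed.count() * mSampleRate);

    // If the thread was starved, warn and cap the append at the size of
    // the initial buffer rather than appending an excessive amount.
    if (samples > mInitialBufferSamples) {
      fprintf(stderr, "Fake audio timer starved; clamping append\n");
      samples = mInitialBufferSamples;
    }
    AppendSamples(samples);
  }

  void AppendSamples(uint32_t aSamples) { /* append fake audio data */ }
};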
TL;DR requesting a fake stream always gives you a fake stream. No magic.
The gUMConstraint `fake: true` takes precedence: if it is set, we always
use MediaEngineDefault, and the state of `faketracks` is passed on to it.
If it is not set, but one or both of the audio/video loopback devices are
set, device enumeration will filter down to only those devices.
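Sketched as a hypothetical helper (names are placeholders, not the actual
MediaManager code):

#include <string>

struct Choice {
  bool useMediaEngineDefault;  // the fake backend
  bool fakeTracks;             // only meaningful for the fake backend
  std::string audioDeviceFilter, videoDeviceFilter;
};

Choice ChooseEngine(bool aFake, bool aFakeTracks,
                    const std::string& aAudioLoopbackDev,
                    const std::string& aVideoLoopbackDev)
{
  if (aFake) {
    // `fake: true` wins: always MediaEngineDefault, with faketracks
    // passed on to it.
    return { true, aFakeTracks, "", "" };
  }
  // No fake constraint: use the real backend; any configured loopback
  // devices become a filter for device enumeration.
  return { false, false, aAudioLoopbackDev, aVideoLoopbackDev };
}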
This simplifies updating to a specific revision instead of always
defaulting to master, e.g.:
npm install
node update-webvtt.js -d ~/vtt.js -r v0.12.1
Note that the script will clobber the given repo's HEAD, checking out the
given rev (or master) instead.
YouTube and WebVR have been experimenting with 8k video for
immersive applications, where you need more than 4k resolution
even on a mid-resolution display because you're not looking
at the whole scene simultaneously.
We were rejecting video frames larger than 4000x3000,
or 16k in any one dimension, to limit resource exhaustion
attacks. Bump this to accept 8k video now that there's
a demand for it.
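An illustrative sketch of the kind of sanity check being relaxed; the new
constants are assumptions (8k UHD is 7680x4320), not the exact values
chosen in the patch:

#include <cstdint>

static bool IsAcceptableVideoFrameSize(int32_t aWidth, int32_t aHeight)
{
  const int32_t kMaxDimension = 16 * 1024;   // unchanged per-axis cap
  const int64_t kMaxArea = 7680LL * 4320LL;  // raised from 4000 * 3000
  return aWidth > 0 && aHeight > 0 &&
         aWidth <= kMaxDimension && aHeight <= kMaxDimension &&
         int64_t(aWidth) * aHeight <= kMaxArea;
}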
Now the most FFT work that can happen during one realtime processing block
is when one 2048-size stage and the 256-size stage perform their FFTs at
the same phase-offset. Before FFT timing was controlled by the initial
input buffer offset (bug 1221831), two 1024-size stages as well as the
512- and 256-size stages performed FFTs at one offset. Thus the maximum
work in one block is reduced by a ratio of about 11 to 9.
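For concreteness, assuming FFT cost roughly proportional to transform
size, the per-block maxima implied by the stage sizes above are:

before: 1024 + 1024 + 512 + 256 = 2816
after:  2048 + 256              = 2304

and 2816/2304 is exactly 11/9.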
Measurements also indicate a similar reduction in total rendering thread
CPU usage.
Previously the alignment of the eleven 1024-size realtime stages was such
that, in three consecutive blocks, two 1024-size stages would perform
their FFTs. Now the 2048-size stages are aligned so that none of them
perform their FFTs in consecutive blocks, as with the main thread.
The comment was incomplete: ReverbConvolverStage also supports multiples
of the FFT half-size, but only values up to WEBAUDIO_BLOCK_SIZE.
This makes PlanarYCbCrImage abstract and moves the recycling functionality
into RecyclingPlanarYCbCrImage. This decreases the size of
SharedPlanarYCbCrImage and makes it possible for us to do part 3 of bug
1216644.
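The shape of the refactor, much simplified (member names are
illustrative):

class PlanarYCbCrImage /* : public Image */ {
public:
  struct Data { /* plane pointers, sizes, ... */ };
  // Buffer allocation and copying are now left to subclasses.
  virtual bool CopyData(const Data& aData) = 0;
  virtual ~PlanarYCbCrImage() {}
};

class RecyclingPlanarYCbCrImage : public PlanarYCbCrImage {
public:
  bool CopyData(const Data& aData) override;
private:
  // The buffer-recycling members live here now, out of the base class,
  // which is what shrinks SharedPlanarYCbCrImage.
};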
This is in the mochitest suite so that Android and B2G tests can run it, but
designed so that it can be moved to web-platform-tests when they run on all
platforms.
(Doing the extra ProcessBlock for the sake of downstream nodes was unnecessary
even before the inactive check was delayed until after their processing, because
downstream nodes would have only had null chunks to process anyway.)
Since the changes for bug 1217625, the node and its downstream nodes won't
be made inactive until after the downstream nodes have done their
processing, so there is no need to wait for the first silent output block.
This essentially reverts 5c607f3f39d55544838f3111ede9e11a00d3c25e.
This will allow streams to be suspended when they are discovered inactive.
Suspending is not possible while iterating over stream lists for processing.
Delaying the transition to the inactive state may cost a couple of extra
processing iterations, but it can save on the number of messages that need
to be created, compared to traversing downstream nodes during stream
processing.
The way we pass in AudioDataValue arrays into AudioData is non-uniform:
sometimes we have nsAutoArrayPtrs, sometimes we don't, and it's not
immediately obvious from the function signature of the constructor that
we're actually taking ownership of this array. Let's fix that by using
UniquePtr<AudioDataValue[]> smart pointers to hold the data prior to
creating AudioData values, and for passing in to AudioData's
constructor. Using standard-er C++ things instead of our homegrown ones
is a good thing.
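A simplified sketch of the pattern (the real AudioData constructor takes
more parameters):

#include "mozilla/UniquePtr.h"
#include <cstdint>
#include <utility>

using AudioDataValue = float;  // float or int16_t depending on build config

class AudioData {
public:
  // Taking the UniquePtr by value makes the ownership transfer explicit
  // at every call site.
  AudioData(mozilla::UniquePtr<AudioDataValue[]> aData, uint32_t aFrames)
    : mAudioData(std::move(aData)), mFrames(aFrames) {}

private:
  mozilla::UniquePtr<AudioDataValue[]> mAudioData;
  uint32_t mFrames;
};

// Call sites then read:
//   auto data = mozilla::MakeUnique<AudioDataValue[]>(frames * channels);
//   ... fill data ...
//   new AudioData(std::move(data), frames);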
If AsyncAllocateVideoMediaCodec() runs fast enough, mNativeWindow will be
created only after codecReserved() has been called, and we'll configure
the decoder without a native window.
Having HASH_NODE_ID_WITH_DEVICE_ID #defined is enough for GMPLoader to start
using the Mac version of GetRawMachineId.
Note: The stack (that may contain information gathered during GetRawMachineId)
is not erased, so it could theoretically be possible for a compromised GMP to
find out some sensitive user information. Another bug will deal with this.
Necessary routines were extracted from other files in:
6c3bf03265/
(otherwise a lot of code would have had to be imported, most of which would be
unused anyway.)
These extracted routines were reduced to only the actually-used code.
base::StringPrintf was only used to stringify a few hex values; this
particular use was easier to reimplement as a small loop than to extract
the whole printf suite.
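A minimal stand-in for that loop (illustrative, not the exact code):

#include <cstdint>
#include <string>

// Append aValue as lowercase hex to aOut; replaces the one
// base::StringPrintf("%x", ...)-style use that was removed.
static void AppendHex(uint32_t aValue, std::string& aOut)
{
  static const char kDigits[] = "0123456789abcdef";
  char buf[8];
  int i = 8;
  do {
    buf[--i] = kDigits[aValue & 0xF];
    aValue >>= 4;
  } while (aValue);
  aOut.append(buf + i, 8 - i);
}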
base::UTF8toUTF16 is not needed, as we just return bytes. So internally a
std::string (containing UTF-8) is used, and its contents are transferred
to the output buffer.
GetRawMachineId was returning its generated data through a 'string16', which on
Windows was conveniently equivalent to a std::wstring.
However on Mac, wstring uses 32-bit characters, so in order to comply with the
string16 interface, a lot of non-trivial code would have to be imported and
vetted.
Also, in the end GMPLoader::Load passes this string16 to SHA256_Update()
as a sequence of bytes, so the actual type of the data is lost anyway!
So to simplify this work, GetRawMachineId will now return its data through a
vector of bytes, and the platform-dependent implementations may use whatever
data type they want internally.
The Windows GetRawMachineId actually returns the same data in this vector, so
it stays compatible with the previous code.
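The shape of the change, roughly (exact signatures assumed from the
description above):

#include <cstdint>
#include <vector>

// Before: platform code had to produce a string16 (awkward on Mac, where
// wchar_t is 32-bit):
//   bool GetRawMachineId(string16* aOutId);
//
// After: just raw bytes, which the caller feeds straight into
// SHA256_Update(); each platform picks its own internal types.
bool GetRawMachineId(std::vector<uint8_t>& aOutBytes);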
The logic was extracted from LibAV's cmdutils.c. FFmpeg provides an API
for this (av_frame_get_best_effort_timestamp()), but unfortunately LibAV
doesn't. So copy the logic instead, in order to keep compatibility with
the two forks.
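The heuristic, roughly, as a self-contained paraphrase of the cmdutils.c
logic (not the verbatim code):

#include <cstdint>

const int64_t kNoPts = INT64_MIN;  // stand-in for AV_NOPTS_VALUE

// Tracks which timestamp source has been the more monotonic so far.
struct PtsCorrection {
  int64_t mLastPts = kNoPts, mLastDts = kNoPts;
  int mFaultyPts = 0, mFaultyDts = 0;
};

int64_t GuessCorrectPts(PtsCorrection& aCtx, int64_t aReorderedPts,
                        int64_t aDts)
{
  if (aDts != kNoPts) {
    aCtx.mFaultyDts += aDts <= aCtx.mLastDts;
    aCtx.mLastDts = aDts;
  }
  if (aReorderedPts != kNoPts) {
    aCtx.mFaultyPts += aReorderedPts <= aCtx.mLastPts;
    aCtx.mLastPts = aReorderedPts;
  }
  // Prefer the container's reordered pts unless dts has proven to be the
  // more reliable (monotonic) source.
  if ((aCtx.mFaultyPts <= aCtx.mFaultyDts || aDts == kNoPts) &&
      aReorderedPts != kNoPts) {
    return aReorderedPts;
  }
  return aDts;
}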
This is the primary reason why we got no pts returned (pts were set to 0)
when using early versions of LibAV. Apparently you are expected to set the
pts when allocating the buffer of a frame. This is undocumented, but both
LibAV and FFmpeg do so internally, so do the same.
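In a custom get_buffer-style callback, that looks something like this
(AllocateYUV420PVideoBuffer and GetCurrentPacketPts are hypothetical
placeholders for the decoder's own helpers):

extern "C" {
#include <libavcodec/avcodec.h>
}

static int AllocateYUV420PVideoBuffer(AVCodecContext* aContext, AVFrame* aFrame);
static int64_t GetCurrentPacketPts(AVCodecContext* aContext);

static int AllocateBufferCb(AVCodecContext* aContext, AVFrame* aFrame)
{
  int rv = AllocateYUV420PVideoBuffer(aContext, aFrame);
  if (rv == 0) {
    // Stamp the pts at allocation time, as the default LibAV/FFmpeg
    // allocators do internally; without this, early LibAV hands back
    // frames with pts == 0.
    aFrame->pts = GetCurrentPacketPts(aContext);
  }
  return rv;
}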