* Allow more than one input bitmap at a time.
* Allow Compositor to take in and set up an Executor. Otherwise,
Compositor resources may be created on one thread and accessed on another.
* Add a Compositor TestRunner to allow more reuse of test code.
* Update VideoFrameProcessingTaskExecutor to use a new onError listener, so
that it's more reusable in non-DVFP contexts, like for Compositor.
PiperOrigin-RevId: 547206053
For sync-sample-only formats, we have an optimization to drop all buffers
with timestamps less than the start time when writing them to the queue.
For the same formats, if we set a new start time (=seek), we only seek
to the buffer at or before the start time. This means the first sample
in the queue is different depending on whether we seek to a start time
or set a start time and then write samples. This is inconsistent and
effectively means the first sample depends on a race condition between
the Loader thread (writing samples) and the playback thread (attempting
an initial seek in the already loaded samples).
The effect of this inconsistency is that we have to decode one sample
we don't need (and could have skipped) and that some tests become flaky
if the test setup runs into the mentioned race condition.
The fix is to change the SampleQueue seek method to also allow seeking to
a sample at or after the specified time, aligning the behavior with the
case where we write the same samples to an empty queue.
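A minimal sketch of the intended seek behavior for sync-sample-only formats
(illustrative only, not the actual SampleQueue code):

```java
/**
 * Illustrative sketch: for sync-sample-only formats, seek to the first queued
 * sample at or after the requested time, mirroring the write-side optimization
 * that drops samples before the start time.
 */
private static int findSeekIndex(long[] queuedSampleTimesUs, long seekTimeUs) {
  for (int i = 0; i < queuedSampleTimesUs.length; i++) {
    if (queuedSampleTimesUs[i] >= seekTimeUs) {
      return i;
    }
  }
  return queuedSampleTimesUs.length; // Seek time is beyond all queued samples.
}
```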
The change also clarifies the Javadoc of
MimeTypes.allSamplesAreSyncSamples to note that this should really only
return true if the samples have no "duration" that matters. Otherwise,
we could reasonably return true for most subtitle formats although it
would break subtitle display because we'd remove samples that start
before the seek time.
PiperOrigin-RevId: 547189941
Also parse the PCM encoding for lpcm in MP4, and update `MatroskaExtractor`
similarly.
Tested manually in the demo app using an MP4 with 24-bit big endian audio.
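A hypothetical sketch of the mapping, not the actual extractor code; constant
names such as `C.ENCODING_PCM_24BIT_BIG_ENDIAN` are assumptions here:

```java
// Sketch only: map the lpcm sample entry's bit depth and endianness flag to a
// PCM encoding on the output Format. The *_BIG_ENDIAN constants are assumed.
private static @C.PcmEncoding int getPcmEncoding(int bitDepth, boolean isBigEndian) {
  switch (bitDepth) {
    case 16:
      return isBigEndian ? C.ENCODING_PCM_16BIT_BIG_ENDIAN : C.ENCODING_PCM_16BIT;
    case 24:
      return isBigEndian ? C.ENCODING_PCM_24BIT_BIG_ENDIAN : C.ENCODING_PCM_24BIT;
    case 32:
      return isBigEndian ? C.ENCODING_PCM_32BIT_BIG_ENDIAN : C.ENCODING_PCM_32BIT;
    default:
      return C.ENCODING_INVALID;
  }
}
```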
PiperOrigin-RevId: 546878505
Add documentation for threading requirements at the class level (in
addition to existing documentation on the methods) to improve
discoverability. Also fix a couple of nits in the javadoc (US English
spelling, avoiding passive voice) and in `OnInputFrameProcessedListener`.
PiperOrigin-RevId: 546303732
Otherwise, errors like `GL_INVALID_FRAMEBUFFER_OPERATION` will only
report a `null` error string, instead of the proper error string.
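For illustration (assumed helper, using the standard `android.opengl.GLU` and
`GLES20` APIs): `GLU.gluErrorString` only knows the GLES 1.x error codes, so
newer codes need explicit handling to get a readable name.

```java
// Illustrative sketch: GLU.gluErrorString returns null for codes it doesn't
// know, such as GL_INVALID_FRAMEBUFFER_OPERATION (0x0506).
private static String glErrorName(int error) {
  if (error == GLES20.GL_INVALID_FRAMEBUFFER_OPERATION) {
    return "GL_INVALID_FRAMEBUFFER_OPERATION";
  }
  String name = GLU.gluErrorString(error);
  return name != null ? name : "error code: 0x" + Integer.toHexString(error);
}
```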
PiperOrigin-RevId: 546273328
The GL error constants are defined as positive hex values. Without this CL, we
must first convert the logged decimal errors to hex before figuring out what
went wrong.
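A tiny illustrative helper for the idea, so logged codes line up with the hex
constants in the GL headers:

```java
// Illustrative only: 1286 in decimal is 0x506, i.e. GL_INVALID_FRAMEBUFFER_OPERATION.
private static String glErrorToHexString(int errorCode) {
  return "0x" + Integer.toHexString(errorCode);
}
```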
PiperOrigin-RevId: 544695961
We introduced truncation to 32 chars in <unknown commit>
and included indent and offset in the calculation. I think this is
technically correct, but it causes problems with the content in
Issue: google/ExoPlayer#11019, and it seems fine to truncate only the actual
cue text (i.e. ignore offset and indent).
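In other words (illustrative sketch, not the actual decoder code), the length
check now applies to the cue text alone:

```java
// Illustrative only: limit the visible cue text to 32 characters, without
// counting indent or tab offset towards the limit.
private static String truncateCueText(String cueText) {
  return cueText.length() <= 32 ? cueText : cueText.substring(0, 32);
}
```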
#minor-release
PiperOrigin-RevId: 544677965
On a MediaItem change, the input Format (and the Effects to apply) may be
different. Therefore the AudioProcessingPipeline must be reconfigured
to determine what processing is active and what the AudioFormat of the
output data is. If the output AudioFormat differs, additional
AudioProcessor instances must be used to ensure the encoder will still
be able to accept the audio buffers.
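A rough sketch of the reconfiguration step, assuming an
`AudioProcessingPipeline.configure` call that returns the pipeline's output
`AudioFormat`; the method and parameter names below are hypothetical:

```java
// Hypothetical sketch, not the actual transformer code.
private static void reconfigureForNewMediaItem(
    AudioProcessingPipeline pipeline,
    AudioProcessor.AudioFormat newInputAudioFormat,
    AudioProcessor.AudioFormat encoderInputAudioFormat)
    throws AudioProcessor.UnhandledAudioFormatException {
  AudioProcessor.AudioFormat pipelineOutputFormat = pipeline.configure(newInputAudioFormat);
  pipeline.flush();
  if (!pipelineOutputFormat.equals(encoderInputAudioFormat)) {
    // Additional AudioProcessor instances (e.g. resampling or channel mixing)
    // would be inserted here so that the buffers handed to the encoder still
    // match encoderInputAudioFormat.
  }
}
```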
PiperOrigin-RevId: 544338451
Providing the sync token in the API lets the client decide which waiting method to use depending on the use case, allowing them to optimize where possible.
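For example (illustrative sketch using standard GLES 3.0 calls; the API surface
that delivers the sync object is not shown), a client holding the sync object
can pick a GPU-side wait or a blocking client-side wait:

```java
// Illustrative sketch of the two waiting strategies a client can choose from.
private static void awaitFrameProcessed(long syncObject, boolean blockCallingThread) {
  if (blockCallingThread) {
    // Client-side wait: blocks this thread until the fence signals or 5s elapse.
    GLES30.glClientWaitSync(
        syncObject, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT, /* timeout= */ 5_000_000_000L);
  } else {
    // GPU-side wait: later GL commands on this context wait for the fence, but
    // the calling thread is not blocked.
    GLES30.glWaitSync(syncObject, /* flags= */ 0, GLES30.GL_TIMEOUT_IGNORED);
  }
}
```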
PiperOrigin-RevId: 543997311
MP4 edit lists sometimes ask to start playback between two samples.
If this happens, we currently change the timestamp of the first
sample to zero to trim it (e.g. to display the first frame for a
slightly shorter period of time). However, we can't do this to audio
samples, as they have an inherent duration and trimming them this
way is not possible.
#minor-release
PiperOrigin-RevId: 543420218
By passing this class where it's needed, implementations don't need to store it
in a field (reducing boilerplate) and it's clearer that it can't be unset when
needed.
PiperOrigin-RevId: 542823522
The effects pipeline must receive images in the sRGB colorspace due to the color transfers applied in the shaders. Currently, the burden of making sure images are in the right colorspace falls on apps. This CL ensures that this is no longer the case.
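Previously, an app would have had to force sRGB at decode time itself, for
example with the standard Android `BitmapFactory` API (API 26+); this is just
one way to do it:

```java
// One way an app could force sRGB itself (API 26+). With this CL the library
// takes care of the conversion.
private static Bitmap decodeAsSrgb(String imagePath) {
  BitmapFactory.Options options = new BitmapFactory.Options();
  options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
  return BitmapFactory.decodeFile(imagePath, options);
}
```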
PiperOrigin-RevId: 542323613
To more accurately describe what they do, especially as Compositor will
start to use more contexts or threads, where it's important to know what
needs to be reset/recreated/focused before which methods.
PiperOrigin-RevId: 541010135
*** Original commit ***
Rollback of 2a6f893fba
*** Original commit ***
Set video size to 0/0 when video render is disabled
In terms of MCVR wi...
***
PiperOrigin-RevId: 540525069
The existing NullableType was deprecated 5 years ago and causes
crashes in Kotlin apps because Kotlin doesn't recognize this annotation
as a nullable type annotation.
While we can't align on a single @Nullable annotation yet, we can at
least replace this one with JSR 305's @Nonnull(when = MAYBE), as it fulfills
all requirements, including full Kotlin compatibility. To avoid the
cumbersome name, we can redefine it as our own @NullableType
annotation. (We can't name it @Nullable, to avoid clashes with the main
@Nullable annotation from AndroidX.)
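A sketch of how such a nickname annotation can be defined with JSR 305 (the
exact retention/target details of the real annotation may differ):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.annotation.Nonnull;
import javax.annotation.meta.TypeQualifierNickname;
import javax.annotation.meta.When;

/** Sketch: marks type usages as possibly null, in a way Kotlin understands. */
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE_USE)
@TypeQualifierNickname
@Nonnull(when = When.MAYBE)
public @interface NullableType {}
```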
Issue: google/ExoPlayer#6792
PiperOrigin-RevId: 540497469
*** Original commit ***
Add a timer to end a video stream prematurely in ExtTexMgr
***
This has been submitting for more than 1.5 hours. "This presubmit is running slowly because you have been throttled by Build Queue due to using too much of your Product Area's quota."
Adding NO_SQ as this is a pure rollback.
PiperOrigin-RevId: 539135970
Add `HlsMediaSource.Factory.setTimestampAdjusterInitializationTimeoutMs(long)` to set the timeout for the loading thread to wait for the `TimestampAdjuster` to initialize. If the initialization doesn't complete before the timeout, a `PlaybackException` is thrown to avoid the playback stalling endlessly. The timeout is set to zero by default.
This can avoid HLS playback stalling endlessly when the manifest has missing discontinuities. According to the HLS spec, all variants and renditions must have discontinuities at the same points in time. If not, the one with discontinuities will have a new `TimestampAdjuster` that isn't shared by the others. When the loading thread of that variant waits for the other threads to initialize the timestamp, playback will stall; hitting the new timeout surfaces this as an error instead.
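Usage sketch (the setter name comes from this change; the 5-second value and
variable names are just examples):

```java
// Example only: fail with a PlaybackException if the TimestampAdjuster isn't
// initialized within 5 seconds, instead of stalling forever.
HlsMediaSource mediaSource =
    new HlsMediaSource.Factory(dataSourceFactory)
        .setTimestampAdjusterInitializationTimeoutMs(5_000)
        .createMediaSource(MediaItem.fromUri(hlsUri));
```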
Issue: androidx/media#323
#minor-release
PiperOrigin-RevId: 539108886
This CL introduces the new public API setSuppressPlaybackWhenUnsuitableOutput which, if set to true, will suppress a requested playback if it would happen on an unsuitable audio output (e.g. the built-in speaker on a Wear OS device).
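Usage sketch (the method name is taken from this CL; that it lives on
`ExoPlayer.Builder` is an assumption here):

```java
// Example only: opt in to suppressing playback on unsuitable audio outputs.
ExoPlayer player =
    new ExoPlayer.Builder(context)
        .setSuppressPlaybackWhenUnsuitableOutput(true)
        .build();
```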
PiperOrigin-RevId: 538867212
The sample timestamp carried by the emsg box can have a significant delta compared to the earliest presentation timestamp of the segment. Using this timestamp to initialize the timestamp offset in TimestampAdjuster will cause the media sample to have a wrongly adjusted timestamp. So we should defer adjusting the metadata sample timestamp until the TimestampAdjuster is initialized with a real media sample.
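A hypothetical sketch of the deferral idea (the names and structure here are
illustrative, not the actual extractor code, and it assumes
`TimestampAdjuster#getTimestampOffsetUs` returns `C.TIME_UNSET` until the
adjuster has been initialized):

```java
// Hypothetical sketch: buffer emsg sample times until the TimestampAdjuster has
// been initialized by a real media sample, then adjust and output them.
private final List<Long> pendingEmsgTimesUs = new ArrayList<>();

private void maybeOutputEmsg(long emsgTimeUs) {
  if (timestampAdjuster.getTimestampOffsetUs() == C.TIME_UNSET) {
    pendingEmsgTimesUs.add(emsgTimeUs); // Adjuster not initialized yet; defer.
    return;
  }
  outputEmsgSample(timestampAdjuster.adjustSampleTimestamp(emsgTimeUs));
}

private void onMediaSampleTimestampInitialized() {
  for (long emsgTimeUs : pendingEmsgTimesUs) {
    outputEmsgSample(timestampAdjuster.adjustSampleTimestamp(emsgTimeUs));
  }
  pendingEmsgTimesUs.clear();
}
```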
PiperOrigin-RevId: 538172841