hasPendingData() should signal whether the final audio sink still
holds unconsumed frames so that the renderer can accurately detect a
buffering state.
Before this change, a media item transition could incorrectly trigger
a buffering state in the internal sequence players and cause
CompositionPlayer to interrupt playback. See Issue: androidx/media#2228.
PiperOrigin-RevId: 756252645
Calls to release() can lead to player errors, so move release()
into the test body instead of tearDown().
The resettable count down latch was error-prone:
* Calling reset() swaps out the CountDownLatch and anything waiting on
the previous latch times out
* Calling unblock() after reset(INT_MAX) hangs the test
Replace the resettable count down latch with a normal CountDownLatch.
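The first pitfall can be sketched as follows (a minimal, hypothetical reconstruction of the resettable latch; `reset()` here is an assumption, not the actual test utility):

```java
import java.util.concurrent.CountDownLatch;

public class LatchPitfall {
  // Hypothetical resettable latch: reset() swaps out the underlying latch.
  static CountDownLatch latch = new CountDownLatch(1);

  static void reset(int count) {
    latch = new CountDownLatch(count);
  }

  public static void main(String[] args) {
    // A waiter captured a reference to the original latch.
    CountDownLatch observed = latch;
    reset(1);          // swaps in a fresh latch
    latch.countDown(); // only unblocks the new latch
    // The original latch never reaches zero, so the waiter would time out.
    System.out.println(observed.getCount()); // prints 1
  }
}
```

A plain CountDownLatch avoids this class of bug because every waiter and every countDown() act on the same object.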
PiperOrigin-RevId: 756249512
Prewarming relies on two BufferingVideoSinks controlling the same
underlying video sink. If a BufferingVideoSink queues a flush operation
on behalf of a prewarming renderer, that flush is executed when we
transition between renderers, flushing ongoing playback and causing
unwanted stuttering.
Flushing the video sink causes a chain reaction: the video renderer
becomes not ready, the player enters a BUFFERING state, and
CompositionPlayer disables all renderers and pauses the final audio sink
as a result.
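The chain starts with the queued flush, which can be illustrated with a minimal sketch (the queueing sink below is hypothetical and only models the "queued operation replayed on transition" behavior, not the real BufferingVideoSink API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueuedFlushSketch {
  // Operations queued while the buffering sink is not attached to the
  // underlying shared sink.
  static Queue<String> pendingOperations = new ArrayDeque<>();
  static int flushesOnSharedSink = 0;

  static void queueOrExecute(String op, boolean attached) {
    if (attached) {
      execute(op);
    } else {
      pendingOperations.add(op);
    }
  }

  static void execute(String op) {
    if (op.equals("flush")) {
      flushesOnSharedSink++; // a flush here interrupts ongoing playback
    }
  }

  public static void main(String[] args) {
    // A prewarming renderer queues a flush while detached.
    queueOrExecute("flush", /* attached= */ false);
    // On renderer transition, queued operations are replayed on the shared
    // sink, so the flush hits ongoing playback.
    while (!pendingOperations.isEmpty()) {
      execute(pendingOperations.poll());
    }
    System.out.println(flushesOnSharedSink); // prints 1
  }
}
```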
This is a partial fix for Issue: androidx/media#2228. The remaining part of
the fix is to implement AudioGraphInputAudioSink#hasPendingData() to
report an accurate value.
PiperOrigin-RevId: 756238558
Added documentation and release notes for using `imageDurationMs` to determine image asset loading in `DefaultAssetLoaderFactory`.
PiperOrigin-RevId: 756228166
Export can be ended twice if an exception is thrown before Transformer
is fully released.
Also add a lock to make sure the released variable doesn't become true
right after its value is checked.
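The check-and-set under one lock can be sketched like this (hypothetical names; the real Transformer code differs):

```java
public class ReleaseGuard {
  private final Object lock = new Object();
  private boolean ended;
  int endCount;

  // Checking and setting the flag inside one synchronized block ensures the
  // export cannot be ended twice, even if two threads race on the check.
  void endExport() {
    synchronized (lock) {
      if (ended) {
        return;
      }
      ended = true;
      endCount++;
    }
  }

  public static void main(String[] args) {
    ReleaseGuard guard = new ReleaseGuard();
    guard.endExport();
    guard.endExport(); // second call is a no-op
    System.out.println(guard.endCount); // prints 1
  }
}
```

Without the lock, the flag could flip to true between the check and the update, which is exactly the race described above.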
PiperOrigin-RevId: 756196139
This test previews a composition containing only a single audio asset
sequence. The audio asset is a 1s, stereo, locally available WAV file.
Catching an underrun in this test is a likely indication of something
being seriously wrong with the device's state or a performance
regression on the audio pipeline.
This test verifies the fix in 2e20d35c3d. Without that fix,
the newly added test fails because MediaCodecAudioRenderer attempts to
use dynamic scheduling with AudioGraphInputAudioSink
(which is unsupported) after EoS is signalled.
PiperOrigin-RevId: 753552825
Updated the createAssetLoader method to directly use `mediaItem.localConfiguration.imageDurationMs` to determine whether an ImageAssetLoader should be created for an image. This is specifically used to determine whether a motion photo should be treated as an image or as a video.
PiperOrigin-RevId: 753235616
The output of CompositionPlayer should be considered a single clip
(not a playlist), so we should only propagate the first stream change
event in that case.
PiperOrigin-RevId: 752661523
Previously when there were negative timestamps, the tkhd duration was incorrectly equal to the full track duration rather than the presentation duration of the edit list. From the [docs](https://developer.apple.com/documentation/quicktime-file-format/track_header_atom/duration) - "The value of this field is equal to the sum of the durations of all of the track’s edits".
PiperOrigin-RevId: 752655137
Before this CL, the buffer adjustment (which converts from
ExoPlayer time to VideoGraph time) was added to the frame timestamps
before feeding them to the VideoGraph, and then subtracted at the
VideoGraph output. The playback position and stream start position used
for rendering were in ExoPlayer time.
This doesn't work for multi-sequence playback, though, because the
adjustment might differ per player (after a seek, for example).
To solve this problem, this CL handles rendering in VideoGraph time
instead of ExoPlayer time. More concretely, the VideoGraph output
timestamps are unchanged, and the playback position and stream start
position are converted to VideoGraph time.
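In numbers (the adjustment values below are hypothetical):

```java
public class VideoGraphTimeSketch {
  public static void main(String[] args) {
    // Hypothetical buffer adjustments for two sequences; after a seek they
    // can differ, so no single output-side subtraction fits both.
    long adjustmentSeq0Us = 5_000;
    long adjustmentSeq1Us = 8_000;

    // New approach: leave VideoGraph output timestamps unchanged and convert
    // the playback position into VideoGraph time, per sequence.
    long playbackPositionUs = 100_000; // ExoPlayer time
    long positionSeq0Us = playbackPositionUs + adjustmentSeq0Us;
    long positionSeq1Us = playbackPositionUs + adjustmentSeq1Us;

    System.out.println(positionSeq0Us + " " + positionSeq1Us); // 105000 108000
  }
}
```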
PiperOrigin-RevId: 752260744
The code for setting the video output is almost the same in both places,
with one callsite supporting fewer types of video output. I think it's still
better to share the code, to minimize the margin for error later.
PiperOrigin-RevId: 751423005
If set on the requested VideoEncoderSettings, these
parameters are passed into the MediaFormat
used by the DefaultEncoderFactory to configure the
underlying codec.
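A sketch of the pass-through (illustrative only: the key name and settings field are assumptions, and a plain map stands in for android.media.MediaFormat so the snippet runs off-device):

```java
import java.util.HashMap;
import java.util.Map;

public class EncoderSettingsSketch {
  public static void main(String[] args) {
    // Hypothetical requested setting; null means "unset on the
    // VideoEncoderSettings".
    Integer requestedOperatingRate = 120;

    // Stand-in for the MediaFormat built by the encoder factory.
    Map<String, Object> mediaFormat = new HashMap<>();
    if (requestedOperatingRate != null) {
      // Only set the key when the caller asked for it.
      mediaFormat.put("operating-rate", requestedOperatingRate);
    }
    System.out.println(mediaFormat.get("operating-rate")); // prints 120
  }
}
```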
PiperOrigin-RevId: 751059914
Before, we started and stopped video rendering when the
renderers were started/stopped. This doesn't work for multi-video
sequences, though, because we shouldn't stop and restart rendering at
every MediaItem transition in any of the input sequences.
PiperOrigin-RevId: 750206410
Add Dolby Vision (with HEVC and AVC codecs) support to Mp4Muxer, following
the Dolby ISO media format standard. As initialization data is required to create the dovi box, the CSD is populated in BoxParser.
PiperOrigin-RevId: 749765993
`Util.SDK_INT` was introduced to be able to simulate any SDK version during tests.
This is possible by using Robolectric's `@Config(sdk)` annotation.
All usages of `Util.SDK_INT` have been replaced by `Build.VERSION.SDK_INT`.
This is a similar change to what was done in #2107.
Instead of calling it in the test setup, the change modifies the ShadowMediaCodecConfig#after() method to call EncoderUtil#clearCachedEncoders().
PiperOrigin-RevId: 748625111
MediaCodec decoders sometimes output frames in
the wrong order. Make our asserts more permissive to
reduce noise in tests.
PiperOrigin-RevId: 747414696
This is a step towards unifying ShadowMediaCodecConfig structure to accommodate having both ExoPlayer and Transcoding codecs configuration.
This includes:
* Adding the ability to configure encoders by calling `addEncoders(CodecInfo...)`
* A new factory method that takes specific encoder and decoder CodecInfos
* A new method `addCodecs(boolean, CodecConfig, CodecInfo...)` that configures codecs with the specified behavior by passing a `CodecConfig`
This CL also includes migrating Transformer tests to ShadowMediaCodecConfig.
PiperOrigin-RevId: 747390451
This is because the replay cache needs to be cleared after one MediaItem is fully played.
That is, if the last two frames are cached, we need to wait until they are both rendered before
receiving input from the next input stream, because the texture size might change. When the
texture size changes, we tear down all previously used textures, and this causes a state
mismatch in the shader program: the cache thinks a texture id is in use (because it's not
released), but the BaseGlShaderProgram texture pool thinks it's already free (as a result of the size change).
Also fixes an issue where, if a frame is replayed after EOS is signalled, the EOS
signal is lost because we flush the pipeline.
PiperOrigin-RevId: 745191032
Previously, a gap at the start of a sequence
was automatically filled with silent audio.
Now the forceAudioTrack flag must be set to
indicate that a gap at the start of a sequence
should be filled with silent audio.
PiperOrigin-RevId: 742625236
I could've added another test that seeks into the media before replaying, but I
don't think it's fundamentally different from the one added.
I wish I could add one that replays while playing, but it'd be hard to match
the frames perfectly.
I'll add more timestamp-based tests.
PiperOrigin-RevId: 742229436