Before, we were using a boolean to wait for the input stream to be
fully registered with the CompositionVideoFrameProcessor before
queueing an input texture. This should instead be done by checking
the return value of queueInputTexture().
Also fix a few small stylistic issues.
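A minimal sketch of the pattern, assuming a simplified processor interface whose queueInputTexture() returns false until the input stream is registered (the real CompositionVideoFrameProcessor API is more involved; the names here are illustrative only):

```java
// Hypothetical minimal interface standing in for the frame processor.
// queueInputTexture() returns false until the input stream is registered,
// so no separate "registered" boolean needs to be tracked by the caller.
interface TextureQueue {
  boolean queueInputTexture(int texId, long presentationTimeUs);
}

final class QueueUntilAccepted {
  // Returns the number of attempts made before the texture was accepted.
  // In real code, a rejected texture would be retried later rather than
  // in a busy loop.
  static int queueWithRetries(TextureQueue queue, int texId, long presentationTimeUs) {
    int attempts = 1;
    while (!queue.queueInputTexture(texId, presentationTimeUs)) {
      attempts++;
    }
    return attempts;
  }

  // Simple fake for demonstration: accepts the texture on the third attempt,
  // simulating an input stream that finishes registering after two rejections.
  static final class FakeQueue implements TextureQueue {
    private int calls;

    @Override
    public boolean queueInputTexture(int texId, long presentationTimeUs) {
      return ++calls >= 3;
    }
  }
}
```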
PiperOrigin-RevId: 755770940
We currently accept any difference in reported frame position
as an "advancing timestamp". However, we don't check whether the
resulting position is actually advancing, or was only slightly
corrected but is not yet advancing reliably.
This results in AV sync issues if the timestamp is used too early
while it's not actually advancing yet, and the potential correction
is only read 10 seconds later when we query the timestamp again.
The fix is to check whether the timestamp is actually advancing
as expected (the resulting position from the previous and the
current snapshot don't differ by more than 1ms). To avoid
permanent polling in 10ms intervals for devices that never report
a reliable timestamp, we introduce another timeout after which we
give up.
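As a rough sketch of the check described above (names are illustrative, not the actual position tracker API; the 1ms tolerance is from the description):

```java
// A timestamp is only treated as advancing reliably once the position
// derived from the previous snapshot and the position derived from the
// current snapshot agree within 1 ms, both extrapolated to the same time.
final class TimestampReliabilityCheck {
  private static final long MAX_POSITION_DRIFT_US = 1_000; // 1 ms

  static boolean isAdvancingReliably(long previousPositionUs, long currentPositionUs) {
    return Math.abs(currentPositionUs - previousPositionUs) <= MAX_POSITION_DRIFT_US;
  }

  // Polling at short intervals stops after a timeout, for devices that
  // never report a reliable timestamp.
  static boolean shouldKeepPolling(long elapsedSincePollingStartUs, long giveUpTimeoutUs) {
    return elapsedSincePollingStartUs < giveUpTimeoutUs;
  }
}
```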
PiperOrigin-RevId: 755373666
The position tracker has two different position sources,
getPlaybackHeadPosition and getTimestamp. Whenever we switch
between these sources, we smooth out the position differences
over a period of one second.
This has two problems:
1. The smoothing duration is 1 sec regardless of the actual
position difference. So for small differences, assuming
the new timestamp is more correct, we needlessly keep
the tracker with a position offset for longer. For large
differences, the smoothing may result in an extremely
large speedup (up to 5x in theory, for a max allowed diff
of 5 seconds smoothed over a 1 second real time period).
The solution to this issue is to adjust the smoothing
period to the actual difference by using a maximum
speedup/slowdown set to 10% at the moment. Smaller
differences are corrected faster and larger differences
are corrected in a slightly smoother way without speeding
up drastically. We still need an upper bound though (set
to a 1 second difference) where plainly jumping to the correct
position is likely a better user experience than a lengthy
smoothing period.
2. The smoothing is only applied when switching between position
sources. That means any position drift or jump coming from the
same source is always taken as it is without any smoothing. This
is problematic for the getTimestamp-based position in particular
as it is only sampled every 10 seconds.
The solution to this problem is to entirely remove the condition
that smoothing only happens when switching between position
sources. Instead we can always check whether the position drift
compared to the last known position is more than the maximum allowed
speedup/slowdown of 10% and if so, start applying smoothing.
This helps smooth out the position progress at the start of
playback and after resumption, when we switch between the position
sources and neither source is fully reliable yet. It also helps
with unexpected jumps in the getTimestamp-based position later
during playback.
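The adjusted smoothing can be sketched as follows (a simplified model with hypothetical names; the 10% rate limit and the 1 second jump threshold are taken from the description above):

```java
final class PositionSmoothingSketch {
  // Above this difference, jump directly instead of smoothing.
  private static final long MAX_JUMP_DIFFERENCE_US = 1_000_000; // 1 second
  // Maximum speedup/slowdown applied while correcting a drift.
  private static final long MAX_ADJUSTMENT_RATE_PERCENT = 10;

  // previousPositionUs: last smoothed position; newSourcePositionUs: position
  // reported by the (possibly new) source now; elapsedUs: real time elapsed
  // since the previous position was computed.
  static long smoothedPositionUs(
      long previousPositionUs, long newSourcePositionUs, long elapsedUs) {
    long expectedPositionUs = previousPositionUs + elapsedUs;
    long driftUs = newSourcePositionUs - expectedPositionUs;
    if (Math.abs(driftUs) > MAX_JUMP_DIFFERENCE_US) {
      // Jumping to the correct position is likely a better user experience
      // than a lengthy smoothing period.
      return newSourcePositionUs;
    }
    // Correct the drift, but never speed up or slow down by more than 10%,
    // so small differences resolve quickly and large ones resolve smoothly.
    long maxAdjustmentUs = elapsedUs * MAX_ADJUSTMENT_RATE_PERCENT / 100;
    long adjustmentUs = Math.max(-maxAdjustmentUs, Math.min(maxAdjustmentUs, driftUs));
    return expectedPositionUs + adjustmentUs;
  }
}
```

Note that because the rate cap applies to every update, not just source switches, the same code also absorbs unexpected jumps coming from a single source.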
PiperOrigin-RevId: 755348271
This moves the position estimation and plausibility checks inside
the class.
Mostly a no-op change, except that the plausibility checks now use
the estimated timestamp from getTimestamp and the playback head
position, to make them actually comparable. Before, we were comparing
the last snapshot states, which may not necessarily be close together.
Given the large error threshold of 5 seconds, this shouldn't make
a practical difference though, just avoids noise and confusion.
PiperOrigin-RevId: 754035464
Work around a bug where MediaCodec fails to adapt between
formats that have the same decoded picture resolution but
different crop.
Add a playlist of two MP4 files that reproduce the issue.
This CL implements the workaround for H.265 and MP4.
PiperOrigin-RevId: 753976872
This is already the case for the platform Queue's MediaDescription, but
we currently use the localConfiguration.uri for the platform
MediaMetadata field even though we intentionally strip localConfiguration
fields before sharing with other apps.
PiperOrigin-RevId: 753952733
This test previews a composition containing only a single audio asset
sequence. The audio asset is a 1s, stereo, locally available WAV file.
Catching an underrun in this test is a likely indication of something
being seriously wrong with the device's state or a performance
regression on the audio pipeline.
This test verifies the fix in 2e20d35c3d. Without this fix,
the newly added test fails because MediaCodecAudioRenderer attempts to
use dynamic scheduling with AudioGraphInputAudioSync
(which is unsupported) after EoS is signalled.
PiperOrigin-RevId: 753552825
Prior to this change, DefaultAudioSink (via AudioTrackPositionTracker)
would use best-effort logic to infer underruns in the underlying
AudioTrack. This logic would miss underrun events (e.g. newly added test
fails if run without any changes to AudioTrackPositionTracker).
This change should help more accurately detect regressions in the audio
pipeline.
PiperOrigin-RevId: 753550187
Updated the createAssetLoader method to directly use the `mediaItem.localConfiguration.imageDurationMs` to determine if an ImageAssetLoader should be created for an image. This is specifically used for determining whether a motion photo should be treated as an image or as a video.
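As a sketch of the decision (TIME_UNSET stands in for Media3's C.TIME_UNSET; the real createAssetLoader method takes a MediaItem and other arguments):

```java
final class AssetLoaderChoiceSketch {
  // Stand-in for Media3's C.TIME_UNSET sentinel value.
  static final long TIME_UNSET = Long.MIN_VALUE + 1;

  // A motion photo is treated as an image (and given an ImageAssetLoader)
  // only when an explicit image duration was set on the MediaItem's
  // localConfiguration; otherwise it is treated as a video.
  static boolean shouldUseImageAssetLoader(long imageDurationMs) {
    return imageDurationMs != TIME_UNSET;
  }
}
```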
PiperOrigin-RevId: 753235616
This change renames the existing `sample_with_ssa_subtitles.mkv` test
file (with `S_TEXT/ASS` codec ID) to `sample_with_ass_subtitles.mkv`
(so the file name matches the codec ID), and forks a new test file
called `sample_with_ssa_subtitles.mkv` with the `S_TEXT/SSA` codec ID.
`MatroskaExtractorTest` then asserts that both these files produce
identical dump files.
Issue: androidx/media#2384
PiperOrigin-RevId: 753164883
PresentationState creation and listener registration are not an atomic operation. This happens because the LaunchedEffect which starts the listen-to-Player-events coroutine can be pre-empted by other side effects, including ones that change something in the Player.
The rememberPresentationState function creates a PresentationState object and initialises it with the correct fresh values pulled out of the Player. The subscription to Player events, via registration of a Listener in PresentationState.observe(), is not immediate. It *returns* immediately, but the registration is scheduled to happen later, although within the same Handler message. Other LaunchedEffects could have been scheduled earlier and could run between the state creation and the listener subscription.
This is not a problem if no changes to the player happen, but if we miss the relevant player events, we might end up with a UI that is out of sync with reality. The way to fix this is to pull the latest values out of the Player on demand upon starting to listen to Player events.
PiperOrigin-RevId: 753129489
`PlayerSurface` is just an `AndroidView` composable wrapping `SurfaceView` and `TextureView`. It uses the `Player` passed to it to set the `SurfaceHolder`s from those `SurfaceView`/`TextureView` objects. However, it does not technically need the Player in order to be created - the result will just be an empty Surface.
Add an optimisation to PlayerSurfaceInternal that avoids preemptive clearing of the Surface on the old Player. This helps avoid a costly creation of a surface placeholder right before (potentially) assigning the new Surface to that player.
PiperOrigin-RevId: 752840145
The main part of adding resumption support is storing the
last media id and position whenever the mediaItem or
isPlaying state changes.
PiperOrigin-RevId: 752783723
The output of CompositionPlayer should be considered a single clip
(not a playlist), so we should only propagate the first stream change
event in that case.
PiperOrigin-RevId: 752661523
Previously when there were negative timestamps, the tkhd duration was incorrectly equal to the full track duration rather than the presentation duration of the edit list. From the [docs](https://developer.apple.com/documentation/quicktime-file-format/track_header_atom/duration) - "The value of this field is equal to the sum of the durations of all of the track’s edits".
PiperOrigin-RevId: 752655137
Previously a stream error from one playlist item was incorrectly
preserved when switching to the next one, resulting in playback hanging.
Issue: androidx/media#2328
PiperOrigin-RevId: 752624979
Before this CL, the buffer adjustment (which converts from
ExoPlayer time to VideoGraph time) was added to the frame timestamps
before feeding them to the VideoGraph, and then subtracted at the
VideoGraph output. The playback position and stream start position used
for rendering were in ExoPlayer time.
This doesn't work for multi-sequence though because the adjustment might
be different depending on the player (after a seek for example).
To solve this problem, this CL handles rendering in VideoGraph time
instead of ExoPlayer time. More concretely, the VideoGraph output
timestamps are unchanged, and the playback position and stream start
position are converted to VideoGraph time.
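The direction of the conversion can be sketched as follows (hypothetical names; the adjustment is the per-player buffer timestamp offset mentioned above):

```java
final class VideoGraphTimeSketch {
  // Before: frame timestamps were shifted into VideoGraph time on input and
  // shifted back on output, while rendering positions stayed in ExoPlayer time.
  // After: VideoGraph output timestamps are left unchanged, and the playback
  // position and stream start position used for rendering are converted with
  // this helper instead, so each player can carry its own adjustment.
  static long toVideoGraphTimeUs(long exoPlayerTimeUs, long bufferTimestampAdjustmentUs) {
    return exoPlayerTimeUs + bufferTimestampAdjustmentUs;
  }
}
```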
PiperOrigin-RevId: 752260744
The code for setting the video output is almost the same in both places,
with one callsite supporting fewer types of video output. I think it's still
better to share the code, to minimize the margin for error later.
PiperOrigin-RevId: 751423005