The content timeline field may be updated when the live timeline
is refreshed in the looper event preceding the runnable that is
posted to the player thread. Hence a new timeline may contain a
new period uid that is not present in the ad playback state map.
Using a final reference makes sure that the period and the ad playback
state match when asserted.
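A minimal sketch of the pattern, with illustrative names (`contentTimeline`, `adPlaybackStates`, `playerHandler` and `applyTimelineAndAdPlaybackStates` are placeholders, not the actual source):
```java
// Capture final references before posting, so the runnable works with the
// same timeline and ad playback state map even if the fields are updated by
// a preceding looper event.
final Timeline timeline = contentTimeline;
final ImmutableMap<Object, AdPlaybackState> adPlaybackStates = this.adPlaybackStates;
playerHandler.post(() -> applyTimelineAndAdPlaybackStates(timeline, adPlaybackStates));
```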
PiperOrigin-RevId: 518228165
This change makes sure that live ad periods that have already been played
are skipped when they would otherwise be added to the queue. To make this
work, the existing filter logic had to take into account the content
resume offset that live periods use.
PiperOrigin-RevId: 518068138
* Add `release` method to Renderer and AudioSink interfaces.
* Call the `release` method for renderers when the ExoPlayer is going to be released.
PiperOrigin-RevId: 517135677
While the playback thread is 'asleep' during audio offload playback, playbackInfo.positionUs is not constantly updated. During this time, getCurrentPosition should return an estimate based on the most recent value and the playback speed.
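A minimal sketch of the estimation, with illustrative parameter names rather than the actual ExoPlayerImpl code:
```java
// Extrapolate the position from the last reported value, the real time
// elapsed since it was reported, and the current playback speed.
static long estimatePositionMs(
    long lastReportedPositionMs,
    long lastReportTimeMs,
    long nowMs,
    float playbackSpeed) {
  long elapsedRealtimeMs = nowMs - lastReportTimeMs;
  return lastReportedPositionMs + (long) (elapsedRealtimeMs * playbackSpeed);
}
```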
PiperOrigin-RevId: 516550509
* Add an AudioDeviceCallbackApi23 class which extends AudioDeviceCallback.
* Add registerAudioDeviceCallback and unregisterAudioDeviceCallback methods for API 23 (see the sketch below).
* Modify the logic of the AudioCapabilitiesReceiver constructor, register and unregister methods.
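A minimal sketch of the platform API involved (the callback body is illustrative, not the actual receiver implementation):
```java
// On API 23+ the AudioManager can notify about connected/disconnected audio
// devices directly, instead of relying only on broadcasts.
AudioManager audioManager =
    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
AudioDeviceCallback callback =
    new AudioDeviceCallback() {
      @Override
      public void onAudioDevicesAdded(AudioDeviceInfo[] addedDevices) {
        // Re-evaluate audio capabilities.
      }

      @Override
      public void onAudioDevicesRemoved(AudioDeviceInfo[] removedDevices) {
        // Re-evaluate audio capabilities.
      }
    };
audioManager.registerAudioDeviceCallback(callback, /* handler= */ null);
// Later, when unregistering:
audioManager.unregisterAudioDeviceCallback(callback);
```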
PiperOrigin-RevId: 516183997
This change makes sure that the `AdPlaybackState` of any period can
contain an empty postroll placeholder.
The placeholder postroll should be represented in the `MediaPeriodId`
of a content period as `nextAdGroupIndex`, but should be ignored when
building the list of `MediaPeriodInfo` in the `MediaPeriodQueue`. This
is required to allow adding an ad group to the ad playback state of the
content period that is currently being played, instantly inserting an ad
period into the media period queue and immediately transitioning playback
to the new period.
This change ensures and tests that
- a live server-side inserted postroll placeholder can be inserted into
an `AdPlaybackState` in a well-defined and tested way (helper method;
see the sketch after this list),
- a postroll placeholder is NOT ignored when
`AdPlaybackState.getAdGroupIndexAfterPositionUs` is called (this
is required when evaluating the `nextAdGroupIndex`),
- a postroll placeholder is ignored when
`AdPlaybackState.getAdGroupIndexForPositionUs` is called (this is
required to avoid attempting to play the ad and is analogous to ignoring
the postroll placeholder in a single-period timeline),
- `MediaPeriod.getFollowingMediaPeriodInfo()` does not include a
`MediaPeriodInfo` for the placeholder postroll when building the
queue.
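A minimal sketch of the expected behavior (the helper name `withLivePostrollPlaceholderAppended` is an assumption based on the description above; values are illustrative):
```java
// Append an empty postroll placeholder for a live stream.
AdPlaybackState state =
    new AdPlaybackState(/* adsId= */ "adsId")
        .withLivePostrollPlaceholderAppended(); // Assumed helper method name.

// The placeholder is NOT ignored when looking for the next ad group after a
// position (used to evaluate nextAdGroupIndex)...
int nextAdGroupIndex =
    state.getAdGroupIndexAfterPositionUs(
        /* positionUs= */ 0, /* periodDurationUs= */ C.TIME_UNSET);
// ...but IS ignored when looking for an ad group to play at the position.
int adGroupIndexForPosition =
    state.getAdGroupIndexForPositionUs(
        /* positionUs= */ 0, /* periodDurationUs= */ C.TIME_UNSET);
// Expected: nextAdGroupIndex == 0, adGroupIndexForPosition == C.INDEX_UNSET.
```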
PiperOrigin-RevId: 515079136
We currently rely on the raw playback head position to check if any
data is pending in the AudioTrack (e.g. to know if the renderer is
still ready). We can use the value returned from getCurrentPositionUs
instead to align the "isReady" logic with the playback position logic.
This has the side effect that getPlaybackHeadPosition is called
less often when the position is obtained via getTimestamp.
PiperOrigin-RevId: 514747613
Once the value returned from AudioTimestampPoller advances, we
only need getPlaybackHeadPosition to sample sync params and
verify the returned timestamp. Both of these happen less often
and we can avoid calling getPlaybackHeadPosition if we don't
actually need it.
PiperOrigin-RevId: 512882170
Playback parameter signalling can be quite complex because
(a) the renderer clock often has a delay before it realizes
that it doesn't support a previously set speed and
(b) the speed set on the media clock sometimes intentionally
differs from the one surfaced to the user, e.g. during
live speed adjustment or when overriding ad playback
speed to 1.0f.
This change fixes two problems related to this signalling:
1. When resetting the media clock speed at a period transition,
we don't currently tell the renderers that this happened.
2. When a delayed speed change update from the media clock is
pending and the renderer for this media clock is disabled
before the change can be handled, the pending update becomes
stale but is still applied later and overrides any other valid
speed set in the meantime.
Both edge cases are also covered by extended or new player tests.
Issue: google/ExoPlayer#10882
#minor-release
PiperOrigin-RevId: 512658918
MediaCodecRenderer currently has two independent paths to trigger
events at stream changes:
1. Detection of the last output buffer of the old stream to trigger
onProcessedStreamChange and setting the new output stream offset.
2. Detection of the first input buffer of the new stream to trigger
onOutputFormatChanged.
Both events coincide for most media. However, there are two
problematic cases:
A. (1) happens after (2). This may happen if the declared media
duration is shorter than the actual last sample timestamp.
B. (2) is too late and there are output samples between (1) and (2).
This can happen if the new media outputs samples with a timestamp
less than the first input timestamp.
This can be made more robust by:
- Keeping a separate formatQueue for each stream to avoid case A.
- Force outputting the first format after a stream change to
avoid case B.
Issue: google/ExoPlayer#8594
#minor-release
PiperOrigin-RevId: 512586838
Some devices were reported to have incorrect PerformancePoint sets
that cause 60 fps playback to be marked as unsupported even though
it is supported.
Issue: google/ExoPlayer#10898
#minor-release
PiperOrigin-RevId: 512580395
The output info for a new stream is marked pending until the last
sample of the previous stream has been processed. However, this fails
if the previous stream has already been fully processed. We need to
detect this case explicitly to avoid signalling the output change one
sample too late.
#minor-release
PiperOrigin-RevId: 512572854
This test became flaky after ab7e84fb34 because some of the
unrealistic frame times ended up on the same release time.
Using realistic numbers avoids the flakiness.
PiperOrigin-RevId: 512566469
Protected system broadcasts should not specify the export flag.
Marking them as NOT_EXPORTED breaks sticky broadcasts in some
cases.
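A minimal sketch of the registration pattern (receiver and action are illustrative, not the exact call sites touched by this change):
```java
// ACTION_HEADSET_PLUG is a protected system broadcast that only the system
// can send, so the receiver is registered with the plain two-argument
// overload instead of specifying RECEIVER_NOT_EXPORTED, which broke sticky
// broadcast delivery in some cases.
IntentFilter filter = new IntentFilter(AudioManager.ACTION_HEADSET_PLUG);
@Nullable Intent stickyIntent = context.registerReceiver(headsetReceiver, filter);
```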
Issue: google/ExoPlayer#10970
#minor-release
PiperOrigin-RevId: 512020154
The current logic uses manual array operations to keep track of pending
changes. Modernize this code by using an ArrayDeque and a data class.
This also allows extending the output stream information in the future.
This also fixes a bug where a position reset accidentally assigns a pending
stream offset instead of keeping the current one.
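A minimal sketch of the data-class approach (field and class names are illustrative, not the renderer's actual members):
```java
// One immutable value object per pending output stream change, queued until
// the last buffer of the previous stream has been processed.
private static final class OutputStreamInfo {
  final long previousStreamLastBufferTimeUs;
  final long streamOffsetUs;

  OutputStreamInfo(long previousStreamLastBufferTimeUs, long streamOffsetUs) {
    this.previousStreamLastBufferTimeUs = previousStreamLastBufferTimeUs;
    this.streamOffsetUs = streamOffsetUs;
  }
}

private final ArrayDeque<OutputStreamInfo> pendingOutputStreamChanges =
    new ArrayDeque<>();
```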
#minor-release
PiperOrigin-RevId: 511787571
When rendering frames at a rate higher than the screen refresh rate,
e.g. playing at 8x, the player is releasing multiple frames at the same
release time (nanos) which are then dropped by the platform. The output
buffers are available later and as a result MediaCodec cannot keep up
decoding fast enough.
This change stops releasing multiple video frames in the same vsync
period and instead proactively drops the extra frame. The frame is counted
as skipped rather than dropped to differentiate it from frames dropped
due to slow decoding.
PiperOrigin-RevId: 510964976
This call may cause performance overhead in some situations,
for example if the AudioTrack needs to query an offload DSP
for the current position. We don't need to check this multiple
times per doSomeWork iteration as the value is unlikely to
change in any meaningful way.
PiperOrigin-RevId: 510957116
Rename ScaleToFitTransformation to ScaleAndRotateTransformation.
This better represents the operations that can be accomplished using this
effect. It was originally named ScaleToFit* because it's not obvious how
to scale to fit using OpenGL, and this effect handled scaling to fit in a way that no other MatrixTransformations did.
However, it's hard to discover how to rotate when skimming names of effects, so
it's probably more useful to convey that this effect rotates, than that it
scales to fit.
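A usage sketch with the renamed effect (values are arbitrary):
```java
ScaleAndRotateTransformation scaleAndRotate =
    new ScaleAndRotateTransformation.Builder()
        .setScale(/* scaleX= */ 2f, /* scaleY= */ 2f)
        .setRotationDegrees(90f)
        .build();
```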
PiperOrigin-RevId: 510480078
Previously, any constructors instrumented by Robolectric were made public. This
caused two types of issues:
1) If Android classes had non-public constructors that were later made public and
added to the Android API, Robolectric allowed tests to incorrectly use those
constructors on older SDK levels (where they were non-public). This most
commonly occurs for AccessibilityEvent and AccessibilityNodeInfo.
2) When reflection was used to instantiate classes that were instrumented by
Robolectric, all constructors were accessible, which did not match what
happened when running on an Android test.
Update the instrumentation in Robolectric to stop making all
constructors public.
PiperOrigin-RevId: 510049123
Tests in `SilenceSkippingAudioProcessorTest` used half as many short integers as needed for channel values when generating alternating silence/noise input. Fix this by passing left and right channel input.
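A minimal sketch of generating stereo input (illustrative, not the test's actual helper):
```java
// Each frame needs one short per channel, so a silence/noise value is written
// for both the left and the right channel.
static ShortBuffer createStereoInput(int frameCount, boolean silent) {
  ShortBuffer buffer = ShortBuffer.allocate(frameCount * 2);
  for (int i = 0; i < frameCount; i++) {
    short value = (short) (silent ? 0 : 1000);
    buffer.put(value); // Left channel.
    buffer.put(value); // Right channel.
  }
  buffer.flip();
  return buffer;
}
```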
PiperOrigin-RevId: 509188074
The AsynchronousMediaCodecAdapter's queuing thread stores any exceptions
raised by MediaCodec and re-throws them on the next call to
queueInputBuffer()/queueSecureInputBuffer(). However, if MediaCodec
raises an error while queueing, it goes into a failed state and does
not announce available input buffers. If there is no available input
buffer, the MediaCodecRenderer will never call
queueInputBuffer()/queueSecureInputBuffer(), hence playback stalls.
This change surfaces the queueing error through the adapter's dequeueing
methods.
PiperOrigin-RevId: 508637346
`TrackSelectorResult.rendererConfigurations` can contain null elements:
> A null entry indicates the corresponding renderer should be disabled.
This wasn't caught by the nullness checker because `ExoPlayerImpl` is
currently excluded from analysis.
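A minimal sketch of the required null handling (loop body is illustrative):
```java
for (int i = 0; i < trackSelectorResult.length; i++) {
  @Nullable RendererConfiguration configuration =
      trackSelectorResult.rendererConfigurations[i];
  if (configuration == null) {
    // A null entry indicates the corresponding renderer should be disabled.
    continue;
  }
  // ... enable renderer i with this configuration ...
}
```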
#minor-release
Issue: google/ExoPlayer#10977
PiperOrigin-RevId: 508619169
Previously, Robolectric's instrumentation updated all constructors to be
public. This caused two main types of problems:
1) When non-public constructors were made public and added to the Android API,
Robolectric allowed tests to incorrectly use the constructors on older SDK
levels (where they were non-public). This most commonly occurs for
AccessibilityEvent and AccessibilityNodeInfo.
2) When reflection was used to instantiate classes that were instrumented by
Robolectric, all constructors were accessible.
Fix the Google3 Robolectric tests that were affected by this issue.
A forthcoming change will fix the instrumentation in Robolectric to prevent
this type of issue from occurring.
Tested:
TAP --sample ran all affected tests and none failed
http://test/OCL:507861075:BASE:507805409:1675803313108:f2128fa4
PiperOrigin-RevId: 508087822
If a renderer error happens while processing readahead data for the
next item in the playlist, we currently throw the error immediately
and only set the item id in the error details. This makes it harder
to associate the error with the right item. For example, the user
facing UI is likely not updated to show the failing item when the
error is reported.
This can be improved slightly by force-setting the position to the
failing item. The playback still fails immediately, but this can't
be avoided because the renderer itself went into an error state.
PiperOrigin-RevId: 507808635
The AudioTrackPositionTracker needs to correct positions by
the speed set on the AudioTrack itself whenever it makes
estimates based on real time (i.e., the real-time playout
duration is not equal to the media duration played).
This happens for the main playback path already, but not for
the mode in which the position is estimated from the playback
head position and also not in the phase after the track has
been stopped. Both cases are not very noticeable during
normal playback, but become relevant when playing in offload
mode.
PiperOrigin-RevId: 507736408
Based on [this conversation thread](https://chat.google.com/room/AAAA--f88ao/76Rem_cRCK8), I've opted to update the existing FrameProcessor.create() rather than deprecate it, as it is unlikely to be in use by apps outside google3.
PiperOrigin-RevId: 506920930
Before this CL, the `renderedLastFrame` flag was not set if the last frame was released immediately (force render), or when it was dropped.
PiperOrigin-RevId: 506358626
Flushing resets all the texture processors within the `FrameProcessor`. This
includes:
- At the back, the FinalMatrixTextureProcessorWrapper, and its MatrixTextureProcessor
- At the front, the ExternalTextureManager
- All the texture processors in between
- All the ChainingGlTextureProcessorListeners in between texture processors
- All the internal states in the aforementioned components
The flush process follows this order, starting from `GlEffectsFrameProcessor.flush()`:
1. Flush the `FrameProcessingTaskExecutor`, so that after it returns, all tasks queued before calling `flush()` have completed
2. Post to the `FrameProcessingTaskExecutor` to flush the `FinalMatrixTextureProcessorWrapper`
3. Flushing the `FinalMatrixTextureProcessorWrapper` propagates flushing through, via the `ChainingGlTextureProcessorListener`
PiperOrigin-RevId: 506296469
Can be used to combine multiple media items into a single timeline window.
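A usage sketch (builder method names may differ slightly from the final API):
```java
// Combine an ad and its content into a single timeline window.
ConcatenatingMediaSource2 mediaSource =
    new ConcatenatingMediaSource2.Builder()
        .useDefaultMediaSourceFactory(context)
        .add(
            MediaItem.fromUri("https://example.com/ad.mp4"),
            /* initialPlaceholderDurationMs= */ 10_000)
        .add(
            MediaItem.fromUri("https://example.com/content.mp4"),
            /* initialPlaceholderDurationMs= */ 30_000)
        .build();
```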
Issue: androidx/media#247
Issue: google/ExoPlayer#4868
PiperOrigin-RevId: 506283307