The `@CallSuper` annotation should help catch cases where subclasses
are calling `delegate.addListener` instead of `super.addListener`, but
it will also (unintentionally) prevent subclasses from either
completely no-opping the listener registration or implementing it
themselves in a very custom way. I think that's probably OK, since
these cases are likely unusual, and they should be able to suppress
the warning/error.
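As a minimal, self-contained sketch of the pattern being enforced (the classes below are hypothetical stand-ins, not the library's API), `@CallSuper` on the base method makes lint flag overrides that skip the super call, such as ones that go to the delegate directly:

```java
import androidx.annotation.CallSuper;

// Hypothetical stand-ins to illustrate the pattern; not the library's classes.
interface Listener {}

class Delegate {
  void addListener(Listener listener) {
    // Real registration would happen here.
  }
}

class ForwardingBase {
  protected final Delegate delegate;

  ForwardingBase(Delegate delegate) {
    this.delegate = delegate;
  }

  // Lint now warns if an override doesn't call super.addListener, which
  // catches subclasses that call delegate.addListener directly instead.
  @CallSuper
  public void addListener(Listener listener) {
    delegate.addListener(listener);
  }
}

class CustomForwarding extends ForwardingBase {
  CustomForwarding(Delegate delegate) {
    super(delegate);
  }

  @Override
  public void addListener(Listener listener) {
    super.addListener(listener); // Keeps the base class's bookkeeping intact.
  }

  // A subclass that deliberately no-ops registration would omit the super
  // call and suppress the lint warning (check id "MissingSuperCall").
}
```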
Issue: androidx/media#258
#minor-release
PiperOrigin-RevId: 513848402
Audio tracks should either all be transcoded or all be transmuxed,
and the same applies to video tracks.
To achieve this, simplify the behaviour of transmuxAudio/Video.
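A tiny illustrative sketch of the per-track-type rule, assuming the per-track transcoding requirement is already known (the helper below is hypothetical):

```java
import java.util.List;

final class TrackTypeDecisions {

  /**
   * Hypothetical helper: if any track of a given type (audio or video) needs
   * transcoding, every track of that type is transcoded; otherwise all of
   * them are transmuxed.
   */
  static boolean shouldTranscodeAllTracksOfType(List<Boolean> perTrackNeedsTranscoding) {
    for (boolean needsTranscoding : perTrackNeedsTranscoding) {
      if (needsTranscoding) {
        return true;
      }
    }
    return false;
  }

  private TrackTypeDecisions() {}
}
```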
PiperOrigin-RevId: 513809287
Renamed MuxerEndToEndTest.java to Mp4MuxerEndToEndTest.java to align it with the class under test.
Removed the muxed prefix from the dump file name because Mp4 implicitly means muxed only.
PiperOrigin-RevId: 513574681
Used an actual captured image with a set color profile for the test to minimise the chance of the test flaking. Also renamed the media/bitmap/overlay folder to media/bitmap/input_images for clarity.
PiperOrigin-RevId: 513273353
If the Metadata passed to SegmentSpeedProvider is null, then the
SegmentSpeedProvider will always return 1f from getSpeed.
Initializing a SpeedChangingAudioProcessor requires a SpeedProvider.
Once configured, this audio processor is always active, so buffers are
passed through it. Because getSpeed is always 1f, the processor performs
a no-op, but still has to do a buffer copy for each buffer.
By not initializing the audio processor when metadata is null, this
copy can be skipped and the audio pipeline is more performant.
Note: This change does not affect the multiple media-item case, which
is not supported with speed changes, as per Transformer API
documentation.
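A self-contained sketch of that decision point, using hypothetical stand-ins rather than the real SpeedChangingAudioProcessor/SegmentSpeedProvider signatures:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins; the real classes and signatures differ.
final class AudioProcessorChainBuilder {

  interface AudioProcessor {}

  static final class SpeedChangingProcessor implements AudioProcessor {
    SpeedChangingProcessor(Object slowMotionMetadata) {}
  }

  /** Only adds the speed-changing processor when slow-motion metadata exists. */
  static List<AudioProcessor> buildProcessors(Object slowMotionMetadata) {
    List<AudioProcessor> processors = new ArrayList<>();
    if (slowMotionMetadata != null) {
      // Speeds can differ from 1f, so the processor is needed.
      processors.add(new SpeedChangingProcessor(slowMotionMetadata));
    }
    // With null metadata getSpeed would always return 1f, so leaving the
    // processor out skips a per-buffer copy that would otherwise be a no-op.
    return processors;
  }

  private AudioProcessorChainBuilder() {}
}
```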
PiperOrigin-RevId: 513261811
Once the value returned from AudioTimestampPoller advances, we
only need getPlaybackHeadPosition to sample sync params and
verify the returned timestamp. Both of these happen less often
and we can avoid calling getPlaybackHeadPosition if we don't
actually need it.
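A conceptual sketch of that gating; the class, method names, and the reader interface below are hypothetical stand-ins, not the real audio track internals:

```java
// Hypothetical stand-ins to illustrate the gating; not the real classes.
final class PlaybackHeadPositionGate {

  interface PlaybackHeadPositionReader {
    long getPlaybackHeadPosition();
  }

  private boolean audioTimestampAdvanced;

  void onAudioTimestampAdvanced() {
    audioTimestampAdvanced = true;
  }

  /**
   * Reads the playback head position only while the audio timestamp has not
   * advanced yet, or when it is needed to sample sync params or to verify a
   * returned timestamp; returns -1 when the read is skipped.
   */
  long maybeReadPlaybackHeadPosition(
      PlaybackHeadPositionReader reader, boolean needsSyncParams, boolean needsTimestampCheck) {
    if (audioTimestampAdvanced && !needsSyncParams && !needsTimestampCheck) {
      return -1; // Skip the call; the audio timestamp alone is enough here.
    }
    return reader.getPlaybackHeadPosition();
  }
}
```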
PiperOrigin-RevId: 512882170
(cherry picked from commit 408b4449ff75e29a9bda7adc1b530b993fc47814)
Playback parameter signalling can be quite complex because
(a) the renderer clock often has a delay before it realizes
that it doesn't support a previously set speed and
(b) the speed set on the media clock sometimes intentionally
differs from the one surfaced to the user, e.g. during
live speed adjustment or when overriding ad playback
speed to 1.0f.
This change fixes two problems related to this signalling:
1. When resetting the media clock speed at a period transition,
we don't currently tell the renderers that this happened.
2. When a delayed speed change update from the media clock is
pending and the renderer for this media clock is disabled
before the change can be handled, the pending update becomes
stale, but it is still applied later and overrides any other valid
speed set in the meantime (sketched below).
Both edge cases are also covered by extended or new player tests.
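A conceptual sketch of the second fix, with a hypothetical tracker class standing in for the player internals:

```java
// Hypothetical class illustrating how a pending speed change from the media
// clock is dropped once its renderer is disabled, so it can no longer
// override a speed set later.
final class PendingClockSpeedChange {

  private float pendingSpeed = Float.NaN; // NaN means no pending change.

  void onDelayedSpeedChangeFromMediaClock(float speed) {
    pendingSpeed = speed;
  }

  void onMediaClockRendererDisabled() {
    // The not-yet-applied speed is stale now; discard it.
    pendingSpeed = Float.NaN;
  }

  boolean hasPendingChange() {
    return !Float.isNaN(pendingSpeed);
  }

  float consumePendingSpeed() {
    float speed = pendingSpeed;
    pendingSpeed = Float.NaN;
    return speed;
  }
}
```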
Issue: google/ExoPlayer#10882
PiperOrigin-RevId: 512658918
(cherry picked from commit e79b47ccff39363543c514937aef517a855994f0)
Changes include:
1. Move the test file into the muxer module.
2. Use dump file infra for test cases.
3. Add one additional test for adding float metadata.
4. A few improvements in the code.
In the next CL, the Mp4 term will be removed from the file name as we are not using this term in test file names.
PiperOrigin-RevId: 513222506
We shouldn't have this logging unless we really need it to debug
a specific problem, as it can be noisy (even at debug level).
PiperOrigin-RevId: 512904412
Reference docs are now generated by the standard Jetpack machinery, so there's no need for us to generate these docs ourselves.
PiperOrigin-RevId: 512898248
Once the value returned from AudioTimestampPoller advances, we
only need getPlaybackHeadPosition to sample sync params and
verify the returned timestamp. Both of these happen less often
and we can avoid calling getPlaybackHeadPosition if we don't
actually need it.
PiperOrigin-RevId: 512882170
Based on 1000 test runs on an emulator, releasing fails with the
current timeout about one percent of the time (even with no custom
effects). Releasing normally completes in about 30 ms, but occasionally
`eglTerminate` took up to 200 ms (and even releasing an effect
took up to 80 ms in one case).
With the new timeout of 500 ms, we still catch stuck effects reasonably
quickly but the number of flaky test failures should be less than one in
ten thousand.
PiperOrigin-RevId: 512690715
This timeline will be used in unit test cases of follow-up
CLs. It can be used to emulate the timeline created by a
multi-period live media source as real time advances.
PiperOrigin-RevId: 512665552
Playback parameter signalling can be quite complex because
(a) the renderer clock often has a delay before it realizes
that it doesn't support a previously set speed and
(b) the speed set on the media clock sometimes intentionally
differs from the one surfaced to the user, e.g. during
live speed adjustment or when overriding ad playback
speed to 1.0f.
This change fixes two problems related to this signalling:
1. When resetting the media clock speed at a period transition,
we don't currently tell the renderers that this happened.
2. When a delayed speed change update from the media clock is
pending and the renderer for this media clock is disabled
before the change can be handled, the pending update becomes
stale, but it is still applied later and overrides any other valid
speed set in the meantime.
Both edge cases are also covered by extended or new player tests.
Issue: google/ExoPlayer#10882
#minor-release
PiperOrigin-RevId: 512658918
MediaCodecRenderer currently has two independent paths to trigger
events at stream changes:
1. Detection of the last output buffer of the old stream to trigger
onProcessedStreamChange and setting the new output stream offset.
2. Detection of the first input buffer of the new stream to trigger
onOutputFormatChanged.
Both events are identical for most media. However, there are two
problematic cases:
A. (1) happens after (2). This may happen if the declared media
duration is shorter than the actual last sample timestamp.
B. (2) is too late and there are output samples between (1) and (2).
This can happen if the new media outputs samples with a timestamp
less than the first input timestamp.
This can be made more robust by:
- Keeping a separate formatQueue for each stream to avoid case A
  (sketched below).
- Forcing output of the first format after a stream change to
  avoid case B.
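A conceptual sketch of the per-stream format queue idea, using hypothetical types rather than the renderer's real fields:

```java
import java.util.ArrayDeque;

// Hypothetical structure: one pending-format queue per stream, so formats
// queued for the old stream can't leak past a stream change (case A), and the
// first format of the new stream can be force-output immediately (case B).
final class PerStreamFormatQueues<F> {

  private final ArrayDeque<ArrayDeque<F>> queues = new ArrayDeque<>();

  PerStreamFormatQueues() {
    queues.addLast(new ArrayDeque<>());
  }

  /** Starts a fresh queue when a new input stream begins. */
  void onNewInputStream() {
    queues.addLast(new ArrayDeque<>());
  }

  /** Queues an input format against the most recent stream. */
  void onInputFormat(F format) {
    queues.getLast().addLast(format);
  }

  /** Drops the finished stream's queue once its last buffer has been output. */
  void onProcessedStreamChange() {
    if (queues.size() > 1) {
      queues.removeFirst();
    }
  }

  /** Returns the next output format of the current output stream, or null. */
  F pollOutputFormat() {
    return queues.getFirst().pollFirst();
  }
}
```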
Issue: google/ExoPlayer#8594
PiperOrigin-RevId: 512586838
(cherry picked from commit 3970343846d7bae5d8ae331d74241c50777ce18a)
Some devices were reported to have wrong PerformancePoint sets
that cause 60 fps to be marked as unsupported even though it is
supported.
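A hedged sketch of one way to guard against such misreported lists; the 720p/60 baseline check below is an assumption for illustration, not necessarily the check used in the actual fix:

```java
import android.media.MediaCodecInfo.VideoCapabilities;
import android.media.MediaCodecInfo.VideoCapabilities.PerformancePoint;
import androidx.annotation.RequiresApi;
import java.util.List;

final class PerformancePointSanityCheck {

  /**
   * Returns whether the declared PerformancePoints look trustworthy. If not,
   * callers would fall back to the legacy frame-rate capability checks
   * instead of marking e.g. 60 fps as unsupported.
   */
  @RequiresApi(29)
  static boolean shouldTrustPerformancePoints(VideoCapabilities videoCapabilities) {
    List<PerformancePoint> performancePoints =
        videoCapabilities.getSupportedPerformancePoints();
    if (performancePoints == null || performancePoints.isEmpty()) {
      return false;
    }
    // Illustrative baseline: a list that can't cover 720p/60 for a standard
    // decoder is likely misreported on these devices.
    PerformancePoint baseline =
        new PerformancePoint(/* width= */ 1280, /* height= */ 720, /* frameRate= */ 60);
    for (PerformancePoint performancePoint : performancePoints) {
      if (performancePoint.covers(baseline)) {
        return true;
      }
    }
    return false;
  }

  private PerformancePointSanityCheck() {}
}
```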
Issue: google/ExoPlayer#10898
PiperOrigin-RevId: 512580395
(cherry picked from commit d0cbf0fce84aa73be5eb68935d6a4dd2f2e1dc3d)