This follows the renaming of registerInputFrame to handleInputFrame.
- queueInputBitmap is renamed to handleInputBitmap for consistency with
handleInputFrame.
- registerInputStream is renamed to onInputStreamChanged for consistency
with media3 method names.
PiperOrigin-RevId: 655529699
Build upon Transformer.videoFrameProcessorFactory in MultipleInputVideoGraph
Check that Transformer.videoFrameProcessorFactory is a
DefaultVideoFrameProcessor.Factory for multi-input video.
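A minimal sketch of the kind of check this describes (the
`hasMultipleVideoSequences` flag is a hypothetical stand-in; the real check
lives in Transformer's internal wiring):

```java
import androidx.media3.common.VideoFrameProcessor;
import androidx.media3.effect.DefaultVideoFrameProcessor;

// Sketch only: multi-input video goes through MultipleInputVideoGraph, which
// builds on DefaultVideoFrameProcessor, so a custom factory can't be honored.
static void checkVideoFrameProcessorFactory(
    boolean hasMultipleVideoSequences, VideoFrameProcessor.Factory factory) {
  if (hasMultipleVideoSequences
      && !(factory instanceof DefaultVideoFrameProcessor.Factory)) {
    throw new IllegalArgumentException(
        "Multi-input video requires DefaultVideoFrameProcessor.Factory");
  }
}
```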
Fixes https://github.com/androidx/media/issues/1509
PiperOrigin-RevId: 655232381
This test is flaky at p4head because CompositionPlayer has no logic to
ensure a specific input is used to configure the audio graph. Depending
on which sequence registers its input first, the output format of the
media changes.
Setting effects on the 2nd sequence makes both inputs the same format,
which avoids this flakiness in the test.
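A hedged sketch of that kind of test setup (the URI and the choice of audio
effect are placeholders, not the test's actual values):

```java
import androidx.media3.common.MediaItem;
import androidx.media3.common.audio.SonicAudioProcessor;
import androidx.media3.transformer.EditedMediaItem;
import androidx.media3.transformer.EditedMediaItemSequence;
import androidx.media3.transformer.Effects;
import com.google.common.collect.ImmutableList;

// Sketch only: pin the audio format of the 2nd sequence with an explicit audio
// processor so the audio graph configuration no longer depends on which
// sequence registers its input first.
SonicAudioProcessor resampler = new SonicAudioProcessor();
resampler.setOutputSampleRateHz(44_100);
EditedMediaItem secondItem =
    new EditedMediaItem.Builder(MediaItem.fromUri("asset:///media/sample2.mp4"))
        .setEffects(new Effects(ImmutableList.of(resampler), ImmutableList.of()))
        .build();
EditedMediaItemSequence secondSequence = new EditedMediaItemSequence(secondItem);
```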
PiperOrigin-RevId: 653582441
-----
Context:
* Each sequence is wrapped as a single MediaSource, each being played
by an underlying ExoPlayer.
* Repeat mode is typically implemented in Players as a seek to the next
item in the playlist.
-----
This CL:
Repeat mode is triggered by listening for a playWhenReady change on the
main input player due to the end of the media item.
There is a slight delay at the end of playback before it repeats.
Setting repeat mode on the underlying players addresses this, but means
that the players will seek without waiting for the CompositionPlayer,
and as such previewAudioPipeline does not get the correct signals
around blocking/flushing.
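A rough sketch of the trigger described above (`seekUnderlyingPlayersToStart()`
is a hypothetical helper, and the listener wiring is simplified compared to
CompositionPlayer's internals):

```java
import androidx.media3.common.Player;

// Sketch only: when the main input player pauses because it reached the end of
// its media item and repeat mode is on, seek everything back to the start.
mainPlayer.addListener(
    new Player.Listener() {
      @Override
      public void onPlayWhenReadyChanged(boolean playWhenReady, int reason) {
        if (!playWhenReady
            && reason == Player.PLAY_WHEN_READY_CHANGE_REASON_END_OF_MEDIA_ITEM
            && repeatMode != Player.REPEAT_MODE_OFF) {
          seekUnderlyingPlayersToStart(); // hypothetical helper
        }
      }
    });
```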
PreviewAudioPipeline: the seek position can validly be C.TIME_UNSET;
however, the preview pipeline did not handle this case.
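A minimal sketch of the missing handling (variable names are illustrative, not
the actual PreviewAudioPipeline fields):

```java
import androidx.media3.common.C;

// Sketch only: treat an unset seek position as the start of the stream rather
// than feeding C.TIME_UNSET into position arithmetic.
long resolvedSeekPositionUs = seekPositionUs == C.TIME_UNSET ? 0 : seekPositionUs;
```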
CompositionPlayer's getContentPosition is given (through a lambda) as a
supplier to the State object, which means any comparisons between
previous/new state for this value do not work. In SimpleBasePlayer,
there is logic to use the positionDiscontinuityPositionUs for the
position change (see getPositionInfo called from
updateStateAndInformListeners); however, this logic is not considered
in getMediaItemTransitionReason, so a condition needed to be added for
this case.
-----
Tests:
* Dump files clearly show the position and data are repeated.
* Assertions on the reasons for transitions or position
discontinuities.
PiperOrigin-RevId: 653210278
Sources (for example media projection) can populate the `Surface` from
`SurfaceAssetLoader` with timestamps that don't start from zero. But
`MuxerWrapper` assumes the latest sample timestamp can be used as the duration
when it calculates average bitrate and notifies its listener.
This can cause a crash because the calculated average bitrate can be zero if
the denominator duration is large enough.
Instead, use the maximum sample timestamp minus the first sample
timestamp across tracks to get the duration.
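A hedged sketch of that duration calculation (names are illustrative, not
MuxerWrapper's actual members; overflow handling is omitted):

```java
// Sketch only: derive the duration from the span of sample timestamps across
// tracks, so streams whose timestamps don't start at zero don't inflate the
// denominator and zero out the average bitrate.
static long computeAverageBitrate(
    long firstSampleTimestampUs, long maxSampleTimestampUs, long totalBytesWritten) {
  long durationUs = maxSampleTimestampUs - firstSampleTimestampUs;
  if (durationUs <= 0) {
    return -1; // duration unknown, bitrate not computable
  }
  return totalBytesWritten * 8 * 1_000_000L / durationUs;
}
```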
Side note: the large timestamps from the surface texture when using media
projection arrive unchanged (apart from conversion from ns to us) in effect
implementations and in the muxer wrapper (and are passed to the underlying
muxer). The outputs of media3 muxer and the framework muxer are similar.
PiperOrigin-RevId: 652422674
This should have no influence on app behavior or other policies; it
just allows code to depend on new API 35 platform symbols.
PiperOrigin-RevId: 652414026
Also promote all H.265 constants to be public in `NalUnitUtil` with
`H265_` prefixes, for consistency.
A lot of these names are used in H.265 too (and `NalUnitUtil` handles
both), but with different values, so this rename aims to avoid
accidentally using an H.264 value in an H.265 context.
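For example (NAL unit type values are from the specs; the exact constant names
after the rename are assumptions):

```java
// A sequence parameter set is NAL unit type 7 in H.264 but 33 in H.265, so an
// unprefixed constant is easy to misuse across the two codecs.
public static final int NAL_UNIT_TYPE_SPS = 7;        // H.264 SPS
public static final int H265_NAL_UNIT_TYPE_SPS = 33;  // H.265 SPS (H265_ prefix)
```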
PiperOrigin-RevId: 651774188
This is because the force-EOS workaround for videos takes longer than the
original muxer timeout. This way, apps depending on Transformer won't have to
manually set the muxer timeout on emulators.
PiperOrigin-RevId: 651355755
VideoSink.registerInputFrame is now called for every input frame (not
only the ones that should be rendered to the input surface) because it's
the VideoSink that decides whether it wants the frame to be rendered.
PiperOrigin-RevId: 651049851
MCVR crashed because it registers a new input stream with the VideoSink on
every `onOutputFormatChanged()`, assuming that `onOutputFormatChanged()` is
only invoked on a media item transition. However, it can be called multiple
times for one media item.
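A rough sketch of the kind of guard this implies (field names and the VideoSink
call are simplified assumptions, not the exact MCVR change):

```java
// Sketch only: only register a new input stream with the sink when the output
// format actually changed, because onOutputFormatChanged() can fire several
// times for a single media item.
private void maybeRegisterInputStream(Format newFormat) {
  if (!newFormat.equals(lastRegisteredInputFormat)) {
    videoSink.registerInputStream(VideoSink.INPUT_TYPE_SURFACE, newFormat); // signature assumed
    lastRegisteredInputFormat = newFormat;
  }
}
```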
PiperOrigin-RevId: 649050576
The hardware decoder on some devices fails to write 1920x1080 YUV_420_888 buffers into an ImageReader. This change allows us to remove the skipCalculateSsim device workaround in ExportTest.java.
VideoDecodingWrapper now uses media3's MediaExtractorCompat: necessary for parsing MediaFormat#KEY_CODECS_STRING and for the decoder capabilities check.
PiperOrigin-RevId: 648726721