After this CL, DVFP waits, when flushing, until all previously registered frames arrive.
Previously, when flushing, ETM recorded the difference between the number of registered frames
and the number of frames that had arrived on the SurfaceTexture. (Note that ETM is flushed
last in the chain, as flushing is done backwards from FinalShaderProgramWrapper.) ETM then
waited until that many frames arrived after the flush.
The normal flow is: MediaCodecVideoRenderer (MCVR) registers a new decoded frame to DVFP
in `processOutputBuffer()`, MCVR calls `codec.releaseOutputBuffer()` to have MediaCodec
render the frame, and then the frame arrives in DVFP's ETM.
However, there might be a discrepancy. When registering the frame, ETM records it
on the calling thread, ~instantly. Later, when the rendered frame arrives, ETM records
that a frame is available on the task executor thread (commonly known as the GL thread).
More specifically, when a frame arrives in `onFrameAvailableListener`, ETM posts all
subsequent processing to the task executor. When seeking, the task executor is flushed as
the first step. A frame might have already arrived on ETM, with its processing already
queued onto the task executor, only for that work to be discarded when the task executor
is flushed. If this happens, the frame is considered to never have arrived. This freezes
the app, because ETM will wait for this frame to arrive before declaring that flushing
has completed.
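A minimal sketch of the new waiting behavior, assuming a simple lock-guarded counter; the names below (PendingFrameTracker, registerFrame, onFrameAvailable, awaitAllFramesArrived) are hypothetical and only illustrate blocking the flush until every previously registered frame has arrived:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

final class PendingFrameTracker {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition allFramesArrived = lock.newCondition();
  private int pendingFrameCount;

  /** Called on the registering thread when a decoded frame is registered. */
  void registerFrame() {
    lock.lock();
    try {
      pendingFrameCount++;
    } finally {
      lock.unlock();
    }
  }

  /** Called when the rendered frame arrives on the SurfaceTexture. */
  void onFrameAvailable() {
    lock.lock();
    try {
      if (pendingFrameCount > 0 && --pendingFrameCount == 0) {
        allFramesArrived.signalAll();
      }
    } finally {
      lock.unlock();
    }
  }

  /** Blocks flushing until every previously registered frame has arrived. */
  void awaitAllFramesArrived(long timeoutMs) throws InterruptedException {
    lock.lock();
    try {
      long remainingNanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
      while (pendingFrameCount > 0 && remainingNanos > 0) {
        remainingNanos = allFramesArrived.awaitNanos(remainingNanos);
      }
    } finally {
      lock.unlock();
    }
  }
}
```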
PiperOrigin-RevId: 631524332
AudioGraphInput now accepts a range of inputs, as long as the effects
provided convert the audio to 16-bit PCM.
As part of this, add a workaround to DefaultCodec to ensure the PCM
encoding is correct, and remove parameterized tests that are not valid.
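A minimal sketch of the kind of check this implies, assuming media3's AudioProcessor.AudioFormat; this is illustrative, not the actual AudioGraphInput or DefaultCodec code:

```java
import androidx.media3.common.C;
import androidx.media3.common.audio.AudioProcessor.AudioFormat;

final class Int16PcmCheck {
  /** Throws if the post-effects audio format is not 16-bit PCM. */
  static void checkOutputIsInt16Pcm(AudioFormat outputAudioFormat) {
    if (outputAudioFormat.encoding != C.ENCODING_PCM_16BIT) {
      throw new IllegalArgumentException(
          "Effects must output 16-bit PCM audio, got encoding=" + outputAudioFormat.encoding);
    }
  }

  private Int16PcmCheck() {}
}
```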
PiperOrigin-RevId: 631404152
The second stage of the changes removes the conversion to linear colors in the SDR effects pipeline by default.
Also resolves Issue: androidx/media#1050
PiperOrigin-RevId: 630108296
Also adds a first-frame-rendered test for playing back compositions.
- This test checks the output pixels, using an `ImageReader` to retrieve the
output bitmap (see the sketch below).
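A minimal sketch of reading a bitmap out of an `ImageReader` configured with `PixelFormat.RGBA_8888`; this is illustrative, not the actual test utility:

```java
import android.graphics.Bitmap;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

final class ImageReaderBitmaps {
  /** Copies the most recently delivered frame into a Bitmap for pixel assertions. */
  static Bitmap getBitmap(ImageReader imageReader) {
    // A real test would wait for onImageAvailable before reading.
    try (Image image = imageReader.acquireLatestImage()) {
      if (image == null) {
        throw new IllegalStateException("No image available yet");
      }
      Image.Plane plane = image.getPlanes()[0];
      ByteBuffer buffer = plane.getBuffer();
      int pixelStride = plane.getPixelStride(); // 4 bytes per pixel for RGBA_8888.
      int rowPadding = plane.getRowStride() - pixelStride * image.getWidth();
      // Allocate a bitmap wide enough to absorb row padding, then crop to the real size.
      Bitmap padded =
          Bitmap.createBitmap(
              image.getWidth() + rowPadding / pixelStride,
              image.getHeight(),
              Bitmap.Config.ARGB_8888);
      padded.copyPixelsFromBuffer(buffer);
      return Bitmap.createBitmap(padded, 0, 0, image.getWidth(), image.getHeight());
    }
  }

  private ImageReaderBitmaps() {}
}
```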
PiperOrigin-RevId: 630100817
Part of a two-stage change to remove the conversion to linear colors in the SDR effects pipeline by default. Changes the boolean to an IntDef, introducing a third option that gets all SDR input into the same color space.
This is a planned API-breaking change, but it should not change the behavior of the pipeline.
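A minimal sketch of the boolean-to-IntDef pattern this describes; the constant and annotation names below are hypothetical, not the actual media3 API surface:

```java
import androidx.annotation.IntDef;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

final class SdrColorHandling {
  /** Map all SDR input into one shared color space (the new third option). */
  static final int COLOR_SPACE_SHARED_SDR = 0;
  /** Keep each input's original color space (previously false). */
  static final int COLOR_SPACE_ORIGINAL = 1;
  /** Convert SDR input to linear colors (previously true). */
  static final int COLOR_SPACE_LINEAR = 2;

  /** The values the former boolean parameter can now take. */
  @Retention(RetentionPolicy.SOURCE)
  @IntDef({COLOR_SPACE_SHARED_SDR, COLOR_SPACE_ORIGINAL, COLOR_SPACE_LINEAR})
  @interface SdrColorSpace {}

  private SdrColorHandling() {}
}
```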
PiperOrigin-RevId: 629013747
This ensures that `C.TIME_UNSET` is more clear in dump files. Some of
these call-sites will **never** pass `C.TIME_UNSET`, but it seems
clearest to always use `addTime`, and it may help ensure that when
these sites are copied in future, `addTime` is used in the new location
too.
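A hypothetical sketch of what an `addTime`-style helper does (this is not the actual Dumper API): it prints an explicit marker for `C.TIME_UNSET` instead of the raw sentinel value.

```java
import androidx.media3.common.C;

final class DumpTimes {
  /** Formats a time field for a dump file, making C.TIME_UNSET explicit. */
  static String formatTime(String field, long timeMs) {
    return field + " = " + (timeMs == C.TIME_UNSET ? "UNSET TIME" : timeMs);
  }

  private DumpTimes() {}
}
```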
PiperOrigin-RevId: 628363183
Add a ctts box implementation to handle muxing B-frame videos.
Add method convertPresentationTimestampsToCompositionOffset to
provide sample offsets. Return an empty ctts box when the video
does not contain B-frames. Add the ctts box to MoovStructure to handle
muxing videos that contain B-frames.
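A worked sketch of the underlying calculation, under the simplifying assumption that samples arrive in decode order with a fixed sample duration: each ctts entry stores a run of samples sharing the same composition offset (presentation time minus decode time). This is illustrative, not the actual muxer code.

```java
import java.util.ArrayList;
import java.util.List;

final class CttsEntries {
  /** Returns run-length encoded {sampleCount, sampleOffset} pairs for a ctts box. */
  static List<int[]> compute(long[] presentationTimesUs, long sampleDurationUs) {
    List<int[]> entries = new ArrayList<>();
    for (int i = 0; i < presentationTimesUs.length; i++) {
      long decodeTimeUs = i * sampleDurationUs; // Simplifying assumption: fixed duration.
      int offset = (int) (presentationTimesUs[i] - decodeTimeUs);
      int lastIndex = entries.size() - 1;
      if (!entries.isEmpty() && entries.get(lastIndex)[1] == offset) {
        entries.get(lastIndex)[0]++; // Extend the current run.
      } else {
        entries.add(new int[] {1, offset});
      }
    }
    // With no B-frames every offset is 0, collapsing to a single all-zero run.
    return entries;
  }

  private CttsEntries() {}
}
```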
PiperOrigin-RevId: 628346257
Also makes the setter more flexible by ignoring the value passed to the setter when the output is HDR, rather than throwing (since all HDR content must have a linear color space).
PiperOrigin-RevId: 627388436
Instead of initializing the video sink outside the renderer with an
empty format for composition preview, we initialize it in the renderer
with the input format for video.
PiperOrigin-RevId: 627313708
Before this CL, if all the video (resp. audio) samples were fed before
the audio (resp. video) output format was reported, the
SequenceAssetLoader was incorrectly reporting a MediaItem change with a
null format.
PiperOrigin-RevId: 625978268
Transformer's input shouldn't be constrained to the number of playable audio channels on the current device because the media may be edited (to mix channels for example) or encoded for playback on another device (a server for example).
PiperOrigin-RevId: 625604243
The raw audio decoder's output audio format is stereo when the number of input
channels is, for example, 10. Add a temporary workaround that uses the
input channel count for raw audio. This code should be removed in the future, when
we bypass the decoder entirely for raw audio.
Tested manually on a WAVE file with 18 audio channels.
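A minimal sketch of the workaround's idea, assuming media3's Format and MimeTypes classes; this is illustrative, not the actual DefaultCodec code:

```java
import androidx.media3.common.Format;
import androidx.media3.common.MimeTypes;

final class RawAudioChannelCountWorkaround {
  /** For raw (PCM) audio, trust the input format's channel count over the decoder's. */
  static Format maybeOverrideChannelCount(Format inputFormat, Format decoderOutputFormat) {
    if (MimeTypes.AUDIO_RAW.equals(inputFormat.sampleMimeType)
        && decoderOutputFormat.channelCount != inputFormat.channelCount) {
      return decoderOutputFormat.buildUpon().setChannelCount(inputFormat.channelCount).build();
    }
    return decoderOutputFormat;
  }

  private RawAudioChannelCountWorkaround() {}
}
```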
PiperOrigin-RevId: 625307624
Before this CL, currentInputBufferBeingOutput was set to null without
adding the buffer back to the queue of available buffers, which made
the buffer unusable. After multiple seeks, playback got stuck because
the AudioGraphInput had no input buffers left.
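A minimal sketch of the fix's shape, with hypothetical field names rather than the actual AudioGraphInput implementation: return the buffer to the pool of available input buffers before dropping the reference to it.

```java
import androidx.media3.decoder.DecoderInputBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

final class AudioGraphInputSketch {
  private final Queue<DecoderInputBuffer> availableInputBuffers = new ArrayDeque<>();
  private DecoderInputBuffer currentInputBufferBeingOutput;

  /** Releases the buffer currently being output back to the available pool. */
  void clearCurrentInputBufferBeingOutput() {
    if (currentInputBufferBeingOutput != null) {
      currentInputBufferBeingOutput.clear();
      availableInputBuffers.add(currentInputBufferBeingOutput); // Previously missing.
      currentInputBufferBeingOutput = null;
    }
  }
}
```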
PiperOrigin-RevId: 624943271
After this CL, Transformer will throw if the clipping start and end
positions are the same because MediaMuxer doesn't support writing a
file with no samples. This should work once we default to the in-app
muxer.
Issue: androidx/media#1242
PiperOrigin-RevId: 624861950
Reduces flakiness of tests that assert on PCM audio. Tests now have to
explicitly choose how they want the capturing muxer to handle PCM audio.
Note that the only dump files that have changed are those that deal
with PCM audio (.wav, sowt, twos, silence). Because of the continuous
nature of PCM, timestamps are not part of the dump.
PiperOrigin-RevId: 623155302