From the `sampleMimeType`, we can determine whether the media item contains video, image, or audio data. This is done for the input media items and the output media item.
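For illustration, a minimal sketch of this kind of classification using Media3's `MimeTypes` helpers; the actual metrics code may differ:

```java
import androidx.media3.common.MimeTypes;

/** Sketch: classify a track from its sample MIME type. Not the actual metrics implementation. */
final class SampleMimeTypeExample {

  static String describe(String sampleMimeType) {
    if (MimeTypes.isVideo(sampleMimeType)) {
      return "video";
    } else if (MimeTypes.isImage(sampleMimeType)) {
      return "image";
    } else if (MimeTypes.isAudio(sampleMimeType)) {
      return "audio";
    }
    return "other";
  }

  private SampleMimeTypeExample() {}
}
```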
PiperOrigin-RevId: 723459732
- It's never used, and handling multi-threading is costly.
- If the VideoCompositor and the VideoFrameProcessors use separate
threads but share the same GlObjectsProvider, the GlObjectsProvider is
accessed from multiple threads. This class doesn't seem designed for
multi-threading.
PiperOrigin-RevId: 723448013
Previously, the codec names for the input were collected from `Format.codecs`, which returns the RFC 6381 string rather than the name of the codec that was actually used. This was changed to retain the decoder name from `ProcessedInput` instead.
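To illustrate the difference, a hedged sketch contrasting the two values; it assumes `ProcessedInput` exposes the decoder name via a `videoDecoderName` field:

```java
import androidx.media3.common.Format;
import androidx.media3.transformer.ExportResult;

/** Sketch contrasting the RFC 6381 codec string with the decoder name actually used. */
final class CodecNameExample {

  static String describeVideoCodecs(Format inputFormat, ExportResult.ProcessedInput processedInput) {
    // RFC 6381 string describing the coding profile/level, e.g. "avc1.640028".
    String rfc6381Codecs = inputFormat.codecs;
    // Name of the decoder that was actually used, e.g. "c2.android.avc.decoder".
    // Assumes ProcessedInput exposes it as below.
    String videoDecoderName = processedInput.videoDecoderName;
    return "codecs=" + rfc6381Codecs + ", decoder=" + videoDecoderName;
  }

  private CodecNameExample() {}
}
```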
PiperOrigin-RevId: 723129667
MediaCodecVideoRenderer is becoming unwieldy with its numerous constructors and optional settings. This change refactors MediaCodecVideoRenderer to use a builder pattern for simplicity.
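As a generic illustration of the pattern only (the names below are hypothetical, not the actual MediaCodecVideoRenderer API), a builder replaces telescoping constructors with named, optional setters:

```java
import android.content.Context;
import android.os.Handler;

/** Hypothetical builder sketch; illustrates the pattern, not the real Media3 API. */
final class ExampleVideoRenderer {

  static final class Builder {
    private final Context context;
    private Handler eventHandler; // Optional setting.
    private long allowedJoiningTimeMs = 5_000; // Default value replaces a constructor overload.

    Builder(Context context) {
      this.context = context;
    }

    Builder setEventHandler(Handler eventHandler) {
      this.eventHandler = eventHandler;
      return this;
    }

    Builder setAllowedJoiningTimeMs(long allowedJoiningTimeMs) {
      this.allowedJoiningTimeMs = allowedJoiningTimeMs;
      return this;
    }

    ExampleVideoRenderer build() {
      return new ExampleVideoRenderer(context, eventHandler, allowedJoiningTimeMs);
    }
  }

  private ExampleVideoRenderer(Context context, Handler eventHandler, long allowedJoiningTimeMs) {
    // A real renderer would store and use these; this sketch only shows the construction pattern.
  }
}
```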
PiperOrigin-RevId: 723022129
Also updated DefaultVideoFrameProcessor to create GlShaderPrograms with the working ColorInfo rather than the output ColorInfo.
PiperOrigin-RevId: 721748002
Most of the values for the output `MediaItemInfo` will be retained from the `ExportResult`, so the `ExportResult` is now passed to `onExportSuccess` and `onExportError` directly. For now, the duration of the output is recorded for metrics, and more values will be added in the following CLs.
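A minimal sketch of how the output duration could be read from the `ExportResult`; the callback name mirrors the one above, but the real listener signature may differ:

```java
import androidx.media3.transformer.ExportResult;

/** Sketch of a metrics collector reading the output duration from the ExportResult. */
final class MetricsCollectorExample {

  // Hypothetical callback; the real listener signature may differ.
  void onExportSuccess(ExportResult exportResult) {
    long outputDurationMs = exportResult.durationMs; // Duration of the exported file, in ms.
    // ... record outputDurationMs into the metrics event.
  }
}
```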
PiperOrigin-RevId: 721324689
This is necessary for prewarming. With prewarming, in a sequence of 2
videos, the second renderer is enabled before the first one is disabled,
and decode-only frames should be skipped before the second renderer is
started. The problem is that, before it is started, the second renderer
forwards frames to a BufferingVideoSink, which delays frame handling and
therefore prevents the frames from being skipped before the renderer is
started.
PiperOrigin-RevId: 721032049
Before the change, `SequenceAssetLoader#onTrackAdded` was being called twice, once for audio and once for video, while `ExportResult.ProcessedInput` took a single format, which was the latest one to be written. The change captures both the audio and video formats of the input, preventing one format from overwriting the other.
PiperOrigin-RevId: 720487697
playback_sequencesOfVideos_effectsReceiveCorrectTimestamps is failing
for prewarming. The following is happening:
- For prewarming, there are 2 alternating video renderers per sequence.
- When the first MediaItem ends (for both sequences), signalEndOfInput
is not called on the InputVideoSink (which is expected).
- The DefaultVideoCompositor doesn't receive the end-of-input-source
signal (which is also expected).
- As a result, the DefaultVideoCompositor never outputs the last frame
because it waits for more input frames to be fed.
- The VideoGraph thus doesn't output the last frame either, and the
first video renderer never ends.
- This causes playback to get stuck.
This is similar to the problem of supporting multiple video sequences
with images and videos in CompositionPlayer.
PiperOrigin-RevId: 720106413
Previously, the input media item's duration was collected from `ProcessedInput.durationUs`. However, this value turned out to be the duration of the media item after clipping. Getting the duration of the input media item before clipping is tricky, so it will be dropped from Editing Metrics V1.
PiperOrigin-RevId: 719254077
The fix is to update `AudioGraphInputAudioSink.lastHandledPositionUs` when a buffer is handled, and to end the `AudioGraphInputAudioSink` once the final audio sink has played out past this position.
PiperOrigin-RevId: 718901825
This is pre-work required to remove the `Muxer.java` interface
from the muxer module.
`Mp4Muxer` and `FragmentedMp4Muxer` will no longer implement
the `Muxer` interface.
PiperOrigin-RevId: 716669531
The `DataSpace` contains the Color Standard, Range, and Transfer. These values were mapped as follows (a sketch follows the list):
* Standard: From `@C.ColorSpace` to `@DataSpace.DataSpaceStandard`
* Range: From `@C.ColorRange` to `@DataSpace.DataSpaceRange`
* Transfer: From `@C.ColorTransfer` to `@DataSpace.DataSpaceTransfer`
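A minimal sketch of such a mapping for a few values, assuming API 33+ for `android.hardware.DataSpace`; the actual mapping covers more constants:

```java
import android.hardware.DataSpace;
import androidx.media3.common.C;

/** Sketch mapping a few Media3 color constants to android.hardware.DataSpace values (API 33+). */
final class DataSpaceMappingExample {

  static int toDataSpaceStandard(@C.ColorSpace int colorSpace) {
    switch (colorSpace) {
      case C.COLOR_SPACE_BT709:
        return DataSpace.STANDARD_BT709;
      case C.COLOR_SPACE_BT2020:
        return DataSpace.STANDARD_BT2020;
      default:
        return DataSpace.STANDARD_UNSPECIFIED;
    }
  }

  static int toDataSpaceRange(@C.ColorRange int colorRange) {
    switch (colorRange) {
      case C.COLOR_RANGE_FULL:
        return DataSpace.RANGE_FULL;
      case C.COLOR_RANGE_LIMITED:
        return DataSpace.RANGE_LIMITED;
      default:
        return DataSpace.RANGE_UNSPECIFIED;
    }
  }

  static int toDataSpaceTransfer(@C.ColorTransfer int colorTransfer) {
    switch (colorTransfer) {
      case C.COLOR_TRANSFER_HLG:
        return DataSpace.TRANSFER_HLG;
      case C.COLOR_TRANSFER_ST2084:
        return DataSpace.TRANSFER_ST2084;
      case C.COLOR_TRANSFER_LINEAR:
        return DataSpace.TRANSFER_LINEAR;
      default:
        return DataSpace.TRANSFER_UNSPECIFIED;
    }
  }

  private DataSpaceMappingExample() {}
}
```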
PiperOrigin-RevId: 716157142
If a format is passed to `ExportResult.ProcessedInput`, the `containerMimeType` and `sampleMimeType` are extracted from the `Format` and set on the `MediaItemInfo` object.
PiperOrigin-RevId: 715840400
This prevents apps that use Transformer from hitting errors. It will be enabled again once the project is done and comprehensive tests are added.
PiperOrigin-RevId: 715435613
Some checks in SingleInputVideoGraph were causing CompositionPlayer to
throw for a single media item sequence when repeat mode was enabled. The
reason was that, in this case, no new input stream is registered to the
VideoFrameProcessor.
PiperOrigin-RevId: 715409509