Split test/TransformerEndToEndTest into SingleMediaItemEndToEndTest and
SingleSequenceEndToEndTest to reduce the file size and split the tests
by category.
PiperOrigin-RevId: 515039502
Callbacks onTrackCount and onTrackAdded can be called simultaneously
from different threads.
Before this fix, it was possible for the MuxerWrapper and
FallbackListener track count to never be set, or to be set
with incorrect values.
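A minimal sketch of the underlying issue, using a hypothetical holder class rather than the real MuxerWrapper/FallbackListener code: the track count is written and read from different threads, so it has to be published safely (here, under a lock) before anyone relies on it.
```java
/** Hypothetical illustration only; not the actual MuxerWrapper/FallbackListener fix. */
final class TrackCountHolder {
  private final Object lock = new Object();
  private int trackCount; // 0 means "not reported yet".

  /** May be called from the AssetLoader's thread. */
  public void onTrackCount(int trackCount) {
    synchronized (lock) {
      this.trackCount = trackCount;
    }
  }

  /** May be called from another thread when a track is added. */
  public void onTrackAdded() {
    synchronized (lock) {
      if (trackCount == 0) {
        // Without synchronization, this state could be observed even after onTrackCount
        // had already run, which is the kind of bug described above.
        throw new IllegalStateException("Track count accessed before it was set");
      }
    }
  }
}
```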
PiperOrigin-RevId: 514779719
- This is to make sure we know about all the tracks before initializing
the SamplePipelines. This allows setting the muxer and the fallback
listener track count before the SamplePipelines are built.
- As a result, the test files had to be updated because the order in
which the tracks are written has changed.
- The ImageAssetLoader also had to be updated to call onOutputFormat
repeatedly until it returns a non-null SampleConsumer (see the sketch
after this list).
- Also fix the trackCount sent to the muxer and fallback listener. The
correct track count can be computed now that we know about all the
tracks before building the SamplePipelines.
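For the ImageAssetLoader point above, the retry can be pictured roughly as follows. This is a sketch under stated assumptions only: the listener returning a nullable SampleConsumer comes from the description above, while the scheduler, delay, and class names are invented for illustration.
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class ImageLoadingRetrySketch {
  /** Stand-ins for the real Transformer types, for illustration only. */
  interface SampleConsumer {}

  interface Listener {
    /** Returns the consumer for this format, or null if the pipelines aren't built yet. */
    SampleConsumer onOutputFormat(Object format);
  }

  private final Listener listener;
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  ImageLoadingRetrySketch(Listener listener) {
    this.listener = listener;
  }

  /** Calls onOutputFormat repeatedly until it returns a non-null SampleConsumer. */
  void registerSampleConsumer(Object format) {
    SampleConsumer sampleConsumer = listener.onOutputFormat(format);
    if (sampleConsumer == null) {
      // The SamplePipelines are only built once all tracks are known, so retry shortly.
      scheduler.schedule(
          () -> registerSampleConsumer(format), /* delay= */ 10, TimeUnit.MILLISECONDS);
      return;
    }
    // From here the decoded image frames would be queued to sampleConsumer.
  }
}
```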
PiperOrigin-RevId: 514426123
All audio tracks should either all be transcoded or all be transmuxed.
Same for video tracks.
To achieve this, simplify the behaviour of transmuxAudio/Video.
PiperOrigin-RevId: 513809287
If the Metadata passed to SegmentSpeedProvider is null, then the
SegmentSpeedProvider will always return 1f from getSpeed.
Initializing a SpeedChangingAudioProcessor requires a SpeedProvider.
Once configured, this audio processor is always active, so buffers are
passed through it. Because getSpeed is always 1, the processor performs
a no-op, but still has to do a buffer copy for each buffer.
By not initializing the audio processor when metadata is null, this
copy can be skipped and the audio pipeline is more performant.
Note: This change does not affect the multiple media-item case, which
is not supported with speed changes, as per Transformer API
documentation.
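The optimization boils down to only constructing the speed-changing processor when there is speed metadata to act on. A minimal sketch, assuming the SegmentSpeedProvider(Metadata) and SpeedChangingAudioProcessor(SpeedProvider) shapes implied above; the method and list handling are illustrative and imports of the Media3 types are omitted.
```java
// Sketch: skip the processor entirely when there is no speed metadata, because a
// configured-but-idle SpeedChangingAudioProcessor still copies every buffer.
List<AudioProcessor> buildAudioProcessors(@Nullable Metadata metadata) {
  List<AudioProcessor> audioProcessors = new ArrayList<>();
  if (metadata == null) {
    return audioProcessors; // getSpeed would always return 1f, so nothing to do.
  }
  SegmentSpeedProvider speedProvider = new SegmentSpeedProvider(metadata);
  audioProcessors.add(new SpeedChangingAudioProcessor(speedProvider));
  return audioProcessors;
}
```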
PiperOrigin-RevId: 513261811
We shouldn't have this logging unless we really need it to debug
a specific problem, as it can be noisy (even at debug level).
PiperOrigin-RevId: 512904412
Uses the first mediaItem's format as the output format.
If a `Presentation` is supplied in `Composition.effects`, add it as the
last effect of the first EditedMediaItem.
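In other words, the composition-level Presentation is folded into the per-item video effects, roughly like this (illustrative fragment only; firstEditedMediaItem and compositionPresentation are stand-ins, not the exact implementation):
```java
// Sketch: append the composition-level Presentation as the last video effect of the
// first EditedMediaItem, so it determines the output resolution.
List<Effect> videoEffects = new ArrayList<>(firstEditedMediaItem.effects.videoEffects);
if (compositionPresentation != null) {
  videoEffects.add(compositionPresentation); // Last, so it decides the final size.
}
```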
PiperOrigin-RevId: 512082659
Add format codec info, which can make test-skipping checks more similar to the
actual Transformer decoder checks.
Also for the test file, the actual format was 720p, but somehow the file name and
media metadata indicated 1080p. This format mismatch led to some decoding errors,
so fix the format (and associated errors). This also allows us to remove the
exception catch in ForceInterpretHdrVideoAsSdrTest, which was included due to
errors from the incorrect format.
PiperOrigin-RevId: 511809507
- Split the transmux setting into transmuxAudio and transmuxVideo. This
is more flexible for apps and will also be useful for unit testing
(particularly as we can't test video transcoding on Robolectric at the
moment).
- Move these settings to Composition. It makes sense for these settings
to be next to forceAudioTrack. Apps may also want to set these
settings based on the current Composition's MediaItems.
- Add a Composition.Builder because Composition now contains a few
optional fields (usage sketched below).
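A usage sketch of the resulting builder (setter names follow the description above; treat exact signatures, such as the sequence-list constructor, as assumptions to verify against the released API):
```java
// Transmux settings now live on the Composition rather than on the request.
Composition composition =
    new Composition.Builder(ImmutableList.of(videoSequence, audioSequence))
        .setTransmuxAudio(true) // All audio tracks are passed through without transcoding.
        .setTransmuxVideo(false) // Video tracks are still transcoded.
        .build();
```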
PiperOrigin-RevId: 511708618
Logcat had the following lines, with no other information.
```
DefaultEncoderFactory: Encoders removed for resolution:
DefaultEncoderFactory: Encoders removed for bitrate:
DefaultEncoderFactory: Encoders removed for bitrate mode:
```
PiperOrigin-RevId: 511470231
Implement getMediaFormatInteger, a helper method simulating MediaFormat.getInteger(name, defaultValue).
This reduces the API 29 restriction from MediaFormatUtil.getColorInfo to API 24, in
particular replacing the method-based restriction with a constant-based one,
so that we can reduce usage of the API 29 class.
This also allows us to slightly simplify prior use cases where we'd check
containsKey and then call getInteger in order to apply a default value.
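A minimal version of such a helper could look like this (the wrapper class name is invented); MediaFormat.getInteger(String, int) itself requires API 29, while containsKey and getInteger(String) are available on much older API levels:
```java
import android.media.MediaFormat;

final class MediaFormatCompat {
  /** Returns mediaFormat's value for key, or defaultValue if the key is not present. */
  static int getMediaFormatInteger(MediaFormat mediaFormat, String key, int defaultValue) {
    return mediaFormat.containsKey(key) ? mediaFormat.getInteger(key) : defaultValue;
  }
}
```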
PiperOrigin-RevId: 511184301
When clipping a MediaItem with start time > 0, the audio was ending
before the video. This is because:
- Audio timestamps are computed based on the sample sizes, with a start
time set to streamOffsetUs (i.e. the streamStartPositionUs is not
taken into account).
- The SamplePipeline was subtracting streamStartPositionUs from the
timestamps before sending the samples to the muxer.
- As a result, the audio timestamps were shifted by
streamStartPositionUs, while they should be shifted by streamOffsetUs
(see the worked example below).
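For illustration with assumed numbers: take streamOffsetUs = 0 and streamStartPositionUs = 2,000,000. An audio sample produced t microseconds into the clipped stream gets timestamp t (counting from streamOffsetUs), and after the SamplePipeline subtracts streamStartPositionUs it is written at t - 2,000,000. Video timestamps, which do include the start position, land at t after the same subtraction, so the audio track finishes about two seconds before the video.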
PiperOrigin-RevId: 511175923
* Account for `AssetLoader` output types.
* Consider cases that are not audio/video specific.
* Use `Format#sampleMimeType` for track-specific checks.
* Untangle `SamplePipeline` initialization from `AssetLoader` state.
PiperOrigin-RevId: 511020865
- Add silent audio when the output contains an audio track but the
current MediaItem doesn't have any audio (see the sketch after this
list).
- Add an audio track when generateSilentAudio is set to true.
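The silent-audio part can be pictured as producing zero-filled PCM covering the item's duration. A generic sketch of that idea, not the Transformer's actual generator (the helper name and 16-bit assumption are mine):
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class SilenceSketch {
  /** Returns a buffer of silent 16-bit PCM covering durationUs. */
  static ByteBuffer createSilentPcm(long durationUs, int sampleRate, int channelCount) {
    int bytesPerFrame = 2 * channelCount; // 2 bytes per sample for 16-bit PCM.
    long frameCount = durationUs * sampleRate / 1_000_000L;
    // allocateDirect zero-fills the buffer, and all-zero 16-bit PCM is silence.
    return ByteBuffer.allocateDirect((int) (frameCount * bytesPerFrame))
        .order(ByteOrder.nativeOrder());
  }
}
```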
PiperOrigin-RevId: 511005887
Previously, this was limited to API 29. Expand this to all API versions.
Also, update the test to:
(1) Skip based on SDR format input instead of HDR format input.
(2) Check the exception message to disambiguate between the decoder tone-mapping
error and the general video format support error.
PiperOrigin-RevId: 511002218
Unstick the muxer if the next timestamp in the track with the minimum
timestamp is larger than this minimum timestamp plus
MAX_TRACK_WRITE_AHEAD_US.
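A sketch of the interleaving rule with this escape hatch. The per-track bookkeeping class, method, and constant value are hypothetical; only MAX_TRACK_WRITE_AHEAD_US and the "track with the minimum timestamp" notion come from the description above.
```java
/** Illustration only; not the real MuxerWrapper types. */
final class MuxerInterleavingSketch {
  static final long MAX_TRACK_WRITE_AHEAD_US = 500_000; // Value chosen for illustration.

  static final class TrackState {
    long lastWrittenTimestampUs;
    long nextSampleTimestampUs;
  }

  /** Whether track may write a sample at sampleTimeUs without stalling the muxer. */
  static boolean canWriteSample(
      TrackState track, long sampleTimeUs, TrackState minTimestampTrack) {
    long minTimestampUs = minTimestampTrack.lastWrittenTimestampUs;
    if (track == minTimestampTrack) {
      return true; // The track that is furthest behind may always write.
    }
    if (sampleTimeUs <= minTimestampUs + MAX_TRACK_WRITE_AHEAD_US) {
      return true; // Normal interleaving: stay within the write-ahead window.
    }
    // The fix described above: if the "behind" track's next sample is itself beyond the
    // window, waiting for it would block every track, so let the others proceed.
    return minTimestampTrack.nextSampleTimestampUs > minTimestampUs + MAX_TRACK_WRITE_AHEAD_US;
  }
}
```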
PiperOrigin-RevId: 510977088
Before, if the upstream AssetLoader provided HDR to the VideoSamplePipeline when
HDR_MODE_EXPERIMENTAL_FORCE_INTERPRET_HDR_AS_SDR was requested, the
VideoSamplePipeline would attempt to tell the AssetLoader to output SDR, which
could be accomplished via MediaCodec tone-mapping in the AssetLoader.
However, this makes an assumption about the AssetLoader implementation, and
AssetLoaders may not all implement support for decoder tone-mapping. Remove javadoc
attempting to explain how AssetLoaders (e.g. custom ones) could behave.
PiperOrigin-RevId: 510956820
This method uses sampleRate, channelCount and pcmEncoding, so passing an
AudioFormat is easier.
This will lead into a future change that builds the
encoderInputAudioFormat from encoder.getConfigurationFormat().
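Roughly, the signature change being prepared here looks like this (the class and method names are invented for illustration; AudioFormat here means AudioProcessor.AudioFormat with its sampleRate/channelCount/encoding fields):
```java
// Illustrative only: imports of the Media3 AudioProcessor type are omitted.
final class EncoderConfigSketch {
  // Before: three loosely coupled parameters.
  void configure(int sampleRate, int channelCount, int pcmEncoding) {}

  // After: a single AudioFormat carries the same values.
  void configure(AudioProcessor.AudioFormat audioFormat) {
    int sampleRate = audioFormat.sampleRate;
    int channelCount = audioFormat.channelCount;
    int pcmEncoding = audioFormat.encoding;
  }
}
```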
PiperOrigin-RevId: 510956177
Rename ScaleToFitTransformation to ScaleAndRotateTransformation.
This better represents the operations that can be accomplished using this
effect. The effect was originally named ScaleToFit* because it's not obvious how
to scale to fit using OpenGL, and this effect handled scaling to fit in a way that
no other MatrixTransformation did.
However, it's hard to discover how to rotate when skimming names of effects, so
it's probably more useful to convey that this effect rotates than that it
scales to fit.
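A usage sketch of the renamed effect (builder method names per the current Media3 effect module; verify against your version):
```java
import androidx.media3.effect.ScaleAndRotateTransformation;

// Scaling and rotation in one MatrixTransformation, with rotation now easy to discover.
ScaleAndRotateTransformation scaleAndRotate =
    new ScaleAndRotateTransformation.Builder()
        .setScale(/* scaleX= */ 0.5f, /* scaleY= */ 0.5f)
        .setRotationDegrees(90)
        .build();
```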
PiperOrigin-RevId: 510480078
Format.toString unfortunately doesn't log colorInfo, and as Format holds a very
large set of values, it's unclear whether it should. ColorInfo is useful for codec
exceptions though, so log this in ExportException.createForCodec.
PiperOrigin-RevId: 510475520