MediaCodecRenderer currently has two independent paths to trigger
events at stream changes:
1. Detection of the last output buffer of the old stream to trigger
onProcessedStreamChange and set the new output stream offset.
2. Detection of the first input buffer of the new stream to trigger
onOutputFormatChanged.
Both events happen at the same point for most media. However, there are two
problematic cases:
A. (1) happens after (2). This may happen if the declared media
duration is shorter than the actual last sample timestamp.
B. (2) is too late and there are output samples between (1) and (2).
This can happen if the new media outputs samples with a timestamp
less than the first input timestamp.
This can be made more robust by:
- Keeping a separate formatQueue for each stream to avoid case A.
- Forcing output of the first format after a stream change to
avoid case B (sketched below).
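A minimal sketch of the two ideas combined, using simplified stand-in types
(the format is just a String here) rather than the actual MediaCodecRenderer
fields:
```
import java.util.ArrayDeque;

// Each stream gets its own format queue (avoids case A), and a flag forces the
// first format of a new stream to be output immediately (avoids case B).
final class StreamFormatHolder {
  final ArrayDeque<String> formatQueue = new ArrayDeque<>();
  boolean pendingFirstFormatOutput = true;
}

final class FormatTracker {
  private final ArrayDeque<StreamFormatHolder> streams = new ArrayDeque<>();

  void onStreamChanged() {
    // A new stream opens a fresh queue; formats of the old stream stay untouched.
    streams.addLast(new StreamFormatHolder());
  }

  void onInputFormat(String format) {
    if (streams.isEmpty()) {
      onStreamChanged();
    }
    streams.peekLast().formatQueue.addLast(format);
  }

  /** Returns a format to report for the current output stream, or null if unchanged. */
  String maybeForceOutputFormat() {
    StreamFormatHolder current = streams.peekFirst();
    if (current != null && current.pendingFirstFormatOutput && !current.formatQueue.isEmpty()) {
      current.pendingFirstFormatOutput = false;
      return current.formatQueue.peekFirst();
    }
    return null;
  }

  void onProcessedStreamChange() {
    // The old stream is fully processed; advance to the new stream's queue.
    streams.pollFirst();
  }
}
```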
Issue: google/ExoPlayer#8594
#minor-release
PiperOrigin-RevId: 512586838
Some devices were reported to have wrong PerformancePoint sets
that cause 60 fps to be marked as unsupported even though it
is supported.
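An illustrative sketch of the workaround idea, not the exact Media3 check: if
the declared PerformancePoints do not cover the requested resolution and frame
rate, fall back to the legacy size-and-rate check before declaring it
unsupported.
```
import android.media.MediaCodecInfo.VideoCapabilities;
import android.media.MediaCodecInfo.VideoCapabilities.PerformancePoint;
import androidx.annotation.RequiresApi;
import java.util.List;

@RequiresApi(29)
final class PerformancePointFallback {

  static boolean isSupported(VideoCapabilities capabilities, int width, int height, double fps) {
    List<PerformancePoint> points = capabilities.getSupportedPerformancePoints();
    if (points != null && !points.isEmpty()) {
      PerformancePoint target = new PerformancePoint(width, height, (int) fps);
      for (PerformancePoint point : points) {
        if (point.covers(target)) {
          return true;
        }
      }
      // The declared PerformancePoint set may be wrong on some devices, so fall
      // back to the legacy check instead of returning false here.
    }
    return capabilities.areSizeAndRateSupported(width, height, fps);
  }

  private PerformancePointFallback() {}
}
```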
Issue: google/ExoPlayer#10898
#minor-release
PiperOrigin-RevId: 512580395
The following test cases are added:
1. Mux H264 video
2. Mux H265 video
3. Mux HDR video
Each test case performs the following actions:
1. Extract the track and samples from the input MP4 file using MediaExtractor (sketched below).
2. Feed those samples into the Mp4 muxer.
3. Use the extractor to extract the samples from the muxed file and create a dump file.
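A sketch of step 1 only, as a hypothetical helper; steps 2 and 3 follow the
same extraction pattern against the muxed output.
```
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;
import java.nio.ByteBuffer;

final class SampleExtraction {

  static void extractFirstTrack(String inputPath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    try {
      extractor.setDataSource(inputPath);
      MediaFormat trackFormat = extractor.getTrackFormat(/* index= */ 0); // Used to add the muxer track.
      extractor.selectTrack(/* index= */ 0);

      ByteBuffer sampleBuffer = ByteBuffer.allocate(4 * 1024 * 1024);
      while (true) {
        int sampleSize = extractor.readSampleData(sampleBuffer, /* offset= */ 0);
        if (sampleSize < 0) {
          break; // No more samples.
        }
        long presentationTimeUs = extractor.getSampleTime();
        int flags = extractor.getSampleFlags();
        // In the test, the sample data, timestamp and flags are written to the muxer here.
        extractor.advance();
      }
    } finally {
      extractor.release();
    }
  }

  private SampleExtraction() {}
}
```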
PiperOrigin-RevId: 512589069
The output info for a new stream is marked pending until the last
sample of the previous stream has been processed. However, this fails
if the previous stream has already been fully processed. We need to
detect this case explicitly to avoid signalling the output change one
sample too late.
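A minimal sketch of the idea with simplified, illustrative names (not the
actual renderer fields):
```
final class PendingOutputInfo {
  private long pendingOutputStreamOffsetUs = -1; // -1 means nothing pending.
  private long outputStreamOffsetUs;

  void onNewStream(long newOffsetUs, boolean previousStreamFullyProcessed) {
    if (previousStreamFullyProcessed) {
      // Nothing left to process from the old stream: apply the change immediately,
      // instead of waiting for a "last sample processed" signal that never arrives.
      outputStreamOffsetUs = newOffsetUs;
    } else {
      pendingOutputStreamOffsetUs = newOffsetUs;
    }
  }

  void onProcessedLastSampleOfPreviousStream() {
    if (pendingOutputStreamOffsetUs != -1) {
      outputStreamOffsetUs = pendingOutputStreamOffsetUs;
      pendingOutputStreamOffsetUs = -1;
    }
  }

  long getOutputStreamOffsetUs() {
    return outputStreamOffsetUs;
  }
}
```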
#minor-release
PiperOrigin-RevId: 512572854
This test became flaky after ab7e84fb34 because some of the
unrealistic frame times ended up on the same release time.
Using realistic numbers avoids the flakiness.
PiperOrigin-RevId: 512566469
Use the first MediaItem's format as the output format.
If a `Presentation` is supplied in `Composition.effects`, add it as the
last effect of the first EditedMediaItem.
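A conceptual sketch with placeholder types (Effect and Presentation here are
stand-ins, not the Media3 classes), showing how a composition-level
Presentation would be appended after the first item's own effects:
```
import java.util.ArrayList;
import java.util.List;

final class CompositionEffectsSketch {

  interface Effect {}

  static final class Presentation implements Effect {}

  static List<Effect> withCompositionPresentation(
      List<Effect> firstItemVideoEffects, Presentation compositionPresentation) {
    List<Effect> effects = new ArrayList<>(firstItemVideoEffects);
    if (compositionPresentation != null) {
      effects.add(compositionPresentation); // Applied last, after the item's own effects.
    }
    return effects;
  }

  private CompositionEffectsSketch() {}
}
```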
PiperOrigin-RevId: 512082659
Also remove @WorkerThread annotations, as static checks associated with
this annotation aren't useful in this part of the codebase because
almost no methods are called on the main thread.
This change should be a no-op.
PiperOrigin-RevId: 512060367
Currently if releasing a shader program throws we don't release the GL
context, which could leak resources, and any errors are silently dropped
as we suppress notifications during releasing.
Improve resource cleanup and debuggability of errors from custom effects
by continuing to release shaders on failure (for runtime exceptions and
`VideoFrameProcessingException`s) and always cleaning up the GL context.
Note: this doesn't help with the case where releasing a custom shader
blocks for a long time, causing releasing the frame processor to
time out.
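A sketch of the release pattern with simplified interfaces (not the actual
frame processor types): keep releasing the remaining shader programs when one
throws, remember the first error, and always destroy the GL context.
```
import java.util.List;

final class ReleaseSketch {

  interface ShaderProgram {
    void release() throws Exception;
  }

  interface GlContext {
    void destroy();
  }

  static void releaseAll(List<ShaderProgram> shaderPrograms, GlContext glContext) throws Exception {
    Exception firstError = null;
    try {
      for (ShaderProgram shaderProgram : shaderPrograms) {
        try {
          shaderProgram.release();
        } catch (Exception e) {
          if (firstError == null) {
            firstError = e; // Surface the first failure instead of dropping it.
          }
        }
      }
    } finally {
      glContext.destroy(); // Always clean up the GL context.
    }
    if (firstError != null) {
      throw firstError;
    }
  }

  private ReleaseSketch() {}
}
```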
PiperOrigin-RevId: 512042501
Protected system broadcasts should not specify the export flag.
Marking them as NOT_EXPORTED breaks sticky broadcasts in some
cases.
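An illustrative registration for a protected, sticky system broadcast such as
ACTION_BATTERY_CHANGED, omitting any export flag:
```
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

final class BatteryReceiverRegistration {

  static Intent register(Context context, BroadcastReceiver receiver) {
    IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
    // No RECEIVER_NOT_EXPORTED flag: the platform already protects this action,
    // and passing the flag can break delivery of the sticky broadcast.
    return context.registerReceiver(receiver, filter);
  }

  private BatteryReceiverRegistration() {}
}
```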
Issue: google/ExoPlayer#10970
#minor-release
PiperOrigin-RevId: 512020154
Add format codec info, which can make test skipping checks more similar to the
actual Transformer decoder checks.
Also for the test file, the actual format was 720p, but somehow the file name and
media metadata indicated 1080p. This format mismatch led to some decoding errors,
so fix the format (and associated errors). This also allows us to remove the
exception catch in ForceInterpretHdrVideoAsSdrTest, which was included due to
errors from the incorrect format.
PiperOrigin-RevId: 511809507
The current logic uses manual array operations to keep track of pending
changes. Modernize this code by using an ArrayDeque and a data class.
This also allows extending the output stream information in the future.
This also fixes a bug where a position reset accidentally assigns a pending
stream offset instead of keeping the current one.
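A minimal sketch of the data-class-plus-ArrayDeque approach; the class and
field names are illustrative, not the actual renderer code.
```
import java.util.ArrayDeque;

// Each pending stream change is a single immutable value, so it is easy to
// extend with more fields later.
final class OutputStreamInfo {
  final long previousStreamLastBufferTimeUs;
  final long startPositionUs;
  final long streamOffsetUs;

  OutputStreamInfo(long previousStreamLastBufferTimeUs, long startPositionUs, long streamOffsetUs) {
    this.previousStreamLastBufferTimeUs = previousStreamLastBufferTimeUs;
    this.startPositionUs = startPositionUs;
    this.streamOffsetUs = streamOffsetUs;
  }
}

final class PendingOutputStreamChanges {
  private final ArrayDeque<OutputStreamInfo> pendingOutputStreamChanges = new ArrayDeque<>();
  private OutputStreamInfo currentOutputStreamInfo =
      new OutputStreamInfo(/* previousStreamLastBufferTimeUs= */ Long.MIN_VALUE, 0, 0);

  void onStreamChanged(OutputStreamInfo newInfo) {
    pendingOutputStreamChanges.addLast(newInfo);
  }

  /** Applies the next pending change once the previous stream's last buffer has been processed. */
  void onProcessedOutputBuffer(long bufferPresentationTimeUs) {
    OutputStreamInfo next = pendingOutputStreamChanges.peekFirst();
    if (next != null && bufferPresentationTimeUs >= next.previousStreamLastBufferTimeUs) {
      currentOutputStreamInfo = pendingOutputStreamChanges.pollFirst();
    }
  }

  OutputStreamInfo getCurrentOutputStreamInfo() {
    return currentOutputStreamInfo;
  }
}
```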
#minor-release
PiperOrigin-RevId: 511787571
In most cases, this means updating Subject subclasses and their assertThat methods to accept null actual values. (They should always accept null so that assertions like "assertThat(foo).isNull()" succeed instead of throwing NullPointerException.)
Occasionally, it involves other changes, like writing `isGreaterThan(1L)` instead of `isGreaterThan(1)` to resolve an ambiguity in Kotlin overload resolution.
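A sketch of the Subject pattern with a hypothetical FooSubject/Foo pair: the
assertThat factory accepts a nullable actual value so that
assertThat(foo).isNull() succeeds instead of throwing.
```
import static com.google.common.truth.Truth.assertAbout;

import androidx.annotation.Nullable;
import com.google.common.truth.FailureMetadata;
import com.google.common.truth.Subject;

final class FooSubject extends Subject {

  static final class Foo {}

  static FooSubject assertThat(@Nullable Foo actual) {
    return assertAbout(FooSubject::new).that(actual);
  }

  @Nullable private final Foo actual; // Kept for custom assertion methods (omitted here).

  private FooSubject(FailureMetadata metadata, @Nullable Foo actual) {
    super(metadata, actual);
    this.actual = actual;
  }
}
```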
PiperOrigin-RevId: 511776581
- Split the transmux setting into transmuxAudio and transmuxVideo. This
is more flexible for apps and will also be useful for unit testing
(particularly as we can't test video transcoding on Robolectric at the
moment).
- Move these settings to Composition. It makes sense for these settings
to be next to forceAudioTrack. Apps may also want to set these
settings based on the current Composition's MediaItems.
- Add a Composition.Builder because Composition now contains a few
optional fields.
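A stripped-down illustration of the builder shape (not the actual Media3
Composition API), with the transmux flags sitting next to forceAudioTrack at
the composition level:
```
final class CompositionSketch {
  boolean transmuxAudio;
  boolean transmuxVideo;
  boolean forceAudioTrack;

  static final class Builder {
    private final CompositionSketch composition = new CompositionSketch();

    Builder setTransmuxAudio(boolean transmuxAudio) {
      composition.transmuxAudio = transmuxAudio;
      return this;
    }

    Builder setTransmuxVideo(boolean transmuxVideo) {
      composition.transmuxVideo = transmuxVideo;
      return this;
    }

    Builder setForceAudioTrack(boolean forceAudioTrack) {
      composition.forceAudioTrack = forceAudioTrack;
      return this;
    }

    CompositionSketch build() {
      return composition;
    }
  }
}
```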
PiperOrigin-RevId: 511708618
Logcat had the following lines, with no other information.
```
DefaultEncoderFactory: Encoders removed for resolution:
DefaultEncoderFactory: Encoders removed for bitrate:
DefaultEncoderFactory: Encoders removed for bitrate mode:
```
PiperOrigin-RevId: 511470231
Also, allow isNoOp to default to false without the TODO, so that implementations
of isNoOp must opt in to implementing the override in order to be considered for
skipping the effect (e.g. for transcoding in Transformer).
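A sketch with a simplified interface (not the Media3 GlEffect class) showing
the opt-in shape: isNoOp defaults to false, and only effects that override it
can be skipped.
```
interface EffectSketch {
  default boolean isNoOp(int inputWidth, int inputHeight) {
    return false; // Conservative default: never skipped unless overridden.
  }
}

// An effect that leaves frames of a matching size unchanged can opt in to being skipped.
final class NoOpWhenSameSize implements EffectSketch {
  private final int outputWidth;
  private final int outputHeight;

  NoOpWhenSameSize(int outputWidth, int outputHeight) {
    this.outputWidth = outputWidth;
    this.outputHeight = outputHeight;
  }

  @Override
  public boolean isNoOp(int inputWidth, int inputHeight) {
    return inputWidth == outputWidth && inputHeight == outputHeight;
  }
}
```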
PiperOrigin-RevId: 511223540
Implement getMediaFormatInteger, a helper method simulating MediaFormat.getInteger(name, defaultValue).
This reduces the API restriction on MediaFormatUtil.getColorInfo from API 29 to API 24,
in particular replacing a method-based restriction with a constant-based restriction,
so that we can reduce usage of the API 29 class.
This also allows us to slightly simplify prior use cases where we'd check
containsKey and call getInteger to apply a default value.
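A minimal sketch of the described helper, assuming it simply falls back to a
default when the key is absent, avoiding the API 29
getInteger(name, defaultValue) overload:
```
import android.media.MediaFormat;

final class MediaFormatIntegers {

  static int getMediaFormatInteger(MediaFormat mediaFormat, String name, int defaultValue) {
    return mediaFormat.containsKey(name) ? mediaFormat.getInteger(name) : defaultValue;
  }

  private MediaFormatIntegers() {}
}
```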
PiperOrigin-RevId: 511184301
When clipping a MediaItem with start time > 0, the audio was ending
before the video. This is because:
- Audio timestamps are computed based on the sample sizes, with a start
time set to streamOffsetUs (i.e. the streamStartPositionUs is not
taken into account).
- The SamplePipeline was subtracting streamStartPositionUs from the
timestamps before sending the samples to the muxer.
- As a result, the audio timestamps were shifted by
streamStartPositionUs, while they should be shifted by streamOffsetUs
(see the worked example below).
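A worked example with made-up numbers (not taken from the change) showing why
subtracting streamStartPositionUs from the audio timestamps shifts them too
far:
```
final class ClippingTimestampExample {

  public static void main(String[] args) {
    long streamStartPositionUs = 5_000_000; // Clip start within the media.
    long streamOffsetUs = 1_000_000; // Offset added to decoder timestamps.
    long positionInClipUs = 2_000_000; // Sample position relative to the clip start.

    // Audio timestamps are computed from sample sizes and start at streamOffsetUs.
    long audioSampleTimeUs = streamOffsetUs + positionInClipUs; // 3s

    // Old behavior: the SamplePipeline subtracted streamStartPositionUs.
    long oldMuxedAudioUs = audioSampleTimeUs - streamStartPositionUs; // -2s: too early.

    // Fixed behavior: subtract streamOffsetUs instead, since that is the shift
    // the audio timestamps actually carry.
    long fixedMuxedAudioUs = audioSampleTimeUs - streamOffsetUs; // 2s: matches the video.

    System.out.printf("old=%dus fixed=%dus%n", oldMuxedAudioUs, fixedMuxedAudioUs);
  }
}
```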
PiperOrigin-RevId: 511175923
* Account for `AssetLoader` output types.
* Consider cases that are not audio/video specific.
* Use `Format#sampleMimeType` for track-specific condition checks.
* Untangle `SamplePipeline` initialization from `AssetLoader` state.
PiperOrigin-RevId: 511020865
- Add silent audio when the output contains an audio track but the
current MediaItem doesn't have any audio.
- Add an audio track when generateSilentAudio is set to true.
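An illustration (not the actual Transformer code) of generating a buffer of
16-bit PCM silence, the kind of data a generated silent audio track would
carry:
```
import java.nio.ByteBuffer;

final class SilentAudio {

  static ByteBuffer createSilence(long durationUs, int sampleRate, int channelCount) {
    int bytesPerFrame = channelCount * 2; // 16-bit PCM.
    long frameCount = durationUs * sampleRate / 1_000_000;
    // ByteBuffer.allocate zero-fills, and zero samples are silence for 16-bit PCM.
    return ByteBuffer.allocate((int) (frameCount * bytesPerFrame));
  }

  private SilentAudio() {}
}
```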
PiperOrigin-RevId: 511005887