If the output sample MIME type is inferred from the input
but is not supported by the muxer, we fall back to transcoding
to a supported sample MIME type.
The audio and video renderers need to make sure not to select the
PassthroughSamplePipeline in this case. The EncoderFactory decides which
sample MIME type to transcode to.
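As a rough sketch of the decision (the class and method names below are illustrative, not the actual Transformer API): passthrough is only viable when the muxer supports the input sample MIME type as-is; otherwise a muxer-supported type is chosen for transcoding.

```java
import java.util.List;

/** Sketch only: illustrative names, not the actual Transformer API. */
final class SampleMimeTypeFallback {

  /** Passthrough is only possible if the muxer supports the input sample MIME type as-is. */
  static boolean canPassthrough(String inputSampleMimeType, List<String> muxerSupportedMimeTypes) {
    return muxerSupportedMimeTypes.contains(inputSampleMimeType);
  }

  /** Otherwise, fall back to transcoding to a sample MIME type the muxer does support. */
  static String chooseOutputMimeType(
      String inputSampleMimeType, List<String> muxerSupportedMimeTypes) {
    return canPassthrough(inputSampleMimeType, muxerSupportedMimeTypes)
        ? inputSampleMimeType
        // In practice the EncoderFactory makes this choice; picking the first entry is a stand-in.
        : muxerSupportedMimeTypes.get(0);
  }
}
```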
PiperOrigin-RevId: 423272812
Refactor the overall module to place the unified vorbis tags into a
single package called `vorbis`. Also re-introduce the vorbis tags
in their original `flac` module, but deprecate them.
The `main` role distinguishes a track from an `alternate`, but unlike
`SELECTION_FLAG_DEFAULT` it doesn't imply the track should be selected
unless user preferences state otherwise. For example, in the case of a text
track, the player shouldn't enable subtitle rendering just because a
`main` text track is present in the manifest.
The `main`/`alternate` distinction is still available through
`Format.roleFlags` and the `ROLE_FLAG_MAIN` and `ROLE_FLAG_ALTERNATE`
values.
This behaviour was originally [added in 2.2.0](7f967f3057),
however at the time the `C.RoleFlags` IntDef did not exist. The IntDef
was [added in 2.10.0](a86a9137be).
PiperOrigin-RevId: 418937747
This is better than silently dropping tracks as done previously. Later,
we will implement fallback to transcoding to a supported MIME type.
PiperOrigin-RevId: 418006258
For DASH manifests, we merge min/max live latency values from various
sources, and they may not be consistent with each other. To ensure we
use a sensible configuration in all cases, we can add more correctness
checks (sketched below):
1. Limit the min/max values to fall into the available live window.
2. Ensure that maxLatency >= minLatency in all cases.
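A minimal sketch of both checks, assuming the merged values and the live window bounds are already available in microseconds (all names here are illustrative, not the actual DASH code):

```java
/** Sketch only: clamps merged min/max live latency values as described above. */
final class LiveLatencyBounds {

  /** Returns {minLatencyUs, maxLatencyUs} after applying both correctness checks. */
  static long[] clamp(
      long minLatencyUs, long maxLatencyUs, long windowMinLatencyUs, long windowMaxLatencyUs) {
    // 1. Limit the min/max values to fall into the available live window.
    minLatencyUs = Math.min(Math.max(minLatencyUs, windowMinLatencyUs), windowMaxLatencyUs);
    maxLatencyUs = Math.min(Math.max(maxLatencyUs, windowMinLatencyUs), windowMaxLatencyUs);
    // 2. Ensure that maxLatency >= minLatency in all cases.
    maxLatencyUs = Math.max(maxLatencyUs, minLatencyUs);
    return new long[] {minLatencyUs, maxLatencyUs};
  }
}
```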
Issue: google/ExoPlayer#9750
PiperOrigin-RevId: 415282938
Sometimes the empty end-of-stream buffer has a non-zero data limit.
Calling flip first resets the limit to the position, which is zero in
these cases.
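A minimal `ByteBuffer` illustration of why flipping first matters here:

```java
import java.nio.ByteBuffer;

/** Illustration: an "empty" buffer can still report a non-zero limit until it is flipped. */
final class FlipExample {
  public static void main(String[] args) {
    ByteBuffer endOfStreamBuffer = ByteBuffer.allocate(1024);
    System.out.println(endOfStreamBuffer.remaining()); // 1024: looks like it holds data.

    // flip() sets limit = position (0 here, since nothing was written) and position = 0.
    endOfStreamBuffer.flip();
    System.out.println(endOfStreamBuffer.remaining()); // 0: correctly reported as empty.
  }
}
```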
PiperOrigin-RevId: 413156455
The test extracts and decodes the first video frame in the test media, renders it to the frame editor's input surface and then processes the data. It then reads back the output from the frame editor, converts it to a bitmap and compares that with a 'golden' bitmap (which is just the test media's first video frame).
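For illustration, one way to compare the output with the golden bitmap is an average absolute pixel difference; the metric and threshold used by the actual test may differ.

```java
import android.graphics.Bitmap;
import android.graphics.Color;

/** Illustrative comparison only; the real test's metric and threshold may differ. */
final class BitmapDiff {

  /** Returns the per-channel average absolute difference between two same-sized bitmaps. */
  static double averageAbsolutePixelDifference(Bitmap expected, Bitmap actual) {
    long totalDifference = 0;
    for (int y = 0; y < expected.getHeight(); y++) {
      for (int x = 0; x < expected.getWidth(); x++) {
        int e = expected.getPixel(x, y);
        int a = actual.getPixel(x, y);
        totalDifference +=
            Math.abs(Color.red(e) - Color.red(a))
                + Math.abs(Color.green(e) - Color.green(a))
                + Math.abs(Color.blue(e) - Color.blue(a));
      }
    }
    // Average over all pixels and the three color channels.
    return (double) totalDifference / (expected.getWidth() * expected.getHeight() * 3);
  }
}
```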
PiperOrigin-RevId: 413131811
Follow-up to a comment on 6f0f7dd1be: Buffers that are useful to pass
to the sample/passthrough pipeline should either contain data or the
end-of-input flag. Otherwise, passing these buffers along is unnecessary
and may even cause the decoder to allocate a new input buffer, which is
wasteful.
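A minimal sketch of that check, written against ExoPlayer's `DecoderInputBuffer` (its public `data` field and `isEndOfStream()` accessor); the method name is made up:

```java
import com.google.android.exoplayer2.decoder.DecoderInputBuffer;

/** Sketch only: decides whether a buffer is worth passing to the sample pipeline. */
final class BufferFilter {

  static boolean shouldPassToPipeline(DecoderInputBuffer buffer) {
    boolean hasData = buffer.data != null && buffer.data.hasRemaining();
    // Only forward buffers that carry data or signal the end of the input.
    return hasData || buffer.isEndOfStream();
  }
}
```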
PiperOrigin-RevId: 411060709
We already parse essential and supplemental properties from the
Representation, but we don't add them to our Representation class, so
they can't be accessed by users.
Issue: google/ExoPlayer#9579
PiperOrigin-RevId: 409961990
When the remainder is dropped, the decoder and encoder timestamps start diverging after a few buffers, even when no speed changes are supposed to occur. Tracking the remainder keeps them in sync.
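A sketch of the idea with illustrative names (not the actual speed-adjustment code): instead of truncating the scaled duration of every buffer independently, carry the truncated fraction into the next buffer.

```java
/** Sketch only: scales durations by a speed factor without accumulating rounding drift. */
final class RemainderTrackingScaler {

  private final float speed;
  private double remainder;

  RemainderTrackingScaler(float speed) {
    this.speed = speed;
  }

  /** Returns the scaled duration, carrying the dropped fractional part into the next call. */
  long scaleDurationUs(long durationUs) {
    double exactScaledDurationUs = durationUs / (double) speed + remainder;
    long truncatedScaledDurationUs = (long) exactScaledDurationUs;
    // Keep the fraction that would otherwise be dropped, so it is accounted for next time.
    remainder = exactScaledDurationUs - truncatedScaledDurationUs;
    return truncatedScaledDurationUs;
  }
}
```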
PiperOrigin-RevId: 408341074
`TransformerAudioRenderer` reads input and passes `DecoderInputBuffer`s to the `AudioSamplePipeline`. The `AudioSamplePipeline` handles all steps from decoding to encoding. `TransformerAudioRenderer` receives `DecoderInputBuffer`s from the `AudioSamplePipeline` and passes their data to the muxer.
`AudioSamplePipeline` implements a new interface, `SamplePipeline`. A pass-through pipeline will be added in a future CL.
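A rough sketch of what such an interface could look like; the method names are illustrative and may not match the actual `SamplePipeline` API.

```java
import androidx.annotation.Nullable;
import com.google.android.exoplayer2.decoder.DecoderInputBuffer;

/** Illustrative sketch of a sample pipeline; the real interface may differ. */
interface SamplePipelineSketch {

  /** Returns a buffer to fill with input data, or null if none is currently available. */
  @Nullable
  DecoderInputBuffer dequeueInputBuffer();

  /** Informs the pipeline that the previously dequeued input buffer has been filled. */
  void queueInputBuffer();

  /** Processes pending data and returns whether any progress was made. */
  boolean processData();

  /** Returns the next output buffer whose data should be passed to the muxer, or null. */
  @Nullable
  DecoderInputBuffer getOutputBuffer();

  /** Releases the buffer previously returned by getOutputBuffer(). */
  void releaseOutputBuffer();

  /** Returns whether the pipeline has output all of its data. */
  boolean isEnded();

  /** Releases any resources held by the pipeline. */
  void release();
}
```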
PiperOrigin-RevId: 407555102
If the number of samples changes, the sizes will help us to verify whether they are just split differently or extra data was added.
PiperOrigin-RevId: 407346280
The presentation time in fMP4 is calculated by adding and subtracting
3 values. All 3 values are currently converted to microseconds first
before the calculation, leading to rounding errors. The rounding errors
can be avoided by doing the conversion to microseconds as the last step.
For example:
In timescale 96000: 8008 + 8008 - 16016 = 0
Rounding to us first: 83416 + 83416 - 166833 = -1
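The same arithmetic as a runnable sketch, using plain truncating long division in place of the actual conversion utilities:

```java
/** Demonstrates why converting to microseconds last avoids the off-by-one error. */
final class PresentationTimeRounding {

  private static final long TIMESCALE = 96_000;
  private static final long MICROS_PER_SECOND = 1_000_000;

  static long toUs(long timeInTimescale) {
    return timeInTimescale * MICROS_PER_SECOND / TIMESCALE; // Truncating division.
  }

  public static void main(String[] args) {
    long a = 8008, b = 8008, c = 16016;
    // Convert last: the sum is exact in the original timescale, then rounded once.
    System.out.println(toUs(a + b - c)); // 0
    // Convert first: three independent truncations accumulate into an error.
    System.out.println(toUs(a) + toUs(b) - toUs(c)); // 83416 + 83416 - 166833 = -1
  }
}
```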
#minor-release
PiperOrigin-RevId: 406809844
This helps to prevent issues where decoders can't handle negative
timestamps. In particular it avoids issues when the media accidentally
or intentionally starts with small negative timestamps. But it also
helps to prevent other renderer resets at a later point, for example
if a live stream with a large start offset is enqueued in the playlist.
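Purely as an illustration of the offsetting idea (these helpers are made up, not the actual renderer code): pick an offset that maps the stream's first sample timestamp to zero or above and apply it to every sample, so decoders never see negative timestamps.

```java
/** Illustration only: offsets timestamps so that decoders never see negative values. */
final class StartOffsetExample {

  /** Chooses an offset that maps the first sample timestamp to at least zero. */
  static long chooseOffsetUs(long firstSampleTimeUs) {
    return Math.max(0, -firstSampleTimeUs);
  }

  /** Applies the same offset to every sample so relative timing is preserved. */
  static long adjustSampleTimeUs(long sampleTimeUs, long offsetUs) {
    return sampleTimeUs + offsetUs;
  }
}
```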
#minor-release
PiperOrigin-RevId: 406786977
Test file produced with:
$ MP4Box -add "sample.mp4#video:colr=nclc,1,1,1" -new sample_18byte_nclx_colr.mp4
And then manually changing the `nclc` bytes to `nclx`.
This produces an 18-byte `colr` box with type `nclx`. The bitstream of
this file does not contain HDR content, so the file itself is invalid
for playback with a real decoder, but adding the box is enough to test
the extractor change in this commit.
(aside: MP4Box will let you pass `nclx`, but it requires 4 parameters, i.e. it
requires the full_range_flag to be set, resulting in a valid 19-byte colr box)
#minor-release
Issue: #9332
PiperOrigin-RevId: 405842520
Enable subtitle output in the PlaybackOutput and disable the text
renderer in the MkvPlaybackTest. Add WebvttPlaybackTest to test the
output of side-loaded WebVTT subtitles.
PiperOrigin-RevId: 402526588
- Fix use of getTimestampOffsetUs in TsExtractor where
getFirstSampleTimestampUs should have been used.
- Don't reset TimestampAdjuster if it's in no-offset mode.
- Improve comment clarity.
#minor-release
PiperOrigin-RevId: 388682711
This CL addresses the GitHub issue [#8964](https://github.com/google/ExoPlayer/issues/8964). That issue requests support for the `font-size` CSS property in the WebVTT subtitle format. This CL:
* Adds support for the `font-size` property by extending the capabilities of the WebVTT `CssParser`. The parsing of `font-size` property values is based on the implementation in `TtmlDecoder` (see the sketch below).
* Adds a unit test along with a test file containing WebVTT subtitles that use all currently supported `font-size` units.
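A rough sketch of parsing a `font-size` value with its unit, in the spirit of the `TtmlDecoder` approach; the pattern and types below are illustrative, not the actual `CssParser` code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative font-size parsing; the real WebVTT CssParser implementation may differ. */
final class FontSizeParser {

  private static final Pattern FONT_SIZE_PATTERN =
      Pattern.compile("^(([0-9]*\\.)?[0-9]+)(px|em|%)$");

  /** Parses e.g. "12px", "1.5em" or "80%"; returns null if the value is not recognized. */
  static ParsedFontSize parse(String value) {
    Matcher matcher = FONT_SIZE_PATTERN.matcher(value.trim());
    if (!matcher.matches()) {
      return null;
    }
    return new ParsedFontSize(Float.parseFloat(matcher.group(1)), matcher.group(3));
  }

  static final class ParsedFontSize {
    final float size;
    final String unit; // "px", "em" or "%".

    ParsedFontSize(float size, String unit) {
      this.size = size;
      this.unit = unit;
    }
  }
}
```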
#minor-release
PiperOrigin-RevId: 388423859
After this change, multiple BaseURL elements are parsed, but the player still only uses the first BaseURL element appearing in the manifest and its corresponding availabilityTimeOffsetUs.
PiperOrigin-RevId: 380775256