For DASH manifests, we merge min/max live latency values from various
sources, and they may not be consistent with each other. To ensure we
use a sensible configuration in all cases, we can add two correctness
checks (sketched below the list):
1. Limit the min/max values to fall into the available live window.
2. Ensure that maxLatency >= minLatency in all cases.
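A minimal sketch of both checks, with an assumed helper method and
parameter names (only `Util.constrainValue` is real ExoPlayer API):

  // Illustrative only; not the actual implementation.
  private static long[] constrainLiveLatencies(
      long minLatencyUs, long maxLatencyUs, long windowStartUs, long windowEndUs) {
    // 1. Limit both values to the available live window.
    minLatencyUs = Util.constrainValue(minLatencyUs, windowStartUs, windowEndUs);
    maxLatencyUs = Util.constrainValue(maxLatencyUs, windowStartUs, windowEndUs);
    // 2. Ensure that maxLatency >= minLatency in all cases.
    maxLatencyUs = Math.max(maxLatencyUs, minLatencyUs);
    return new long[] {minLatencyUs, maxLatencyUs};
  }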
Issue: google/ExoPlayer#9750
PiperOrigin-RevId: 415282938
Sometimes the empty end-of-stream buffer has a non-zero
data limit. Calling flip() first resets the limit to the
position, which is zero in these cases.
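For illustration, using plain `java.nio.ByteBuffer` semantics (the
surrounding names are assumed):

  ByteBuffer data = inputBuffer.data; // Empty EOS buffer: position == 0.
  // The limit may still be non-zero from earlier use of the same buffer.
  data.flip(); // Sets limit = position (here 0), then position = 0.
  // data.remaining() is now 0, so no stale bytes can be read.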
PiperOrigin-RevId: 413156455
The test extracts and decodes the first video frame in the test media, renders it to the frame editor's input surface, and then processes the data. It then reads back the output from the frame editor, converts it to a bitmap, and compares that with a 'golden' bitmap (which is the same as the test media's first video frame).
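Roughly, the final comparison step looks like this (all helper names
and the asset path are hypothetical, not the actual test code):

  Bitmap actual = convertFrameEditorOutputToBitmap(); // hypothetical helper
  Bitmap golden = BitmapFactory.decodeStream(
      context.getAssets().open("media/bitmap/first_frame.png")); // assumed path
  assertBitmapsAreSimilar(golden, actual); // hypothetical pixel comparison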
PiperOrigin-RevId: 413131811
Follow-up to a comment on
ac8e418f3d
Buffers that are useful to pass to the sample/passthrough pipeline
should contain either data or the end-of-input flag. Otherwise, passing
these buffers along is unnecessary and may even cause the decoder to
allocate a new input buffer, which is wasteful.
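The guard amounts to something like the following (`DecoderInputBuffer`
is ExoPlayer's; the pipeline call is assumed):

  boolean useful =
      (buffer.data != null && buffer.data.hasRemaining()) || buffer.isEndOfStream();
  if (useful) {
    samplePipeline.queueInputBuffer(); // assumed pipeline method
  }
  // Otherwise drop the buffer: it carries neither data nor the EOS flag.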
PiperOrigin-RevId: 411060709
We already parse essential and supplemental properties from the
Representation, but we don't add them to our Representation class, so
users can't access them. Adding them there exposes them to users.
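Once exposed, usage could look like this (the field name
`essentialProperties` holding `Descriptor`s is assumed):

  Representation representation = adaptationSet.representations.get(0);
  for (Descriptor property : representation.essentialProperties) {
    handleProperty(property.schemeIdUri, property.value); // hypothetical handler
  }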
Issue: google/ExoPlayer#9579
PiperOrigin-RevId: 409961990
When dropping the remainder, the decoder and encoder timestamps start diverging after a few buffers when no speed changes are supposed to occur. Tracking the remainder keeps them in sync.
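One plausible shape of the fix, with assumed names (not the actual
implementation):

  // Carry the fractional part of each scaled timestamp into the next
  // buffer instead of truncating it away on every buffer.
  double scaledTimeUs = inputTimeUs / speed + remainderUs;
  long outputTimeUs = (long) scaledTimeUs; // Whole microseconds for the encoder.
  remainderUs = scaledTimeUs - outputTimeUs; // Kept for the next buffer.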
PiperOrigin-RevId: 408341074
`TransformerAudioRenderer` reads input and passes `DecoderInputBuffer`s
to the `AudioSamplePipeline`. The `AudioSamplePipeline` handles all
steps from decoding to encoding. `TransformerAudioRenderer` receives
`DecoderInputBuffer`s from the `AudioSamplePipeline` and passes their
data to the muxer.
`AudioSamplePipeline` implements a new interface, `SamplePipeline`
(sketched below). A pass-through pipeline will be added in a future CL.
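The interface shape, roughly (the exact method set is assumed):

  /** Converts input buffers into buffers whose data is ready for the muxer. */
  interface SamplePipeline {
    /** Returns a buffer to fill with input data, or null if none is available. */
    @Nullable DecoderInputBuffer dequeueInputBuffer();
    /** Informs the pipeline that its input buffer contains new data. */
    void queueInputBuffer();
    /** Returns a buffer of processed output data, or null if none is available. */
    @Nullable DecoderInputBuffer getOutputBuffer();
    /** Releases the buffer returned by getOutputBuffer(). */
    void releaseOutputBuffer();
    /** Returns whether the pipeline has processed all of its input. */
    boolean isEnded();
  }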
PiperOrigin-RevId: 407555102
If the number of samples changes, the sizes will help us verify
whether the samples are just split differently or extra data was added.
PiperOrigin-RevId: 407346280
The presentation time in fMP4 is calculated by adding and subtracting
3 values. All 3 values are currently converted to microseconds before
the calculation, leading to rounding errors. The rounding errors can
be avoided by doing the conversion to microseconds as the last step.
For example:
In timescale 96000: 8008 + 8008 - 16016 = 0
Rounding to microseconds first: 83416 + 83416 - 166833 = -1
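With ExoPlayer's `Util.scaleLargeTimestamp`, the conversion happens
once, after the arithmetic (variable names here are illustrative):

  // Add/subtract in the original timescale, then convert once.
  long presentationTime = baseMediaDecodeTime + compositionTimeOffset - editListMediaTime;
  long presentationTimeUs =
      Util.scaleLargeTimestamp(presentationTime, C.MICROS_PER_SECOND, timescale);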
#minor-release
PiperOrigin-RevId: 406809844
This helps to prevent issues where decoders can't handle negative
timestamps. In particular, it avoids issues when the media accidentally
or intentionally starts with small negative timestamps. It also helps
to prevent other renderer resets at a later point, for example when a
live stream with a large start offset is enqueued in the playlist.
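Sketch of the offsetting (the base constant and all names here are
assumptions, not the exact code):

  // Start renderers at a large positive base so sample timestamps handed
  // to decoders stay positive even if the media starts slightly below zero.
  private static final long INITIAL_RENDERER_OFFSET_US = 1_000_000_000_000L; // assumed
  long rendererTimeUs = INITIAL_RENDERER_OFFSET_US + sampleTimeUs - streamStartPositionUs;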
#minor-release
PiperOrigin-RevId: 406786977
Test file produced with:
$ MP4Box -add "sample.mp4#video:colr=nclc,1,1,1" -new sample_18byte_nclx_colr.mp4
And then manually changing the `nclc` bytes to `nclx`.
This produces an 18-byte `colr` box with type `nclx`. The bitstream of
this file does not contain HDR content, so the file itself is invalid
for playback with a real decoder, but adding the box is enough to test
the extractor change in this commit.
(Aside: MP4Box will let you pass `nclx`, but it requires 4 parameters, i.e. it
requires the full_range_flag to be set, resulting in a valid 19-byte colr box.)
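For reference, the nclx payload layout this exercises, in a hedged
parsing sketch (`data` is a `ParsableByteArray`; `boxSize` and the
surrounding code are assumed):

  // colr payload after the 8-byte box header: colour_type(4), then for nclx:
  // colour_primaries(2), transfer_characteristics(2), matrix_coefficients(2),
  // and full_range_flag(1) only when the box is 19 bytes.
  String colourType = data.readString(4);
  if ("nclx".equals(colourType)) {
    int colourPrimaries = data.readUnsignedShort();
    int transferCharacteristics = data.readUnsignedShort();
    data.skipBytes(2); // matrix_coefficients
    boolean fullRange = boxSize >= 19 && (data.readUnsignedByte() & 0x80) != 0;
  }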
#minor-release
Issue: #9332
PiperOrigin-RevId: 405842520