Added tests for the APIs `getDrmInitData()`, `getPsshInfo()`, `getLogSessionId()` and `setLogSessionId(LogSessionId)`.
The Widevine-encrypted sample was created from the existing `sample_fragmented.mp4` using `mp4encrypt`.
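For context, a minimal Java sketch of how these APIs can be exercised, shown against the platform `android.media.MediaExtractor` (the class under test is assumed to mirror this surface; the file path and assertions are placeholders):

```java
import android.media.DrmInitData;
import android.media.MediaExtractor;
import android.media.metrics.LogSessionId;
import java.io.IOException;
import java.util.Map;
import java.util.UUID;

final class DrmMetadataInspection {
  /** Reads the DRM-related metadata exposed by the extractor (log session IDs need API 31+). */
  static void inspect(String encryptedMp4Path, LogSessionId logSessionId) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    try {
      extractor.setLogSessionId(logSessionId);
      extractor.setDataSource(encryptedMp4Path);
      DrmInitData drmInitData = extractor.getDrmInitData();
      Map<UUID, byte[]> psshInfo = extractor.getPsshInfo();
      LogSessionId readBackId = extractor.getLogSessionId();
      // A test would assert on drmInitData, psshInfo and readBackId here.
    } finally {
      extractor.release();
    }
  }
}
```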
PiperOrigin-RevId: 726881977
AV1 input buffers can contain multiple compressed pictures.
Enable skipping only the last showable frame, while leaving
any reference pictures to be decoded later as part of
the next decoder input buffer.
Partial skipping of an AV1 input buffer is only applied when:
* fewer than 8 OBUs are delayed
* there's likely to be enough capacity in the decoder input buffer
for the next frame
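A hypothetical sketch of that gating decision (the names, parameters and capacity estimate are assumptions, not the actual decoder integration):

```java
final class Av1PartialSkipGate {
  // Only a small number of delayed OBUs are carried over to the next input buffer.
  private static final int MAX_DELAYED_OBUS = 8;

  /**
   * Returns whether the last showable frame can be skipped while the remaining
   * reference-picture OBUs are deferred to the next decoder input buffer.
   */
  static boolean canPartiallySkip(
      int delayedObuCount,
      int delayedObuBytes,
      int nextFrameEstimatedBytes,
      int inputBufferCapacityBytes) {
    return delayedObuCount < MAX_DELAYED_OBUS
        && delayedObuBytes + nextFrameEstimatedBytes <= inputBufferCapacityBytes;
  }
}
```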
PiperOrigin-RevId: 723496060
Parsing AV1 bitstreams allows us to identify frames that are
not used as references, and to improve seeking and frame-dropping
behavior.
The AV1 bitstream format is relatively quick to parse.
PiperOrigin-RevId: 723462680
When sample batching is disabled, copying the ByteBuffer data
is not necessary because samples are written as they arrive.
Copying the BufferInfo is still necessary because the info is needed
for writing the moov atom.
The input ByteBuffer can be in little-endian order, or have its
position set. AnnexBUtils now ensures big-endian order before
inspecting bytes, and supports reading from a non-zero position.
This change reduces the number of memory allocations made by Mp4Muxer
in its default configuration.
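A minimal sketch of order- and position-safe inspection (illustrative only, not the actual `AnnexBUtils` code); it shows both forcing big-endian order for multi-byte reads and reading relative to a possibly non-zero position:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class AnnexBInspection {
  /** Returns whether the buffer starts with a 4-byte Annex-B start code at its current position. */
  static boolean startsWithStartCode(ByteBuffer sample) {
    // Work on a duplicate so the caller's position, limit and order are untouched,
    // and force big-endian order so the 4-byte read below is well defined.
    ByteBuffer buffer = sample.duplicate().order(ByteOrder.BIG_ENDIAN);
    return buffer.remaining() >= 4 && buffer.getInt(buffer.position()) == 0x00000001;
  }
}
```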
PiperOrigin-RevId: 723401822
The exception occurred when an edit list started at a non-sync frame with no preceding sync frame. The fix searches forward for the next sync frame in such cases, preventing the out-of-bounds access.
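A hedged sketch of the forward search (hypothetical helper and field names, not the actual extractor code):

```java
final class EditListSyncSearch {
  /**
   * Returns the index of the sync sample to start from, or -1 if the track has
   * no sync samples at all.
   */
  static int findStartSyncSample(boolean[] isSyncSample, int editStartSampleIndex) {
    // Prefer the sync sample at or before the edit start, as before.
    for (int i = editStartSampleIndex; i >= 0; i--) {
      if (isSyncSample[i]) {
        return i;
      }
    }
    // No preceding sync sample: search forward for the next one instead of
    // indexing out of bounds.
    for (int i = editStartSampleIndex + 1; i < isSyncSample.length; i++) {
      if (isSyncSample[i]) {
        return i;
      }
    }
    return -1;
  }
}
```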
Issue: androidx/media#2062
#cherrypick
PiperOrigin-RevId: 720642687
Add an option to GlMatrixTransformation to choose the OpenGL texture
minification filter.
When mipmaps are requested, mipmaps are generated with
`glGenerateMipmap()`.
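A minimal sketch of the relevant GLES calls (assumed setup code, not the actual `GlMatrixTransformation` implementation):

```java
import android.opengl.GLES20;

final class MipmapMinification {
  /** Configures a mipmapped minification filter on the given 2D texture and generates its mipmaps. */
  static void enable(int texId) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
    GLES20.glTexParameteri(
        GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
    GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
  }
}
```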
PiperOrigin-RevId: 720629807
The file in Issue: androidx/media#2052 contains a cue with the following timecode:
```
0:00:00:00,0:00:00:00
```
The content of this cue seems to be some 'converted by' metadata, i.e.
it's basically a comment and clearly not intended to be shown on
screen (since it has zero duration).
There is some fiddly logic later in `SsaParser` to support overlapping
cues with the old `Subtitle` structure [1], and this logic gets tripped
up by the start and end times being equal, which results in a
**single**, empty `List<Cue>` being added. This breaks another
assumption: that every SSA cue line results in at least two `List<Cue>`
entries (one containing the cue text, and another containing an empty
list to signal the end of the cues).
This fiddly logic is no longer required, because overlapping
`CuesWithTiming` objects can now be merged in `TextRenderer`, so there
is a possible future simplification to `SsaParser` which removes a lot
of this complexity.
[1] Added in <unknown commit>
PiperOrigin-RevId: 718380386
This is aligned with the documentation of `MediaCodec` which says the
timestamp of a buffer with `BUFFER_FLAG_END_OF_STREAM` should be
ignored:
https://developer.android.com/reference/android/media/MediaCodec#end-of-stream-handling
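A minimal sketch of that convention (illustrative handling code, not the actual renderer):

```java
import android.media.MediaCodec;

final class EndOfStreamHandling {
  /** Processes an output buffer's metadata, ignoring the timestamp of EOS-flagged buffers. */
  static void handleOutputBufferInfo(MediaCodec.BufferInfo info) {
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
      // Per the MediaCodec docs, presentationTimeUs is not meaningful here and
      // must not be treated as the end-of-stream timestamp.
      return;
    }
    // ... use info.presentationTimeUs for non-EOS buffers ...
  }
}
```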
Add a test that exercises this by clipping off the end of a sample with
CEA-608 captions, because this creates an EOS-flagged buffer with a
non-EOS timestamp.
Also add a straightforward playback test for the
`fragmented_captions.mp4` sample.
PiperOrigin-RevId: 715716036
Deprecate BundledChunkExtractor.experimentalParseWithinGopSampleDependencies
in favour of
ChunkExtractor.experimentalSetCodecsToParseWithinGopSampleDependencies,
which takes VideoCodecFlags IntDef flags representing a set of codecs.
Add a DASH test using the new API with an H.265 video.
PiperOrigin-RevId: 714901602
Previously, the muxer wrote a separate entry for every chunk, but per
the spec (ISO 14496-12) runs of identical chunks can be combined so that
only the first chunk number is written.
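A hypothetical sketch of that compaction (not the actual Mp4Muxer code), where an entry is only written when the samples-per-chunk value changes:

```java
import java.util.ArrayList;
import java.util.List;

final class StscCompaction {
  /** Returns {firstChunkNumber, samplesPerChunk} pairs, combining runs of identical chunks. */
  static List<int[]> compact(int[] samplesPerChunk) {
    List<int[]> entries = new ArrayList<>();
    for (int i = 0; i < samplesPerChunk.length; i++) {
      if (i == 0 || samplesPerChunk[i] != samplesPerChunk[i - 1]) {
        // Chunk numbers in the stsc box are 1-based.
        entries.add(new int[] {i + 1, samplesPerChunk[i]});
      }
    }
    return entries;
  }
}
```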
PiperOrigin-RevId: 714154531
Add a new flag to FragmentedMp4Extractor:
FLAG_READ_WITHIN_GOP_SAMPLE_DEPENDENCIES_H265.
Read two bytes from H.265 videos to determine the NAL unit type and
temporal layer ID.
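For reference, the 2-byte H.265 NAL unit header can be decoded as follows (a sketch based on ITU-T H.265 section 7.3.1.2, not the actual extractor code):

```java
final class HevcNalHeader {
  /** Returns nal_unit_type, stored in bits 1..6 of the first header byte. */
  static int nalUnitType(byte firstHeaderByte) {
    return ((firstHeaderByte & 0xFF) >> 1) & 0x3F;
  }

  /** Returns the temporal layer ID, i.e. nuh_temporal_id_plus1 - 1, from the second header byte. */
  static int temporalLayerId(byte secondHeaderByte) {
    return (secondHeaderByte & 0x07) - 1;
  }
}
```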
PiperOrigin-RevId: 714046987
ExoPlayer assumed a 4-byte NAL length field in two places (by assuming
the length field is the same size as the 4-byte NAL start code):
1. In `AvcConfig` we transform length-delimited to start-delimited
before writing into `initializationData`, and then skip
'nal unit length field' bytes when parsing from `initializationData`
(when we should skip 'start code length' bytes instead).
2. In `Mp4Extractor.readSample` we modify the local variable
`sampleSize` to fix the difference between length field length and
start code length, but **only on the first attempt to read a
sample**. If we are resuming in the middle of reading a sample (after
a recoverable I/O error), this fix for `sampleSize` is not done,
which means we end up missing the last 2-3 bytes of the sample when
the NAL length is 1-2 bytes.
* This is fixed by moving the `sampleSize` 'fixing' code to outside
the `if (sampleCurrentNalBytesRemaining == 0)` block.
* `FragmentedMp4Extractor` has very similar code, but uses a
field for `sampleSize`, rather than a local, so doesn't look
vulnerable to the same problem (though I haven't totally
tested this).
This change adds a test file with 2-byte NAL lengths, generated by
hacking the media3 muxer to emit 2-byte NAL lengths and transforming
`sample.mp4` using the transformer demo app.
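A hedged illustration of the length-delimited to start-delimited conversion described in point 1 above, with a configurable NAL length field size (hypothetical helper, not the actual `AvcConfig` code); after this conversion a parser must skip the 4-byte start code, regardless of the original length field size:

```java
import java.io.ByteArrayOutputStream;

final class NalUnitConversion {
  /** Converts a length-delimited sample to Annex-B start-code delimited form. */
  static byte[] toStartDelimited(byte[] sample, int nalLengthFieldSize) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int position = 0;
    while (position + nalLengthFieldSize <= sample.length) {
      // Read the big-endian NAL length, which may be 1, 2 or 4 bytes wide.
      int nalLength = 0;
      for (int i = 0; i < nalLengthFieldSize; i++) {
        nalLength = (nalLength << 8) | (sample[position + i] & 0xFF);
      }
      position += nalLengthFieldSize;
      // Write the 4-byte start code followed by the NAL unit payload.
      out.write(0);
      out.write(0);
      out.write(0);
      out.write(1);
      out.write(sample, position, nalLength);
      position += nalLength;
    }
    return out.toByteArray();
  }
}
```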
PiperOrigin-RevId: 713709203
The previous FragmentedMp4 captions test asset doesn't contain any captions.
Fix a bug where captions read before an extractor seek were output after it.
PiperOrigin-RevId: 713665817
At the point of the playing period transition, pre-warming has completed and the renderers should receive the resources necessary for playback. This CL adds the `Renderer.MessageType` `MSG_TRANSFER_RESOURCES` to direct a renderer to transfer relevant resources to another renderer.
PiperOrigin-RevId: 713372754
Add a new flag: Mp4Extractor.FLAG_READ_WITHIN_GOP_SAMPLE_DEPENDENCIES_H265.
Read two bytes from H.265 videos to determine the NAL unit type and
temporal layer ID.
PiperOrigin-RevId: 713248154
The number of temporal sub-layers is required for
H.265 non-reference frame identification, as
only frames from the highest temporal sub-layer can be
discarded.
PiperOrigin-RevId: 713247354
We previously parsed an arbitrary number of decimal places, but assumed
the value was in milliseconds, which doesn't make sense if there are
more or fewer than 3. This change restricts the parsing to match
exactly 3 decimal places, so the millisecond assumption always holds.
The WebVTT spec requires there to be exactly 3 decimal places:
https://www.w3.org/TR/webvtt1/#webvtt-timestamp
The SubRip spec is less clearly defined, but the Wikipedia article
defines it as having exactly 3 decimal places
(https://en.wikipedia.org/wiki/SubRip#Format) and ExoPlayer has always
assumed 3 decimal places (anything else is already handled incorrectly),
so this change just ensures we don't show subtitles at the wrong time.
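A sketch of the stricter timecode matching (illustrative only; the pattern and method are not the actual SubripParser code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class SubripTimecodes {
  // Require exactly three decimal places so the millisecond assumption holds.
  private static final Pattern TIMECODE = Pattern.compile("(\\d+):(\\d+):(\\d+),(\\d{3})");

  /** Parses "HH:MM:SS,mmm" into microseconds. */
  static long parseTimecodeUs(String timecode) {
    Matcher matcher = TIMECODE.matcher(timecode);
    if (!matcher.matches()) {
      throw new NumberFormatException("Invalid timecode: " + timecode);
    }
    long hours = Long.parseLong(matcher.group(1));
    long minutes = Long.parseLong(matcher.group(2));
    long seconds = Long.parseLong(matcher.group(3));
    long millis = Long.parseLong(matcher.group(4));
    return ((hours * 3600 + minutes * 60 + seconds) * 1000 + millis) * 1000;
  }
}
```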
Issue: androidx/media#1997
PiperOrigin-RevId: 712885023
After this change, if the sample rate supported by the encoder differs from the requested sample rate and enableFallback is true, the AudioSampleExporter will convert audio to a sample rate supported by the encoder. This fixes a bug where the audio track is distorted when an unsupported sample rate is requested.
PiperOrigin-RevId: 712822358
Some videos include zero-length NAL units in their
length-delimited MP4 samples.
Empty NAL units are not spec-compliant (see ISO/IEC 14496-15 section 4.3.3.3),
but other players are able to play these videos (with warnings or errors).
With this change, we check track.sampleTable.sizes[sampleIndex] before
reading a byte from the NAL unit itself.
PiperOrigin-RevId: 711720621
Replaced an existing APV file, as the bitstream syntax has changed and
the previous clip is no longer a valid bitstream.
This clip was provided by the openAPV team and is licensed under BSD-3.
PiperOrigin-RevId: 711318578
AMR-NB and AMR-WB are inherently mono, so the channel count is set to 1. The sample rate is also hard-coded to adhere to the codec standards.
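The hard-coded values follow directly from the AMR specs (a minimal sketch; constant names are illustrative, not the actual extractor code):

```java
final class AmrFormatConstants {
  static final int CHANNEL_COUNT = 1; // AMR-NB and AMR-WB are always mono.
  static final int AMR_NB_SAMPLE_RATE_HZ = 8_000; // Narrowband AMR.
  static final int AMR_WB_SAMPLE_RATE_HZ = 16_000; // Wideband AMR.
}
```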
Also removed unused parameter `hasAdditionalViews` in `StriData`.
#cherrypick
PiperOrigin-RevId: 707606245
The dump files for VP9 MP4 clips varied across SDK versions due to inconsistent CSDs from the platform extractor. Replacing the platform extractor with `MediaExtractorCompat` means the Media3 extractor provides consistent CSDs across all SDK versions.
PiperOrigin-RevId: 707509473
Before this change, the value of the `dwLength` field in the stream header was
interpreted as the number of chunks in the file. Seeking and timestamp
calculation use the media duration and total chunk count. However, in some
files the `dwLength` field appears not to store the number of chunks. For
example, there are CBR MP3 and AC3 files where this field seems to store the
total number of bytes of compressed media instead. That caused seeking and
timestamp calculation to give much smaller values than expected (because the
`dwLength` is very large), which broke seeking.
Work around this using the `idx1` index header if present. We only support
audio formats where every audio sample is a sync sample in AVI, and all chunks
should therefore be listed in this index. Based on testing on many sample AVI
files this gives a reliable total chunk count and fixes seeking.
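A hedged sketch of deriving the chunk count from the `idx1` body (hypothetical helper, not the actual AviExtractor code); each index entry is 16 bytes: ckid, dwFlags, dwChunkOffset, dwChunkLength:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class Idx1ChunkCounter {
  /** Counts the audio data chunks ("##wb") for the given stream in the idx1 body. */
  static int countAudioChunks(ByteBuffer idx1Body, int streamId) {
    ByteBuffer body = idx1Body.duplicate().order(ByteOrder.LITTLE_ENDIAN);
    int expectedCkid = audioChunkId(streamId);
    int count = 0;
    while (body.remaining() >= 16) {
      int ckid = body.getInt();
      body.position(body.position() + 12); // Skip dwFlags, dwChunkOffset, dwChunkLength.
      if (ckid == expectedCkid) {
        count++;
      }
    }
    return count;
  }

  /** Builds the little-endian FourCC for "NNwb", e.g. "01wb" for stream 1. */
  private static int audioChunkId(int streamId) {
    int d0 = '0' + streamId / 10;
    int d1 = '0' + streamId % 10;
    return d0 | (d1 << 8) | ('w' << 16) | ('b' << 24);
  }
}
```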
The test media file is a transcoded version of Big Buck Bunny, manually
edited to overwrite the length, rate and sample size header fields to simulate
the error case.
PiperOrigin-RevId: 705103651
`RenderersFactory#createSecondaryRenderer` can be implemented to provide secondary renderers for pre-warming. These renderers must match their primaries in terms of reported track type support and `RendererCapabilities`.
If a secondary renderer is provided, ExoPlayer will enable it for a subsequent media item as soon as its `MediaPeriod` is prepared. This will cause the renderer to start decoding and processing content so that it is ready to play as soon as playback transitions to that media item.
PiperOrigin-RevId: 704326302
The test complements the existing E2E test for HLS in `HlsPlaybackTest` by verifying similar functionality for DASH in `DashPlaybackTest`.
PiperOrigin-RevId: 703454388
This means we can complete preparation (and trigger track selection)
before opening a `DataSource`, which then means we only end up loading
the data for a selected subtitle track (instead of all tracks as
currently happens).
By making preparation trivial in this case (with no reasonable cause
of error), we can also remove the `suppressPrepareError` option added in
b3290eff10.
This change also fixes the implementation of
`ProgressiveMediaPeriod.maybeStartDeferredRetry` to only short-circuit
return `false` if the chosen track is not audio or video **and** there
is at least one audio or video track in this period.
Issue: androidx/media#1721
PiperOrigin-RevId: 702275968
The previous code assumed that the `VBRI` Table of Contents (ToC)
covers all the MP3 data in the file. For a file with an invalid VBRI ToC
where this isn't the case, playback silently stopped partway through
(and either advanced to the next item, or kept counting up the playback
clock forever). This change also considers the `bytes` field when
determining the end of the MP3 data, in addition to deriving it from the
ToC. If the two disagree, we log a warning and take the max value,
because we handle accidentally reading non-MP3 data at the end (or
hitting EOF) better than we handle stopping partway through valid MP3
data.
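A hedged sketch of that reconciliation (hypothetical names, not the actual VbriSeeker code):

```java
import android.util.Log;

final class VbriEndPosition {
  /** Resolves the end position of the MP3 data from the ToC and the bytes field. */
  static long resolve(long tocDerivedEndPosition, long firstFramePosition, long bytesFieldValue) {
    long bytesDerivedEndPosition = firstFramePosition + bytesFieldValue;
    if (tocDerivedEndPosition != bytesDerivedEndPosition) {
      Log.w("VbriSeeker", "VBRI ToC and bytes field disagree on where the MP3 data ends");
    }
    // Prefer reading slightly too much (non-MP3 data or EOF is handled) over
    // truncating valid MP3 data partway through.
    return Math.max(tocDerivedEndPosition, bytesDerivedEndPosition);
  }
}
```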
Issue: androidx/media#1904
#cherrypick
PiperOrigin-RevId: 700319250