Our previous test video was difficult to use for testing our tone-mapping
algorithm because it didn't contain many different colors. Use a better
video for tone-map tests, one with a wider range of colors.
PiperOrigin-RevId: 606274843
The new API takes both `creation time` and `modification time`.
Until now, Mp4Muxer wrote `modification time` into both the
`creation time` and `modification time` fields, which was
incorrect.
PiperOrigin-RevId: 605590623
Previously, we missed the BT2020->BT709 color-space conversion.
A user-visible impact of this is that the red and green channels used to be
undersaturated, but are now more correctly saturated.
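For context, a minimal sketch of the linear-RGB gamut conversion in question, using the
commonly cited ITU-R BT.2087 matrix values (illustrative only, not the actual implementation):
```
/** Illustrative BT.2020 -> BT.709 linear-RGB gamut conversion (matrix per ITU-R BT.2087). */
static float[] bt2020ToBt709Linear(float r, float g, float b) {
  return new float[] {
      1.6605f * r - 0.5876f * g - 0.0728f * b,
      -0.1246f * r + 1.1329f * g - 0.0083f * b,
      -0.0182f * r - 0.1006f * g + 1.1187f * b,
  };
}
```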
PiperOrigin-RevId: 605411926
The track id must be the index of the track in the list of published tracks,
as it's used as such elsewhere. This is currently not true if we
skip an empty track, because all subsequent tracks then get a wrong or even
invalid id.
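A minimal sketch of the intended invariant (the names below are hypothetical, not the muxer's actual API):
```
// Hypothetical sketch: ids follow the published-track index, so skipped (empty) tracks
// must not consume an id.
int publishedTrackId = 0;
for (Track track : allTracks) {
  if (track.isEmpty()) {
    continue;
  }
  publishTrack(publishedTrackId++, track);
}
```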
#minor-release
PiperOrigin-RevId: 604929178
The earlier implementation compared the whole file against golden data.
The new implementation compares only the metadata being tested.
This avoids having to update the golden data when a change unrelated
to the scenario being tested is made.
Added a separate test that compares the whole output file against golden data.
PiperOrigin-RevId: 604692985
The current implementation of `JpegMotionPhotoExtractor.sniff` returns
`true` for any image with Exif data (not just motion photos). Improving
this is tracked by b/324033919. In the meantime, when we 'extract' a
non-motion photo with `JpegMotionPhotoExtractor`, the result is
currently a single empty image track and no video track (there's no
video, because this isn't a motion photo). This 'empty' image track
is usually used to transmit metadata about the video parts of the
image file (in the form of `MotionPhotoMetadata`), but this metadata is
also (understandably) absent for non-motion photos. Therefore there's
no need to emit this image track, and it's clearer to emit no
tracks at all when extracting a non-motion photo using
`JpegMotionPhotoExtractor`.
This change also removes a misplaced `TODO`, since no image bytes are
being emitted here (and never were).
PiperOrigin-RevId: 604688053
These are often the same for image tracks, since we usually drop the
whole image file (both the container and actual encoded image bytes)
into a single sample, but there are cases where we emit a track with
`containerMimeType=image/jpeg` but **no** samples (from
`JpegMotionPhotoExtractor`, to carry some metadata about the image +
video byte offsets).
It's therefore more correct to implement the `supportsFormat` check
based on `sampleMimeType`, so that these 'empty' image tracks are not
considered 'supported' by `ImageRenderer`.
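A minimal sketch of the idea, assuming a simplified capabilities check (not `ImageRenderer`'s actual implementation):
```
// Simplified sketch: base the support decision on sampleMimeType, so tracks that only carry
// a containerMimeType (and no samples) are reported as unsupported.
@Capabilities
public int supportsFormat(Format format) {
  return format.sampleMimeType != null && MimeTypes.isImage(format.sampleMimeType)
      ? RendererCapabilities.create(C.FORMAT_HANDLED)
      : RendererCapabilities.create(C.FORMAT_UNSUPPORTED_TYPE);
}
```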
#minor-release
PiperOrigin-RevId: 604672331
Before supporting transmuxing when both no-op effects and regular rotations are set, move setting the muxerWrapper rotation out of shouldTranscodeVideo() to ensure the muxerWrapper rotation is only set at the appropriate times.
This CL also ensures the state between the muxerWrapper and the list of video effects is consistent by clearing the list of videoEffects in trim optimization. If trim optimization is being applied, then EditedMediaItem.effects.videoEffects only contains no-op effects or regular rotations that will be applied in the muxer wrapper. Therefore, we should clear the list of video effects to ensure that no effect gets applied twice.
PiperOrigin-RevId: 604292052
The new test introduced in 45bd5c6f0a is flaky because we only
wait until the media is fully buffered. However, we can't fully
control how much of this data is initially read by the Robolectric
codec and thus the output dump files (containing these codec
interactions) are flaky.
This can be fixed by fully playing the media once and then seeking
back instead.
#minor-release
PiperOrigin-RevId: 603324068
If seeking between the last image sample and the end of the file while the current stream is not final, then the EoS sample will not be provided to `ImageRenderer`. `ImageRenderer` must still produce the last image sample.
PiperOrigin-RevId: 603312090
The implementation of fragmented MP4 caused a regression where the muxer
started writing empty tracks even for non-fragmented MP4.
PiperOrigin-RevId: 603091348
Previously, input assets had to be all SDR or all HDR.
After this CL, if tone-mapping is requested, HDR and SDR may mix in any order. If tone-mapping is not requested, SDR may precede HDR, but not vice versa, until SDR to HDR tone-mapping is implemented.
Some changes to accomplish this include:
1. Inputting the decoded format's color to VideoFrameProcessor.registerInputStream
for each stream.
2. Estimating the decoded format's color for each stream, based on whether
MediaCodec tone-mapping is applied.
PiperOrigin-RevId: 602747837
As per the MP4 spec ISO 14496-12: 8.7.5 Chunk Offset Box, both "stco" and
"co64" can be used to store chunk offsets. While "stco" supports 32-bit
offsets, "co64" supports 64-bit offsets.
In non-fragmented MP4, the mdat box can be extremely large, hence the muxer
uses the "co64" box.
But for fragmented MP4, the muxer does not write any data in this chunk offset
box (present in the "moov" box) because all sample-related info is present in
the "moof" box.
Technically, the "co64" box should also work in fragmented MP4 because
it is empty anyway, but QuickTime player fails to play the video if a "co64"
box is present in the fragmented MP4 output file.
Testing: Verified that QuickTime player does not play video when "co64"
box is present but is able to play when "stco" box is present.
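An illustrative sketch of the resulting choice (not the muxer's actual code):
```
// Fragmented output writes an empty chunk offset box in "moov", so the 32-bit "stco" variant
// is sufficient and keeps QuickTime happy; non-fragmented output can have a huge mdat box,
// so it keeps using the 64-bit "co64" variant.
String chunkOffsetBoxType = isFragmentedOutput ? "stco" : "co64";
```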
#minor-release
PiperOrigin-RevId: 601147046
This fix makes the output playable in VLC.
The output still does not play in QuickTime player; that is being fixed in
a separate CL.
#minor-release
PiperOrigin-RevId: 601118813
Since the mdat box can be huge, there is a provision to use a 64-bit size field.
In fragmented MP4, individual fragments should not have a large mdat box,
so a 32-bit size field should be sufficient.
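For reference, a sketch of the box-size rule this relies on (`mdatPayloadSize` is an assumed name for the payload size in bytes):
```
// An MP4 box's 32-bit size field covers the whole box (header + payload) up to 2^32 - 1 bytes;
// larger boxes set size = 1 and store the real size in a 64-bit "largesize" field.
boolean needs64BitSize = 8L + mdatPayloadSize > 0xFFFFFFFFL;
```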
PiperOrigin-RevId: 599219041
Imported from GitHub PR https://github.com/androidx/media/pull/275
Added the features mentioned below:
- Support for extracting the DTS LBR (DTS Express) and DTS UHD Profile 2 (DTS:X) descriptor IDs from the PSI PMT
- The DTSReader class is updated to extract DTS LBR.
- Newly added DtsUhdReader class for extracting DTS UHD frames.
- The DTSUtil class is updated to parse DTS LBR or DTS UHD frames and report the format information.
Feature request for ExoPlayer: https://github.com/google/ExoPlayer/issues/11075
Merge 21efa0810db31550d6b215639f9ca2af6a32139a into 104cfc322c095b40f88e705eb4a6c2f029bacdd6
COPYBARA_INTEGRATE_REVIEW=https://github.com/androidx/media/pull/275 from rahulnmohan:dts-mpeg2ts-update 21efa0810db31550d6b215639f9ca2af6a32139a
PiperOrigin-RevId: 598854998
MatroskaExtractor will no longer be wrapped in SubtitleTranscodingExtractor, but instead use SubtitleTranscodingExtractorOutput under the hood.
The FLAG_EMIT_RAW_SUBTITLE_DATA flag will be used to toggle between subtitle parsing during extraction (before the sample queue) or during decoding (after the sample queue).
The new extractor dump files generated by `MatroskaExtractorTest` now follow the new parsing logic and hence have mimeType as `x-media3-cues`.
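A minimal usage sketch, assuming the flag can be passed through the extractor's existing flags parameter:
```
// Assumption for illustration: keep the legacy behaviour (parse subtitles during decoding,
// after the sample queue) by emitting raw subtitle data from the extractor.
Extractor extractor = new MatroskaExtractor(MatroskaExtractor.FLAG_EMIT_RAW_SUBTITLE_DATA);
```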
PiperOrigin-RevId: 598616231
The seek table in a Xing/Info header is very imprecise (max resolution
of 255 to describe each of 100 byte positions in the file). Seeking
using a constant bitrate assumption is more accurate, especially for
longer files (which exacerbate the imprecision of the Info header).
VBR files should contain an Xing header, while an Info header is
identical but indicates the file is CBR.
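For illustration, the constant-bitrate mapping from seek time to byte position looks roughly like this (a sketch, not the extractor's exact code):
```
// bytes = seconds * (bitrate / 8); timeUs is in microseconds.
long seekPositionBytes = dataStartPosition + (timeUs * bitrateBitsPerSecond) / (8 * 1_000_000L);
```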
Issue: androidx/media#878
PiperOrigin-RevId: 597827891
WebvttExtractor will no longer be wrapped in SubtitleTranscodingExtractor, but instead use SubtitleTranscodingExtractorOutput under the hood.
A new constructor will take a boolean parameter to toggle between subtitle parsing during extraction (before the sample queue) or during decoding (after the sample queue).
PiperOrigin-RevId: 597604942
An audio file can only play sound between two PCM samples (the 'start'
and 'end' of a section of a waveform). Therefore, when calculating
duration from a count of PCM samples, we need to subtract one first (the
'end' sample has no duration of its own).
This only changes durations by one PCM sample (21us - 22us for 44.1kHz sample
rate).
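A minimal illustration of the corrected arithmetic (not the library's exact code):
```
// N PCM samples span (N - 1) sample intervals, so subtract one before converting to time.
long durationUs = (sampleCount - 1) * C.MICROS_PER_SECOND / sampleRate;
// At a 44_100 Hz sample rate this shortens the reported duration by one interval (~22.7us).
```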
PiperOrigin-RevId: 596990306
The extractor knows the PCM encoding of the losslessly
encoded data in the samples and should set it in the
Format to allow downstream components to use this information.
PiperOrigin-RevId: 596974863
This file is CBR encoded with LAME, so it has an `Info` header (the CBR
equivalent to `Xing`).
A follow-up change will use this file in `Mp3ExtractorTest`.
Issue: androidx/media#878
PiperOrigin-RevId: 595938327
Changes include:
1. Public API to enable fMP4 and to pass the fragment duration (usage sketched below).
2. Added `FragmentedMp4Writer`.
3. Added logic to create fragments based on the given fragment duration.
4. Write the "moov" box only once, at the beginning.
5. Added all the boxes required for the current implementation.
6. Unit tests for all the new boxes.
7. E2E test for generating fMP4.
Note: The output file is not seekable with this first implementation.
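A hypothetical usage sketch of the new public API; the builder method names below are assumptions for illustration, not necessarily the real surface:
```
// Hypothetical method names, for illustration only.
Mp4Muxer muxer =
    new Mp4Muxer.Builder(outputStream)
        .setFragmentedMp4Enabled(true)
        .setFragmentDurationUs(2_000_000) // ~2 second fragments
        .build();
```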
PiperOrigin-RevId: 594426486
Mp4Muxer does not support out-of-order B-frames. Currently it
silently writes out-of-order B-frames, producing an invalid file (with
negative sample durations).
Although `Mp4Extractor` is somehow able to process this invalid file and
`ExoPlayer` is able to play it, that is unexpected.
The `sample.mp4` test file contains B-frames. The other test files do not
contain the `H264 video + AAC audio` format, hence a new test file was created
by running `sample.mp4` through `Transformer` after applying some effects.
PiperOrigin-RevId: 594016144
HLS distinguishes between 'subtitles' (WebVTT or TTML distributed in
separate files with their own playlist) and 'captions' (CEA-608 or CEA-708,
distributed muxed into the video file).
The format transformation added in 7b762642db
only applies to subtitles and not captions. This change makes the same
transformation for caption formats.
This resolves an error like:
```
SampleQueueMappingException: Unable to bind a sample queue to TrackGroup with MIME type application/cea-608.
```
Also add two playback tests for HLS CEA-608, one that parses during
decoding (old way) and one during extraction (new way). Adding these
tests is what alerted me to this issue.
PiperOrigin-RevId: 592571284
This was generated by combining the existing `ts/bbb_2500ms.ts` test
asset and a temporary `.srt` file using
https://cloud.google.com/transcoder/docs/how-to/captions-and-subtitles
This doesn't directly reproduce the problem fixed by
7ca26f898d,
because the CEA-608 subs are structured differently to the stream I
discovered the problem with (from Issue: androidx/media#887). However this test
does fail if that fix is reverted after
486230fbd7.
I'm also not able to repro the character duplication reported in
Issue: androidx/media#887 by just changing the manifest in this CL. I'm not yet
sure on the exact differences between the stream provided on GitHub
and this stream.
This stream does provide some regression protection, because it
currently fails with 'new' subtitle parsing
(`DashMediaSource.Factory.experimentalParseSubtitlesDuringExtraction(true)`),
though I'm not sure on the exact reason for that yet.
PiperOrigin-RevId: 592476328
This is because currently
1. Player sets a surfaceView to render to
2. Player initializes the renderer
3. MCVR initializes the VideoSinkProvider, and by extension the VideoGraph
But when 1 happens, MCVR doesn't set the surfaceView on the VideoGraph because
it's not initialized. Consequently, after the VideoGraph is initialized, it doesn't
have a surface to render to, and thus the first few frames are dropped.
Also adds a test to verify that the correct first frame is rendered.
PiperOrigin-RevId: 591228174
The MP4 data in JPEG motion photos can contain multiple `video/hevc` tracks, but only the first is at a playable frame rate while the others are low-fps, high-res tracks designed for specific use-cases (not direct video playback).
ExoPlayer currently selects the unplayable track by default, because it
has a higher resolution. This change introduces a flag to
`Mp4Extractor` that results in the first video track being marked as
`ROLE_FLAG_MAIN` and all subsequent video tracks as `ROLE_FLAG_ALTERNATE`;
this then results in the playable lower-res track being selected by
default.
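An illustrative sketch of the role-flag assignment when the flag is set (not the extractor's exact code):
```
// The first video track is marked as the main one; subsequent video tracks become alternates.
int roleFlags = isFirstVideoTrack ? C.ROLE_FLAG_MAIN : C.ROLE_FLAG_ALTERNATE;
Format formatWithRole = format.buildUpon().setRoleFlags(roleFlags).build();
```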
PiperOrigin-RevId: 589832072
This image has two video tracks in the MP4 data: one is a 'real' video
which we want to play by default, and the other is a low-fps video track
which isn't intended to be played directly; it's encoded in HEVC for
compression and decoding efficiency.
This test demonstrates ExoPlayer's current default extraction and
playback behaviour, which results in selecting the high-res, low-fps track
(actually a single sample in this example) instead of playing the actual
video.
PiperOrigin-RevId: 588068908