When using a MatrixTransformationFrameProcessor per transformation
matrix, each frame processor's shader applies the matrix to the
vertices and clips the result to the NDC range when drawing the
output frame.
This change combines consecutive MatrixTransformations into a single
MatrixTransformationFrameProcessor by multiplying the individual
matrices. After each matrix is applied, the visible polygon is updated
and clipped, and the result is mapped back to the input space, so that
the polygon's vertices and the combined transformation matrix can be
used in the shader.
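A minimal sketch of the matrix-combination step (ignoring the visible-polygon clipping and back-mapping), assuming 4x4 column-major matrices as used by `android.opengl.Matrix`:

```java
import android.opengl.Matrix;
import java.util.List;

final class MatrixCombinationSketch {
  /**
   * Multiplies the given column-major 4x4 matrices so that applying the
   * result is equivalent to applying matrices.get(0) first, then
   * matrices.get(1), and so on.
   */
  static float[] combine(List<float[]> matrices) {
    float[] combined = new float[16];
    Matrix.setIdentityM(combined, 0);
    float[] product = new float[16];
    for (float[] matrix : matrices) {
      // The next matrix is applied after the ones combined so far, so it
      // goes on the left of the product.
      Matrix.multiplyMM(product, 0, matrix, 0, combined, 0);
      System.arraycopy(product, 0, combined, 0, 16);
    }
    return combined;
  }
}
```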
PiperOrigin-RevId: 448521068
We add a dedicated class, as we do for parsing other codec initialization formats; it doesn't do any parsing yet (AV1 initialization data is simple: it's just the entire contents of the box).
For testing, we add a sample file that has been re-encoded with ffmpeg (we also happen to have another AV1 file already).
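A sketch of what the parsing amounts to once added, assuming ExoPlayer's ParsableByteArray is positioned at the av1C box payload (the method name is hypothetical):

```java
import com.google.android.exoplayer2.util.ParsableByteArray;
import java.util.Collections;
import java.util.List;

final class Av1InitializationDataSketch {
  /** Returns the entire av1C box payload as the single initialization data entry. */
  static List<byte[]> parseAv1CodecConfiguration(ParsableByteArray av1cPayload) {
    byte[] initializationData = new byte[av1cPayload.bytesLeft()];
    av1cPayload.readBytes(initializationData, 0, initializationData.length);
    return Collections.singletonList(initializationData);
  }
}
```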
PiperOrigin-RevId: 444890282
To ensure frame processors operate on square pixels, make the frame
taller or wider when the input has non-square pixels.
In addition to automated tests, this was tested by changing the
inputFormat.pixelWidthHeightRatio in the TransformerVideoRenderer.
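A sketch of the adjustment, assuming pixelWidthHeightRatio as defined on ExoPlayer's Format (a value above 1 means wide pixels):

```java
final class SquarePixelSketch {
  /** Returns {width, height} stretched so that the output has square pixels. */
  static int[] toSquarePixels(int width, int height, float pixelWidthHeightRatio) {
    if (pixelWidthHeightRatio > 1f) {
      // Wide input pixels: make the frame wider.
      width = Math.round(width * pixelWidthHeightRatio);
    } else if (pixelWidthHeightRatio < 1f) {
      // Tall input pixels: make the frame taller.
      height = Math.round(height / pixelWidthHeightRatio);
    }
    return new int[] {width, height};
  }
}
```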
PiperOrigin-RevId: 444553517
This test should run on all devices from API 21 (the media uses Baseline
profile level 3.0 H.264), giving us coverage of the full pipeline (forcing
re-encoding) and of the SSIM calculation.
PiperOrigin-RevId: 443650002
* Group what's now many related test PNGs by moving them to their own directory.
* Move bitmap references to the files where they're used, as each bitmap is
  only used once, except the original bitmap.
PiperOrigin-RevId: 441485489
We add a dedicated class, as we do for parsing other codec initialization formats; it doesn't do any parsing yet (AV1 initialization data is simple: it's just the entire contents of the box).
For testing, we add a sample file that has been re-encoded with ffmpeg (we also happen to have another AV1 file already).
PiperOrigin-RevId: 439453823
This provides better compatibility with MediaExtractor, which does read these fields; we also need them to be able to mux file contents into another mp4 file.
Also included is a minor refactor that gives esds box contents a dedicated type instead of a pair.
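For illustration, a minimal value type of the kind described, replacing a Pair<String, byte[]>-style pairing (the field names are hypothetical):

```java
/** Hypothetical dedicated type for parsed esds box contents. */
final class EsdsData {
  public final String mimeType;
  public final byte[] initializationData;

  public EsdsData(String mimeType, byte[] initializationData) {
    this.mimeType = mimeType;
    this.initializationData = initializationData;
  }
}
```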
PiperOrigin-RevId: 438673825
The FrameProcessorChain manages a List<GlFrameProcessor>.
FrameProcessorChainDataProcessingTest now tests chaining ScaleToFit-
and AdvancedFrameProcessors.
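A rough sketch of the chaining idea (the interface shape here is hypothetical): the chain iterates its list, with intermediate textures connecting one processor's output to the next one's input.

```java
import java.util.List;

// Hypothetical, simplified shape of the chained processors.
interface GlFrameProcessor {
  void drawFrame(long presentationTimeUs);
}

final class FrameProcessorChainSketch {
  private final List<GlFrameProcessor> frameProcessors;

  FrameProcessorChainSketch(List<GlFrameProcessor> frameProcessors) {
    this.frameProcessors = frameProcessors;
  }

  void drawFrame(long presentationTimeUs) {
    for (GlFrameProcessor frameProcessor : frameProcessors) {
      // In the real chain, each processor draws into a framebuffer whose
      // texture is sampled by the next processor.
      frameProcessor.drawFrame(presentationTimeUs);
    }
  }
}
```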
PiperOrigin-RevId: 436468037
ExternalCopyFrameProcessor's output dimensions match the input
size, not the output size, so the intermediate texture size
should match the input size.
Also rename configureOutputDimensions to configureOutputSize.
PiperOrigin-RevId: 435058789
* Move auto-adjustments for transformation matrices from the
VideoTranscodingSamplePipeline constructor to the new
ScaleToFitFrameProcessor.
* Add GlFrameProcessor#getOutputDimensions() to allow for GlFrameProcessors with
  different input and output dimensions (see the sketch after this list). This
  is a prerequisite for Presentation.
* Tested with unit tests (and manually just in case).
* A follow-up CL will change the FrameProcessor input to be the scale and
  rotate values requested by the user. This was kept out of this CL to
  reduce CL review size. Presentation will also be implemented in a follow-up
  CL.
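As an illustration of why getOutputDimensions() is needed, a sketch of output sizing for a rotation (the rounding details are an assumption): rotating a width x height frame enlarges its bounding box.

```java
final class RotationOutputDimensionsSketch {
  /** Returns {outputWidth, outputHeight} of the bounding box after rotation. */
  static int[] getOutputDimensions(int width, int height, float rotationDegrees) {
    double radians = Math.toRadians(rotationDegrees);
    double cos = Math.abs(Math.cos(radians));
    double sin = Math.abs(Math.sin(radians));
    int outputWidth = (int) Math.round(width * cos + height * sin);
    int outputHeight = (int) Math.round(width * sin + height * cos);
    return new int[] {outputWidth, outputHeight};
  }
}
```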
PiperOrigin-RevId: 434774854
Re-enable tests that don't require muxer support for timestamps going backwards.
Tests running on the B-frame sample will be added in a future commit.
#mse-bug-week
PiperOrigin-RevId: 429599177
This change makes sure played server-side ads are skipped in a single-period
timeline. It avoids creating an ad MediaPeriodInfo for played postrolls and
creates a content info instead. It also sets the end position for content infos
that terminate the stream before the stream is actually finished, which
prevents the player from continuing to play the remaining media delivered by
the MediaPeriod.
We also make sure that the discontinuities of played ads are not reported,
because there is actually no discontinuity.
#minor-release
PiperOrigin-RevId: 428734387
If the output sample MIME type is inferred from the input
but is not supported by the muxer, we fall back to transcoding
to a supported sample MIME type.
The audio and video renderers need to make sure not to select the
PassthroughSamplePipeline in this case. The EncoderFactory decides which
sample MIME type to choose.
PiperOrigin-RevId: 423272812
The `main` role distinguishes a track from an `alternate`, but unlike
`SELECTION_FLAG_DEFAULT` it doesn't imply that the track should be selected
unless user preferences state otherwise. For example, in the case of a text
track, the player shouldn't enable subtitle rendering just because a
`main` text track is present in the manifest.
The `main`/`alternate` distinction is still available through
`Format.roleFlags` and the `ROLE_FLAG_MAIN` and `ROLE_FLAG_ALTERNATE`
values.
This behaviour was originally [added in 2.2.0](7f967f3057),
however at the time the `C.RoleFlags` IntDef did not exist. The IntDef
was [added in 2.10.0](a86a9137be).
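For example, using the existing Format and C constants (the helper class itself is illustrative):

```java
import com.google.android.exoplayer2.C;
import com.google.android.exoplayer2.Format;

final class RoleFlagExample {
  /** Returns whether the format is flagged as a main (rather than alternate) track. */
  static boolean isMainTrack(Format format) {
    return (format.roleFlags & C.ROLE_FLAG_MAIN) != 0;
  }

  /**
   * Returns whether the track should be selected by default. ROLE_FLAG_MAIN
   * alone no longer implies this; only SELECTION_FLAG_DEFAULT does.
   */
  static boolean isDefaultTrack(Format format) {
    return (format.selectionFlags & C.SELECTION_FLAG_DEFAULT) != 0;
  }
}
```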
PiperOrigin-RevId: 418937747
This is better than silently dropping tracks as done previously. Later,
we will implement fallback to transcoding to a supported MIME type.
PiperOrigin-RevId: 418006258
For DASH manifests, we merge min/max live latency values from various
sources, and they may not be consistent with each other. To ensure we
use a sensible configuration in all cases, we can add more correctness
checks (sketched after this list):
1. Limit the min/max values to fall into the available live window.
2. Ensure that maxLatency >= minLatency in all cases.
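A sketch of the two checks (names are hypothetical):

```java
final class LiveLatencySketch {
  /** Returns {minLatencyUs, maxLatencyUs} constrained as described above. */
  static long[] constrainLatencies(
      long minLatencyUs, long maxLatencyUs, long windowMinUs, long windowMaxUs) {
    // 1. Limit the min/max values to the available live window.
    minLatencyUs = Math.min(Math.max(minLatencyUs, windowMinUs), windowMaxUs);
    maxLatencyUs = Math.min(Math.max(maxLatencyUs, windowMinUs), windowMaxUs);
    // 2. Ensure that maxLatency >= minLatency in all cases.
    maxLatencyUs = Math.max(maxLatencyUs, minLatencyUs);
    return new long[] {minLatencyUs, maxLatencyUs};
  }
}
```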
Issue: google/ExoPlayer#9750
PiperOrigin-RevId: 415282938
Sometimes the empty end-of-stream buffer has a non-zero
data limit. Calling flip first resets the limit to the
position, which is zero in these cases.
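A plain-Java demonstration of the flip() semantics involved:

```java
import java.nio.ByteBuffer;

final class FlipDemo {
  public static void main(String[] args) {
    // An "empty" buffer whose limit is non-zero: nothing was written, but
    // remaining() makes it look like there is data.
    ByteBuffer buffer = ByteBuffer.allocate(16);
    System.out.println(buffer.remaining()); // 16
    buffer.flip(); // limit = position (0), position = 0
    System.out.println(buffer.remaining()); // 0, correctly empty
  }
}
```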
PiperOrigin-RevId: 413156455
The test extracts and decodes the first video frame in the test media, renders it to the frame editor's input surface, and processes the data. It then reads back the output from the frame editor, converts it to a bitmap, and compares that with a 'golden' bitmap (which is simply the test media's first video frame).
PiperOrigin-RevId: 413131811
Follow-up to a comment on
ac8e418f3d
Buffers that are useful to pass to the sample/passthrough pipeline
should either contain data or the end-of-input flag. Otherwise, passing
these buffers along is unnecessary and may even cause the decoder to
allocate a new input buffer, which is wasteful.
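A sketch of the resulting condition, assuming ExoPlayer's DecoderInputBuffer:

```java
import com.google.android.exoplayer2.decoder.DecoderInputBuffer;

final class BufferFilterSketch {
  /** Returns whether the buffer is worth passing to the pipeline. */
  static boolean shouldPassOn(DecoderInputBuffer buffer) {
    return (buffer.data != null && buffer.data.hasRemaining()) || buffer.isEndOfStream();
  }
}
```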
PiperOrigin-RevId: 411060709
We already parse essential and supplemental properties from the
Representation element, but don't add them to our Representation class.
Add them so that they can be accessed by users.
Issue: google/ExoPlayer#9579
PiperOrigin-RevId: 409961990
When dropping the remainder, the decoder and encoder timestamps start diverging after a few buffers, even though no speed changes are supposed to occur. Tracking the remainder keeps them in sync.
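A sketch of the remainder tracking, for a speed given as a rational number (names are hypothetical):

```java
final class SpeedScalingSketch {
  private long remainderTicks;

  /**
   * Returns inputDurationUs scaled by 1/speed, where speed =
   * speedNumerator / speedDenominator, carrying the integer-division
   * remainder over to the next call instead of dropping it.
   */
  long scaleDuration(long inputDurationUs, long speedNumerator, long speedDenominator) {
    long ticks = inputDurationUs * speedDenominator + remainderTicks;
    remainderTicks = ticks % speedNumerator;
    return ticks / speedNumerator;
  }
}
```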
PiperOrigin-RevId: 408341074
`TransformerAudioRenderer` reads input and passes `DecoderInputBuffer`s
to the `AudioSamplePipeline`. The `AudioSamplePipeline` handles all
steps from decoding to encoding. `TransformerAudioRenderer` receives
`DecoderInputBuffer`s from the `AudioSamplePipeline` and passes their
data to the muxer.
`AudioSamplePipeline` implements a new interface `SamplePipeline`.
A pass-through pipeline will be added in a future CL.
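A hypothetical sketch of the interface shape (actual method names in the codebase may differ):

```java
import com.google.android.exoplayer2.decoder.DecoderInputBuffer;

interface SamplePipeline {
  /** Returns a buffer to fill with input data, or null if none is available. */
  DecoderInputBuffer dequeueInputBuffer();

  /** Informs the pipeline that the dequeued input buffer has been filled. */
  void queueInputBuffer();

  /** Processes input, returning whether more work could be done immediately. */
  boolean processData();

  /** Returns a buffer with processed output, or null if none is available. */
  DecoderInputBuffer getOutputBuffer();

  /** Releases the buffer returned by getOutputBuffer(). */
  void releaseOutputBuffer();

  /** Returns whether the pipeline has processed all of its input. */
  boolean isEnded();
}
```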
PiperOrigin-RevId: 407555102
If the number of samples changes, the sizes will help us verify
whether the samples are just split differently or whether extra data was added.
PiperOrigin-RevId: 407346280
The presentation time in fMP4 is calculated by adding and subtracting
3 values. All 3 values are currently converted to microseconds first
before the calculation, leading to rounding errors. The rounding errors
can be avoided by doing the conversion to microseconds as the last step.
For example:
In timescale 96000: 8008 + 8008 - 16016 = 0
Rounding to us first: 83416 + 83416 - 166833 = -1
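The same arithmetic in plain Java:

```java
final class TimestampRoundingDemo {
  public static void main(String[] args) {
    long timescale = 96_000;
    long a = 8008, b = 8008, c = 16016;
    // Converting to microseconds last: exact.
    System.out.println((a + b - c) * 1_000_000L / timescale); // 0
    // Converting to microseconds first: each term truncates, errors add up.
    System.out.println(
        a * 1_000_000L / timescale        // 83416
            + b * 1_000_000L / timescale  // 83416
            - c * 1_000_000L / timescale); // 166833 -> prints -1
  }
}
```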
#minor-release
PiperOrigin-RevId: 406809844
This helps to prevent issues where decoders can't handle negative
timestamps. In particular it avoids issues when the media accidentally
or intentionally starts with small negative timestamps. But it also
helps to prevent other renderer resets at a later point, for example
if a live stream with a large start offset is enqueued in the playlist.
#minor-release
PiperOrigin-RevId: 406786977
Test file produced with:
$ MP4Box -add "sample.mp4#video:colr=nclc,1,1,1" -new sample_18byte_nclx_colr.mp4
And then manually changing the `nclc` bytes to `nclx`.
This produces an 18-byte `colr` box with type `nclx`. The bitstream of
this file does not contain HDR content, so the file itself is invalid
for playback with a real decoder, but adding the box is enough to test
the extractor change in this commit.
(aside: MP4Box will let you pass `nclx`, but it requires 4 parameters, i.e. it
requires the full_range_flag to be set, resulting in a valid 19-byte colr box)
#minor-release
Issue: #9332
PiperOrigin-RevId: 405842520