- Corrected trackEventBytes traversal in TrackChunk where the array position marker was not incremented properly, leading to duplicate sample output.
- Amended TrackEvent "Meta Event" parsing to account for the number of bytes occupied by the variable-length length field (see the sketch after this list).
- Fixed a bug with TrackEvent message parsing where message data was parsed incorrectly due to miscalculated length.
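For context, meta-event lengths are encoded as MIDI variable-length quantities, so the parser has to advance past however many bytes the quantity itself occupies. A minimal sketch of that decoding, assuming a ParsableByteArray input (the method name is illustrative, not the actual extractor code):

    /** Reads a MIDI variable-length quantity, advancing past each byte it occupies. */
    private static int readVariableLengthInt(ParsableByteArray data) {
      int value = 0;
      int currentByte;
      do {
        currentByte = data.readUnsignedByte(); // Advances the position by one byte.
        value = (value << 7) | (currentByte & 0x7F);
      } while ((currentByte & 0x80) != 0); // The high bit signals that more bytes follow.
      return value;
    }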
PiperOrigin-RevId: 456967968
This change adds a SurfaceProvider interface, which is needed in
follow-ups to allow for texture processors whose output size only
becomes available asynchronously.
VTSP's implementation of this interface wraps the encoder and provides
its input surface together with the output frame width, height, and
orientation as used for encoder configuration.
The FrameProcessorChain converts the output frames to the provided
orientation and resolution using a ScaleToFitTransformation and
Presentation, replacing EncoderCompatibilityTransformation.
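As a rough sketch, the shape of such an interface could look like the following; the names and signatures here are assumptions for illustration, not the actual API:

    import android.view.Surface;

    /** Illustrative provider of an input surface plus the frame size/orientation it expects. */
    /* package */ interface SurfaceProvider {
      /** Returns the surface that frames should be rendered to. */
      Surface getSurface();

      /** Returns the expected width of frames rendered to the surface, in pixels. */
      int getOutputWidth();

      /** Returns the expected height of frames rendered to the surface, in pixels. */
      int getOutputHeight();

      /** Returns the rotation, in degrees, applied to frames before rendering. */
      int getOutputRotationDegrees();
    }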
PiperOrigin-RevId: 455112598
- Fixed MidiExtractor state issues that caused seeking to behave unexpectedly. The extractor is now always in the file loading state after returning RESULT_END_OF_INPUT.
- Fixed an infinite loop in MidiExtractor caused by the file data array having an initial size of 0. The extractor attempted to increase the capacity of the array by using this size of 0 in its calculations.
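A minimal sketch of the kind of guard that avoids the zero-capacity loop (illustrative, not the exact extractor code):

    // Doubling a zero-length array yields zero, so growth never progresses; guard with a
    // minimum capacity before doubling.
    private static byte[] ensureCapacity(byte[] data, int requiredCapacity) {
      if (data.length >= requiredCapacity) {
        return data;
      }
      int newCapacity = Math.max(requiredCapacity, Math.max(1, data.length) * 2);
      return java.util.Arrays.copyOf(data, newCapacity);
    }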
PiperOrigin-RevId: 455107511
This removes the prior need to remember not to crop and set an aspect ratio in the same Presentation.Builder, and makes each class a bit more targeted.
This is partially made feasible by the past work to merge consecutive
MatrixTransformations into a single MatrixTransformationFrameProcessor, which
ensures that there's no loss in quality between successive MatrixTransformations.
PiperOrigin-RevId: 453660582
This reinstates the permissive behaviour removed by
fe7e5b8181
Test file created by opening bear.opus in a hex editor and naively
duplicating the two header packets, starting at (and including) the
first `OggS` in the file and ending just before the third `OggS`.
#minor-release
Issue: google/ExoPlayer#10038
PiperOrigin-RevId: 452015662
When using a MatrixTransformationFrameProcessor per transformation
matrix, each frame processor's shader applies the matrix to the
vertices and clips the result to the NDC range when drawing the
output frame.
This change combines consecutive MatrixTransformations into a single
MatrixTransformationFrameProcessor by multiplying the individual
matrices while updating and clipping the visible polygon after
each matrix and mapping the resulting visible polygon back to the
input space so that its vertices and the combined transformation
matrix can be used in the shader.
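A minimal sketch of the matrix accumulation step, using android.opengl.Matrix (the visible-polygon clipping and the mapping back to input space are omitted):

    import java.util.List;

    /** Multiplies 4x4 column-major matrices so the first-applied matrix ends up rightmost. */
    private static float[] combine(List<float[]> transformationMatrices) {
      float[] combined = new float[16];
      android.opengl.Matrix.setIdentityM(combined, /* smOffset= */ 0);
      for (float[] transformationMatrix : transformationMatrices) {
        float[] result = new float[16];
        android.opengl.Matrix.multiplyMM(result, 0, transformationMatrix, 0, combined, 0);
        combined = result;
      }
      return combined;
    }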
PiperOrigin-RevId: 448521068
We add an entire class, as we do for parsing other codec initialization formats; it's not doing any parsing yet (initialization data is really simple for AV1: just the entire contents of the box).
For testing, we add the sample file, re-encoded with ffmpeg (we also happen to have another AV1 file).
PiperOrigin-RevId: 444890282
To ensure frame processor operations operate on square pixels,
make the frame taller or wider for non-square input pixels.
In addition to automated tests, this was tested by changing the
inputFormat.pixelWidthHeightRatio in the TransformerVideoRenderer.
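Conceptually the adjustment looks like this sketch, where inputFormat is the decoded video Format and only one dimension is stretched:

    // Stretch one dimension so that downstream frame processors can assume square pixels.
    float pixelWidthHeightRatio = inputFormat.pixelWidthHeightRatio;
    int width = inputFormat.width;
    int height = inputFormat.height;
    if (pixelWidthHeightRatio > 1f) {
      width = Math.round(width * pixelWidthHeightRatio); // Wide pixels: make the frame wider.
    } else if (pixelWidthHeightRatio < 1f) {
      height = Math.round(height / pixelWidthHeightRatio); // Tall pixels: make the frame taller.
    }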
PiperOrigin-RevId: 444553517
This test should run on all devices from API 21 (the media uses H.264
Baseline profile level 3.0) to give us coverage of the full pipeline
(forcing re-encoding) and of SSIM calculation.
PiperOrigin-RevId: 443650002
* Group what's now many related test PNGs by moving them to their own directory.
* Move bitmap references to the files where they're used, as each bitmap is only
used once, except the original bitmap.
PiperOrigin-RevId: 441485489
We add an entire class, as we do for parsing other codec initialization formats; it's not doing any parsing yet (initialization data is really simple for AV1: just the entire contents of the box).
For testing, we add the sample file, re-encoded with ffmpeg (we also happen to have another AV1 file).
PiperOrigin-RevId: 439453823
This provides better compatibility with MediaExtractor, which does read these fields; we also need them to be able to mux file contents into another mp4 file.
Also, there is a minor refactor included so that we have an actual type for esds box contents instead of a pair.
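A sketch of what such a value type can look like; the field names here are assumptions for illustration rather than the actual class:

    /** Illustrative holder for esds box contents, replacing a Pair. */
    private static final class EsdsData {
      public final String mimeType;
      public final byte[] initializationData;
      public final long bitrate;
      public final long peakBitrate;

      public EsdsData(String mimeType, byte[] initializationData, long bitrate, long peakBitrate) {
        this.mimeType = mimeType;
        this.initializationData = initializationData;
        this.bitrate = bitrate;
        this.peakBitrate = peakBitrate;
      }
    }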
PiperOrigin-RevId: 438673825
The FrameProcessorChain manages a List<GlFrameProcessor>.
FrameProcessorChainDataProcessingTest now tests chaining ScaleToFit-
and AdvancedFrameProcessors.
PiperOrigin-RevId: 436468037
ExternalCopyFrameProcessor's output dimensions match the input
size, not the output size. So the intermediate texture size
should match the input size.
Also rename configureOutputDimensions to configureOutputSize.
PiperOrigin-RevId: 435058789
* Move auto-adjustments for transformation matrices from the
VideoTranscodingSamplePipeline constructor to the new
ScaleToFitFrameProcessor.
* Add GlFrameProcessor#getOutputDimensions() to allow for GlFrameProcessors with
different input and output dimensions (see the sketch after this list). This is
a prerequisite for Presentation.
* Tested with unit tests (and manually just in case).
* A follow-up CL will change the FrameProcessor input to be scale and
rotate values as requested by the user. This was kept out of this CL to
reduce CL review size. Presentation will also be implemented in a follow-up
CL.
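A sketch of the interface addition mentioned above; the surrounding interface is simplified and the exact signature is an assumption:

    import android.util.Size;

    interface GlFrameProcessor {
      /**
       * Returns the width and height of frames output by this processor, which may differ
       * from the input dimensions (a prerequisite for Presentation).
       */
      Size getOutputDimensions();
    }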
PiperOrigin-RevId: 434774854
Re-enable tests that don't need muxer support for timestamps going backwards.
Tests running on the B-frame sample will be added in a future commit.
#mse-bug-week
PiperOrigin-RevId: 429599177
This change makes sure played server-side ads are skipped in a single-period
timeline. It avoids creating an ad MediaPeriodInfo for played postrolls and
creates a content info instead. It also sets the end position for content infos
that terminate the stream before the stream is actually finished. This prevents
the player from continuing to play the remaining media delivered by the
MediaPeriod.
We also make sure that discontinuities of played ads are not reported, because
there is actually no discontinuity.
#minor-release
PiperOrigin-RevId: 428734387
If the output sample MIME type is inferred from the input
but is not supported by the muxer, we fall back to transcoding
to a supported sample MIME type.
The audio and video renderers need to make sure not to select the
PassthroughSamplePipeline in this case. Which sample MIME type to
choose is decided by the EncoderFactory.
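A minimal sketch of the fallback decision, where inputFormat is the track's input Format and muxerSupportedMimeTypes is an assumed list of sample MIME types the muxer can write for the track type:

    String outputMimeType = inputFormat.sampleMimeType; // MIME type inferred from the input.
    boolean passthroughAllowed = muxerSupportedMimeTypes.contains(outputMimeType);
    if (!passthroughAllowed) {
      // Transcode instead; in practice the EncoderFactory decides the target MIME type.
      outputMimeType = muxerSupportedMimeTypes.get(0);
    }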
PiperOrigin-RevId: 423272812
The `main` role distinguishes a track from an `alternate`, but unlike
`SELECTION_FLAG_DEFAULT` it doesn't imply the track should be selected
unless user preferences state otherwise. For example, in the case of a text
track, the player shouldn't enable subtitle rendering just because a
`main` text track is present in the manifest.
The `main`/`alternate` distinction is still available through
`Format.roleFlags` and the `ROLE_FLAG_MAIN` and `ROLE_FLAG_ALTERNATE`
values.
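A sketch of how a track selector or app can still act on the role, given a track's Format, without the role alone implying default selection:

    // The main/alternate role is still exposed via roleFlags, but a main track no longer
    // carries SELECTION_FLAG_DEFAULT purely because of its role.
    boolean isMainRole = (format.roleFlags & C.ROLE_FLAG_MAIN) != 0;
    boolean isDefault = (format.selectionFlags & C.SELECTION_FLAG_DEFAULT) != 0;
    if (isMainRole && !isDefault) {
      // Only select this track if user preferences ask for it.
    }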
This behaviour was originally [added in 2.2.0](7f967f3057),
however at the time the `C.RoleFlags` IntDef did not exist. The IntDef
was [added in 2.10.0](a86a9137be).
PiperOrigin-RevId: 418937747
This is better than silently dropping tracks as done previously. Later,
we will implement fallback to transcoding to a supported MIME type.
PiperOrigin-RevId: 418006258
For DASH manifests, we merge min/max live latency values from various
sources and they may not be consistent with each other. To ensure we
use a sensible configuration in all cases, we can add more correctness
checks:
1. Limit the min/max values to fall into the available live window.
2. Ensure that maxLatency >= minLatency in all cases.
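A minimal sketch of both checks, with assumed variable names for the merged values and the live window bounds:

    // 1. Clamp the merged min/max latencies into the available live window.
    long minLatencyUs = Util.constrainValue(mergedMinLatencyUs, windowMinLatencyUs, windowMaxLatencyUs);
    long maxLatencyUs = Util.constrainValue(mergedMaxLatencyUs, windowMinLatencyUs, windowMaxLatencyUs);
    // 2. Ensure maxLatency >= minLatency.
    maxLatencyUs = Math.max(maxLatencyUs, minLatencyUs);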
Issue: google/ExoPlayer#9750
PiperOrigin-RevId: 415282938
Sometimes the empty end-of-stream buffer has a non-zero
data limit. Calling flip first resets the limit to the
position, which is zero in these cases.
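As a sketch of the ByteBuffer semantics relied on here, where buffer is the end-of-stream input buffer (names are assumptions):

    ByteBuffer data = buffer.data; // Assumed non-null for this sketch.
    // For an empty end-of-stream buffer, position is 0, so flipping first guarantees the
    // limit is also 0 and downstream consumers see no stale data.
    data.flip(); // Sets limit = position, then position = 0.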
PiperOrigin-RevId: 413156455
The test extracts and decodes the first video frame in the test media, renders it to the frame editor's input surface, and then processes the data. It then reads back the output from the frame editor, converts it to a bitmap, and compares that with a 'golden' bitmap (which is just the test media's first video frame).
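The readback step can be sketched roughly as follows, using android.opengl.GLES20 and android.graphics.Bitmap; width and height are the output frame dimensions, and the vertical flip between GL and Bitmap coordinates is omitted:

    ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(width * height * 4);
    GLES20.glReadPixels(
        0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
    Bitmap actualBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    actualBitmap.copyPixelsFromBuffer(pixelBuffer);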
PiperOrigin-RevId: 413131811
Follow-up to a comment on
ac8e418f3d
Buffers that are useful to pass to the sample/passthrough pipeline
should contain either data or the end-of-input flag. Otherwise, passing
these buffers along is unnecessary and may even cause the decoder to
allocate a new input buffer, which is wasteful.
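A sketch of the resulting guard, assuming inputBuffer is the DecoderInputBuffer read from the source and that its data has already been flipped so hasRemaining() reflects content:

    // Only pass the buffer along if it carries data or signals the end of the stream.
    boolean shouldPassBuffer =
        inputBuffer.isEndOfStream()
            || (inputBuffer.data != null && inputBuffer.data.hasRemaining());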
PiperOrigin-RevId: 411060709
We already parse essential and supplemental properties from the
Representation, but previously didn't add them to our Representation
class. This change adds them so that they can be accessed by users.
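A sketch of reading the exposed descriptors from a parsed Representation (the property field names follow the manifest element names and are assumptions here):

    for (Descriptor descriptor : representation.essentialProperties) {
      Log.d("DashManifest", descriptor.schemeIdUri + " = " + descriptor.value);
    }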
Issue: google/ExoPlayer#9579
PiperOrigin-RevId: 409961990