Add a check when loading progressive media in case the load
is canceled. If the player is released very early, the progressive
media period may carry on with the initial loading unnecessarily.
PiperOrigin-RevId: 586288385
In `TextOverlay` and `DrawableOverlay`, treat the `Bitmap` as a buffer: allocate it
rarely and reuse it for as long as possible before creating a new one. In
`BitmapOverlay`, avoid allocating GL textures too often as well.
This significantly reduces allocations and memory usage growth (saving ~100-150 MB
for 4K 60fps on a high-end device), at the cost of more code complexity and ~70 MB
more memory on a low-end device, in 1/1 comparisons.
PiperOrigin-RevId: 585990602
In `ExternalTextureManager`, fields are accessed from the GL thread, and the class needs to be constructed on the GL thread.
Also visibly document the threading requirement in the parent class.
PiperOrigin-RevId: 585941284
The issue that motivated adding this (frames unexpectedly being dropped by the
decoder) has been addressed, so we can turn off the logging to reduce
unnecessary allocations during transformation. We can easily turn debug logging
back on in the future as needed by setting `DebugTraceUtil.DEBUG = true`.
Also avoid allocations for string building at logging call sites by passing a
format string for extra info. Formatting the string now only happens when
debugging is turned on.
Tested manually by running transformations in the new state (DEBUG = false) and
with debugging turned on.
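A minimal sketch of the call-site pattern, assuming an illustrative `logEvent` signature rather than the exact `DebugTraceUtil` API:

```java
import java.util.Locale;

/** Illustrative sketch of deferring string formatting until debug logging is enabled. */
final class DebugTraceSketch {
  public static boolean DEBUG = false;

  // Call sites pass a format string plus args instead of a pre-built string, so the
  // String.format only runs when DEBUG is true.
  public static void logEvent(String eventName, long timeUs, String extraFormat, Object... args) {
    if (!DEBUG) {
      return;
    }
    String extra = String.format(Locale.US, extraFormat, args);
    System.out.println(eventName + "@" + timeUs + ": " + extra);
  }
}
```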
PiperOrigin-RevId: 585666349
Serializing bitmap cues is currently broken, but this test is
incorrectly passing. This change introduces the same failure via two
modifications (both are necessary; each one alone still passes):
1. Move from Robolectric to an instrumentation test.
2. Trigger the `Bitmap` to be serialized using a file descriptor, either
   by calling `Bitmap.asShared` in the test when constructing the `Cue`,
   or by constructing the `Bitmap` from a 'real' image byte array instead
   of a 1x1 token image.
Issue: androidx/media#836
PiperOrigin-RevId: 585643486
The decoder and encoder won't accept high values for frame rate, so avoid
setting the key when configuring the decoder, and set a default value for the
encoder (where the key is required).
Also skip SSIM calculation for 4k, where the device lacks concurrent decoding
support.
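A hedged sketch of the idea using the platform `MediaFormat.KEY_FRAME_RATE` key; the threshold and default values are illustrative, not the ones used here:

```java
import android.media.MediaFormat;

/** Illustrative sketch: avoid passing unreasonably high frame rates to the codecs. */
final class FrameRateKeys {
  private static final int MAX_SUPPORTED_FRAME_RATE = 1000; // Illustrative threshold.
  private static final int DEFAULT_ENCODER_FRAME_RATE = 30; // Illustrative default.

  static void configureDecoderFormat(MediaFormat decoderFormat, float frameRate) {
    if (frameRate <= MAX_SUPPORTED_FRAME_RATE) {
      // Only set the key when the value is one the decoder will accept.
      decoderFormat.setFloat(MediaFormat.KEY_FRAME_RATE, frameRate);
    }
    // Otherwise leave the key unset; decoders don't require it.
  }

  static void configureEncoderFormat(MediaFormat encoderFormat, float frameRate) {
    // The encoder requires KEY_FRAME_RATE, so fall back to a sane default instead.
    float value = frameRate <= MAX_SUPPORTED_FRAME_RATE ? frameRate : DEFAULT_ENCODER_FRAME_RATE;
    encoderFormat.setFloat(MediaFormat.KEY_FRAME_RATE, value);
  }
}
```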
PiperOrigin-RevId: 585604976
At least some Android 14 devices still have the same invalid URL issue
when using ClearKey DRM, so might as well apply the check to newer
versions of Android as well.
Using `Integer.MAX_VALUE` risks causing arithmetic overflow in the codec
implementation.
Issue: androidx/media#810
#minor-release
PiperOrigin-RevId: 585104621
We don't get the buffer back after decoding, so we can't make a proper decision
about whether it needs to be dropped. Instead, we do the next best thing and
tell the codec to drop the buffer if its input timestamp is less than the
intended start time.
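One way to express this to a platform codec (on API 34+) is the decode-only input flag; this is a hedged sketch of the idea, not necessarily the exact mechanism used in this change:

```java
import android.media.MediaCodec;
import android.os.Build;

/** Illustrative sketch: mark input buffers before the start time so the codec drops them. */
final class DropEarlyBuffers {
  static void queueInput(
      MediaCodec codec, int inputIndex, int size, long inputTimeUs, long startTimeUs) {
    int flags = 0;
    if (inputTimeUs < startTimeUs && Build.VERSION.SDK_INT >= 34) {
      // Decode-only: the codec decodes the buffer (keeping decoder state correct)
      // but produces no output frame for it.
      flags |= MediaCodec.BUFFER_FLAG_DECODE_ONLY;
    }
    codec.queueInputBuffer(inputIndex, /* offset= */ 0, size, inputTimeUs, flags);
  }
}
```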
PiperOrigin-RevId: 584863144
Move the reflective loading of VideoFrameProcessor from
MediaCodecVideoRenderer to the CompositingVideoSinkProvider. This is so
that all reflective code lives in one place. The
CompositingVideoSinkProvider already has reflective code to load the
default PreviewingVideoGraph.Factory.
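For reference, the reflective-loading pattern looks roughly like this; the helper name and no-arg constructor assumption are illustrative only:

```java
/**
 * Illustrative sketch of reflectively loading a default factory implementation.
 * The assumed no-arg constructor and the helper itself are for illustration only.
 */
final class ReflectiveFactoryLoader {
  static <T> T loadDefaultFactory(String implementationClassName, Class<T> factoryType)
      throws Exception {
    // Class.forName keeps the dependency optional: if the implementation module isn't
    // linked in, this throws and the caller can fall back or fail gracefully.
    Class<?> clazz = Class.forName(implementationClassName);
    return factoryType.cast(clazz.getDeclaredConstructor().newInstance());
  }
}
```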
PiperOrigin-RevId: 584852547
These usages have no need for the double-ended input functionality. All
other usages across media3 are ConcurrentLinkedQueue.
PiperOrigin-RevId: 584841104
In OverlayShaderProgram, this method is used quite a lot, and is the only method from Util.java in this file. Marginally reduce complexity by using a static import instead.
PiperOrigin-RevId: 584828455
Reduce short-lived allocations of potentially large objects, like Bitmap.
Unfortunately, this makes the `TextureOverlay` interface messier, as it now requires a way to signal whether the texture should be flipped vertically.
PiperOrigin-RevId: 584661400
These changes are also compatible with FFmpeg 5.1, which is now the minimum supported version.
Also set the -Wl,-Bsymbolic flag via the target_link_options command, which is more correct.
Split CompositingVideoSinkProvider.VideoSinkImpl into two classes:
- VideoSinkImpl now only receives input from MediaCodecVideoRenderer and
  forwards frames to its connected VideoFrameProcessor.
- VideoFrameRenderControl takes composited frames out of the VideoGraph
  and schedules their rendering.
- CompositingVideoSinkProvider connects VideoSinkImpl with
  VideoFrameRenderControl.
PiperOrigin-RevId: 584605078
This change adds `MediaController.getSessionExtras()` through
which a controller can access the session extras.
The session extras can be set for the entire session when
building the session. This can be overridden for specific
controllers in `MediaSession.Callback.onConnect`.
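A sketch of the intended usage, assuming a `MediaSession.Builder#setSessionExtras` setter matching the description above; the extras key is a hypothetical app-defined value:

```java
import android.content.Context;
import android.os.Bundle;
import androidx.media3.common.Player;
import androidx.media3.session.MediaController;
import androidx.media3.session.MediaSession;

final class SessionExtrasSketch {
  // Session side: set extras for the whole session when building it. Assumes a
  // Builder#setSessionExtras setter matching the description above.
  static MediaSession buildSessionWithExtras(Context context, Player player) {
    Bundle sessionExtras = new Bundle();
    sessionExtras.putBoolean("my_app_feature_enabled", true); // Hypothetical app-defined key.
    return new MediaSession.Builder(context, player).setSessionExtras(sessionExtras).build();
  }

  // Controller side: read the extras through the new getter.
  static boolean readFeatureFlag(MediaController controller) {
    Bundle extras = controller.getSessionExtras();
    return extras.getBoolean("my_app_feature_enabled", /* defaultValue= */ false);
  }
}
```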
PiperOrigin-RevId: 584430419
This change fixes a bug with seeking forward in MIDI. When seeking forward,
the progressive media period attempts to seek within the sample queue, if a
key-frame exists before the seek position. With MIDI, however, we can
only skip Note-On and Note-Off samples; all other samples must be sent
to the MIDI decoder.
When seeking outside the sample queue, the MidiExtractor already
instructs the player to start from the beginning of the MIDI input. With
this change, only the first output sample is a key-frame, so the
progressive media period can no longer seek within the sample queue and
is always forced to seek from the start of the MIDI input.
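A hedged sketch of the key-frame flagging idea using the standard `TrackOutput.sampleMetadata` flags (not the actual `MidiExtractor` code):

```java
import androidx.media3.common.C;
import androidx.media3.extractor.TrackOutput;

/** Illustrative sketch: mark only the first output sample as a key-frame. */
final class MidiSampleFlagsSketch {
  private boolean firstSampleOutput;

  void outputSample(TrackOutput trackOutput, long timeUs, int size) {
    int flags = firstSampleOutput ? 0 : C.BUFFER_FLAG_KEY_FRAME;
    firstSampleOutput = true;
    // With no later key-frames, the progressive media period can't seek inside the
    // sample queue and always seeks from the start of the MIDI input instead.
    trackOutput.sampleMetadata(timeUs, flags, size, /* offset= */ 0, /* cryptoData= */ null);
  }
}
```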
Issue: androidx/media#704
#minor-release
PiperOrigin-RevId: 584321443
The live offset override is used to replace the media-defined
live offset after a user seek, to ensure the live speed adjustment
targets the new user-provided live offset and doesn't go back to the
original one.
However, the code currently clips the override to the min/max
live offsets defined in LiveConfiguration. This is useful for
clipping the default value (in case of inconsistent values in the media),
but the clipping shouldn't be applied to user overrides, because
the player will then adjust the position back to the min/max
and won't stay at the desired user position.
See 2416d99857 (r132871601)
#minor-release
PiperOrigin-RevId: 584311004
Composition and EditedMediaItemSequence don't allow empty lists in their main
constructors, so the vararg API shouldn't either. This is more in line with
Effective Java Item 53.
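The Item 53 pattern referenced above, sketched with simplified types for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Illustrative sketch of a vararg API that statically requires at least one element. */
final class SequenceSketch {
  private final List<String> items;

  // Requiring the first element as a separate parameter makes an empty call a
  // compile-time error rather than a runtime check (Effective Java, Item 53).
  SequenceSketch(String firstItem, String... remainingItems) {
    List<String> list = new ArrayList<>();
    list.add(firstItem);
    list.addAll(Arrays.asList(remainingItems));
    this.items = Collections.unmodifiableList(list);
  }

  List<String> getItems() {
    return items;
  }
}
```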
PiperOrigin-RevId: 583415124
When transmuxing, the `EncodedSampleExporter` maintains a queue of input
buffers that get filled with encoded data by the asset loader. The number of
buffers was limited to avoid using more and more memory if the producer (asset
loader) gets far ahead of the consumer (exporter).
Previously this limit was fixed at 10 buffers, but increasing the number of
buffers can make some transmux operations much faster. Allow allocating between
a min and max number of buffers, and also set a target allocation size beyond
which new buffers can't be allocated. This allows audio formats which require
many small buffers to be processed more quickly, while preventing allocating
too much memory for hypothetical very high bitrate formats.
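A sketch of that allocation policy; the constants are illustrative, not the real limits:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative sketch of bounded input-buffer allocation for transmuxing. */
final class InputBufferPoolSketch {
  private static final int MIN_BUFFER_COUNT = 10; // Illustrative.
  private static final int MAX_BUFFER_COUNT = 200; // Illustrative.
  private static final long TARGET_TOTAL_BYTES = 2 * 1024 * 1024; // Illustrative.

  private final Queue<ByteBuffer> availableBuffers = new ArrayDeque<>();
  private int allocatedCount;
  private long allocatedBytes;

  /** Returns a buffer for the asset loader to fill, or null if the pool is exhausted. */
  ByteBuffer dequeueInputBuffer(int capacity) {
    ByteBuffer buffer = availableBuffers.poll();
    if (buffer != null) {
      return buffer;
    }
    // Always allow a minimum number of buffers; beyond that, only allocate while the
    // total allocated size stays under the target, up to a hard maximum.
    boolean canAllocate =
        allocatedCount < MIN_BUFFER_COUNT
            || (allocatedCount < MAX_BUFFER_COUNT && allocatedBytes < TARGET_TOTAL_BYTES);
    if (!canAllocate) {
      return null;
    }
    allocatedCount++;
    allocatedBytes += capacity;
    return ByteBuffer.allocateDirect(capacity);
  }

  /** Returns a buffer to the pool once the exporter has consumed it. */
  void releaseInputBuffer(ByteBuffer buffer) {
    buffer.clear();
    availableBuffers.add(buffer);
  }
}
```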
'Remove video' edits on local videos in particular get much faster, because
audio buffers are very short and there are lots of them. With a sample 10
minute video, a 'remove video' edit took 2 seconds (36 seconds before this
change). With a sample 1 minute video, removing video took 0.25 seconds after this
change (2.5 seconds before).
The speed improvement is smaller for other types of edits that retain the video
track. Transmuxing a 10 minute video retaining the video track took 26 seconds
(40 seconds before).
PiperOrigin-RevId: 583390284