VideoFrameProcessor now treats BT601 and BT709 as roughly equivalent, so we
shouldn't make checks that equate BT709 with requiring tone-mapping.
Also, the color transfer is a better determinant for tone-mapping than the color range, so use only the transfer to decide whether tone-mapping is required.
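A minimal, self-contained sketch of the decision described above, using
hypothetical names rather than the actual media3 classes: tone-mapping is keyed
off the transfer function alone, so BT601 vs BT709 primaries and the color range
no longer matter.

```
// Hypothetical sketch only; names do not match the real media3 code.
enum Transfer { SDR_VIDEO, GAMMA_2_2, HLG, PQ }

final class ToneMapDecisionSketch {
  /** Tone-mapping is only required when converting an HDR transfer down to an SDR one. */
  static boolean requiresToneMapping(Transfer input, Transfer output) {
    boolean inputIsHdr = input == Transfer.HLG || input == Transfer.PQ;
    boolean outputIsHdr = output == Transfer.HLG || output == Transfer.PQ;
    return inputIsHdr && !outputIsHdr;
  }

  private ToneMapDecisionSketch() {}
}
```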
PiperOrigin-RevId: 603083100
This happens when using `ExoPlayer.setVideoEffects()`.
This CL also fixes the problem of the first frame not being rendered, originally
solved in 7e65cce967 but rolled back in 5056dfaa2b because that solution
introduced the flash observed in b/292111083.
Before the media3 1.1 release, the output size of `VideoFrameProcessor` was not
reported to the app. This changed after `CompositingVideoSinkProvider` was
introduced, after which the video size after processing **is** reported to the
app. After this CL, the size is again not reported.
PiperOrigin-RevId: 602345087
Many usages are needed to support other deprecations, and some
can be replaced by the recommended direct alternative.
Also replace links to the deprecated/redirected dev site.
PiperOrigin-RevId: 601795998
Since we can now take in the inputColor when registering the stream, there is no need to hardcode the image input color anymore. We should now be able to support switching between texture and image input, which we couldn't do before, but this is untested and not necessary.
PiperOrigin-RevId: 601784149
This also fixes an issue introduced by frames being released from a prior version
of a GlShaderProgram.
Tested by seeking within a playlist with one SDR then one HDR video.
PiperOrigin-RevId: 599475959
Previously, 8f69bb0d9d updated external input (video input)
but not internal input (image/texture input). Update internal input as
well to match.
PiperOrigin-RevId: 599235813
7e65cce967 introduced a regression in ExoPlayer.setVideoEffects()
where there is a flash on the screen after the first few frames are shown.
Before 7e65cce967, the first frames of the content are missed until
MediaCodecVideoRenderer sends the onVideoSizeChanged() callback. The
first frame is processed but not shown; then onVideoSizeChanged() is
triggered and the renderer receives a video output resolution message as
a response from the UI.
7e65cce967 fixed the missed first frames by setting a surface on the
CompositingVideoSinkProvider before the provider is initialized, and as
a result:
- the first frames are rendered
- the MediaCodecVideoRenderer sends the onVideoSizeChanged() callback after
frames are shown
- the UI sends a video output resolution change
- the MediaCodecVideoRenderer updates the CompositingVideoSinkProvider,
which causes the flash.
The underlying problem is with onVideoSizeChanged() not being
triggered early enough, before the first frame is shown.
Because the flashing is a regression, this commit reverts
7e65cce967 but keeps the test that was added (as ignored), so that the
test can be re-enabled as soon as we address the underlying issue.
PiperOrigin-RevId: 597814013
Partially addresses the following TODO by simplifying the DefaultShaderProgram
API surface.
```
// TODO(b/274109008): Refactor DefaultShaderProgram to create a class just for sampling.
```
PiperOrigin-RevId: 597575575
Whenever the inputColorInfo updates, update the samplingGlShaderProgram.
Also, allow either SDR or gamma 2.2 to be used as the `outputColorInfo` transfer
when requesting HDR->SDR tone-mapping. This is required because we can't update
the `outputColorInfo`, but we plan to always use gamma 2.2 for the
`outputColorInfo` in the future.
This allows VideoFrameProcessor to work as-is for ExoPlayer previewing, but
only when not seeking. As we haven't plumbed the per-stream inputColorInfo from
ExoPlayer down to VFP.registerInputStream, follow-up CLs will be needed to
properly support previewing with a changing inputColorInfo.
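A small sketch of the relaxed output check described above, with hypothetical
names: when tone-mapping HDR input down to SDR, accept either a standard SDR
transfer or gamma 2.2 as the requested output transfer.

```
// Hypothetical names; not the actual DefaultVideoFrameProcessor check.
enum OutputTransfer { SDR_VIDEO, GAMMA_2_2, HLG, PQ }

final class ToneMapOutputCheckSketch {
  static boolean isAllowedSdrToneMapOutput(OutputTransfer outputTransfer) {
    return outputTransfer == OutputTransfer.SDR_VIDEO
        || outputTransfer == OutputTransfer.GAMMA_2_2;
  }

  private ToneMapOutputCheckSketch() {}
}
```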
PiperOrigin-RevId: 596627890
Instead, for input videos, use the colorInfo provided by the extractor. Similarly, for input images, use sRGB, the only color currently in use.
Textures do still need the input ColorInfo provided though.
PiperOrigin-RevId: 592875967
Move `inputColorInfo` from `VideoFrameProcessor`'s `Factory.create` to `FrameInfo`,
input via `registerInputStream`.
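A rough before/after sketch of the API shape, using simplified placeholder types
that only mirror the description above (not the real media3 signatures):

```
// Hypothetical, simplified sketch: the per-stream color now travels with the
// frame info passed to registerInputStream instead of being fixed at
// factory-create time.
final class FrameInfoSketch {
  final String colorInfo; // Stands in for ColorInfo.
  final int width;
  final int height;

  FrameInfoSketch(String colorInfo, int width, int height) {
    this.colorInfo = colorInfo;
    this.width = width;
    this.height = height;
  }
}

interface VideoFrameProcessorSketch {
  // Before: Factory.create(inputColorInfo, ...) fixed one input color for the
  // whole processor. After: each input stream declares its own color here.
  void registerInputStream(int inputType, FrameInfoSketch frameInfo);
}
```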
Also, manually tested on exoplayer demo that setVideoEffects still works.
PiperOrigin-RevId: 591273545
This is because currently
1. Player sets a surfaceView to render to
2. Player initializes the renderer
3. MCVR initializes the VideoSinkProvider, and by extension the VideoGraph
But when 1 happens, MCVR doesn't set the surfaceView on the VideoGraph because
it's not initialized. Consequently, after the VideoGraph is initialized, it
doesn't have a surface to render to, and thus drops the first few frames.
Also adds a test for the first frame to verify that the correct first frame is rendered.
PiperOrigin-RevId: 591228174
Otherwise, there's a memory leak of ~30MB, as this is never released.
This likely used to be considered released as part of what now became
`intermediateGlShaderPrograms`, but its release was missed after we split
`finalShaderProgramWrapper` out from the larger glShaderProgram list.
PiperOrigin-RevId: 590954785
To prepare to move `inputColorInfo` from `VFP.Factory.create` to
`VFP.registerInputStream`, move all usage of `inputColorInfo` to be *after*
`registerInputStream`.
To do this, defer creation of `externalShaderProgram` instances, which require
`inputColorInfo`. However, we must still initialize `InputSwitcher` and OpenGL
ES 3.0 contexts in the VFP create, to create and request the input surface from
ExternalTextureManager.
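A minimal sketch of the deferral pattern, with placeholder types rather than the
actual DefaultVideoFrameProcessor code:

```
// Placeholder types; only the ordering matters: GL setup stays in create(),
// while the color-dependent shader program is created lazily once
// registerInputStream supplies the input color.
final class DeferredSamplingSketch {
  private Object externalShaderProgram; // Stands in for the real shader program.

  void create() {
    // Initialize the input switcher and GL context here; neither depends on the
    // per-stream input color, so they can still be set up eagerly.
  }

  void registerInputStream(String inputColorInfo) {
    if (externalShaderProgram == null) {
      externalShaderProgram = createExternalShaderProgram(inputColorInfo);
    }
  }

  private static Object createExternalShaderProgram(String inputColorInfo) {
    return "external-shader-program(" + inputColorInfo + ")"; // Placeholder creation.
  }
}
```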
PiperOrigin-RevId: 590937251
Despite OpenGL ES 3.0 not being required on API 29+, it was experimentally
determined to always be supported on our API 29+ test devices.
That said, still fall back to OpenGL ES 2.0 if 3.0 is not supported,
just in case.
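A rough sketch of the fallback (not the actual GlUtil code): request an ES 3.0
context first, and retry with ES 2.0 if context creation fails.

```
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

// Sketch only: try ES 3.0 first, then fall back to ES 2.0.
final class EglContextFallbackSketch {
  static EGLContext createContext(EGLDisplay display, EGLConfig config) {
    EGLContext context = createContextForVersion(display, config, /* version= */ 3);
    if (context == null || context.equals(EGL14.EGL_NO_CONTEXT)) {
      context = createContextForVersion(display, config, /* version= */ 2);
    }
    return context;
  }

  private static EGLContext createContextForVersion(
      EGLDisplay display, EGLConfig config, int version) {
    int[] contextAttributes = {EGL14.EGL_CONTEXT_CLIENT_VERSION, version, EGL14.EGL_NONE};
    return EGL14.eglCreateContext(
        display, config, EGL14.EGL_NO_CONTEXT, contextAttributes, /* offset= */ 0);
  }

  private EglContextFallbackSketch() {}
}
```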
PiperOrigin-RevId: 590569772
Before, a translucent overlay over an opaque video would result in a
translucent output. This is not consistent with the physical properties of light
(if you put a translucent object in front of an opaque object, you can't see
behind the opaque object).
This change uses the mixing properties from DefaultVideoCompositor.
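For illustration, a standard Porter-Duff "source over" mix, which is assumed to
be the behavior described here rather than copied from DefaultVideoCompositor:
with an opaque destination (dstA = 1), the output alpha is srcA + 1 * (1 - srcA)
= 1, so the result stays opaque.

```
// Illustrative sketch of non-premultiplied "source over" compositing.
final class SourceOverSketch {
  /** Each array is {r, g, b, a} with non-premultiplied components in [0, 1]. */
  static float[] sourceOver(float[] src, float[] dst) {
    float srcA = src[3];
    float dstA = dst[3];
    float outA = srcA + dstA * (1 - srcA);
    float[] out = new float[4];
    for (int i = 0; i < 3; i++) {
      out[i] = outA == 0 ? 0 : (src[i] * srcA + dst[i] * dstA * (1 - srcA)) / outA;
    }
    out[3] = outA;
    return out;
  }

  private SourceOverSketch() {}
}
```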
PiperOrigin-RevId: 586636275
In `TextOverlay` and `DrawableOverlay`, treat `Bitmap` as a buffer, where we
allocate it rarely and reuse it as long as possible before making a new one.
In `BitmapOverlay`, avoid allocating GL textures too often as well.
Strongly reduces allocations and memory-usage growth (saving ~100-150 MB on
4K 60fps content on high-end devices), at the cost of more code complexity and
~70 MB more memory on low-end devices, in 1/1 comparisons.
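A rough sketch of the buffer-style reuse described above (not the actual overlay
code): keep one Bitmap around and only reallocate when the required size changes.

```
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;

// Sketch only: reuse the Bitmap across frames instead of allocating a new one.
final class ReusableBitmapSketch {
  private Bitmap bitmap;

  Canvas beginDrawing(int width, int height) {
    if (bitmap == null || bitmap.getWidth() != width || bitmap.getHeight() != height) {
      bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    } else {
      bitmap.eraseColor(Color.TRANSPARENT); // Clear previous contents instead of reallocating.
    }
    return new Canvas(bitmap);
  }

  Bitmap getBitmap() {
    return bitmap;
  }
}
```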
PiperOrigin-RevId: 585990602
In `ExternalTextureManager`, fields are accessed from the GL thread and the class
needs to be constructed on the GL thread.
Also visibly document the threading requirement in the parent class.
PiperOrigin-RevId: 585941284
The issue that motivated adding this (frames unexpectedly being dropped by the
decoder) has been addressed, so we can turn off the logging to reduce
unnecessary allocations during transformation. We can easily turn on debug
logging in future as needed by setting `DebugTraceUtil.DEBUG = true`.
Also avoid allocations for string building at logging call sites by passing a
format string for extra info. Formatting the string now only happens when
debugging is turned on.
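A sketch of the logging pattern described above, with hypothetical names rather
than DebugTraceUtil's actual API: callers pass a format string plus arguments,
and the string is only formatted when debug tracing is enabled.

```
import java.util.Locale;

// Hypothetical sketch: no String.format allocation on the non-debug path.
final class DebugTraceSketch {
  public static boolean DEBUG = false;

  public static void logEvent(
      String eventName, long presentationTimeUs, String extraFormat, Object... extraArgs) {
    if (!DEBUG) {
      return; // Skip formatting entirely when tracing is off.
    }
    String extra = String.format(Locale.US, extraFormat, extraArgs);
    System.out.println(eventName + "@" + presentationTimeUs + ": " + extra);
  }

  private DebugTraceSketch() {}
}
```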
Tested manually by running transformations in the new state (DEBUG = false) and
with debugging turned on.
PiperOrigin-RevId: 585666349
Move the reflective loading of VideoFrameProcessor from
MediaCodecVideoRenderer to the CompositingVideoSinkProvider. This is so
that all reflective code lives in one place. The
CompositingVideoSinkProvider already has reflective code to load the
default PreviewingVideoGraph.Factory.
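A generic sketch of the reflective-loading pattern referred to above; the class
name passed in and the loaded type are placeholders, not the real media3 ones.

```
import java.lang.reflect.Constructor;

// Sketch only: load a factory class by name using its no-arg constructor.
final class ReflectiveLoaderSketch {
  @SuppressWarnings("unchecked")
  static <T> T loadWithDefaultConstructor(String className) throws ReflectiveOperationException {
    Class<?> clazz = Class.forName(className);
    Constructor<?> constructor = clazz.getDeclaredConstructor();
    return (T) constructor.newInstance();
  }

  private ReflectiveLoaderSketch() {}
}
```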
PiperOrigin-RevId: 584852547
In OverlayShaderProgram, this method is used quite a lot, and is the only method from Util.java in this file. Marginally reduce complexity by using a static import instead.
PiperOrigin-RevId: 584828455
Reduce short-lived allocations of potentially large objects, like Bitmap.
Unfortunately, this does make the TextureOverlay interface messier, requiring a
way to signal whether the texture should be flipped vertically.
Split CompositingVideoSinkProvider.VideoSinkImpl into two classes:
- VideoSinkImpl now only receives input from MediaCodecVideoRenderer and
forwards frames to its connected VideoFrameProcessor
- VideoFrameRenderControl takes composited frames out of the VideoGraph
and schedules their rendering
- CompositingVideoSinkProvider connects VideoSinkImpl with
VideoFrameRenderControl.
PiperOrigin-RevId: 584605078
Use renderengine's PQ to SDR tone-mapping implementation instead
of the naive implementation from before.
This improves luminance on highlights, as seen in the test image.
PiperOrigin-RevId: 582318045
All methods in VideoFrameProcessor are expected to be called by the owning thread,
as far as I understand (vs. 10 threads each queuing frames/textures/streams, which
would invalidate the blocking done by registerInputStream and flush).
PiperOrigin-RevId: 582295240
Throw when flush() is called with no active input, for example
before an input stream is registered or after all output streams have
ended.
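A minimal sketch of the new precondition, with a hypothetical field and message
rather than the actual code:

```
// Sketch only: track whether there is an active input stream and reject flush otherwise.
final class FlushPreconditionSketch {
  private boolean hasActiveInput;

  void registerInputStream() {
    hasActiveInput = true;
  }

  void signalEndOfInput() {
    hasActiveInput = false;
  }

  void flush() {
    if (!hasActiveInput) {
      throw new IllegalStateException("flush() called with no active input stream");
    }
    // ... flush the pipeline ...
  }
}
```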
PiperOrigin-RevId: 577165419