These calls have no effect because the VideoFrameReleaseControl of
CompositionPlayer is created with allowedJoiningTimeMs set to 0.
PiperOrigin-RevId: 644274524
This is an internal refactor with no logic changes.
There is duplicated information and a lack of consistency around
the use of test assets in transformer androidTest. This change
refactors each asset to be defined as its own AssetInfo, with the
other relevant details stored alongside the uri.
Once this is in place, we can consider other useful functionality, such
as boolean flags for which tracks an asset has, helper methods for
whether an asset is local or remote, and more. This will reduce the
manual overhead of using more assets in tests, and in particular leads
towards easily using new and existing assets in parameterized tests.
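As an illustration of the shape this could take (the class, fields, helper, and example values below are assumptions for the sketch, not the actual test utilities):

  /** Sketch only: groups a test asset's uri with its other relevant details. */
  public final class AssetInfo {
    public final String uri;
    public final boolean hasAudio;
    public final boolean hasVideo;
    public final int height;

    public AssetInfo(String uri, boolean hasAudio, boolean hasVideo, int height) {
      this.uri = uri;
      this.hasAudio = hasAudio;
      this.hasVideo = hasVideo;
      this.height = height;
    }

    /** Returns whether the asset is fetched over the network rather than bundled locally. */
    public boolean isRemote() {
      return uri.startsWith("http://") || uri.startsWith("https://");
    }
  }

A parameterized test could then iterate over a list of such AssetInfo instances instead of hard-coding uris and per-asset constants in each test.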
PiperOrigin-RevId: 644040595
This CL is a pure refactor of the existing tests; iterative changes
will be done in follow-ups.
The only logic change is that all MP4 assets now have a presentation
height of 360. The old code had some with a height of 480.
PiperOrigin-RevId: 643020773
- Video release should check the buffer timestamp (which is offset by the renderer), rather than the frame timestamp (see the sketch below).
- ImageRenderer should report ended after all of its outputs are released, rather than when it has finished consuming its input.
Add tests for timestamp handling
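To make the first point concrete, a minimal sketch of the relationship (the helper and values are hypothetical, purely for illustration):

  /** Sketch only: relation between a frame timestamp and the renderer-offset buffer timestamp. */
  final class BufferTimestampSketch {
    /** Returns the buffer timestamp the renderer sees for a given frame timestamp. */
    static long bufferTimestampUs(long framePresentationTimeUs, long rendererOffsetUs) {
      // Release decisions compare against this value, not framePresentationTimeUs alone.
      return framePresentationTimeUs + rendererOffsetUs;
    }
  }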
PiperOrigin-RevId: 642587290
To do that, make the VideoSink nullable in MediaCodecVideoRenderer (MCVR).
We want to avoid calling VideoFrameReleaseControl.setClock directly
from MCVR when the sink is enabled. The goal is to handle all
communication with the release control from the sink/sink provider.
PiperOrigin-RevId: 642542063
Rationale for keeping the field package-private:
Since the overall export is already very complex, it is hard to tell from the
output file alone whether the desired processing happened. Now that we
support sequences with "n" media items, it is even more important to know
whether all the media items were processed.
Although the field exposes implementation details, this seems acceptable given the benefit of more detailed testing.
PiperOrigin-RevId: 642337888
Before this change, the timestamps output from composition playback were
offset by the renderer offset. After this change, the offset is removed and the
timestamp behaviour converges with Transformer: the timestamps of
video/image frames follow those of the composition. For example, with a
composition of two 10-second items where the first is clipped by 2 seconds at
the start, the timestamp of the first frame of the second item will be 8 seconds.
PiperOrigin-RevId: 641121358
Ensures a valid progress state is returned: NOT_STARTED should not be
returned once transformer.start has been called, until the export ends.
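For context, the app-facing polling that this constrains follows the standard Transformer.getProgress/ProgressHolder pattern; a sketch (the logging is illustrative):

  import android.util.Log;
  import androidx.media3.transformer.ProgressHolder;
  import androidx.media3.transformer.Transformer;

  final class ProgressPollingSketch {
    /** Polls progress on the application thread after transformer.start() has been called. */
    static void logProgress(Transformer transformer) {
      ProgressHolder progressHolder = new ProgressHolder();
      // After start(), this should not be PROGRESS_STATE_NOT_STARTED until the export ends.
      @Transformer.ProgressState int state = transformer.getProgress(progressHolder);
      if (state == Transformer.PROGRESS_STATE_AVAILABLE) {
        Log.d("ProgressPollingSketch", "progress=" + progressHolder.progress);
      }
    }
  }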
PiperOrigin-RevId: 640533805
Usages removed in this CL are:
- onProcessedStreamChange, which was already called from the VideoSink
(via VideoFrameRenderControl)
- setOutputSurface, which was also already called from the VideoSink
- setFrameRate, which this CL now sets in the VideoSink
PiperOrigin-RevId: 640530903
Removes the flakiness of
MediaItemExportTest.getProgress_unknownDuration_returnsConsistentStates
by using a longer input asset, such that ExoPlayer does not determine
the duration of the media.
PiperOrigin-RevId: 640502470
Add SeparableConvolution.configure(inputSize) to allow effect configuration
depending on input dimensions.
Add LanczosResample.scaleToFit method to scale input images to fit inside
given dimensions.
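For illustration, applying the new resampling effect to an item could look like the sketch below (the exact scaleToFit parameters and the 1280x720 target are assumptions):

  import androidx.media3.common.MediaItem;
  import androidx.media3.effect.LanczosResample;
  import androidx.media3.transformer.EditedMediaItem;
  import androidx.media3.transformer.Effects;
  import com.google.common.collect.ImmutableList;

  final class ScaleToFitSketch {
    /** Builds an item whose frames are scaled to fit inside 1280x720. */
    static EditedMediaItem buildItem(MediaItem mediaItem) {
      return new EditedMediaItem.Builder(mediaItem)
          .setEffects(
              new Effects(
                  /* audioProcessors= */ ImmutableList.of(),
                  /* videoEffects= */ ImmutableList.of(LanczosResample.scaleToFit(1280, 720))))
          .build();
    }
  }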
PiperOrigin-RevId: 640498008
These changes are possible because getProgress is no longer a blocking
operation on the transformer.
* Tests call getProgress after every executed looper message.
* Use longer media assets for getProgress tests to give more progress
intervals.
* Remove conditional assertions.
PiperOrigin-RevId: 639734368
Switch from a 4-channel RGBA_16F lookup texture to a 1-channel R_16F texture.
Do not use a bitmap when creating the lookup table texture.
Instead, fill the texture directly.
Do not manually convert 32-bit floats to 16-bit. Instead, let the OpenGL
implementation do this for us.
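As a sketch of the upload this implies (assuming a texture is already bound to GL_TEXTURE_2D on the current context; not the actual implementation):

  import android.opengl.GLES30;
  import java.nio.FloatBuffer;

  final class LutTextureSketch {
    /** Uploads a 1-channel float lookup table; GL converts the 32-bit floats to 16-bit. */
    static void uploadR16fTexture(int width, int height, FloatBuffer lutValues) {
      GLES30.glTexImage2D(
          GLES30.GL_TEXTURE_2D,
          /* level= */ 0,
          GLES30.GL_R16F,
          width,
          height,
          /* border= */ 0,
          GLES30.GL_RED,
          GLES30.GL_FLOAT,
          lutValues);
    }
  }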
PiperOrigin-RevId: 639717235
AudioGraphInput.onMediaItemChanged is called on the input thread. Pending
media item changes are processed on the processing thread, inside calls to
getOutput().
This change allows multiple pending media item changes to be enqueued
and processed in sequence.
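A minimal sketch of the enqueue/drain pattern this describes (a hypothetical class, not the real AudioGraphInput):

  import java.util.Queue;
  import java.util.concurrent.ConcurrentLinkedQueue;

  final class PendingMediaItemChangesSketch {
    /** Placeholder for the data describing a media item change. */
    static final class MediaItemChange {}

    private final Queue<MediaItemChange> pendingChanges = new ConcurrentLinkedQueue<>();

    /** Called on the input thread; multiple changes may be enqueued before any is processed. */
    void onMediaItemChanged(MediaItemChange change) {
      pendingChanges.add(change);
    }

    /** Called on the processing thread, inside getOutput(), to apply changes in order. */
    void maybeProcessPendingChange() {
      MediaItemChange change = pendingChanges.poll();
      if (change != null) {
        // Apply the change before producing more output.
      }
    }
  }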
PiperOrigin-RevId: 638995291
Degammaing has been removed in cb4b2ea55c. The goldens for
TransformerHdrTest (previously TransformerSequenceEffectTestWithHdr)
were not regenerated because the test wasn't running due to its name
(fixed in e41a966237).
PiperOrigin-RevId: 638645635
Add a DefaultVideoFrameProcessor experimental flag that controls
whether input Bitmaps are sampled once for a repeating sequence of
output frames with the same contents, or once for each output frame.
PiperOrigin-RevId: 637921350
Only sample from input bitmap when the input image has changed.
Introduce GainmapShaderProgram.newImmutableBitmap API that signals
input bitmap changes to GainmapShaderProgram (DefaultShaderProgram).
PiperOrigin-RevId: 637920207
Add a wait in DefaultCodec.signalEndOfInputStream when no video encoder
output has been seen. This avoids a thread synchronization problem between
writing frames to the video surface and signaling end of stream, which was
hit for video input of only one frame on some devices.
PiperOrigin-RevId: 637844690
PQ and HLG have different luminance ranges (a maximum of 10,000 nits and 1,000 nits, respectively). In GL, colors work on a normalised 0-to-1 scale, so for PQ content 1 = 10,000 nits and for HLG content 1 = 1,000 nits.
This CL scales and normalises PQ content appropriately so that all HDR content works in the HLG luminance range. This fixes two things:
1. Conversions between HLG and PQ are "fixed" (previously the output colors looked too bright or too dark, depending on which way you were converting).
2. Color-altering effects will be able to work consistently across HLG and PQ content.
Item 1 is tested in this CL; item 2 will be tested when Ultra HDR overlays are implemented. Both cases have been manually tested to ensure the output looks correct on a screen.
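For reference, the luminance arithmetic only (a sketch; it says nothing about where in the pipeline the scaling is applied):

  /** Sketch only: maps values normalised against the PQ peak (1 == 10,000 nits) onto the HLG scale (1 == 1,000 nits). */
  final class HdrLuminanceSketch {
    private static final float PQ_MAX_NITS = 10_000f;
    private static final float HLG_MAX_NITS = 1_000f;

    static float pqToHlgReferred(float pqNormalised) {
      // E.g. a PQ-normalised value of 0.1 (1,000 nits) maps to 1.0 on the HLG scale.
      return pqNormalised * (PQ_MAX_NITS / HLG_MAX_NITS);
    }
  }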
PiperOrigin-RevId: 636851701
Remove redundant test logic that adds the file size to ExportResult, because
the file size is already added to the export result as part of an export
finishing.
PiperOrigin-RevId: 636499236
This change avoids a muxer deadlock when:
1. There is a sequence of items.
2. The first item has an audio track that is shorter than its video track.
3. The audio finishes, and the muxer refuses to write more than 500ms of video
consecutively.
In this case, SequenceAssetLoader fails to progress to the second item. A muxer
deadlock is possible when the audio of the first item finishes, the audio
end-of-stream is not propagated through the AudioGraph, and the muxer blocks
video, preventing SequenceAssetLoader from moving to the next item in the
sequence.
By triggering silence generation as soon as audio EOS is encountered, we
ensure SequenceAssetLoader can progress to the next item.
PiperOrigin-RevId: 636179966
This parameter was previously set only on images because it was not ignored
for other media types. It was made a no-op for non-images in
7b2a1b4443.
PiperOrigin-RevId: 636078142