Rename AudioMixerImpl to DefaultAudioMixer.
Removes the AudioMixerImpl-specific getOutputAudioFormat, as the caller
defines and sets the output audio format.
PiperOrigin-RevId: 555887722
This ensures that, for each output bitmap from the compositor, the right
input timestamps were used.
This is only applied to a subset of tests, to avoid uploading and
maintaining too many files (and too much size) in the test binary,
especially where it would test duplicate behavior.
PiperOrigin-RevId: 555222530
This means we now require 2+ input frames per input, and compare the primary
stream timestamp with secondary stream timestamps in order to select the
correct output timestamp. We also must release frames, and propagate
back-pressure signals, as soon as possible to avoid blocking upstream VFPs.
Also, improve signalling of VFP onReadyToAcceptInputFrame.
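A rough sketch of the selection described above (names and structure are
hypothetical, not the actual Compositor code): the primary timestamp is
compared against at least two buffered secondary timestamps, and older
secondary frames are released as soon as a closer match exists.

```java
import java.util.Deque;
import java.util.Iterator;

// Illustrative sketch only, not the actual Compositor implementation.
final class OutputTimestampSelection {

  /**
   * Returns the secondary-stream timestamp to composite with {@code primaryUs},
   * or {@code null} if more secondary frames must be queued before deciding.
   */
  static Long selectSecondaryTimestampUs(long primaryUs, Deque<Long> secondaryTimestampsUs) {
    // 2+ buffered secondary frames are needed so we can tell whether the oldest
    // one is the closest match, or is about to be superseded by the next frame.
    while (secondaryTimestampsUs.size() >= 2) {
      Iterator<Long> iterator = secondaryTimestampsUs.iterator();
      long first = iterator.next();
      long second = iterator.next();
      if (Math.abs(second - primaryUs) < Math.abs(first - primaryUs)) {
        // The next secondary frame is a closer match; release the older frame
        // as soon as possible to avoid blocking the upstream VFP.
        secondaryTimestampsUs.removeFirst();
      } else {
        return first;
      }
    }
    return null;
  }
}
```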
PiperOrigin-RevId: 553448965
ExoPlayer queues the EOS buffer to the decoder with offset/size/timestamp all
equal to zero, and an EOS flag.
69769c77b3 set TIME_END_OF_SOURCE on the EOS buffer from the extractor.
Queueing the EOS buffer to the decoder with TIME_END_OF_SOURCE causes some
decoders to produce wrong timestamps in their output.
This CL replicates what ExoPlayer does in Transformer.
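For reference, a minimal sketch of that queueing convention using the platform
MediaCodec API (the wrapper class below is illustrative; the actual change is
inside Transformer's codec handling):

```java
import android.media.MediaCodec;

// Queue the end-of-stream buffer with offset/size/timestamp all zero, plus the
// EOS flag, instead of propagating a TIME_END_OF_SOURCE timestamp to the decoder.
final class EndOfStreamQueueing {
  static void queueEndOfStream(MediaCodec decoder, int inputBufferIndex) {
    decoder.queueInputBuffer(
        inputBufferIndex,
        /* offset= */ 0,
        /* size= */ 0,
        /* presentationTimeUs= */ 0,
        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
  }
}
```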
PiperOrigin-RevId: 553104213
This allows for custom implementations of this interface, like
a TestVideoCompositor or a partner-provided implementation.
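A hypothetical sketch of the interface shape this enables; method names and
signatures are illustrative, not the exact media3 API:

```java
// Call sites depend on the interface, so tests and partners can swap in their
// own implementation alongside the default one.
public interface VideoCompositor {
  /** Registers a new input source and returns its id. */
  int registerInputSource();

  /** Queues an input frame (identified by a GL texture) from the given source. */
  void queueInputTexture(int inputId, int glTextureId, long presentationTimeUs);

  /** Signals that the given source will send no more frames. */
  void signalEndOfInputSource(int inputId);

  /** Releases all resources. */
  void release();
}
```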
PiperOrigin-RevId: 552541631
After this change, every queued bitmap is treated as an individual input stream
(like a new MediaItem).
This change merges the FrameDropTest and FrameDropPixelTest into one (while maintaining all the test cases)
- This is accomplished by generating bitmaps with timestamps on them in FrameDropTest and comparing them with goldens (one may call this a pixel test; let me know if you want this to be renamed)
- Most of the change comes from DefaultVideoFrameProcessorVideoFrameRenderingTest. The overall flow is:
- We bypass the input manager
- The TestFrameGenerator generates frames based on timestamps. In this case, we generate frames with timestamps on them
- The generated frames are sent to the texture output and in turn saved as bitmaps
- We then compare the generated bitmaps with the goldens
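A simplified sketch of that last comparison step, using plain android.graphics
APIs and a hypothetical per-pixel threshold (the real test uses the library's
pixel-difference test utilities rather than this loop):

```java
import android.graphics.Bitmap;
import android.graphics.Color;

// Illustrative golden comparison: bitmaps match if every pixel's summed RGB
// difference stays under the given threshold.
final class GoldenComparison {
  static boolean matchesGolden(Bitmap actual, Bitmap golden, int maxPerPixelDiff) {
    if (actual.getWidth() != golden.getWidth() || actual.getHeight() != golden.getHeight()) {
      return false;
    }
    for (int y = 0; y < actual.getHeight(); y++) {
      for (int x = 0; x < actual.getWidth(); x++) {
        int a = actual.getPixel(x, y);
        int g = golden.getPixel(x, y);
        int diff =
            Math.abs(Color.red(a) - Color.red(g))
                + Math.abs(Color.green(a) - Color.green(g))
                + Math.abs(Color.blue(a) - Color.blue(g));
        if (diff > maxPerPixelDiff) {
          return false;
        }
      }
    }
    return true;
  }
}
```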
PiperOrigin-RevId: 551795770
Move shared test logic to the test runner.
This does increase indirection, which isn't usually preferable in tests.
However, we will have many different tests that would use logic
like this, so this allows us to reduce repetition.
PiperOrigin-RevId: 551536438
This should make no functional difference because `SampleExporter` always
checks for H.265 and H.264 first. However, in case we ever change that code,
these are used in priority order so it's better to order them accordingly.
PiperOrigin-RevId: 550894935
When generating silence for the AudioProcessingPipeline, the audio pipeline
never queued EOS downstream.
Relatedly, when silence followed an item with audio, the silence
was added to SilentAudioGenerator before the mediaItem reconfiguration
occurred. If the silence had effects, the APP would be flushed after the
silence queued EOS, resetting APP.isEnded back to false, so the AudioGraph
never ended.
Regression tests reproduce failure without fix, but pass with it.
PiperOrigin-RevId: 550853714
For each event, the timestamp and presentation time are logged. The trace can
then be dumped to a TSV file and easily imported into a spreadsheet.
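A minimal sketch of that dump format, with a hypothetical event holder: one
tab-separated row per event, so the file opens directly in a spreadsheet.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.List;

// Illustrative TSV dump: header row plus one row per logged event.
final class TraceTsvDump {

  static final class TraceEvent {
    final String name;
    final long timestampMs;
    final long presentationTimeUs;

    TraceEvent(String name, long timestampMs, long presentationTimeUs) {
      this.name = name;
      this.timestampMs = timestampMs;
      this.presentationTimeUs = presentationTimeUs;
    }
  }

  static void dumpTraceAsTsv(List<TraceEvent> events, Writer writer) throws IOException {
    writer.write("event\ttimestampMs\tpresentationTimeUs\n");
    for (TraceEvent event : events) {
      writer.write(event.name + "\t" + event.timestampMs + "\t" + event.presentationTimeUs + "\n");
    }
  }
}
```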
PiperOrigin-RevId: 550839156
signalEndOfInputStream is needed for when streams have different numbers of
frames, so that if the primary stream finishes after a secondary stream, it
can end without waiting indefinitely for the secondary stream's matching
timestamps.
onEnded mirrors this API on the output side, which will be necessary to
know when to call signalEndOfInput on downstream components (e.g. downstream
VideoFrameProcessors).
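An illustrative sketch of how the output-side callback can drive the downstream
signal; the listener shapes below are assumptions, and only the onEnded and
signalEndOfInput names come from this description:

```java
// Forwards "compositor output ended" into "no more input for the downstream
// component" (e.g. a VideoFrameProcessor). Sketch only, not the media3 API.
final class EndOfStreamForwarding {

  interface CompositorListener {
    /** Called once the compositor has composited and output its final frame. */
    void onEnded();
  }

  interface DownstreamInput {
    /** Tells the downstream component that no more frames are coming. */
    void signalEndOfInput();
  }

  static CompositorListener forward(DownstreamInput downstream) {
    return downstream::signalEndOfInput;
  }
}
```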
PiperOrigin-RevId: 549969933
AudioMixingUtil#mix handles input & output in float or Int16 PCM. Given that
Float and Int16 use different sample value ranges, this util handles
conversion between the two, based on the encoding being mixed to.
Migrate AudioMixer to use the util, removing AudioMixingAlgorithm
interface and implementation. ChannelMixingAudioProcessor will be
migrated after additional performance checks.
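A sketch of the sample-range conversion in question (illustrative only, not the
AudioMixingUtil implementation): 16-bit PCM samples live in [-32768, 32767]
while float PCM samples live in [-1.0, 1.0], so mixing requires rescaling to
the output encoding.

```java
// Converts individual PCM samples between the Int16 and float ranges.
final class PcmRangeConversion {

  static float int16ToFloat(short sample) {
    return sample / 32768f;
  }

  static short floatToInt16(float sample) {
    // Clamp to avoid overflow, then scale back into the 16-bit range.
    float clamped = Math.max(-1f, Math.min(1f, sample));
    return (short) (clamped * 32767f);
  }
}
```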
PiperOrigin-RevId: 548994584
This serves as a test with more frames for the compositor for now
(before more varied video system tests come later when integrating with
Transformer), showing it doesn't error out and outputs the right
number of frames.
Due to the VFPTestRunner having a 5s timeout, and mostly due to presubmit
emulators being very slow with OpenGL, there is a limit to how many frames
this type of test can have, depending on the test target.
PiperOrigin-RevId: 548159762
SingleInputVideoGraph implements GraphInput now, so the asset loaders
interface directly with SIVG, rather than VideoSampleExporter. This is to pave
the way for multi-asset video processing.
PiperOrigin-RevId: 547561042
* Allow more than one input bitmap at a time.
* Allow Compositor to take in and set up an Executor (see the sketch after this
list). Otherwise, Compositor resources may be created on one thread and accessed on another.
* Add a Compositor TestRunner to reuse test code more.
* Update VideoFrameProcessingTaskExecutor to use a new onError listener, so
that it's more reusable in non-DVFP contexts, like for Compositor.
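As a rough illustration of the executor point above (not the actual Compositor
code), all GL work is confined to a single executor thread so resources are
created and used on the same thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Every GL call (setup, drawing, teardown) goes through the same thread.
final class GlThreadConfinementExample {
  private final ExecutorService glExecutor = Executors.newSingleThreadExecutor();

  void queueGlTask(Runnable glTask) {
    glExecutor.submit(glTask);
  }

  void release() {
    glExecutor.shutdown();
  }
}
```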
PiperOrigin-RevId: 547206053
durationUs is almost always going to be a larger number than the sample
rate, so pass it as the main value, rather than as the multiplier.
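A worked sketch of the ordering concern, assuming a hypothetical
scale(value, multiplier, divisor) helper that cancels a common factor between
the main value and the divisor before multiplying; passing the larger
durationUs as the main value gives it the most room to be reduced before the
multiplication, keeping intermediate values small.

```java
// Illustrative only: computes value * multiplier / divisor while reducing the
// main value against the divisor first to limit overflow risk.
final class SampleCountFromDuration {
  private static final long MICROS_PER_SECOND = 1_000_000;

  static long scale(long value, long multiplier, long divisor) {
    long commonFactor = gcd(value, divisor);
    return (value / commonFactor) * multiplier / (divisor / commonFactor);
  }

  static long durationUsToSampleCount(long durationUs, int sampleRate) {
    // durationUs is usually much larger than sampleRate, so it goes in the
    // "main value" slot; sampleRate is the multiplier.
    return scale(durationUs, sampleRate, MICROS_PER_SECOND);
  }

  private static long gcd(long a, long b) {
    while (b != 0) {
      long t = a % b;
      a = b;
      b = t;
    }
    return a;
  }
}
```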
PiperOrigin-RevId: 547193927