This brings together the various details about a muxer track, reducing the
need for additional variables and complicated track bookkeeping.
PiperOrigin-RevId: 499872145
The old names are not really correct anymore because:
- The Audio/VideoTranscodingSamplePipelines do not decode anymore.
- The pipelines now mux the encoded data.
PiperOrigin-RevId: 499498446
Whilst testing fallback functionality, I found that we were aggressively
reducing the resolution if it was not supported. A quick test showed that we
could reduce by much smaller increments. Performance-wise, these checks appear
to be very quick.
The code for checking supported sizes was duplicated, with one case (the 2/3
case) having a bug because of this duplication. This CL abstracts the checks
into a loop.
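A minimal sketch of the de-duplicated check, assuming a hypothetical helper;
the actual scale factors and encoder query may differ:

  import android.util.Size;
  import androidx.annotation.Nullable;

  /** Tries progressively smaller scale factors until the encoder supports the size. */
  @Nullable
  static Size findSupportedSize(int width, int height, SizeChecker checker) {
    // Illustrative factors only; the real increments may be finer.
    float[] scaleFactors = {1f, 0.875f, 0.75f, 0.625f, 0.5f};
    for (float factor : scaleFactors) {
      int scaledWidth = Math.round(width * factor);
      int scaledHeight = Math.round(height * factor);
      if (checker.isSizeSupported(scaledWidth, scaledHeight)) {
        return new Size(scaledWidth, scaledHeight);
      }
    }
    return null; // No supported size found; the caller falls back further or fails.
  }

  interface SizeChecker {
    boolean isSizeSupported(int width, int height);
  }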
PiperOrigin-RevId: 499497646
When a listener method is deprecated, the new method should (by default) call
through to the deprecated one. This is because any class that implements the
now-deprecated method still needs to receive that callback.
It appears that when onTransformationError(MediaItem, Exception) was
deprecated in favour of onTransformationError(MediaItem,
TransformationException), this call-through was the wrong way round, and the
newer callback, onTransformationError(MediaItem, TransformationResult,
TransformationException), continued the mistake. This CL corrects this.
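A hedged sketch of the intended call-through direction (listener interface
abbreviated; only the two-argument overloads are shown):

  public interface Listener {
    /** @deprecated Use {@link #onTransformationError(MediaItem, TransformationException)}. */
    @Deprecated
    default void onTransformationError(MediaItem inputMediaItem, Exception exception) {}

    default void onTransformationError(
        MediaItem inputMediaItem, TransformationException exception) {
      // Correct direction: the new method calls through to the deprecated one, so
      // implementers of only the deprecated method still receive the callback.
      // The cast selects the (MediaItem, Exception) overload and avoids recursion.
      onTransformationError(inputMediaItem, (Exception) exception);
    }
  }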
PiperOrigin-RevId: 498221504
Events on the wrapper should be propagated to TransformerInternal as soon as
they occur, reversing the flow so that TransformerInternal does not have to
query MuxerWrapper.
This CL is a prerequisite for the child CL, where MuxerWrapper can
simplify the internal state and logic.
PiperOrigin-RevId: 497267202
Before, it was possible for onDurationUs() and onAllTracksRegistered()
to be called before onTrackAdded() because they are called from
different threads. onDurationUs() and onAllTracksRegistered() are called
from the Transformer internal thread, and onTrackAdded() is called from
the playback thread.
PiperOrigin-RevId: 497102556
AssetLoader declares the tracks with a setTrackCount() method. Setting
the track count on the MuxerWrapper is easier than calling
registerTrack() as many times as the number of tracks.
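As a rough before/after sketch (call sites simplified; the exact method
signatures may differ):

  // Before: one call per track.
  muxerWrapper.registerTrack();
  muxerWrapper.registerTrack();
  // After: declare the number of tracks up front.
  muxerWrapper.setTrackCount(/* trackCount= */ 2);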
PiperOrigin-RevId: 496933501
AssetLoader declares the tracks with a setTrackCount() method. Setting
the track count on the FallbackListener is easier than calling
registerTrack() as many times as the number of tracks.
PiperOrigin-RevId: 496919969
The TransformationRequest passed to FallbackListener (and
createSupportedTransformationRequest) can have null for values that
are inferred from source. Within fallback, this can be height, width,
video mime type and audio mime type (HDR mode is not linked to source).
requestedFormat has these values populated from the source, and
supportedFormat then holds the closest supported values to the requested ones.
If any of the values in supportedFormat did not match requestedFormat, this
method would build upon the TransformationRequest and update ALL possible
fallback fields. This is a problem because the fallback listener compares the
original request to the fallback one and notifies about all the fields that
have changed.
This CL changes this so that only the values that differ from the requested
ones are changed in the supported request given to the fallback listener.
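A minimal sketch of the new behavior, assuming the existing
TransformationRequest.Builder setters (field and listener names may differ
slightly):

  // Sketch only: transformationRequest, requestedFormat, supportedFormat and
  // fallbackListener are assumed to be in scope; only genuinely different
  // values are written into the fallback request.
  TransformationRequest.Builder supportedRequestBuilder = transformationRequest.buildUpon();
  if (supportedFormat.height != requestedFormat.height) {
    supportedRequestBuilder.setResolution(supportedFormat.height);
  }
  if (!Util.areEqual(supportedFormat.sampleMimeType, requestedFormat.sampleMimeType)) {
    supportedRequestBuilder.setVideoMimeType(supportedFormat.sampleMimeType);
  }
  fallbackListener.onTransformationRequestFinalized(supportedRequestBuilder.build());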
PiperOrigin-RevId: 496908492
When trying to run the test in Android Studio, the error "incompatible types:
Buffer cannot be converted to ByteBuffer" is logged. This is because
ByteBuffer.flip() returns a Buffer (and not a ByteBuffer). The
@CovariantReturnType annotation on ByteBuffer.flip() should resolve this
automatically, but it doesn't seem to be supported at the moment.
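A small sketch of the symptom and a typical workaround (not necessarily the
exact change made here):

  import java.nio.ByteBuffer;

  public final class FlipExample {
    public static void main(String[] args) {
      ByteBuffer buffer = ByteBuffer.allocate(/* capacity= */ 16);
      buffer.putInt(42);
      // Fails on toolchains without @CovariantReturnType support:
      //   ByteBuffer flipped = buffer.flip(); // incompatible types: Buffer cannot be converted to ByteBuffer
      buffer.flip(); // Flip in place and keep using the existing ByteBuffer reference.
      // Or equivalently: ByteBuffer flipped = (ByteBuffer) buffer.flip();
    }
  }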
PiperOrigin-RevId: 496894723
Note that we simply use GlEffectsFrameProcessor for in-app/GL tone mapping, so
PQ -> SDR tone mapping isn't yet implemented.
Tested manually using the demo on a Pixel 7, to confirm that device and in-app
tone mapping behave similarly.
PiperOrigin-RevId: 496700231
Adds the AudioMixerAlgorithm interface which allows for specialized
implementations of audio mixing that also efficiently convert between
source and mixing formats.
Initial implementation has two algorithms:
1. Float -> float (with channel mixing)
2. S16 -> float (with channel mixing)
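As a hedged sketch of the interface shape (all names here are illustrative;
the actual signatures in the CL may differ):

  import java.nio.ByteBuffer;
  import java.nio.FloatBuffer;

  /** Illustrative only: converts and channel-mixes source audio into a float mixing buffer. */
  interface AudioMixerAlgorithm {
    /** Mixes {@code frameCount} frames from {@code source} into {@code mixingBuffer}. */
    void mix(
        ByteBuffer source, float[][] channelMixingMatrix, int frameCount, FloatBuffer mixingBuffer);
  }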
PiperOrigin-RevId: 496686805
To do that, rename PROGRESS_STATE_NO_TRANSFORMATION to
PROGRESS_STATE_NOT_STARTED and update Javadoc of ProgressState to not be
Transformer specific.
PiperOrigin-RevId: 496653460
This should allow us to focus on HDR failures instead of network buffering
failures when debugging HDR issues.
These files are each used by several tests, so moving them to be local should
be worth the test binary size impact.
Locally, tests took less time after this change.
PiperOrigin-RevId: 496398130
The MediaSourceFactory won't be used by the other AssetLoaders.
In order to do that, ExoPlayerAssetLoader has been made public, and the
DefaultAssetLoaderFactory has become a wrapper around
ExoPlayerAssetLoader.
PiperOrigin-RevId: 496386853
Otherwise, the decoders are not captured. It works at the moment for the video
decoder because decoding is still done in the sample pipeline, but it will be
moved to the AssetLoader soon.
PiperOrigin-RevId: 495275575
`AudioProcessor`s expect direct buffers. This shouldn't make any functional difference in our code, but a custom audio processor might try to access the buffer from JNI, in which case a direct byte buffer is more efficient.
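For example (a minimal sketch; outputSize is a placeholder for however many
bytes the processor needs):

  // java.nio: allocate the output as a direct buffer in native byte order so a
  // custom AudioProcessor can hand it straight to JNI without an extra copy.
  ByteBuffer outputBuffer =
      ByteBuffer.allocateDirect(outputSize).order(ByteOrder.nativeOrder());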
PiperOrigin-RevId: 495241669
`SilentAudioGenerator` could output a fractional audio frame, which could cause downstream components to throw because they try to read a complete audio frame but only see a partial one.
Calculate the output buffer size based on the frame size (which is a no-op for stereo 16-bit audio): calculate the total number of frames to output, then multiply by the frame size.
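A hedged sketch of that calculation (DEFAULT_FRAMES_PER_BUFFER, channelCount,
durationUs and sampleRate are illustrative names; Util and C are the media3
utility classes):

  int frameSize = Util.getPcmFrameSize(C.ENCODING_PCM_16BIT, channelCount); // bytes per frame
  long totalFrames = (durationUs * sampleRate) / C.MICROS_PER_SECOND;
  int framesPerBuffer = (int) Math.min(totalFrames, DEFAULT_FRAMES_PER_BUFFER);
  // The buffer size is a whole multiple of the frame size, so downstream
  // components never see a partial frame.
  ByteBuffer silence =
      ByteBuffer.allocateDirect(framesPerBuffer * frameSize).order(ByteOrder.nativeOrder());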
PiperOrigin-RevId: 494992941