In the startTransformation() method we were throwing an UnsupportedEncodingException (an IOException) when a MediaItem with unsupported arguments was passed.
Changed this to IllegalArgumentException, which seems more logical here.
PiperOrigin-RevId: 487259296
(cherry picked from commit 4598cc92485c149f5c613d3c926ae4493a457668)
Problem: We are initialising the muxer as soon as the transformation starts. The startTransformation() method can be called from the main thread, but muxer creation is an I/O operation and should not be done on the main thread.
Solution: Added lazy initialisation of the muxer object. The actual transformation happens on a background thread, so the muxer is now initialised lazily from the background thread only.
An alternative was to provide an initialize() method on MuxerWrapper to explicitly initialise the muxer object, but with that approach the caller needs to call it before calling anything else. In the current implementation the renderers call MuxerWrapper methods from various callbacks (not sequentially), and the same muxer is shared between multiple renderers, so it could become confusing for the caller when to call initialize(). Also, a few MuxerWrapper methods don't need the muxer object at all. In short, an explicit initialize() method might make the MuxerWrapper API more confusing.
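The lazy pattern is roughly the following (a minimal sketch; the Muxer and MuxerFactory types here are placeholders, not the real MuxerWrapper internals):

```java
import androidx.annotation.Nullable;
import java.io.IOException;

/** Sketch only: Muxer and MuxerFactory are placeholders, not the real transformer classes. */
final class LazyMuxerHolder {
  interface Muxer {}

  interface MuxerFactory {
    Muxer create(String path) throws IOException; // Creation opens the output file (I/O).
  }

  private final MuxerFactory muxerFactory;
  private final String outputPath;
  @Nullable private Muxer muxer;

  LazyMuxerHolder(MuxerFactory muxerFactory, String outputPath) {
    this.muxerFactory = muxerFactory;
    this.outputPath = outputPath;
  }

  /** Called only from the background transformation thread, so the I/O stays off the main thread. */
  Muxer getOrCreateMuxer() throws IOException {
    if (muxer == null) {
      muxer = muxerFactory.create(outputPath);
    }
    return muxer;
  }
}
```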
Validation: Verified the transformation from the demo app.
PiperOrigin-RevId: 486735787
(cherry picked from commit b10b4e6d4694c7240073b81c3a09227042b58c21)
This allows an exception to be thrown when the Transformer is stuck or too slow.
PiperOrigin-RevId: 484179037
(cherry picked from commit 376ee77f52bed47de54c6478b4006f1b25a543d0)
- The naming DefaultMuxer is more consistent with the rest of the
Transformer codebase (e.g. DefaultEncoderFactory).
- By hiding the implementation details of DefaultMuxer, the transition
to an in-app Muxer will be seamless for apps using DefaultMuxer.
- The current plan is that DefaultMuxer will become the in-app muxer.
PiperOrigin-RevId: 481838790
(cherry picked from commit b4d7f066dd31cce1e3d7ab14cc47d3b7be364a88)
- This method is redundant with getSupportedSampleMimeTypes().
- This is to prepare the Muxer class to become public.
PiperOrigin-RevId: 480840902
(cherry picked from commit 8962f5a3f4b505224ceb22ac5771b85f24e30358)
* Transform the intermediate color space to linear SDR by applying the SMPTE 170M EOTF and OETF.
* Use linear colors for the color filter pixel tests and update all golden bitmaps.
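For reference, the SMPTE 170M transfer functions for a single normalized channel value are roughly as below (the actual conversion happens in GLSL shaders; this Java sketch only spells out the formulas):

```java
/** SMPTE 170M (BT.601/BT.709) transfer functions for a single normalized [0, 1] channel value. */
final class Smpte170m {
  /** EOTF: converts a non-linear electrical value to a linear optical value. */
  static float eotf(float electrical) {
    return electrical < 0.081f // 0.081 = 4.500 * 0.018, the crossover point of the two segments.
        ? electrical / 4.500f
        : (float) Math.pow((electrical + 0.099f) / 1.099f, 1f / 0.45f);
  }

  /** OETF: converts a linear optical value back to the non-linear electrical value. */
  static float oetf(float linear) {
    return linear < 0.018f
        ? 4.500f * linear
        : (float) (1.099f * Math.pow(linear, 0.45f) - 0.099f);
  }
}
```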
PiperOrigin-RevId: 476124592
(cherry picked from commit afd829e5db17c48abccea30842eec20b702aa330)
By skipping every other row and column, SSIM calculation time is reduced by 10-30%.
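The subsampling amounts to stepping through each window with a stride of 2, roughly as in this sketch (the real MssimCalculator computes full SSIM statistics; names here are illustrative):

```java
/** Sketch: accumulating a window mean while skipping every other row and column. */
final class SubsampledWindowMean {
  static double windowMean(
      byte[] luma, int imageWidth, int windowX, int windowY, int windowSize, boolean subsample) {
    int step = subsample ? 2 : 1; // Skipping every other row/column trades a little accuracy for speed.
    double sum = 0;
    int count = 0;
    for (int y = windowY; y < windowY + windowSize; y += step) {
      for (int x = windowX; x < windowX + windowSize; x += step) {
        sum += luma[y * imageWidth + x] & 0xFF; // Luma stored as unsigned bytes.
        count++;
      }
    }
    return sum / count;
  }
}
```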
PiperOrigin-RevId: 474286702
(cherry picked from commit 6dd2a6dac655f538020a4057e3ab6d997449a9fe)
As part of this change, MssimCalculator is moved from androidTest/ to main/
PiperOrigin-RevId: 473771344
(cherry picked from commit 20aa22c9fad2c1e2d237dc9a8bee81781a729445)
This will allow effects preview in ExoPlayer to use the
Effect and FrameProcessor interfaces (and the interfaces
they depend on) without depending on transformer or the
future effects module.
PiperOrigin-RevId: 464060047
(cherry picked from commit 06d41c2775137158f831a1b2f19f137efb1e08b0)
Size requires API 21. Using Pair instead will allow effects to be
used from API 18 during previewing once they are moved out of
transformer.
PiperOrigin-RevId: 463206474
(cherry picked from commit b994f8bfa06c82fbfe96044b986cd7b8d243acdd)
This is needed for applying effects to a playlist.
The effects are applied based on the presentation time of the
frame in its corresponding media item and the offset is added
back before encoding.
Each time the offset changes, end of input stream is signalled
to the texture processors. This is needed because the texture
processors can expect monotonically increasing timestamps within
the same input stream, but when the offset changes, the timestamps
jump back to 0.
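Schematically (names and structure are illustrative, not the actual pipeline code):

```java
/** Sketch of the offset handling described above. */
final class OffsetHandlingSketch {
  private long currentOffsetUs;

  /** Queues a decoded frame whose timestamp is relative to its media item. */
  void queueInputFrame(long presentationTimeUs, long offsetUs) {
    if (offsetUs != currentOffsetUs) {
      // Texture processors may assume monotonically increasing timestamps within one input
      // stream, but timestamps restart at 0 for a new media item, so end the stream first.
      signalEndOfInputStreamToTextureProcessors();
      currentOffsetUs = offsetUs;
    }
    applyEffects(presentationTimeUs); // Effects see the time within the media item.
  }

  /** Adds the offset back so the encoder sees the frame's position in the overall timeline. */
  long encoderPresentationTimeUs(long effectsOutputTimeUs) {
    return effectsOutputTimeUs + currentOffsetUs;
  }

  private void signalEndOfInputStreamToTextureProcessors() {}

  private void applyEffects(long presentationTimeUs) {}
}
```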
PiperOrigin-RevId: 462714966
(cherry picked from commit 46ab06b816d93c2a034c182552fa4483a6d82cba)
- Update profile selection logic to pick an HDR-compatible profile when doing HDR editing on H.264/AVC videos.
- Handle doing the capabilities check for all MIME types that support HDR (not just H.265/HEVC).
- Fix a bug where we would pass an HDR input color format to the encoder when using tone-mapping.
- Tweak how `EncoderWrapper` works so that decisions are made at construction time.
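The profile selection could look roughly like the sketch below (simplified; fallback handling and the other HDR-capable MIME types are omitted, and this is not the actual EncoderWrapper code):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecInfo.CodecProfileLevel;
import android.media.MediaFormat;

/** Sketch of HDR-compatible (10-bit) profile selection for an encoder. */
final class HdrProfileSelectionSketch {
  /** Returns a 10-bit profile supported by the encoder for the given MIME type, or -1 if none. */
  static int selectHdrProfile(MediaCodecInfo encoderInfo, String mimeType) {
    int wantedProfile;
    if (MediaFormat.MIMETYPE_VIDEO_HEVC.equals(mimeType)) {
      wantedProfile = CodecProfileLevel.HEVCProfileMain10;
    } else if (MediaFormat.MIMETYPE_VIDEO_AVC.equals(mimeType)) {
      wantedProfile = CodecProfileLevel.AVCProfileHigh10;
    } else {
      return -1; // Other HDR-capable MIME types omitted in this sketch.
    }
    for (CodecProfileLevel profileLevel :
        encoderInfo.getCapabilitiesForType(mimeType).profileLevels) {
      if (profileLevel.profile == wantedProfile) {
        return wantedProfile;
      }
    }
    return -1; // No HDR-capable profile: fall back, e.g. tone-map to SDR.
  }
}
```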
Manually tested cases:
- Transformation of an SDR video.
- Transformation of an HDR video to AVC (which triggers fallback/tone-mapping on a device that doesn't support HDR editing for AVC).
- Transformation of an HDR video with HDR editing.
PiperOrigin-RevId: 461572973
(cherry picked from commit 0db07c6791620d06db9f524bf0f47e90df635756)
- Added a setter to disable this feature.
- Added accompanying tests.
- Plan to run tests with the same set of settings on H.265.
PiperOrigin-RevId: 460238673
(cherry picked from commit 6e126000993f5bbb9d1f2fd73df141158e8c7785)
The FinalMatrixTransformationProcessorWrapper ensures that the
surface is only replaced while it is not being rendered to, and
only rendered to while it is not being replaced.
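Conceptually the guarding looks like this sketch (the real wrapper also manages EGL surfaces and additional state):

```java
import android.view.Surface;
import androidx.annotation.Nullable;

/** Sketch of guarding surface replacement against in-progress rendering. */
final class OutputSurfaceGuard {
  private final Object lock = new Object();

  @Nullable private Surface outputSurface;

  /** Replaces the output surface; blocks while a frame is being rendered. */
  void setOutputSurface(@Nullable Surface surface) {
    synchronized (lock) {
      outputSurface = surface;
    }
  }

  /** Renders a frame; the surface cannot be swapped out mid-render. */
  void renderFrame(long presentationTimeUs) {
    synchronized (lock) {
      if (outputSurface == null) {
        return; // Nothing to render to.
      }
      // ... draw to the EGL surface wrapping outputSurface here ...
    }
  }
}
```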
PiperOrigin-RevId: 458007639
(cherry picked from commit e5527a8addff1e35380d3383dea237c70198a63a)
This change is just renaming. There is no functional change intended.
The FrameProcessor interface will be created in a follow-up.
PiperOrigin-RevId: 456741628
(cherry picked from commit 216fefd669ab2bfdb03c202094b35eb935272d96)
After this change, FrameProcessorChain chains any GlTextureProcessors
instead of only SingleFrameGlTextureProcessors.
The GlTextureProcessors are chained in a bidirectional manner using
ChainingGlTextureProcessorListener to feed input- and output-related
events forwards and release events backwards.
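Each link in the chain works roughly as in this sketch (interfaces and method names are simplified relative to the real GlTextureProcessor and ChainingGlTextureProcessorListener):

```java
/** Sketch of bidirectional chaining between a producing and a consuming texture processor. */
final class ChainingListenerSketch {
  interface TextureProcessor {
    void queueInputTexture(int texId, long presentationTimeUs);

    void releaseOutputTexture(int texId);
  }

  private final TextureProcessor producingProcessor;
  private final TextureProcessor consumingProcessor;

  ChainingListenerSketch(TextureProcessor producingProcessor, TextureProcessor consumingProcessor) {
    this.producingProcessor = producingProcessor;
    this.consumingProcessor = consumingProcessor;
  }

  /** Forward direction: output of the previous processor becomes input of the next one. */
  void onOutputTextureAvailable(int texId, long presentationTimeUs) {
    consumingProcessor.queueInputTexture(texId, presentationTimeUs);
  }

  /** Backward direction: once the next processor is done with a texture, the previous one may reuse it. */
  void onInputTextureReleased(int texId) {
    producingProcessor.releaseOutputTexture(texId);
  }
}
```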
PiperOrigin-RevId: 456478414
(cherry picked from commit 3a966916545f92c5ce08a640c912054ede215152)
In follow-ups the FrameProcessorChain will set an instance of this
listener for each GlTextureProcessor to chain it with its previous
and next GlTextureProcessor.
PiperOrigin-RevId: 455628942
(cherry picked from commit c92e18ec583eccde1bc80c11842ec81c9a6fedef)
This change adds a SurfaceProvider interface which is necessary to
allow for texture processors whose output size becomes available
asynchronously in follow-ups.
VTSP's implementation of this interface wraps the encoder and provides
its input surface together with the output frame width, height, and
orientation as used for encoder configuration.
The FrameProcessorChain converts the output frames to the provided
orientation and resolution using a ScaleToFitTransformation and a
Presentation, replacing EncoderCompatibilityTransformation.
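The interface shape is roughly as follows (a sketch based on the description above; the real name and signature may differ):

```java
import android.view.Surface;

/** Sketch of a surface provider: the consumer supplies a surface once the output size is known. */
interface OutputSurfaceProviderSketch {
  /**
   * Returns the surface to render output frames to, given the output frame width, height, and
   * orientation. In the pipeline described above, the implementation wraps the encoder and
   * returns its input surface, using these values for encoder configuration.
   */
  Surface getSurface(int outputWidth, int outputHeight, int orientationDegrees);
}
```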
PiperOrigin-RevId: 455112598
(cherry picked from commit d20f68498666ddf1f1946799cf002a9edb573a40)
Based on
https://developer.android.com/reference/android/media/MediaCodec#using-an-output-surface,
frame dropping behaviour depends on the target SDK version.
After this change, Transformer only uses
MediaFormat#KEY_ALLOW_FRAME_DROP if both the target and system SDK
versions are at least 29. Otherwise it defaults to its pre-29
behaviour, where each decoder output frame must be processed before
a new one is rendered, to prevent frame dropping.
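In terms of the MediaCodec API, the gating is roughly the following (a sketch, not the exact Transformer code):

```java
import android.content.Context;
import android.media.MediaFormat;
import android.os.Build;

/** Sketch of gating KEY_ALLOW_FRAME_DROP on both the system and target SDK versions. */
final class FrameDropConfigurationSketch {
  static void maybeDisableFrameDropping(MediaFormat decoderOutputMediaFormat, Context context) {
    int targetSdkVersion = context.getApplicationInfo().targetSdkVersion;
    if (Build.VERSION.SDK_INT >= 29 && targetSdkVersion >= 29) {
      // 0 restores the pre-29 behaviour: every decoder output frame must be processed
      // before a new one is rendered, so no frames are dropped.
      decoderOutputMediaFormat.setInteger(MediaFormat.KEY_ALLOW_FRAME_DROP, 0);
    }
  }
}
```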
Also remove the deprecated Transformer.Builder constructor without a
context, and the context setter.
PiperOrigin-RevId: 453971097
(cherry picked from commit 3f718b0d10e1d577620cca9ec76d0af38efc542c)
This removes the prior restriction of having to remember not to both crop and set the aspect ratio in the same Presentation.Builder, and makes each class a bit more targeted.
This is partially made feasible by the past work to merge consecutive
MatrixTransformations into a single MatrixTransformationFrameProcessor, which
ensures that there's no loss in quality between successive MatrixTransformations.
PiperOrigin-RevId: 453660582
(cherry picked from commit b33dc5e57b96d81f9c246a369a09fb7a23a119e2)
Also update the names of implementations to match the design doc.
In follow-ups, SingleFrameGlTextureProcessor will become
an abstract implementation of a new GlTextureProcessor
interface.
The term "texture processor" makes sense because these classes process OpenGL textures.
The term frame processor will be used for something else in
follow-ups.
PiperOrigin-RevId: 451142085
When using a MatrixTransformationFrameProcessor per transformation
matrix, each frame processor's shader applies the matrix to the
vertices and clips the result to the NDC range when drawing the
output frame.
This change combines consecutive MatrixTransformations into a single
MatrixTransformationFrameProcessor by multiplying the individual
matrices while updating and clipping the visible polygon after
each matrix and mapping the resulting visible polygon back to the
input space so that its vertices and the combined transformation
matrix can be used in the shader.
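The matrix accumulation part looks roughly as follows; the visible-polygon clipping, which is the more involved part, is only indicated by a comment:

```java
import android.opengl.Matrix;
import java.util.List;

/** Sketch: multiplying consecutive transformation matrices into one combined matrix. */
final class MatrixCombinationSketch {
  static float[] combine(List<float[]> matrices /* each a 4x4 column-major matrix */) {
    float[] combined = new float[16];
    Matrix.setIdentityM(combined, 0);
    float[] temp = new float[16];
    for (float[] matrix : matrices) {
      // Apply the next matrix after the ones already accumulated.
      Matrix.multiplyMM(temp, 0, matrix, 0, combined, 0);
      System.arraycopy(temp, 0, combined, 0, 16);
      // Real code: after each step, transform the visible polygon, clip it to the NDC range
      // [-1, 1], and map it back to the input space so it can be used in the shader.
    }
    return combined;
  }
}
```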
PiperOrigin-RevId: 448521068
ScaleToFitFrameProcessor, PresentationFrameProcessor,
and EncoderCompatibilityFrameProcessor now each implement
MatrixTransformation instead of wrapping
MatrixTransformationFrameProcessor.
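For illustration, implementing MatrixTransformation comes down to returning a matrix per presentation time, roughly as below (assuming the single getMatrix(presentationTimeUs) method; a stand-in interface is declared locally so the sketch is self-contained):

```java
import android.graphics.Matrix;

/** Sketch: the nested interface stands in for the real MatrixTransformation. */
final class Rotate90Example {
  interface MatrixTransformation {
    Matrix getMatrix(long presentationTimeUs);
  }

  /** A MatrixTransformation that rotates every frame by 90 degrees. */
  static MatrixTransformation createRotate90() {
    return presentationTimeUs -> {
      Matrix transformationMatrix = new Matrix();
      transformationMatrix.postRotate(/* degrees= */ 90);
      return transformationMatrix;
    };
  }
}
```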
PiperOrigin-RevId: 446480286
This change splits AdvancedFrameProcessor into 4 files:
- MatrixTransformationFrameProcessor for the GlFrameProcessor
implementation
- MatrixTransformation and GlMatrixTransformation for the GlEffect
specification
- MatrixUtils for the static matrix helpers
PiperOrigin-RevId: 446236384
Split the rotationDegrees changes out into EncoderCompatibilityFrameProcessor, a new
FrameProcessor.
This removes automatic rotationDegrees adjustments from Presentation, which
allows Presentation to be used for changes before the end of a
FrameProcessorChain pipeline.
PiperOrigin-RevId: 443387226
This allows apps to use AdvancedFrameProcessor to apply transformations
in 3D space. This functionality is not used in transformer otherwise.
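For example, a transformation in 3D space can be expressed as a 4x4 matrix built with android.opengl.Matrix; how the matrix is passed to AdvancedFrameProcessor is not shown here, since its exact constructor signature is not part of this note:

```java
import android.opengl.Matrix;

final class RotationMatrixExample {
  /** Returns a 4x4 column-major matrix rotating around the y axis, as used for 3D effects. */
  static float[] createYAxisRotationMatrix(float degrees) {
    float[] transformationMatrix = new float[16];
    Matrix.setIdentityM(transformationMatrix, 0);
    Matrix.rotateM(transformationMatrix, 0, degrees, /* x= */ 0f, /* y= */ 1f, /* z= */ 0f);
    return transformationMatrix;
  }
}
```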
PiperOrigin-RevId: 439313406
The encoder surface is no longer needed for the OpenGL setup and frame
processor initialization, as a placeholder surface is used instead. So
all of the setup can now be done in the factory method.
PiperOrigin-RevId: 438844450
Configuring the frame sizes between frame processors is now the
FrameProcessorChain's rather than the caller's responsibility.
The caller can call getOutputSize() and override the size for encoder
fallback in configure().
PiperOrigin-RevId: 437048436