I was trying to understand why this isn't always checked and didn't
realise there was already a bug tracking it. Referencing the bug here
means a future confused reader knows something isn't working quite as
intended.
PiperOrigin-RevId: 313765935
Guava is heavily optimized for Android and the impact on binary size
is minimal (and outweighed by the organic growth of the ExoPlayer
library).
This change also replaces Util.toArray() with Guava's Ints.toArray()
in order to introduce a Guava usage into a range of modules.
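For illustration, a minimal sketch of the kind of call-site change this implies
(the surrounding class and variable names are assumptions):

```java
import com.google.common.primitives.Ints;
import java.util.Arrays;
import java.util.List;

public final class ToArrayExample {
  public static void main(String[] args) {
    List<Integer> sampleSizes = Arrays.asList(128, 256, 512);
    // Before (hypothetical call site): int[] sizes = Util.toArray(sampleSizes);
    // After: use Guava's primitive helper directly.
    int[] sizes = Ints.toArray(sampleSizes);
    System.out.println(Arrays.toString(sizes)); // [128, 256, 512]
  }
}
```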
PiperOrigin-RevId: 312449093
The ad break time in seconds from IMA was "-1" for postrolls, but this didn't
match C.TIME_END_OF_SOURCE in the ad group times array.
Handle an ad break time of -1 directly by mapping it onto the last ad group,
instead of trying to look it up in the array.
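A minimal sketch of this mapping, assuming a helper like the one below (names
and structure are illustrative, not the actual IMA extension code):

```java
/** Maps an IMA ad break time in seconds onto an index into adGroupTimesUs (sketch). */
private static int getAdGroupIndexForCuePoint(double cuePointTimeSeconds, long[] adGroupTimesUs) {
  if (cuePointTimeSeconds == -1.0) {
    // IMA uses -1 for postrolls; map it straight onto the last ad group rather than
    // searching for C.TIME_END_OF_SOURCE in the array.
    return adGroupTimesUs.length - 1;
  }
  long cuePointTimeUs = Math.round(cuePointTimeSeconds * 1_000_000d);
  for (int i = 0; i < adGroupTimesUs.length; i++) {
    if (adGroupTimesUs[i] == cuePointTimeUs) {
      return i;
    }
  }
  throw new IllegalStateException("Failed to find cue point: " + cuePointTimeSeconds);
}
```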
PiperOrigin-RevId: 312064886
The ANSI/CTA-608-E R-2014 spec defines exactly 32 columns on the screen
and limits all lines to this length.
See the definition of 'Column' in Section 3.2.2.
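A minimal sketch of enforcing that limit when building up a caption row
(constant and method names are assumptions, not the actual Cea608Decoder code):

```java
/** Maximum number of columns per row, per ANSI/CTA-608-E R-2014 Section 3.2.2. */
private static final int SCREEN_ROW_COLUMN_COUNT = 32;

/** Appends a character to the current row, dropping it if the row is already full. */
private static void appendToRow(StringBuilder row, char c) {
  if (row.length() < SCREEN_ROW_COLUMN_COUNT) {
    row.append(c);
  }
}
```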
issue:#7341
PiperOrigin-RevId: 311549881
FFmpeg requires input buffers to be allocated larger than the data they
contain. This allows optimized decoder implementations to read data in
fixed-size chunks, without the risk of such decoders reading beyond the
end of the buffer.
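A minimal sketch of what that sizing rule can look like on the Java side (the
constant name and its value here are assumptions; FFmpeg defines the real
requirement as AV_INPUT_BUFFER_PADDING_SIZE on the native side):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class PaddedInputBufferExample {
  // Illustrative value only; the required padding is defined by the FFmpeg headers.
  private static final int INPUT_BUFFER_PADDING_SIZE = 64;

  /** Allocates a direct buffer large enough for dataSize bytes plus decoder padding. */
  static ByteBuffer allocateInputBuffer(int dataSize) {
    return ByteBuffer.allocateDirect(dataSize + INPUT_BUFFER_PADDING_SIZE)
        .order(ByteOrder.nativeOrder());
  }
}
```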
Issue: #2159
PiperOrigin-RevId: 310946866
Unmarshal from JSON to MediaItem instead of Sample. The playlist of
MediaItems is then converted to Intent extras, which are read by
PlayerActivity.
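A minimal, hypothetical sketch of moving a playlist through Intent extras (the
extra key, helper names and URI-only payload are assumptions, not the demo
app's actual IntentUtil):

```java
import android.content.Intent;
import android.net.Uri;
import com.google.android.exoplayer2.MediaItem;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

final class IntentPlaylistExample {
  private static final String EXTRA_URIS = "uris"; // Hypothetical extra key.

  /** Writes the playlist's URIs into the intent. */
  static void addToIntent(List<MediaItem> mediaItems, Intent intent) {
    String[] uris = new String[mediaItems.size()];
    for (int i = 0; i < mediaItems.size(); i++) {
      uris[i] = Objects.requireNonNull(mediaItems.get(i).playbackProperties).uri.toString();
    }
    intent.putExtra(EXTRA_URIS, uris);
  }

  /** Rebuilds the playlist from the intent. */
  static List<MediaItem> createMediaItemsFromIntent(Intent intent) {
    List<MediaItem> mediaItems = new ArrayList<>();
    for (String uri : intent.getStringArrayExtra(EXTRA_URIS)) {
      mediaItems.add(MediaItem.fromUri(Uri.parse(uri)));
    }
    return mediaItems;
  }
}
```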
PiperOrigin-RevId: 308141231
While most ExoPlayer code that parses ByteBuffers is called with buffers in
big endian, in certain situations buffers in little endian are used too.
ByteBuffers produced by MediaCodec are in little endian, while buffers
received from the sources are in big endian (ByteBuffer's default).
As a result, some code called from AudioSink in passthrough parsed the
ByteBuffer in little endian. This is not correct, because those formats are
specified in big endian.
Changing the endianness of the ByteBuffer returned from MediaCodec would
impact far more code than can be tested in the current COVID lockdown
situation.
Instead, this patch makes the parsing code independent of the configured
ByteBuffer.order(). All the code called from DefaultAudioSink now parses
the buffer explicitly in big endian.
Additionally, the 4-byte big endian MPEG header data was being retrieved
with ByteBuffer.get, which only returns one byte.
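A minimal sketch of reading a 4-byte big endian field independently of the
buffer's configured order (the helper name is an assumption):

```java
import java.nio.ByteBuffer;

final class BigEndianReadExample {
  /**
   * Reads a 4-byte big endian value at {@code index} regardless of ByteBuffer.order().
   * Note that buffer.getInt(index) would honor the buffer's configured order, and
   * buffer.get(index) returns only a single byte.
   */
  static int readBigEndianInt(ByteBuffer buffer, int index) {
    return ((buffer.get(index) & 0xFF) << 24)
        | ((buffer.get(index + 1) & 0xFF) << 16)
        | ((buffer.get(index + 2) & 0xFF) << 8)
        | (buffer.get(index + 3) & 0xFF);
  }
}
```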
PiperOrigin-RevId: 308116173
This is the missing attribute needed to support all features of Sample with MediaItem. PlayerActivity can hence use the setMediaItems() method directly, without creating actual media sources in the app code.
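For context, a minimal sketch of the resulting PlayerActivity flow (player
construction is abbreviated and the variable names are assumptions):

```java
// Assumes mediaItems was built from the sample list JSON.
SimpleExoPlayer player = new SimpleExoPlayer.Builder(/* context= */ this).build();
player.setMediaItems(mediaItems);
player.prepare();
player.setPlayWhenReady(true);
```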
PiperOrigin-RevId: 307102036
This is required to migrate PlayerActivity away from Sample to MediaItem. It requires adding buildUpon() to MediaItem so that the customCacheKey and streamKeys can be mixed in before playback.
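A minimal sketch of mixing those properties in via buildUpon() (the download
object and its fields stand in for whatever holds the offline metadata; they
are assumptions):

```java
// Merge download-specific properties into an existing MediaItem before playback.
MediaItem playbackMediaItem =
    mediaItem
        .buildUpon()
        .setCustomCacheKey(download.customCacheKey)
        .setStreamKeys(download.streamKeys)
        .build();
```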
PiperOrigin-RevId: 306710643
Create a Builder that creates SimpleExoPlayer instances with fake
components, suitable for testing.
Basically extracts the Builder from ExoPlayerTestRunner to a standalone
class that can be re-used.
PiperOrigin-RevId: 305458419
These were introduced in c7164a30a0.
In each case I checked that the groups are not optional, so if the pattern
matches they must be non-null.
PiperOrigin-RevId: 305213293
When ClippingMediaPeriod first tried to read a buffer, if its end
position was before the end of the stream and it was buffered to its end
position, it would sometimes erroneously signal end-of-stream for
protected content because the sample queue might be waiting for DRM keys
at this point.
Work around the issue temporarily by signaling this specific case back
to ClippingMediaPeriod via the DecoderInputBuffer.
There will likely be a cleaner fix as a result of adding support for
dynamic clip end points in the future, at which point this can be
reverted.
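A hypothetical sketch of the shape of that signal (the waitingForKeys flag and
the surrounding logic are assumptions, not the actual implementation):

```java
// Inside the clipping period's sample stream read path (sketch).
int result = childStream.readData(formatHolder, buffer, requireFormat);
if (result == C.RESULT_BUFFER_READ && buffer.isEndOfStream() && buffer.waitingForKeys) {
  // The child stream only looked like end-of-stream because the sample queue is
  // waiting for DRM keys; don't let the clipped period propagate end-of-stream.
  buffer.clear();
  return C.RESULT_NOTHING_READ;
}
```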
issue:#7188
PiperOrigin-RevId: 305081757
ExoPlayer needs a codec string to decide between the WebVTT and TTML decoder
MIME types.
Apple describes IMSC1 in MP4 in
[RFC-8216 Section 3.6](https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-04#section-3.6).
DASH manifests specify SMPTE-TT captions via the codecs attribute (see W3C
[TTML Profiles for Internet Media Subtitles and Captions 1.1](https://www.w3.org/TR/ttml-imsc1.1/#general-0)).
DASH just doesn't require the rendition linking, but HLS does.
Apple's
[Authoring Guidelines](https://developer.apple.com/documentation/http_live_streaming/hls_authoring_specification_for_apple_devices)
imply, with SHOULD and MAY language, that the CODECS attribute of the variant
is what indicates the codec to use.
This change defaults to WebVTT if no codec is specified (same as the current
behavior); otherwise it picks the codec from the variants referencing the
media.
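A minimal sketch of the selection described above (the helper name is an
assumption; the codec prefixes come from the specs linked above):

```java
import com.google.android.exoplayer2.util.MimeTypes;

final class SubtitleMimeTypeExample {
  /**
   * Picks the subtitle sample MIME type from the referencing variants' CODECS attribute,
   * defaulting to WebVTT when no codec is specified (matching the previous behavior).
   */
  static String getSubtitleSampleMimeType(String codecs) {
    if (codecs != null && codecs.contains("stpp")) {
      return MimeTypes.APPLICATION_TTML; // IMSC1 (TTML) in MP4, e.g. "stpp.ttml.im1t".
    }
    return MimeTypes.TEXT_VTT;
  }
}
```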
This is required to align google3's stub (currently marked non-null) with Checker Framework's.
More information: go/matcher-nullness-lsc
Tested:
TAP train for global presubmit queue
http://test/OCL:303344687:BASE:303326748:1585344475427:29edc250
PiperOrigin-RevId: 303709053
This is less confusing than having audio processing functionality (e.g., playback
speed adjustment) just "not work" for some pieces of media.
If this change is merged, I will update #6749 to also track making DefaultAudioSink
intelligently enable/disable float output depending on how the audio processors are
configured.
Issue: #7134
PiperOrigin-RevId: 302871568
This CL removes the 'test' prefixes from the tests added after a6d0caaa3c.
It was generated by the following command:
$ find -name '*Test.java' | xargs -I{} sed -i 's/^\ \ public\ void\ test\([A-Z]\)\(.*\)$/ public void \L\1\E\2/' {}
PiperOrigin-RevId: 300547504
This avoids duplicate events being dispatched to object foo if
addListener(foo) is called more than once.
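A minimal sketch of the deduplication, using a set-backed listener collection
(names are assumptions):

```java
import java.util.concurrent.CopyOnWriteArraySet;

final class ListenerSetExample<T> {
  // A set ignores repeated addListener(foo) calls, so foo receives each event only once.
  private final CopyOnWriteArraySet<T> listeners = new CopyOnWriteArraySet<>();

  public void addListener(T listener) {
    listeners.add(listener); // No-op if already registered.
  }

  public void removeListener(T listener) {
    listeners.remove(listener);
  }
}
```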
Part of issue:#6765
PiperOrigin-RevId: 300529733