Support for Opus content in an Ogg container.
TODO: Sample duration and seeking.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121940392
- This means DrmInitData is propagated through sample queues (i.e.
is effectively attached to every sample, so we can see when it
changes when reading from the queue).
- It also allows different DrmInitData per track, which is possible
in muxed MKV/WebM, and per Representation for DASH, although we
won't be able to seamlessly adapt in the latter case.
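A minimal sketch of the first point, on the reading side (the class and member
names here are illustrative, not the actual API):

  // The format read from the queue carries the DrmInitData, so a change can be
  // detected sample by sample.
  Format newFormat = sampleQueue.readFormat();
  if (!java.util.Objects.equals(currentFormat.drmInitData, newFormat.drmInitData)) {
    // The initialization data changed; a new DRM session is needed before
    // reading samples in the new format.
  }
  currentFormat = newFormat;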
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121821928
This allows limiting the horizontal extent of the cues.
NOTE: So far, we only support percentages for size magnitudes.
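A hedged sketch of the percentage handling, assuming a small helper along these
lines (not the actual parser code):

  // Only percentage magnitudes such as "size:35%" are accepted for now.
  static float parsePercentage(String value) throws NumberFormatException {
    if (!value.endsWith("%")) {
      throw new NumberFormatException("Percentages must end with %");
    }
    return Float.parseFloat(value.substring(0, value.length() - 1)) / 100f;
  }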
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121682404
- This moves to a single DrmInitData implementation that supports
both specific and universal UUID matching. You can also do a
combination of the two, which was not previously possible.
- The object model is simplified as a result. This is a precursor
to a change where DrmInitData will be included directly in the
Format to enable key rotation.
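A hedged sketch of the matching described in the first point (the map-based
lookup, SchemeInitData and UNIVERSAL_UUID names are assumptions for illustration):

  // schemeData is a Map<UUID, SchemeInitData>. Scheme-specific data wins;
  // otherwise fall back to data published for all schemes under a universal UUID.
  SchemeInitData get(UUID schemeUuid) {
    SchemeInitData data = schemeData.get(schemeUuid);
    return data != null ? data : schemeData.get(UNIVERSAL_UUID);
  }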
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121472592
This will be required for key rotation to work, since
we'll need a way to determine whether or not the init
data has changed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121018211
This CL adds near-complete support for CSS selectors (I say near because not every
CSS rule applies to WebVTT).
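For example (illustrative only, held in a Java string purely for this sketch),
the kinds of selectors that now apply to cues:

  String styleBlock =
      "STYLE\n"
          + "::cue { color: white }\n" // universal selector
          + "::cue(#intro) { font-style: italic }\n" // by cue id
          + "::cue(.loud) { font-weight: bold }\n" // by class
          + "::cue(v[voice=\"Someone\"]) { color: red }\n"; // by voice attribute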
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=120717498
This CL provides the necessary infrastructure to add styling by class. This was separated
into two different CLs to ease reviewing.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=120336976
- RollingSampleBuffer -> DefaultTrackOutput
- TsChunk -> HlsMediaChunk
- Established hls.playlist package for HLS playlist things
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=120325049
- Use same constant for unknown/unset microsecond times/durations.
- Change constant values to be nowhere near the "normal" range.
- Misc cleanup.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119944019
Allow styling <v Someone>Hello</v> with ::cue(v[voice="Someone"]) { ... }.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119748009
This CL allows style blocks to reference elements. For example: we could style
a cue with text "Sometimes <b>bold</b> is not enough" with the style block
::cue(b) { ... }.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119734779
In CSS, ids are referenced using #; a selector without # references elements.
NOTE: If the id of a cue is "1", we support referencing it with ::cue(#1).
In CSS, however, this is not valid: the number should be escaped with
\3 as in ::cue(\31). We do not yet support number escaping (and I doubt
whether we ever should).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119634708
This CL adds support for CSS styling of Cues through the id and "universal" cue selectors.
The more sophisticated selectors are left for later, because they require somewhat
more complex logic. Also narrowed the responsibilities of the WebvttCueParser a little,
moving some of them to the WebvttParser.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119547731
Both of these features are being promoted to first class
citizens in V2 (multi-period support will be handled via
playlists, seeking-in-window will be handled by exposing
the window/timeline from the player and via the normal
seek API). For now, it's much easier to continue the
refactoring process with the features removed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=119518675
This replaces calls to unescape except for SEI unescaping.
Use the new ParsableNalUnitBitArray for reading the slice header in HLS
access unit detection and slice_type reading.
Unescape the SPS before parsing in FLV and MP4. Before this change it was
parsed in its original (escaped) form.
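For context, "unescaping" here means removing H.264 emulation prevention bytes;
a simplified sketch (not the library's actual method):

  // Drop each emulation prevention byte (0x03 following two zero bytes) so the
  // payload can be parsed in its raw form. Simplified on the basis that in a
  // valid stream 0x00 0x00 0x03 only arises from escaping.
  static byte[] unescape(byte[] escaped) {
    byte[] out = new byte[escaped.length];
    int length = 0;
    int zeroCount = 0;
    for (byte b : escaped) {
      if (zeroCount >= 2 && b == 0x03) {
        zeroCount = 0; // Skip the emulation prevention byte.
        continue;
      }
      zeroCount = (b == 0x00) ? zeroCount + 1 : 0;
      out[length++] = b;
    }
    return java.util.Arrays.copyOf(out, length);
  }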
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=118777869
- Use FakeDataSource as the upstream source.
- Actually validate that caching is happening (i.e. reads happen
on the upstream source only if the data hasn't been read through
the CacheDataSource already).
- Move FakeClock to sit alongside the other Fake classes.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=118555903
In V2 we'll at some point start using DataSource factories
for creating DataSource instances. If there are two DataSource
interfaces this gets unnecessarily awkward.
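For illustration, the kind of factory this points toward (a sketch, not a final
interface):

  // With a single DataSource interface, one factory type is enough for
  // components that need to create their own instances.
  interface Factory {
    DataSource createDataSource();
  }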
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=118470751
Also use MediaCodec buffer flag constants instead of those on MediaExtractor.
This is in preparation for merging InputBuffer and SampleHolder.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117810136
This is the first version; it is not yet linked to the WebVTT parser, nor does it
support all the intended features, but it was left this way to ease the review a
little.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117722492
Some devices fail to decode an avc3 stream that doesn't start with an SPS (for
example, if an access unit delimiter appears first). Work around the issue by
discarding input sample data up to the first SPS on those devices.
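A hedged sketch of the detection the workaround relies on (the method is
illustrative; nalUnitOffset is the index just past the 0x00 0x00 0x01 start code):

  // In H.264 the NAL unit type is the low five bits of the first payload byte;
  // type 7 is a sequence parameter set (SPS).
  static boolean isSpsNalUnit(byte[] data, int nalUnitOffset) {
    return (data[nalUnitOffset] & 0x1F) == 7;
  }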
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117224602
The idea here is that you'll be able to feed data through an extractor
to a FakeExtractorOutput, then do the same again with some or all of the
simulated flakiness settings toggled on FakeExtractorInput, and then
assert that the output was the same in both cases.
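A hedged sketch of that pattern (the helper names are placeholders, not the real
test API):

  // Run the same data through the extractor twice, the second time with
  // simulated flakiness, and assert that the captured output is identical.
  FakeExtractorOutput expected = extractAll(createExtractor(), newFakeInput(data, /* flaky= */ false));
  FakeExtractorOutput actual = extractAll(createExtractor(), newFakeInput(data, /* flaky= */ true));
  assertEquals(expected, actual);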
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117224019
allow "center".
This value (and not "middle") is listed in
https://w3c.github.io/webvtt/ ( WebVTT: The Web Video Text Tracks Format, Draft
Community Group Report, 21 December 2015).
Leaving the behavior for "middle" unchanged.
It was the value listed in older drafts, e.g.
https://www.w3.org/TR/2014/WD-webvtt1-20141113/ ( W3C First Public Working
Draft 13 November 2014 )
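Illustrative only (not the actual parser code), the effect is simply:

  // "center" (current draft) and "middle" (older drafts) map to the same
  // alignment.
  static boolean isCenterKeyword(String value) {
    return "center".equals(value) || "middle".equals(value);
  }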
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117220612
- Use it to simplify a bunch of tests.
- Will also replace RecordableExtractorInput in a subsequent CL.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=117220030
- Format can represent both container and sample formats.
If a container contains a single track (as is true in
DASH and SmoothStreaming) then the container Format can
also contain sufficient information about the samples
to allow for track selection. This avoids the Format to
MediaFormat conversions that we were previously doing in
ChunkSource implementations.
- One important result of this change is that adaptive
format evaluation and static format selection now use the
same format objects, which is a whole lot less confusing
for someone who wants to implement both initial selection
and subsequent adaptation logic. It's not in the V2 doc,
but it may well make sense if the TrackSelector not only
selects the tracks to enable for an adaptive playback, but
also injects a FormatEvaluator when enabling them that will
control the subsequent adaptive selections. That would make
it so that all format selection logic originates from the
same place.
- As part of this change, the adaptiveX variables are removed
from the format object; they don't really correspond to a
single format. This also saves on having to inject the max
video dimensions through a bunch of classes.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=114546777
Duration was originally included in MediaFormat to match the
framework class, but it actually doesn't make much sense. In
many containers there's no such thing as per-stream duration,
and in any case we don't really care. Setting the duration on
each format required excessive piping.
This change moves duration into SeekMap instead, which seems
to make a lot more sense because it's at the container level,
and because being able to seek is generally coupled with
knowing how long the stream is.
This change is also a step toward merging Format and MediaFormat
into a single class (because Format doesn't have a duration),
which is coming soon.
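A hedged sketch of the SeekMap shape this gives (the method names are
assumptions, not the exact interface):

  // Duration lives at the container level, alongside the seeking information.
  interface SeekMap {
    boolean isSeekable();
    long getDurationUs();          // Duration of the container, not per-stream.
    long getPosition(long timeUs); // Byte offset to seek to for timeUs.
  }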
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=114428500
This change removes the need for SourceBuilders to load their
manifests before building their sources. This is done by pushing
initial manifest loads into the ChunkSource classes. This simplifies
the SourceBuilders a lot, and also DemoPlayer.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=113259259
Notes:
1. The logic in ExoPlayerImplInternal is very temporary, until we
have proper TrackSelector implementations. Ignore the fact that
it's crazy and has loads of nesting.
2. This change removes all capabilities checking. TrackRenderer
implementations will be updated to perform these checks in a
subsequent CL.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=113151233