Moved the behavior related to Cues into the WebvttCueParser class.
This way, the parsing methods are more easily accessible to
other classes, such as the MP4Webvtt parser. The class also has
some methods that hold state in order to avoid repeated, avoidable
allocations. The method visibility is subject to change in
future CLs.
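For illustration only, here is a minimal sketch of the allocation point mentioned above, using a hypothetical CueTextParser class (not the real WebvttCueParser API): the parser keeps a reusable buffer as state so that parsing a cue does not allocate a new one each time.

```java
// Illustrative sketch: a parser holding reusable state (a StringBuilder) so
// that parsing a cue does not allocate a fresh buffer per call. Names are
// hypothetical, not the actual ExoPlayer API.
public final class CueTextParser {

  // Reused across parse calls to avoid a new allocation per cue.
  private final StringBuilder textBuilder = new StringBuilder();

  /** Strips WebVTT tags from {@code markup} and returns the plain cue text. */
  public String parseCueText(String markup) {
    textBuilder.setLength(0); // Reset the shared buffer instead of reallocating.
    boolean insideTag = false;
    for (int i = 0; i < markup.length(); i++) {
      char c = markup.charAt(i);
      if (c == '<') {
        insideTag = true;
      } else if (c == '>') {
        insideTag = false;
      } else if (!insideTag) {
        textBuilder.append(c);
      }
    }
    return textBuilder.toString();
  }
}
```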
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=111616824
This is in preparation for removing BufferingInput and using
peeking instead.
Also adds tests for peeking with allowEndOfInput and
resetPeekPosition.
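As a hedged illustration of the peeking idea, the sketch below uses a simplified PeekingInput interface and a HeaderSniffer class invented for this example (not ExoPlayer's actual ExtractorInput API). It shows how allowEndOfInput and resetPeekPosition are intended to be used when sniffing a stream without consuming it.

```java
import java.io.IOException;

// Simplified stand-in for an input that supports peeking; a sketch of the
// idea, not the real ExtractorInput interface.
interface PeekingInput {
  /**
   * Peeks {@code length} bytes into {@code target} without consuming them.
   * Returns false (rather than throwing) if the end of the input is reached
   * and {@code allowEndOfInput} is true.
   */
  boolean peekFully(byte[] target, int offset, int length, boolean allowEndOfInput)
      throws IOException;

  /** Rewinds the peek position back to the current read position. */
  void resetPeekPosition();
}

final class HeaderSniffer {
  /** Peeks the first bytes to check a magic number, leaving the read position untouched. */
  static boolean sniff(PeekingInput input, byte[] expectedMagic) throws IOException {
    byte[] scratch = new byte[expectedMagic.length];
    // allowEndOfInput=true: a short stream is reported as "no match", not an error.
    if (!input.peekFully(scratch, 0, scratch.length, /* allowEndOfInput= */ true)) {
      return false;
    }
    input.resetPeekPosition(); // Let the real extraction read from the start again.
    return java.util.Arrays.equals(scratch, expectedMagic);
  }
}
```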
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=111318236
This CL prepares the ground for refactoring the WebVTT parser, so
that the common parsing algorithms can be used in both parsers. As a
side note, many more test cases will be added once the new subtitle
features are implemented. Some useful test cases have also been
deferred to a follow-up CL, to keep this code review easy.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=110466914
Fix the behavior of getTimeUs when seeking after the last entry in the table of
contents. Round correctly in getPosition, clipping to the stream duration based
on the input length (if known) and falling back to the stream size from the header
otherwise.
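A worked sketch of the clipping described above, with hypothetical field names and a -1 sentinel for an unknown input length; the real seeker's table-of-contents layout will differ.

```java
// Hedged sketch of the clamping/clipping logic only; not the real seeker.
final class TocSeekerSketch {
  private final long durationUs;
  private final long firstFrameBytePosition;
  private final long sizeBytesFromHeader;
  private final long inputLength; // -1 models an unknown input length here.

  TocSeekerSketch(long durationUs, long firstFrameBytePosition, long sizeBytesFromHeader,
      long inputLength) {
    this.durationUs = durationUs;
    this.firstFrameBytePosition = firstFrameBytePosition;
    this.sizeBytesFromHeader = sizeBytesFromHeader;
    this.inputLength = inputLength;
  }

  /** Maps a seek time to a byte position, clipping to the known stream bounds. */
  long getPosition(long timeUs) {
    // Clamp the requested time into the stream's duration.
    long clampedTimeUs = Math.max(0, Math.min(timeUs, durationUs));
    double fraction = durationUs == 0 ? 0 : (double) clampedTimeUs / durationUs;
    // Round (rather than truncate) when converting the fraction to a byte offset.
    long position = firstFrameBytePosition + Math.round(fraction * sizeBytesFromHeader);
    // Clip to the input length if known, otherwise to the size from the header,
    // so we never return a position at or beyond the end of the stream.
    long limit = inputLength != -1 ? inputLength : firstFrameBytePosition + sizeBytesFromHeader;
    return Math.min(position, limit - 1);
  }
}
```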
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=109993852
- Parse the WebVTT cue text.
- Remove all tags from the string (supported or not).
- Apply spans for b, i and u tags.
- Honor class names in tags so the cue parses correctly, but do not apply styles for them.
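The tag handling above can be sketched as follows. This is an illustration only: it records b/i/u ranges in a plain SpanRange class instead of applying Android spans, and the class and method names are invented for the example.

```java
import java.util.List;

// Sketch of the tag handling: strip every tag from the text, but remember
// styling ranges for b, i and u; class names ("b.loud") are parsed past but
// not styled. Not the actual parser API.
final class CueMarkupSketch {

  static final class SpanRange {
    final char tag; // 'b', 'i' or 'u'.
    final int start;
    final int end;
    SpanRange(char tag, int start, int end) {
      this.tag = tag;
      this.start = start;
      this.end = end;
    }
  }

  /** Strips all tags from {@code markup}, recording styling ranges only for b, i and u. */
  static String parse(String markup, List<SpanRange> spansOut) {
    StringBuilder text = new StringBuilder();
    int[] openPosition = new int[128]; // Start index of the currently open tag, per tag char.
    int i = 0;
    while (i < markup.length()) {
      char c = markup.charAt(i);
      if (c != '<') {
        text.append(c);
        i++;
        continue;
      }
      int close = markup.indexOf('>', i);
      if (close == -1) {
        break; // Malformed trailing tag: drop it.
      }
      String tag = markup.substring(i + 1, close);
      boolean isEndTag = tag.startsWith("/");
      // "b.loud" -> "b": honor the class name for parsing, but don't style it.
      String name = (isEndTag ? tag.substring(1) : tag).split("\\.")[0].trim();
      if (name.length() == 1 && "biu".indexOf(name.charAt(0)) != -1) {
        char t = name.charAt(0);
        if (!isEndTag) {
          openPosition[t] = text.length();
        } else {
          spansOut.add(new SpanRange(t, openPosition[t], text.length()));
        }
      }
      i = close + 1; // Unsupported tags are simply skipped (removed from the text).
    }
    return text.toString();
  }
}
```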
* Split findNextCueHeader and validateWebVttHeader into static methods.
This is a step toward WebVTT in HLS, where we'll need to re-use them
to peek at the top of the WebVTT file (they'll be moved into a util
class); a minimal sketch of the header check follows at the end of this note.
* Made the parser robust against bad cue headers and added a test.
* Removed a spurious-looking assertion in WebvttSubtitle.
Such assertions don't seem particularly useful; they don't technically
force strict compliance, but rather just catch a few token things in
each case. Furthermore, for playback, the right thing to do is
probably to always turn strict mode off.
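For the findNextCueHeader/validateWebVttHeader split mentioned above, a reusable static header check might look like the sketch below. The method name and BOM handling are illustrative, based only on the WebVTT requirement that a file start with "WEBVTT".

```java
// Minimal sketch of header validation as a reusable static method; not the
// actual utility class referenced above.
final class WebvttHeaderCheck {
  private static final String HEADER = "WEBVTT";

  /** Returns whether {@code line} is a valid first line of a WebVTT file. */
  static boolean isWebvttHeaderLine(String line) {
    if (line.startsWith("\uFEFF")) {
      line = line.substring(1); // Skip a UTF-8 byte order mark if present.
    }
    // "WEBVTT" may stand alone or be followed by a space or tab and arbitrary text.
    return line.equals(HEADER)
        || line.startsWith(HEADER + " ")
        || line.startsWith(HEADER + "\t");
  }
}
```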
We were previously using the container format of the media being
played as the mimeType when generating key requests, but this is not
always correct. For example, where a manifest contains webm streams
but specifies initialization data using cenc:pssh elements in the
manifest, the media has a webm mimeType, but the DRM initialization
data has an mp4 mimeType.
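As a hedged illustration of keeping the two mimeTypes separate, the sketch below uses hypothetical SchemeInitDataSketch and KeyRequestSketch classes rather than the real DRM classes.

```java
// Illustrative only: the DRM scheme data carries its own mimeType, distinct
// from the container mimeType of the media being played.
final class SchemeInitDataSketch {
  final String mimeType; // e.g. an mp4 mimeType for cenc:pssh data, even for webm media.
  final byte[] data;

  SchemeInitDataSketch(String mimeType, byte[] data) {
    this.mimeType = mimeType;
    this.data = data;
  }
}

final class KeyRequestSketch {
  /** Uses the initialization data's mimeType, not the container's, for the key request. */
  static String mimeTypeForKeyRequest(String containerMimeType, SchemeInitDataSketch initData) {
    // The container mimeType would be the wrong choice when the manifest supplies
    // mp4-flavoured init data for webm media.
    return initData.mimeType != null ? initData.mimeType : containerMimeType;
  }
}
```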
Added an 'id' property to the MediaFormat class,
which serves as an identifier for the track.
DASH Representations have the ids from their
Media Presentation Description mapped to the id property
of the MediaFormat that represents the track.
We needed this for a use case where we wanted to read the 'id'
value from the DASH Representation and present it to the user,
so that the user can select the right track.
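A small sketch of the id plumbing, using hypothetical stand-in types rather than the real MediaFormat and Representation classes:

```java
import java.util.List;

// Sketch only: a track format carrying the Representation id from the MPD,
// so application code can pick a track by the id it saw in the manifest.
final class TrackFormatSketch {
  final String id; // The Representation id, carried through to the track.
  final int bitrate;

  TrackFormatSketch(String id, int bitrate) {
    this.id = id;
    this.bitrate = bitrate;
  }

  /** Returns the format whose id matches {@code wantedId}, or null if none does. */
  static TrackFormatSketch selectById(List<TrackFormatSketch> formats, String wantedId) {
    for (TrackFormatSketch format : formats) {
      if (format.id.equals(wantedId)) {
        return format;
      }
    }
    return null;
  }
}
```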
Try re-syncing to the next level 1 element when invalid data is found. This
corrects the behavior for test case 4 in the mkv test suite.
Partially fixes Issue #631.
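The resync can be sketched as a forward scan for a known level 1 element ID; the Matroska Cluster ID (0x1F43B675) is used below. This is a simplified illustration, not the extractor's actual implementation.

```java
// Sketch: after hitting invalid data, scan forward byte by byte until a known
// level 1 element ID is found, then resume parsing from that offset.
final class EbmlResyncSketch {
  private static final int CLUSTER_ID = 0x1F43B675; // Matroska Cluster element ID.

  /** Returns the offset of the next Cluster ID at or after {@code from}, or -1 if none. */
  static int findNextLevelOneElement(byte[] data, int from) {
    int window = 0;
    for (int i = from; i < data.length; i++) {
      window = (window << 8) | (data[i] & 0xFF); // Slide a 4-byte window over the stream.
      if (window == CLUSTER_ID) {
        return i - 3; // Offset of the first byte of the element ID.
      }
    }
    return -1;
  }
}
```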
- Line parameter (see the sketch after this list)
  - Added support for value and line alignment attributes.
  - Support negative numbers when the line is an absolute number (not a
    percentage).
- Position parameter
  - Added support for value and position alignment attributes.
- Added support for WebVTT comment blocks.
- Percentage values now accept decimal numbers (as the WebVTT spec states).
- Added new WebVTT tests covering all the newly implemented features.
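As referenced above, here is a sketch of how parsing the line setting might look. The field names and the Float.MIN_VALUE sentinel are illustrative, not the real cue builder API.

```java
// Sketch of parsing a "line" cue setting such as "line:-2" or "line:10.5%,center".
final class CueSettingSketch {
  float line = Float.MIN_VALUE; // Unset until parsed; then a fraction or a line number.
  boolean lineIsPercentage;
  String lineAlignment;         // e.g. "start", "center", "end".

  void parseLineSetting(String value) {
    String[] parts = value.split(",", 2);
    if (parts.length == 2) {
      lineAlignment = parts[1];  // Optional alignment attribute after the comma.
    }
    String number = parts[0];
    if (number.endsWith("%")) {
      // Percentages may be decimal, e.g. "10.5%".
      line = Float.parseFloat(number.substring(0, number.length() - 1)) / 100f;
      lineIsPercentage = true;
    } else {
      // Absolute line numbers may be negative (counted from the end of the viewport).
      line = Integer.parseInt(number);
      lineIsPercentage = false;
    }
  }
}
```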
- Do not denormalize styles at parsing time; only put normalized style info
  into the TtmlNode tree, and resolve styles on demand when Cues are requested
  for a given timeUs.
- Created TtmlRenderUtil to keep the static render functions separate.
- Added a unit test for TtmlRenderUtil.
- Adjusted the testing strategy for the unit test to check the resolved style on Spannables after rendering.
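A minimal sketch of the on-demand resolution, with simplified stand-ins for TtmlNode/TtmlStyle (only bold and italic are modelled, and later style ids take precedence):

```java
import java.util.List;
import java.util.Map;

// Sketch only: nodes keep style ids, and the effective style is chained
// together when cues are requested, rather than being denormalized at parse time.
final class TtmlStyleSketch {
  static final class Style {
    Boolean bold;
    Boolean italic;
  }

  /** Chains the referenced styles together, later ids taking precedence. */
  static Style resolve(List<String> styleIds, Map<String, Style> globalStyles) {
    Style resolved = new Style();
    for (String id : styleIds) {
      Style style = globalStyles.get(id);
      if (style == null) {
        continue;
      }
      if (style.bold != null) {
        resolved.bold = style.bold;
      }
      if (style.italic != null) {
        resolved.italic = style.italic;
      }
    }
    return resolved;
  }
}
```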
- With this change, you can select from the individual video formats in
the demo app, as well as the regular "auto" (adaptive) track.
- DashRendererBuilder no longer needs to create MultiTrackChunkSource
instances for the multiple tracks to be exposed.
- It's not possible to determine a period's duration from the
  corresponding Period element in a DASH manifest alone. It's necessary
  to look at the start time of the next period (or, for the last period,
  the duration of the manifest) to determine how long a period is. We
  don't currently do this in the parser, and hence set the duration incorrectly.
  We also set period start times incorrectly, because we don't set them
  equal to the sum of the durations of prior periods when they're
  not explicitly defined. (See the sketch after this list.)
- We're currently propagating these (incorrect) values all over the place
  through data structures that we build when parsing the Period element.
- This CL removes this redundancy, storing only the start time of each
  period in Period elements and not propagating it elsewhere. The start time
  is then used when required in DashChunkSource.
- Remove an unused method in DashChunkSource.
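As referenced in the first item, the duration computation can be sketched as follows, assuming start times in milliseconds and -1 for an unknown manifest duration; these are simplified stand-ins, not the real manifest classes.

```java
import java.util.List;

// Worked sketch: a period's duration is derived from the next period's start
// time, or from the manifest duration for the last period.
final class PeriodDurationSketch {

  /** Returns the duration in ms of the period at {@code index}, or -1 if unknown. */
  static long getPeriodDurationMs(List<Long> periodStartTimesMs, long manifestDurationMs,
      int index) {
    long startMs = periodStartTimesMs.get(index);
    if (index < periodStartTimesMs.size() - 1) {
      // Not the last period: the duration runs up to the next period's start.
      return periodStartTimesMs.get(index + 1) - startMs;
    }
    // Last period: fall back to the manifest duration, if it is known.
    return manifestDurationMs == -1 ? -1 : manifestDurationMs - startMs;
  }
}
```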
- Remove the inputEncoding parameter from the subtitle parsers. We're
  ignoring it in all but one of the parsers, and the one that does
  use it will only ever receive null, since that's all we're passing.
- Make TextTrackRenderer advance to the next subtitle even if
  the current one hasn't finished, in the case where they overlap.
  This shouldn't ever really happen, but it seems best to trust
  the start time of the new sample rather than the last event
  time of the previous one.
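The overlap rule in the second item can be sketched like this; the field names are illustrative, not the real renderer's.

```java
// Sketch only: switch to the next subtitle as soon as its start time is
// reached, even if the current subtitle still has events remaining.
final class SubtitleSwitchSketch {
  long currentLastEventTimeUs;
  long nextSubtitleStartTimeUs = Long.MAX_VALUE;

  /** Returns whether playback at {@code positionUs} should move to the next subtitle. */
  boolean shouldAdvance(long positionUs) {
    // Deliberately ignores currentLastEventTimeUs: the new sample's start time wins.
    return nextSubtitleStartTimeUs <= positionUs;
  }
}
```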