Fix audio processor draining for reconfiguration
When transitioning to a new stream in a different format, the audio processors are reconfigured. After this, they are drained and then flushed so that they are ready to handle data in updated formats for the new stream. Before this change, some audio processors assumed that after reconfiguration no more input would be queued in their old input format, but this assumption is not correct: more input may be queued during draining. Fix this behavior so that the new configuration is not referred to while draining, and only becomes active once the processor is flushed.

Issue: #6601
PiperOrigin-RevId: 282515359
This commit is contained in:
parent
4799993d3b
commit
a81149d962
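The lifecycle described in the commit message (configure() only records a pending format; flush() promotes it to the active one, so input queued while draining is still interpreted in the old format) can be sketched in a minimal, self-contained form. The class below is illustrative only, with assumed names; it is not part of ExoPlayer:

```java
// Minimal sketch of the pending/active format pattern this commit introduces.
// All names here are hypothetical; the real processors track AudioFormat
// objects rather than a bare sample rate.
final class PendingFormatProcessor {
  static final int NOT_SET = -1;

  private int pendingSampleRate = NOT_SET;
  private int sampleRate = NOT_SET;

  /** Records the new format; it does not take effect yet. */
  int configure(int newSampleRate) {
    pendingSampleRate = newSampleRate;
    return pendingSampleRate;
  }

  /** Activates the pending format; called only after draining completes. */
  void flush() {
    sampleRate = pendingSampleRate;
  }

  /** The format used to interpret queued input. */
  int activeSampleRate() {
    return sampleRate;
  }
}
```

Input queued between configure() and flush() still sees the old active format, which is exactly the property the buggy processors violated.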
@@ -7,9 +7,14 @@
   This extractor does not support seeking and live streams. If
   `DefaultExtractorsFactory` is used, this extractor is only used if the FLAC
   extension is not loaded.
+* Video tunneling: Fix renderer end-of-stream with `OnFrameRenderedListener`
+  from API 23, tunneled renderer must send a special timestamp on EOS.
+  Previously the EOS was reported when the input stream reached EOS.
 * Require an end time or duration for SubRip (SRT) and SubStation Alpha
   (SSA/ASS) subtitles. This applies to both sidecar files & subtitles
   [embedded in Matroska streams](https://matroska.org/technical/specs/subtitles/index.html).
+* Use `ExoMediaDrm.Provider` in `OfflineLicenseHelper` to avoid `ExoMediaDrm`
+  leaks ([#4721](https://github.com/google/ExoPlayer/issues/4721)).
 * Improve `Format` propagation within the `MediaCodecRenderer` and subclasses.
   For example, fix handling of pixel aspect ratio changes in playlists where
   video resolution does not change.
@@ -17,8 +22,7 @@
 * Rename `MediaCodecRenderer.onOutputFormatChanged` to
   `MediaCodecRenderer.onOutputMediaFormatChanged`, further
   clarifying the distinction between `Format` and `MediaFormat`.
-* Reconfigure audio sink when PCM encoding changes
-  ([#6601](https://github.com/google/ExoPlayer/issues/6601)).
+* Fix byte order of HDR10+ static metadata to match CTA-861.3.
 * Make `MediaSourceEventListener.LoadEventInfo` and
   `MediaSourceEventListener.MediaLoadData` top-level classes.
 
@@ -52,25 +56,15 @@
 * Fix issue where player errors are thrown too early at playlist transitions
   ([#5407](https://github.com/google/ExoPlayer/issues/5407)).
 * DRM:
-  * Inject `DrmSessionManager` into the `MediaSources` instead of `Renderers`.
-    This allows each `MediaSource` in a `ConcatenatingMediaSource` to use a
-    different `DrmSessionManager`
+  * Inject `DrmSessionManager` into the `MediaSources` instead of `Renderers`
     ([#5619](https://github.com/google/ExoPlayer/issues/5619)).
-  * Add `DefaultDrmSessionManager.Builder`, and remove
-    `DefaultDrmSessionManager` static factory methods that leaked
-    `ExoMediaDrm` instances
-    ([#4721](https://github.com/google/ExoPlayer/issues/4721)).
-  * Add support for the use of secure decoders when playing clear content
-    ([#4867](https://github.com/google/ExoPlayer/issues/4867)). This can
-    be enabled using `DefaultDrmSessionManager.Builder`'s
-    `setUseDrmSessionsForClearContent` method.
+  * Add a `DefaultDrmSessionManager.Builder`.
+  * Add support for the use of secure decoders in clear sections of content
+    ([#4867](https://github.com/google/ExoPlayer/issues/4867)).
   * Add support for custom `LoadErrorHandlingPolicies` in key and provisioning
-    requests ([#6334](https://github.com/google/ExoPlayer/issues/6334)). Custom
-    policies can be passed via `DefaultDrmSessionManager.Builder`'s
-    `setLoadErrorHandlingPolicy` method.
-  * Use `ExoMediaDrm.Provider` in `OfflineLicenseHelper` to avoid leaking
-    `ExoMediaDrm` instances
-    ([#4721](https://github.com/google/ExoPlayer/issues/4721)).
+    requests ([#6334](https://github.com/google/ExoPlayer/issues/6334)).
+  * Remove `DefaultDrmSessionManager` factory methods that leak `ExoMediaDrm`
+    instances ([#4721](https://github.com/google/ExoPlayer/issues/4721)).
 * Track selection:
   * Update `DefaultTrackSelector` to set a viewport constraint for the default
     display by default.
@@ -88,19 +82,18 @@
     configuration of the audio capture policy.
 * Video:
   * Pass the codec output `MediaFormat` to `VideoFrameMetadataListener`.
-  * Fix byte order of HDR10+ static metadata to match CTA-861.3.
-  * Support out-of-band HDR10+ dynamic metadata for VP9 in WebM/Matroska.
+  * Support out-of-band HDR10+ metadata for VP9 in WebM/Matroska.
   * Assume that protected content requires a secure decoder when evaluating
     whether `MediaCodecVideoRenderer` supports a given video format
     ([#5568](https://github.com/google/ExoPlayer/issues/5568)).
   * Fix Dolby Vision fallback to AVC and HEVC.
-  * Fix early end-of-stream detection when using video tunneling, on API level
-    23 and above.
 * Audio:
   * Fix the start of audio getting truncated when transitioning to a new
     item in a playlist of Opus streams.
   * Workaround broken raw audio decoding on Oppo R9
     ([#5782](https://github.com/google/ExoPlayer/issues/5782)).
+  * Reconfigure audio sink when PCM encoding changes
+    ([#6601](https://github.com/google/ExoPlayer/issues/6601)).
 * UI:
   * Make showing and hiding player controls accessible to TalkBack in
     `PlayerView`.
@@ -43,7 +43,7 @@ public final class GvrAudioProcessor implements AudioProcessor {
   private static final int OUTPUT_FRAME_SIZE = OUTPUT_CHANNEL_COUNT * 2; // 16-bit stereo output.
   private static final int NO_SURROUND_FORMAT = GvrAudioSurround.SurroundFormat.INVALID;
 
-  private AudioFormat inputAudioFormat;
+  private AudioFormat pendingInputAudioFormat;
   private int pendingGvrAudioSurroundFormat;
   @Nullable private GvrAudioSurround gvrAudioSurround;
   private ByteBuffer buffer;
@@ -58,7 +58,7 @@ public final class GvrAudioProcessor implements AudioProcessor {
   public GvrAudioProcessor() {
     // Use the identity for the initial orientation.
     w = 1f;
-    inputAudioFormat = AudioFormat.NOT_SET;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
     buffer = EMPTY_BUFFER;
     pendingGvrAudioSurroundFormat = NO_SURROUND_FORMAT;
   }
@@ -116,7 +116,7 @@ public final class GvrAudioProcessor implements AudioProcessor {
       buffer = ByteBuffer.allocateDirect(FRAMES_PER_OUTPUT_BUFFER * OUTPUT_FRAME_SIZE)
           .order(ByteOrder.nativeOrder());
     }
-    this.inputAudioFormat = inputAudioFormat;
+    pendingInputAudioFormat = inputAudioFormat;
     return new AudioFormat(inputAudioFormat.sampleRate, OUTPUT_CHANNEL_COUNT, C.ENCODING_PCM_16BIT);
   }
 
@@ -164,8 +164,8 @@ public final class GvrAudioProcessor implements AudioProcessor {
       gvrAudioSurround =
           new GvrAudioSurround(
               pendingGvrAudioSurroundFormat,
-              inputAudioFormat.sampleRate,
-              inputAudioFormat.channelCount,
+              pendingInputAudioFormat.sampleRate,
+              pendingInputAudioFormat.channelCount,
               FRAMES_PER_OUTPUT_BUFFER);
       gvrAudioSurround.updateNativeOrientation(w, x, y, z);
       pendingGvrAudioSurroundFormat = NO_SURROUND_FORMAT;
@@ -180,7 +180,7 @@ public final class GvrAudioProcessor implements AudioProcessor {
     maybeReleaseGvrAudioSurround();
     updateOrientation(/* w= */ 1f, /* x= */ 0f, /* y= */ 0f, /* z= */ 0f);
     inputEnded = false;
-    inputAudioFormat = AudioFormat.NOT_SET;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
     buffer = EMPTY_BUFFER;
     pendingGvrAudioSurroundFormat = NO_SURROUND_FORMAT;
   }
@@ -88,8 +88,9 @@ public interface AudioProcessor {
    * the configured output audio format if this instance is active.
    *
    * <p>After calling this method, it is necessary to {@link #flush()} the processor to apply the
-   * new configuration before queueing more data. You can (optionally) first drain output in the
-   * previous configuration by calling {@link #queueEndOfStream()} and {@link #getOutput()}.
+   * new configuration. Before applying the new configuration, it is safe to queue input and get
+   * output in the old input/output formats. Call {@link #queueEndOfStream()} when no more input
+   * will be supplied in the old input format.
    *
    * @param inputAudioFormat The format of audio that will be queued after the next call to {@link
    *     #flush()}.
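The contract in the updated javadoc — old-format input and output remain legal between configure() and flush(), with queueEndOfStream() marking the end of old-format input — can be sketched with a toy processor. All names below are hypothetical stand-ins, not the ExoPlayer API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the configure/drain/flush contract: input queued after
// configure() but before flush() is still processed in the old active format.
final class DrainDemo {
  private String activeFormat = "NOT_SET";
  private String pendingFormat = "NOT_SET";
  private final Queue<String> output = new ArrayDeque<>();
  private boolean inputEnded;

  /** Records a pending format; it takes effect only on flush(). */
  String configure(String format) {
    pendingFormat = format;
    return pendingFormat;
  }

  /** Tags each input sample with the format it was processed in. */
  void queueInput(String sample) {
    output.add(sample + "@" + activeFormat);
  }

  /** Signals that no more input will arrive in the old format. */
  void queueEndOfStream() {
    inputEnded = true;
  }

  /** Returns drained output, or an empty string if none is pending. */
  String getOutput() {
    return output.isEmpty() ? "" : output.remove();
  }

  boolean isEnded() {
    return inputEnded && output.isEmpty();
  }

  /** Applies the pending configuration once draining is complete. */
  void flush() {
    activeFormat = pendingFormat;
    inputEnded = false;
  }
}
```

A caller reconfigures, keeps queueing old-format input, calls queueEndOfStream() and drains via getOutput()/isEnded(), and only then flushes to activate the new format.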
@@ -26,10 +26,13 @@ import java.nio.ByteOrder;
  */
 public abstract class BaseAudioProcessor implements AudioProcessor {
 
-  /** The configured input audio format. */
+  /** The current input audio format. */
   protected AudioFormat inputAudioFormat;
+  /** The current output audio format. */
+  protected AudioFormat outputAudioFormat;
 
-  private AudioFormat outputAudioFormat;
+  private AudioFormat pendingInputAudioFormat;
+  private AudioFormat pendingOutputAudioFormat;
   private ByteBuffer buffer;
   private ByteBuffer outputBuffer;
   private boolean inputEnded;
@@ -37,6 +40,8 @@ public abstract class BaseAudioProcessor implements AudioProcessor {
   public BaseAudioProcessor() {
     buffer = EMPTY_BUFFER;
     outputBuffer = EMPTY_BUFFER;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
+    pendingOutputAudioFormat = AudioFormat.NOT_SET;
     inputAudioFormat = AudioFormat.NOT_SET;
     outputAudioFormat = AudioFormat.NOT_SET;
   }
@@ -44,14 +49,14 @@ public abstract class BaseAudioProcessor implements AudioProcessor {
   @Override
   public final AudioFormat configure(AudioFormat inputAudioFormat)
       throws UnhandledAudioFormatException {
-    this.inputAudioFormat = inputAudioFormat;
-    outputAudioFormat = onConfigure(inputAudioFormat);
-    return isActive() ? outputAudioFormat : AudioFormat.NOT_SET;
+    pendingInputAudioFormat = inputAudioFormat;
+    pendingOutputAudioFormat = onConfigure(inputAudioFormat);
+    return isActive() ? pendingOutputAudioFormat : AudioFormat.NOT_SET;
   }
 
   @Override
   public boolean isActive() {
-    return outputAudioFormat != AudioFormat.NOT_SET;
+    return pendingOutputAudioFormat != AudioFormat.NOT_SET;
   }
 
   @Override
|
|||||||
public final void flush() {
|
public final void flush() {
|
||||||
outputBuffer = EMPTY_BUFFER;
|
outputBuffer = EMPTY_BUFFER;
|
||||||
inputEnded = false;
|
inputEnded = false;
|
||||||
|
inputAudioFormat = pendingInputAudioFormat;
|
||||||
|
outputAudioFormat = pendingOutputAudioFormat;
|
||||||
onFlush();
|
onFlush();
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -86,6 +93,8 @@ public abstract class BaseAudioProcessor implements AudioProcessor {
   public final void reset() {
     flush();
     buffer = EMPTY_BUFFER;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
+    pendingOutputAudioFormat = AudioFormat.NOT_SET;
     inputAudioFormat = AudioFormat.NOT_SET;
     outputAudioFormat = AudioFormat.NOT_SET;
     onReset();
@@ -24,12 +24,10 @@ import java.nio.ByteBuffer;
  * An {@link AudioProcessor} that applies a mapping from input channels onto specified output
  * channels. This can be used to reorder, duplicate or discard channels.
  */
-// the constructor does not initialize fields: pendingOutputChannels, outputChannels
 @SuppressWarnings("nullness:initialization.fields.uninitialized")
 /* package */ final class ChannelMappingAudioProcessor extends BaseAudioProcessor {
 
   @Nullable private int[] pendingOutputChannels;
-
   @Nullable private int[] outputChannels;
 
   /**
|
|||||||
@Override
|
@Override
|
||||||
public AudioFormat onConfigure(AudioFormat inputAudioFormat)
|
public AudioFormat onConfigure(AudioFormat inputAudioFormat)
|
||||||
throws UnhandledAudioFormatException {
|
throws UnhandledAudioFormatException {
|
||||||
outputChannels = pendingOutputChannels;
|
@Nullable int[] outputChannels = pendingOutputChannels;
|
||||||
|
|
||||||
int[] outputChannels = this.outputChannels;
|
|
||||||
if (outputChannels == null) {
|
if (outputChannels == null) {
|
||||||
return AudioFormat.NOT_SET;
|
return AudioFormat.NOT_SET;
|
||||||
}
|
}
|
||||||
@@ -76,19 +72,24 @@ import java.nio.ByteBuffer;
     int[] outputChannels = Assertions.checkNotNull(this.outputChannels);
     int position = inputBuffer.position();
     int limit = inputBuffer.limit();
-    int frameCount = (limit - position) / (2 * inputAudioFormat.channelCount);
-    int outputSize = frameCount * outputChannels.length * 2;
+    int frameCount = (limit - position) / inputAudioFormat.bytesPerFrame;
+    int outputSize = frameCount * outputAudioFormat.bytesPerFrame;
     ByteBuffer buffer = replaceOutputBuffer(outputSize);
     while (position < limit) {
       for (int channelIndex : outputChannels) {
         buffer.putShort(inputBuffer.getShort(position + 2 * channelIndex));
       }
-      position += inputAudioFormat.channelCount * 2;
+      position += inputAudioFormat.bytesPerFrame;
     }
     inputBuffer.position(limit);
     buffer.flip();
   }
 
+  @Override
+  protected void onFlush() {
+    outputChannels = pendingOutputChannels;
+  }
+
   @Override
   protected void onReset() {
     outputChannels = null;
@@ -65,6 +65,8 @@ public final class SonicAudioProcessor implements AudioProcessor {
   private float speed;
   private float pitch;
 
+  private AudioFormat pendingInputAudioFormat;
+  private AudioFormat pendingOutputAudioFormat;
   private AudioFormat inputAudioFormat;
   private AudioFormat outputAudioFormat;
 
@@ -83,6 +85,8 @@ public final class SonicAudioProcessor implements AudioProcessor {
   public SonicAudioProcessor() {
     speed = 1f;
     pitch = 1f;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
+    pendingOutputAudioFormat = AudioFormat.NOT_SET;
     inputAudioFormat = AudioFormat.NOT_SET;
     outputAudioFormat = AudioFormat.NOT_SET;
     buffer = EMPTY_BUFFER;
@@ -167,19 +171,19 @@ public final class SonicAudioProcessor implements AudioProcessor {
         pendingOutputSampleRate == SAMPLE_RATE_NO_CHANGE
             ? inputAudioFormat.sampleRate
             : pendingOutputSampleRate;
-    this.inputAudioFormat = inputAudioFormat;
-    this.outputAudioFormat =
+    pendingInputAudioFormat = inputAudioFormat;
+    pendingOutputAudioFormat =
         new AudioFormat(outputSampleRateHz, inputAudioFormat.channelCount, C.ENCODING_PCM_16BIT);
     pendingSonicRecreation = true;
-    return outputAudioFormat;
+    return pendingOutputAudioFormat;
   }
 
   @Override
   public boolean isActive() {
-    return outputAudioFormat.sampleRate != Format.NO_VALUE
+    return pendingOutputAudioFormat.sampleRate != Format.NO_VALUE
         && (Math.abs(speed - 1f) >= CLOSE_THRESHOLD
             || Math.abs(pitch - 1f) >= CLOSE_THRESHOLD
-            || outputAudioFormat.sampleRate != inputAudioFormat.sampleRate);
+            || pendingOutputAudioFormat.sampleRate != pendingInputAudioFormat.sampleRate);
   }
 
   @Override
@@ -231,6 +235,8 @@ public final class SonicAudioProcessor implements AudioProcessor {
   @Override
   public void flush() {
     if (isActive()) {
+      inputAudioFormat = pendingInputAudioFormat;
+      outputAudioFormat = pendingOutputAudioFormat;
       if (pendingSonicRecreation) {
         sonic =
             new Sonic(
@@ -253,6 +259,8 @@ public final class SonicAudioProcessor implements AudioProcessor {
   public void reset() {
     speed = 1f;
     pitch = 1f;
+    pendingInputAudioFormat = AudioFormat.NOT_SET;
+    pendingOutputAudioFormat = AudioFormat.NOT_SET;
     inputAudioFormat = AudioFormat.NOT_SET;
     outputAudioFormat = AudioFormat.NOT_SET;
     buffer = EMPTY_BUFFER;
@@ -80,7 +80,16 @@ public final class TeeAudioProcessor extends BaseAudioProcessor {
   }
 
   @Override
-  protected void onFlush() {
+  protected void onQueueEndOfStream() {
+    flushSinkIfActive();
+  }
+
+  @Override
+  protected void onReset() {
+    flushSinkIfActive();
+  }
+
+  private void flushSinkIfActive() {
     if (isActive()) {
       audioBufferSink.flush(
           inputAudioFormat.sampleRate, inputAudioFormat.channelCount, inputAudioFormat.encoding);
@@ -26,8 +26,7 @@ import java.nio.ByteBuffer;
 
   private int trimStartFrames;
   private int trimEndFrames;
-  private int bytesPerFrame;
-  private boolean receivedInputSinceConfigure;
+  private boolean reconfigurationPending;
 
   private int pendingTrimStartBytes;
   private byte[] endBuffer;
@@ -72,14 +71,7 @@ import java.nio.ByteBuffer;
     if (inputAudioFormat.encoding != OUTPUT_ENCODING) {
       throw new UnhandledAudioFormatException(inputAudioFormat);
     }
-    if (endBufferSize > 0) {
-      trimmedFrameCount += endBufferSize / bytesPerFrame;
-    }
-    bytesPerFrame = inputAudioFormat.bytesPerFrame;
-    endBuffer = new byte[trimEndFrames * bytesPerFrame];
-    endBufferSize = 0;
-    pendingTrimStartBytes = trimStartFrames * bytesPerFrame;
-    receivedInputSinceConfigure = false;
+    reconfigurationPending = true;
     return trimStartFrames != 0 || trimEndFrames != 0 ? inputAudioFormat : AudioFormat.NOT_SET;
   }
 
@@ -92,11 +84,10 @@ import java.nio.ByteBuffer;
     if (remaining == 0) {
       return;
     }
-    receivedInputSinceConfigure = true;
 
     // Trim any pending start bytes from the input buffer.
     int trimBytes = Math.min(remaining, pendingTrimStartBytes);
-    trimmedFrameCount += trimBytes / bytesPerFrame;
+    trimmedFrameCount += trimBytes / inputAudioFormat.bytesPerFrame;
     pendingTrimStartBytes -= trimBytes;
     inputBuffer.position(position + trimBytes);
     if (pendingTrimStartBytes > 0) {
@@ -137,10 +128,8 @@ import java.nio.ByteBuffer;
   public ByteBuffer getOutput() {
     if (super.isEnded() && endBufferSize > 0) {
       // Because audio processors may be drained in the middle of the stream we assume that the
-      // contents of the end buffer need to be output. For gapless transitions, configure will
-      // always be called, which clears the end buffer as needed. When audio is actually ending we
-      // play the padding data which is incorrect. This behavior can be fixed once we have the
-      // timestamps associated with input buffers.
+      // contents of the end buffer need to be output. For gapless transitions, configure will
+      // always be called, so the end buffer is cleared in onQueueEndOfStream.
       replaceOutputBuffer(endBufferSize).put(endBuffer, 0, endBufferSize).flip();
       endBufferSize = 0;
     }
@@ -152,9 +141,24 @@ import java.nio.ByteBuffer;
     return super.isEnded() && endBufferSize == 0;
   }
 
+  @Override
+  protected void onQueueEndOfStream() {
+    if (reconfigurationPending) {
+      // Trim audio in the end buffer.
+      if (endBufferSize > 0) {
+        trimmedFrameCount += endBufferSize / inputAudioFormat.bytesPerFrame;
+      }
+      endBufferSize = 0;
+    }
+  }
+
   @Override
   protected void onFlush() {
-    if (receivedInputSinceConfigure) {
+    if (reconfigurationPending) {
+      reconfigurationPending = false;
+      endBuffer = new byte[trimEndFrames * inputAudioFormat.bytesPerFrame];
+      pendingTrimStartBytes = trimStartFrames * inputAudioFormat.bytesPerFrame;
+    } else {
       // Audio processors are flushed after initial configuration, so we leave the pending trim
       // start byte count unmodified if the processor was just configured. Otherwise we (possibly
       // incorrectly) assume that this is a seek to a non-zero position. We should instead check the
Loading…
x
Reference in New Issue
Block a user