A/V Sync with low-latency audio effects
- Subject: A/V Sync with low-latency audio effects
- From: Fred Melbow via Coreaudio-api <email@hidden>
- Date: Tue, 8 Oct 2019 21:10:45 +0200
Hi everyone,
I’m currently working on an iOS app that has video & audio playback and
applies a real-time effect to the audio. I’ve started by using
AVSampleBufferDisplayLayer for video and AVSampleBufferAudioRenderer for
audio, adding both to an AVSampleBufferRenderSynchronizer instance to
synchronize the two. I apply my effect to my PCM buffers just before I hand
them over to the AVSampleBufferAudioRenderer. The effect is a single audio
unit; for simplicity it can be thought of as an EQ whose cutoff is directly
controlled by user input via a slider or USB controller (that input path
itself has low latency, i.e. <20 ms).
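Roughly, the setup looks like this (a trimmed sketch; nextSampleBuffer()
and applyEffect(to:) stand in for my demux/decode and DSP code):

    import AVFoundation

    let serialQueue = DispatchQueue(label: "audio.enqueue")
    let videoLayer = AVSampleBufferDisplayLayer()
    let audioRenderer = AVSampleBufferAudioRenderer()
    let synchronizer = AVSampleBufferRenderSynchronizer()

    // Stand-ins for my demuxer/decoder and my audio-unit effect.
    func nextSampleBuffer() -> CMSampleBuffer? { /* decode next buffer */ nil }
    func applyEffect(to buffer: CMSampleBuffer) -> CMSampleBuffer { /* EQ */ buffer }

    synchronizer.addRenderer(videoLayer)
    synchronizer.addRenderer(audioRenderer)

    audioRenderer.requestMediaDataWhenReady(on: serialQueue) {
        while audioRenderer.isReadyForMoreMediaData {
            guard let buffer = nextSampleBuffer() else { return }
            // The effect gets baked into the PCM here, up to ~1 s
            // before the samples are actually heard.
            audioRenderer.enqueue(applyEffect(to: buffer))
        }
    }
    synchronizer.setRate(1.0, time: .zero)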
This is where the trouble starts, however: the synchronizer tries to buffer
at least one second of samples ahead of the playhead, so applying my effect
before feeding the buffers to the AVSampleBufferAudioRenderer gives a very
noticeable delay before the effect is heard. I’ve experimented with slowing
down the polling inside requestMediaDataWhenReady{} by checking against
synchronizer.currentTime(); this reduced the delay somewhat, but I only got
down to about 300 ms before audio dropouts began. My goal is to get this
under 100 ms, ideally around 50 ms.
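Concretely, the throttling I tried looks roughly like this (continuing the
sketch above; targetLead and the 50 ms re-poll interval are values I tuned
by hand):

    // Only enqueue buffers whose presentation time is within targetLead
    // of the playhead; otherwise park the buffer and re-poll shortly.
    let targetLead = CMTime(value: 300, timescale: 1000) // my floor before dropouts
    var pending: CMSampleBuffer?

    func fillRenderer() {
        while audioRenderer.isReadyForMoreMediaData {
            guard let buffer = pending ?? nextSampleBuffer() else { return }
            pending = nil
            let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
            if pts - synchronizer.currentTime() > targetLead {
                // Far enough ahead of the playhead: hold this buffer so the
                // effect is applied closer to when the audio is heard.
                pending = buffer
                serialQueue.asyncAfter(deadline: .now() + 0.05) { fillRenderer() }
                return
            }
            audioRenderer.enqueue(applyEffect(to: buffer))
        }
    }

    audioRenderer.requestMediaDataWhenReady(on: serialQueue) { fillRenderer() }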
So it got me thinking: is there an elegant way of adding a 'post-processor'
to my app’s existing audio when I’m using a high-level API like AVPlayer or
AVSampleBufferRenderSynchronizer? Essentially I’m looking for a hook I can
tap into that says “here are the samples produced by AVPlayer (or a similar
high-level API); they are about to go to the sound card, and this is your
last chance to modify them.” I kind of doubt this exists, but I’m open to
ideas!
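To make the shape of what I’m after concrete (this is entirely made up;
installOutputTap and runEQ do not exist, they just illustrate the hook I’m
wishing for):

    // HYPOTHETICAL -- no such API exists, as far as I know.
    audioRenderer.installOutputTap { (pcm: AVAudioPCMBuffer, when: AVAudioTime) in
        // Called just before these samples reach the hardware; running the
        // EQ here would make slider changes audible almost immediately.
        runEQ(on: pcm)
    }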
The obvious solution is to stop using the high-level APIs and do all the
A/V synchronization myself, but that means accounting for lots of weirdness
(Bluetooth, interruptions, seeking, etc.) that the render synchronizer
already handles nicely. I thought of using Inter-App Audio to install the
effect as an extension after my app, but since it is being deprecated I
didn’t want to start down that road. Another idea was to use an
AVAudioEngine and add my effect as an AVAudioNode after a node containing
the synced AVSampleBufferAudioRenderer, but I have no way of setting up
such a graph (an AVSampleBufferAudioRenderer cannot be inserted into an
audio-engine graph); a sketch of the graph I have in mind follows.
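For comparison, this is the kind of graph I mean, with a stock
AVAudioPlayerNode standing in where I’d want the synced renderer to sit:

    import AVFoundation

    let engine = AVAudioEngine()
    // Stand-in: I'd want the synced AVSampleBufferAudioRenderer here,
    // but it is not an AVAudioNode and cannot be attached.
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1) // my effect, driven by the slider

    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: nil)
    engine.connect(eq, to: engine.mainMixerNode, format: nil)
    try engine.start()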
Any suggestions would be much appreciated, especially if they save me from
having to re-implement AVSampleBufferRenderSynchronizer myself!
Many thanks,
Fred