Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- Subject: Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- From: Paul Davis <email@hidden>
- Date: Wed, 23 Nov 2011 08:39:53 -0500
On Wed, Nov 23, 2011 at 5:47 AM, Heinrich Fink <email@hidden> wrote:
> This is a good point. In my understanding, the whole idea of executing a preroll phase first, instead of directly using a realtime context, is to have a safety buffer available that compensates for hiccups in the processing chain. This should avoid dropped frames, which might cause artifacts or, even worse, out-of-sync playback. In other words, the point of having a preroll buffer in the first place is that you DON'T have a system that is capable of operating in real time, unlike AudioUnits, which are mostly fully functional in a real-time context.
this is a bit confused. all block-oriented hardware (read: any audio
interface connected to the CPU via a PCI-like bus, which includes all
USB, Firewire and other devices) effectively has a "preroll" buffer.
the only question is how *much* buffering goes on before you start the
device. CoreAudio adds its own somewhat hidden "safety buffer" to
whatever the application asks for, though not really with the intent
of providing more buffering to the application.
adding more buffering provides protection against scheduling-induced
jitter (i.e. code not running when it should) and against variability
in code execution times caused by specific algorithms and code design.
however, it does not provide any protection or capabilities to a
system that fundamentally cannot run in realtime.
put another way, the requirement to run in realtime is:
T = A*nframes + B
that is, the time taken to process/render nframes of audio is
A*nframes + B, where A and B are constants. if this is not true, or if
A and B are so large that the value of T exceeds the time represented
by the nframes of audio, then you don't have a realtime system and no
amount of buffering can ever solve it.
what is more often the case is something like:
T = A*nframes + B + C
where C is non-constant and not entirely under application control.
where C is non-constant and not entirely under application control.
this now creates jitter/variability in the value of T, and some amount
of buffering can help with this. things that could contribute to C
include the impact of disk i/o, windowed algorithms, OS kernel
scheduling latency/jitter, and hardware issues (e.g. devices locking
the PCI bus).
take-home message: if you have a design that can't run in realtime,
then it can't run in realtime no matter how much preroll or other
buffering you do. if you have a system that can run in realtime, then
the amount of buffering you need depends on a variety of things.