AudioUnit latencies
- Subject: AudioUnit latencies
- From: Roger Butler <email@hidden>
- Date: Wed, 29 Aug 2001 15:37:09 +1000
Here's a question on minimizing latency in audio units. Suppose I have an
algorithm that always generates a fixed number of output samples and takes a
while to generate each block of N such samples. The destination AU requests
a slice of M samples from my RenderSlice() callback. M is usually less than
N but can be anything. I can see two ways of delivering slices to the
destination AU. Both drop the assumption that PullInput() must always
request M samples from the source AU; instead it requests N samples at a
time. Both maintain a buffer within the AU.
1. Only generate new samples when needed. If the AU buffer contains at
least M processed samples then deliver these to the destination AU.
Otherwise call PullInput(N) and Algorithm(N) (repeatedly if necessary) to
generate new batches of processed samples until we have at least M samples'
worth, then deliver M of them to the destination AU. There will normally be
some left over.
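As a minimal standalone sketch of method 1 (the names PullInput(), Algorithm(), and RenderSlice() come from the description above, but the deque buffer, block size, and silence/identity stubs are my own stand-ins so the buffering logic can run on its own):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

constexpr std::size_t N = 256;   // fixed block size the algorithm produces

std::deque<float> gBuffer;       // processed samples not yet delivered

// Stubs standing in for the real AU calls.
std::vector<float> PullInput(std::size_t n) {
    return std::vector<float>(n, 0.0f);   // pretend input: silence
}

std::vector<float> Algorithm(const std::vector<float>& in) {
    return in;                            // pretend processing: identity
}

// Method 1: run the algorithm only when the buffer can't cover the request.
void RenderSlice(float* out, std::size_t M) {
    while (gBuffer.size() < M) {          // not enough left over?
        std::vector<float> block = Algorithm(PullInput(N));
        gBuffer.insert(gBuffer.end(), block.begin(), block.end());
    }
    for (std::size_t i = 0; i < M; ++i) { // deliver M, keep the remainder
        out[i] = gBuffer.front();
        gBuffer.pop_front();
    }
}
```

Note that the algorithm cost is paid inside the render call that happens to exhaust the buffer, so the per-call compute load is uneven.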
2. Always try to keep the buffer full. Deliver M samples to the
destination AU from the buffer. Then call PullInput(N) and Algorithm(N)
until we have at least M processed samples back in the buffer. This assumes
M is the same size each time, but we could probably build in handling for
when it isn't.
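And a matching sketch of method 2, under the same assumptions and stand-in stubs as above (the "prime the buffer" loop at the top is my addition to cover the very first call, or a call where M grows past what the refill left behind):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

constexpr std::size_t N = 256;   // fixed block size the algorithm produces

std::deque<float> gBuffer;       // processed samples not yet delivered

// Stubs standing in for the real AU calls.
std::vector<float> PullInput(std::size_t n) {
    return std::vector<float>(n, 0.0f);   // pretend input: silence
}

std::vector<float> Algorithm(const std::vector<float>& in) {
    return in;                            // pretend processing: identity
}

// Method 2: deliver from the buffer, then refill so the NEXT request
// of (the same) M finds its samples already waiting.
void RenderSlice(float* out, std::size_t M) {
    while (gBuffer.size() < M) {          // prime: first call, or M grew
        std::vector<float> block = Algorithm(PullInput(N));
        gBuffer.insert(gBuffer.end(), block.begin(), block.end());
    }
    for (std::size_t i = 0; i < M; ++i) { // deliver M
        out[i] = gBuffer.front();
        gBuffer.pop_front();
    }
    while (gBuffer.size() < M) {          // refill after delivering
        std::vector<float> block = Algorithm(PullInput(N));
        gBuffer.insert(gBuffer.end(), block.begin(), block.end());
    }
}
```

The difference from method 1 is purely where the compute lands: here the algorithm runs after delivery, pulling input ahead of when it is strictly needed.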
Which method gives the least latency? Or do they both give the same latency
but shift the buffering around?
Roger Butler,
Lake Technology.