Re: How to set buffer size for Audio Output Unit?
- Subject: Re: How to set buffer size for Audio Output Unit?
- From: Chris Rogers <email@hidden>
- Date: Thu, 30 Aug 2001 12:38:32 -0800
Chris,
I think if you follow Roger's suggestion, you won't need to resort to using a
separate thread to render (which should definitely be avoided if possible).
The problem is that I have an existing, quite complex decoding algorithm
with a frame size of 1024 samples. It would be very inconvenient to split
this algorithm up into smaller parts. I actually view the decoder as a black
box, capable only of decoding N=1024 samples (or a different value in future
versions) at once. During the time it takes 1024 samples to play back, I can
easily generate the next 1024. But the duration of the short buffer supplied
to me in the AudioUnit render callback is not enough: I spend too much time
in the callback function and the audio breaks up.
It would therefore be very desirable for me to increase the AudioUnit
buffer size, so that I can spend enough time in the render callback to
safely finish decoding an entire frame.
Under these circumstances, do you still see Roger's proposed solution
as workable for me?
Thanks for your advice,
Chris
P.S. I did not send this to the list, since I did not want to flood it too much
with my questions :-)
Christof,
I'm also sending this to coreaudio-api because I think this brings up an
interesting issue.
I see your point. You could implement the buffering solution, but then you
end up doing a huge calculation every few calls to AudioUnitRenderSlice(),
and no work the other times, since you're just dishing out already-computed
samples from the last 1024-sample calculation. This creates a huge imbalance
in the CPU load, instead of distributing it evenly over every call to
AudioUnitRenderSlice().
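
To make that concrete, here's a rough sketch of the FIFO approach. The
names are made up and the callback signature is simplified (it is not the
real AudioUnitRenderSlice() signature); DecodeFrame() stands in for your
black-box decoder:

    /* Sketch of the FIFO approach: decode 1024 samples at a time,
       hand out whatever slice size the callback asks for. */

    #define FRAME_SIZE 1024

    static float gFifo[FRAME_SIZE];   /* one decoded frame       */
    static int   gAvail = 0;          /* samples left in gFifo    */
    static int   gRead  = 0;          /* read position in gFifo   */

    extern void DecodeFrame(float *out);  /* your 1024-sample decoder */

    void RenderSlice(float *out, int numSamples)
    {
        int i;
        for (i = 0; i < numSamples; i++) {
            if (gAvail == 0) {        /* FIFO empty: big burst of work */
                DecodeFrame(gFifo);
                gAvail = FRAME_SIZE;
                gRead  = 0;
            }
            out[i] = gFifo[gRead++];
            gAvail--;
        }
    }

You can see the problem right in the code: the call that happens to hit
the empty FIFO pays for the entire 1024-sample decode.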
In this case, perhaps your solution of rendering in a separate thread isn't
such a bad idea after all. The problem then becomes balancing the priority of
this other thread against the amount of output buffering it needs to do in
order to avoid underflow...
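
A sketch of that threaded arrangement, again with made-up names, and with
the synchronization details glossed over (a real version would want
condition variables or the like rather than the crude sleep; the decode
thread would be started with pthread_create()):

    /* Single-producer / single-consumer ring buffer: a feeder thread
       decodes ahead, the render callback only copies. */

    #include <pthread.h>
    #include <unistd.h>

    #define FRAME_SIZE 1024
    #define RING_SIZE  (FRAME_SIZE * 8)   /* 8 frames of headroom */

    static float        gRing[RING_SIZE];
    static volatile int gWritePos = 0;    /* owned by decode thread   */
    static volatile int gReadPos  = 0;    /* owned by render callback */

    extern void DecodeFrame(float *out);  /* your 1024-sample decoder */

    static int SamplesAvailable(void)
    {
        return (gWritePos - gReadPos + RING_SIZE) % RING_SIZE;
    }

    /* Feeder thread: keep the ring filled ahead of the callback. */
    void *DecodeThread(void *arg)
    {
        float frame[FRAME_SIZE];
        int i;
        for (;;) {
            /* wait for room for a whole frame (one slot kept free) */
            while (RING_SIZE - 1 - SamplesAvailable() < FRAME_SIZE)
                usleep(1000);
            DecodeFrame(frame);
            for (i = 0; i < FRAME_SIZE; i++)
                gRing[(gWritePos + i) % RING_SIZE] = frame[i];
            gWritePos = (gWritePos + FRAME_SIZE) % RING_SIZE;
        }
        return NULL;
    }

    /* Render callback: only copies, never decodes. */
    void RenderSlice(float *out, int numSamples)
    {
        int i;
        for (i = 0; i < numSamples; i++) {
            if (gReadPos == gWritePos) {
                out[i] = 0.0f;            /* underflow: emit silence */
            } else {
                out[i] = gRing[gReadPos];
                gReadPos = (gReadPos + 1) % RING_SIZE;
            }
        }
    }

The size of the ring is exactly the trade-off I mentioned: more headroom
means fewer underflows but more latency.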
I assume you're doing an FFT or some other kind of frequency-domain
analysis that requires you to work on large chunks at a time? Achieving
low latency with these kinds of algorithms is not always possible - such is
life.
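(To put a number on it: assuming a 44.1 kHz sample rate, a 1024-sample
block is about 23 ms, and that's a floor on your latency before any
buffering or processing time is added.)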
Chris