Re: How to set buffer size for Audio Output Unit?
- Subject: Re: How to set buffer size for Audio Output Unit?
- From: Christof Faller <email@hidden>
- Date: Fri, 31 Aug 2001 08:16:50 -0400
Chris,
> In this case, perhaps your solution of rendering in a separate thread
> isn't such a bad idea after all. The problem then becomes balancing
> the priority of this other thread with the amount of output buffering
> it needs to do in order to avoid underflow...
My separate NSThread has quite low priority, so I need quite large buffers
(is there a way to increase the priority of an NSThread?). For now I can
use it for playback, but not for low-latency real-time communication.
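
For reference, NSThread does have a +setThreadPriority: class method
(called from within the thread itself), but for audio work the more
effective route is usually to promote the underlying Mach thread into the
time-constraint (real-time) scheduling band, which is what CoreAudio's own
I/O threads use. A minimal sketch follows; the period/computation/
constraint values are illustrative placeholders that would need to be
derived from the actual buffer duration:

/*
 * Sketch: move the calling thread into the Mach time-constraint
 * (real-time) scheduling band.  The numbers below are placeholders,
 * not tuned values.
 */
#include <mach/mach.h>
#include <mach/thread_policy.h>

static kern_return_t make_thread_time_constraint(void)
{
    struct thread_time_constraint_policy policy;

    /* Values are in Mach absolute-time units; convert from
       nanoseconds with mach_timebase_info() in real code. */
    policy.period      = 2902 * 100;  /* nominal cycle, e.g. one buffer */
    policy.computation = 750  * 100;  /* CPU time needed each cycle */
    policy.constraint  = 2902 * 100;  /* deadline for that work */
    policy.preemptible = 1;

    return thread_policy_set(mach_thread_self(),
                             THREAD_TIME_CONSTRAINT_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_TIME_CONSTRAINT_POLICY_COUNT);
}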
> I assume you're doing an FFT or some other kind of frequency domain
> analysis which requires you to work on large chunks at a time?
> Achieving low-latency with these types of algorithms is not always
> possible - such is life.
Yes, the algorithm is FFT-based. It has several applications, such as
low-bitrate stereo audio coding, but also low-latency conferencing. For
the latter I would really like the lowest possible latency. On a computer
system this can only be achieved if the AudioOutputUnit (or audio device)
buffer size can be exactly matched to the algorithm's frame size. As far
as I understand the Apple CoreAudio documentation, this is possible with
audio devices directly. But what has made me use AudioOutputUnits is their
re-sampling capability.
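
For comparison, here is a sketch of matching the device-level buffer to an
algorithm frame size through the HAL property API, assuming
kAudioDevicePropertyBufferFrameSize is available in the installed headers
(earlier releases expose the equivalent kAudioDevicePropertyBufferSize in
bytes). The requested frame count stands in for the algorithm's FFT frame
size, and the device may round the value, so it is read back after setting:

/*
 * Sketch: ask the default output device to use a given number of
 * frames per I/O cycle, then verify what it actually granted.
 */
#include <CoreAudio/AudioHardware.h>

static OSStatus set_device_frame_size(UInt32 frames)
{
    AudioDeviceID device;
    UInt32 size = sizeof(device);
    OSStatus err;

    /* Find the default output device. */
    err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                   &size, &device);
    if (err != noErr) return err;

    /* Request our frame size per I/O cycle (channel 0, output side). */
    err = AudioDeviceSetProperty(device, NULL, 0, 0 /* isInput = false */,
                                 kAudioDevicePropertyBufferFrameSize,
                                 sizeof(frames), &frames);
    if (err != noErr) return err;

    /* Read back what the device actually granted. */
    size = sizeof(frames);
    return AudioDeviceGetProperty(device, 0, 0 /* isInput = false */,
                                  kAudioDevicePropertyBufferFrameSize,
                                  &size, &frames);
}

At 44.1 kHz, a 1024-frame buffer corresponds to roughly 23 ms per cycle,
which is why matching the buffer exactly to the frame size matters so much
for conferencing latency.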
Would it be a big deal for Apple to provide such a buffer-size setting
capability with AudioOutputUnits in a future version of CoreAudio? I
believe that would greatly benefit low-latency real-time applications on
Mac OS X.
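
In the meantime, a possible workaround is to drive the device directly
(where the buffer size can be set as above) and recover the re-sampling
with an AudioConverter from AudioToolbox. A sketch of the converter setup,
assuming Float32 stereo and illustrative 32 kHz to 44.1 kHz rates; output
would then be pulled through AudioConverterFillBuffer(), which calls back
for source data as needed:

/*
 * Sketch: set up an AudioConverter to resample algorithm output to
 * the device rate, replacing the output unit's re-sampling.
 */
#include <AudioToolbox/AudioConverter.h>

static OSStatus make_resampler(AudioConverterRef *outConverter)
{
    AudioStreamBasicDescription src = { 0 }, dst;

    src.mSampleRate       = 32000.0;          /* algorithm rate */
    src.mFormatID         = kAudioFormatLinearPCM;
    src.mFormatFlags      = kLinearPCMFormatFlagIsFloat
                          | kLinearPCMFormatFlagIsPacked;
    src.mBytesPerPacket   = 8;                /* 2 ch * 4 bytes */
    src.mFramesPerPacket  = 1;
    src.mBytesPerFrame    = 8;
    src.mChannelsPerFrame = 2;
    src.mBitsPerChannel   = 32;

    dst = src;
    dst.mSampleRate = 44100.0;                /* device rate */

    return AudioConverterNew(&src, &dst, outConverter);
}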
Thanks for your remarks,
Chris