Thanks for the reply, Brian!
Your understanding of CoreAudio makes sense. It does seem unreasonable to expect all the other audio units to adapt to a fixed frame count of my choosing. I can certainly go down the path of allocating for and handling the worst-case buffer size, as you suggest, but I was hoping to avoid that if I can. I am working on an audio engine that runs on other platforms which do guarantee constant buffer lengths (game consoles prefer constant performance over super-flexible but possibly spiky performance). I am hoping to achieve the same consistent latency on OS X without having to change the engine too much to cater for worst-case buffer sizes.
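If I do go the worst-case route, I assume it boils down to something like the following: ask the output unit for the largest slice it may ever request and allocate once, up front, so the render callback never allocates (just a rough sketch; PreallocateWorstCase, gScratch and the -1 error code are my own made-up names).

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* gScratch is a made-up engine scratch buffer; outputUnit is assumed to be
   the already-created default output / AUHAL unit. */
static float  *gScratch   = NULL;
static UInt32  gMaxFrames = 0;

static OSStatus PreallocateWorstCase(AudioUnit outputUnit, UInt32 channelCount)
{
    UInt32 size = sizeof(gMaxFrames);
    /* Upper bound on how many frames one render callback may ever ask for. */
    OSStatus err = AudioUnitGetProperty(outputUnit,
                                        kAudioUnitProperty_MaximumFramesPerSlice,
                                        kAudioUnitScope_Global, 0,
                                        &gMaxFrames, &size);
    if (err != noErr) return err;

    gScratch = (float *)calloc((size_t)gMaxFrames * channelCount, sizeof(float));
    return (gScratch != NULL) ? noErr : -1;   /* arbitrary error code for the sketch */
}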
The only other time I have fed audio to the sound card on OS X is through PortAudio, which does let the user specify the buffer size and sample rate, and it all just magically works :o) This time I was hoping to use CoreAudio directly. My first approach was AudioQueue, but with that I couldn't get a buffer size smaller than 125 ms, which is why I turned to AUHAL. Do you think going directly to the HAL (without the default output Audio Unit) would allow me to achieve fixed-size buffers? The thinking there is that there won't be an audio graph of other Audio Units with differing buffer size requirements, and the hope is that I would be feeding buffers directly to the sound card... perhaps that is wishful thinking?
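For what it's worth, the kind of thing I have in mind when I say "directly to the sound card" is roughly this (only a sketch; the function name and the 256-frame request are mine): fetch the default output device and set its I/O buffer size through the HAL property API rather than through an Audio Unit.

#include <CoreAudio/CoreAudio.h>

static OSStatus SetDefaultOutputBufferFrames(UInt32 desiredFrames,
                                             AudioDeviceID *outDevice)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = sizeof(*outDevice);
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                              0, NULL, &size, outDevice);
    if (err != noErr) return err;

    /* The device treats this as a request; the I/O size it actually uses
       per callback can still differ. */
    addr.mSelector = kAudioDevicePropertyBufferFrameSize;
    return AudioObjectSetPropertyData(*outDevice, &addr, 0, NULL,
                                      sizeof(desiredFrames), &desiredFrames);
}

My understanding is that this is still only a request, so the device could call back with a different frame count anyway.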
If that doesn't work, then I will need to make the engine cope with worst-case buffer sizes. Now the challenge is to find out how to use the HAL directly :o)
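From what I can tell so far, the render side of the direct-to-HAL route would look something like this (again only a sketch, reusing the device ID from above; MyIOProc is a placeholder that just writes silence where the engine would render).

#include <CoreAudio/CoreAudio.h>
#include <string.h>

/* The HAL decides how many frames each callback covers; a real engine would
   render into outOutputData here. */
static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; ++i)
        memset(outOutputData->mBuffers[i].mData, 0,
               outOutputData->mBuffers[i].mDataByteSize);
    return noErr;
}

/* Register the IOProc with the device and start I/O, with no Audio Unit
   or AUGraph involved. */
static OSStatus StartDirectHALOutput(AudioDeviceID device,
                                     AudioDeviceIOProcID *outProcID)
{
    OSStatus err = AudioDeviceCreateIOProcID(device, MyIOProc, NULL, outProcID);
    if (err != noErr) return err;
    return AudioDeviceStart(device, *outProcID);
}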
Thanks,
Aristotel
> CC: email@hidden
> From: email@hidden
> Subject: Re: kAudioDevicePropertyBufferFrameSize != inNumberFrames
> Date: Fri, 2 Sep 2011 18:41:05 -0700
> To: email@hidden
>
>
> On Sep 2, 2011, at 18:16, Aristotel Digenis wrote:
> > I have finally figured how to alter the frame count of the Default
> > Output Audio Unit, using the kAudioDevicePropertyBufferFrameSize
> > property. It was normally 512, but now I am changing it to 256. The
> > property setting function returns no error, and if I again retrieve
> > the kAudioDevicePropertyBufferFrameSize property from the audio
> unit, it continues to be set at 256. However, the inNumberFrames
> argument in the render callback is 236.
> >
> > Does anybody have any ideas as to why that is?
>
> My understanding is that you cannot completely control the frame
> count during the render callback. CoreAudio is not so simple that
> you can force everything to fit into one set of limitations.
>
> For one thing, if there is a parameter change and your AudioUnit is
> set up for sample-accurate parameter rendering, then your frame will
> be split into smaller pieces so that the audio can be rendered with
> the correct parameter value in each time frame. The size of these
> smaller buffers is almost entirely unpredictable because the timing
> of the parameter change could cause it to fall at literally any point
> within the buffer.
>
> Another issue is that the frame size probably has to match for all
> AudioUnits within a graph, so if one AU prefers 256 and another
> prefers 512, you cannot have both getting their preference.
> CoreAudio allows you to determine the maximum frame size so that you
> can pre-allocate enough memory to handle the largest possible frame,
> but there is no guarantee that the size will not be smaller than the
> maximum. The buffer size is generally controlled by the output
> device, and the graph is set up in a pull model where each node
> produces the requested number of samples after pulling the same
> number of samples from its inputs. There are exceptions, but most
> AudioUnits pull the same samples on input as they are required to
> produce on output, and thus you cannot force one AU to have a
> specific buffer size if the output device and other AudioUnits do not
> all have the exact same size.
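A small illustrative sketch of that last point, assuming the output device's AudioDeviceID is already in hand: the device itself advertises the range of buffer sizes it will accept.

#include <CoreAudio/CoreAudio.h>

static OSStatus GetBufferFrameSizeRange(AudioDeviceID device,
                                        AudioValueRange *outRange)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSizeRange,
        kAudioDevicePropertyScopeOutput,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = sizeof(*outRange);
    /* mMinimum/mMaximum bound what kAudioDevicePropertyBufferFrameSize
       will accept on this device. */
    return AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, outRange);
}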
>
> Finally, if the reason that you want to set your frame size to
> exactly 256 is because you're doing some form of processing in 256-
> sample chunks, then you're going about this whole thing in the wrong
> way. Any AudioUnit that has a fixed processing chunk size must
> buffer the incoming data until the required number of samples have
> been collected, and at the same time must advertise that its latency
> involves that many samples (256-sample latency in your case). Then,
> it doesn't matter if you get 236 samples on one call because you will
> be buffering samples until 256 have been collected. Meanwhile, your
> code will return 236 samples from previously completed processing
> callbacks, thus the latency.
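A rough sketch of the accumulate-and-process pattern described above, assuming mono float buffers and a made-up ProcessChunk256() standing in for the fixed-size work:

#include <CoreAudio/CoreAudioTypes.h>

#define CHUNK 256

typedef struct {
    float  inFifo[CHUNK];    /* samples collected but not yet processed     */
    float  outFifo[CHUNK];   /* samples processed and waiting to be emitted */
    UInt32 fill;             /* frames currently held in inFifo             */
} ChunkState;                /* zero-initialise before first use            */

extern void ProcessChunk256(const float *in, float *out);  /* hypothetical DSP */

/* Called from the render callback with whatever frame count CoreAudio gives us. */
static void RenderVariableSize(ChunkState *s, const float *in, float *out,
                               UInt32 frames)
{
    for (UInt32 i = 0; i < frames; ++i) {
        s->inFifo[s->fill] = in[i];
        out[i] = s->outFifo[s->fill];          /* output lags by one chunk */
        if (++s->fill == CHUNK) {
            ProcessChunk256(s->inFifo, s->outFifo);
            s->fill = 0;
        }
    }
}

The first CHUNK output frames come out as silence from the zero-initialised outFifo, which is exactly the 256-sample latency Brian mentions advertising.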
>
> If the above does not answer your questions, then perhaps you could
> explain a little more about why you need the callback frame to be
> precisely 256 samples every time.
>
> Brian Willoughby
> Sound Consulting
>