Re: kAudioDevicePropertyBufferFrameSize != inNumberFrames
- Subject: Re: kAudioDevicePropertyBufferFrameSize != inNumberFrames
- From: Brian Willoughby <email@hidden>
- Date: Fri, 02 Sep 2011 18:41:05 -0700
On Sep 2, 2011, at 18:16, Aristotel Digenis wrote:
> I have finally figured out how to alter the frame count of the
> Default Output Audio Unit, using the kAudioDevicePropertyBufferFrameSize
> property. It was normally 512, but now I am changing it to 256. The
> property-setting function returns no error, and if I again retrieve
> the kAudioDevicePropertyBufferFrameSize property from the audio
> unit, it continues to report 256. However, the inNumberFrames
> argument in the render callback is 236.
>
> Does anybody have any ideas as to why that is?
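For reference, I assume the calls you describe look roughly like this
(an untested sketch; I am assuming AUHAL forwards
kAudioDevicePropertyBufferFrameSize to the underlying device, which is
consistent with the set call returning no error):

    #include <AudioUnit/AudioUnit.h>
    #include <CoreAudio/CoreAudio.h>

    // Ask the output unit's device for a 256-frame buffer, then read
    // the property back to see what the device accepted.
    static OSStatus SetBufferFrames(AudioUnit outputUnit, UInt32 frames)
    {
        OSStatus err = AudioUnitSetProperty(outputUnit,
                           kAudioDevicePropertyBufferFrameSize,
                           kAudioUnitScope_Global, 0,
                           &frames, sizeof(frames));
        if (err != noErr) return err;

        UInt32 actual = 0;
        UInt32 size = sizeof(actual);
        err = AudioUnitGetProperty(outputUnit,
                  kAudioDevicePropertyBufferFrameSize,
                  kAudioUnitScope_Global, 0,
                  &actual, &size);
        // Even when 'actual' reads back as 256, the callback's
        // inNumberFrames is not guaranteed to match.
        return err;
    }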
My understanding is that you cannot completely control the frame
count that arrives in your render callback. CoreAudio is not so
simple that you can force every part of the system to fit one set of
limitations.
For one thing, if there is a parameter change and your AudioUnit is
set up for sample-accurate parameter rendering, then your buffer will
be split into smaller slices so that the audio can be rendered with
the correct parameter value in each slice. The size of these slices
is essentially unpredictable, because the parameter change can land
at any point within the buffer.
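To make the splitting concrete, here is how a host schedules a
sample-accurate parameter change with AudioUnitScheduleParameters (a
sketch; the unit and the parameter ID are hypothetical):

    // Ramp a parameter starting 100 frames into the current buffer.
    // A sample-accurate host renders frames 0..99 with the old value,
    // then splits the remainder of the slice around the ramp.
    AudioUnitParameterEvent ev = { 0 };
    ev.scope     = kAudioUnitScope_Global;
    ev.element   = 0;
    ev.parameter = kMyFilterCutoffParam;          // hypothetical parameter ID
    ev.eventType = kParameterEvent_Ramped;
    ev.eventValues.ramp.startBufferOffset = 100;  // mid-buffer offset
    ev.eventValues.ramp.durationInFrames  = 64;
    ev.eventValues.ramp.startValue        = 0.0f;
    ev.eventValues.ramp.endValue          = 1.0f;
    OSStatus err = AudioUnitScheduleParameters(someUnit, &ev, 1);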
Another issue is that the frame size probably has to match for all
AudioUnits within a graph, so if one AU prefers 256 and another
prefers 512, you cannot give both their preference.
CoreAudio allows you to determine the maximum frame size so that you
can pre-allocate enough memory to handle the largest possible frame,
but there is no guarantee that the size will not be smaller than the
maximum. The buffer size is generally controlled by the output
device, and the graph is set up in a pull model where each node
produces the requested number of samples after pulling the same
number of samples from its inputs. There are exceptions, but most
AudioUnits pull the same number of samples at their input as they are
asked to produce at their output, so you cannot force one AU to use a
specific buffer size unless the output device and the other
AudioUnits all use exactly the same size.
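In code, the pull model means your render callback never chooses
inNumberFrames; it is handed down by the caller, and a typical node
pulls exactly the same count from its source. You pre-allocate for
the worst case with kAudioUnitProperty_MaximumFramesPerSlice. A
minimal sketch (untested; the upstream unit is hypothetical):

    #include <AudioUnit/AudioUnit.h>

    // Query the largest slice the unit may be asked to render, so
    // buffers can be pre-allocated.  Individual render calls may
    // still be smaller than this.
    static UInt32 QueryMaxFrames(AudioUnit unit)
    {
        UInt32 maxFrames = 0;
        UInt32 propSize = sizeof(maxFrames);
        AudioUnitGetProperty(unit, kAudioUnitProperty_MaximumFramesPerSlice,
                             kAudioUnitScope_Global, 0,
                             &maxFrames, &propSize);
        return maxFrames;
    }

    // A render callback is told how many frames to produce, and in
    // turn pulls the same count from its upstream source.
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        AudioUnit upstream = (AudioUnit)inRefCon;
        return AudioUnitRender(upstream, ioActionFlags, inTimeStamp,
                               0 /* output bus */, inNumberFrames, ioData);
    }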
Finally, if the reason you want your frame size to be exactly 256 is
that you're doing some form of processing in 256-sample chunks, then
you're going about this the wrong way. Any AudioUnit with a fixed
processing chunk size must buffer the incoming data until the
required number of samples has been collected, and must also
advertise that many samples of latency (256 samples in your case).
Then it doesn't matter if you get 236 samples on one call: you keep
buffering input until 256 have accumulated, while returning 236
samples drawn from previously processed blocks, and that offset is
exactly the latency.
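The buffering I am describing is the classic lock-step FIFO: emit one
previously processed sample for each new input sample, and process a
block whenever the input FIFO fills. A minimal mono sketch (the DSP
is a stand-in, and real code would report the latency through
kAudioUnitProperty_Latency as 256 / sampleRate seconds):

    #include <string.h>

    #define CHUNK 256

    typedef struct {
        float        inFifo[CHUNK];
        float        outFifo[CHUNK];  // zero-initialize: the first CHUNK
                                      // output samples are silence, which
                                      // is the advertised latency
        unsigned int pos;             // shared index, 0..CHUNK-1
    } ChunkState;

    static void ProcessChunk(const float *in, float *out)
    {
        memcpy(out, in, CHUNK * sizeof(float));  // stand-in for real DSP
    }

    // Called with whatever frame count the host provides (236, 512, ...).
    static void RenderVariable(ChunkState *s, const float *in, float *out,
                               unsigned int inNumberFrames)
    {
        for (unsigned int i = 0; i < inNumberFrames; i++) {
            out[i] = s->outFifo[s->pos];  // emit previously processed audio
            s->inFifo[s->pos] = in[i];    // collect new input
            if (++s->pos == CHUNK) {      // a full block is ready
                ProcessChunk(s->inFifo, s->outFifo);
                s->pos = 0;
            }
        }
    }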
If the above does not answer your questions, then perhaps you could
explain a little more about why you need the callback frame to be
precisely 256 samples every time.
Brian Willoughby
Sound Consulting