RE: How to set buffer size for Audio Output Unit?
- Subject: RE: How to set buffer size for Audio Output Unit?
- From: Roger Butler <email@hidden>
- Date: Tue, 4 Sep 2001 16:55:04 +1000
OK, how does all this buffering stuff work in Core Audio? Here's a bit of a
guess:
The AudioDevice (wrapped by an OutputAudioUnit) starts up and needs some
data to play. So it calls the registered IOProc(). If this is AudioUnit land
then is the IOProc() really the AudioUnitRenderSlice() function in the
source AU? The AU itself also needs data so PullInput() is called to get
data from the source of this AU (through MakeConnection or
SetInputCallback). This continues down the chain to the client app which
delivers some actual PCM data. The output AU and device obviously can't play
anything until all the AUs in the graph have rendered and delivered their
buffer's worth. So there'll be a startup latency.
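My mental model of that pull chain, as a self-contained C sketch. This only simulates the control flow — `Node`, `client_render`, and friends are made-up names standing in for the IOProc / AudioUnitRenderSlice / PullInput chain, not the real Core Audio API:

```c
#include <assert.h>
#include <stddef.h>

#define FRAMES 4

/* A node in the render chain: each unit pulls from its 'source'
   (upstream AU, or NULL when this node is the client itself). */
typedef struct Node Node;
struct Node {
    Node *source;
    void (*render)(Node *self, float *buf);
};

/* Client at the head of the chain: delivers actual PCM data. */
static void client_render(Node *self, float *buf) {
    (void)self;
    for (int i = 0; i < FRAMES; i++) buf[i] = 1.0f;  /* pretend PCM */
}

/* Intermediate AU: pulls its input first, then processes in place
   (here: a gain of 0.5). */
static void gain_render(Node *self, float *buf) {
    self->source->render(self->source, buf);   /* the "PullInput" step */
    for (int i = 0; i < FRAMES; i++) buf[i] *= 0.5f;
}

/* The device's IOProc: asks the last AU for a buffer, which
   recursively pulls all the way down to the client. */
static void ioproc(Node *head, float *buf) {
    head->render(head, buf);
}
```

The first call cannot return until every node in the chain has rendered, which is exactly the startup latency described above.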
Now when does the output AU (and device) call for the next buffer? As soon
as it's received and begun playing the first buffer? If so then the latency
of the whole graph must be less than a buffer's worth of audio. What if you
have 10 AUs connected and they each do some processing and have a fair bit
of latency? Is there an audible gap between the first and second buffer?
Between subsequent buffers?
Can an AU advertise its latency so the destination AU or device knows to
call the AU early?
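If units could advertise latency, the destination would presumably just accumulate it along the chain. A trivial sketch of that idea (hypothetical numbers and names, not a real API):

```c
#include <assert.h>

/* Hypothetical: each AU in a chain reports its processing latency in
   frames; the output side sums them to know how early to start
   pulling from the head of the chain. */
static int total_latency_frames(const int *latency, int n_units) {
    int total = 0;
    for (int i = 0; i < n_units; i++)
        total += latency[i];
    return total;
}
```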
How does an AU know that it's not meeting buffer requirements? Should it
listen for kAudioDeviceProcessorOverload? Should users of AUs have to listen
to AudioDevice properties? Does this property apply only to output devices,
and can an AU query its destination AU's device?
What does kAudioUnitProperty_AUGraphCPULoad do? Does an AU have to calculate
or guess how much CPU it's using or is this done by the graph somehow?
Roger.
> -----Original Message-----
> From: Doug Wyatt [mailto:email@hidden]
> Sent: Saturday, 1 September 2001 7:19 AM
> To: email@hidden
> Subject: Re: How to set buffer size for Audio Output Unit?
>
> Yes, the sample rate converter will pull for however many samples it
> needs in order to fill the hardware buffer, and if the sample rate
> ratio is non-integral, then that number of samples will vary slightly.
> We're trying to minimize copying / re-buffering.
>
> Doug
>
> On Friday, August 31, 2001, at 12:44 , Christof Faller wrote:
>
> > Doug,
> >
> >> One thing you can do now is ask the output unit for the underlying
> >> AudioDevice it is talking to, and change its buffer size via the
> >> lower-level API:
> >
> > Thanks, it works! I increased the AudioOutputUnit buffer size by
> > increasing the device buffer size. Now I have enough time within the
> > AudioUnit callback to do my processing.
> >
> > An interesting (however obvious) observation which I made:
> >
> > For a given output device I can set the buffer size. If SRA/SRD*BS
> > is not an integer, then the AudioOutputUnit will be called with
> > buffer sizes varying +-1 (SRA = sample rate of the audio unit,
> > SRD = sample rate of the device, BS = device buffer size).
> >
> > Thanks!
> > Chris
>
> --
> Doug Wyatt
> work: email@hidden (CoreAudio)
> personal: email@hidden http://www.sonosphere.com
>
> "It's kind of fun to do the impossible."
>     -- Walt Disney
>
> _______________________________________________
> coreaudio-api mailing list
> email@hidden
> http://www.lists.apple.com/mailman/listinfo/coreaudio-api
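Chris's +-1 observation in the quote above is easy to reproduce numerically. A sketch, assuming the converter keeps a fractional input position and pulls whole frames each callback (the rounding scheme here is my guess, not necessarily what Core Audio actually does):

```c
#include <assert.h>

/* For each device buffer of 'bs' frames at device rate 'srd', a sample
   rate converter feeding an AU at rate 'sra' must pull bs*sra/srd input
   frames on average; when that ratio is non-integral, successive pull
   sizes vary by +-1. This computes the first 'n' pull sizes. */
static void pull_sizes(double sra, double srd, int bs, int n, int *out) {
    double pos = 0.0;   /* running input position, in AU-rate frames */
    long prev = 0;
    for (int i = 0; i < n; i++) {
        pos += (double)bs * sra / srd;
        long next = (long)(pos + 0.5);   /* round to nearest frame */
        out[i] = (int)(next - prev);
        prev = next;
    }
}
```

With SRA = 44100, SRD = 48000, and BS = 512, the average pull is 470.4 frames, so the callback sizes alternate between 470 and 471 — exactly the +-1 variation Chris saw.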