Re: kAudioUnitProperty_MaximumFramesPerSlice
- Subject: Re: kAudioUnitProperty_MaximumFramesPerSlice
- From: Jim Wintermyre <email@hidden>
- Date: Tue, 9 Sep 2003 11:17:02 -0700
> This is what it says in the AU API docs for
> kAudioUnitProperty_MaximumFramesPerSlice:
>
> "This property should be changed if an Audio Unit is going to be asked to
> render a particularly large buffer. This then allows the unit to
> pre-allocate enough memory for any computations and output that it may
> have to have buffers for (including the buffer that it can pass to a
> RenderCallback). This avoids allocation in the render process, or a
> failure in the render process, because the unit is asked to produce more
> data than it is able to at any given time."
>
> Don't you think that's a little ambiguous and weakly defined? What is "a
> particularly large buffer"? Should the host always set this property to
> something before doing any rendering?
I agree completely. I've been looking into this myself recently.
I'm working on AU versions of the Universal Audio UAD-1 plugins.
These plugins run on a separate DSP card, and as such have some
fundamental requirements that are more stringent than you'd have with
a typical host-based plugin.
For one thing, we absolutely need to know the maximum render size
that we'll see *before* any rendering calls are made, as we need to
set up resources on the card before then. This maximum render size
determines the overall latency of the UAD-1 plugins, which is another
thing that is different from host-based plugins.
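
By way of illustration (this is not our actual code, just a minimal
sketch with a made-up MyPluginState type), the render-time consequence
of not knowing the real maximum up front looks roughly like this: if
the host hands us a slice bigger than what we pre-allocated card
resources for, all we can do is refuse to render.

#include <AudioUnit/AudioUnit.h>

/* Hypothetical plugin-side state: the slice size the DSP resources were
   set up for, captured from MaximumFramesPerSlice at initialization time. */
typedef struct {
    UInt32 allocatedMaxFrames;
} MyPluginState;

/* Guard at the top of a hypothetical render routine: if the host asks for
   more frames than we pre-allocated for, fail rather than render garbage. */
static OSStatus CheckSliceSize(const MyPluginState *state, UInt32 inNumberFrames)
{
    if (inNumberFrames > state->allocatedMaxFrames)
        return kAudioUnitErr_TooManyFramesToProcess;
    return noErr;
}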
I would like to suggest that hosts should typically set this value to
the hardware buffer size, at least for realtime playback. This is
what DP does, and in the VST world pretty much all apps do this
(there, VST blockSize corresponds to AU MaximumFramesPerSlice, and VST
sampleFrames to the AU render size; typically blockSize == the hardware
buffer size, and sampleFrames <= blockSize). This is important for us
because, as mentioned, it affects our latency, and we'd like users to
be in control of it.
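
Just to make the suggestion concrete, here's a rough sketch of what I
mean on the host side (not taken from any shipping host; error handling
is trimmed, and "unit" is assumed to be an AU the host has already
opened but not yet initialized): grab the output device's buffer frame
size and hand it to the AU as MaximumFramesPerSlice before initializing
it.

#include <CoreAudio/CoreAudio.h>
#include <AudioUnit/AudioUnit.h>

/* Sketch: set an AU's MaximumFramesPerSlice to the current HW buffer size.
   "unit" is assumed to be opened but not yet initialized. */
static OSStatus SetMaxFramesToHWBufferSize(AudioUnit unit)
{
    AudioDeviceID device = kAudioDeviceUnknown;
    UInt32 size = sizeof(device);
    OSStatus err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                            &size, &device);
    if (err != noErr) return err;

    /* The output device's I/O buffer size, in sample frames. */
    UInt32 hwBufferFrames = 0;
    size = sizeof(hwBufferFrames);
    err = AudioDeviceGetProperty(device, 0, 0 /* isInput = false */,
                                 kAudioDevicePropertyBufferFrameSize,
                                 &size, &hwBufferFrames);
    if (err != noErr) return err;

    /* Tell the AU the largest slice it will ever be asked to render,
       *before* AudioUnitInitialize, so it can pre-allocate accordingly. */
    err = AudioUnitSetProperty(unit, kAudioUnitProperty_MaximumFramesPerSlice,
                               kAudioUnitScope_Global, 0,
                               &hwBufferFrames, sizeof(hwBufferFrames));
    if (err != noErr) return err;

    return AudioUnitInitialize(unit);
}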
It seems to me that most apps which don't have MaximumFramesPerSlice
tracking the HW buffer size don't really have a good reason for that,
other than perhaps confusion about how this property should be used.
In fact, in some cases (Spark/Peak) the VST implementation has
blockSize tracking the HW buffer size, but the AU implementation does
NOT have MaximumFramesPerSlice tracking the HW buffer size. It would
seem that the two implementations should be similar in this regard.
Certainly, if there is some case where the host needs to change this
value to render some large buffer (say, offline processing), that's
fine too.
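
A sketch of what that might look like (again hypothetical, not from any
real host; "offlineFrames" is just whatever larger slice size the host
is about to use). One caveat: many AUs only accept a new
MaximumFramesPerSlice while uninitialized, so the safe sequence is
uninitialize, set, re-initialize.

#include <AudioUnit/AudioUnit.h>

/* Sketch: a host about to render larger slices (e.g. an offline bounce)
   than it set up for realtime playback. Uninitialize first, since many
   AUs reject this property while initialized, then re-initialize. */
static OSStatus PrepareForLargerSlices(AudioUnit unit, UInt32 offlineFrames)
{
    OSStatus err = AudioUnitUninitialize(unit);
    if (err != noErr) return err;

    err = AudioUnitSetProperty(unit, kAudioUnitProperty_MaximumFramesPerSlice,
                               kAudioUnitScope_Global, 0,
                               &offlineFrames, sizeof(offlineFrames));
    if (err != noErr) return err;

    return AudioUnitInitialize(unit);
}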
> I would tend to think that setting this property would be required
> before doing any processing, and that if the host then wants to render
> slices larger than the size it set, it must of course set the property
> again. That's what I would expect of this property, but the docs leave
> things really wide open to interpretation. Could this please be
> clarified and more specifically defined? I've already encountered 2
> different AU hosts that never set this property at all, causing all
> AUs to stop working when the audio hardware buffer size in those apps
> is configured above 1156 frames, so I think that's a good sign that
> this needs to be defined clearly and specifically, and emphasized in
> the docs.
What hosts are you referring to? What is their behavior? So far in
my tests, I've seen the weirdest behavior in Logic and Peak, followed
by Spark. DP seems to do things exactly the way we'd like.
Thanks,
Jim
Universal Audio Mac Dude