Re: kAudioUnitProperty_MaximumFramesPerSlice
- Subject: Re: kAudioUnitProperty_MaximumFramesPerSlice
- From: Jim Wintermyre <email@hidden>
- Date: Tue, 9 Sep 2003 13:12:03 -0700
(Hmm, this message was too big to go to the list, so I'll resend it
as two separate messages...)
At 1:44 PM -0500 9/9/03, Marc Poirier wrote:
On Tue, 9 Sep 2003, Jim Wintermyre wrote:
I would like to suggest that hosts should typically set this value to
the hardware buffer size, at least for realtime playback. This is
what DP does, and in the VST world, pretty much all apps do this
(there, VST blockSize == AU MaximumFramesPerSlice, and VST
sampleFrames == AU render size; blockSize = hardware buffer size
typically; sampleFrames <= blockSize). This is important for us
because as mentioned it affects our latency, and we'd like the users
to be in control of this.
Currently, it seems to me that most apps that don't have
MaximumFramesPerSlice tracking the HW buffer size don't really have a
good reason for that, other than perhaps confusion about how this
property should be used. In fact, in some cases (Spark/Peak),
the VST implementation has blockSize tracking the HW buffer size, but
the AU implementation does NOT have MaximumFramesPerSlice tracking the
HW buffer size. It would seem that the 2 implementations should be
similar in this regard.
Certainly, if there is some case where the host needs to change this
value to render some large buffer (say, offline processing), that's
fine too.
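The host-side suggestion above might look something like this sketch (my
illustration, not code from any particular host; error handling omitted,
and note that the property has to be set before the AU is initialized):

```c
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical host helper: tell the AU the largest slice it will ever
// be asked to render. For realtime playback, pass the HW buffer size;
// for offline bouncing, a host might pass a larger value instead.
static OSStatus SetMaxFramesPerSlice(AudioUnit au, UInt32 maxFrames)
{
    return AudioUnitSetProperty(au,
                                kAudioUnitProperty_MaximumFramesPerSlice,
                                kAudioUnitScope_Global,
                                0,               // element
                                &maxFrames,
                                sizeof(maxFrames));
}
```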
Yeah, that's what Spark does, I know. In realtime usage, the max size is
set I think to the hardware size, but for offline bouncing, it's set to
8192, if I remember correctly. Also, Logic for example lets you adjust
the slice size for plugins independently of the hardware buffer size
(this usually lets you lower CPU usage), although when it's an Audio
Instrument track that is selected or a live input track, those ones always
use the hardware buffer size. So I guess it can be intentional and valid
to have the max slice size not correspond to hardware buffer size, but
definitely not okay to never set it at all (which I'm seeing in some cases
now), or to provide slices larger than the max size you set!
Believe me, I'm *WAAAY* familiar with all the "interesting" behavior
in Logic that you're referring to. This whole issue you mention of
plugs getting called with different buffer sizes (and in different
execution contexts on OS 9!) depending on whether the track is in
"live mode" or not has been a big problem for the UAD-1 plugs. We
finally got it fixed for VST, and now there are new (different)
issues for AU.
The way Logic used to work for VST was this:

    blockSize = *larger* of processBufferSize and asioBufferSize, where:
        processBufferSize = "Process Buffer Range" setting
        asioBufferSize    = HW buffer size for the particular I/O you're using

    if (theTrackThisPluginIsOnIsInLiveMode)
        sampleFrames = asioBufferSize;
    else
        sampleFrames = blockSize;
The problem for us with this was that since blockSize is always the
larger of processBufferSize and asioBufferSize, that means that
blockSize will never be smaller than 512 samples (which is the
smallest Process Buffer Range setting). And, it is blockSize that
determines our plugin latency, not sampleFrames. So, even if you
have your HW buffer size set to 64 to reduce latency, it doesn't help
for our plugs because the latency is determined by blockSize.
Logic was the *ONLY* VST host where we had this issue.
Jim
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.