Re: kAudioUnitProperty_MaximumFramesPerSlice
- Subject: Re: kAudioUnitProperty_MaximumFramesPerSlice
- From: Marc Poirier <email@hidden>
- Date: Tue, 9 Sep 2003 13:44:32 -0500 (CDT)
On Tue, 9 Sep 2003, Jim Wintermyre wrote:
> I would like to suggest that hosts should typically set this value to
> the hardware buffer size, at least for realtime playback.  This is
> what DP does, and in the VST world, pretty much all apps do this
> (there, VST blockSize == AU MaximumFramesPerSlice, and VST
> sampleFrames == AU render size; blockSize = hardware buffer size
> typically; sampleFrames <= blockSize).  This is important for us
> because as mentioned it affects our latency, and we'd like the users
> to be in control of this.
>
> Currently, it seems to me that most apps which don't currently have
> MaximumFramesPerSlice tracking the HW buffer size don't really have a
> good reason for doing this, other than perhaps confusion about how
> this property should be used.  In fact in some cases (Spark/Peak),
> the VST implementation has blockSize tracking the HW buffer size, but
> the AU implementation does NOT have MaximumFramesPerSlice tracking the
> HW buffer size.  It would seem that the 2 implementations should be
> similar in this regard.
>
> Certainly, if there is some case where the host needs to change this
> value to render some large buffer (say, offline processing), that's
> fine too.
Yeah, that's what Spark does, I know.  In realtime usage, the max size is
set, I think, to the hardware size, but for offline bouncing it's set to
8192, if I remember correctly.  Also, Logic for example lets you adjust
the slice size for plugins independently of the hardware buffer size
(this usually lets you lower CPU usage), although when an Audio
Instrument track or a live input track is selected, those always
use the hardware buffer size.  So I guess it can be intentional and valid
to have the max slice size not correspond to the hardware buffer size, but
it's definitely not okay to never set it at all (which I'm seeing in some
cases now), or to provide slices larger than the max size you set!
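
In case it helps, here's roughly what I'd imagine a host doing when it bumps the max for an offline bounce.  This is only a sketch of the general idea (the uninitialize/re-set/reinitialize sequence and the function name are my own assumptions, not taken from Spark or anyone else's code); 8192 is just the example value I mentioned above:

#include <AudioUnit/AudioUnit.h>

/* Sketch: switch an already-initialized AU over to a larger max slice size
   for offline bouncing, then bring it back up. */
OSStatus PrepareForOfflineBounce(AudioUnit au)
{
    UInt32 offlineMaxFrames = 8192;
    OSStatus err = AudioUnitUninitialize(au);  /* change the max while uninitialized, to be safe */
    if (err != noErr) return err;
    err = AudioUnitSetProperty(au, kAudioUnitProperty_MaximumFramesPerSlice,
                               kAudioUnitScope_Global, 0,
                               &offlineMaxFrames, sizeof(offlineMaxFrames));
    if (err != noErr) return err;
    return AudioUnitInitialize(au);  /* ready to render slices up to 8192 frames */
}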
> >I would tend to think that setting this property would be required before
> >doing any processing, and then, if the host wants to render slices larger
> >than the size that it set, it must of course set the property again.
> >That's what I would think would be the expectations for this property, but
> >the docs leave things really wide open to interpretation.  Could this
> >please be clarified and more specifically defined?  I've already now
> >encountered 2 different AU hosts that don't set this property ever,
> >causing all AUs to not work at all when you have audio hardware buffer
> >sizes configured above 1156 in those apps, so I think that's maybe a good
> >sign that this needs to be defined clearly and specifically and emphasized
> >in the docs.
>
> What hosts are you referring to?  What is their behavior?  So far in
> my tests, I've seen the weirdest behavior in Logic and Peak, followed
> by Spark.  DP seems to do things exactly the way we'd like.
I'm not saying this to "out" any hosts ;) but just because it can be
useful for other plugin developers to know this: I was talking about
Peak 4.0 and Melodyne 2.0.
What weird behavior have you experienced in Logic and Spark? I haven't
found any problems, but I'd like to know if there's anything to watch out
for...
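
For what it's worth, the plugin-side flip side of all this is just making sure your render path refuses slices bigger than whatever max was last set (or your built-in default, if the host never set it at all, which is where those failures above 1156 frames come from).  Something like the following is the kind of sanity check I mean; the helper name is made up and it isn't lifted from any SDK source:

#include <AudioUnit/AudioUnit.h>

/* Hypothetical plugin-side guard: reject any render slice larger than the
   max the host negotiated (or the plugin's default, if the host never set
   the property). */
OSStatus ValidateSliceSize(UInt32 inFramesToProcess, UInt32 maxFramesPerSlice)
{
    if (inFramesToProcess > maxFramesPerSlice)
        return kAudioUnitErr_TooManyFramesToProcess;
    return noErr;
}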
Marc
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.