Re: kAudioUnitProperty_MaximumFramesPerSlice
- Subject: Re: kAudioUnitProperty_MaximumFramesPerSlice
- From: Jim Wintermyre <email@hidden>
- Date: Tue, 9 Sep 2003 12:19:31 -0700
At 1:44 PM -0500 9/9/03, Marc Poirier wrote:
> On Tue, 9 Sep 2003, Jim Wintermyre wrote:
>
>> I would like to suggest that hosts should typically set this value to
>> the hardware buffer size, at least for realtime playback. This is
>> what DP does, and in the VST world, pretty much all apps do this
>> (there, VST blockSize == AU MaximumFramesPerSlice, and VST
>> sampleFrames == AU render size; blockSize = hardware buffer size
>> typically; sampleFrames <= blockSize). This is important for us
>> because, as mentioned, it affects our latency, and we'd like the users
>> to be in control of this.
>>
>> Currently, it seems to me that most apps which don't have
>> MaximumFramesPerSlice tracking the HW buffer size don't really have a
>> good reason for doing this, other than perhaps confusion about how
>> this property should be used. In fact, in some cases (Spark/Peak),
>> the VST implementation has blockSize tracking the HW buffer size, but
>> the AU implementation does NOT have MaximumFramesPerSlice tracking the
>> HW buffer size. It would seem that the two implementations should be
>> similar in this regard.
>>
>> Certainly, if there is some case where the host needs to change this
>> value to render some large buffer (say, offline processing), that's
>> fine too.
>
> Yeah, that's what Spark does, I know. In realtime usage, the max size
> is set, I think, to the hardware size, but for offline bouncing it's
> set to 8192, if I remember correctly. Also, Logic, for example, lets
> you adjust the slice size for plugins independently of the hardware
> buffer size (this usually lets you lower CPU usage), although when an
> Audio Instrument track is selected, or a track is a live input, those
> always use the hardware buffer size. So I guess it can be intentional
> and valid to have the max slice size not correspond to the hardware
> buffer size, but it's definitely not okay to never set it at all
> (which I'm seeing in some cases now), or to provide slices larger than
> the max size you set!
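Just to spell out what we're suggesting hosts do for the realtime case,
here's a minimal sketch (the device query and the set-before-initialize
ordering are my assumptions about a generic host, not a description of
any particular app):

   #include <CoreAudio/CoreAudio.h>
   #include <AudioUnit/AudioUnit.h>

   /* Minimal sketch: make the AU's max slice size track the HW buffer
      size.  'device' and 'unit' are assumed to be set up elsewhere. */
   static OSStatus SetMaxFramesToHWBufferSize(AudioDeviceID device,
                                              AudioUnit unit)
   {
       UInt32 hwFrames = 0;
       UInt32 size = sizeof(hwFrames);
       OSStatus err = AudioDeviceGetProperty(device, 0, 0 /* isInput = false */,
                                             kAudioDevicePropertyBufferFrameSize,
                                             &size, &hwFrames);
       if (err != noErr)
           return err;

       /* Do this before AudioUnitInitialize(); set it again later if you
          need larger slices (e.g. for an offline bounce). */
       return AudioUnitSetProperty(unit,
                                   kAudioUnitProperty_MaximumFramesPerSlice,
                                   kAudioUnitScope_Global, 0,
                                   &hwFrames, sizeof(hwFrames));
   }
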
Believe me, I'm *WAAAY* familiar with all the "interesting" behavior
in Logic that you're referring to. This whole issue you mention of
plugs getting called with different buffer sizes (and in different
execution contexts on OS 9!) depending on whether the track is in
"live mode" or not has been a big problem for the UAD-1 plugs. We
finally got it fixed for VST, and now there are new (different)
issues for AU.
The way Logic used to work for VST was this:
   blockSize == *larger* of processBufferSize and asioBufferSize, where:
      processBufferSize == "Process Buffer Range" setting
      asioBufferSize == HW buffer size for the particular I/O you're using

   if (theTrackThisPluginIsOnIsInLiveMode)
      sampleFrames == asioBufferSize;
   else
      sampleFrames == blockSize;
The problem for us was that since blockSize is always the larger of
processBufferSize and asioBufferSize, it can never be smaller than 512
samples (the smallest Process Buffer Range setting). And it is
blockSize that determines our plugin latency, not sampleFrames. So
even if you set your HW buffer size to 64 to reduce latency, it
doesn't help for our plugs, because the latency is determined by
blockSize.
Logic was the *ONLY* VST host where we had this issue.
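To make the latency point concrete, here's a toy model (NOT our actual
code, just a sketch of the general idea; ProcessBlockOnCard is a made-up
stand-in for the card round trip): a plug that has to feed the card in
full blockSize chunks can only hand back the previous block's result, so
its latency is one blockSize no matter how small sampleFrames is.

   #include <CoreAudio/CoreAudioTypes.h>

   extern void ProcessBlockOnCard(const float *in, float *out,
                                  UInt32 frames);   /* hypothetical */

   typedef struct {
       float  *pending;     /* input being accumulated (blockSize samples) */
       float  *processed;   /* previous block's result (zeroed at start) */
       UInt32  blockSize;   /* == MaximumFramesPerSlice / VST blockSize */
       UInt32  filled;      /* how much of 'pending' is filled so far */
   } ToyCardPlug;

   static void ToyRender(ToyCardPlug *p, const float *in, float *out,
                         UInt32 sampleFrames)
   {
       for (UInt32 i = 0; i < sampleFrames; i++) {
           p->pending[p->filled] = in[i];
           out[i] = p->processed[p->filled];   /* lags by blockSize samples */
           p->filled++;
           if (p->filled == p->blockSize) {
               ProcessBlockOnCard(p->pending, p->processed, p->blockSize);
               p->filled = 0;
           }
       }
   }
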
>>> I would tend to think that setting this property would be required
>>> before doing any processing, and then, if the host wants to render
>>> slices larger than the size that it set, it must of course set the
>>> property again. That's what I would think the expectations for this
>>> property would be, but the docs leave things really wide open to
>>> interpretation. Could this please be clarified and more specifically
>>> defined? I've already encountered two different AU hosts that never
>>> set this property, causing all AUs to not work at all when you have
>>> audio hardware buffer sizes configured above 1156 in those apps, so I
>>> think that is maybe a good sign that this needs to be defined clearly
>>> and specifically and emphasized in the docs.
>>
>> What hosts are you referring to? What is their behavior? So far in
>> my tests, I've seen the weirdest behavior in Logic and Peak, followed
>> by Spark. DP seems to do things exactly the way we'd like.
>
> I'm not saying this to "out" any hosts ;) but just because it can be
> useful for other plugin developers to know this: I was talking about
> Peak 4.0 and Melodyne 2.0.
>
> What weird behavior have you experienced in Logic and Spark? I haven't
> found any problems, but I'd like to know if there's anything to watch
> out for...
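(A quick aside before the summary, since the "set it again" case came
up: when a host wants bigger slices for an offline bounce, the dance on
the host side would be roughly this. The uninitialize/reinitialize
bracketing is my assumption about the safest ordering, not something
I've seen spelled out in the docs; the calls themselves are standard
AudioUnit API.)

   #include <AudioUnit/AudioUnit.h>

   /* Sketch: change the max slice size between realtime playback and an
      offline bounce.  Bracketing with uninitialize/initialize is my
      assumption about the safest ordering. */
   static OSStatus SetMaxFrames(AudioUnit unit, UInt32 maxFrames)
   {
       OSStatus err = AudioUnitUninitialize(unit);
       if (err != noErr)
           return err;

       err = AudioUnitSetProperty(unit,
                                  kAudioUnitProperty_MaximumFramesPerSlice,
                                  kAudioUnitScope_Global, 0,
                                  &maxFrames, sizeof(maxFrames));
       if (err != noErr)
           return err;

       return AudioUnitInitialize(unit);
   }

   /* e.g. SetMaxFrames(unit, 8192) before bouncing (the value Marc
      mentions Spark using), then back to the HW buffer size afterwards. */
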
OK, here's a summary of what I've seen in various AU (and AU + VST)
hosts I've tested with so far. Here, "blockSize" =
MaximumFramesPerSlice, and "sampleFrames" = render size. "Weirdness"
is denoted by "***".
I'd be interested to hear if others have seen similar/different behavior.
LOGIC
(note, I haven't tested with the latest betas to see if this behavior
has changed...)
*** - It appears that the blockSize reported to the AU plug is ALWAYS
1024 samples, regardless of the process buffer range and I/O buffer
size settings.
- The sampleFrames value (processing buf size) seems to behave as in OS 9:
- on non-live tracks, sampleFrames = blockSize (1024)
- on live tracks, sampleFrames = I/O buffer size setting.
*** - On OS 9, the blockSize was the larger of the process buffer
range and hardware I/O buffer size settings. On OS X, blockSize is
apparently always 1024, but *Logic still hands each plug a total
number of samples equal to the larger of the process buffer range and
hardware I/O buffer size settings!* This means that if that value is
greater than 1024, Logic will "multiple-bang" the plugs with however
many sampleFrames-sized blocks are required to add up to this value.
Example: Suppose your HW I/O buffer size is 512, and your process
buffer range is set to "Jumbo", which corresponds to 4096 samples.
For plugs that are not in live mode, you'll see 4 process calls in a
row, each with sampleFrames=1024. For plugs in live mode, you'll see
8 process calls in a row, each with sampleFrames = 512.
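In other words, my mental model of what Logic appears to be doing is
something like this (reconstructed purely from the observed behavior,
not from anything documented; RenderOnePlugin is a made-up stand-in for
the render call):

   #include <CoreAudio/CoreAudioTypes.h>

   extern void RenderOnePlugin(UInt32 frames);   /* hypothetical */

   /* With ioBufferSize = 512 and processBufferSize = 4096 ("Jumbo"),
      this gives 4 calls of 1024 on a non-live track and 8 calls of 512
      on a live track, matching what I'm seeing. */
   static void LogicStyleSlicing(UInt32 processBufferSize,
                                 UInt32 ioBufferSize,
                                 UInt32 reportedMaxFrames,  /* always 1024, apparently */
                                 int trackIsLive)
   {
       UInt32 totalFrames = (processBufferSize > ioBufferSize)
                            ? processBufferSize : ioBufferSize;
       UInt32 callSize = trackIsLive ? ioBufferSize : reportedMaxFrames;

       for (UInt32 offset = 0; offset < totalFrames; offset += callSize)
           RenderOnePlugin(callSize);
   }
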
Now, this particular case causes problems for our plugins because we
have another requirement that all of our plugs in the chain have to
be called with the "max buffer size" worth of data (which in this
case is always 1024) before any one of our plugs is called with any
more data. (Another case where our UAD-1 plugins have stricter
requirements than typical host-based plugs.) This is because the
processing of all plugs running on our card has to be synchronous,
which is what enables us to get a very large performance boost. This
might seem like a harsh restriction, but in reality it works in
every VST host and all other AU hosts tested so far, except for
isolated cases like when you might be doing an offline process of a
track while the realtime audio engine is playing back (we don't
support those cases, and it's not a big deal). So anyway, this
breaks in Logic, and it's the only AU host so far that I've found
that does this.
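If it helps to picture the constraint, here's a toy version of the
bookkeeping (again, not our real driver code, just an illustration): no
plug is allowed to get ahead of the slowest plug by even one max-size
block.

   #include <CoreAudio/CoreAudioTypes.h>

   enum { kMaxToyPlugs = 32 };               /* arbitrary for the sketch */

   typedef struct {
       UInt32 blocksReceived[kMaxToyPlugs];  /* full max-size blocks seen so far */
       UInt32 numPlugs;
   } ToyCardScheduler;

   /* A plug may accept its next max-size block only if it isn't already
      ahead of the plug that has received the fewest blocks. */
   static int CanAcceptNextBlock(const ToyCardScheduler *s, UInt32 plug)
   {
       UInt32 minBlocks = s->blocksReceived[0];
       for (UInt32 i = 1; i < s->numPlugs; i++)
           if (s->blocksReceived[i] < minBlocks)
               minBlocks = s->blocksReceived[i];
       return s->blocksReceived[plug] == minBlocks;
   }
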
This seems odd, because this behavior is the only thing that is
really different from the OS 9 behavior. It makes me wonder whether
there actually is a "really good" reason for always setting blockSize
to 1024. If there's no reason why this value can't just track the HW
I/O buffer size (or the larger of that and the process buffer range
setting as in OS 9), then I would really like to see this changed.
SPARK XL (2.8) - supports both VST and AU
- In VST case, blockSize = HW I/O buffer size
*** - In AU case, blockSize = 1024 independent of HW I/O buffer size,
*until* you increase the I/O buffer size beyond 1024, then blockSize
= HW I/O buffer size (but once you do this, if you subsequently
reduce the I/O buffer size, the blockSize does NOT go back down).
Again, this seems odd - it seems like the behavior in the AU case
here should be the same as the VST case.
- sampleFrames = HW I/O buffer size
PEAK (4.0) - supports both VST and AU (but only VST within vbox matrix)
*** - In vbox (VST only), blockSize = sampleFrames = HW buffer size.
However, there is a bug where if you change the HW buffer size, Peak
will call resume() with the OLD blockSize, but process with the new
blockSize, which can obviously cause problems. I'm going to report
this to the Peak beta list. The workaround is to de/re-instantiate
the plugs in vbox after changing the buffer size.
- In the inserts, you can have either VST or AU plugins.
- For VST plugs, blockSize = sampleFrames = HW buffer size, and it
correctly tracks changes in HW buffer size.
*** - For AU plugs, blockSize = 1024 independent of HW buffer size,
even if HW buffer size > 1024. sampleFrames = HW buffer size. If HW
buffer size > 1024, processing function is not called at all. Seems
to be a Peak bug unless it's something weird in our AU implementation.
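(My best guess at why the processing function never runs in that case:
SDK-based AUs refuse render calls bigger than their current max,
roughly like the sketch below. This is my paraphrase of the AU SDK's
behavior, so treat the details as assumptions. It would also explain
the "AUs not working at all above 1156" situation Marc mentioned, if,
as I assume, 1156 is the SDK's default when the host never sets the
property.)

   #include <AudioUnit/AudioUnit.h>

   typedef struct { UInt32 maxFramesPerSlice; } MyAUState;       /* hypothetical */
   extern OSStatus DoActualRender(MyAUState *au, UInt32 frames); /* hypothetical */

   /* Paraphrase of the guard an SDK-based AU effectively applies on
      every render call (my reading, not the literal SDK code). */
   static OSStatus CheckedRender(MyAUState *au, UInt32 inFramesToProcess)
   {
       if (inFramesToProcess > au->maxFramesPerSlice)
           return kAudioUnitErr_TooManyFramesToProcess;  /* no audio processed */
       return DoActualRender(au, inFramesToProcess);
   }
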
DIGITAL PERFORMER (v. 4.1b8)
- blockSize does match the hardware buffer setting (times the "host
buffer multiplier" value, not sure exactly what that's for).
- sampleFrames = blockSize
This is the ideal behavior for us.
Jim