kAudioUnitProperty_MaximumFramesPerSlice
- Subject: kAudioUnitProperty_MaximumFramesPerSlice
- From: Marc Poirier <email@hidden>
- Date: Mon, 8 Sep 2003 18:51:12 -0500 (CDT)
This is what it says in the AU API docs for
kAudioUnitProperty_MaximumFramesPerSlice:
"This property should be changed if an Audio Unit is going to be asked to
render a particularly large buffer. This then allows the unit to
pre-allocate enough memory for any computations and output that it may
have to have buffers for (including the buffer that it can pass to a
RenderCallback). This avoids allocation in the render process, or a
failure in the render process, because the unit is asked to produce more
data than it is able to at any given time."
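(To spell out what that pre-allocation might look like, here's a rough,
illustrative sketch in plain C against the AudioUnit API. It's written
host-style for brevity; a real AU built on the SDK would use the value its
base class already stores rather than querying itself, and the channel count
and buffer names here are made up.)

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* Illustrative only: sizing a scratch buffer from the unit's current
   MaximumFramesPerSlice value, outside of the render path. */
static float *AllocateScratchForMaxFrames(AudioUnit unit, UInt32 channels)
{
    UInt32 maxFrames = 0;
    UInt32 size = sizeof(maxFrames);
    OSStatus err = AudioUnitGetProperty(unit,
                            kAudioUnitProperty_MaximumFramesPerSlice,
                            kAudioUnitScope_Global, 0,
                            &maxFrames, &size);
    if (err != noErr || maxFrames == 0)
        return NULL;

    /* One allocation up front; Render then never has to allocate, and can
       refuse any request larger than maxFrames. */
    return (float *) calloc(maxFrames * channels, sizeof(float));
}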
Don't you think that's a little ambiguous and weakly defined? What is "a
particularly large buffer"? Should the host always set this property to
something before doing any rendering?
I would tend to think that setting this property would be required before
doing any processing, and then, if the host wants to render slices larger
than the size that it set, it must of course set the property again.
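Concretely, what I have in mind on the host side is something like the
sketch below (plain C against the AudioUnit API; the function name and the
idea of passing in the hardware buffer size are just illustrative, not
anything mandated by the docs):

#include <AudioUnit/AudioUnit.h>

/* Illustrative host-side setup: declare the largest slice the unit will
   ever be asked to render, then initialize. */
static OSStatus ConfigureMaxFrames(AudioUnit unit, UInt32 hardwareBufferFrames)
{
    UInt32 maxFrames = hardwareBufferFrames;
    OSStatus err = AudioUnitSetProperty(unit,
                            kAudioUnitProperty_MaximumFramesPerSlice,
                            kAudioUnitScope_Global, 0,
                            &maxFrames, sizeof(maxFrames));
    if (err != noErr)
        return err;

    return AudioUnitInitialize(unit);
}

If the host later wants to render slices larger than the value it set, it
would call AudioUnitSetProperty again with the new maximum (uninitializing
and reinitializing the unit around the change, as I understand it) before
issuing the larger render request.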
That's what I would expect the contract for this property to be, but the
docs leave things really wide open to interpretation. Could this please be
clarified and more specifically defined? I've already encountered 2
different AU hosts that never set this property at all, which breaks every
AU when the audio hardware buffer size in those apps is configured above
1156, so I think that's a good sign that this needs to be defined clearly
and specifically, and emphasized in the docs.
Thanks,
Marc