Re: Changing the latency from within the render callback.
- Subject: Re: Changing the latency from within the render callback.
- From: Philippe Wicker <email@hidden>
- Date: Tue, 2 Mar 2004 09:00:12 +0100
On Mar 2, 2004, at 1:21 AM, William Stewart wrote:
> I tend to agree with Marc here... It seems to me a somewhat weak
> assumption to make (and I know that this is not true for some hosts)
> that the render frame count you are asked to produce is constant; it
> can in fact vary (I'm not condoning that necessarily, but it is a
> reality).
Yes. This is the case for Logic when the process buffer is set to a
value greater than MaxFramesPerSlice. When set to "large" (i.e.
2048), the render is called successively with 1156 and 892 frames
(making a total of 2048). I wasn't aware of this behavior and thought
that the frame count would be constant. It isn't. The latency
notification trick won't work there.
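So the render has to size everything off the inNumberFrames argument.
A minimal sketch (the FIFO type and MyFIFORead() are hypothetical
placeholders for my internal buffering):

#include <string.h>
#include <AudioUnit/AudioUnit.h>

/* Hypothetical FIFO read: copies at most 'frames' frames and returns
   the number actually copied. */
extern UInt32 MyFIFORead(void *fifo, Float32 *dst, UInt32 frames);

static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames, /* 1156, then 892... */
                                 AudioBufferList *ioData)
{
    void *fifo = inRefCon;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        Float32 *dst = (Float32 *)ioData->mBuffers[i].mData;
        /* Size the copy off the argument, never off a cached
           MaxFramesPerSlice: the slice size can change per call. */
        UInt32 got = MyFIFORead(fifo, dst, inNumberFrames);
        /* Zero-pad an underrun rather than blocking the IO thread. */
        memset(dst + got, 0, (inNumberFrames - got) * sizeof(Float32));
    }
    return noErr;
}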
> For instance, some hosts will achieve ramping by changing a
> parameter's value over time, then calling the AU for smaller slices
> of audio over that time... Some AUs that have this kind of internal
> buffering have actually forgone publishing parameter info because
> they can't deal with this kind of manipulation. (Now, with an AU this
> is NOT needed of course, as parameter values can be scheduled
> intra-buffer.)
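(As an aside: intra-buffer scheduling looks something like the sketch
below. The gain parameter is made up, but AudioUnitScheduleParameters
and kParameterEvent_Ramped are the real API.)

#include <AudioUnit/AudioUnit.h>

/* Sketch: ramp a hypothetical gain parameter over the first 512
   frames of the next slice, instead of rendering many small slices. */
static OSStatus ScheduleGainRamp(AudioUnit unit,
                                 AudioUnitParameterID gainParam)
{
    AudioUnitParameterEvent ev;
    ev.scope     = kAudioUnitScope_Global;
    ev.element   = 0;
    ev.parameter = gainParam;
    ev.eventType = kParameterEvent_Ramped;
    ev.eventValues.ramp.startBufferOffset = 0;
    ev.eventValues.ramp.durationInFrames  = 512;
    ev.eventValues.ramp.startValue        = 0.0f;
    ev.eventValues.ramp.endValue          = 1.0f;
    return AudioUnitScheduleParameters(unit, &ev, 1);
}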
> But, in the case we're describing, I don't know if I've got a great
> solution/suggestion - but I wanted to pose the question - have you
> thought of having your latency be a property that is somewhat
> independent of the number of frames you're asked to render (and
> potentially a settable property)? It's probably some amount of work
> within the AU to make this successful, but it might actually be a
> reasonable feature (one that could have default values expressed in
> terms of some relationship to the maxFrames value). Just a thought.
>
> Bill
Yes, I'm thinking of this solution, but unless I'm missing something
(it wouldn't be the first time :)), it doesn't work well. When the AU
(the one that allows audio IPC) is inserted, I have to first call
PullInput before I can send the rendered slice to the external audio
"client" (the one connected through this AU), and return the preceding
buffer to the caller of my render ("never block the IO thread"). This
means that the latency cannot be less than the size of the buffer to
render. Because I cannot know this size "a priori", I must rely on a
"safe" predefined value (MaxFramesPerSlice is the best candidate) for
every AU instance (whether the AU is used on a live track or not). At
a 44.1 kHz sample rate this gives about 26 ms, which is a little too
long for a musician playing live. The best I could do (with Logic as
it manages MaxFramesPerSlice today) is rely on the user and ask him to
choose - as a property, as you said - between latency values such as
128, 256, 512, 1024, etc., knowing that I cannot honor the request if
1) the track is not live, 2) the process buffer is greater than the
requested latency, or 3) the HW IO buffer is greater than the
requested latency. It's cumbersome and quite a bad user experience.
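For reference, the 26 ms figure is just maxFrames / sampleRate, and it
is what the host would get back when it queries the latency property.
A sketch of the host-side query (the function name is mine):

#include <AudioUnit/AudioUnit.h>

/* Host side: ask the AU how much latency it currently reports. */
static Float64 QueryLatencySeconds(AudioUnit unit)
{
    Float64 latency = 0.0;
    UInt32 size = sizeof(latency);
    AudioUnitGetProperty(unit, kAudioUnitProperty_Latency,
                         kAudioUnitScope_Global, 0, &latency, &size);
    /* With a worst-case buffer of 1156 frames at 44.1 kHz the AU
       would report 1156.0 / 44100.0 = ~0.0262 s, the ~26 ms above. */
    return latency;
}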
In any case, if the user changes the latency, the host has to listen
to my notification and apply it at some appropriate time, as Marc
said.
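On the host side that amounts to registering a property listener,
something like this sketch:

#include <AudioUnit/AudioUnit.h>

/* Called whenever the AU fires a latency property notification;
   re-read the value and re-sync delay compensation at a safe point,
   not necessarily immediately. */
static void LatencyChanged(void *inRefCon, AudioUnit unit,
                           AudioUnitPropertyID propID,
                           AudioUnitScope scope, AudioUnitElement elem)
{
    Float64 latency = 0.0;
    UInt32 size = sizeof(latency);
    AudioUnitGetProperty(unit, kAudioUnitProperty_Latency,
                         kAudioUnitScope_Global, 0, &latency, &size);
    /* ...schedule the new compensation for a convenient time... */
}

/* Registration, e.g. right after opening the unit: */
static OSStatus WatchLatency(AudioUnit unit, void *hostContext)
{
    return AudioUnitAddPropertyListener(unit, kAudioUnitProperty_Latency,
                                        LatencyChanged, hostContext);
}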
Philippe Wicker
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.