Re: Convolution Audio Unit how to?
- Subject: Re: Convolution Audio Unit how to?
- From: Paul Davis <email@hidden>
- Date: Mon, 28 Nov 2011 07:51:45 -0500
On Sun, Nov 27, 2011 at 4:45 PM, Mark Heath <email@hidden> wrote:
> Non real time filters have a definite start to the data, and do not need to
> output anything until they have enough data to do so, hence output sample 1
> will be in input sample 1's position.
there still seems to be some confusion here. you are using an
algorithm that requires you to collect an amount of data that may be
larger than what a single render callback provides. there is
absolutely no way that your first output sample can be guaranteed to
be handed back to the host in the same render callback as your first
input sample. in fact, not only can you not guarantee this: in almost
all likely combinations of the host's render block size and your
window size, you can guarantee the opposite, namely that the first
output sample you provide the host will be unrelated to the first
input sample (it will be silence).
whether a given class of DSP filters does or does not need anything in
particular, an AudioUnit (and in fact, an audio processing/generating
plugin using any API i'm familiar with) must ALWAYS output a
host-specified amount of data, whether it has enough input data yet
or not. this isn't optional, negotiable, or contextual.
Coreaudio-api mailing list (email@hidden)