Re: Convolution Audio Unit how to?
- Subject: Re: Convolution Audio Unit how to?
- From: Mark Heath <email@hidden>
- Date: Wed, 23 Nov 2011 08:55:01 +1100
On 22/11/2011, at 4:27 PM, Brian Willoughby wrote:
> Mark,
>
> Your convolution AU falls into the same category as an FFT-based AU
> or any AU that needs windowing. CoreAudio does not provide any
> support for the windowing part itself, so you must implement this
> on your own. You may be more likely to find examples that are
> FFT-based where you could get an idea of how to handle the
> windowing, and then it would be a simple matter to substitute your
> convolution matrix for the FFT processing.
Hi Brian,
I had found a page that described the problem, http://web.willbenton.com/writing/2004/au-effect
but it had no code examples.
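As a concrete illustration of the block-wise buffering being discussed (a minimal sketch in plain C++ outside the AU SDK; `OverlapAdd` and its members are illustrative names, not anything from CoreAudio), convolution can be applied per render block in overlap-add style, carrying the spill-over of each block into the next:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-channel overlap-add state: `tail` carries the part of each block's
// convolution that spills past the block boundary into the next call.
struct OverlapAdd {
    std::vector<float> tail;  // length = ir.size() - 1, initially zeros

    explicit OverlapAdd(std::size_t irLen)
        : tail(irLen ? irLen - 1 : 0, 0.0f) {}

    // Convolve one block with `ir`; emits exactly block.size() samples.
    std::vector<float> process(const std::vector<float>& block,
                               const std::vector<float>& ir) {
        // Full linear convolution of this block alone.
        std::vector<float> full(block.size() + ir.size() - 1, 0.0f);
        for (std::size_t n = 0; n < block.size(); ++n)
            for (std::size_t k = 0; k < ir.size(); ++k)
                full[n + k] += block[n] * ir[k];
        // Add the spill-over left behind by the previous block.
        for (std::size_t i = 0; i < tail.size(); ++i)
            full[i] += tail[i];
        // Emit block.size() samples; save the new spill-over for next time.
        std::vector<float> out(full.begin(), full.begin() + block.size());
        std::copy(full.begin() + block.size(), full.end(), tail.begin());
        return out;
    }
};
```

Feeding an impulse near the end of a block makes the mechanism visible: the remainder of the impulse response emerges at the start of the following block, which is exactly the "tail" behaviour discussed below.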
> While CoreAudio and its examples do not provide the code for
> windowing of the data itself, the AudioUnit specification does
> provide very specific parameters related to windowing. There are
> both a 'latency' attribute and a 'tail' attribute. As you correctly
> surmise, there are issues with the first samples and therefore the
> necessity of a tail. The size of the window determines your latency
> and tail. After you determine the needed latency and tail, you will
> code these values into your AU, either as a constant or as a value
> that is calculated on the fly from run-time information such as the
> sample rate. The AU spec also provides a reset function so that you
> can clear out any memory for the window or state variables when the
> transport changes position in a discontinuous fashion. These
> aspects of the AudioUnit specification allow the AU host to
> coordinate all necessary aspects of windowing with each individual
> plugin.
I do have my buffering code sorted out; I just didn't know about the
latency, how to implement the tail, or whether this was indeed the
correct way to do it.
The documentation I read regarding tail was talking about reverb
filters, where N samples in produce N + decay samples out. My
filter is still N samples in, N samples out.
> As to your second question, most AudioUnits have what is known as a
> Kernel object. These objects are dedicated, one to a channel. If
> you need state variables such as memory for the windowing, then you
> need to add these variables to the Kernel object, not the overall
> audio engine object. Using your terminology, the "stored sample
> buffer" should be a member of the Kernel object, and then the
> incoming channel buffer will always match the stored state.
So, to clarify: there is one instance of my kernel class per channel,
and any buffering I do in that instance will not clash with another
channel?
This is probably the missing information I was after.
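That ownership model can be sketched in a few lines (plain C++; `ConvKernel` and `ConvEngine` are illustrative names, not SDK classes): the engine owns one kernel per channel, and every piece of per-channel state lives inside the kernel, so channels can never corrupt each other's buffers.

```cpp
#include <vector>

// All per-channel state lives here: one instance per channel.
struct ConvKernel {
    std::vector<float> stored;  // this channel's "stored sample buffer"
    float lastIn = 0.0f;

    // Trivial stand-in for per-channel processing: a two-sample averager
    // whose output depends on this channel's own history only.
    float process(float x) {
        float y = 0.5f * (x + lastIn);
        lastIn = x;
        return y;
    }
};

// The engine holds shared, channel-independent data and one kernel
// per channel; it never holds per-channel sample memory itself.
struct ConvEngine {
    std::vector<ConvKernel> kernels;
    explicit ConvEngine(int channels) : kernels(channels) {}

    float processChannel(int ch, float x) { return kernels[ch].process(x); }
};
```

Because each channel is routed to its own kernel, processing channel 0 leaves channel 1's history untouched, which is the guarantee being asked about.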
For the first call to my Process method, instead of trying to output
windowSize/2 fewer samples than inFramesToProcess, I simply pad the
input with zeros at the start and set the latency and tail.
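That zero-padding scheme behaves like a simple FIFO delay line (a sketch in plain C++; `PaddedStream` is an illustrative name): the first frames out are windowSize/2 leading zeros, and from then on every call returns exactly as many frames as it receives, with the reported latency covering the delay.

```cpp
#include <cstddef>
#include <vector>

// Prepend windowSize/2 zeros once, then stream frames through a FIFO so
// every Process call yields inFramesToProcess samples.
struct PaddedStream {
    std::size_t pad;          // = windowSize / 2, reported as latency
    std::vector<float> fifo;  // primed with `pad` zeros

    explicit PaddedStream(std::size_t windowSize)
        : pad(windowSize / 2), fifo(pad, 0.0f) {}

    // Push a block in, pull the same number of frames out (delayed by pad).
    std::vector<float> process(const std::vector<float>& in) {
        fifo.insert(fifo.end(), in.begin(), in.end());
        std::vector<float> out(fifo.begin(), fifo.begin() + in.size());
        fifo.erase(fifo.begin(), fifo.begin() + in.size());
        return out;
    }
};
```

With windowSize = 4, the first three input samples come out as {0, 0, first-sample}: the host sees a constant two-sample delay, which is exactly what the latency attribute declares.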
> In summary, I believe that the AudioUnit documentation assumes prior
> experience with plugin development, and therefore the understanding
> is that Apple should not need to provide instruction or tutorials
> on such basic aspects of DSP as windowing. Once you understand the
> basics of plugin development and the specifics of the AU API, the
> pieces should fall into place in a fairly obvious fashion. It seems
> that the curse of AU is that it attracts new folks who maybe expect
> a bit more hand-holding than is provided. Apple supplies the pieces
> that only they can supply, and they even supply a few solutions
> that are not absolutely necessary. Sure, they could supply even
> more than they do, but I think they've found a reasonable balance.
My experience has been in writing video filtering plugins, where all
spatial data is available at once and processing one frame at a time
is quite common.
My audio DSP experience has been with non-realtime libraries, where
the filter itself requests samples as it needs them and latency isn't
an issue.
Ross Bencina mentioned "partitioning" in another post, which is
something I've never encountered before; I didn't know the correct
terminology to search Google for, and assumed it was something
specific to the Audio Unit framework.
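For the record, partitioned convolution just splits a long impulse response into equal-length partitions, convolves each partition with a correspondingly delayed copy of the input, and sums the results. Production implementations perform the per-partition products in the FFT domain to bound latency, but the structure is visible in a plain time-domain sketch (illustrative code, not from the thread):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Uniform partitioned convolution, time-domain for clarity: partition p
// of the IR is convolved with the input delayed by p*blockSize samples,
// and all partition outputs are summed.
std::vector<float> partitionedConvolve(const std::vector<float>& x,
                                       const std::vector<float>& ir,
                                       std::size_t blockSize) {
    std::vector<float> y(x.size() + ir.size() - 1, 0.0f);
    for (std::size_t p = 0; p * blockSize < ir.size(); ++p) {
        std::size_t start = p * blockSize;  // this partition's delay
        std::size_t len = std::min(blockSize, ir.size() - start);
        for (std::size_t n = 0; n < x.size(); ++n)
            for (std::size_t k = 0; k < len; ++k)
                y[n + start + k] += x[n] * ir[start + k];
    }
    return y;
}
```

The result is identical to direct convolution; the payoff is that each small partition can be processed with a short FFT, so a very long reverb IR no longer forces a full-IR-length latency.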
Thank you all for your patience with my ignorant questions.
Mark
_______________________________________________
Coreaudio-api mailing list (email@hidden)