Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Brian Willoughby <email@hidden>
  • Date: Mon, 21 Nov 2011 21:27:49 -0800

Mark,

Your convolution AU falls into the same category as an FFT-based AU or any AU that needs windowing. CoreAudio does not provide any support for the windowing part itself, so you must implement this on your own. You may be more likely to find examples that are FFT-based where you could get an idea of how to handle the windowing, and then it would be a simple matter to substitute your convolution matrix for the FFT processing.

While CoreAudio and its examples do not provide the code for windowing of the data itself, the AudioUnit specification does provide very specific parameters related to windowing. There is both a 'latency' attribute and a 'tail' attribute. As you correctly surmise, there are issues with the first samples and therefore the necessity of a tail. The size of the window determines your latency and tail. After you determine the needed latency and tail, you will code these values into your AU, either as a constant or as a value that is calculated on the fly from run time information such as the sample rate. The AU spec also provides a reset function so that you can clear out any memory for the window or state variables when the transport changes position in a discontinuous fashion. These aspects of the AudioUnit specification allow the AU host to coordinate all necessary aspects of windowing with each individual plugin.
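To make the latency/tail relationship concrete, here is a minimal sketch (not Apple sample code) of how those two values might be computed from the kernel length. It assumes a symmetric convolution kernel, where the look-ahead is half the kernel length; in an AUBase subclass these would typically be the values returned from the GetLatency() and GetTailTime() overrides, both expressed in seconds:

```cpp
#include <cassert>

// Illustrative helpers, assuming a symmetric kernel of length kernelLen:
// the plugin must "look ahead" (kernelLen - 1) / 2 samples, so it reports
// that amount as latency. The filter keeps ringing for the remainder of
// the kernel after input stops, which gives the tail. Both AU properties
// are expressed in seconds, hence the sample-rate divide.
double latencySeconds(int kernelLen, double sampleRate) {
    int lookAheadSamples = (kernelLen - 1) / 2;
    return lookAheadSamples / sampleRate;
}

double tailSeconds(int kernelLen, double sampleRate) {
    return (kernelLen - 1) / sampleRate;  // samples of ring-out, in seconds
}
```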

Basically, you need to divorce your inner convolution code from the specific size of the buffers, e.g. 512 or 4096 samples. To do this, you need a layer of code that copies buffers into a piece of memory dedicated to your window size. Once you have filled your window completely, you should run your convolution and then provide the results to the output buffer. Needless to say, the first call or calls to your AU will have to return zeroes if the window is not full, but that will quickly pass, especially if your required window is small. Properly designed code will work no matter what size the AU buffer is, and will even work when the buffer size changes from one call to the next.
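The buffering layer described above can be sketched as follows. This is an assumption-laden illustration, not AU SDK code: a convolver whose inner loop never sees the host buffer size. Each call may pass any number of frames, including a count that differs from the previous call, and until the internal history fills, the early outputs are effectively zero-padded — the "first calls return zeroes" behavior described above:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: direct-form convolution over a circular delay line the length
// of the kernel. The process() loop is indifferent to the host's buffer
// size, so 1, 512, or 4096 frames per call all work, and the size may
// change between calls.
class Convolver {
public:
    explicit Convolver(std::vector<float> kernel)
        : kernel_(std::move(kernel)),
          history_(kernel_.size(), 0.0f), pos_(0) {}

    // Process 'frames' samples of one channel: y[n] = sum_k h[k] * x[n-k]
    void process(const float* in, float* out, std::size_t frames) {
        for (std::size_t n = 0; n < frames; ++n) {
            history_[pos_] = in[n];  // newest sample overwrites oldest
            float acc = 0.0f;
            for (std::size_t k = 0; k < kernel_.size(); ++k) {
                std::size_t idx = (pos_ + history_.size() - k) % history_.size();
                acc += kernel_[k] * history_[idx];
            }
            out[n] = acc;
            pos_ = (pos_ + 1) % history_.size();
        }
    }

    // Analogue of the AU Reset() idea: clear state when the transport
    // jumps to a discontinuous position.
    void reset() {
        std::fill(history_.begin(), history_.end(), 0.0f);
        pos_ = 0;
    }

private:
    std::vector<float> kernel_;
    std::vector<float> history_;  // circular delay line, one kernel long
    std::size_t pos_;
};
```

With a kernel of {0, 0, 1} (a pure two-sample delay), feeding {1, 2} and then {3, 4, 5} across two calls yields {0, 0} and {1, 2, 3} — the same result as one five-sample call, demonstrating the buffer-size independence.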

I believe I have said before on this list that it would be really handy to have an AUEffectBase subclass or AUBase subclass that implements windowing in a generic fashion. But the truth is that these techniques are fairly basic DSP that can be handled in Standard C, and thus Apple has chosen to focus on examples that are specific to AudioUnits and perhaps a bit simpler than windowed effects. I tried to write such a generic class myself, but never really had the time to turn this into a piece of universal sample code. I can understand why nobody has volunteered to do it for free. As I mentioned above, there may be FFT-based AU samples out there that happen to have windowing solved as part of the overall problem, so look for that.

As to your second question, most AudioUnits have what is known as a Kernel object. These objects are dedicated, one per channel. If you need state variables such as memory for the windowing, then you need to add these variables to the Kernel object, not the overall audio engine object. Using your terminology, the "stored sample buffer" should be a member of the Kernel object, and then the incoming channel buffer will always match the stored state.
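The per-channel layout described here can be sketched like this. The names are illustrative, not the actual AUKernelBase API: the point is simply that all windowing state lives inside a per-channel Kernel, so a given channel's incoming buffer always meets that same channel's stored history:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative structure (hypothetical names, not AU SDK classes):
// the engine owns one Kernel per channel; every piece of per-channel
// state — the stored sample buffer included — lives in the Kernel,
// never in the shared engine object.
struct Kernel {
    std::vector<float> storedSamples;  // this channel's history only
};

struct Engine {
    std::vector<Kernel> kernels;       // one Kernel per channel
    explicit Engine(std::size_t channels) : kernels(channels) {}
};
```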

In summary, I believe that the AudioUnit documentation assumes prior experience with plugin development, and therefore the understanding is that Apple should not need to provide instruction or tutorials on such basic aspects of DSP as windowing. Once you understand the basics of plugin development and the specifics of the AU API, then the pieces should fall into place in a fairly obvious fashion. It seems that the curse of AU is that it attracts new folks who maybe expect a bit more hand-holding than is provided. Apple supplies the pieces that only they can supply, and they even supply a few solutions that are not absolutely necessary. Sure, they could supply even more than they do, but I think they've found a reasonable balance.

Brian Willoughby
Sound Consulting


On Nov 21, 2011, at 15:21, Mark Heath wrote:
I'm trying to implement an audio unit filter that behaves similarly to a convolution matrix filter (my background is in image processing, so I may use the wrong terminology).

To calculate the new value of the current sample I need a window of samples on either side of the current sample (from the future and the past).
I have implemented this (using AUEffectBase) without processing the samples near the edge of the supplied sample frame. However, I am getting some strange distortion that I could only attribute to not processing these edge samples.


So I'm looking for a correct implementation of a convolution filter.

My original thought was to buffer the last frame and process the samples that are near the edge this way, but this has two problems:
1) the first sample buffer passed into the filter must output fewer samples than were passed in, and then I would need a tail to process the remaining samples.
2) as the filter only receives one channel at a time, I do not know whether my stored sample buffer is from a different channel.


Could anyone help?




  • Follow-Ups:
    • Re: Convolution Audio Unit how to?
      • From: Mark Heath <email@hidden>
  • References:
    • Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
