
Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Brian Willoughby <email@hidden>
  • Date: Tue, 22 Nov 2011 21:34:35 -0800


On Nov 22, 2011, at 13:55, Mark Heath wrote:
> On 22/11/2011, at 4:27 PM, Brian Willoughby wrote:
>> You may be more likely to find examples that are FFT-based where you could get an idea of how to handle the windowing, and then it would be a simple matter to substitute your convolution matrix for the FFT processing.
>
> I had found a page that described the problem http://web.willbenton.com/writing/2004/au-effect but had no code examples.


>> There is both a 'latency' attribute and a 'tail' attribute. As you correctly surmise, there are issues with the first samples and therefore the necessity of a tail. The size of the window determines your latency and tail. After you determine the needed latency and tail, you will code these values into your AU, either as a constant or as a value that is calculated on the fly from run time information such as the sample rate. The AU spec also provides a reset function so that you can clear out any memory for the window or state variables when the transport changes position in a discontinuous fashion. These aspects of the AudioUnit specification allow the AU host to coordinate all necessary aspects of windowing with each individual plugin.

> I do have my buffering code sorted out; I didn't quite know about the latency or how to implement the tail, or whether this was indeed the correct way to do it.
> The documentation I read regarding tail was talking about reverb filters, where N samples in produce N + decay samples out. My filter is still N samples in to N samples out.
No offense intended, but I have my doubts that your buffering code is sorted just yet.

There are two aspects to each of the latency and tail parameters: 1) reporting the time durations, and 2) implementing the actual features. In terms of tail, you probably should report 0.0 seconds of tail time and implement nothing. I probably should not have even mentioned tail. The latency parameter is the one where you'll report your window duration, but reporting it is separate from actually implementing it.
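
A minimal sketch of the reporting side, assuming the classic Core Audio SDK C++ classes (AUBase exposes these as virtual overrides; 'ConvolutionAU' is just a placeholder name, and the exact base constructor varies between SDK versions):

    #include "AUEffectBase.h"   // classic Core Audio SDK C++ layer

    class ConvolutionAU : public AUEffectBase {
    public:
        ConvolutionAU(AudioUnit au) : AUEffectBase(au) {}

        // No tail: the filter is N samples in, N samples out.
        virtual bool    SupportsTail() { return false; }
        virtual Float64 GetTailTime()  { return 0.0; }

        // Latency is reported here, not in the render call
        // (see the sample-rate-dependent definition further below).
        virtual Float64 GetLatency();

        // Returns one per-channel kernel (defined in the next sketch).
        virtual AUKernelBase *NewKernel();
    };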


>> As to your 2nd question, most AudioUnits have what is known as a Kernel object. These objects are dedicated, one to a channel. If you need state variables such as memory for the windowing, then you need to add these variables to the Kernel object, not the overall audio engine object. Using your terminology, the "stored sample buffer" should be a member of the Kernel object, and then the incoming channel buffer will always match the stored state.

> So, to clarify: there is one instance of my AU class per channel, so any buffering I do in this instance will not clash with another channel?
> This is probably the missing information I'm after.
Sorry to be pedantic, but your question is too vague. As mentioned in the article that you linked above, there are two AU classes: AUEffectBase and AUKernelBase. You will be extending both of those classes to implement your plugin.

If you were to place your buffer in the AUEffectBase subclass, then you'd have a problem. Any such channel buffering belongs in the AUKernelBase subclass. AUEffectBase takes care of creating one instance of the kernel for every channel in your plugin instantiation, thus there should be no clash between channels.
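
A rough sketch of how that division of labor looks, again assuming the classic SDK; ConvolutionKernel, kWindowSize, mInputFIFO and mOutputFIFO are placeholder names, not SDK identifiers:

    #include <vector>
    #include <algorithm>

    class ConvolutionKernel : public AUKernelBase {
    public:
        ConvolutionKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit) {}

        // Classic-SDK per-channel render hook.
        virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                             UInt32 inFramesToProcess, UInt32 inNumChannels,
                             bool &ioSilence);

        // Clear stored samples when the host moves the transport discontinuously.
        virtual void Reset() { mInputFIFO.clear(); mOutputFIFO.clear(); }

        enum { kWindowSize = 8192 };         // example window length only

    private:
        std::vector<Float32> mInputFIFO;     // this channel's "stored sample buffer"
        std::vector<Float32> mOutputFIFO;    // convolved samples waiting to be emitted
    };

    // In the AUEffectBase subclass (the ConvolutionAU sketch above),
    // AUEffectBase calls this once per channel, so each channel gets its own state:
    AUKernelBase *ConvolutionAU::NewKernel() { return new ConvolutionKernel(this); }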


> For the first call to my Process method, instead of trying to output windowSize/2 less than the inFramesToProcess, I simply pad it with 0s at the start and set the latency and tail.

A) I have no idea where you got the idea that you would output windowSize/2 samples.


B) You do not set latency and tail during Process. They are reported separately, and are not part of the render process at all. They serve as descriptions of your algorithm that the AU host will need to know about in advance of render time so that preparations can be made to latency-compensate your plugin in order to time-align it with other plugins that might have a different latency. If your window size depends upon sample rate, then you need to make sure that this is reflected in the reported value. Otherwise, you can just report a fixed number if your convolution window size is always constant.
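
Continuing the placeholder classes from the sketches above, the reported latency might look something like this (GetSampleRate() is the effect base accessor in the classic SDK, so double-check the exact call against your SDK headers):

    // Reported latency: the window duration in seconds. The host reads this
    // ahead of render time for latency compensation; nothing here runs during render.
    Float64 ConvolutionAU::GetLatency()
    {
        return (Float64)ConvolutionKernel::kWindowSize / GetSampleRate();
    }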

Basically, your convolution core will need a couple of working buffers, one for input and one for output. These will be sized according to your windowing needs, plus some optional overhead for copying, and thus the size will be independent of the inFramesToProcess. You will need to copy inFramesToProcess samples from the CoreAudio buffers to your working buffers, and you need to keep count of how many valid samples are in your working buffer. Before your working buffer accumulates enough samples to run a convolution, you'll have nothing to provide to the output, and thus you'll need to pass all 0s to the output before the first convolution calculation is done. Once your working buffers are filled sufficiently, you can pad your output with 0s at the start and append the new output samples from the output working buffer at the end.
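
One possible shape for the kernel's Process(), following the description above. RunConvolution() is a hypothetical helper that turns kWindowSize input samples into kWindowSize output samples, and the std::vector calls are only for clarity, since a real render callback should avoid allocating:

    // Hypothetical DSP routine (not shown): consumes kWindowSize input samples
    // and produces kWindowSize output samples from the convolution matrix.
    static void RunConvolution(const Float32 *windowIn, Float32 *windowOut);

    void ConvolutionKernel::Process(const Float32 *inSourceP, Float32 *inDestP,
                                    UInt32 inFramesToProcess, UInt32 /*inNumChannels*/,
                                    bool &ioSilence)
    {
        // 1. Copy this render call's input into the per-channel working buffer.
        mInputFIFO.insert(mInputFIFO.end(), inSourceP, inSourceP + inFramesToProcess);

        // 2. Each time a full window has accumulated, convolve it and queue the result.
        while (mInputFIFO.size() >= kWindowSize) {
            Float32 windowOut[kWindowSize];              // stack scratch for brevity; preallocate in a real plugin
            RunConvolution(&mInputFIFO[0], windowOut);
            mOutputFIFO.insert(mOutputFIFO.end(), windowOut, windowOut + kWindowSize);
            mInputFIFO.erase(mInputFIFO.begin(), mInputFIFO.begin() + kWindowSize);
        }

        // 3. Emit zeros until convolved output exists, then the queued samples.
        //    The leading zeros are exactly the latency reported above.
        UInt32 available = (UInt32)mOutputFIFO.size();
        UInt32 toCopy    = (available < inFramesToProcess) ? available : inFramesToProcess;
        UInt32 zeros     = inFramesToProcess - toCopy;

        std::fill(inDestP, inDestP + zeros, 0.0f);
        std::copy(mOutputFIFO.begin(), mOutputFIFO.begin() + toCopy, inDestP + zeros);
        mOutputFIFO.erase(mOutputFIFO.begin(), mOutputFIFO.begin() + toCopy);

        ioSilence = (toCopy == 0);   // still silent until the first window completes
    }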

Keep in mind that it may take more than one render call before your convolution can produce output. Taking a perhaps extreme example, let's say your convolution needs 8192 samples for its window, and the AU host is rendering 4096 samples per render call. You might be able to start returning non-zero output samples on the second render, but you might starve the stream unless you wait until the third render. Also, consider what happens to your required 8192-sample window when the AU host uses 512 samples per render. Worse yet, what about a host that typically uses 512 samples but occasionally renders even fewer, such as during certain ramped-parameter render calls? You can see that it might even take hundreds of calls before your window size is met.
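
Just to make the counting concrete, here is the arithmetic as a hypothetical helper, assuming a constant number of frames per render call (with irregular block sizes you simply count samples instead):

    // How many render calls must arrive before a full window has accumulated?
    // Example: 8192-sample window at 4096 frames/render -> 2 calls; at 512 -> 16 calls.
    UInt32 RenderCallsUntilFirstConvolution(UInt32 windowSize, UInt32 framesPerRender)
    {
        return (windowSize + framesPerRender - 1) / framesPerRender;   // ceiling division
    }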

There are a number of ways to handle this, but you basically need a FIFO that is large enough to hold everything that your convolution calculations need, plus you must have some sort of counter to keep track of how much input and output data is waiting in the input and output working buffer (FIFO).
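
One way to build such a FIFO is a fixed-capacity ring buffer with an explicit count, so nothing allocates on the render thread; the kernel would keep one for input and one for output in place of the std::vector members in the earlier sketch. Capacity and names here are only placeholders:

    struct SampleFIFO {
        enum { kCapacity = 4 * 8192 };        // assumed headroom: a few windows' worth
        Float32 data[kCapacity];
        UInt32  readPos;                      // index of the oldest queued sample
        UInt32  count;                        // how many samples are currently queued

        SampleFIFO() : readPos(0), count(0) {}

        void Push(const Float32 *src, UInt32 n) {   // caller ensures count + n <= kCapacity
            for (UInt32 i = 0; i < n; ++i)
                data[(readPos + count + i) % kCapacity] = src[i];
            count += n;
        }

        void Pop(Float32 *dst, UInt32 n) {          // caller ensures n <= count
            for (UInt32 i = 0; i < n; ++i)
                dst[i] = data[(readPos + i) % kCapacity];
            readPos = (readPos + n) % kCapacity;
            count  -= n;
        }
    };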

Brian Willoughby
Sound Consulting

