
Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Richard Dobson <email@hidden>
  • Date: Thu, 24 Nov 2011 10:25:26 +0000

On 22/11/2011 21:55, Mark Heath wrote:

> ..
> I do have my buffering code sorted out, I didn't quite know about the
> latency or how to implement the tail. Or if this was indeed the correct
> way to do this.
> The documentation I read regarding tail were talking about reverb
> filters. Where N samples in produce N + decay samples out. My filter is
> still N samples in to N samples out.


It would be helpful to know what the exact audio effect/process is that you are implementing, rather than just the method. Given that your stated method is convolution, the recipe above describes circular convolution (in the time domain), which is generally avoided in audio because it produces temporal aliasing (unless the data is already sufficiently zero-padded, or you want that temporal aliasing for artistic reasons).

All filters have a non-zero impulse response; that is part of the definition of a filter. Normally we need linear convolution: N samples in, N+(M-1) samples out, where M is the length of the convolution signal (filter coefficients, reverb decay, or whatever). This is exactly what a standard FIR filter produces, of which reverb is a particular case and simple delay another.

There is input delay, which equates to latency in a streaming context, and there is a tail (the data still in the delay line, which needs to be played out after, say, a NOTE OFF). In neither case would you want or expect to quantize your delay to the host buffer size; you manage your own internal delay or process buffer(s) to provide the exact delay time requested by the user. At each process call of size N, you inject N samples into your process engine and extract N samples to be output. Some or all of these may be zero-valued.
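To make that concrete, here is a minimal sketch (names and sizes are mine, not from the thread) of a streaming direct-form FIR engine: each process call takes N samples in and puts N samples out, and the remaining M-1 samples of the linear-convolution tail are flushed afterwards by driving the engine with zero input.

```c
#include <stddef.h>
#include <string.h>

#define MAX_TAPS 64  /* arbitrary cap for this sketch */

typedef struct {
    float coeffs[MAX_TAPS]; /* impulse response, length M */
    float delay[MAX_TAPS];  /* circular delay line of past inputs */
    size_t ntaps;           /* M */
    size_t pos;             /* write index into the delay line */
} FirState;

void fir_init(FirState *s, const float *ir, size_t m) {
    memset(s, 0, sizeof *s);
    memcpy(s->coeffs, ir, m * sizeof *ir);
    s->ntaps = m;
}

/* One process call: inject n samples, extract n samples. */
void fir_process(FirState *s, const float *in, float *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        s->delay[s->pos] = in[i];
        float acc = 0.0f;
        size_t idx = s->pos;
        for (size_t k = 0; k < s->ntaps; k++) {
            acc += s->coeffs[k] * s->delay[idx];      /* y[n] += h[k]*x[n-k] */
            idx = (idx == 0) ? s->ntaps - 1 : idx - 1;
        }
        out[i] = acc;
        s->pos = (s->pos + 1) % s->ntaps;
    }
}

/* Flush the tail: M-1 further output samples, zero input. */
void fir_flush_tail(FirState *s, float *out) {
    float zeros[MAX_TAPS] = {0};
    fir_process(s, zeros, out, s->ntaps - 1);
}
```

Feeding a unit impulse through `fir_process` reproduces the impulse response across the block and the flushed tail, which is a quick sanity check that the engine performs linear, not circular, convolution.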

As a worst-case test, assume each call to process provides just one sample, like the famous "tick" functions in STK. If you implement a tick function for your process, you are then safe for any host buffer size as you simply iterate over N calls to "tick". It may not be optimal in terms of efficiency, but it will always work. The trick is to regard the host block size not as somehow intrinsic to your ~design~, but rather as a reason, once the process is working, to implement appropriate optimizations to reduce function-call overhead, number of memory moves, etc.
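A sketch of that tick idea, using a hypothetical one-pole lowpass as the per-sample process (the filter itself is just a stand-in; the point is that any host block size reduces to a loop over `tick`):

```c
#include <stddef.h>

/* State for a one-pole lowpass: y[n] = (1-a)*x[n] + a*y[n-1] */
typedef struct {
    float z1; /* previous output, y[n-1] */
    float a;  /* feedback coefficient, 0 <= a < 1 */
} OnePole;

/* Process exactly one sample, STK "tick" style. */
float onepole_tick(OnePole *s, float x) {
    s->z1 = (1.0f - s->a) * x + s->a * s->z1;
    return s->z1;
}

/* Block process of any size n: simply iterate the tick.
   Because all state lives in OnePole, the result is identical
   whether the host delivers 1 sample per call or 4096. */
void onepole_process(OnePole *s, const float *in, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = onepole_tick(s, in[i]);
}
```

Since the state carries across calls, processing two blocks of one sample each gives bit-identical output to processing one block of two samples, which is exactly the block-size independence being described.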

With regard to code examples, PD has already been mentioned; there is also Csound, which has copious examples of all this stuff, including both plain and partitioned convolution, and a streaming phase vocoder framework.

Richard Dobson
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


References:
  • Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
  • Re: Convolution Audio Unit how to? (From: Brian Willoughby <email@hidden>)
  • Re: Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
