

Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Jamie Bullock <email@hidden>
  • Date: Wed, 23 Nov 2011 10:25:37 +0000

Hi Mark,

I'm fairly new to AudioUnit programming, so this may be way off the mark, but the source code for Ben Saylor's partconv~ external for Pure Data may be instructive.

Pure Data implements a block-by-block audio processing graph, which can be extended by writing 'external' objects (like plugins). This particular external implements partitioned convolution in a real-time context, which AFAICT is the kind of thing you're looking for:

	http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/bsaylor/partconv~.c?revision=15797&view=markup
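
To make the partitioning idea concrete, here is a minimal time-domain sketch (all names are mine, not from partconv~ or the SDK): the impulse response is cut into short partitions, each partition is convolved with the input, and the results are summed at the partition's delay. A real-time implementation like partconv~ does each sub-convolution with an FFT, which is what keeps the latency down for long impulse responses.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Naive full convolution for reference: y[i+k] += x[i] * h[k].
std::vector<float> convolve(const std::vector<float>& x,
                            const std::vector<float>& h) {
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[i + k] += x[i] * h[k];
    return y;
}

// Partitioned convolution, shown in the time domain for clarity: cut h into
// partitions of size P, convolve each with the input, and sum the results
// delayed by each partition's offset. The output is identical to convolve().
std::vector<float> partitionedConvolve(const std::vector<float>& x,
                                       const std::vector<float>& h,
                                       std::size_t P) {
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t start = 0; start < h.size(); start += P) {
        std::size_t len = std::min(P, h.size() - start);
        std::vector<float> part(h.begin() + start, h.begin() + start + len);
        std::vector<float> sub = convolve(x, part);
        for (std::size_t i = 0; i < sub.size(); ++i)
            y[i + start] += sub[i];  // delay by the partition's offset
    }
    return y;
}
```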

All best,

Jamie

--
http://www.jamiebullock.com



On 21 Nov 2011, at 23:21, Mark Heath wrote:

> Hi guys,
>
> I've spent the last week searching Google for information on how to do this but have not found anything.
> Even searching this mailing list archive returns an error. So forgive me if this has been asked before.
>
> I'm trying to implement an audio unit filter that behaves similarly to a convolution matrix filter (my background is in image processing, so I may use the wrong terminology).
>
> To calculate the new value of the current sample, I need a window of samples on either side of it (from both the future and the past).
> I have implemented this (using AUEffectBase) without processing the samples near the edge of the supplied sample frame. However, I am getting some strange distortion that I could only attribute to not processing these edge samples.
>
> So I'm looking for a correct implementation of a convolution filter.
>
> My original thought was to buffer the last frame and process the samples near the edge that way, but this has two problems:
> 1) The first sample buffer passed into the filter must output fewer samples than were passed in, and then I would need a tail to process the remaining samples.
> 2) As the filter only receives one channel, I do not know whether my stored sample buffer is from a different channel.
>
> Could anyone help?
>
> Thanks
> Mark
> _______________________________________________
> Do not post admin requests to the list. They will be ignored.
> Coreaudio-api mailing list      (email@hidden)
> Help/Unsubscribe/Update your Subscription:
>
> This email sent to email@hidden
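
The edge-sample problem Mark describes comes down to keeping input history across process() calls instead of treating each block in isolation. Below is a minimal sketch of that idea (the class and method names are mine, not the Core Audio SDK's): the convolver stores the last K-1 input samples between calls, so every output sample sees a full kernel's worth of history and no edge samples are skipped. The cost is a fixed latency of (K-1)/2 samples for a centred, image-style kernel, which an Audio Unit would report via kAudioUnitProperty_Latency; keeping one history buffer per channel addresses the second problem.

```cpp
#include <cstddef>
#include <vector>

// Sketch only: block-based FIR convolution that carries input history
// across process() calls, so block edges are seamless.
class BlockConvolver {
public:
    explicit BlockConvolver(std::vector<float> kernel)
        : kernel_(std::move(kernel)),
          history_(kernel_.size() - 1, 0.0f) {}

    // Convolve one block in, one block out (same length).
    void process(const float* in, float* out, std::size_t n) {
        // Prepend the stored history to the current block.
        std::vector<float> buf(history_);
        buf.insert(buf.end(), in, in + n);
        const std::size_t K = kernel_.size();
        for (std::size_t i = 0; i < n; ++i) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < K; ++k)
                acc += kernel_[k] * buf[i + K - 1 - k];
            out[i] = acc;
        }
        // Save the tail of the input as history for the next call.
        history_.assign(buf.end() - (K - 1), buf.end());
    }

private:
    std::vector<float> kernel_;
    std::vector<float> history_;  // last K-1 input samples
};
```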


References: 
  • Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
