Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: tahome izwah <email@hidden>
  • Date: Tue, 22 Nov 2011 01:38:02 +0100

Is that list still active?

--th

2011/11/22 Aran Mulholland <email@hidden>:
> This question might be better asked here:
> http://music.columbia.edu/cmc/music-dsp/
>
> On Tue, Nov 22, 2011 at 10:36 AM, tahome izwah <email@hidden> wrote:
>> I'd recommend you check the literature for implementations of
>> overlap-add and overlap-save processing. These are the standard
>> techniques for block-based algorithms like convolution and handle
>> exactly these edge effects.
>>
>> HTH
>> --th
>>
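
For concreteness, here is a minimal time-domain overlap-add sketch (plain C++, no
FFT; class and variable names are illustrative, not any Core Audio API): each block
of N input samples convolved with an impulse response of length M produces N + M - 1
samples; the first N are output immediately and the trailing M - 1 are saved and
added into the start of the next block's result.

#include <cstddef>
#include <utility>
#include <vector>

// Minimal time-domain overlap-add block convolution (assumes a non-empty IR).
class OverlapAddConvolver {
public:
    explicit OverlapAddConvolver(std::vector<float> ir)
        : ir_(std::move(ir)), tail_(ir_.size() - 1, 0.0f) {}

    // Convolves one block of n samples; output length equals input length,
    // and the convolution tail is carried over into the next call.
    void process(const float* in, float* out, std::size_t n) {
        std::vector<float> full(n + ir_.size() - 1, 0.0f);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t k = 0; k < ir_.size(); ++k)
                full[i + k] += in[i] * ir_[k];

        // Add the tail left over from the previous block.
        for (std::size_t i = 0; i < tail_.size(); ++i)
            full[i] += tail_[i];

        // First n samples go out now; the rest becomes the new tail.
        for (std::size_t i = 0; i < n; ++i)
            out[i] = full[i];
        for (std::size_t i = 0; i < tail_.size(); ++i)
            tail_[i] = full[n + i];
    }

private:
    std::vector<float> ir_;    // impulse response
    std::vector<float> tail_;  // last ir_.size() - 1 samples of the previous block's result
};

Overlap-save is the same idea with the bookkeeping inverted (prepend saved input
samples, discard the invalid region); for long impulse responses the per-block
convolution is normally done with an FFT (e.g. vDSP in Accelerate), but the edge
handling above is unchanged.
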
>> 2011/11/22 Mark Heath <email@hidden>:
>>> Hi guys,
>>>
>>> I've spent the last week searching Google for information on how to do this
>>> but have not found anything.
>>> Even searching this mailing list archive returns an error, so forgive me if
>>> this has been asked before.
>>>
>>> I'm trying to implement an Audio Unit filter that behaves similarly to a
>>> convolution matrix filter (my background is in image processing, so I may use
>>> the wrong terminology).
>>>
>>> To calculate the new value of the current sample I need a window of samples
>>> on either side of the current sample (from both the past and the future).
>>> I have implemented this (using AUBaseEffect) without processing the samples
>>> near the edge of the supplied sample frame. However, I am getting some
>>> strange distortion that I can only attribute to not processing these edge
>>> samples.
>>>
>>> So I'm looking for a correct implementation of a convolution filter.
>>>
>>> My original thought was to buffer the last frame and process the samples that
>>> are near the edge that way, but this has two problems:
>>> 1) the first sample buffer passed into the filter must output fewer samples
>>> than were passed in, and I would then need a tail to process the remaining
>>> samples;
>>> 2) as the filter only receives one channel, I do not know whether my stored
>>> sample buffer is from a different channel.
>>>
>>> Could anyone help?
>>>
>>> Thanks
>>> Mark
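
A common alternative to outputting fewer samples (a sketch only, with made-up names,
assuming a symmetric kernel of radius r, i.e. 2*r + 1 taps): keep the last 2*r input
samples as per-channel history and emit output delayed by r samples, so every render
call produces exactly as many samples as it receives. The r-sample delay would then
be reported as the unit's latency (the kAudioUnitProperty_Latency property). If the
SDK's per-channel kernel objects are used, each instance only ever sees one channel,
so keeping the history as a member of that object avoids mixing channels.

#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Per-channel sliding-window FIR with 2*radius + 1 taps (illustrative, not the
// SDK's classes). Output equals the centered filter delayed by `radius` samples.
class CenteredFIRChannel {
public:
    explicit CenteredFIRChannel(std::vector<float> taps)
        : taps_(std::move(taps)),
          history_(taps_.size() - 1, 0.0f) {}  // the "past" is silence at start-up

    void process(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            history_.push_back(in[i]);            // history_ now holds taps_.size() samples
            float acc = 0.0f;
            for (std::size_t k = 0; k < taps_.size(); ++k)
                acc += taps_[k] * history_[k];    // history_[0] is the oldest sample
            out[i] = acc;                         // centered result from `radius` samples ago
            history_.pop_front();                 // keep the last taps_.size() - 1 samples
        }
    }

private:
    std::vector<float> taps_;
    std::deque<float> history_;  // most recent taps_.size() - 1 inputs for this channel
};

This keeps the output count equal to the input count on every call, at the cost of r
samples of delay, and because the history lives with the channel there is no risk of
convolving samples from different channels together.
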

  • Follow-Ups:
    • Re: Convolution Audio Unit how to? (From: Ross Bencina <email@hidden>)
  • References:
    • Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
    • Re: Convolution Audio Unit how to? (From: tahome izwah <email@hidden>)
    • Re: Convolution Audio Unit how to? (From: Aran Mulholland <email@hidden>)
