Re: Convolution Audio Unit how to?
- Subject: Re: Convolution Audio Unit how to?
- From: Mark Heath <email@hidden>
- Date: Tue, 22 Nov 2011 12:01:28 +1100
The reason I'm asking this question on this mailing list is that the theory of the filter is fine if I can receive one long stream of samples; it's the implementation as an Audio Unit that I'm having trouble with.
The problems I'm running into are that I only receive 512 samples at a time, and those samples could be from any channel. That means I can't simply copy them into another buffer and use them as history. I also need to output 512 samples for every 512 I receive, so I can't delay the output by half my window size.
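To make the buffering concrete: if I could be sure every render call came from the same channel, a history buffer would do the job. A rough sketch, with all the names (SlidingWindowFilter, K, the uniform weights) made up:

#include <algorithm>
#include <cstddef>
#include <vector>

// A centred window of (2*K + 1) samples, realised causally by
// delaying the output by K samples. Illustrative names only.
struct SlidingWindowFilter {
    static const int K = 32;      // half-window size (made up)
    std::vector<float> hist;      // last 2*K input samples
    std::vector<float> weights;   // 2*K + 1 window weights

    SlidingWindowFilter()
        : hist(2 * K, 0.0f), weights(2 * K + 1, 1.0f / (2 * K + 1)) {}

    // Consumes n samples, produces n samples; output sample i is the
    // windowed value of input sample (i - K), i.e. K samples of latency.
    void process(const float *in, float *out, std::size_t n) {
        // Prepend the history so the window can straddle the block edge.
        std::vector<float> buf(hist.size() + n);
        std::copy(hist.begin(), hist.end(), buf.begin());
        std::copy(in, in + n, buf.begin() + hist.size());

        for (std::size_t i = 0; i < n; ++i) {
            float acc = 0.0f;
            // buf[i] .. buf[i + 2K] is the full window around buf[i + K].
            for (int k = 0; k <= 2 * K; ++k)
                acc += weights[k] * buf[i + k];
            out[i] = acc;
        }
        // Keep the last 2*K input samples for the next call.
        std::copy(buf.end() - hist.size(), buf.end(), hist.begin());
    }
};

The catch, as above, is the fixed K-sample delay and knowing which channel the history belongs to.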
If these sorts of issues come up in the libraries that the music-dsp people use, then this question would be suitable there. However, that list seems to cover algorithmic implementations independent of any particular audio library.
I don't think overlap-add or overlap-save would work. I said my filter is convolution-like: it needs a window of samples around the current sample, but I cannot take an FFT of my filter's impulse response. Sorry for the loose terminology; I'm aware that large convolutions can be optimised with the FFT.
So I assume that the AUEffectBase class (my mistake, I referred to it as AUBaseEffect earlier) is not the one I should use, since I need more awareness of which samples are being passed to my filter.
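Although, if I'm reading the classic Core Audio SDK sources correctly, AUEffectBase hands each channel to its own AUKernelBase instance via NewKernel(), so each kernel could keep its own history, and the half-window delay can be reported through GetLatency(). A rough skeleton; the class and member names are invented, and constructor signatures may differ between SDK versions:

#include <algorithm>
#include <cstring>
#include <vector>
#include "AUEffectBase.h"   // classic Core Audio SDK

class MyFilterKernel : public AUKernelBase {
public:
    MyFilterKernel(AUEffectBase *inAudioUnit)
        : AUKernelBase(inAudioUnit), mHistory(2 * kHalfWindow, 0.0f) {}

    // Each kernel only ever sees one de-interleaved channel, so
    // mHistory cannot be polluted by samples from another channel.
    virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                         UInt32 inFramesToProcess, UInt32 inNumChannels,
                         bool &ioSilence)
    {
        // Placeholder pass-through; the sliding-window processing
        // using mHistory would go here.
        memcpy(inDestP, inSourceP, inFramesToProcess * sizeof(Float32));
    }

    virtual void Reset() {
        std::fill(mHistory.begin(), mHistory.end(), 0.0f);
    }

    enum { kHalfWindow = 32 };       // made-up window size
private:
    std::vector<Float32> mHistory;   // last 2*kHalfWindow input samples
};

class MyFilter : public AUEffectBase {
public:
    MyFilter(AudioComponentInstance inInstance) : AUEffectBase(inInstance) {}

    virtual AUKernelBase *NewKernel() { return new MyFilterKernel(this); }

    // Report the half-window delay so the host can compensate for it.
    virtual Float64 GetLatency() {
        return MyFilterKernel::kHalfWindow / GetSampleRate();
    }
};

The GetLatency() override is what should let a host line the delayed output back up with the other tracks.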
Does this help clarify my issues?
Thanks
Mark
On 22/11/2011, at 10:44 AM, Aran Mulholland wrote:
This question might be better asked here:
http://music.columbia.edu/cmc/music-dsp/
On Tue, Nov 22, 2011 at 10:36 AM, tahome izwah
<email@hidden> wrote:
I'd recommend you check the literature for implementations of overlap-add and overlap-save processing. These are the standard ways of running block-based algorithms like convolution while dealing with edge effects.
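As a quick illustration, here is the overlap-add idea in the time domain (the FFT is only an optimisation of the per-block convolution); all names are invented:

#include <algorithm>
#include <cstddef>
#include <vector>

// Overlap-add block convolution with an impulse response h of length M:
// convolve each block in full, output the first n samples, and carry the
// trailing M - 1 samples into the next block. Illustrative names only.
struct OverlapAdd {
    std::vector<float> h;     // impulse response, length M >= 1
    std::vector<float> tail;  // carried overlap, length M - 1

    explicit OverlapAdd(const std::vector<float> &ir)
        : h(ir), tail(ir.size() - 1, 0.0f) {}

    // Consumes n samples, produces n samples.
    void process(const float *in, float *out, std::size_t n) {
        const std::size_t M = h.size();
        std::vector<float> y(n + M - 1, 0.0f);  // full convolution length
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t k = 0; k < M; ++k)
                y[i + k] += in[i] * h[k];
        for (std::size_t i = 0; i < tail.size(); ++i)
            y[i] += tail[i];                    // add previous overlap
        std::copy(y.begin(), y.begin() + n, out);
        std::copy(y.begin() + n, y.end(), tail.begin());
    }
};

An FFT-based version replaces the double loop with a multiplication in the frequency domain, but the block bookkeeping stays exactly the same.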
HTH
--th
2011/11/22 Mark Heath <email@hidden>:
Hi guys,
I've spent the last week searching Google for information on how to do this but have not found anything. Even searching this mailing list archive returns an error, so forgive me if this has been asked before.
I'm trying to implement an audio unit filter that behaves like a convolution matrix filter (my background is in image processing, so I may use the wrong terminology). To calculate the new value of the current sample I need a window of samples on either side of it, from both the past and the future.
I have implemented this (using AUBaseEffect) without processing the samples near the edge of the supplied sample frame. However, I am getting some strange distortion that I can only attribute to not processing those edge samples.
So I'm looking for a correct implementation of a convolution filter.
My original thought was to buffer the last frame and use it to process the samples near the edge, but this has two problems (illustrated below):
1) for the first sample buffer passed into the filter, I would have to output fewer samples than were passed in, and I would then need a tail to flush the remaining samples;
2) since the filter only receives one channel at a time, I cannot tell whether my stored sample buffer came from a different channel.
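For clarity, this is the kind of per-sample computation I mean (illustrative names only; the caller must keep n at least K samples away from both ends of x):

// Centred window of (2*K + 1) samples: y[n] depends on the inputs
// x[n - K] .. x[n + K], i.e. on K samples from the "future".
float windowedSample(const float *x, const float *weights, int n, int K)
{
    float acc = 0.0f;
    for (int k = -K; k <= K; ++k)
        acc += weights[k + K] * x[n + k];
    return acc;
}

Because y[n] needs x[n + K], the last K samples of a block cannot be computed until the next block arrives, which is exactly problem 1.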
Could anyone help?
Thanks
Mark
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden