Re: Applying a low-pass cutoff inside an AU
- Subject: Re: Applying a low-pass cutoff inside an AU
- From: Aran Mulholland <email@hidden>
- Date: Tue, 15 Sep 2009 12:24:21 +1000
sure, in your render function at say a 44100Hz (44.1kHz) sample rate you need 44100 samples per second. if you call a method to get every single one of those samples, that's 44100 method calls per second, and each method call has to push program state onto the stack and restore it after it returns - a lot of work. if you must call a method (because your audio data is not visible from your render call), pass it the whole buffer and get it to fill it: one call to your other function per render call. your callback routine is still called multiple times per second; the number of samples requested each time depends on your latency requirements.
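roughly, the difference looks like this (a sketch only - the getNextSample/fillBuffer names are made up for illustration, not from the actual sample code):

```c
#include <stddef.h>
#include <stdint.h>

/* slow: one function call per sample - 44100 calls/second at 44.1kHz */
static void render_per_sample(int16_t *out, size_t n,
                              int16_t (*getNextSample)(void)) {
    for (size_t i = 0; i < n; i++)
        out[i] = getNextSample();   /* call overhead paid on every sample */
}

/* fast: hand the callee the whole buffer - one call per render cycle */
static void render_bulk(int16_t *out, size_t n,
                        void (*fillBuffer)(int16_t *, size_t)) {
    fillBuffer(out, n);             /* call overhead paid once */
}
```

both produce identical output; the second just pays the call cost once per buffer instead of once per sample.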
i tried doing this in the app i'm writing: first copying a sample at a time, then filling the buffer using a for loop, then finally a straight memcpy().
the increase in performance was huge. my render calls were taking 25% of execution time and dropped to 0.4% when i moved to memcpy.
this isn't a totally fair comparison, as i moved the copying from my code into system code, but i'm pretty sure memcpy is implemented with optimised processor instructions, so it's about as fast as a copy can get.
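in a render callback the memcpy version boils down to something like this (a sketch - srcSamples stands in for wherever your already-decoded audio lives):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* one bulk copy per render call instead of a per-sample loop.
   frameCount frames of interleaved 16-bit samples, `channels` per frame. */
static void render_copy(int16_t *dst, const int16_t *srcSamples,
                        size_t frameCount, size_t channels) {
    memcpy(dst, srcSamples, frameCount * channels * sizeof(int16_t));
}
```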
enjoy
aran
On Tue, Sep 15, 2009 at 10:40 AM, ROBB A MANIA <email@hidden> wrote:
Hey Aran, Insightful post. I'm trying to do something similar. Can you elaborate a little on this statement: "this code will never be efficient anyway cause you are calling a method to get every sample. you should be doing a straight memcpy of the audio buffer, once per render call." I don't understand this completely.
Regards,
Robb --- On Mon, 9/14/09, Aran Mulholland <email@hidden> wrote:
- From: Aran Mulholland <email@hidden>
- Subject: Re: Applying a low-pass cutoff inside an AU
- To: "Darren Baptiste" <email@hidden>
- Cc: "coreaudio-api" <email@hidden>
- Date: Monday, September 14, 2009, 7:58 PM

i think this call:
p = (Float32)[djMixer.loopOne getNextPacket];
probably returns a 32-bit int representing two 16-bit samples (left and right) - and i'm pretty sure it does, as i wrote that dodgy stuff :)
what you need to do is split the int into two 16-bit values, filter each separately, then rejoin them. a better way might be to use a converter or two, to move the audio from interleaved to non-interleaved and back again.
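splitting and rejoining could look something like this (assuming the low 16 bits are the left channel and the high 16 bits the right - check how your packets are actually packed):

```c
#include <stdint.h>

/* pull apart a 32-bit packet into left/right 16-bit samples */
static void split_packet(uint32_t packet, int16_t *left, int16_t *right) {
    *left  = (int16_t)(packet & 0xFFFF);          /* low 16 bits  */
    *right = (int16_t)((packet >> 16) & 0xFFFF);  /* high 16 bits */
}

/* put the (now filtered) samples back together */
static uint32_t join_packet(int16_t left, int16_t right) {
    return ((uint32_t)(uint16_t)right << 16) | (uint16_t)left;
}
```

filter the left and right values independently between the split and the join.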
this code will never be efficient anyway because you are calling a method to get every sample. you should be doing a straight memcpy of the audio buffer, once per render call. if you want to apply effects you can use up your processing power fast. just a side note on this: if you are running out of steam, increase the latency and get more samples each render call. (i think that sample code has really low latency; you probably don't need it quite as low.)
then, when moving each sample to a float, make sure it lands in an acceptable range - i would check that the range of values the filter accepts matches the range you are giving it. (does the filter only want values between -1.0 and +1.0?)
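if the filter does want -1.0 to +1.0, the usual 16-bit conversion is something like this (one common convention among several - scale factors vary):

```c
#include <stdint.h>

/* map a signed 16-bit sample into roughly -1.0 .. +1.0 */
static float sample_to_float(int16_t s) {
    return (float)s / 32768.0f;
}

/* and back, clamping so out-of-range floats don't wrap around */
static int16_t float_to_sample(float f) {
    if (f >  1.0f) f =  1.0f;
    if (f < -1.0f) f = -1.0f;
    return (int16_t)(f * 32767.0f);
}
```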
enjoy.
aran
_______________________________________________ Do not post admin requests to the list. They will be ignored. Coreaudio-api mailing list ( email@hidden)
This email sent to email@hidden