Re: Using existing VST/RTAS DSP code in AudioUnit
- Subject: Re: Using existing VST/RTAS DSP code in AudioUnit
- From: Howard Moon <email@hidden>
- Date: Tue, 22 Feb 2011 11:44:51 -0800
>> I've got existing DSP code used in both RTAS and VST plug-ins (as well as AU-wrapped VST) that I'd like to use in my new native AudioUnit software. However, in both VST and RTAS, the buffers are passed to the DSP code as an array of pointers to distinct buffers (left and right channel, if stereo). But in the Process function of my Kernel object, the data is interleaved, right? So, the first sample is for the left channel, then the second is the right, then the third is the left, etc. Correct?
>>
>> How can I easily make use of the existing code base I have so that I don't have to rewrite the DSP code to handle this interleaved data? I thought of copying the data going in and out, but that takes a lot of extra time, and also requires that I have 2 input and 2 output buffers that are large enough to hold whatever size buffers get thrown at me (since I don't want to allocate/deallocate memory at that time).
>>
>> I was thinking that maybe I need to throw out the Kernel object that was generated by the Xcode template, and get down-and-dirty with the channel handling in the Render call of my effect class instead. Does that sound correct?
>>
>> Or would it be better (and maybe easier in the long run?) to abandon the existing VST/RTAS code and write new code that handles interleaved data instead?
>>
>> Thanks,
>>
> You should be able to get deinterleaved data directly from the AudioBufferList in the callback. In my case, I override the Render method of AUMIDIEffectBase or MusicDeviceBase:
>
> AUOutputElement *output = GetOutput(0);
> AudioBufferList &outputBufferList = output->PrepareBuffer(nFrames);
>
> UInt32 mainChannelCount = outputBufferList.mNumberBuffers;
> for (UInt32 i = 0; i < mainChannelCount; i++)
> {
>     // each mBuffers[i] holds one deinterleaved channel of nFrames samples
>     Float32 *channelData = (Float32 *)outputBufferList.mBuffers[i].mData;
>     // ... process channelData here ...
> }
>
Hi Olivier,
I'm not sure I understand what you're intending to do in the loop there. It doesn't look much like the Render code in the AUEffectBase class. That class calls ProcessBufferLists (possibly repeatedly via ProcessForScheduledParams, if using scheduled parameter changes), which in turn calls my Kernel object's Process function. I'm unsure how I can create a pair of buffers for input and another pair for output, for processing via a single call to my DSP object.
As background, the DSP code follows the VST convention of three function parameters: (float** pInputs, float** pOutputs, int numSampleFrames).
Both pInputs and pOutputs are arrays of 1 or 2 pointers to non-interleaved buffers (only mono or stereo is allowed in my case).
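The closest I've come to a solution so far (completely untested, and MyEffect / mDSP->process are just placeholders for my own classes) is to skip the kernel objects and override ProcessBufferLists myself, building the float** arrays directly from the AudioBufferLists, since each mBuffers[i] appears to be a separate channel:

OSStatus MyEffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess)
{
    // AUEffectBase hands over non-interleaved buffers: one mBuffers entry per channel
    float *inputs[2];
    float *outputs[2];
    UInt32 channels = outBuffer.mNumberBuffers;   // 1 (mono) or 2 (stereo) here
    for (UInt32 i = 0; i < channels && i < 2; i++)
    {
        inputs[i]  = (float *)inBuffer.mBuffers[i].mData;
        outputs[i] = (float *)outBuffer.mBuffers[i].mData;
    }
    // hand the channel-pointer arrays to the existing VST/RTAS-style routine
    mDSP->process(inputs, outputs, (int)inFramesToProcess);   // placeholder name
    return noErr;
}

That would let the existing DSP code run untouched, at the cost of bypassing the per-channel kernel machinery.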
One other possible solution occurs to me: modify the DSP code so that instead of always incrementing my pointer(s) by 1 when traversing the buffers, I increment by either 1 or 2, depending on an "interleaved" flag that I set prior to processing. (In most cases I'd set the increment to 1, and only set it to 2 in the AU when I'm processing interleaved buffer data.) Does that sound like a valid approach?
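In code, the change would be something like this (processChannel and doOneSample are just stand-ins for the real routines):

// stride is 1 for separate (planar) buffers, 2 for interleaved stereo data
void processChannel(const float *in, float *out, int numSampleFrames, int stride)
{
    for (int n = 0; n < numSampleFrames; n++)
    {
        *out = doOneSample(*in);   // existing per-sample DSP, unchanged
        in  += stride;
        out += stride;
    }
}

That would keep a single code path for VST, RTAS, and AU, at the cost of a per-sample stride add.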
Thanks,
Howard