Re: Using existing VST/RTAS DSP code in AudioUnit
- Subject: Re: Using existing VST/RTAS DSP code in AudioUnit
- From: "Sophia Poirier [dfx]" <email@hidden>
- Date: Tue, 22 Feb 2011 17:23:51 -0500
For effect and instrument AUs, the audio data is never interleaved: the canonical AU stream format is non-interleaved 32-bit float, and AUEffectBase hands each kernel a single mono channel, so your existing per-channel DSP code can process the buffers directly.
- Sophia
On Feb 22, 2011, at 8:44 AM, Howard Moon wrote:
> Hi,
>
> I've got existing DSP code used in both RTAS and VST plug-ins (as well as AU-wrapped VST) that I'd like to use in my new native AudioUnit software. However, in both VST and RTAS, the buffers are passed to the DSP code as an array of pointers to distinct buffers (left and right channel, if stereo). But in the Process function of my Kernel object, the data is interleaved, right? So, the first sample is for the left channel, then the second is the right, then the third is the left, etc. Correct?
>
> How can I easily make use of the existing code base I have so that I don't have to rewrite the DSP code to handle this interleaved data? I thought of copying the data going in and out, but that takes a lot of extra time, and also requires that I have 2 input and 2 output buffers that are large enough to hold whatever size buffers get thrown at me (since I don't want to allocate/deallocate memory at that time).
>
> I was thinking that maybe I need to throw out the Kernel object that was generated by the Xcode template, and get down-and-dirty with the channel handling in the Render call of my effect class instead. Does that sound correct?
>
> Or would it be better (and maybe easier in the long run?) to abandon the existing VST/RTAS code and write new code that handles interleaved data instead?
>
> Thanks,
> Howard
Coreaudio-api mailing list (email@hidden)