Re: Using existing VST/RTAS DSP code in AudioUnit


  • Subject: Re: Using existing VST/RTAS DSP code in AudioUnit
  • From: Olivier Tristan <email@hidden>
  • Date: Tue, 22 Feb 2011 17:52:32 +0100

On 2/22/2011 5:44 PM, Howard Moon wrote:
Hi,

	I've got existing DSP code used in both RTAS and VST plug-ins (as well as AU-wrapped VST) that I'd like to use in my new native AudioUnit software.  However, in both VST and RTAS, the buffers are passed to the DSP code as an array of pointers to distinct buffers (left and right channel, if stereo).  But in the Process function of my Kernel object, the data is interleaved, right?  So, the first sample is for the left channel, then the second is the right, then the third is the left, etc. Correct?
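
	To put the two layouts side by side (sizes and names here are purely illustrative):

// Illustration only: stereo audio, 512 frames.
const int kFrames = 512;

// Interleaved: one buffer, samples alternating L, R, L, R, ...
// Frame n: interleaved[2 * n] is left, interleaved[2 * n + 1] is right.
float interleaved[2 * kFrames];

// Deinterleaved (VST/RTAS style): one buffer per channel, passed as an
// array of pointers. Frame n of channel c is channels[c][n].
float left[kFrames];
float right[kFrames];
float *channels[2] = { left, right };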

	How can I easily make use of the existing code base I have so that I don't have to rewrite the DSP code to handle this interleaved data?  I thought of copying the data going in and out, but that takes a lot of extra time, and also requires that I have 2 input and 2 output buffers that are large enough to hold whatever size buffers get thrown at me (since I don't want to allocate/deallocate memory at that time).

	I was thinking that maybe I need to throw out the Kernel object that was generated by the Xcode template, and get down-and-dirty with the channel handling in the Render call of my effect class instead. Does that sound correct?

	Or would it be better (and maybe easier in the long run?) to abandon the existing VST/RTAS code and write new code that handles interleaved data instead?

Thanks,

You should be able to get deinterleaved data directly from the AudioBufferList in the callback. In my case, I override the Render method of AUMIDIEffectBase or MusicDeviceBase:

AUOutputElement *output = GetOutput(0);
AudioBufferList &outputBufferList = output->PrepareBuffer(nFrames);

UInt32 mainChannelCount = outputBufferList.mNumberBuffers;
for (UInt32 i = 0; i < mainChannelCount; i++)
{
    // Each AudioBuffer holds one deinterleaved channel of nFrames samples.
    Float32 *channelData = static_cast<Float32 *>(outputBufferList.mBuffers[i].mData);
    // Process channelData in place with the existing per-channel DSP here.
}
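
From there, one way to reuse the existing code with no copying is to collect the per-channel pointers into the array the VST/RTAS entry point expects. A rough sketch (processStereo is a placeholder for the real DSP entry point, and the hard-coded stereo limit is illustrative; this assumes the stream format is non-interleaved Float32, the AU canonical format):

// Sketch only: bridge the AudioBufferList to a VST/RTAS-style
// float** entry point. processStereo() stands in for the existing DSP.
extern void processStereo(float **channels, int numChannels, int numFrames);

AUOutputElement *output = GetOutput(0);
AudioBufferList &outputBufferList = output->PrepareBuffer(nFrames);

float *channels[2];                          // sized for stereo here
UInt32 count = outputBufferList.mNumberBuffers;
if (count > 2) count = 2;
for (UInt32 i = 0; i < count; i++)
    channels[i] = static_cast<float *>(outputBufferList.mBuffers[i].mData);

// No interleaving and no copying: the DSP processes the buffers in place.
processStereo(channels, (int)count, (int)nFrames);

An effect's input side can presumably be bridged the same way from the input element's buffer list.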

HTH

--
Olivier Tristan
Ultimate Sound Bank


