RE: Audio Units - newbie questions
- Subject: RE: Audio Units - newbie questions
- From: "Muon Software Ltd - Dave" <email@hidden>
- Date: Wed, 12 Oct 2005 13:28:40 +0100
- Importance: Normal
> uint channelCount = outputBufferList.mNumberBuffers;
> instead of kNumOutputs
kNumOutputs is a constant in our synth configuration, currently set to 8.
Hangover from the VST version I guess. I kept it in the AU version so I'd
have a single place to change the number of outputs.
> Use AUBufferList::ZeroBuffer(bufferList) instead of your memset
OK
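For what it's worth, a ZeroBuffer-style helper just walks the list and memsets each buffer. A minimal sketch with stand-in types (FakeBuffer and zeroBufferList are illustrative names, not the AU SDK):

```cpp
#include <cstring>
#include <vector>

// Minimal stand-in for Core Audio's AudioBuffer: a data pointer
// plus its size in bytes, as in the real struct.
struct FakeBuffer
{
    void*    mData;
    unsigned mDataByteSize;
};

// Zero every buffer the list points at -- the per-buffer memset
// that a ZeroBuffer-style helper saves you from writing by hand.
void zeroBufferList(std::vector<FakeBuffer>& buffers)
{
    for (FakeBuffer& b : buffers)
        std::memset(b.mData, 0, b.mDataByteSize);
}
```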
> Is there a reason to have two different case nBuffers==1 and
> nBuffers==2, I see no difference.
I don't really know. It seemed possible to me that if a synth can run in
mono mode (as Logic Audio's menus imply), you could get an output bus
containing just a single buffer?
> Anyway, according to what I see you'll just need to do this:
>
> uint numOutputBusCount = Outputs().GetNumberOfElements();
> uint channelSoFar = 0;
> for (uint i = 0; i < numOutputBusCount; i++)
> {
>     AUOutputElement *output = GetOutput(i);
>     AudioBufferList &outputBufferList = output->PrepareBuffer(nFrames);
>     AUBufferList::ZeroBuffer(outputBufferList);
>
>     uint channelCount = outputBufferList.mNumberBuffers;
>     for (uint j = 0; j < channelCount; j++)
>     {
>         m_ptrAudioBuffers[channelSoFar + j] =
>             reinterpret_cast<float*>(outputBufferList.mBuffers[j].mData);
>     }
>     channelSoFar += channelCount;
> }
Looks pretty neat and tidy; once I figure out how to use multiple outputs in
Logic 7 I'll give that a go. Honestly, I've been testing synths in Logic
since version 4.0 and I've still never been able to figure much of it out
:-)
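As a sanity check, the flattening logic in the quoted loop can be mirrored in plain C++, with std::vector standing in for buses and buffers (Bus and gatherChannelPointers are made-up names for illustration, not AU SDK API):

```cpp
#include <cstring>
#include <vector>

// A minimal stand-in for the AU render setup: each "bus" holds one
// buffer (vector<float>) per channel.
using Bus = std::vector<std::vector<float>>;

// Zero every buffer on every bus, then collect a flat array of
// channel pointers, mirroring the structure of the quoted loop:
// channelSoFar advances by each bus's channel count.
std::vector<float*> gatherChannelPointers(std::vector<Bus>& buses)
{
    std::vector<float*> channels;
    for (Bus& bus : buses)
    {
        for (std::vector<float>& buf : bus)
        {
            std::memset(buf.data(), 0, buf.size() * sizeof(float));
            channels.push_back(buf.data());
        }
    }
    return channels;
}
```

With two stereo buses you end up with four channel pointers in order: bus 0 left/right, then bus 1 left/right.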
> Your code seems much too complicated IMHO.
It is entirely possible that it is much too complicated. It shows that I
don't really understand what is going on just yet ;-)
> Concerning this part
>
> if (nFrames > (UInt32)m_nBlockSize)
> {
>     // is this likely to happen?
>     m_nBlockSize = (int)GetMaxFramesPerSlice();
>
>     getLock();
>     m_ptrSynth->reset((int)m_fSampleRate, m_nBlockSize);
>     releaseLock();
> }
>
>
> Just use nFrames directly instead of GetMaxFramesPerSlice, and I wonder
> why you need to lock, because Render is always called on the audio
> thread.
The code here is a safeguard. It catches the case where a Render call asks
the synth to render more frames than the current block size. As my comment
shows, I'm not sure whether this is actually likely to happen.
The locking is to do with our own internal mutexes between the editor and
the synth, so you needn't worry about that. We just need to lock the synth
while performing the reset so the GUI is temporarily locked out.
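A rough sketch of that safeguard, with std::mutex standing in for our getLock/releaseLock pair and a counter standing in for the synth reset (SynthGuard and ensureBlockSize are hypothetical names for illustration):

```cpp
#include <mutex>

// Sketch of the safeguard described above: if the host asks for more
// frames than the current block size, grow the block size and reset
// the synth while holding the editor/synth mutex, so the GUI cannot
// touch the synth mid-reset.
struct SynthGuard
{
    int        blockSize = 512;
    int        resets    = 0;   // stands in for m_ptrSynth->reset(...)
    std::mutex lock;            // stands in for getLock()/releaseLock()

    void ensureBlockSize(int nFrames)
    {
        if (nFrames > blockSize)
        {
            std::lock_guard<std::mutex> guard(lock);
            blockSize = nFrames;  // or query the host's max slice size
            ++resets;             // reset the synth at the new size
        }
    }
};
```

If the requested frame count never exceeds the current block size, the guard does nothing and Render proceeds lock-free, which is the common case.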
Regards
Dave
Coreaudio-api mailing list (email@hidden)