Re: Process and Render
- Subject: Re: Process and Render
- From: Aristotel Digenis <email@hidden>
- Date: Mon, 21 Jun 2004 10:45:02 +0100
Good morning,
Over the weekend I have gone over the documentation of the SDK and have
some more questions.
It appears that when processing N channels to N channels, where there is
no interaction between the channels during processing, we implement a
new AUKernelBase and the processing for each of the N channels happens
inside the kernel. An example of this would be a gain plug-in where we
have one slider for amplitude: we can put in 10 channels and we will get
10 channels out, all amplified by the same ratio set by the slider.
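To check my understanding, here is how I picture the per-channel case in
plain C++, independent of the SDK headers (the function names and buffer
layout here are my own assumptions, not SDK API):

```cpp
#include <cstddef>
#include <vector>

// Sketch of what one kernel's Process() does for ONE channel: the same
// gain is applied to every sample, with no cross-channel interaction.
void applyGain(const float *in, float *out, std::size_t frames, float gain)
{
    for (std::size_t i = 0; i < frames; ++i)
        out[i] = in[i] * gain;
}

// The base class effectively runs one such call per channel
// (one AUKernelBase instance each), so N channels in gives N channels out.
void processAllChannels(const std::vector<std::vector<float>> &in,
                        std::vector<std::vector<float>> &out, float gain)
{
    for (std::size_t ch = 0; ch < in.size(); ++ch)
        applyGain(in[ch].data(), out[ch].data(), in[ch].size(), gain);
}
```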
If the processing of N to N channels requires that the data from one
channel can affect another channel, then we implement
ProcessBufferLists()? Does this function internally call the
AUKernelBases? I have not managed to find enough information on this or
on how it is implemented.
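By contrast, this is how I picture the interacting-channel case: each
output sample depends on both input channels, so independent per-channel
kernels would no longer be enough. (Again a plain C++ sketch under my
own assumptions; a real override would presumably receive
AudioBufferLists rather than raw pointers.)

```cpp
#include <cstddef>

// Sketch of cross-channel processing: a simple stereo blend in which
// each output channel mixes in some of the OPPOSITE channel, so the
// two channels cannot be processed independently of each other.
void mixStereo(const float *left, const float *right,
               float *outLeft, float *outRight,
               std::size_t frames, float crossAmount)
{
    for (std::size_t i = 0; i < frames; ++i)
    {
        outLeft[i]  = left[i]  + crossAmount * right[i];
        outRight[i] = right[i] + crossAmount * left[i];
    }
}
```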
Then there is AudioUnitRender(), which doesn't seem to be intended for
plug-ins (AUEffectBase-type Audio Units). To add to the confusion, I
have also come across Render(), DoRender(), DoRenderBus(), and
DoRenderSlice(). Could somebody who understands the differences and
their uses help me out, please?
Thank you in advance!!!
Aristotel Digenis wrote:
Hello again,
As I have been using the SDK's SampleEffectUnit example to learn AUs
and adapt it to my needs, I have noticed that the Process() function
is declared within a class nested inside the SampleEffectUnit class, as
shown below.
class SampleEffectUnit : public AUEffectBase
{
public:
    SampleEffectUnit(AudioUnit component);

    virtual AUKernelBase *NewKernel() { return new SampleEffectKernel(this); }

    // (argument lists abbreviated in this excerpt)
    virtual ComponentResult GetParameterValueStrings();
    virtual ComponentResult GetParameterInfo();
    virtual ComponentResult GetPropertyInfo();
    virtual ComponentResult GetProperty();
    virtual ComponentResult GetPresets() const;
    virtual OSStatus NewFactoryPresetSet();
    virtual ComponentResult Version() { return kSampleEffectUnitVersion; }

protected:
    class SampleEffectKernel : public AUKernelBase
    {
    public:
        SampleEffectKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit) {}

        virtual void Process();
        virtual void Reset();
    };
};
I understand the purpose of AUKernelBase, but then I saw the source
code of AirySynth, where the Process() function is not declared inside a
separate AUKernelBase subclass. Instead, the Process() function is
declared inside the AirySynth class itself. I tried to move the
Process() function of my plug-in out of the SampleEffectKernel class and
into the SampleEffectUnit class, but this results in no processing
taking place. Does the fact that AirySynth is a MusicDevice make a
difference? Do MusicDevices have a different processing structure from
effect units?
Also, I use SetParameter() calls in my constructor to set the
initial values of my sliders. This works fine, and the sliders do have
an effect on the processing. Having read the API documentation, I
thought it would be better to use Initialize() to set these parameters,
so I declared the Initialize() function after the SampleEffectUnit
constructor and moved the SetParameter() calls inside Initialize().
This compiles, and the sliders/parameters are set just fine. However,
when the sliders are changed, the processing is no longer affected.
Did I misunderstand what the Initialize() function is for?
Thank you in advance...
--
Aristotel Digenis
email@hidden
http://www.digenis.ws
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.