Re: AudioUnitKernal constructor called twice...
- Subject: Re: AudioUnitKernal constructor called twice...
- From: Brian Willoughby <email@hidden>
- Date: Sat, 20 Aug 2011 13:49:18 -0700
Simon,
I did a quick scan of my AudioUnit sources, but didn't find anything
right away. At the very least, a simple hack would suffice: In your
initialization routine, just step through the array of kernel objects
and send them a message with their index. You'll have to create a
kernel instance variable to hold the index, and you'll have to make
sure to call your initialization after the kernels are created
(obviously, it won't work before they're created). I recommend doing
a code review of AUEffectBase.cpp and .h to see what Apple is doing,
and that might even give you a better idea than the hack I just
suggested.
By the way, while looking around in my code, I did find one AU that
happens to convert between stereo (left, right) and mid+side encoding
(among other things). For this AU, I subclassed AUBase instead of
AUEffectBase, and created a RenderStereo() method for my unit
subclass to use. The goal here was to create a sibling of
AUEffectBase which could be reused for more than one stereo AU. I
can't recall whether I actually ever reused that class, but the idea
seemed to work just fine.
My only caveat is that you should make sure you really want to limit
yourself to stereo. Most "stereo" effects can actually be reduced to
multi-mono effect coding with clever per-channel parameters and
identical code. Mid-side seems largely tied to stereo, but in the
broader sense I might have actually coded this effect for Ambisonic
encoding, of which mid-side is merely a subset. In other words,
there's almost always a channel-count-agnostic implementation of
every "stereo" effect.
Brian Willoughby
Sound Consulting
On Aug 19, 2011, at 04:29, email@hidden wrote:
That is an extremely good point. For me (and possibly others in a
similar situation), it all rests on accessing the kernel index from
within the kernel. If anyone knows how to do this, I would love to
know!
On 19 Aug 2011, at 09:06, Brian Willoughby <email@hidden>
wrote:
On Aug 19, 2011, at 00:34, email@hidden wrote:
For processing audio on two channels in the same function (stereo
panning, for example), it seems overriding ProcessBufferLists()
is the way to go.
The most important thing you can do if you want stereo processing
is to either work within the kernel system defined by
AUEffectBase, or simply avoid that class and use AUBase instead.
In other words, if you're considering overriding
ProcessBufferLists(), then you should really consider using
RenderBus() in a subclass of AUBase.
Note that it isn't so difficult to work with kernels, even if you
want to process each channel differently. Stereo panning is
nothing more than simple gain processing where one channel's
volume is inverted with respect to the other. There are a couple
of potential solutions:
A) You can handle this by storing the pan position in the master
object and then use the kernel index to decide whether to use the
pan directly or inversely to set the gain. Off the top of my
head, though, I'm not sure how easy it is for a kernel object to
determine its own index, but I seem to recall doing so in the
past. The bulk of your kernel code would be generic, except for
the initial determination of gain based on channel index.
B) A totally different solution would be to have a gain parameter
in the kernel objects, such that each channel has an independent
gain. Then, when your master object receives a pan position
change, your code can calculate two gains (or any number of gains)
and store each independent gain value in the appropriate kernel
object. Thus, when the kernel executes, it simply applies the
local gain to an individual channel.
The advantage of working within the kernel system is that you
maintain all of the carefully designed behavior that Apple's
CoreAudio team has developed within the AUEffectBase class. Do
not underestimate the advantages of having your plugin
automatically work with quad, 5.1 and other surround formats
beyond stereo. You also avoid bucking the standard of non-
interleaved audio data, i.e., you're not trying to go back to the
deprecated interleaved data scheme.
There are certainly multi-channel effects where kernel coding
might prove too difficult, thus tipping the scale towards working
under AUBase directly. However, with careful placement of state
variables in the global object and kernel objects, as
appropriate, I suspect that you'd be hard pressed to come up with
an example that would not work.
_______________________________________________
Coreaudio-api mailing list (email@hidden)