Re: AU Parts and Groups
- Subject: Re: AU Parts and Groups
- From: Stephen Blinkhorn <email@hidden>
- Date: Mon, 15 Jan 2007 15:04:58 +0000
Thanks for the reply Ian.
Multitimbral? My synth is monotimbral in the sense that I only
provide one instance per plugin, and it doesn't gain much from
being multitimbral aside from the convenience of managing multiple
plugin windows (it's a hard life for today's electronic musicians).
However, I do split the keyboard into multiple zones, and each zone
renders a (potentially) unique DSP process. The user can choose at
any time which DSP process runs within a zone, and a small set of
parameters is provided to control the important aspects of the DSP.
Really each zone is a mini-synth in its own right. It is blatantly
multitimbral in a hardware sense but there is no layering. Managing
parameters looks like the hardest part of this project. When a user
changes the synth running in a zone I need to update the parameters
to match in the UI. Probably not too difficult.
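Something like this is what I have in mind (an untested sketch
against the C++ AU SDK; MySynth, mZoneProcess and ProcessID are
invented names, but PropertyChanged() is the real AUBase hook for
telling views to re-query the parameter list):

    #include "AUInstrumentBase.h"   // CoreAudio SDK

    // Member of my (hypothetical) AUInstrumentBase subclass.
    void MySynth::SetZoneProcess(UInt32 inZone, ProcessID inProcess)
    {
        mZoneProcess[inZone] = inProcess;  // swap the zone's DSP module

        // The published parameter set just changed, so nudge any views;
        // the generic view rebuilds its controls on this notification.
        PropertyChanged(kAudioUnitProperty_ParameterList,
                        kAudioUnitScope_Global, 0);
    }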
I'm going to complete my simplified test model using global-scope
parameters, but my instincts are telling me I should at least try a
part-based solution - maybe it's overkill though?
thanks again, Stephen.
On 12 Jan 2007, at 21:16, Ian Kemmish wrote:
On 12 Jan 2007, at 8:02 pm, Stephen Blinkhorn
<email@hidden> wrote:
Let's say the plugin can simultaneously play 3 different voices, but
the user has the run-time choice of which module each voice plays, so
you could have, for example, 3 sine waves, or 2 pulse waves and 1
noise, or just 3 noise modules, etc. - each mapped onto its own
octave, say.
...
I came quite close to implementing this by adding to the SinSynth
project and keeping all parameters in the global scope. But it felt
wrong, and it seems more appropriate to use separate parts for each
voice/module, assigned to a single group. Is this a good scheme to
follow?
First, the advice Apple's folks will give you anyway. Read
GroupsPartsAndMultitimbrality.rtf. Several times. Draw diagrams.
Second, you are perfectly correct to identify groups with MIDI
channels.
Now, if your synth is NOT multi-timbral, you may want to stop right
where you are. If only one program is playing at a time, then
putting all the parameters on global scope, even with layered
voices, seems acceptable.
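For the global-scope route, the usual shape looks like this (a sketch
only: the zone/process parameter and its range are invented, but the
GetParameterInfo() pattern is the standard one from the SDK's
SinSynth-style examples, and FillInParameterName() is the real AUBase
helper):

    // One "process selector" parameter per zone, all on global scope.
    enum { kParam_Zone1Process = 0, kNumProcesses = 4 };

    ComponentResult MySynth::GetParameterInfo(AudioUnitScope inScope,
                                    AudioUnitParameterID inID,
                                    AudioUnitParameterInfo &outInfo)
    {
        if (inScope != kAudioUnitScope_Global)
            return kAudioUnitErr_InvalidScope;

        outInfo.flags = kAudioUnitParameterFlag_IsReadable
                      | kAudioUnitParameterFlag_IsWritable;

        switch (inID) {
        case kParam_Zone1Process:
            AUBase::FillInParameterName(outInfo,
                                        CFSTR("Zone 1 Process"), false);
            outInfo.unit         = kAudioUnitParameterUnit_Indexed;
            outInfo.minValue     = 0;
            outInfo.maxValue     = kNumProcesses - 1;
            outInfo.defaultValue = 0;
            break;
        default:
            return kAudioUnitErr_InvalidParameter;
        }
        return noErr;
    }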
If your synth is multi-timbral, read on....
It depends what you mean by "run time". The AU API is quite
comfortable with synths where splitting and layering is a
heavyweight UI operation. (The model I kept in mind while reading
the above, rightly or wrongly, was something like a Sequential
SixTrak....)
(My synth - a research project, www.fdsynthesis.com - was an attempt
to write something that wasn't MIDI-centric and didn't suffer from
the usual music software syndrome of "you can have 32 of this or 64
of that". My soak-test program, for example, is a pipe organ with
22 layers and 18 published performance parameters, each parameter
in this case being a stop.)
There are (at least) three ways to deal with parts and groups.
1) Exactly one part per group. The part publishes the performance
parameters appropriate to whatever (multi layered) voice is
currently selected. When the user hits a program change, any
currently sounding notes are terminated (as on most real synths),
and the voice (and all its published parameters) is replaced in that
part.
Note that you'll need to cope with host apps that don't bother to
look for published part parameters. I deal with this by having
stuff in my UI that binds MIDI controllers to parameters. The MIDI
controllers, of course, appear in group scope, not part scope.
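(For what it's worth, "looking for published part parameters" on the
host side amounts to a query like this - a sketch with error handling
elided; synth is the opened AudioUnit and partID whichever part
element the host is inspecting:)

    // Needs <AudioUnit/AudioUnit.h> and <stdlib.h>.
    UInt32 size = 0;
    AudioUnitGetPropertyInfo(synth, kAudioUnitProperty_ParameterList,
                             kAudioUnitScope_Part, partID, &size, NULL);

    UInt32 count = size / sizeof(AudioUnitParameterID);
    AudioUnitParameterID *ids = (AudioUnitParameterID *)malloc(size);
    AudioUnitGetProperty(synth, kAudioUnitProperty_ParameterList,
                         kAudioUnitScope_Part, partID, ids, &size);
    // ...then fetch kAudioUnitProperty_ParameterInfo for each entry
    // in ids[] to build the controls.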
[ENTER GRIPE MODE. Support for per-note performance parameters is
even worse. The only one you can use is MIDI polyphonic pressure,
and even that won't work if you're using the StartNote() API with
fractional note numbers. This could have been fixed so simply by
defining an extra "note scope", whose element IDs are just the note
signatures returned by StartNote(). EXIT GRIPE MODE.]
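(For reference, the StartNote() call in question - the API and
constants are real, from MusicDevice.h; the values are made up, and
synth is assumed to be your opened MusicDevice instance:)

    MusicDeviceGroupID groupID = 0;   // the group you're playing into

    MusicDeviceNoteParams params;
    params.argCount  = 2;        // just pitch + velocity, no controls
    params.mPitch    = 60.5f;    // a quarter tone above middle C - no
    params.mVelocity = 100.0f;   // integer MIDI note number for this

    NoteInstanceID noteID;       // the "note signature" mentioned above
    MusicDeviceStartNote(synth, kMusicNoteEvent_UseGroupInstrument,
                         groupID, &noteID, 0 /*offset frame*/, &params);
    // ...and later:
    MusicDeviceStopNote(synth, groupID, noteID, 0);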
2) Multiple parts per group, with the parts assigned to groups from
your synth's UI. This could either be one part per layer, or if
you wanted to avoid having notes terminated by program change, one
part per voice with the parts used in round-robin fashion. To be
honest, I can't really think of any situations where this setup has
compelling advantages over 1) above.
The idea here is that changing the part-to-group mapping is a
heavyweight operation, and the host may choose to reset everything
when it changes (if I've correctly understood William Stewart's
various responses to me).
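(If I'm remembering the headers correctly, the mapping itself is done
with kMusicDeviceProperty_PartGroup - scope Part, element = the part,
value = the group it belongs to - so reassignment looks something
like this; do check AudioUnitProperties.h before trusting me:)

    UInt32 groupID = 0;   // the group this part should now feed
    AudioUnitSetProperty(synth, kMusicDeviceProperty_PartGroup,
                         kAudioUnitScope_Part, partID,
                         &groupID, sizeof(groupID));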
3) Completely dynamic, the way I did it. In my synth, a part is
simply the intersection of a group and a program. The first note
you play after a program change causes a new part to be allocated,
and parts are garbage collected after the last note using that
program expires. DO NOT DO IT THIS WAY. Apple have commented
"this isn't the way we meant it to be used" and William Stewart
believes I can't write a memory manager good enough to avoid the
risk of audio dropouts when I play a note. He may well be right :-)
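(If you were determined to try it anyway, the memory manager would
have to be a preallocated pool that never calls malloc() on the
render thread - a toy sketch, all names invented:)

    // A fixed pool of parts, threaded onto a free list at init time,
    // so note-on never allocates.
    struct Part { Part *next; /* ...voice and program state... */ };

    enum { kMaxParts = 64 };
    static Part  sPartPool[kMaxParts];
    static Part *sFreeList = NULL;   // built from sPartPool at init

    static Part *AllocPart()         // first note of a new group/program
    {
        Part *p = sFreeList;
        if (p) sFreeList = p->next;  // O(1); safe without locks only if
        return p;                    // just the render thread touches it
    }

    static void FreePart(Part *p)    // last note using the part expired
    {
        p->next   = sFreeList;
        sFreeList = p;
    }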
I hope this helps. Much of the complexity in this topic appears to
stem from the fact that the API tries to avoid being too
prescriptive about the synth's architecture, but at the same time,
some architectures fit it a lot better than others.
I strongly urge you to stick with one part per group, at least to
begin with.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Ian Kemmish                18 Durham Close, Biggleswade, Beds SG18 8HZ
email@hidden               Tel: +44 1767 601361    Mob: +44 7952 854387
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -