Re: Multitimbral Music Devices - Question and Proposal
- Subject: Re: Multitimbral Music Devices - Question and Proposal
- From: James Coker <email@hidden>
- Date: Tue, 15 Jul 2003 13:03:52 -0600
On Tuesday, July 15, 2003, at 10:51 AM, Urs Heckmann wrote:
>> Regarding Audio Units hosting Audio Units: there should be no need
>> for that. Or how many layers of Audio Units inside Audio Units do
>> you want to support?
> I never meant it that way (look at www.five12.com for Numerology).
> I just meant to propose a way to handle flexible, heterogeneous
> modular units, no matter what it turns out to be good for at the
> current point of knowledge. Just as an option. (Maybe James C &
> James McC support me here?)
>> Numerology is just one example. But think about NI Reaktor. What
>> is an Element in Reaktor? Can you identify that? You talk about a
>> solution for a special purpose which is missing the
>> "extensibility". But to say it again, it is not the task of Audio
>> Units to provide an interface for a problem like that.
As I work on integrating AU support into Numerology (beta soon...),
I've run into the Multi-Timbral issue regarding the DLS Music Device.
In Numerology, plugins and modules are organized into groups, very
similar to tracks/channels in Logic or DP, that clearly represent
different parts, so the notion of putting a 16-channel Multi-Timbral
synth in a group is really a bit silly -- 3 or 4 parts maybe, but
not 16.
OTOH, it is often nice to layer different sounds to make a single
'part', and it is also often very useful to use a single sequence to
control a plugin with multiple outputs -- drum machines in
particular, where the plugin has multiple outputs that are fed into
different destinations for individual processing. These are both
primary characteristics of Multi-Timbral synths: different sounds
that can be layered or sent to different outputs.
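(Host-side aside: discovering those separate outs is just a property
query. A minimal sketch -- function name is mine, error handling
omitted:)

#include <AudioUnit/AudioUnit.h>

/* Ask an already-opened Audio Unit how many output elements
   (buses) it publishes -- e.g. the individual outs of a drum
   machine. */
UInt32 CountOutputBuses(AudioUnit unit)
{
    UInt32 busCount = 0;
    UInt32 size = sizeof(busCount);
    AudioUnitGetProperty(unit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Output, 0,
                         &busCount, &size);
    return busCount;
}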
Although my personal preference is for mono-timbral plugins only,
this is not a simple issue. In addition to the multi-timbral plugins
already on the market, and the clear pressure on Urs to provide that
feature set, there is the DLS Music Device itself -- Multi-Timbral,
but with only a pair of mixed outputs (one w/ reverb). My expectation
as a developer of an AU host is that I will just have to deal with
it.
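For what it's worth, "dealing with it" mostly means addressing parts
by MIDI channel. A sketch using the circa-2003 Component Manager
calls (function name is mine; error handling, teardown, and wiring
the synth to an output unit all omitted):

#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/MusicDevice.h>

/* Open Apple's DLS synth and address two different parts by
   MIDI channel. */
void PlayTwoParts(void)
{
    ComponentDescription desc = { kAudioUnitType_MusicDevice,
                                  kAudioUnitSubType_DLSSynth,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit synth = 0;

    OpenAComponent(FindNextComponent(NULL, &desc), &synth);
    AudioUnitInitialize(synth);

    /* 0x90 = note-on, channel 1; 0x99 = note-on, channel 10. */
    MusicDeviceMIDIEvent(synth, 0x90, 60, 100, 0); /* part 1: middle C */
    MusicDeviceMIDIEvent(synth, 0x99, 36, 100, 0); /* part 10: drums   */
}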
I think Urs' proposal is potentially very elegant, but I haven't
worked with enough plugins yet to make a definitive call -- I guess
it is up to the CoreAudio team to do that.
> I can't comment much on Reaktor. IIRC you set up patches in a
> stand-alone app that can afterwards be loaded inside a plugin, and
> thus should propagate parameters accordingly. Dunno. If it works
> this way, there's no more modularity in the plugin itself. If the
> plugin itself allows for building patches, then it comes down to
> dynamically changing parameter semantics, which is a valid task in
> AU anyway (notifying a property change for the ParameterList).
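That notification hook exists today, FWIW -- here's roughly what the
host side of it looks like. A sketch (function names are mine, no
error handling):

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* Fires whenever the unit reports a property change; we only
   care about the parameter list here. */
static void ParamListChanged(void *refCon, AudioUnit unit,
                             AudioUnitPropertyID propID,
                             AudioUnitScope scope,
                             AudioUnitElement element)
{
    UInt32 size = 0;
    AudioUnitParameterID *ids;

    if (propID != kAudioUnitProperty_ParameterList) return;

    /* Re-fetch the (possibly resized) list of parameter IDs. */
    AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_ParameterList,
                             scope, element, &size, NULL);
    ids = (AudioUnitParameterID *) malloc(size);
    AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterList,
                         scope, element, ids, &size);
    /* ...rebuild the host's parameter views from ids... */
    free(ids);
}

/* Call once after opening the unit. */
void WatchParameterList(AudioUnit unit, void *hostState)
{
    AudioUnitAddPropertyListener(unit,
                                 kAudioUnitProperty_ParameterList,
                                 ParamListChanged, hostState);
}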
> My example goes for stuff that acts in parallel, like different
> plugins, but which may "somehow" need to interact. Numerology is a
> good example (though not a Music Device in the common sense, of
> course), because the modules work stand-alone and each fulfill a
> task, but can at the same time interact. This wouldn't work if each
> module was a separate plugin.
> (Commonly something like this is done via ReWire. But I don't even
> dare to apply for the license, you know. And I don't want to beg
> and sign NDAs and stuff. Hence I vote for a ReWire-free solution
> that lets small developers do such stuff properly.)
> Another example: think of the Ensoniq DP4 effects processor. Mine
> is broken, and that's why I wrote MFM (not that these compare...).
> Imagine a 4-way multi-effect where each single effect could load
> any preset. By having 4 effects residing in one, you could connect
> them with delays, just like in the DP4. I loved this feature.
I could certainly see a large plugin hosting its own plugins --
though I've no plans to do that myself. Of course, having an AU
Output Unit that can be driven by an Audio Unit running in a separate
AUGraph or AU Host would solve that issue....
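Sketch of what I mean, with what's already in the API: a render
callback on the output unit that just pulls from a unit owned
elsewhere (names are mine, no error handling -- whether this is
kosher across graphs is exactly the open question):

#include <AudioUnit/AudioUnit.h>

/* Render callback installed on an output unit: every time the
   output unit needs audio, pull it from a unit that lives in
   some other graph or host. */
static OSStatus PullFromElsewhere(void *refCon,
                                  AudioUnitRenderActionFlags *flags,
                                  const AudioTimeStamp *timeStamp,
                                  UInt32 bus, UInt32 frames,
                                  AudioBufferList *ioData)
{
    AudioUnit other = (AudioUnit) refCon;
    return AudioUnitRender(other, flags, timeStamp, 0, frames, ioData);
}

void DriveOutputFrom(AudioUnit outputUnit, AudioUnit other)
{
    AURenderCallbackStruct cb = { PullFromElsewhere, other };
    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));
}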
>>> Ergo: better no multitimbral Audio Units, IMHO. Get away from
>>> "multitimbral". Let's call them "extensible", "multiparted" or
>>> anything.
>> We might call it however you please. But doing so, the doors are
>> open for extensive misuse. And it will make a verification suite
>> for Audio Units more difficult to do, if not even impossible...
> Heh, we're just talking about the proper addressing of Elements in
> the Global Scope. It would be inconsistent IMHO if all Elements
> shared the same set of parameters while parameters are bound to
> Elements. Currently, automation (afaik) only works on Element 0
> parameters. All I'm talking about is: take the whole step and set
> in stone that each Element can have its own list of parameters, and
> do whatever is needed to make this useful -- like, if parameters go
> multi-Element, presets should do so as well.
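As I read the proposal, the host-side walk would look something like
this (a sketch, names mine; afaik today's units only answer these
queries for Element 0):

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* Walk every Element in the Global Scope and read each one's
   own parameter list -- the behavior Urs wants set in stone. */
void WalkGlobalElements(AudioUnit unit)
{
    UInt32 elemCount = 1, size = sizeof(elemCount);
    AudioUnitElement elem;

    AudioUnitGetProperty(unit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Global, 0, &elemCount, &size);

    for (elem = 0; elem < elemCount; ++elem) {
        UInt32 listSize = 0, n, i;
        AudioUnitParameterID *ids;

        AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_ParameterList,
                                 kAudioUnitScope_Global, elem,
                                 &listSize, NULL);
        ids = (AudioUnitParameterID *) malloc(listSize);
        AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterList,
                             kAudioUnitScope_Global, elem,
                             ids, &listSize);

        n = listSize / sizeof(AudioUnitParameterID);
        for (i = 0; i < n; ++i) {
            Float32 value = 0;
            AudioUnitGetParameter(unit, ids[i],
                                  kAudioUnitScope_Global, elem, &value);
            /* ...expose (elem, ids[i], value) in the host UI... */
        }
        free(ids);
    }
}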
Hmm, this is where I start to get a bit nervous. I already don't
like it that plugins can juggle their parameters at any time (though
I understand the reasoning). Adding too much complexity to the spec
makes it harder to make interesting hosts. The more an AU expects
from its host, the more the hosts end up being forced into
homogeneity -- or forced to not support certain features.

For instance, how would Final Cut Pro or Live be expected to deal
with multi-timbral plugins? Perhaps there should be a clear spec on
how multi-timbral plugins should behave, with the expectation that a
mono-timbral mode will always be available for hosts where
multi-timbral operation is specified as discouraged or not available.
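FWIW, a host can already sniff for multi-timbrality through the
Group scope and fall back accordingly -- something like this (the
property query is real; the mono-timbral policy is my invention):

#include <AudioUnit/AudioUnit.h>

/* How many "parts" (MIDI groups) does this music device claim?
   A cautious host could insist on exactly one -- i.e. use the
   unit mono-timbrally -- and skip anything bigger. */
Boolean HostCanUseMonoTimbrally(AudioUnit unit)
{
    UInt32 groups = 1, size = sizeof(groups);
    OSStatus err = AudioUnitGetProperty(unit,
                                        kAudioUnitProperty_ElementCount,
                                        kAudioUnitScope_Group, 0,
                                        &groups, &size);
    /* No Group scope at all => effectively mono-timbral. */
    return (err != noErr) || (groups <= 1);
}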
Jim
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.