Re: Multitimbral Music Devices - Question and Proposal
- Subject: Re: Multitimbral Music Devices - Question and Proposal
- From: Urs Heckmann <email@hidden>
- Date: Tue, 15 Jul 2003 18:51:53 +0200
On Tuesday, 15.07.03, at 18:15 (Europe/Berlin), Frank Hoffmann wrote:
On Tuesday, July 15, 2003, at 04:47, Urs Heckmann wrote:
On Tuesday, 15.07.03, at 16:26 (Europe/Berlin), Frank Hoffmann wrote:
Urs, sorry to insist, but multitimbral Audio Units just add another
layer of unneeded conceptual overhead. There is no need for it, and
you can see how much confusion it creates with VST. There is still no
clear concept of how to handle the scenario, neither on the host side
nor on the client side. They just shouldn't exist. The mistake was to
allow multitimbral plugins in the first place.
The concept of multitimbrality comes from hardware synthesizers, where
it made sense for cost reasons. But in the virtual studio there is
simply no need for something like this. If somebody wants to simulate
a virtual instrument with, say, a drum machine and a synthesis part,
why shouldn't he create one Audio Unit for the drum machine and one
for the synthesis part? There is no disadvantage, and it buys you a
lot more flexibility. Of course the virtual counterpart wouldn't look
exactly like the original anymore. But that point is moot; you also
can't simulate the feeling of actually touching the keys this way.
I agree.
But being a tech-punk, I like options.
Yes, I tend to think in the same direction. But we should focus on
user experience, and this is where the pain starts. Too many options
can sometimes be a bigger problem than a clear and simple rule.
Haha, yes. But the opposite is true as well: too few options lead to
hassle (see multitimbral softsynths today).
People come up with unusual solutions. In this case, I bet it's hard to
prevent some people from doing so. Offering consistency would be
better. Otherwise one should say "No. Preset/Parameter-based Music
Devices must not be multitimbral." Bill?
Regarding Audio Units hosting Audio Units: there should be no need
for that. Or how many layers of Audio Units inside Audio Units do you
want to support?
I never meant it that way (look at www.five12.com for Numerology). I
just meant to propose a way to handle flexible, heterogeneous modular
units, no matter what it's good for at the current point of
knowledge. Just as an option. (Maybe James C & James McC support me
here?)
Numerology is just one example. But think about NI Reaktor. What is an
Element in Reaktor? Can you identify that? You talk about a solution
for a special purpose that lacks extensibility. But to say it again,
it is not the task of Audio Units to provide an interface for a
problem like that.
I can't comment much on Reaktor. IIRC you set up patches in a
standalone app that can afterwards be loaded inside a plugin, which
should then propagate parameters accordingly. Dunno. If it works this
way, there's no more modularity in the plugin itself. If the plugin
itself allows for building patches, then it comes down to dynamically
changing parameter semantics, which is a valid task in AU anyway
(notifying a property change for the ParameterList).
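To be concrete, I think the plumbing for this already exists: the AU
announces that its parameter list changed, and the host re-reads it.
A minimal host-side sketch of what I mean (MyParamListListener and
WatchParameterList are made-up names, of course; the AU side would
just fire the corresponding property notification):

#include <AudioUnit/AudioUnit.h>

/* Host side: get told when an AU rebuilds its parameter list,
   e.g. after the user re-patches something inside the plugin. */
static void MyParamListListener(void *inRefCon,
                                AudioUnit inUnit,
                                AudioUnitPropertyID inID,
                                AudioUnitScope inScope,
                                AudioUnitElement inElement)
{
    if (inID == kAudioUnitProperty_ParameterList) {
        /* re-query kAudioUnitProperty_ParameterList for that
           scope/element and rebuild automation lanes, generic UI, ... */
    }
}

static OSStatus WatchParameterList(AudioUnit unit, void *hostState)
{
    return AudioUnitAddPropertyListener(unit,
                                        kAudioUnitProperty_ParameterList,
                                        MyParamListListener,
                                        hostState);
}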
My example goes for stuff that acts in parallel, like different
plugins, but which may "somehow" need to interact. Numerology is a good
example (though not a Music Device in the common sense, of course),
because the modules work on their own and fulfill a task, but can
interact at the same time. This wouldn't work if each module were a
separate plugin.
(Commonly something like this is done via ReWire. But I don't even dare
to apply for the license, you know, and I don't want to beg and sign
NDAs and stuff. Hence I vote for a ReWire-free solution that lets the
small developers do such stuff properly.)
Another example: think of the Ensoniq DP4 effects processor. Mine is
broken, and that's why I wrote MFM (not that these compare...). Imagine
a 4-way multi-effect where each single effect could load any preset. By
having 4 effects residing in one unit, you could connect them with
delays, just like in the DP4. I loved this feature.
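To make that concrete: such a 4-way box could simply expose its parts
as Elements 0..3 of the Global Scope, and the host addresses the part
explicitly whenever it touches a parameter. A rough sketch
(kMyDelayTimeParam and SetDelayOnPart are made up for illustration):

#include <AudioUnit/AudioUnit.h>

/* Hypothetical 4-part multi-effect: one AU, the four effects
   addressed as Elements 0..3 of the Global Scope. */
enum { kMyDelayTimeParam = 1 };   /* made-up parameter ID */

static OSStatus SetDelayOnPart(AudioUnit unit,
                               AudioUnitElement part,   /* 0..3 */
                               Float32 seconds)
{
    return AudioUnitSetParameter(unit,
                                 kMyDelayTimeParam,
                                 kAudioUnitScope_Global,
                                 part,      /* which of the 4 effects */
                                 seconds,
                                 0);        /* buffer offset in frames */
}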
Ergo: better no multitimbral Audio Units IMHO.
Get away from "multitimbral". Let's call them "extensible",
"multiparted" or anything.
We may call it whatever you please. But doing so opens the doors to
extensive misuse. And it will make a verification suite for Audio
Units more difficult to do, if not impossible...
Hey, we're just talking about the proper addressing of Elements in the
Global Scope.
It would be inconsistent IMHO if all Elements had to share the same set
of parameters, while parameters are bound to Elements anyway.
Currently, automation (afaik) only works on Element 0 parameters.
All I'm talking about is: go the whole way and set in stone that each
Element can have its own list of parameters, and do whatever is needed
to make this useful. Like, if parameters go multi-Element, presets
should do so as well.
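For a host that boils down to asking for kAudioUnitProperty_ParameterList
per Element instead of only for Element 0. A sketch of what I mean
(error handling left out; DumpElementParameters is a made-up name):

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* Enumerate the parameters of one particular Element in the Global
   Scope -- what a host would do if parameter lists went multi-Element. */
static void DumpElementParameters(AudioUnit unit, AudioUnitElement element)
{
    UInt32 size = 0;
    if (AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_ParameterList,
                                 kAudioUnitScope_Global, element,
                                 &size, NULL) != noErr || size == 0)
        return;

    AudioUnitParameterID *ids = (AudioUnitParameterID *)malloc(size);
    UInt32 count = size / sizeof(AudioUnitParameterID);

    if (AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterList,
                             kAudioUnitScope_Global, element,
                             ids, &size) == noErr) {
        for (UInt32 i = 0; i < count; ++i) {
            AudioUnitParameterInfo info;
            UInt32 infoSize = sizeof(info);
            /* Note: for kAudioUnitProperty_ParameterInfo the element
               slot carries the parameter ID -- which is exactly why
               per-Element parameters need to be pinned down in the spec. */
            AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterInfo,
                                 kAudioUnitScope_Global, ids[i],
                                 &info, &infoSize);
            /* build an automation lane / generic UI entry here */
        }
    }
    free(ids);
}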
However, I have a headache today (listened to customers yesterday
night), so maybe my brain is in a strange configuration today..
PS: I would like to suggest that you try to persuade your "sometimes
imminent rulings" to alter the way of the pure discipline again... ;)
Not necessary. I think they lurk here.
The devil itself lurks behind a pretty face sometimes... ;)
Yeah, I like the look of Live very much 8-)) (sorry, couldn't resist.
Where's my NFR?)
Cheers,
;) Urs
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.