Re: Multitimbral Music Devices - Question and Proposal
- Subject: Re: Multitimbral Music Devices - Question and Proposal
- From: Frank Hoffmann <email@hidden>
- Date: Wed, 16 Jul 2003 10:18:23 +0200
I see it the other way round. If I got Christopher's statement right, he
suggests that multitimbral instruments give developers more flexibility.
They don't. By allowing structure where it simply doesn't belong, they
limit the way Audio Units can be used. Composing, structuring or grouping
AUs is the job of the host or the AUGraph. It does _not_ belong in the
Audio Unit.
The DLS Synth is a very useful device, but I suppose its design was not
really done with Audio Units in mind. There would also be no real
disadvantage if Apple had designed the DLS Synth monotimbral and perhaps
provided a utility to build an AU sub-graph for the usual requirements.
The DLS Synth doesn't feature a UI (AFAIK), so limiting its usage in a
host is no big deal. No harm is done.
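To make that concrete: such a host-side sub-graph is only a handful of
AUGraph calls. A minimal sketch (using the present-day AUGraphAddNode
spelling, with no error handling; the two-synth layout is purely
illustrative, not the DLS Synth's actual design):

#include <AudioToolbox/AudioToolbox.h>

// Helper: add one Apple-manufactured node of the given type/subtype.
static AUNode AddAppleNode(AUGraph graph, OSType type, OSType subType)
{
    AudioComponentDescription desc = { type, subType,
                                       kAudioUnitManufacturer_Apple, 0, 0 };
    AUNode node = 0;
    AUGraphAddNode(graph, &desc, &node);
    return node;
}

void BuildSubGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    // Two monotimbral instrument nodes, grouped by the host, not by the AU.
    AUNode synthA = AddAppleNode(graph, kAudioUnitType_MusicDevice,
                                 kAudioUnitSubType_DLSSynth);
    AUNode synthB = AddAppleNode(graph, kAudioUnitType_MusicDevice,
                                 kAudioUnitSubType_DLSSynth);
    AUNode mixer  = AddAppleNode(graph, kAudioUnitType_Mixer,
                                 kAudioUnitSubType_StereoMixer);
    AUNode output = AddAppleNode(graph, kAudioUnitType_Output,
                                 kAudioUnitSubType_DefaultOutput);

    AUGraphOpen(graph);

    // The grouping/structure lives entirely in the graph:
    AUGraphConnectNodeInput(graph, synthA, 0, mixer, 0);
    AUGraphConnectNodeInput(graph, synthB, 0, mixer, 1);
    AUGraphConnectNodeInput(graph, mixer,  0, output, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
}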
A host cannot realistically require support for only a subset of Audio
Units, say monotimbral ones. The UI of a multitimbral AU presents
controls for editing all parts of the instrument. What will users think
when they can edit parts of their AU that can't be used in the host?
They will complain, and the host developer is more or less forced to
support the feature. In the end this limits host development, because
hosts have to follow a path that has already been laid. Of course there
is always a way around it, but it's ugly.
Frank
On Wednesday, July 16, 2003, at 05:28 AM, Christopher Corbell wrote:
Not being a seasoned VST or audio plugin developer, I'm not sure I
understand all of the ramifications of this thread, so please pardon me
if I miss a point. However, I do use the DLS synth unit extensively in
my app and hope to add support for other MusicDevice units in the
future...
On Tuesday, July 15, 2003, at 07:26 AM, Frank Hoffmann wrote:
Urs, sorry to insist, but multitimbral Audio Units just add another
layer of unneeded conceptual overhead. There is no need for it, and you
can see how much confusion it creates with VST. There is still no clear
concept of how to handle the scenario, neither on the host side nor on
the client side. They just shouldn't exist. The mistake was to allow
multitimbral plugins in the first place.
The concept of multitimbrality comes from hardware synthesizers, where
it made sense for cost reasons. But in the virtual studio there is
simply no need for something like this. If somebody wants to simulate a
virtual instrument with, say, a drum machine and a synthesis part, why
shouldn't he create one Audio Unit for the drum machine and one for the
synthesis part? There is no disadvantage, and it buys you a lot more
flexibility. Of course the virtual counterfeit wouldn't look exactly
like the original anymore, but that point is moot: you also can't
simulate the feeling of actually touching the keys this way.
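To illustrate the point: once drum machine and synth are two separate
units, the host simply addresses each one directly. A minimal sketch,
where drumUnit and synthUnit are placeholder names for two
already-initialized monotimbral MusicDevice instances (fetched elsewhere,
e.g. via AUGraphNodeInfo):

#include <AudioToolbox/AudioToolbox.h>

void TriggerBoth(AudioUnit drumUnit, AudioUnit synthUnit)
{
    // Plain channel-0 note-ons; no multitimbral parts or groups to manage.
    MusicDeviceMIDIEvent(drumUnit,  0x90, 36, 100, 0);   // kick
    MusicDeviceMIDIEvent(synthUnit, 0x90, 60, 100, 0);   // middle C
}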
One thing I'm not clear on is how allowing multitimbral synths somehow
hurts the use case for a monotimbral synth unit, or for a multitimbral
synth unit used as if it were monotimbral. It seems to me that if you
want to create separate Audio Units for separate instruments, the
architecture allows that, and a host application can require that. Is
the implication that the prevalent use case for the plugin must be
imposed as a limitation on the entire framework architecture?
I've found the DLS Synth interface - including the list of available
instruments as a property, support for extended note events, etc. - to
be very convenient for my app's needs, especially as they integrate with
the MusicPlayer/MusicSequencer API. Again, if I'm misunderstanding the
gist of the thread please correct me, but wouldn't a strictly
monotimbral MusicDevice unit architecture require that a MusicSequence
create separate units, feeding into a mixer unit, in order to fully
handle instrument changes, etc.?
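Roughly what I have in mind is the following - a sketch only, assuming
the per-track synth nodes and the mixer wiring already exist in the
graph, with all error handling omitted:

#include <AudioToolbox/AudioToolbox.h>

// sequence: the MusicSequence being played; graph: an AUGraph that already
// contains one synth node per track plus the mixer/output wiring;
// synthNodes: one pre-built node per track (all of this is assumed here).
void RouteTracksToOwnSynths(MusicSequence sequence, AUGraph graph,
                            const AUNode *synthNodes)
{
    MusicSequenceSetAUGraph(sequence, graph);

    UInt32 trackCount = 0;
    MusicSequenceGetTrackCount(sequence, &trackCount);

    for (UInt32 i = 0; i < trackCount; i++) {
        MusicTrack track = NULL;
        MusicSequenceGetIndTrack(sequence, i, &track);
        MusicTrackSetDestNode(track, synthNodes[i]);  // one instrument per track
    }
}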
I'm not suggesting this out of laziness; in fact my app does create
separate DLS units for each track. But I'd hope that this level of
complexity wouldn't be forced on a developer who just wants to deliver
sequencing and DLS functionality - especially since QuickTime music is
no longer moving forward.
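For reference, the conveniences I mentioned above - the instrument list
exposed as a property and the extended note events - look roughly like
this. This is a sketch from my reading of the headers, not code lifted
from my app:

#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

void ShowDLSConveniences(AudioUnit dlsSynth)
{
    // The instrument list is exposed as a property on the global scope.
    UInt32 count = 0, size = sizeof(count);
    AudioUnitGetProperty(dlsSynth, kMusicDeviceProperty_InstrumentCount,
                         kAudioUnitScope_Global, 0, &count, &size);
    printf("DLS synth reports %u instruments\n", (unsigned)count);

    // Extended note event: pitch and velocity are Float32, so 60.5 is a
    // quarter tone above middle C - something a raw MIDI note-on can't say.
    NoteInstanceID note = 0;
    MusicDeviceNoteParams params;
    params.argCount  = 2;        // pitch + velocity
    params.mPitch    = 60.5f;
    params.mVelocity = 100.0f;
    MusicDeviceStartNote(dlsSynth, kMusicNoteEvent_UseGroupInstrument,
                         0 /* group */, &note, 0 /* sample offset */, &params);

    // ... and later, stop the same note instance:
    MusicDeviceStopNote(dlsSynth, 0 /* group */, note, 0);
}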
Regarding Audio Units hosting Audio Units: There should be no need
for that. Or how many layers of Audio Units inside Audio Units do you
want to support?
Isn't this basically what AUGraph is for? I'm completely naive about the
implications for plug-in development, but I think wrapping an AUGraph of
arbitrary complexity in a single AudioUnit would be a really powerful
feature. (If that's a stupid comment please just ignore it :-) )
I guess my point overall is that there's more to the requirements for
this architecture than just fitting in with existing/prevailing
host-plugin environments. It also needs to be usable and scalable by app
developers delivering unique (even if opaque) audio functionality, and I
feel the existing MusicDevice unit configuration does that pretty well.
- Christopher
--------------------------------------------------------------------------
frank hoffmann mailto: email@hidden
ableton ag
http://www.ableton.com
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.