Re: Multitimbral Music Devices - Question and Proposal
- Subject: Re: Multitimbral Music Devices - Question and Proposal
- From: Christopher Corbell <email@hidden>
- Date: Tue, 15 Jul 2003 20:28:29 -0700
Not being a seasoned VST or audio plugin developer, I'm not sure I understand all of the ramifications of this thread, so please pardon me if I miss a point. However, I do use the DLS synth unit extensively in my app and hope to add support for other MusicDevice units in the future...
On Tuesday, July 15, 2003, at 07:26 AM, Frank Hoffmann wrote:
> Urs, sorry to insist, but multitimbral Audio Units just add another layer of unneeded conceptual overhead. There is no need for that, and you can see how much confusion it creates with VST. And there is still no clear concept of how to handle the scenario, neither on the host nor on the client side. They just shouldn't. The mistake was to allow multitimbral plugins in the first place.
>
> The concept of multitimbrality comes from hardware synthesizers, where it made sense for cost reasons. But for the virtual studio there is simply no need for something like this. If somebody wants to simulate a virtual instrument with, say, a drum machine and a synthesis part, why shouldn't he create one Audio Unit for the drum machine and one for the synthesis part? There is no disadvantage, but it buys you a lot more flexibility. Of course the virtual counterfeit wouldn't look exactly like the original anymore. But this point is moot; you also can't simulate the feeling of actually touching the keys this way.
One thing I'm not clear on is how allowing multitimbral synths somehow hurts the use case for a monotimbral synth unit, or for a multitimbral synth unit used as if it were monotimbral. It seems to me that if you want to create separate Audio Units for separate instruments, the architecture allows that, and a host application can require it. Is the implication that the prevalent use case for the plugin must be imposed as a limitation on the entire framework architecture?
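
Just to illustrate what I mean by "used as if it were monotimbral" - and this is only a rough sketch on my part, assuming an already opened and initialized MusicDevice instance, nothing from the actual thread - a host can simply confine everything it sends to channel/group 0:

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>

/* Rough sketch: drive any MusicDevice monotimbrally by only ever
   addressing MIDI channel 0.  'synthUnit' is assumed to be an already
   opened and initialized MusicDevice instance. */
static void PlayMiddleC(MusicDeviceComponent synthUnit)
{
    /* 0x90 = note-on on channel 0; note 60 (middle C), velocity 100,
       applied at the start of the next render slice (offset 0). */
    MusicDeviceMIDIEvent(synthUnit, 0x90, 60, 100, 0);
}

As far as I can tell, a multitimbral unit addressed this way behaves no differently from a dedicated monotimbral one, so I don't see what the multitimbral capability takes away.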
I've found the DLS Synth interface - including the list of available instruments as a property, support for extended note events, etc. - to be very convenient for my app's needs, especially as they integrate with the MusicPlayer/MusicSequencer API. Again, if I'm misunderstanding the gist of the thread please correct me, but wouldn't a strictly monotimbral MusicDevice unit architecture require that a MusicSequence create separate units, feeding into a mixer unit, in order to fully handle instrument changes, etc.?
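
For what it's worth, here's roughly how my app uses those pieces - a sketch from memory with error checking omitted, and the exact property usage (in particular how instrument IDs are enumerated) may not be quite right:

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

/* Sketch: list the DLS synth's instruments via the MusicDevice properties,
   then start a note through the extended (Float32) note-event path. */
static void DumpInstrumentsAndPlay(MusicDeviceComponent synthUnit)
{
    UInt32 count = 0;
    UInt32 size = sizeof(count);
    AudioUnitGetProperty(synthUnit, kMusicDeviceProperty_InstrumentCount,
                         kAudioUnitScope_Global, 0, &count, &size);

    for (UInt32 i = 0; i < count; ++i) {
        MusicDeviceInstrumentID instID = 0;
        char name[256];

        /* Element i -> instrument ID, then instrument ID -> name
           (this is the enumeration pattern as I remember it). */
        size = sizeof(instID);
        AudioUnitGetProperty(synthUnit, kMusicDeviceProperty_InstrumentNumber,
                             kAudioUnitScope_Global, i, &instID, &size);

        size = sizeof(name);
        AudioUnitGetProperty(synthUnit, kMusicDeviceProperty_InstrumentName,
                             kAudioUnitScope_Global, instID, name, &size);

        printf("instrument %u: %s\n", (unsigned int)instID, name);
    }

    /* Extended note event: pitch and velocity are Float32s, so a pitch of
       60.5 (a quarter tone above middle C) is representable. */
    MusicDeviceNoteParams params;
    params.argCount  = 2;          /* just pitch + velocity */
    params.mPitch    = 60.5;
    params.mVelocity = 100.0;

    NoteInstanceID noteID = 0;
    MusicDeviceStartNote(synthUnit, kMusicNoteEvent_UseGroupInstrument,
                         0 /* group */, &noteID, 0 /* offset */, &params);
    /* ...and later: MusicDeviceStopNote(synthUnit, 0, noteID, 0); */
}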
I'm not suggesting this out of laziness; in fact, my app does create separate DLS units for each track. But I'd hope that this level of complexity wouldn't be forced on a developer who just wants to deliver sequencing and DLS functionality - especially since QuickTime music is no longer moving forward.
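
Concretely, the per-track setup I'm describing looks more or less like this - a sketch with error handling omitted, and I can't swear the call ordering below is the only valid one:

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>

/* Sketch: one DLS synth node per sequence track, all feeding a mixer,
   with each track routed to its own synth via MusicTrackSetDestNode. */
static void BuildPerTrackGraph(MusicSequence sequence)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription desc = { 0 };
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AUNode mixerNode, outputNode;
    desc.componentType    = kAudioUnitType_Mixer;
    desc.componentSubType = kAudioUnitSubType_StereoMixer;
    AUGraphAddNode(graph, &desc, &mixerNode);

    desc.componentType    = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_DefaultOutput;
    AUGraphAddNode(graph, &desc, &outputNode);

    AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);

    /* Associate the graph with the sequence before pointing tracks at
       individual nodes. */
    MusicSequenceSetAUGraph(sequence, graph);

    UInt32 trackCount = 0;
    MusicSequenceGetTrackCount(sequence, &trackCount);

    for (UInt32 i = 0; i < trackCount; ++i) {
        AUNode synthNode;
        desc.componentType    = kAudioUnitType_MusicDevice;
        desc.componentSubType = kAudioUnitSubType_DLSSynth;
        AUGraphAddNode(graph, &desc, &synthNode);

        /* Each synth feeds its own mixer input bus; for many tracks the
           mixer's input element count may need to be raised first. */
        AUGraphConnectNodeInput(graph, synthNode, 0, mixerNode, i);

        MusicTrack track;
        MusicSequenceGetIndTrack(sequence, i, &track);
        MusicTrackSetDestNode(track, synthNode);
    }

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
}

That's a fair amount of plumbing just to get "play this sequence through DLS", which is exactly the sort of thing I'd rather not see forced on every developer.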
> Regarding Audio Units hosting Audio Units: there should be no need for that. Or how many layers of Audio Units inside Audio Units do you want to support?
Isn't this basically what AUGraph is for? I'm completely naive about the implications for plug-in development, but I think wrapping an AUGraph of arbitrary complexity in a single AudioUnit would be a really powerful feature. (If that's a stupid comment, please just ignore it :-) )
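
One way I could imagine it working - and this is purely a guess on my part, not anything that exists today - is to head the sub-graph with a GenericOutput unit and have the wrapping unit pull rendered audio out of it on demand:

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>

/* Guesswork sketch: 'subGraph' is an initialized (but not started) AUGraph
   whose output node 'genericOutNode' is a kAudioUnitSubType_GenericOutput
   unit.  Pulling on that unit drags the whole sub-graph along, so a
   wrapper's render call could hand the result straight back. */
static OSStatus PullFromSubGraph(AUGraph subGraph, AUNode genericOutNode,
                                 const AudioTimeStamp *timeStamp,
                                 UInt32 frames, AudioBufferList *ioData)
{
    AudioUnit genericOut;
    OSStatus err = AUGraphNodeInfo(subGraph, genericOutNode, NULL, &genericOut);
    if (err != noErr)
        return err;

    AudioUnitRenderActionFlags flags = 0;
    return AudioUnitRender(genericOut, &flags, timeStamp, 0, frames, ioData);
}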
I guess my overall point is that there's more to the requirements for this architecture than just fitting in with existing/prevailing host-plugin environments. It also needs to be usable and scalable by app developers delivering unique (even if opaque) audio functionality, and I feel the existing MusicDevice unit configuration does that pretty well.
- Christopher