AU Hosts and Parts, Groups and Multi-timbrality
- Subject: AU Hosts and Parts, Groups and Multi-timbrality
- From: Bill Stewart <email@hidden>
- Date: Sun, 27 Jul 2003 12:22:31 -0700
One thing about this discussion may not be immediately obvious, and I
think it is worth highlighting.
How does this work with existing host apps?
I don't believe that host apps need to change a single line of code for
this to work as it stands today - with a few provisos.
Firstly, the UI of a multi-timbral synth should present some means to
associate its different parts with the groups they will live in,
within a given preset configuration. Once this relationship is made,
the AU then knows how to route the channel information in the MIDI
Event API to the different parts. Whilst a host could certainly present
some kind of "generic UI" for this (as the property is public), I think
that is less ideal.
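To make that concrete, here's a rough sketch of how a host (or a
generic UI) could change a part-to-group association through the public
property. I'm assuming here that the PartGroup value is the groupID
element the part is assigned to, and that `synthUnit` is an AU the host
has already opened:

#include <AudioUnit/AudioUnit.h>

// Sketch: associate a part with a group via the public PartGroup
// property. The element on the Part scope is the part index; the value
// is the groupID that part should belong to. AudioUnitGetProperty with
// the same arguments reads the current association.
OSStatus MovePartToGroup(AudioUnit synthUnit,
                         AudioUnitElement part,
                         UInt32 newGroupID)
{
    return AudioUnitSetProperty(synthUnit, kAudioUnitProperty_PartGroup,
                                kAudioUnitScope_Part, part,
                                &newGroupID, sizeof(newGroupID));
}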
The discussion about group scope is really, I think, a discussion that
should underlie the main implementation semantics of an AU: the way
the AU treats the group-scope parameters. For us, this was implicit in
the design and implementation of the DLS Synth, and we've had to
examine those assumptions and formulate them for this discussion;
overall a good thing, I think.
MusicDeviceMIDIEvent provides an API that requires the AU to support
the notion of being controlled, etc., through the MIDI protocol. As
many host apps are based around MIDI, and thus talk to AUs using MIDI,
the underlying mechanism between host and AU already exists.
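For example, a host that already speaks MIDI just addresses the group
through the channel nibble of the status byte (a sketch; `synthUnit` is
a MusicDevice the host has opened and initialized):

#include <AudioUnit/MusicDevice.h>

// Send a note-on to a given MIDI channel; the AU routes the channel
// (the groupID element) to whichever part is associated with that group.
OSStatus SendNoteOn(MusicDeviceComponent synthUnit,
                    UInt32 channel,            // 0-15, i.e. the groupID
                    UInt32 noteNumber,         // 0-127
                    UInt32 velocity,           // 1-127 (0 means note-off)
                    UInt32 offsetSampleFrames) // offset into the next render
{
    UInt32 status = 0x90 | (channel & 0x0F);   // 0x90 = note-on status
    return MusicDeviceMIDIEvent(synthUnit, status, noteNumber, velocity,
                                offsetSampleFrames);
}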
If the host app does *NOT* want to use MIDI at all, then the
combination of the Start/Stop note APIs and the group-scope parameters
(which is actually more natural for the AU, I think) should work. In
fact it presents a more powerful and flexible means to make these
"run-time" modifications, as AUs can of course use the full parameter
mechanism to publish parameters in the group scope beyond the MIDI
range.
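Here is a sketch of that non-MIDI path. The parameter ID below is
hypothetical - a real host would discover group-scope parameter IDs
from the AU's published parameter info:

#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/MusicDevice.h>

enum { kSomeGroupParamID = 300 };  // hypothetical ID, beyond 0-127 MIDI range

// Start a note on a group, tweak a group-scope parameter, then stop it.
OSStatus PlayNoteOnGroup(MusicDeviceComponent synthUnit,
                         MusicDeviceGroupID groupID)
{
    MusicDeviceNoteParams params;
    params.argCount  = 2;       // pitch + velocity only
    params.mPitch    = 60.0;    // middle C (fractional pitch is allowed)
    params.mVelocity = 100.0;

    NoteInstanceID noteID;
    OSStatus err = MusicDeviceStartNote(synthUnit,
                                        kMusicNoteEvent_UseGroupInstrument,
                                        groupID, &noteID, 0, &params);
    if (err != noErr) return err;

    // Modify a group-scope parameter for this group while the note sounds.
    AudioUnitSetParameter(synthUnit, kSomeGroupParamID,
                          kAudioUnitScope_Group, groupID, 0.5, 0);

    // ... later, release the note (could be scheduled at a sample offset):
    return MusicDeviceStopNote(synthUnit, groupID, noteID, 0);
}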
I think the only problem, as Urs alluded to, is whether hosts will
actually send more than one MIDI channel (one groupID element) to an
AU. For those hosts that currently do not do this, the decision about
whether they should is straightforward (though the UI may not be!),
and is probably worth spelling out here again:
if kMusicDeviceProperty_InstrumentCount > 0
    you have a multi-timbral device
else if kAudioUnitProperty_PartGroup is implemented
        AND kAudioUnitProperty_ElementCount for PartScope is implemented
    you have a multi-timbral device
else
    you have a mono-timbral device
(The subtle implication here is that mono-timbral synths should NOT
implement their synth state on the part scope or implement the
part-scope properties, but should continue to do what they do today:
implement their synth state in the global scope.)
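In host code, that decision tree might look something like this (a
sketch; probing kAudioUnitProperty_PartGroup with GetPropertyInfo is my
assumption of how "is implemented" would be tested):

#include <stdbool.h>
#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/MusicDevice.h>

bool IsMultiTimbral(AudioUnit unit)
{
    UInt32 count = 0;
    UInt32 size = sizeof(count);

    // Case 1: the device publishes an instrument count > 0.
    if (AudioUnitGetProperty(unit, kMusicDeviceProperty_InstrumentCount,
                             kAudioUnitScope_Global, 0,
                             &count, &size) == noErr && count > 0)
        return true;

    // Case 2: PartGroup is implemented AND the part scope has elements.
    UInt32 propSize = 0;
    Boolean writable = false;
    if (AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_PartGroup,
                                 kAudioUnitScope_Part, 0,
                                 &propSize, &writable) == noErr) {
        size = sizeof(count);
        if (AudioUnitGetProperty(unit, kAudioUnitProperty_ElementCount,
                                 kAudioUnitScope_Part, 0,
                                 &count, &size) == noErr)
            return true;
    }

    // Otherwise: mono-timbral.
    return false;
}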
A mono-timbral synth (or any MusicEffect, for that matter) can of
course provide (and arguably should provide) some handling of MIDI
controllers, and publish that capability through group-scope parameters.
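For instance (assuming the convention that group-scope parameter IDs in
the 0-127 range line up with MIDI controller numbers), a host could
move the mod wheel on group 0 by either route:

#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/MusicDevice.h>

// Two routes to the same state change on group 0: send MIDI CC#1
// (mod wheel), or set the corresponding group-scope parameter directly.
void SetModWheel(MusicDeviceComponent synthUnit)
{
    // Route 1: the MIDI protocol (0xB0 = control change on channel 0).
    MusicDeviceMIDIEvent(synthUnit, 0xB0, 1, 64, 0);

    // Route 2: the parameter mechanism on the group scope.
    AudioUnitSetParameter(synthUnit, 1 /* controller number as param ID */,
                          kAudioUnitScope_Group, 0 /* groupID */, 64, 0);
}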
Plan of record:
If the proposal we outlined seems a reasonable one, then we will go
ahead and submit these changes to the headers. As Urs and others work
with this we may find other areas that need attention, and we'll just
have to deal with those as we go forward.
The Panther SDK won't have all of the necessary changes in the AUPublic
base classes to fully support the notion of PartScope. We'll have to
work on that on the side, as it were, and update the SDK when that work
is done. If anyone is tackling this themselves, we'll certainly
consider taking those changes (and help answer questions, etc.).
Bill