Mutitimbral - A clarification, sort of


  • Subject: Mutitimbral - A clarification, sort of
  • From: Urs Heckmann <email@hidden>
  • Date: Wed, 16 Jul 2003 12:48:22 +0200

Folks,

Here are some bits that will hopefully tidy things up.


I think the confusion arises from some terms that commonly get mixed up, so here's an explanation (I hope I'm not talking rubbish here 8-):

- An AU MusicDevice offers an API that is "Instrument-based". Instruments are not presets; they are entities within the device that are opaque to the host. The host just knows about their existence, but not about any kind of "settings". In this case, multitimbral means that the device can play several such Instruments at once. The DLS Synth is an example, and I think it works out well with the existing APIs and conventions. You cannot set a parameter inside a single Instrument.

- What we are talking about with respect to multitimbral MusicDevices is a completely different approach. Instead of Instruments, I use the term "part". I could have used any other term; I just didn't want to confuse it with the "Instruments" above. A part differs from an Instrument in that it is not opaque to the host: a part has parameters and presets.

- The state of an Instrument-based multitimbral MusicDevice is valid across all of its Instrument entities.

- A part-based MusicDevice has state (settings...) for each part separately. Or rather: this should be the case.



Problem

Well, the current API has no concept for properly implementing part-based MusicDevices. There is no way to specify a part when loading presets, restoring state, setting parameters, etc.

This is a critical situation, especially if you take into account the hacks and inconsistencies that have been used to work around this in the VST world, where exactly the same problem exists. However, since VST plugins have learned to send MIDI, some use MIDI CCs with a MIDI-channel-to-part mapping instead of parameters to ease the hassle a bit. AUs currently can't send MIDI as a replacement for parameter changes, and honestly, it's bad style anyway.



Proposal

My suggestion was to merge the AU concept of Elements with the real-world concept of parts. This would immediately get us a good 50% of the way to a properly working, part-based multitimbral world.



Response

To respond to the criticism (Frank, we'll carry this out at our next beer night; maybe I'll simply pay for yours), I'll sum up some basic conditions that I implicitly put in the pot:

- The property and notification scheme in the AU world allows for thorough reconfiguration of what an Audio Unit exposes to the host. Parameters can be added, and the parameter list can be altered at runtime.

- Parameters are already tied to the Element scheme. There's almost no work to do to enable Element/part-based multitimbrality here (on the specs side, of course).

- Presets and state are not tied to Elements, hence not to parts. This would require some modifications to the specs.

- Extensions to that scheme might be useful, e.g. to deal with device-wide settings vs. single-part settings. For example, Element -1 could be used to communicate "Hey plug! All parts are meant." Something like that.

- The modifications to the specs can be done without breaking compatibility with current conventions and existing software, so the transition can be seamless. At least I hope so.


Conclusion

In my opinion, the hassle that people (host developers, hehe) are concerned about already exists. AUs already offer that complexity, e.g. through the ability to change their properties at any time after construction. Proper host design has to take this into account already (even if it means a delay in AU support 8-).

It is often said, and it is a valid argument, that classical multitimbrality is somewhat obsolete for virtual instruments as we know them. Exceptions may occur, and YMMV.

I see applications beyond classical multitimbrality. Examples like VBox, FXMachine, or even my superficially described visions show that plugin space wants its options. So does user space, I assume.

In my opinion, minimal effort is required to get rid of the need for workarounds.

Cheers,

;) Urs
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.
