An alternative view of "Groups, parts and multi-timbrality"


  • Subject: An alternative view of "Groups, parts and multi-timbrality"
  • From: Ian Kemmish <email@hidden>
  • Date: Sat, 25 Feb 2006 11:13:37 +0000

I tried, but I just can't make this document from the SDK fit the world I live in.


Axiom: Parts are the place where voice-related performance parameters live.
----- ----- --- --- ----- ----- ------------- ----------- ---------- ----


I'm reasonably confident we're all agreed on that.

--------

(1) My reading of Apple's view: Let's think in terms of hardware-based multi-timbral synths. Parts are more or less identified with tone generators, and they are a scarce resource: either their number is fixed for all time, or changing it is a heavyweight operation. (I think there's an underlying assumption, probably reasonable for many synth designs as long as you don't support offline rendering, that polyphony is basically deterministic and can be calculated ahead of time.)

This reading has effects elsewhere in the API. Group IDs are just "keys to a map", the universe of Group IDs is sparsely populated, and you can enumerate all parts and groups by enumerating the known set of Part IDs and asking each part which group owns it.
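To make the enumeration consequence of reading (1) concrete, here is a toy sketch in Python. All names are invented for illustration (the real interface is the AudioUnit C API; none of this is Apple's actual property set): the part table is dense and fixed in size, Group IDs are sparse, and the host enumerates by walking Part IDs and asking each part for its owning group.

```python
NUM_PARTS = 16  # fixed for all time, or expensive to change

# Each part knows which group owns it; Group IDs are sparse "keys to a map".
part_group = [7, 7, 7, 7, 42, 42, 42, 42, 3, 3, 3, 3, 3, 3, 3, 3]

def enumerate_parts_and_groups():
    """Host-side enumeration under reading (1): walk the dense, known set
    of Part IDs and ask each part which (sparse) group owns it."""
    groups = {}
    for part in range(NUM_PARTS):
        groups.setdefault(part_group[part], []).append(part)
    return groups

print(enumerate_parts_and_groups())
# {7: [0, 1, 2, 3], 42: [4, 5, 6, 7], 3: [8, 9, 10, 11, 12, 13, 14, 15]}
```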

--------

(2) My view. You've got StartNote(), which can require a different voice for each note, and you support offline rendering. Polyphony is not fixed ahead of time. (What's more, for my particular synth, some voices may consume a hundred times as much CPU time as others. Polyphony is most definitely not fixed ahead of time.) Parts are allocated when needed, and garbage collected to save resources. For me it is the universe of Part IDs that is sparsely populated (the universe of all possible parts is just a matrix whose rows are groups and whose columns are voices).

I also support an arbitrary number of groups, but they are only created either by setting the element count (which creates a contiguous set of Group IDs) or by sounding a note on a previously unused group (which creates a sparse set, but one at least known to the host).
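The two group-creation paths can be sketched like this (again Python with invented names, purely illustrative of the behaviour described above):

```python
def groups_from_element_count(count):
    """Setting the element count creates a contiguous set of Group IDs."""
    return set(range(count))

def note_on_group(group, known_groups):
    """Sounding a note on a previously unused group creates it on the fly.
    The resulting set of Group IDs is sparse, but the host sent the note,
    so the new ID is at least known to it."""
    known_groups.add(group)

known_groups = groups_from_element_count(4)   # contiguous: {0, 1, 2, 3}
note_on_group(200, known_groups)              # sparse, but host-known
print(sorted(known_groups))
# [0, 1, 2, 3, 200]
```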

For me, a host can enumerate all parts and groups only by enumerating the known set of Group IDs and asking which parts they own.
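A toy sketch of this group-keyed enumeration (Python, invented names, not the real AudioUnit API): the universe of possible parts is a group-by-voice matrix, only the sounding cells exist, and the host walks the known Group IDs rather than the Part IDs.

```python
# Under reading (2), only the currently sounding (group, voice) cells of the
# part matrix exist; Part IDs are the sparsely populated universe.
live_parts = {
    (0, 2): "piano",   # (Group ID, voice) -> part state
    (0, 5): "piano",
    (200, 0): "pad",
}

known_groups = [0, 1, 2, 3, 200]   # contiguous block plus one sparse ID

def parts_owned_by_group(parts, groups):
    """Host-side enumeration under reading (2): walk the known set of
    Group IDs and ask each group which parts it currently owns."""
    return {g: sorted(voice for (grp, voice) in parts if grp == g)
            for g in groups}

print(parts_owned_by_group(live_parts, known_groups))
# {0: [2, 5], 1: [], 2: [], 3: [], 200: [0]}
```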

--------

I could make my synth conform to (1), but only by degrading the user experience and artificially limiting polyphony (or at least timbrality). I'm not prepared to do that.


The challenge is that any host which is written with (1) in mind will have difficulty getting the most out of a synth like (2), and any host written with (2) in mind will have difficulty getting the most out of a synth like (1).


Neither (1) nor (2) is perfect. But perhaps there is some way to reconcile them? I've previously assumed that I'm just an awkward customer, but I'm learning that there are still relatively few hosts and AUs that make rich use of the API, so there may be merit in opening these issues up for discussion...

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Ian Kemmish 18 Durham Close, Biggleswade, Beds SG18 8HZ
email@hidden Tel: +44 1767 601361 Mob: +44 7952 854387
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -



  • Follow-Ups:
    • Re: An alternative view of "Groups, parts and multi-timbrality"
      • From: William Stewart <email@hidden>