
Re: An alternative view of "Groups, parts and multi-timbrality"


  • Subject: Re: An alternative view of "Groups, parts and multi-timbrality"
  • From: William Stewart <email@hidden>
  • Date: Mon, 27 Feb 2006 12:29:03 -0800


On 25/02/2006, at 3:13 AM, Ian Kemmish wrote:

I tried, but I just can't make this document from the SDK fit the world I live in.


Axiom: Parts are the place where voice-related performance parameters live.
---------------------------------------------------------------------------


I'm reasonably confident we're all agreed on that.

--------

(1) My reading of Apple's view: Let's think in terms of hardware-based multi-timbral synths. Parts are more or less identified with tone generators, and they are a scarce resource. Either the number is fixed for all time or changing it is a heavyweight operation. (I think there's an underlying assumption, which for many synth designs is probably reasonable as long as you don't support off-line rendering, that polyphony is basically deterministic and may be calculated ahead of time.)


Polyphony == many notes? There's no statement about polyphony in this document.
Multi-timbral = yes, that's what it is dealing with.


Part elements do NOT have to be static - they can be dynamic, they can be sparsely used... Think of a mixer's inputs. A mixer can allocate 64 inputs (as a reasonable upper limit on the number of inputs) - they don't all have to be connected.

But we did not intend Parts to be a completely dynamic concept either. We were interested in defining broad structural concepts that could be applied across a broad and diverse range of situations. We did not intend this to be either the final or a limiting statement on what is possible or desirable.
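A minimal sketch of the mixer analogy, assuming an opened, uninitialized mixer AudioUnit ("mixer" and "SizeMixerInputs" are placeholders, not part of the original message): kAudioUnitProperty_ElementCount sizes the input scope up front, whether or not every input is connected.

#include <AudioUnit/AudioUnit.h>

/* Sketch: size a mixer's input scope to 64 elements up front.
   The inputs do not all have to be connected. */
static OSStatus SizeMixerInputs(AudioUnit mixer)
{
    UInt32 inputCount = 64;  /* a reasonable upper limit, per the analogy */
    return AudioUnitSetProperty(mixer,
                                kAudioUnitProperty_ElementCount,
                                kAudioUnitScope_Input,
                                0,                 /* element is ignored for this property */
                                &inputCount,
                                sizeof(inputCount));
}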

This reading has effects elsewhere in the API. Group IDs are just "keys to a map", the universe of Group IDs is sparsely populated, and you can enumerate all parts and groups by enumerating the known set of Part IDs and asking each part which group owns it.

Yes


--------

(2) My view. You've got StartNote(), which can require a different voice for each note, and you support offline rendering. Polyphony is not fixed ahead of time. (What's more, for my particular synth, some voices may consume a hundred times as much CPU time as others. Polyphony is most definitely not fixed ahead of time.) Parts are allocated when needed, and garbage collected to save resources. For me it is the universe of Part IDs that is sparsely populated (the universe of all possible parts is just a matrix whose rows are groups and whose columns are voices).

Part IDs are NOT sparsely populated.

The overriding concept we have in general is that a Scope is a collection of elements, but that collection is not keyed - it's an array. So, if a scope has a size of 4 (i.e. 4 member elements), then I address each of these elements as 0, 1, 2, 3. This should be clear from the general AU API - the implementation of Get/Set Element Count and the lack of any key here.
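A minimal sketch of that addressing rule, assuming an opened AudioUnit ("au", "scope", and "ElementCountForScope" are placeholders); any non-Group scope behaves this way:

#include <AudioUnit/AudioUnit.h>

/* Sketch: a scope's elements form an array, not a keyed map.
   If the reported count is N, the valid element indices are 0 .. N-1. */
static UInt32 ElementCountForScope(AudioUnit au, AudioUnitScope scope)
{
    UInt32 count = 0;
    UInt32 size  = sizeof(count);
    OSStatus err = AudioUnitGetProperty(au,
                                        kAudioUnitProperty_ElementCount,
                                        scope,
                                        0,        /* element is ignored for this property */
                                        &count,
                                        &size);
    return (err == noErr) ? count : 0;
}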

We special-case Group Scope out of this general scheme (you can also see this in AUBase). AUInstrumentBase has a good implementation of Group scope - it uses the sparse population of the groupIDs to create the controller state for each of these.

I think you are trying to use Part scopes to do something they weren't specified or designed to do, and that is the core of the problem.

I would say at this point that we are also thinking seriously about deprecating the general concept of "instrumentID" - it was never a popular concept, even though it is fully implemented by Apple's DLSMusicDevice. This would mean that the concept of a voice assignment in StartNote would go away; the normal usage would be to supply 0xFFFFFFFF as the instrumentID (use the group's current voice). We would also deprecate the two API calls Prepare and Release Instrument (MusicDevice.h).
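A minimal sketch of that usage, assuming an initialized MusicDevice ("synth" and "PlayMiddleC" are placeholders, not from the original message); supplying 0xFFFFFFFF as the instrumentID asks the synth to use the group's current voice:

#include <AudioUnit/MusicDevice.h>   /* in current SDKs: AudioToolbox */

/* Sketch: start a note on group 0 without a voice assignment. */
static OSStatus PlayMiddleC(MusicDeviceComponent synth, NoteInstanceID *outNoteID)
{
    MusicDeviceNoteParams params;
    params.argCount  = 2;       /* pitch and velocity only */
    params.mPitch    = 60.0f;   /* middle C */
    params.mVelocity = 100.0f;

    return MusicDeviceStartNote(synth,
                                0xFFFFFFFF,  /* no instrumentID; use group's current voice */
                                0,           /* groupID */
                                outNoteID,
                                0,           /* offset sample frame */
                                &params);
}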

Offline rendering we will continue to support.

I also support an arbitrary number of groups,

So should any synth - the group ID that is used should not really matter. The number of groupIDs that can be in use at any one time has a ramification for the multi-timbral capability of the AU - in the general case this would be the number of parts (individually addressable voices).


but they are only created either by setting the element count (which creates a contiguous set of Group IDs) or by sounding a note on a previously unused group (which creates a sparse set, but one at least known to the host).

Element Count is really not appropriate for Group Scope - we expect group scope to be sparse.


For me, a host can enumerate all parts and groups only by enumerating the known set of Group IDs and asking which parts they own.

Can't work that way - the way this concept was designed, you enumerate the non-sparse collection of part elements and see what group IDs are assigned to each part:


From AudioUnit/AudioUnitProperties.h:
kMusicDeviceProperty_PartGroup AudioUnitElement (read/write)
AudioUnitElement that is the groupID (The Group Scope's Element) the part is (or should be)
assigned to. The property is used in the Part Scope, where the element ID is the part
that is being queried (or assigned). This property may in some cases be read only, it may
in some cases only be settable if the AU is uninitialized, or it may be completely dynamic.
These constraints are dependent on the AU's implementation restrictions, though ideally
this property should be dynamically assignable at any time. The effect of assigning a new
group to a part is undefined (though typically it would be expected that all of the existing
notes would be turned OFF before the re-assignment is made by the AU).
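
A minimal sketch of that enumeration, assuming an opened multi-timbral AU ("synth" and "DumpPartGroups" are placeholders, not from the original message):

#include <stdio.h>
#include <AudioUnit/AudioUnit.h>

/* Sketch: walk the non-sparse part elements and ask each part
   which group it is assigned to, via kMusicDeviceProperty_PartGroup. */
static void DumpPartGroups(AudioUnit synth)
{
    UInt32 partCount = 0;
    UInt32 size = sizeof(partCount);
    if (AudioUnitGetProperty(synth, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Part, 0,
                             &partCount, &size) != noErr)
        return;

    for (AudioUnitElement part = 0; part < partCount; ++part) {
        AudioUnitElement group = 0;
        size = sizeof(group);
        if (AudioUnitGetProperty(synth, kMusicDeviceProperty_PartGroup,
                                 kAudioUnitScope_Part, part,
                                 &group, &size) == noErr)
            printf("part %u -> group %u\n", (unsigned)part, (unsigned)group);
    }
}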




--------

I could make my synth conform to (1), but only by degrading the user experience and artificially limiting polyphony (or at least timbrality). I'm not prepared to do that.


The challenge is that any host which is written with (1) in mind will have difficulty getting the most out of a synth like (2), and any host written with (2) in mind will have difficulty getting the most out of a synth like (1).


Neither (1) nor (2) is perfect. But perhaps there is some way to reconcile them? I've previously assumed that I'm just an awkward customer, but I'm learning that there are still relatively few hosts and AUs that make rich use of the API, so there may be merit in opening these issues up for discussion....

Sure - I'd much rather have a discussion about this. I think the first step is to agree on what we already have - our starting point.


Bill



--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________


_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


References: 
 >An alternative view of "Groups, parts and multi-timbrality" (From: Ian Kemmish <email@hidden>)
