Re: iPhone AU best practices
- Subject: Re: iPhone AU best practices
- From: Admiral Quality <email@hidden>
- Date: Wed, 16 Jun 2010 14:10:30 -0400
I have virtually no iPhone programming experience yet, so I'll only
comment on your previous questions by saying it seems you're going
about this entirely the wrong way. Conceptually, an instrument should
output on one bus, and all the mixing of instrument voices (including
the envelope "fade-outs" during their release times) should happen in
your own code. Mixing is easy: it's just adding.
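To make "mixing is just adding" concrete, here is a minimal sketch. All the names (Voice, mixVoices) are illustrative, not any Apple API; each voice is just a sine oscillator with a gain, and the mix is a plain sum into one output buffer, i.e. one bus:

```cpp
#include <vector>
#include <cmath>

// Illustrative voice: one sine oscillator with a gain.
struct Voice {
    float phase = 0.0f, freq = 440.0f, gain = 0.0f;
    bool active = false;
    float render(float sampleRate) {
        float s = sinf(phase) * gain;                  // one output sample
        phase += 2.0f * 3.14159265f * freq / sampleRate;
        return s;
    }
};

// Mix all active voices into one output buffer: just add them.
void mixVoices(std::vector<Voice>& voices, float* out, int frames, float sr) {
    for (int i = 0; i < frames; ++i) {
        float sum = 0.0f;
        for (auto& v : voices)
            if (v.active) sum += v.render(sr);
        out[i] = sum;
    }
}
```

In a real instrument each voice would also apply its own envelope to `gain` per sample, so release fade-outs happen inside the voice before the sum, exactly as described above.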
> Non AU question: If I play a note, say c4, and the user taps that note again, while the first instance is fading out, shall I kill the fade-out completely since the new instance is probably going to make the first's fadeout indiscernible? This might be a bigger issue on instruments where the Release is not a fade-out. But even in the case of a Harpsichord, where the release includes the sound of the hammer returning, one wouldn't hear the hammer twice when a note is pressed twice...
>
This is what is commonly referred to as re-triggering vs. cycle mode.
Typically, when emulating string or wind instruments, we use a
re-triggering assignment, where the voice currently playing the note
gets re-triggered to start again. For sounds like bells, cymbals,
tom-tom drums, etc., cyclic triggering might sound more natural: a new
voice is assigned for each successive hit of the same note, and they
"pile up" on each other.
You should also look up what a voice assigner is, and decide on the
maximum number of voices you want to allow. Usually there is a limit,
and the voice assigner uses a round-robin approach to decide which
voice plays which note. Say you offer a keyboard interface, the user
drags her finger across all the keys, and your sound has a long
release time (say, a bell). You could potentially end up with all 88
keys of a piano sounding at once (on a real piano this is easy to do
with the damper/sustain pedal down; in the iPhone synth case here,
just imagine a long-sustaining bell sound). So if you don't limit the
number of voices to some practical maximum, you may end up overtaxing
the CPU at some level of polyphony.
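A round-robin assigner with a fixed polyphony cap can be sketched like this (all names hypothetical). With, say, 8 voices, the 9th note-on simply steals the slot that was handed out longest ago, so the finger-drag across 88 keys never uses more than 8 voices:

```cpp
#include <vector>

struct Voice { int note = -1; bool active = false; };

// Hypothetical round-robin voice assigner with a fixed voice limit.
class VoiceAssigner {
public:
    explicit VoiceAssigner(int maxVoices) : pool_(maxVoices) {}

    // Hand out slots in order, wrapping around; when the pool is
    // full, this steals whatever voice was assigned longest ago.
    int noteOn(int note) {
        int idx = next_;
        next_ = (next_ + 1) % (int)pool_.size();
        pool_[idx].note = note;
        pool_[idx].active = true;
        return idx;
    }

    int activeCount() const {
        int n = 0;
        for (const auto& v : pool_) if (v.active) ++n;
        return n;
    }

private:
    std::vector<Voice> pool_;
    int next_ = 0;
};
```

A fancier assigner might prefer stealing released (fading-out) voices before sounding ones, but the round-robin wrap is the 101 version.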
This is all virtual instrument design 101 stuff and goes right back to
1977 with the first polyphonic synthesizers. You may want to spend
some time playing with other hardware and software instruments to get
used to the paradigm, otherwise you're just reinventing a wheel that
was perfected a long time ago.
- Mike "AQ" Humphrey
http://www.admiralquality.com
Coreaudio-api mailing list (email@hidden)