Re: iPhone AU best practices
- Subject: Re: iPhone AU best practices
- From: uɐıʇəqɐz pnoqɥɒɯ <email@hidden>
- Date: Wed, 16 Jun 2010 16:06:08 -0700
Thank you all for the responses. Think of me as a sponge trying to soak up all this audio engineering information, and excuse me if I get a little saturated at times. I come from a networking background, with issues such as keeping up with 10Gig traffic in an Ethernet driver; callbacks, overruns, and ring buffers without disk access, all at interrupt time, are similar to AU callbacks but have a whole other set of physics driving them.
Please read on....
On Jun 16, 2010, at 11:10 AM, Admiral Quality wrote:
> I have virtually no iPhone programming experience yet, so I'll only
> comment on your previous questions by saying it seems you're going
> about this entirely the wrong way. Conceptually an instrument should
> output on one bus, and all the mixing of instrument voices (including
> the envelope "fade outs" during their release times) should
> happen in your own code. Mixing is easy, it's just adding.
Mixing is easy, but I was hoping to use the mixer in order to have independent control over the volume of each note. It is also a lot easier to play each note separately in its own callback than to combine all the playing notes together, and I am assuming that doing a memcopy of each sound through the mixer is less expensive than skipping the mixer, adding all the playing notes myself, and copying each 8.24 value to the output individually. But I am not lazy; if that is the better way to do it, then I will. The questions then are:
1. Is it better to skip the mixer and the memcopy and replace them with addition of a number of 8.24 values, where each of the values could also be multiplied by a fade-out value? I would no longer have to worry about mixer bus limits.
2. I am assuming that the mixer is currently doing some work to avoid clipping. If I add values, how do I prevent clipping?
3. Could I use floats and then turn them into 8.24 before I copy them to the outBuffer? If I can use floats, can I use the Accelerate framework for the fade multiplication?
4. If the mixer is more efficient than adding myself, then how do I tell it to stop calling the callback for inactive buses?
5. And if the mixer is more efficient, is it also best to use its input volume control for fading, or to manually adjust the data to ramp down in amplitude?
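For questions 1 and 2, a minimal sketch of what "mixing is just adding" might look like, assuming 8.24 fixed-point samples held in `int32_t` (where 1.0 == 1 << 24) and a plain `float` fade gain per voice; all the names here are made up. Summing into a 64-bit accumulator and hard-clipping the result back to the nominal [-1.0, 1.0) range is one simple way to prevent wraparound when several loud notes coincide:

```c
#include <stdint.h>
#include <stddef.h>

/* In 8.24 fixed point, 1.0 is represented as 1 << 24. */
#define Q824_ONE (1 << 24)

/* Hard-clip a wide sum back into the nominal [-1.0, 1.0) range. */
static inline int32_t clip_q824(int64_t x) {
    if (x >  Q824_ONE - 1) return Q824_ONE - 1;   /* clamp positive */
    if (x < -Q824_ONE)     return -Q824_ONE;      /* clamp negative */
    return (int32_t)x;
}

/* Mix nVoices buffers of 8.24 samples into out, applying a per-voice
 * gain (e.g. a fade-out value) before summing.  gains[] are floats. */
void mix_voices_q824(const int32_t *const *voices, const float *gains,
                     size_t nVoices, int32_t *out, size_t nFrames)
{
    for (size_t f = 0; f < nFrames; f++) {
        int64_t acc = 0;                       /* 64-bit accumulator */
        for (size_t v = 0; v < nVoices; v++)
            acc += (int64_t)((float)voices[v][f] * gains[v]);
        out[f] = clip_q824(acc);
    }
}
```

Note this is a hard clip; a real mixer may instead scale or limit more gracefully, but the accumulate-then-clamp shape is the core of doing the mix yourself.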
>
>> Non AU question: If I play a note, say c4, and the user taps that note again, while the first instance is fading out, shall I kill the fade-out completely since the new instance is probably going to make the first's fadeout indiscernible? This might be a bigger issue on instruments where the Release is not a fade-out. But even in the case of a Harpsichord, where the release includes the sound of the hammer returning, one wouldn't hear the hammer twice when a note is pressed twice...
>>
>
> This is what is commonly referred to as re-triggering vs. cycle mode.
> Typically, emulating string or wind instruments we use re-triggering
> assignment where the same voice currently playing the note gets
> re-triggered to start again. For sounds like bells, cymbals, tom-tom
> drums, etc, a cyclic triggering might sound more natural, where a new
> voice will be assigned for each successive hit of the same note and
> they "pile up" on each other.
I can easily add a flag to my instruments to determine if a note can be re-triggered or "piled up". I'm going to have many flags like this for other characteristics, such as instruments that can be faded to mimic a Release vs. those that have to have an actual sampled release. Also, I have to worry about the location of the ADSR components (mainly S) in order to loop appropriately.
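The re-trigger-vs-pile-up flag described above can be sketched as a note-on decision: if the instrument re-triggers, reuse the voice already playing that pitch; otherwise always take a fresh voice. Everything here (`Voice`, `noteOn`, the pool size) is an invented name for illustration, not any actual API:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { int note; bool active; } Voice;

#define MAX_VOICES 8
static Voice gVoices[MAX_VOICES];

/* Return the voice currently sounding this pitch, if any. */
static Voice *findVoicePlaying(int note) {
    for (int i = 0; i < MAX_VOICES; i++)
        if (gVoices[i].active && gVoices[i].note == note)
            return &gVoices[i];
    return NULL;
}

/* Return the first free slot, or steal slot 0 if the pool is full. */
static Voice *allocVoice(void) {
    for (int i = 0; i < MAX_VOICES; i++)
        if (!gVoices[i].active) return &gVoices[i];
    return &gVoices[0];
}

/* retriggers: true for string/wind-style instruments (reuse the
 * sounding voice), false for bells/cymbals (pile up a new voice). */
Voice *noteOn(int note, bool retriggers) {
    Voice *v = retriggers ? findVoicePlaying(note) : NULL;
    if (v == NULL)
        v = allocVoice();          /* cyclic: assign a new voice */
    v->note = note;                /* (re)start from the attack */
    v->active = true;
    return v;
}
```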
> You should also look up what a voice assigner is, as well as consider
> what the maximum number of voices you want to allow is. Usually there
> is a limit, and the voice assigner uses a round robin approach to
> assigning which note is played by which voice. Say you offer a
> keyboard interface and the user drags her finger across all the keys,
> and your sound has a long release time (say a bell). You could
> potentially end up with all 88 keys of a piano playing at once (on a
> real piano, this is easy to do if the damper/sustain pedal is down. In
> the iPhone synth case here, just imagine it's a long sustaining bell
> sound). So if you don't limit the number of voices to some practical
> maximum you may end up overtaxing the CPU at some level of polyphony.
Any links for voice assigners? My app right now will play one instrument at a time. In the future, I will allow recording of each instrument separately, and then playback all instruments at the same time.
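The round-robin voice assigner described in the quote above is small enough to sketch directly: a fixed pool, an index that advances on every note-on, so when the pool is exhausted the voice that has been sounding longest is the one stolen. `POLYPHONY`, `VoiceAssigner`, and `assignVoice` are made-up names, and a production assigner would also prefer free or released voices before stealing:

```c
#define POLYPHONY 8

typedef struct { int note; } RRVoice;

typedef struct {
    RRVoice voices[POLYPHONY];
    int     next;            /* index of the next voice to (re)use */
} VoiceAssigner;

/* Assign a note to the next voice in rotation; returns its index. */
int assignVoice(VoiceAssigner *va, int note) {
    int idx = va->next;
    va->voices[idx].note = note;            /* start/steal this voice */
    va->next = (va->next + 1) % POLYPHONY;  /* advance the rotation */
    return idx;
}
```

Capping `POLYPHONY` is what bounds worst-case CPU use in the "finger drag across 88 keys" scenario.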
>
> This is all virtual instrument design 101 stuff and goes right back to
> 1977 with the first polyphonic synthesizers. You may want to spend
> some time playing with other hardware and software instruments to get
> used to the paradigm, otherwise you're just reinventing a wheel that
> was perfected a long time ago.
I have been trying to read up on all this, but I have finite time, so I am still catching up. I know there are some synths out there like FluidSynth, or other SoundFont engines, which might be a shortcut. However, the ones with incompatible licenses or expensive royalties will force me to reinvent where necessary. The economics of a <$5 app make it hard to do otherwise.
As for playing a note a second time before releasing the first instance: that is currently not possible in my UI. If you are holding a note down, the next tap on the same note is ignored.
>
> OP was also wondering about worst case of "10 fingers". But isn't
> there a limit to how many simultaneous multi touch points are
> possible?
It's 11 on the iPad (fingers and nose), it seems. And 5 on the iPhone, although I could swear I have seen more at times.
Again, thanks! And if you have more tips on building an instrument synthesizer, toss them my way.
-mahboud
_______________________________________________
Coreaudio-api mailing list (email@hidden)