iPhone AU best practices
- Subject: iPhone AU best practices
- From: uɐıʇəqɐz pnoqɥɒɯ <email@hidden>
- Date: Wed, 16 Jun 2010 10:46:18 -0700
Hi. I had a few questions for you all.
I have managed to replace AVAudioPlayer in my iPhone app with Audio Units, as I continue my quest to write an instrument synthesizer. I am definitely seeing some performance improvement, especially on older iPhones, evident when the user quickly taps a number of notes.
Questions:
1. The user could play up to 10 notes at a time, limited only by the number of fingers they have. However, since the notes don't stop the moment the fingers are lifted, and the user could initiate another 10, 20, maybe even 30 notes, I may have more than 40 notes playing at one time. To play multiple notes, I use the multichannel mixer, configured with 20 buses. Since 20 buses are easily exceeded, I have two choices:
- configure a higher number of buses
- add and remove buses on the fly as notes are played back. This would only work if I don't have to call AUGraphStop() and if AudioUnitSetProperty() on the mixer doesn't disrupt its current operation. Anyone know?
If I use a higher number of buses, what is the cost? (question 2 shows some of the cost)
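For reference, the bus count on the multichannel mixer is set through kAudioUnitProperty_ElementCount on the input scope. A minimal sketch, assuming mMixer is the mixer AudioUnit already obtained from the graph; whether this is safe to call while the graph is running is exactly the open question above:

    #include <AudioToolbox/AudioToolbox.h>

    static OSStatus SetMixerBusCount(AudioUnit mMixer, UInt32 busCount)
    {
        // kAudioUnitProperty_ElementCount on the input scope controls how
        // many input buses the multichannel mixer exposes.
        return AudioUnitSetProperty(mMixer,
                                    kAudioUnitProperty_ElementCount,
                                    kAudioUnitScope_Input,
                                    0,
                                    &busCount,
                                    sizeof(busCount));
    }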
2. I noticed that right after I call AUGraphStart(), the render callback for each of the buses gets called, even before I call:
OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Enable, kAudioUnitScope_Input, inputNum, true, 0);
So if I have 30 or 40 buses, the callbacks for all of them are called regardless of whether I have them enabled. I was hoping to disable a bus using the above call (passing false), but although sound stops playing through the disabled bus, its callback still gets called. What's the best way to prevent needless render calls? There is overhead here, since I clear out the output buffer and set kAudioUnitRenderAction_OutputIsSilence on each call for a silenced bus. Ideally I'd like to enable callbacks on a bus and play a sound, and once I know the bus is done, disable callbacks on it.
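To make that cost concrete, here is a minimal sketch of the kind of per-bus render callback described above; the SynthState/Voice bookkeeping is an assumption, not code from this post:

    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    typedef struct { Boolean active; /* ... oscillator state ... */ } Voice;
    typedef struct { Voice voices[40]; } SynthState;

    static OSStatus RenderVoice(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
    {
        SynthState *synth = (SynthState *)inRefCon;
        Voice *voice = &synth->voices[inBusNumber];

        if (!voice->active) {
            // This is the overhead mentioned above: even a silent bus pays for
            // a callback, a buffer clear, and the silence flag every cycle.
            for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
                memset(ioData->mBuffers[i].mData, 0,
                       ioData->mBuffers[i].mDataByteSize);
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }

        /* ... otherwise fill ioData with this voice's samples ... */
        return noErr;
    }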
3. I was originally going to do my fadeouts in renderNotification callbacks (it seems that the render notification gets called for bus 0 only, which, given the above, really covers all buses). Does anyone have thoughts on whether I should fade out there, or in the render callbacks? I am changing the mixer's input-bus volume rather than the amplitude of the sound data itself (more of a headache, I think), and currently I am doing it in the render call.
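A minimal sketch of that fade-out approach, scaling the mixer's per-bus input volume each cycle instead of touching the sample data; the decay factor and the gain/releasing bookkeeping are assumptions:

    #include <AudioToolbox/AudioToolbox.h>

    static void FadeStep(AudioUnit mixer, UInt32 bus,
                         Float32 *gain, Boolean *releasing)
    {
        if (!*releasing)
            return;

        *gain *= 0.92f;               // assumed per-cycle decay; tune to taste
        if (*gain < 0.001f) {         // effectively silent: finish the release
            *gain = 0.0f;
            *releasing = false;
        }
        // Per-bus volume on the multichannel mixer's input scope.
        AudioUnitSetParameter(mixer,
                              kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input,
                              bus,
                              *gain,
                              0);
    }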
Non-AU question: if I play a note, say C4, and the user taps that note again while the first instance is fading out, should I kill the fade-out completely, since the new instance is probably going to make the first one's fade-out indiscernible? This might be a bigger issue on instruments where the release is not a fade-out. But even in the case of a harpsichord, where the release includes the sound of the hammer returning, one wouldn't hear the hammer twice when a note is pressed twice...
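One hypothetical way to handle that retrigger, sketched with assumed Voice fields and a fixed voice count (none of this is from the original post): if the same pitch is still in its release, cancel the fade and reuse that bus rather than stacking a second, inaudible tail.

    typedef struct {
        int   note;        // MIDI note number, -1 when the bus is free
        int   releasing;   // nonzero while the fade-out is running
        float gain;        // current fade gain on this bus
    } Voice;

    #define kNumVoices 40

    static void NoteOn(Voice voices[kNumVoices], int midiNote)
    {
        // First preference: a voice already playing (or releasing) this pitch.
        for (int i = 0; i < kNumVoices; ++i) {
            if (voices[i].note == midiNote) {
                voices[i].releasing = 0;   // kill the old fade-out outright
                voices[i].gain = 1.0f;     // restart the note on the same bus
                return;
            }
        }
        // Otherwise take the first free bus; real code might steal the quietest.
        for (int i = 0; i < kNumVoices; ++i) {
            if (voices[i].note < 0) {
                voices[i].note = midiNote;
                voices[i].releasing = 0;
                voices[i].gain = 1.0f;
                return;
            }
        }
    }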
Thank you all!
-mahboud