Audio Units, Graphs, iPhone & RemoteIO
- Subject: Audio Units, Graphs, iPhone & RemoteIO
- From: Gregory Wieber <email@hidden>
- Date: Wed, 21 Apr 2010 10:02:25 -0700
Hello,
I've held off on asking these questions until I'd thoroughly dug through older posts on this list, and I've done a lot of searching elsewhere as well. Hopefully I'm not asking something that's already been addressed here; I don't want to waste anyone's time.
First, I'll try to explain what I'm looking to accomplish. I'd like an iPhone app that has, let's say, 3 different audio sources; for simplicity, say each one plays from a different file. If I wanted to control the pitch and tempo of each source individually, on Mac OS X I could write custom Audio Units. On the iPhone, from what I understand, you can only use the Audio Units Apple provides.
That led me to think that any DSP I want to do would need to happen in a render callback -- specifically, in the RemoteIO render callback.
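For reference, here's a minimal sketch of the kind of callback I mean. The ToneState struct and the sine synthesis are placeholder assumptions standing in for real file reading and DSP, and I'm assuming a mono 16-bit stream format has been set on the bus:

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Hypothetical per-source state; stands in for file-reader/DSP state.
    typedef struct {
        double phase;      // current oscillator phase, 0..1
        double increment;  // phase step per frame (frequency / sampleRate)
    } ToneState;

    static OSStatus MyRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
    {
        ToneState *state = (ToneState *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
        for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
            out[frame] = (SInt16)(sin(state->phase * 2.0 * M_PI) * 32767.0);
            state->phase += state->increment;
            if (state->phase >= 1.0) state->phase -= 1.0;
        }
        return noErr;
    }

That leads to the questions: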
1) I'm pretty sure I read somewhere on this list that you can only have 1 RemoteIO unit per application -- is that true?
Since you can't write custom Audio Units on the iPhone, I thought I could string together a bunch of RemoteIO units in a graph and do something like have one RemoteIO read the file, the next change the pitch, and so on.
2) If that's not possible, does that mean all of the file reading, DSP, etc. has to happen in one render callback function?
If that's the case, then I probably don't need a graph, right? The only benefit I can see in using a graph (if I can only have one RemoteIO unit) would be adding a mixer unit to control volume and panning -- which seems unnecessary if I'm already blending multiple sound sources, doing processing, etc., inside my RemoteIO's callback. A sketch of the graph I had pictured follows.
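In case it helps frame the question, here's roughly that graph: a MultiChannelMixer feeding the RemoteIO, with one render callback per source bus. Error handling is omitted, sourceState reuses the hypothetical ToneState from above, and none of this is tested code:

    #include <AudioToolbox/AudioToolbox.h>

    static ToneState sourceState[3];  // hypothetical per-source state

    // Sketch: build a mixer -> RemoteIO graph, one input callback per source.
    static void BuildGraph(void)
    {
        AUGraph graph;
        AUNode mixerNode, ioNode;
        AudioUnit mixerUnit;

        AudioComponentDescription mixerDesc = {
            kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer,
            kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponentDescription ioDesc = {
            kAudioUnitType_Output, kAudioUnitSubType_RemoteIO,
            kAudioUnitManufacturer_Apple, 0, 0 };

        NewAUGraph(&graph);
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

        // One mixer input bus per source; each bus gets its own callback.
        // (A matching stream format would also need to be set on each bus.)
        UInt32 busCount = 3;
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0,
                             &busCount, sizeof(busCount));
        for (UInt32 bus = 0; bus < busCount; ++bus) {
            AURenderCallbackStruct cb = { MyRenderCallback, &sourceState[bus] };
            AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
            // Per-bus volume becomes a mixer parameter instead of hand-rolled math.
            AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                                  kAudioUnitScope_Input, bus, 1.0, 0);
        }

        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }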
3) This question isn't essential, but I'm curious. I think I read somewhere that a render callback has a fixed amount of time it must execute within. That would seem to imply that having a different Audio Unit for each DSP effect (one for tremolo, another for reverb, etc.) isn't just good semantics -- it would also keep each individual callback's execution time to a minimum. So the question is: if I'm doing 3 different DSP operations in one render callback, plus mixing several audio sources together, is that pushing one callback too far? Since resources are limited on the iPhone, and custom Audio Units don't even seem to be a possibility, is the question even relevant? A sketch of how I'd measure this follows.
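To make question 3 concrete, here's how I'd naively check how close a callback comes to its deadline, inside the render callback itself -- the callback has roughly inNumberFrames / sampleRate seconds of wall-clock time before the hardware needs the buffer. The 44100.0 sample rate is an assumption, and the printf is purely illustrative (I know I/O inside a render callback is itself bad practice):

    #include <mach/mach_time.h>
    #include <stdio.h>

    // Inside the render callback: time the work, compare to the deadline.
    uint64_t start = mach_absolute_time();

    // ... file reading, mixing, and DSP for this buffer would go here ...

    uint64_t elapsed = mach_absolute_time() - start;
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    double elapsedSec = (double)elapsed * timebase.numer / timebase.denom / 1e9;
    double deadlineSec = inNumberFrames / 44100.0;  // assumed sample rate
    if (elapsedSec > 0.5 * deadlineSec)             // flag if over half the budget
        printf("callback used %.0f%% of its budget\n",
               100.0 * elapsedSec / deadlineSec);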
Again, thanks a lot for any light you can shed on these questions, and for your time.
Cheers,
Greg