Can I set up a graph where the microphone's sound data is connected to three or more units?
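Something like this is what I'm hoping for; just a rough sketch, and I'm only guessing that connect(_:to:fromBus:format:) with an array of AVAudioConnectionPoints is the right way to fan out one source (the three effect units are arbitrary examples):

    import AVFoundation

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Three downstream units that should all receive the mic data.
    let eq = AVAudioUnitEQ(numberOfBands: 4)
    let reverb = AVAudioUnitReverb()
    let delay = AVAudioUnitDelay()
    [eq, reverb, delay].forEach { engine.attach($0) }

    // Fan the input node out to all three units at once.
    // (Each unit would still need its own connection onward to the
    // mixer or output before the engine will run.)
    engine.connect(input,
                   to: [AVAudioConnectionPoint(node: eq, bus: 0),
                        AVAudioConnectionPoint(node: reverb, bus: 0),
                        AVAudioConnectionPoint(node: delay, bus: 0)],
                   fromBus: 0,
                   format: format)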
If not, can I install a tap on a node and build my own dispatcher, calling the different clients with the data as it arrives? Or am I better off just going back to AudioUnits?
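If a tap is the way to go, here's the sort of dispatcher I have in mind; a rough sketch, where the client callback type is just something I made up:

    import AVFoundation

    final class MicDispatcher {
        private let engine = AVAudioEngine()
        private var clients: [(AVAudioPCMBuffer, AVAudioTime) -> Void] = []

        // Register clients before calling start(); the tap block runs on a
        // background thread, so mutating the array mid-stream would need locking.
        func addClient(_ client: @escaping (AVAudioPCMBuffer, AVAudioTime) -> Void) {
            clients.append(client)
        }

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            // One tap on the input node; fan each buffer out to every client.
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, when in
                self?.clients.forEach { $0(buffer, when) }
            }
            try engine.start()
        }
    }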
My next question is: can I have multiple unrelated paths in one AVAudioEngine, or do I need to create multiple AVAudioEngine instances?
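To make it concrete, something like this is what I mean; a sketch where both chains happen to end at the main mixer, since I'd guess completely disconnected subgraphs wouldn't render:

    import AVFoundation

    let engine = AVAudioEngine()

    // Path 1: player -> reverb -> main mixer
    let player1 = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()
    engine.attach(player1)
    engine.attach(reverb)
    engine.connect(player1, to: reverb, format: nil)
    engine.connect(reverb, to: engine.mainMixerNode, format: nil)

    // Path 2: a second, unrelated player -> delay -> main mixer
    let player2 = AVAudioPlayerNode()
    let delay = AVAudioUnitDelay()
    engine.attach(player2)
    engine.attach(delay)
    engine.connect(player2, to: delay, format: nil)
    engine.connect(delay, to: engine.mainMixerNode, format: nil)

    do { try engine.start() } catch { print("engine start failed: \(error)") }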
I'll be building a lot of simple generators and filters (sine, square, sawtooth, lowpass, pitch shift, time shift, etc.), as well as a "backwards" unit. Except for a few, I'd like all of them to work in realtime. Again, should I stay with the old Core Audio API, or will the new AVAudio framework give me the flexibility I need? I think I can recall most of what I learned before with Core Audio, and I remember that things weren't easy. That's why I'm giving the new framework a chance, if it will make my life easier.
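For the generators, I'm picturing something like the sketch below; I'm assuming an AVAudioSourceNode-style render block here (which only exists in newer SDKs; otherwise I'd presumably have to wrap a custom AudioUnit):

    import AVFoundation

    let engine = AVAudioEngine()
    let sampleRate = engine.outputNode.inputFormat(forBus: 0).sampleRate
    // Mono float format for the generator itself.
    let monoFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)

    var phase = 0.0
    let frequency = 440.0

    // Pull-model sine generator: the engine calls this block whenever
    // it needs the next frameCount samples.
    let sine = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        let increment = 2.0 * Double.pi * frequency / sampleRate
        for frame in 0..<Int(frameCount) {
            let sample = Float(sin(phase))
            phase += increment
            if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
            for buffer in buffers {
                let samples = UnsafeMutableBufferPointer<Float>(buffer)
                samples[frame] = sample
            }
        }
        return noErr
    }

    engine.attach(sine)
    engine.connect(sine, to: engine.mainMixerNode, format: monoFormat)
    do { try engine.start() } catch { print("engine start failed: \(error)") }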
Thank you in advance for your help.
-mahboud