Hi all,
I'm developing a means of bringing several sensor signals, modulated together into one signal, into the iPhone through the audio input. I need to do several things:
a) Demodulate these signals from the input signal through a trivial filter chain, and then route each down its own signal path for further processing--must be realtime (a rough sketch of the kind of split I mean follows this list).
b) Play back a sonified version of each signal--preferably realtime.
c) Stream each signal out over a network connection--preferably realtime.
d) Store each signal in a PCM file--need not be realtime.
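To make (a) a bit more concrete, below is roughly the kind of per-buffer split I have in mind. It's only a sketch: I'm assuming, purely for illustration, that a trivial one-pole low-pass/high-pass pair could pull two of the components apart, and the names (DemodState, demod_split) and the cutoff are placeholders rather than anything I've settled on.

```c
#include <math.h>
#include <stdint.h>

// Hypothetical state for a trivial one-pole low-pass used to split the
// incoming mono buffer into two bands (a stand-in for the real demodulator).
typedef struct {
    float lp_state;   // previous low-pass output sample
    float alpha;      // smoothing coefficient derived from cutoff/sample rate
} DemodState;

static void demod_init(DemodState *d, float cutoff_hz, float sample_rate) {
    d->lp_state = 0.0f;
    d->alpha = 1.0f - expf(-2.0f * (float)M_PI * cutoff_hz / sample_rate);
}

// Split one mono buffer into a low band and its high-band complement.
// In the real chain, each band would continue down its own signal path.
static void demod_split(DemodState *d, const float *in,
                        float *low, float *high, uint32_t frames) {
    for (uint32_t i = 0; i < frames; i++) {
        d->lp_state += d->alpha * (in[i] - d->lp_state); // one-pole low-pass
        low[i]  = d->lp_state;
        high[i] = in[i] - d->lp_state;                   // complementary band
    }
}
```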
I need help conceptualising the signal chain in this process. I've begun to sketch the design using Audio Units. First of all, have I gone too low-level by choosing Audio Units? Would this be implementable with Audio Queue Services? In any case, I've got to the point where I have the modulated signal coming in (I have not demodulated it yet), am sonifying it in real time, and am passing the sonified signal back out through the output. Now, in order to split this signal into two separate branches of the signal chain, I would imagine doing something like routing the output of my Remote I/O unit into two separate input buses on a Multichannel Mixer unit, and sonifying/writing-to-disk/writing-to-network in the Multichannel Mixer unit's callbacks.
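For reference, this is the rough shape of the graph I've sketched so far, stripped of error handling and stream-format setup. The names (branchCallback, buildGraph) are just mine for illustration; the idea is that each mixer input bus pulls the modulated mono signal from the Remote I/O input element and would eventually do its own branch's processing there.

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit ioUnit;    // Remote I/O unit; its input element (1) is the mic
static AudioUnit mixerUnit; // Multichannel Mixer with one input bus per branch

// Render callback installed on each mixer input bus: pull the modulated mono
// signal from the Remote I/O input element, then (eventually) demodulate or
// sonify it into ioData for this particular branch.
static OSStatus branchCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    OSStatus err = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input element */, inNumberFrames, ioData);
    // ... per-branch processing of ioData would go here ...
    return err;
}

static void buildGraph(void) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription mixDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode ioNode, mixNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixDesc, &mixNode);
    AUGraphOpen(graph);
    AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
    AUGraphNodeInfo(graph, mixNode, NULL, &mixerUnit);

    // Enable recording on the Remote I/O input element (element 1).
    UInt32 one = 1;
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // Two mixer input buses, each fed by the same callback (two branches).
    UInt32 busCount = 2;
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));
    for (UInt32 bus = 0; bus < busCount; bus++) {
        AURenderCallbackStruct cb = { branchCallback, NULL };
        AUGraphSetNodeInputCallback(graph, mixNode, bus, &cb);
    }

    // Mixer output feeds the Remote I/O output element (element 0) for playback.
    AUGraphConnectNodeInput(graph, mixNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
}
```

One thing I'm not sure about with this layout is whether it's even legal to call AudioUnitRender on the input element once per mixer bus within the same render cycle, which is part of what I'm asking below.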
First, is this too much processing for a realtime thread? Will I really be able to accomplish all of this there, or will I need to pull some of the functionality offline? Second, is it possible to route the I/O unit's input element's output to separate input buses of a Multichannel Mixer unit? If not, could I specify a multichannel stream description and split the original mono signal into separate channels upon demodulation, or are we limited to stereo streams on the iPhone? Finally, what framework would you recommend for writing this data out to disk and to the network in realtime?
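For context on that last question: the closest thing I've turned up so far for the disk side is Extended Audio File Services, on the understanding that ExtAudioFileWriteAsync can be called from the render thread once it has been primed. The sketch below is only the shape I have in mind (CAF output; the URL, formats, and error handling are all placeholders), and I'd welcome advice on whether this is sensible and what its equivalent would be for the network side.

```c
#include <CoreFoundation/CoreFoundation.h>
#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef branchFile; // one file per demodulated branch

// Set up (on a non-realtime thread) a CAF file whose client format matches
// the stream format used in the render callback. 'url', 'fileFormat', and
// 'clientFormat' are placeholders for whatever I end up using.
static void prepareFile(CFURLRef url,
                        const AudioStreamBasicDescription *fileFormat,
                        const AudioStreamBasicDescription *clientFormat) {
    ExtAudioFileCreateWithURL(url, kAudioFileCAFType, fileFormat, NULL,
                              kAudioFileFlags_EraseFile, &branchFile);
    ExtAudioFileSetProperty(branchFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(*clientFormat), clientFormat);
    // Priming call: after this, ExtAudioFileWriteAsync should be callable from
    // the render thread without blocking on file I/O.
    ExtAudioFileWriteAsync(branchFile, 0, NULL);
}

// Called from the render callback with the branch's buffer for this cycle.
static void writeFromRenderThread(UInt32 inNumberFrames, AudioBufferList *ioData) {
    ExtAudioFileWriteAsync(branchFile, inNumberFrames, ioData);
}

// Called once recording is finished, off the realtime thread.
static void closeFile(void) {
    ExtAudioFileDispose(branchFile);
}
```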
Many thanks,
Brennon Bortz
Software Researcher - Dundalk Institute of Technology
Ph.D. Composer & Researcher - Sonic Arts Research Centre, Queen's University, Belfast