Hi all,
Our sensor sends multiplexed data over an audio signal. I pull this from the microphone input and demux it in a Remote IO render callback.
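For context, the input stage is currently something along these lines (simplified; it assumes the client-side format on the RemoteIO input element is mono 16-bit integer, and DemuxFrames() is a placeholder name for our own demux routine):

#include <AudioUnit/AudioUnit.h>

typedef struct {
    AudioUnit ioUnit;      // the RemoteIO unit
    SInt16   *scratch;     // mono scratch buffer, sized for the max frames per slice
} MicDemuxContext;

extern void DemuxFrames(const SInt16 *mono, UInt32 frames);  // our demux code (placeholder)

// Installed on RemoteIO with kAudioOutputUnitProperty_SetInputCallback.
static OSStatus MicInputCallback(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    MicDemuxContext *ctx = (MicDemuxContext *)inRefCon;

    // Wrap the scratch buffer so we can pull the captured frames from bus 1.
    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = 1;
    abl.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
    abl.mBuffers[0].mData           = ctx->scratch;

    OSStatus err = AudioUnitRender(ctx->ioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input element */, inNumberFrames, &abl);
    if (err != noErr) return err;

    // Split the multiplexed samples into the two mono signals.
    DemuxFrames(ctx->scratch, inNumberFrames);
    return noErr;
}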
I need to add a couple of things to this signal chain, though. First, I'd like to have a file input in order to test and develop using recorded data. Second, we have several feature-extraction algorithms I need to plug into this chain. After processing input, I need to persist the following data:
- Demuxed signals (two separate mono signals)
- A variable number of signals from these feature-extraction algorithms
Ideally, this will all be persisted as a single multi-channel file, but I'd also like to keep the demuxed and feature-extraction signals around for use (visualization, etc.) while the app is running.
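For the multi-channel file I'm assuming an N-channel CAF written through ExtAudioFile is the right vehicle; roughly this (a sketch, with the channel count and URL as placeholders):

#include <AudioToolbox/AudioToolbox.h>

// Create an N-channel 32-bit float CAF that a render callback can write to
// with ExtAudioFileWriteAsync.
static ExtAudioFileRef CreateCaptureFile(CFURLRef url, UInt32 channels, Float64 sampleRate)
{
    AudioStreamBasicDescription fileFmt = {0};
    fileFmt.mSampleRate       = sampleRate;
    fileFmt.mFormatID         = kAudioFormatLinearPCM;
    fileFmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fileFmt.mChannelsPerFrame = channels;
    fileFmt.mBitsPerChannel   = 32;
    fileFmt.mBytesPerFrame    = channels * sizeof(Float32);
    fileFmt.mFramesPerPacket  = 1;
    fileFmt.mBytesPerPacket   = fileFmt.mBytesPerFrame;

    ExtAudioFileRef file = NULL;
    OSStatus err = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &fileFmt,
                                             NULL, kAudioFileFlags_EraseFile, &file);
    if (err != noErr) return NULL;

    // If the in-memory buffers are non-interleaved floats, set
    // kExtAudioFileProperty_ClientDataFormat so ExtAudioFile interleaves for us.

    // Prime the async write machinery before the first render-thread write.
    ExtAudioFileWriteAsync(file, 0, NULL);
    return file;
}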
I'm puzzling over the most straightforward but flexible way to implement this DSP chain in Core Audio, and I'm wondering if I can get some input from you all.
Here's what I'm thinking, but I'm perfectly happy to be told I'm going about this all wrong (I'm now using an AUGraph instead of just the single Remote IO unit):
- Depending on the current configuration, pull either live microphone input from RemoteIO or file input from an AudioFilePlayer (see the graph sketch after this list).
- Demux signals from RemoteIO in RemoteIO render callback.
- Demux signals from AudioFilePlayer in an attached effect unit.
- Drop the demuxed signals into ring buffers for use elsewhere in the app (see the ring-buffer sketch after this list).
- Send individual demuxed signals to a MatrixMixer.
- Create an individual effect unit for each feature-extraction algorithm; pull from the MatrixMixer outputs and perform feature extraction in their render callbacks (see the tap-callback sketch after this list).
- Drop extracted feature signals into ring buffers for use elsewhere in the app.
- Send the outputs from the feature-extraction effect units, along with the two original demuxed signals, into another MatrixMixer.
- Render MatrixMixer outputs to RemoteIO output--persist to multi-channel file here.
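In graph terms, the file-input configuration would look something like this (error checking and stream-format setup stripped out; the bus numbering is just illustrative):

#include <AudioToolbox/AudioToolbox.h>

// AudioFilePlayer -> MatrixMixer -> RemoteIO. In the live configuration the
// mixer would instead be fed from the RemoteIO input callback.
static AUGraph BuildFileInputGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription playerDesc = { kAudioUnitType_Generator,
        kAudioUnitSubType_AudioFilePlayer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription mixerDesc  = { kAudioUnitType_Mixer,
        kAudioUnitSubType_MatrixMixer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription ioDesc     = { kAudioUnitType_Output,
        kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode playerNode, mixerNode, ioNode;
    AUGraphAddNode(graph, &playerDesc, &playerNode);
    AUGraphAddNode(graph, &mixerDesc,  &mixerNode);
    AUGraphAddNode(graph, &ioDesc,     &ioNode);

    // Player output 0 feeds mixer input 0; mixer output 0 feeds RemoteIO bus 0.
    AUGraphConnectNodeInput(graph, playerNode, 0, mixerNode, 0);
    AUGraphConnectNodeInput(graph, mixerNode,  0, ioNode,    0);

    AUGraphOpen(graph);

    // Before initializing: set the mixer's input/output bus counts
    // (kAudioUnitProperty_ElementCount) and the stream formats on each connection.

    AUGraphInitialize(graph);

    // After initializing: enable and set the matrix volumes, schedule the file
    // region on the player (kAudioUnitProperty_ScheduledFileIDs, etc.), then
    // AUGraphStart().
    return graph;
}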
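For each feature-extraction tap, the render callback would be something like the following (ExtractFeature() is a placeholder for one of our algorithms, and this assumes the first MatrixMixer's output buses can be pulled individually like this):

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioUnit mixerUnit;        // the first MatrixMixer
    UInt32    mixerOutputBus;   // which of its output buses this tap pulls
} FeatureTapContext;

extern void ExtractFeature(const AudioBufferList *audio, UInt32 frames);  // placeholder

// Installed with AUGraphSetNodeInputCallback (or kAudioUnitProperty_SetRenderCallback)
// on the input bus of the unit sitting downstream of the first MatrixMixer.
static OSStatus FeatureTapCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
{
    FeatureTapContext *ctx = (FeatureTapContext *)inRefCon;

    // Pull this tap's mixer output bus straight into ioData.
    OSStatus err = AudioUnitRender(ctx->mixerUnit, ioActionFlags, inTimeStamp,
                                   ctx->mixerOutputBus, inNumberFrames, ioData);
    if (err != noErr) return err;

    // Run the feature-extraction algorithm, then pass the audio through unchanged.
    ExtractFeature(ioData, inNumberFrames);
    return noErr;
}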
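For the ring buffers I'm assuming a simple single-producer/single-consumer buffer is enough, since only the render thread writes and only the main thread reads (Apple's CARingBuffer from the Core Audio utility classes would also do); a minimal sketch:

#include <stdatomic.h>
#include <stdint.h>

// Single-producer/single-consumer float ring buffer; capacity must be a power of two.
typedef struct {
    float            *data;
    uint32_t          capacity;
    _Atomic uint32_t  writeIndex;
    _Atomic uint32_t  readIndex;
} RingBuffer;

// Called from the render callback; drops samples if the reader falls behind.
static void RingBufferWrite(RingBuffer *rb, const float *src, uint32_t count)
{
    uint32_t w = atomic_load_explicit(&rb->writeIndex, memory_order_relaxed);
    uint32_t r = atomic_load_explicit(&rb->readIndex,  memory_order_acquire);
    uint32_t space = rb->capacity - (w - r);
    if (count > space) count = space;
    for (uint32_t i = 0; i < count; i++)
        rb->data[(w + i) & (rb->capacity - 1)] = src[i];
    atomic_store_explicit(&rb->writeIndex, w + count, memory_order_release);
}

// Called from the main thread; returns how many samples were actually read.
static uint32_t RingBufferRead(RingBuffer *rb, float *dst, uint32_t count)
{
    uint32_t r = atomic_load_explicit(&rb->readIndex,  memory_order_relaxed);
    uint32_t w = atomic_load_explicit(&rb->writeIndex, memory_order_acquire);
    uint32_t avail = w - r;
    if (count > avail) count = avail;
    for (uint32_t i = 0; i < count; i++)
        dst[i] = rb->data[(r + i) & (rb->capacity - 1)];
    atomic_store_explicit(&rb->readIndex, r + count, memory_order_release);
    return count;
}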
Finally, if I also have some other audio for playback (nothing above is intended for playback), what's the best way to go about doing this? Should I send these 'playable' audio streams into the last MatrixMixer and, in the RemoteIO render callback, only pass the audio intended for playback on to the hardware output? Or is it possible to set up two separate AUGraphs, one for this signal processing and another for playback?
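To make the first option concrete, this is the sort of render callback I have in mind on RemoteIO's bus 0 (a sketch: it assumes the last MatrixMixer is pulled manually here rather than graph-connected, that its output is a single N-channel non-interleaved float bus with the 'playable' channels first, and that the capture file's client format matches and was primed for ExtAudioFileWriteAsync):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

typedef struct {
    AudioUnit        finalMixer;        // the last MatrixMixer
    AudioBufferList *mixerBuffers;      // preallocated N-channel buffer list (allocation not shown)
    ExtAudioFileRef  captureFile;       // multi-channel file, primed for async writes
    UInt32           playbackChannels;  // leading channels that should reach the speaker
} OutputContext;

// Installed with kAudioUnitProperty_SetRenderCallback on RemoteIO bus 0.
static OSStatus OutputCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    OutputContext *ctx = (OutputContext *)inRefCon;

    // Pull everything (demuxed + feature + playable signals) from the mixer.
    for (UInt32 b = 0; b < ctx->mixerBuffers->mNumberBuffers; b++)
        ctx->mixerBuffers->mBuffers[b].mDataByteSize = inNumberFrames * sizeof(Float32);
    OSStatus err = AudioUnitRender(ctx->finalMixer, ioActionFlags, inTimeStamp,
                                   0, inNumberFrames, ctx->mixerBuffers);
    if (err != noErr) return err;

    // Persist all channels to the multi-channel file.
    ExtAudioFileWriteAsync(ctx->captureFile, inNumberFrames, ctx->mixerBuffers);

    // Hand only the playable channels on to the hardware; silence the rest of ioData.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        if (b < ctx->playbackChannels)
            memcpy(ioData->mBuffers[b].mData, ctx->mixerBuffers->mBuffers[b].mData,
                   ioData->mBuffers[b].mDataByteSize);
        else
            memset(ioData->mBuffers[b].mData, 0, ioData->mBuffers[b].mDataByteSize);
    }
    return noErr;
}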
Once again, if I'm approaching this completely the wrong way around, please feel free to let me know.
Thank you,