Hi Brennon,
I'm by no means an expert on using Core Audio for non-audio sensor data, though it's an area that interests me quite a bit. A couple of things jump out as I read your post…
First "create individual effects units for each feature extraction algo": Do you mean custom Audio Units which you can add to the chain? If you are on iOS, than you can't yet make custom Audio Units. Either way you could attach a Render Notify Callback to the Mixer Unit and intercept/process the audio…
With regard to the non-audio DSP, though, I'd avoid using mixer units for it; they would just add overhead and complexity you don't need. I'd be inclined to keep all of this signal-processing code in a single render callback, with a set of buffers for sharing data with the rest of the app, and use ExtAudioFileWriteAsync to save out your multichannel files. If the DSP is heavy, you might have the RCB code simply collect the microphone data into the buffers and let a worker thread do the processing as fast as it can…
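To give a flavour of what I mean (again only a sketch; RecorderState, DemuxSamples and the ring buffer fields are made-up names -- TPCircularBuffer or similar would do for the buffers):

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioUnit        remoteIOUnit;
    AudioBufferList *micBuffers;   // pre-allocated off the render thread
    ExtAudioFileRef  outFile;      // from ExtAudioFileCreateWithURL
    void            *ringBufferA;  // demuxed signal 1, shared with the app
    void            *ringBufferB;  // demuxed signal 2
} RecorderState;

// Your demux routine (hypothetical):
void DemuxSamples(const AudioBufferList *src, void *outA, void *outB,
                  UInt32 frames);

static OSStatus InputRCB(void                        *inRefCon,
                         AudioUnitRenderActionFlags  *ioActionFlags,
                         const AudioTimeStamp        *inTimeStamp,
                         UInt32                       inBusNumber,
                         UInt32                       inNumberFrames,
                         AudioBufferList             *ioData)
{
    RecorderState *state = (RecorderState *)inRefCon;

    // Pull the mic samples from RemoteIO's input bus (bus 1).
    OSStatus err = AudioUnitRender(state->remoteIOUnit, ioActionFlags,
                                   inTimeStamp, 1, inNumberFrames,
                                   state->micBuffers);
    if (err != noErr) return err;

    // Demux into the ring buffers for the rest of the app.
    DemuxSamples(state->micBuffers, state->ringBufferA,
                 state->ringBufferB, inNumberFrames);

    // ExtAudioFileWriteAsync is safe on the render thread provided you
    // primed it earlier with a NULL buffer list. Here it just saves the
    // raw (still multiplexed) mic signal; in practice you'd assemble an
    // AudioBufferList of the demuxed + feature channels for your
    // multichannel file.
    return ExtAudioFileWriteAsync(state->outFile, inNumberFrames,
                                  state->micBuffers);
}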
I'd be inclined to structure it as follows:
1. RemoteIO on its own, or in a separate graph, which grabs the microphone data, then demuxes, processes and saves it to files/buffers (rough setup sketch after point 2). Using C++ might help keep the code modular...
2. Another RemoteIO, or a separate graph with a single RemoteIO, for the actual audio, or even something lighter weight like AVAudioPlayer if sample-accurate timing isn't critical.
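For (1), the capture-only RemoteIO setup would look roughly like this (input bus enabled, output bus disabled since nothing here gets played; InputRCB and state are from the sketch above, and you'd also want to set your stream format on bus 1 before initialising):

AudioComponentDescription desc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
};
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit ioUnit;
AudioComponentInstanceNew(comp, &ioUnit);

UInt32 one = 1, zero = 0;
// Enable input on bus 1 (the microphone)...
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));
// ...and disable output on bus 0, since this unit never plays anything.
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &zero, sizeof(zero));

// Attach the capture callback.
AURenderCallbackStruct rcb = { .inputProc = InputRCB,
                               .inputProcRefCon = &state };
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 1, &rcb, sizeof(rcb));

AudioUnitInitialize(ioUnit);
AudioOutputUnitStart(ioUnit);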
Personally, on iOS at least, I have yet to see anything that's sold me on the benefits of using an AUGraph for fairly simple setups, except perhaps when you want to use some of the built-in AudioUnits.
Hope this helps…
Gratefully,
Hari Karam Singh
Using the Core Audio chain for non-audio sensors is something I've been interested in as of late...
On 5 Jun 2013, at 16:11, Brennon Bortz <email@hidden> wrote:

Hi all,
Our sensor sends multiplexed data over an audio signal. I pull this from the microphone input and am demuxing it in a Remote IO render callback.
I need to add a couple of things to this signal chain, though. First, I'd like to have a file input in order to test and develop using recorded data. Second, we have several feature-extraction algorithms I need to plug into this chain. After processing input, I need to persist the following data:
- Demuxed signals (two separate mono signals)
- A variable number of signals from these feature extraction algorithms
Ideally, this all will be persisted as a single multi-channel file, but I'd also like to keep the demuxed and feature-extraction signals around for use (visualization, etc.) during the run of the app.
I'm puzzling over the most straightforward but flexible way to implement this DSP chain in CA, and wondering if I can get some input from you all.
Here's what I'm thinking, but I'm perfectly happy to be told I'm going about this all wrong (using an AUGraph now instead of just the one Remote IO unit):
- Depending on current configuration, either pull live microphone input from RemoteIO or file input from an AudioFilePlayer.
- Demux signals from RemoteIO in RemoteIO render callback.
- Demux signals from AudioFilePlayer in an attached effect unit.
- Drop demuxed signals into ring buffers for use elsewhere in the app.
- Send individual demuxed signals to a MatrixMixer.
- Create individual effect units for each feature extraction algorithm--pull from MatrixMixer outputs and perform feature-extraction in render callbacks.
- Drop extracted feature signals into ring buffers for use elsewhere in the app.
- Send outputs from feature-extraction effect units, as well as two original demuxed signals into another MatrixMixer.
- Render MatrixMixer outputs to RemoteIO output--persist to multi-channel file here.
Finally, if I also have some other audio for playback (nothing above is intended for playback), what's the best way to go about doing this? Should I send these 'playable' audio streams into the last MatrixMixer and, in the RemoteIO render callback, pass only the audio intended for playback on to the hardware output? Or, is it possible to set up two AUGraphs altogether--one for this signal processing and another for playback?
Once again, if I'm approaching this completely the wrong way around, please feel free to let me know.
Thank you,