No, no need to combine signals. It's purely for development purposes: I don't always want to have the sensor connected to generate data, so playing back recorded data would be useful.
On Jun 5, 2013, at 4:42 PM, Hari Karam Singh < email@hidden> wrote: That sounds fine, though the use of the audio file player AU seems a little strange to me. Do you need to combine a previous signal with a new signal because the combined, synchronised output somehow makes sense, or are you just trying to avoid repeating code?
Btw, modularity without AUGraph overhead is the reason I tend towards C++ in my RCBs.
Also, if you haven't already, take a look at the vDSP functions in the Accelerate framework. They leverage the vector ops built into the CPU for ultra-fast processing. For example, vDSP_vadd() is your mixer in one line :)
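In case it helps, here's a minimal sketch of what I mean (buffer names are made up, one mono block of n frames):

    #include <Accelerate/Accelerate.h>

    // Sum two mono float buffers into a third, element by element.
    // inA, inB and outMix are illustrative names; n is the frame count.
    void mix_buffers(const float *inA, const float *inB,
                     float *outMix, vDSP_Length n)
    {
        // outMix[i] = inA[i] + inB[i], stride 1 on every buffer
        vDSP_vadd(inA, 1, inB, 1, outMix, 1, n);
    }

vDSP_vsadd, vDSP_vsmul and friends cover the gain-scaling side of a mixer in the same one-call style.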
Hari Karam Singh
On 5 Jun 2013, at 20:01, Brennon Bortz < email@hidden> wrote: Thanks for your thoughtful response, Hari.
I was mistakenly thinking that I could use a 'generic' effect unit in which to do my DSP for feature extraction. This is why I mentioned effect units, but I didn't mean that I'd be creating a custom AudioUnit altogether. If I were to use the multiple-mixer setup, I could just do this processing in render callbacks on the inputs of the mixer unit. Your advice about performing this work in the IO render callback makes sense, though.
However, the need for both live and file input was my reason for considering a graph at all. Furthermore, we are developing additional feature-extraction algorithms, and the ability to modularly 'plug' this functionality into and out of a graph is attractive to me. Even if these algorithms aren't encapsulated by individual callbacks and I do all of the demuxing/DSP in one go in a single render callback, I'd like to do that in one place irrespective of what is providing the input. In other words, I'd prefer not to duplicate this code for both the file input and the mic input. To do this, it seems I'll have to use at least a few units, whether or not I use a graph. Does this make sense?
If my thinking is correct, does this simplified approach sound reasonable?
- Pull input from mic using RemoteIO and from a file using AudioFilePlayer, zeroing samples from RemoteIO if I'm using file input
- Merge inputs with a Merger unit
- Pull the stream from the Merger with RemoteIO and perform processing in a single render callback (rough wiring sketch below)
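For concreteness, here's roughly how I picture the wiring as an AUGraph, with a MultiChannelMixer standing in for the Merger step. Stream formats, error checking, the file-player scheduling and the render callback that does the actual work are all left out, so this is a starting point rather than working code:

    #include <AudioToolbox/AudioToolbox.h>

    // Rough wiring only: AudioFilePlayer -> mixer -> RemoteIO.
    static AUGraph buildGraph(void)
    {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription io = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponentDescription player = {
            .componentType         = kAudioUnitType_Generator,
            .componentSubType      = kAudioUnitSubType_AudioFilePlayer,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponentDescription mixer = {
            .componentType         = kAudioUnitType_Mixer,
            .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };

        AUNode ioNode, playerNode, mixerNode;
        AUGraphAddNode(graph, &io, &ioNode);
        AUGraphAddNode(graph, &player, &playerNode);
        AUGraphAddNode(graph, &mixer, &mixerNode);
        AUGraphOpen(graph);

        // File input feeds mixer bus 0; live mic data would be pulled
        // with AudioUnitRender from a callback on mixer bus 1 instead.
        AUGraphConnectNodeInput(graph, playerNode, 0, mixerNode, 0);
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        AUGraphInitialize(graph);
        return graph;   // AUGraphStart(graph) once formats/callbacks are set up
    }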
Thanks again, Brennon
On Jun 5, 2013, at 12:07 PM, Hari Karam Singh < email@hidden> wrote: Hi Brennon,
I'm by no means an expert on using Core Audio for non-audio sensor info, though it's an area that interests me quite a bit. A couple of things jump out as I read your post…
First "create individual effects units for each feature extraction algo": Do you mean custom Audio Units which you can add to the chain? If you are on iOS, than you can't yet make custom Audio Units. Either way you could attach a Render Notify Callback to the Mixer Unit and intercept/process the audio…
But with regard to the non-audio DSP, I'd consider avoiding mixer units for it; they would just add overhead and complexity you don't need. I'd be inclined to keep all of this signal-processing code in a single render callback, with a set of buffers for sharing with the rest of the app, and use ExtAudioFileWriteAsync to save out your multichannel files. If the DSP were heavy, you might just have the RCB code collect the microphone data into the buffers and have a worker thread do the processing as fast as it is able…
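Roughly what I mean by the notify/write idea; the unit and file ref are placeholders and the DSP itself is elided:

    #include <AudioToolbox/AudioToolbox.h>

    // Post-render tap: inspect/process the rendered audio and queue it
    // for writing without blocking the render thread.
    static OSStatus renderNotify(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            ExtAudioFileRef file = (ExtAudioFileRef)inRefCon;
            // ... demuxing / feature extraction on ioData would go here ...
            ExtAudioFileWriteAsync(file, inNumberFrames, ioData); // non-blocking
        }
        return noErr;
    }

    // Attach it to whichever unit renders the signal you want to capture:
    //   AudioUnitAddRenderNotify(someUnit, renderNotify, extAudioFile);
    // Note: prime ExtAudioFileWriteAsync once with 0 frames and a NULL
    // buffer list from a non-real-time thread before rendering starts.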
I'd be inclined to structure it as follows:
1. RemoteIO on its own, or in a separate graph, which grabs the microphone data, demuxes, processes and saves it to files/buffers. Using C++ might assist in keeping the code modular (see the input-callback sketch after this list)...
2. Another RemoteIO, or a separate graph with a single RemoteIO, for the actual audio, or even something lighter weight like AVAudioPlayer if sample-accurate timing isn't critical.
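A rough sketch of what 1. could look like: a standalone RemoteIO with its mic input enabled and an input callback. Audio session setup, the stream format (assumed mono float here) and error handling are omitted, and the names are placeholders:

    #include <AudioToolbox/AudioToolbox.h>

    // Called whenever the mic has new samples; pull them with AudioUnitRender,
    // then demux/process/store here (or hand off to a worker thread).
    static OSStatus inputCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) /* NULL here */
    {
        AudioUnit ioUnit = (AudioUnit)inRefCon;

        static float scratch[4096];              // assumes inNumberFrames <= 4096
        AudioBufferList bufferList = { .mNumberBuffers = 1 };
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(float);
        bufferList.mBuffers[0].mData           = scratch;

        OSStatus err = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp,
                                       1 /* input bus */, inNumberFrames,
                                       &bufferList);
        // ... demux / process / push into shared buffers here ...
        return err;
    }

    static void enableMicInput(AudioUnit ioUnit)
    {
        UInt32 one = 1;
        // Bus 1 is the input (mic) side of RemoteIO
        AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &one, sizeof(one));

        AURenderCallbackStruct cb = { .inputProc = inputCallback,
                                      .inputProcRefCon = ioUnit };
        AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                             kAudioUnitScope_Global, 0, &cb, sizeof(cb));
    }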
Personally, on iOS at least, I have yet to see anything that's sold me on the benefits of using an AUGraph for fairly simple setups, except perhaps if you want to use some of the built-in AudioUnits.
Hope this helps…
Gratefully, Hari Karam Singh
Using the Core Audio chain for non-audio sensors is something I've been interested in as of late...
On 5 Jun 2013, at 16:11, Brennon Bortz < email@hidden> wrote: Hi all,
Our sensor sends multiplexed data over an audio signal. I pull this from the microphone input and am demuxing it in a RemoteIO render callback.
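(The demux itself is simple. Purely as an illustration, and assuming the two signals simply alternate samples on one mono channel, which is not necessarily our actual scheme, it amounts to something like:

    // Illustrative only: split alternating samples of one mono stream
    // into two mono signals. framePairs is the number of sample pairs.
    static void demux(const float *muxed, float *chanA, float *chanB,
                      int framePairs)
    {
        for (int i = 0; i < framePairs; i++) {
            chanA[i] = muxed[2 * i];
            chanB[i] = muxed[2 * i + 1];
        }
    }

The questions below are about everything around that step.)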
I need to add a couple of things to this signal chain, though. First, I'd like to have a file input in order to test and develop using recorded data. Second, we have several feature-extraction algorithms I need to plug into this chain. After processing input, I need to persist the following data:
- Demuxed signals (two separate mono signals)
- A variable number of signals from these feature extraction algorithms
Ideally, this all will be persisted as a single multi-channel file, but I'd also like to keep the demuxed and feature-extraction signals around for use (visualization, etc.) during the run of the app.
I'm puzzling over the most straightforward but flexible way to implement this DSP chain in CA, and wondering if I can get some input from you all.
Here's what I'm thinking, but I'm perfectly happy to be told I'm going about this all wrong (using an AUGraph now instead of just the one RemoteIO unit):
- Depending on current configuration, either pull live microphone input from RemoteIO or file input from an AudioFilePlayer.
- Demux signals from RemoteIO in the RemoteIO render callback.
- Demux signals from AudioFilePlayer in an attached effect unit.
- Drop demuxed signals into ring buffers for use elsewhere in the app (see the ring-buffer sketch after this list).
- Send individual demuxed signals to a MatrixMixer.
- Create individual effect units for each feature extraction algorithm--pull from MatrixMixer outputs and perform feature-extraction in render callbacks.
- Drop extracted feature signals into ring buffers for use elsewhere in the app.
- Send outputs from the feature-extraction effect units, as well as the two original demuxed signals, into another MatrixMixer.
- Render MatrixMixer outputs to the RemoteIO output--persist to a multi-channel file here.
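Since a couple of the steps above drop samples into ring buffers for the rest of the app, here's the kind of minimal single-producer/single-consumer ring buffer I have in mind; names are placeholders, and an existing library such as Michael Tyson's TPCircularBuffer would do the job just as well:

    #include <stdatomic.h>
    #include <stddef.h>

    // Lock-free ring buffer: one writer (the render callback), one reader
    // (e.g. the visualisation code). Capacity must be a power of two.
    #define RB_CAPACITY 8192

    typedef struct {
        float         data[RB_CAPACITY];
        atomic_size_t head;   // total samples written (producer only)
        atomic_size_t tail;   // total samples read    (consumer only)
    } RingBuffer;

    // Producer side: called from the render callback; never blocks or allocates.
    static size_t rb_write(RingBuffer *rb, const float *src, size_t n)
    {
        size_t head  = atomic_load_explicit(&rb->head, memory_order_relaxed);
        size_t tail  = atomic_load_explicit(&rb->tail, memory_order_acquire);
        size_t space = RB_CAPACITY - (head - tail);
        if (n > space) n = space;                  // drop what doesn't fit
        for (size_t i = 0; i < n; i++)
            rb->data[(head + i) & (RB_CAPACITY - 1)] = src[i];
        atomic_store_explicit(&rb->head, head + n, memory_order_release);
        return n;
    }

    // Consumer side: called from the app/UI thread.
    static size_t rb_read(RingBuffer *rb, float *dst, size_t n)
    {
        size_t tail  = atomic_load_explicit(&rb->tail, memory_order_relaxed);
        size_t head  = atomic_load_explicit(&rb->head, memory_order_acquire);
        size_t avail = head - tail;
        if (n > avail) n = avail;
        for (size_t i = 0; i < n; i++)
            dst[i] = rb->data[(tail + i) & (RB_CAPACITY - 1)];
        atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
        return n;
    }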
Finally, if I also have some other audio for playback (nothing above is intended for playback), what's the best way to go about that? Should I send these 'playable' audio streams into the last MatrixMixer and, in the RemoteIO render callback, only pass the audio intended for playback on to the hardware output? Or is it possible to set up two AUGraphs altogether--one for this signal processing and another for playback?
Once again, if I'm approaching this completely the wrong way around, please feel free to let me know.
Thank you,