
Translating a signal chain to CA architecture


  • Subject: Translating a signal chain to CA architecture
  • From: Brennon Bortz <email@hidden>
  • Date: Wed, 05 Jun 2013 11:11:14 -0400

Hi all,

Our sensor sends multiplexed data over an audio signal. I pull this from the microphone input and demux it in a Remote IO render callback.
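For concreteness, a rough sketch of the kind of input callback I mean is below. It assumes the Remote IO input element's client format has been set to mono Float32, that inNumberFrames never exceeds the scratch buffer, and that the sensor simply alternates samples between its two channels; DemuxState and the even/odd rule are placeholders, not the real demux scheme.

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioUnit remoteIO;   // the Remote IO unit
    float    *chanA;      // scratch output for the first demuxed signal
    float    *chanB;      // scratch output for the second demuxed signal
} DemuxState;

// Registered on Remote IO with kAudioOutputUnitProperty_SetInputCallback,
// so ioData is NULL and we render the captured samples ourselves.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    DemuxState *state = (DemuxState *)inRefCon;

    // Pull the freshly captured microphone block from the input element (bus 1).
    float samples[4096];                       // assumes inNumberFrames <= 4096
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(float);
    bufList.mBuffers[0].mData = samples;

    OSStatus err = AudioUnitRender(state->remoteIO, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, &bufList);
    if (err != noErr) return err;

    // Placeholder demux rule: assume the sensor alternates samples between its
    // two channels. The real demux logic would replace this loop.
    for (UInt32 i = 0; i < inNumberFrames / 2; i++) {
        state->chanA[i] = samples[2 * i];
        state->chanB[i] = samples[2 * i + 1];
    }
    // chanA/chanB would then be pushed into ring buffers or fed onward.
    return noErr;
}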

I need to add a couple of things to this signal chain, though. First, I'd like to have a file input in order to test and develop using recorded data. Second, we have several feature-extraction algorithms I need to plug into this chain. After processing input, I need to persist the following data:

  • Demuxed signals (two separate mono signals)
  • A variable number of signals from these feature extraction algorithms

Ideally, all of this would be persisted as a single multi-channel file, but I'd also like to keep the demuxed and feature-extraction signals around (for visualization, etc.) during the run of the app.
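For the multi-channel file I'm assuming something along these lines with ExtAudioFile: an interleaved Float32 CAF with one channel per persisted signal. The URL, channel count, and sample rate are placeholders for whatever the app actually uses.

#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef CreateCaptureFile(CFURLRef url, UInt32 channels, Float64 sampleRate)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = channels;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = channels * sizeof(Float32);
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

    ExtAudioFileRef file = NULL;
    ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &fmt, NULL,
                              kAudioFileFlags_EraseFile, &file);
    // If the buffers handed over at render time use a different layout,
    // set kExtAudioFileProperty_ClientDataFormat here so ExtAudioFile converts.
    return file;
}

On the render thread, frames would then be handed off with ExtAudioFileWriteAsync(file, frames, bufferList), which queues the data and performs the actual disk I/O off the render thread.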

I'm puzzling over the most straightforward but flexible way to implement this DSP chain in CA, and wondering if I can get some input from you all.

Here's what I'm thinking (now using an AUGraph instead of just the single Remote IO unit), though I'm perfectly happy to be told I'm going about this all wrong:

  1. Depending on the current configuration, pull either live microphone input from Remote IO or file input from an AudioFilePlayer.
    1. Demux signals from Remote IO in the Remote IO render callback.
    2. Demux signals from the AudioFilePlayer in an attached effect unit.
    3. Drop the demuxed signals into ring buffers for use elsewhere in the app.
  2. Send the individual demuxed signals to a MatrixMixer.
  3. Create an individual effect unit for each feature-extraction algorithm; pull from the MatrixMixer outputs and perform feature extraction in render callbacks.
    1. Drop the extracted feature signals into ring buffers for use elsewhere in the app.
  4. Send the outputs from the feature-extraction effect units, as well as the two original demuxed signals, into another MatrixMixer.
  5. Render the MatrixMixer outputs to the Remote IO output, and persist to the multi-channel file here (a rough sketch of the graph wiring follows this list).
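The bare-bones wiring I have in mind looks roughly like the sketch below, using the C AUGraph API. The feature-extraction units and the second MatrixMixer are omitted, the bus numbers are placeholders, and the MatrixMixer would still need its element counts and crosspoint volumes configured before it passes audio.

#include <AudioToolbox/AudioToolbox.h>

// Skeleton of the graph: AudioFilePlayer -> MatrixMixer -> Remote IO.
static AUGraph BuildGraph(AudioUnit *outRemoteIO, AudioUnit *outMixer, AudioUnit *outPlayer)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc     = { kAudioUnitType_Output,    kAudioUnitSubType_RemoteIO,
                                             kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription mixerDesc  = { kAudioUnitType_Mixer,     kAudioUnitSubType_MatrixMixer,
                                             kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription playerDesc = { kAudioUnitType_Generator, kAudioUnitSubType_AudioFilePlayer,
                                             kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode ioNode, mixerNode, playerNode;
    AUGraphAddNode(graph, &ioDesc,     &ioNode);
    AUGraphAddNode(graph, &mixerDesc,  &mixerNode);
    AUGraphAddNode(graph, &playerDesc, &playerNode);

    AUGraphOpen(graph);
    AUGraphNodeInfo(graph, ioNode,     NULL, outRemoteIO);
    AUGraphNodeInfo(graph, mixerNode,  NULL, outMixer);
    AUGraphNodeInfo(graph, playerNode, NULL, outPlayer);

    // File-input configuration: player output 0 -> mixer input 0.
    // (In the live configuration the demuxed mic signals would instead be fed
    //  to the mixer from a render callback.)
    AUGraphConnectNodeInput(graph, playerNode, 0, mixerNode, 0);

    // Mixer output 0 -> Remote IO output element (bus 0).
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    // The caller still has to set stream formats, configure the MatrixMixer,
    // schedule the player's file region, and then call AUGraphStart(graph).
    return graph;
}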

Finally, if I also have some other audio for playback (nothing above is intended for playback), what's the best way to go about that? Should I send these 'playable' audio streams into the last MatrixMixer and, in the Remote IO render callback, pass only the audio intended for playback on to the hardware output? Or is it possible to set up two AUGraphs altogether, one for this signal processing and another for playback?
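To make that first option concrete, the sort of thing I'm imagining is sketched below: rather than connecting the last MatrixMixer straight to Remote IO, register a render callback on Remote IO's output element (kAudioUnitProperty_SetRenderCallback on bus 0), pull the full mix from the mixer there, persist every channel, and copy only the playback channels to the hardware. It assumes non-interleaved buffers with matching frame counts, where the last playbackChans channels carry the playback audio; OutputTapState and its fields are illustrative names only.

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

typedef struct {
    AudioUnit        finalMixer;    // the last MatrixMixer in the chain
    ExtAudioFileRef  captureFile;   // multi-channel capture file
    AudioBufferList *mixBuffers;    // preallocated for every mixer output channel
    UInt32           playbackChans; // how many trailing channels are playback audio
} OutputTapState;

static OSStatus OutputCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    OutputTapState *state = (OutputTapState *)inRefCon;

    // Pull one block of the full (analysis + playback) mix from the mixer.
    OSStatus err = AudioUnitRender(state->finalMixer, ioActionFlags, inTimeStamp,
                                   0, inNumberFrames, state->mixBuffers);
    if (err != noErr) return err;

    // Persist every channel; ExtAudioFileWriteAsync defers the disk I/O, so it
    // can be called from the render thread.
    ExtAudioFileWriteAsync(state->captureFile, inNumberFrames, state->mixBuffers);

    // Hardware gets only the playback channels: silence everything, then copy
    // the last playbackChans mixer channels into the output buffers.
    UInt32 total = state->mixBuffers->mNumberBuffers;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        if (i < state->playbackChans && total >= state->playbackChans) {
            memcpy(ioData->mBuffers[i].mData,
                   state->mixBuffers->mBuffers[total - state->playbackChans + i].mData,
                   ioData->mBuffers[i].mDataByteSize);
        }
    }
    return noErr;
}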

Once again, if I'm approaching this completely the wrong way around, please feel free to let me know.

Thank you,

Brennon Bortz
email@hidden



