
Re: CoreAudio, Mixing and Panning, and the Audio Toolbox


  • Subject: Re: CoreAudio, Mixing and Panning, and the Audio Toolbox
  • From: Chris Rogers <email@hidden>
  • Date: Wed, 6 Mar 2002 12:20:05 -0800

The stereo mixer AudioUnit ('smxr') takes N mono or stereo inputs and
mixes them to a single stereo-interleaved output with volume and pan
for each channel (currently pan is not supported for stereo channels).
It's not very complicated to use. Simply connect up the inputs in the
standard way, either to the outputs of other AudioUnits with the
kAudioUnitProperty_MakeConnection property, or through a user callback
with the kAudioUnitProperty_SetInputCallback property.
You may also be interested in using the
interleaver and deinterleaver AudioUnits.

Chris
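The connection pattern Chris describes can be sketched in C. The AudioUnitConnection struct and AudioUnitSetProperty call are from the real AudioUnit framework, but the typedefs, constant values, and the stub AudioUnitSetProperty below are stand-ins (an assumption for illustration) so the sketch is self-contained; on Mac OS X you would include <AudioUnit/AudioUnit.h> and link against the framework instead.

```c
#include <string.h>

/* Stand-in types and constants; the real ones live in the AudioUnit headers. */
typedef int OSStatus;
typedef unsigned int UInt32;
typedef void *AudioUnit;   /* stand-in for the framework's opaque type */

enum { kAudioUnitProperty_MakeConnection = 1, kAudioUnitScope_Input = 1 };

typedef struct {
    AudioUnit sourceAudioUnit;   /* unit whose output feeds the mixer */
    UInt32    sourceOutputNumber;
    UInt32    destInputNumber;   /* which mixer input to drive */
} AudioUnitConnection;

/* Stub: records the connection instead of calling into CoreAudio. */
static AudioUnitConnection last_connection;

static OSStatus AudioUnitSetProperty(AudioUnit unit, UInt32 propID,
                                     UInt32 scope, UInt32 element,
                                     const void *data, UInt32 size)
{
    (void)unit; (void)propID; (void)scope; (void)element;
    memcpy(&last_connection, data, size);
    return 0; /* noErr */
}

/* Connect `source`'s output 0 to the mixer's input number `bus`. */
static OSStatus connect_to_mixer(AudioUnit mixer, AudioUnit source, UInt32 bus)
{
    AudioUnitConnection conn = { source, 0, bus };
    return AudioUnitSetProperty(mixer, kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input, bus,
                                &conn, sizeof(conn));
}
```

The same AudioUnitSetProperty call with the input-callback property would replace the connection when the data comes from your own code rather than another unit.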



I have written a small program that streams any number of audio files to the default output device. I mix the audio data into the device's buffer "manually", by looping through my ring buffers and scaling and adding them into the device's buffer. It works well, and I'm excited to have finally started getting something done in CoreAudio.
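The manual scale-and-add mixing described above might look like the following minimal sketch (the helper name is hypothetical; the real program pulls its frames from ring buffers):

```c
#include <stddef.h>

/* Mix N mono source buffers into one output buffer, scaling each source
 * by its own gain ("volume") and summing, as described in the post. */
static void mix_into(float *out, size_t frames,
                     const float *const *sources, const float *gains,
                     size_t nsources)
{
    for (size_t f = 0; f < frames; f++) {
        float acc = 0.0f;
        for (size_t s = 0; s < nsources; s++)
            acc += gains[s] * sources[s][f];
        /* clip to the [-1, 1] range expected for Float32 audio streams */
        if (acc > 1.0f) acc = 1.0f;
        if (acc < -1.0f) acc = -1.0f;
        out[f] = acc;
    }
}
```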

Ultimately, I'd like to stream arbitrary audio data to arbitrary audio outputs, and to do so in a way that's internally clean and efficient. I know how I'd accomplish that with my current technique, but I wonder whether I'd be better off using the Audio Toolbox. I see that there's a stereo mixer unit ('smxr'), which suggests that you can associate a mixer unit with a physical device and feed your data to the unit, letting it do the math for panning and mixing. Is that indeed what it's for? If so, is there any sample code that shows such a setup in use? And is there any advantage to using the Audio Toolbox over rolling my own scheme?

I have struggled through the "Audio Toolbox" chapter of the CoreAudio PDF, and I see that there is a Java example which writes audio data to a device via the default output unit, but I don't see how to put those pieces together in a way that combines the HAL and Toolbox worlds.

To restate what I ultimately want: any number of independent channels, each routed to a chosen physical device stream; when more than one channel is associated with a stream, I wish to mix them.

Thanks, in advance, for whatever help and guidance you can provide.
--
Jonathan Feinberg email@hidden Inwood, NY, NY
http://MrFeinberg.com/
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.



  • Follow-Ups:
    • Re: CoreAudio, Mixing and Panning, and the Audio Toolbox
      • From: Bill Stewart <email@hidden>
    • Re: CoreAudio, Mixing and Panning, and the Audio Toolbox
      • From: Jonathan Feinberg <email@hidden>
  • References:
    • CoreAudio, Mixing and Panning, and the Audio Toolbox
      • From: Jonathan Feinberg <email@hidden>
