Re: Easy request...
- Subject: Re: Easy request...
- From: Jeff Moore <email@hidden>
- Date: Mon, 22 Oct 2001 14:52:54 -0700
on 10/17/01 8:49 PM, Andy O'Meara <email@hidden> wrote:
> Hey there... I make two visual plugins for Mac OS (iTunes+Audion), and all
> that remains is to make standalone versions of my stuff for OS X (using
> coreaudio to access sound input)... Since my current implementation is
> non-carbon, my standalone isn't OS X savvy...
The first step is going to be to Carbonize your application. You won't get
too far without it. Since your app takes over the screen, you might need to
do more, depending on how well Carbon supports the APIs you use, but I don't
know for sure.
> So, if someone could point me at some sample code, I'd be set... The goal
> is to have a simple wrapper that opens a given input channel and keeps a .1
> sec (or so) buffer for easy access...
Once that's done, you can (as was previously suggested) continue to use the
Sound Input Manager as you do on 9. This would probably be the quickest
route for you.
If you want to drop down to the HAL and use it to get input data from a
device, the basic procedure is pretty simple (I don't have any polished
sample code handy for this, but I'll sketch the calls inline below):
1) Get the AudioDeviceID of the device you want to talk to.
There are global properties that can give you the list of all devices, or
you can just get the default device. kAudioHardwarePropertyDevices and
kAudioHardwarePropertyDefaultInputDevice help you out here.
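
Off the top of my head, grabbing the default input device looks something
like this (an untested sketch; double check the exact signatures against
<CoreAudio/AudioHardware.h>):

#include <CoreAudio/AudioHardware.h>

static AudioDeviceID GetDefaultInputDevice(void)
{
    AudioDeviceID device = kAudioDeviceUnknown;
    UInt32 size = sizeof(device);
    OSStatus err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultInputDevice,
                                            &size, &device);
    if (err != noErr)
        return kAudioDeviceUnknown;
    return device;
}

/* To enumerate every device instead, call AudioHardwareGetPropertyInfo with
   kAudioHardwarePropertyDevices to learn the size, allocate an AudioDeviceID
   array that big, and fetch it with AudioHardwareGetProperty. */
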
2) Query and/or set the device to the format and buffer size you want to use
and sign up to listen for changes to these properties.
The buffer size is accessed via kAudioDevicePropertyBufferFrameSize.
The device's stream topology is described in the property
kAudioDevicePropertyStreamConfiguration.
For simple devices (that is, devices that have only one stream), you can
access the format related properties (kAudioDevicePropertyStreamFormat and
friends) through the device's global channel (channel 0).
For complex devices, you will need to ask and configure each individual
stream for its format. The same properties apply, but you will need to
access them through the AudioStream* routines that take AudioStreamIDs.
The list of the AudioStreamIDs for the device's streams is accessed via
kAudioDevicePropertyStreams.
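
Pulling the single-stream pieces of step 2 together, the calls look roughly
like this (again an untested sketch; the 512-frame buffer size is just an
example pick):

#include <CoreAudio/AudioHardware.h>

static OSStatus ConfigureInput(AudioDeviceID device)
{
    OSStatus err;
    UInt32 size;

    /* Ask for the current input format on the global channel (channel 0). */
    AudioStreamBasicDescription format;
    size = sizeof(format);
    err = AudioDeviceGetProperty(device, 0, 1 /* isInput */,
                                 kAudioDevicePropertyStreamFormat,
                                 &size, &format);
    if (err != noErr) return err;

    /* Set how many frames the HAL hands you per IOProc call. 512 is an
       arbitrary example; keep your 0.1 sec of history in your own ring
       buffer (see step 3 below). */
    UInt32 frames = 512;
    err = AudioDeviceSetProperty(device, NULL, 0, 1 /* isInput */,
                                 kAudioDevicePropertyBufferFrameSize,
                                 sizeof(frames), &frames);

    /* For a multi-stream device, fetch kAudioDevicePropertyStreams the same
       way (it returns an array of AudioStreamIDs) and then use
       AudioStreamGetProperty/AudioStreamSetProperty on each stream. */
    return err;
}
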
Don't forget to add your AudioDevice and AudioStream property listener procs
for the properties that concern your engine. They can and will change out
from under you because devices are always shared between multiple processes.
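
A listener proc sketch (untested; the kAudioDevicePropertyStreamFormat
registration at the bottom is just an example of a property worth watching):

#include <CoreAudio/AudioHardware.h>

static OSStatus MyDeviceListener(AudioDeviceID device, UInt32 channel,
                                 Boolean isInput,
                                 AudioDevicePropertyID property,
                                 void *clientData)
{
    /* Re-query the property that changed and update your engine's state.
       Keep this quick and don't block. */
    return noErr;
}

/* Register it once the device is configured, e.g.:
   AudioDeviceAddPropertyListener(device, 0, 1, kAudioDevicePropertyStreamFormat,
                                  MyDeviceListener, myEngine);
   For per-stream properties on complex devices, the analogous call is
   AudioStreamAddPropertyListener. */
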
3) Register an AudioDeviceIOProc with the device.
Your AudioDeviceIOProc is called periodically to deliver the data to you at
a rate determined by the buffer size you have selected. Note that each
process may have its own buffer size.
Input and Output data are presented synchronously for devices that support
full duplex IO, so one IO proc can (and should for efficiency's sake) be
used.
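
Here's a sketch of an input IOProc that copies everything into a ring buffer
so you always have your last tenth of a second or so on hand. The RingBuffer
type and WriteToRing helper are placeholders for whatever FIFO your plugins
already use, and this is untested:

#include <CoreAudio/AudioHardware.h>

typedef struct { float *data; UInt32 capacity; UInt32 writePos; } RingBuffer;

static void WriteToRing(RingBuffer *ring, const float *src, UInt32 count)
{
    UInt32 i;
    for (i = 0; i < count; i++) {
        ring->data[ring->writePos] = src[i];
        ring->writePos = (ring->writePos + 1) % ring->capacity;
    }
}

static OSStatus MyIOProc(AudioDeviceID device,
                         const AudioTimeStamp *now,
                         const AudioBufferList *inputData,
                         const AudioTimeStamp *inputTime,
                         AudioBufferList *outputData,
                         const AudioTimeStamp *outputTime,
                         void *clientData)
{
    RingBuffer *ring = (RingBuffer *)clientData;
    UInt32 i;

    /* One AudioBuffer per input stream; the HAL's native format is
       typically Float32 linear PCM. */
    for (i = 0; i < inputData->mNumberBuffers; i++) {
        const AudioBuffer *buf = &inputData->mBuffers[i];
        WriteToRing(ring, (const float *)buf->mData,
                    buf->mDataByteSize / sizeof(float));
    }
    return noErr;
}
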
4) Use AudioDeviceStart and AudioDeviceStop to start and stop your IOProc.
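
Hooking it all together (MyIOProc and RingBuffer are from the sketch above;
untested as well):

static OSStatus StartCapture(AudioDeviceID device, RingBuffer *ring)
{
    OSStatus err = AudioDeviceAddIOProc(device, MyIOProc, ring);
    if (err != noErr) return err;
    return AudioDeviceStart(device, MyIOProc);
}

static void StopCapture(AudioDeviceID device)
{
    AudioDeviceStop(device, MyIOProc);
    AudioDeviceRemoveIOProc(device, MyIOProc);
}
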
Hope this helps get you started. There is some documentation located at
http://developer.apple.com/audio and the header file,
<CoreAudio/AudioHardware.h>, has some as well.
Feel free to post any other questions you have as they come up.
Good luck!
--
Jeff Moore
Core Audio
Apple