Basic input from audio source to volume display


  • Subject: Basic input from audio source to volume display
  • From: Heath Raftery <email@hidden>
  • Date: Tue, 07 Dec 2004 22:30:32 +1100

Hi there list,

I'm a real newbie to audio programming in OS X, and struggling to get my head around the Core Audio framework. My immediate goal is to record sound from the system default audio input device (whatever sound input device is selected in System Preferences) into a buffer, and to display the volume information on a standard coloured bar chart. Such an interface appears in System Preferences->Sound->Input, or more simply in iChat's audio chat feature, for instance. Ultimately I hope to pass this audio through the PureVoice codec and send it over a network connection.
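
To be clear about what I mean by "volume information": as far as I understand it, these meters just show the RMS power of the most recent samples mapped to decibels. Something like this sketch is what I have in mind (the function name and the -60 dB floor are my own guesses, not from any Apple sample):

<CODE>
#include <math.h>

// Sketch only: map a block of float samples to a 0..1 meter fraction.
static float meterFraction(const float *samples, unsigned count)
{
    float sum = 0.0f;
    for (unsigned i = 0; i < count; i++)
        sum += samples[i] * samples[i];
    float rms = (count > 0) ? sqrtf(sum / (float)count) : 0.0f;
    float db  = 20.0f * log10f(rms + 1e-10f);  // tiny offset avoids log10f(0)
    float fraction = (db + 60.0f) / 60.0f;     // map -60 dB..0 dB onto 0..1
    return fraction < 0.0f ? 0.0f : (fraction > 1.0f ? 1.0f : fraction);
}
</CODE>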

I would have thought, given how prevalent this feature is (it seems to appear wherever you can connect a microphone), that this would be a reasonably straightforward process, or failing that, that sample code for it would exist. In my thorough (though perhaps not exhaustive) searching, I have found nothing that comes close. In fact, all of Apple's documentation seems to assume the sound will come from memory, a file or a MIDI device. It took quite some searching to discover that the default input device is reached through an *output* unit, so the component type to use is kAudioUnitType_Output!

FYI, my application is a plain Cocoa OS X app, but working with audio has led me to add the AudioUnit, AudioToolbox, CoreServices and CoreAudio frameworks and, of course, to use the associated, very Carbon-like APIs. By hacking together code segments from several sources, I appear to have managed to connect to the device. My code so far essentially consists of these steps (error checking removed):

<CODE>
// Assumed declarations (elided from the snippet):
ComponentDescription desc = { 0 };  // zeroing also clears the flags fields
Component            comp;
AudioUnit           *fInputUnit;    // in the real code this points to an instance variable
UInt32               enableIO;
UInt32               theSize;
AudioDeviceID        fInputDeviceID;
AudioBufferList     *fBufferList;

// Find and open the HAL output unit (AUHAL). Despite the name, this is
// also the unit that provides input from an audio device.
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_HALOutput;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
comp = FindNextComponent(NULL, &desc);
OpenAComponent(comp, fInputUnit);

// Enable input on element 1 and disable output on element 0.
enableIO = 1;
AudioUnitSetProperty(*fInputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enableIO, sizeof(enableIO));
enableIO = 0;
AudioUnitSetProperty(*fInputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &enableIO, sizeof(enableIO));

// Ask the HAL for the default input device and attach it to the unit.
theSize = sizeof(AudioDeviceID);
AudioHardwareGetProperty(kAudioHardwarePropertyDefaultInputDevice,
                         &theSize, &fInputDeviceID);
AudioUnitSetProperty(*fInputUnit, kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global, 0, &fInputDeviceID, sizeof(fInputDeviceID));

// Done setting up the device; initialise the audio buffer. Note that this
// only sizes the AudioBufferList -- the mData pointers still need to be
// allocated before AudioUnitRender can copy samples into them.
AudioDeviceGetPropertyInfo(fInputDeviceID, 0, true,
                           kAudioDevicePropertyStreamConfiguration, &theSize, NULL);
fBufferList = (AudioBufferList *)malloc(theSize);
AudioDeviceGetProperty(fInputDeviceID, 0, true,
                       kAudioDevicePropertyStreamConfiguration, &theSize, fBufferList);

// Set the callback function for when input data arrives.
AURenderCallbackStruct input;
input.inputProc       = audioArrived;
input.inputProcRefCon = self;
AudioUnitSetProperty(*fInputUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 0, &input, sizeof(input));

// Finally, initialise the unit and start listening.
AudioUnitInitialize(*fInputUnit);
AudioOutputUnitStart(*fInputUnit);
</CODE>

This seems, to my immature eyes, a fairly verbose way of connecting to the microphone, but so be it. I also admit I don't fully understand a lot of those calls just yet.

From this point, I'm stumbling my way through a callback function using AudioUnitRender, but not really understanding it. I certainly don't know how to get some indication that I am actually connected to the correct device. A live graphical display, or writing to a playable sound file, would give me that necessary feedback, I imagine.
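
For reference, here is roughly the shape my callback has taken; a sketch only, assuming the device delivers Float32 samples (I've made fInputUnit and fBufferList file-scope here for simplicity, and the level calculation is my own guesswork):

<CODE>
#include <AudioUnit/AudioUnit.h>
#include <math.h>

static AudioUnit        *fInputUnit;     // set up as in the code above
static AudioBufferList  *fBufferList;    // mData buffers allocated beforehand
static volatile Float32  gCurrentLevel;  // polled by the UI on a timer

static OSStatus audioArrived(void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData)
{
    // For an input callback ioData is NULL; we have to pull the samples
    // out of the AUHAL ourselves into our own buffer list.
    OSStatus err = AudioUnitRender(*fInputUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, fBufferList);
    if (err != noErr)
        return err;

    // Crude RMS of the first buffer, standing in for a real meter value.
    Float32 *samples = (Float32 *)fBufferList->mBuffers[0].mData;
    Float32  sum = 0.0f;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        sum += samples[i] * samples[i];
    gCurrentLevel = (inNumberFrames > 0) ? sqrtf(sum / (Float32)inNumberFrames) : 0.0f;
    return noErr;
}
</CODE>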

At this point I'm calling on you, wise list members, to provide any advice you possess on how I might continue, or even to review my existing work. I would dearly love to find a tutorial on using this framework to connect to a microphone, or perhaps some sample code, or even an NSView descendant to display basic audio data. I have gone over all the documentation I can find, but still feel I'm missing something, given how difficult it appears to be to program a function that I can only imagine has been done hundreds of times before.
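
To be concrete, even a hypothetical view as minimal as this sketch (a bar whose width tracks a 0..1 level fed in from the audio side; the class name and method are made up) would cover my display needs:

<CODE>
#import <Cocoa/Cocoa.h>

// Hypothetical minimal meter view: a green bar whose width tracks a
// 0..1 level. -setLevel: would be driven from an NSTimer that reads
// whatever value the audio callback last wrote.
@interface LevelMeterView : NSView
{
    float _level;
}
- (void)setLevel:(float)level;
@end

@implementation LevelMeterView

- (void)setLevel:(float)level
{
    _level = level;
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)rect
{
    NSRect bar = [self bounds];
    [[NSColor blackColor] set];
    NSRectFill(bar);                  // background
    bar.size.width *= _level;
    [[NSColor greenColor] set];
    NSRectFill(bar);                  // level bar
}

@end
</CODE>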

Regards,
Heath
--
 ________________________________________________________
|   Heath Raftery                                        |
|   email@hidden                                         |
|   *There's nothing like a depressant to cheer you up*  |
|                       - Heard at Moe's Tavern          |
|                                         _\|/_          |
|________________________________________m(. .)m_________|
