
Re: Sound Input Manager under OSX


  • Subject: Re: Sound Input Manager under OSX
  • From: Shaun Wexler <email@hidden>
  • Date: Mon, 7 Jul 2003 13:36:13 -0700

On Monday, July 7, 2003, at 11:34 AM, Tim Dorcey wrote:

> Yes, we have experienced exactly the same problem. As a work-around, we average the 2 stereo channels together. This works fine on some OSX systems, but on others, we get a "Darth Vader" effect on the voice. It sounds as if high frequencies are being cut out.

I filed bug #3317302 on CoreAudio for Titanium PowerBooks, which seem to impose a 1-sample delay (0.0227 ms) on the left input channel of the built-in audio hardware device with respect to the right channel, even though they are in the same stream. This might be the cause of the problem when mixing the two channels. Also, I'm seeing a periodic hostTime anomaly when comparing the built-in device against a Sound Devices USBPre; the fault is probably in their driver, but I'm not sure yet.

As a workaround, you could perform a cross-correlation, or compare the two channels in the stream for equal values, and apply a delay accordingly. It would be simpler to disregard the unwanted channel, rather than mixing the two.

> Not sure if the Darth Vader effect is related to the stereo averaging. The odd thing is that on some OSX systems, the problem can be fixed by opening the SGSettingsDialog, and toggling the Speaker (playthru) on/off. It makes me think the different audio layers are not in sync, and this somehow syncs it up. Unfortunately, it does not work on all OSX systems.

I tried this, and it did not fix the problem on a PowerBook G4 800.

> We are working now to port to Core Audio. Check out the "Daisy" sample code in the Core Audio SDK. It appears to illustrate all of the ingredients needed to record, and it does not look too difficult, though we can't say we have it working yet.

Also check out Michael Thornburgh's excellent open source MTCoreAudio ObjC HAL wrapper at: http://aldebaran.armory.com/~zenomt/macosx/MTCoreAudio ...

My own HAL framework, based on his original work, goes several steps further toward an object-oriented audio system of AudioChannels and AudioConnections, which can perform DSP and automatically handle their buffer duties. It also provides full AudioDevice and AudioStream management, remembering device configs across deaths and port changes, so devices keep working seamlessly when they are removed and reconnected, etc. The AudioManager can provide a populated NSPopUpButton with all available channels; the button also displays the status of the channel/device, including flashing it red upon clips! With my code, acquiring a buffer's worth of audio from the left channel and returning a windowed FFT as the result is as simple as this:

AudioManager *audioManager = [AudioManager sharedInstance];
AudioChannel *channel = [audioManager builtInLeftInputChannel];
AudioConnection *connection = [[AudioConnection alloc] initWithDelegate:self forDirection:kInputDirection];

[connection setOptions:(kWindowAndCtoZ_option | kForwardFFT_option)];
[connection setWindowingMethod:kHannWindow];
[connection setFFTLog2n:10];
[connection connectToAudioChannel:channel];

Need a quick RTA display? Just add the kMagnitude_option, and connect the processingBuffer to one of my AnalyzerView objects in your view. Of course, this is overly simplified for illustrative purposes, typed in Mail.app, but:

IBOutlet id analyzerView; // configured by nib

[connection setOptions:([connection options] | kMagnitude_option)];
if ([connection processAudio] && analyzerView) {
[analyzerView setProcessingBuffer:[connection processingBuffer]];
[analyzerView setDataNeedsDrawn];
}

Most other interaction is handled by delegate methods and/or notifications.

Cocoa/ObjC is awesome! Be sure to try the MTCoreAudio.framework...
--
Shaun Wexler
MacFOH
http://www.macfoh.com
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.

References: 
  • RE: Sound Input Manager under OSX (From: "Tim Dorcey" <email@hidden>)
