Re: Live mixing; how does CA do it smoothly?
- Subject: Re: Live mixing; how does CA do it smoothly?
- From: Kurt Revis <email@hidden>
- Date: Sat, 7 Dec 2002 15:19:39 -0800
On Saturday, December 7, 2002, at 01:30 PM, Ben Kennedy wrote:
> My app mixes audio into a ring buffer, which is fed to the default
> output audio unit via a render callback... Often this results in
> samples of value > 1.0, but CoreAudio seems to handle this fine --
> the speaker output sounds quite reasonable and unclipped.
> I'm simply writing out the ring buffer as it accumulates, using
> libsndfile <http://www.zip.com.au/~erikd/libsndfile/> routines to
> handle file format and float-to-int (or whatever is appropriate for
> the file format) conversion. The results: massive hard clipping
> (during concurrent mixed sounds) that sounds markedly worse than
> when being played through the speaker.
The difference between these two situations is that the CoreAudio HAL
is doing another step of mixing. It needs to mix audio among all
applications that are playing sound at the same time. It also applies
the master volume setting for the audio device (see the settings in
Audio MIDI Setup or Daisy). Thus, even though your app is sending
samples > 1.0, by the time they get to the hardware there may have been
enough attenuation to avoid clipping. For example, if your mix peaks at
1.4 and the device's master volume scalar is 0.5, the hardware sees a
peak of 0.7, which is back in range.
You might try turning up the device's volume all the way, and see if
you get clipping in that situation. (This is of course dependent on the
specific hardware -- I don't really know how things work at that level.)
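
If you want to check what the device's volume actually is, something
like this sketch should do it, using the HAL property calls available
in this era of CoreAudio (untested; it assumes the default output
device, and tries the master channel first, then channel 1, since many
devices only publish per-channel volumes):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID device = kAudioDeviceUnknown;
    UInt32 size = sizeof(device);
    OSStatus err = AudioHardwareGetProperty(
        kAudioHardwarePropertyDefaultOutputDevice, &size, &device);
    if (err != noErr || device == kAudioDeviceUnknown)
        return 1;

    /* Channel 0 is the master channel; fall back to channel 1 if the
       device doesn't publish a master volume. The third argument is
       isInput = 0, since we want the output side. */
    Float32 volume = 0.0f;
    size = sizeof(volume);
    err = AudioDeviceGetProperty(device, 0, 0,
                                 kAudioDevicePropertyVolumeScalar,
                                 &size, &volume);
    if (err != noErr) {
        size = sizeof(volume);
        err = AudioDeviceGetProperty(device, 1, 0,
                                     kAudioDevicePropertyVolumeScalar,
                                     &size, &volume);
    }
    if (err == noErr)
        printf("output volume scalar: %f\n", volume);
    return 0;
}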
Also note that the output audio unit has its own volume setting. This
is set to 1.0 by default, but it can be decreased. This only affects
the sound played through that AU, not the device's master volume.
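
If you want to turn that knob programmatically, a sketch like this
should work -- it assumes outputUnit is your already-open default
output unit, and uses the kHALOutputParam_Volume parameter from
AudioUnitParameters.h:

#include <AudioUnit/AudioUnit.h>

/* Attenuate the output audio unit itself, leaving the device's
   master volume alone. volume is a linear gain: 1.0 is the default,
   0.0 is silence. */
static OSStatus SetOutputUnitVolume(AudioUnit outputUnit, Float32 volume)
{
    return AudioUnitSetParameter(outputUnit,
                                 kHALOutputParam_Volume,
                                 kAudioUnitScope_Global,
                                 0,       /* element */
                                 volume,
                                 0);      /* buffer offset in frames */
}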
> One obvious workaround might be to attenuate all my samples before
> mixing so that I don't have to worry about clipping, but it seems
> CoreAudio already has a sensible way of handling this, and I want in.
The HAL mixing happens below the level of your application, so I don't
think there's any way you can directly get that data. If libsndfile
expects samples in the range [-1.0, 1.0], that's what you should give
it.
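
Concretely, that means scaling (and, as a safety net, hard-limiting)
your mix before it goes to libsndfile. A minimal sketch -- the 0.5
gain is just an example value, not something CoreAudio prescribes:

#include <sndfile.h>

/* Scale and hard-limit a buffer of mixed floats so that everything
   handed to libsndfile lies within [-1.0, 1.0]. */
static void prepare_for_sndfile(float *samples, sf_count_t count,
                                float gain)
{
    sf_count_t i;
    for (i = 0; i < count; i++) {
        float s = samples[i] * gain;
        if (s > 1.0f)  s = 1.0f;
        if (s < -1.0f) s = -1.0f;
        samples[i] = s;
    }
}

/* Usage:
   prepare_for_sndfile(chunk, frames * channels, 0.5f);
   sf_writef_float(sndfile, chunk, frames); */

(If I remember right, newer libsndfile versions can also be told to
clip rather than wrap during float-to-int conversion, via sf_command()
with SFC_SET_CLIPPING -- check your version's docs.)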
BTW, if you are using an output audio unit, you can see the audio data
just after it is rendered by using AudioUnitSetRenderNotification().
(Look for kAudioUnitRenderAction_PostRender in the
AudioUnitRenderActionFlags passed to your callback.) I believe this
will take into account the output audio unit's volume.
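
Here's a rough sketch of such a callback (untested; it uses the
v2-style callback signature, and AudioUnitAddRenderNotify() is the
newer name for the registration call -- use whichever your headers
provide):

#include <AudioUnit/AudioUnit.h>

/* Render-notify callback: fires twice per render cycle, once before
   the unit renders (PreRender) and once after (PostRender). On the
   post-render pass, ioData holds the frames the output unit just
   produced. */
static OSStatus RenderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        /* Hand ioData to your file writer here -- but do the actual
           disk I/O on another thread, since this runs on the render
           thread and must not block. */
    }
    return noErr;
}

/* Registration, with the v2-API name:
   AudioUnitAddRenderNotify(outputUnit, RenderNotify, NULL); */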
> (BTW: I'm using libsndfile for file i/o for three reasons: a wide
> variety of file formats supported; the present desire to remain
> functional on 10.1, which I gather does not have these AudioFile
> routines I've seen reference to; my inability to find any
> documentation on AudioFile -- where is it?)
Yes, the AudioFile API is new in 10.2. There is some documentation in
the header file:
/System/Library/Frameworks/AudioToolbox.framework/Headers/AudioFile.h,
and there is some sample code in /Developer/Examples/CoreAudio. I don't
know of any other documentation that's available yet.
--
Kurt Revis
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.