Live mixing; how does CA do it smoothly?
- Subject: Live mixing; how does CA do it smoothly?
- From: "Ben Kennedy" <email@hidden>
- Date: Sat, 7 Dec 2002 16:30:27 -0500
- Organization: Zygoat Creative Technical Services
Hi all,
I've been researching this for the past day (coreaudio docs, list
archives, google groups) and haven't particularly gotten anywhere, so now
I appeal for help.
Here's the situation: My app mixes audio into a ring buffer, which is
fed to the default output audiounit via a render callback. This works
well. At any given time, there can be an arbitrary number of distinct
sounds needing to play at once; right now I am simply adding into the
ring buffer. Often this results in samples with values > 1.0, but CoreAudio
seems to handle this fine -- the speaker output sounds quite reasonable
and unclipped.
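For concreteness, here is a minimal sketch of the kind of summing mix I'm describing (the function name and layout are my own, not from any SDK); note that the summed float samples can freely exceed 1.0, since the float format has headroom:

```c
#include <stddef.h>

/* Mix (sum) one source into a shared float ring buffer.
   Summed samples may exceed 1.0; in float format this just
   means headroom is used, not that clipping has occurred. */
static void mix_into_ring(float *ring, size_t ring_len,
                          size_t write_pos,
                          const float *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        ring[(write_pos + i) % ring_len] += src[i];
    }
}
```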
The problem I currently face is dumping the same mix to a file, instead
of the speakers. Rather than feed the default output unit, what I'm
doing in this case is simply writing out the ring buffer as it
accumulates, using libsndfile
<http://www.zip.com.au/~erikd/libsndfile/> routines to handle file
format and float-to-int (or whatever is
appropriate for the file format) conversion. The results: massive hard
clipping (during concurrent mixed sounds) that sounds markedly worse than
when being played through the speaker.
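I suspect the difference is in the conversion step itself: an integer sample format has no headroom, so any float value outside [-1.0, 1.0] must be clamped, flat-topping the waveform. A minimal sketch of that conversion (my own illustration, not libsndfile's actual internals):

```c
#include <stdint.h>

/* Convert one float sample to 16-bit PCM. Values outside
   [-1.0, 1.0] must be clamped to the integer range, which
   produces the flat-topped "hard clipping" heard in the file
   but not in the float path to the output unit. */
static int16_t float_to_int16(float s)
{
    if (s > 1.0f)  s = 1.0f;
    if (s < -1.0f) s = -1.0f;
    return (int16_t)(s * 32767.0f);
}
```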
When I play back my output file e.g. using the Finder preview, the volume
level (so far as I can subjectively tell) is the same as the level when
playing to speaker through the app. Of course this makes sense, since I
haven't altered the amplitude in either case. The thing I don't
understand is why there is no (or marginal) clipping when played live
through the app, vs. when written out to disk.
Obviously, what I would like is to write to disk the same sound that I
hear through the speaker. Various posts have alluded that it
would be easy to write an output AudioUnit which writes to disk instead
of to the HAL. However, looking at the audiounit SDK it's clear that
writing an audiounit is hardly a trivial task, and likely not appropriate
for my case (I don't want to have to spend tens of hours dealing with
C++, component manager, etc. when all I want is to simply capture data
within my own app). That I've been unable to find any sample code that
already does this seems to bear that out.
So... my question is this: what is CoreAudio doing with the data I feed
it which makes it sound so good on output to HAL, and how can I capture
this (presumably re-scaled/re-mixed/whatever) data for my own purposes?
One obvious workaround might be to attenuate all my samples before mixing
so that I don't have to worry about clipping, but it seems CoreAudio
already has a sensible way of handling this, and I want in.
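If I did go the attenuation route, the conservative version would be to scale each source by a fixed gain (e.g. 1/N for N concurrent sounds) before summing, so the mix stays within [-1.0, 1.0]. A sketch (my own naming, and cruder than whatever CoreAudio does internally):

```c
#include <stddef.h>

/* Attenuate a source by a fixed gain while summing it into the
   mix buffer. With gain = 1.0f / num_sources the sum of N
   full-scale sources cannot exceed 1.0, at the cost of overall
   level; a limiter would be less drastic. */
static void mix_attenuated(float *dst, const float *src,
                           size_t n, float gain)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i] * gain;
}
```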
(BTW: I'm using libsndfile for file i/o for three reasons: a wide
variety of file formats supported; the present desire to remain
functional on 10.1 which I gather does not have these AudioFile routines
I've seen reference to; my inability to find any documentation on
AudioFile -- where is it?)
-b
--
Ben Kennedy, chief magician
zygoat creative technical services
613-228-3392 | 1-866-466-4628
http://www.zygoat.ca
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.