Re: AudioUnits....
- Subject: Re: AudioUnits....
- From: kelly jacklin <email@hidden>
- Date: Tue, 20 Dec 2005 08:44:16 -0800
On Dec 19, 2005, at 6:29 PM, George Malayil-Philip wrote:
I am trying to read a file into memory, process the data with a
series of audio units, convert it to .mp3 using LAME, and then also
output it to a file and the default output. The problem is taking
the output from the last of the first series of audio unit renders
and passing it on to LAME, and from there to the output unit and
file. What would be the best way to do this? Do I write my own
audio unit that uses LAME? That way I figure the pull I/O model
would work fine; I would just have to insert the new LAME audio
unit into the sequence.
If I'm understanding you correctly, you want to do simultaneous
playback to the device and export through LAME to an .mp3 file, right?
Naturally, you do not want to be doing the file writing on the audio
playback (IOProc) thread, or you are guaranteed to get dropouts, and
you should probably avoid doing the LAME encoding there as well, so a
threaded model is your best approach.
I think the easiest way to approach this would be to just let the
audio play through the output unit directly from your last unit, but
watch the audio stream being channelled through the output unit, and
send it to another thread to do the LAME encode and file writing.
Specifically, just connect up the graph to the output unit, and use
AudioUnitAddRenderNotify to install a callback on the output unit (as
long as you're not using an AUGraph, see below). Then your callback
will be invoked for every buffer that the output unit sees, and in
your callback (for the kAudioUnitRenderAction_PreRender phase) you
can copy the audio data from that buffer into a ring buffer (or other
suitable structure) that you use to feed the export (on another thread).
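The ring-buffer hand-off might look something like this minimal
single-producer/single-consumer sketch (plain C; the Core Audio
specifics and the actual render-notify callback are omitted, and all
names here are illustrative, not from any framework):

```c
#include <stdint.h>

/* Illustrative SPSC ring buffer: the render-notify callback (producer,
 * on the IOProc thread) writes, the export thread (consumer) reads.
 * Capacity is a power of two so the indices can wrap with a mask.
 * A production version would use real atomics/memory barriers rather
 * than volatile, but this shows the shape of the hand-off. */
#define RING_CAPACITY 8192  /* in floats; must be a power of two */

typedef struct {
    float             data[RING_CAPACITY];
    volatile uint32_t writePos;  /* advanced only by the producer */
    volatile uint32_t readPos;   /* advanced only by the consumer */
} RingBuffer;

/* Called from the kAudioUnitRenderAction_PreRender notification:
 * copy the buffer the output unit is about to render.  Returns the
 * number of samples actually stored (drops the rest if full, rather
 * than ever blocking the audio thread). */
uint32_t ring_write(RingBuffer *rb, const float *src, uint32_t count) {
    uint32_t space = RING_CAPACITY - (rb->writePos - rb->readPos);
    if (count > space) count = space;
    for (uint32_t i = 0; i < count; i++)
        rb->data[(rb->writePos + i) & (RING_CAPACITY - 1)] = src[i];
    rb->writePos += count;  /* publish after the data is in place */
    return count;
}

/* Called from the export thread: drain samples to feed LAME. */
uint32_t ring_read(RingBuffer *rb, float *dst, uint32_t count) {
    uint32_t avail = rb->writePos - rb->readPos;
    if (count > avail) count = avail;
    for (uint32_t i = 0; i < count; i++)
        dst[i] = rb->data[(rb->readPos + i) & (RING_CAPACITY - 1)];
    rb->readPos += count;
    return count;
}
```

The key design point is that the audio thread never blocks: if the
export side falls behind, samples are dropped (or you grow the buffer),
never the other way around.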
On the export thread, when buffers are copied into the ring buffer
from the output unit callback, you pick them up, feed them through
LAME, and write them to a file. Always assuming, of course, that
LAME can encode in realtime (it's very slow, but it should be able to
encode faster than realtime...).
You'll need to deal with making sure the export thread wakes up on a
regular-enough basis to service the ring buffer, so pick a suitable
strategy for doing that (either do a loop with short sleeps depending
on how well your data-handling is going, or poll (yuck!), or better
yet use a semaphore to coordinate with the audio output thread). One
could have the app's main thread be the export thread, but that is
generally not a good direction to go in, IMO, as you'll end up with
app responsiveness issues on lower-end machines.
The caveat above about using AUGraph is that an AUGraph installs a
render notify callback on the output unit, and does some processing
of connections and whatnot on that callback. This used to be a
problem, because only the one callback could be installed on the
output unit (I forget whether this has been addressed or not, or what
the exact issue was...), so if you use AudioUnitAddRenderNotify on
the output unit, you nuke the AUGraph's callback, and it does not get
to do its servicing. If you are using an AUGraph, then you have two
choices: use AUGraphAddRenderNotify, or install the callback on the
last unit before the output unit (and then copy the audio during the
kAudioUnitRenderAction_PostRender phase). Both of these approaches
should work fine.
Hope this makes sense and helps...
kelly
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
References: AudioUnits.... (From: George Malayil-Philip <email@hidden>)