A conceptual sanity check (or two)...
- Subject: A conceptual sanity check (or two)...
- From: Daniel Jalkut <email@hidden>
- Date: Tue, 14 Nov 2006 14:59:21 -0500
Hello list! I am delving back into CoreAudio after an absence, never
having really become proficient at the lower levels of audio
tweaking. That said, I hope my questions below will carry the flavor
of my having at least tried to RTFM.
Conceptual goal: mix an arbitrary number of input sounds from files,
possibly overlapping in time and possibly with large programmed
delays of silence between them, into a single audio stream, and save
it to disk in M4A/AAC format. (The idea is to support an "export to
iTunes" feature for my application, FlexTime.)
My current plan of attack for achieving this is to set up an AUGraph
for doing the sound mixing, and pull data from that graph as fast as
I can to feed an ExtAudioFile reference opened for writing:
http://www.red-sweater.com/temp/MixingGoal.png
Does this seem like a reasonable approach? Am I oversimplifying
things or (please!) perhaps overlooking an even simpler way of
accomplishing this? I realize that for much of the time I'm liable
to be "pulling silence" from the AUGraph. I'm assuming this is still
reasonably performant, but in the worst case I'll special-case the
stretches when I'm not expecting any sound and just generate the
silence myself.
I'm basing my thinking right now on a presumption that AUGraphs are
purely processing mechanisms, and that any time-scale that may be
applied to them is based solely on the rate at which an output unit
pulls on them. Is this correct?
Thanks a lot for your help,
Daniel
_______________________________________________
Coreaudio-api mailing list (email@hidden)