Pointers for n00b?
- Subject: Pointers for n00b?
- From: Luke Evans <email@hidden>
- Date: Fri, 14 Mar 2008 17:25:17 -0700
Hello list.
I have some code that generates audio samples (as simple samples with
a time code), and I wish to output this through the default audio
device to hear it.
So far, I have looked through the core audio docs and played with some
sample code. I can see how to output audio to the output device, and
indeed I have some test code that generates the "next part" of a sine
wave on callback to my waveIOProc. This works great.
Now I need to be able to feed samples to waveIOProc that are derived
from those that are independently created by my audio generator. It's
not clear to me whether there is anything for 'free' in Core Audio
that would allow me to simply dump audio sample/time stamp tuples into
a stream/buffer and have the audio output callback automatically pull
data out of this, perhaps correcting for its own timebase.
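In other words, I'm picturing something like the following FIFO of sample/timestamp tuples, which the generator pushes into and the IOProc pops from. This is purely my own hypothetical sketch of the shape I'm after, not (as far as I know) an actual Core Audio API:

```c
#include <stdbool.h>

/* Hypothetical single-producer/single-consumer ring buffer of
   (timestamp, sample) tuples. All names are mine. */
#define RB_CAP 1024u  /* must be a power of two */

typedef struct { double time; float sample; } Stamped;

typedef struct {
    Stamped buf[RB_CAP];
    unsigned head;  /* only the generator writes this */
    unsigned tail;  /* only the audio callback writes this */
} RingBuffer;

bool rb_push(RingBuffer *rb, Stamped s)
{
    if (rb->head - rb->tail == RB_CAP) return false;  /* full */
    rb->buf[rb->head % RB_CAP] = s;
    rb->head++;
    return true;
}

bool rb_pop(RingBuffer *rb, Stamped *out)
{
    if (rb->head == rb->tail) return false;  /* empty */
    *out = rb->buf[rb->tail % RB_CAP];
    rb->tail++;
    return true;
}
```

If something with this behaviour (plus the timebase correction) already exists in the framework, that would obviously be the answer to my question.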
So, I'd like to know:
- What's the nearest that Core Audio gets to my requirement? Am I
looking at some fairly trivial plumbing of existing pieces here, or do
I need to implement some named component (in which case what is this?)
and do all the heavy lifting of adjusting time bases with
interpolation etc?
- Assuming this sort of thing isn't too far beyond some starting point
I might find as sample code somewhere, could somebody point me at
that sample? Trying to sort this out from 'first principles' in the
docs looks like more of a handful than most frameworks I've attempted
to learn so far.
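For clarity, by "adjusting time bases with interpolation" above I mean something as naive as the following linear interpolation between two stamped samples; this is just to illustrate the kind of heavy lifting I'd rather not hand-roll if the framework already does it:

```c
/* Naive linear interpolation: estimate the sample value at output
   time t, given two neighbouring stamped samples (t0, s0), (t1, s1).
   Illustration only; all names are mine. */
float interp_at(double t, double t0, float s0, double t1, float s1)
{
    if (t1 <= t0) return s0;  /* degenerate interval */
    double frac = (t - t0) / (t1 - t0);
    return (float)(s0 + frac * (s1 - s0));
}
```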
Much appreciated.
Luke
Coreaudio-api mailing list (email@hidden)