Hi,
I want to modify the iPhone "SpeakHere" sample application
to use a WAV-based DSP library.
I have figured out how to play audio files in just about any
format using AudioFileReadPackets() and AudioQueueEnqueueBuffer(). However,
I am not sure what format the buffer returned by AudioFileReadPackets() will
be in, or what format the buffer needs to be in before I can call
AudioQueueEnqueueBuffer().
At a high level I want to do something like:
1) read a buffer from the MP3 file
2) convert it to WAV format
3) process the buffer using the WAV-based DSP library
4) play the buffer
I think WAV is a linear PCM format. Will
AudioFileReadPackets() return the buffer in a format I can use with my WAV
library, or do I need to run some sort of conversion/codec? Can someone
point me at the library for conversions?
I assume that if I need to convert the format between steps
2) and 3), I will need to perform another conversion from WAV to ??? between
steps 3) and 4). Is this correct?
I expect my application will get more complicated in the
future. Should I be using Audio Queues or something else? The audio
documentation talks about audio graphs; however, it is not clear to me
whether they are available on iPhone, or whether they would make my job
easier.
Thanks
Andy