Re: Examples for AudioFileStream and AudioConverter?
- Subject: Re: Examples for AudioFileStream and AudioConverter?
- From: William Stewart <email@hidden>
- Date: Tue, 9 Sep 2008 11:56:52 -0700
AUHAL doesn't deal with compressed data, so you either have to use an AudioConverter to convert the compressed data to LPCM, or you can use the Audio Queue API to take your compressed buffers and play them back directly.
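
For the Audio Queue route, a rough sketch might look like the following; the callback is left as a stub, and MyStartQueue / MyOutputCallback are made-up names, not anything from this thread.

#include <AudioToolbox/AudioToolbox.h>

// Called by the queue whenever one of its buffers has finished playing.
static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    // Refill inBuffer with more MP3 packets (plus their
    // AudioStreamPacketDescriptions) and re-enqueue it with
    // AudioQueueEnqueueBuffer, or stop the queue when out of data.
}

OSStatus MyStartQueue(const AudioStreamBasicDescription *mp3Format,
                      AudioQueueRef *outQueue)
{
    // mp3Format is the ASBD AudioFileStream hands you; the queue decodes
    // the compressed packets internally, so no AudioConverter is needed.
    OSStatus err = AudioQueueNewOutput(mp3Format, MyOutputCallback, NULL,
                                       NULL, NULL, 0, outQueue);
    if (err) return err;

    // Allocate a few AudioQueueBuffers, prime them with packets, then:
    return AudioQueueStart(*outQueue, NULL);
}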
If you just want to grab the buffers, decode them, and store them for later use, then the AudioConverter is what you need. It is far better to think of this as a pull model: drive it from your calls to AudioConverterFillComplexBuffer, rather than as a push model where you try to figure out how many calls are needed to convert one given input buffer. The converter will buffer data for you (it keeps your input around if it isn't all consumed in one pull), so you can keep pulling on it to convert data. If you don't have any more input, you just signal that, and it will return what it can; you can continue next time around.
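
A minimal sketch of that pull model, with MySource, MyInputProc, and MyDecode as purely illustrative names (this is not code from the thread):

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    const void                   *packetData;   // MP3 bytes from AudioFileStream
    UInt32                        byteCount;
    UInt32                        packetCount;
    AudioStreamPacketDescription *packetDescs;  // from the packets callback
    UInt32                        channels;     // from the source ASBD
    Boolean                       consumed;
} MySource;

// Input data proc: the converter calls this whenever it wants more MP3 packets.
static OSStatus MyInputProc(AudioConverterRef converter,
                            UInt32 *ioNumberDataPackets,
                            AudioBufferList *ioData,
                            AudioStreamPacketDescription **outPacketDescs,
                            void *inUserData)
{
    MySource *src = (MySource *)inUserData;
    if (src->consumed) {
        // Nothing left right now: hand back zero packets and the converter
        // returns whatever output it has already produced.
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = src->channels;
    ioData->mBuffers[0].mDataByteSize   = src->byteCount;
    ioData->mBuffers[0].mData           = (void *)src->packetData;
    *ioNumberDataPackets = src->packetCount;
    if (outPacketDescs)
        *outPacketDescs = src->packetDescs;     // required for VBR input
    src->consumed = true;
    return noErr;
}

// One pull: ask the converter to fill outBuffer with decoded LPCM frames.
OSStatus MyDecode(AudioConverterRef converter, MySource *src,
                  void *outBuffer, UInt32 outBufferBytes,
                  const AudioStreamBasicDescription *outFormat,
                  UInt32 *ioFrames)
{
    AudioBufferList outList;
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mNumberChannels = outFormat->mChannelsPerFrame;
    outList.mBuffers[0].mDataByteSize   = outBufferBytes;
    outList.mBuffers[0].mData           = outBuffer;

    *ioFrames = outBufferBytes / outFormat->mBytesPerFrame; // LPCM: packet == frame
    return AudioConverterFillComplexBuffer(converter, MyInputProc, src,
                                           ioFrames, &outList, NULL);
}

On return, *ioFrames holds the number of frames actually produced; keep calling MyDecode until it comes back with zero frames, then feed in the next batch of packets.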
Bill
On Sep 9, 2008, at 11:51 AM, james mccartney wrote:
On Sep 9, 2008, at 11:41 AM, Nick Zitzmann wrote:
OK, I've searched around Google, I've read through the ConvertFile
sample source code, and now I need help. I am not new to Mac OS X
programming, but I am very new to CoreAudio programming.
Does anyone have any sample code that effectively ties
AudioFileStream and AudioConverter together?
Here's the story. I'm working with a file format that embeds MP3
data in it. The MP3 data can be of any size and length, from a full-
length song to a split-second sound effect, and is often VBR, I assume to conserve space.
So far I've got the AudioFileStream part working. AudioFileStream loads the data, I get the AudioStreamBasicDescription describing the MP3 data, and I pass that to AudioConverterNew() along with my output requirements: 44 kHz, native-endian, 16-bit, non-floating-point, stereo linear PCM, for loading into an AudioBuffer later in the program's execution.
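
Purely for reference, that setup might look roughly like this, assuming the 44 kHz above means 44,100 Hz and with srcFormat standing in for the ASBD delivered by AudioFileStream's property-listener callback:

#include <AudioToolbox/AudioToolbox.h>

OSStatus MyMakeConverter(const AudioStreamBasicDescription *srcFormat,
                         AudioConverterRef *outConverter)
{
    AudioStreamBasicDescription dst = {0};
    dst.mSampleRate       = 44100.0;
    dst.mFormatID         = kAudioFormatLinearPCM;
    dst.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                            kAudioFormatFlagsNativeEndian |
                            kAudioFormatFlagIsPacked;
    dst.mChannelsPerFrame = 2;                    // stereo
    dst.mBitsPerChannel   = 16;
    dst.mFramesPerPacket  = 1;                    // always 1 for LPCM
    dst.mBytesPerFrame    = dst.mChannelsPerFrame * (dst.mBitsPerChannel / 8);
    dst.mBytesPerPacket   = dst.mBytesPerFrame;

    return AudioConverterNew(srcFormat, &dst, outConverter);
}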
But now how do I process this correctly? I already tried the
deprecated (and much simpler) AudioConverterFillBuffer(), and found
out that it doesn't work with VBR data. Darn! So I tried
AudioConverterFillComplexBuffer(), and I'm hopelessly lost with
that function. The ConvertFile source code was kind of helpful, but
it dealt with AudioFiles instead of AudioFileStreams, and there are
some important differences between the two.
Also, AudioConverterGetProperty() is returning a ridiculous value
when I try to get the kAudioConverterPropertyMaximumInputPacketSize
for VBR packet sizes. The return value is 8192, which is kind of
crazy considering some of these MP3s are less than a kilobyte in
size.
So again, does anyone have any sample code I could follow? I could
post my non-working code if requested...
Where are you going to be putting the data after you get it out of
the converter?
If you are writing it to a file, then you should use ExtAudioFile,
which will do the conversion for you.
If you are playing it to hardware, then you should use an output audio unit (e.g., AUHAL or DefaultOutputUnit), which will do the conversion for you.
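
A rough sketch of the ExtAudioFile route for the file case; the M4A/AAC destination format and the MyWriteConverted name are placeholder assumptions. You set a linear PCM client data format and ExtAudioFile converts to the file's data format as you write:

#include <AudioToolbox/AudioToolbox.h>

OSStatus MyWriteConverted(CFURLRef destURL,
                          const AudioStreamBasicDescription *pcmClientFormat,
                          const AudioBufferList *pcmBuffers,
                          UInt32 frameCount)
{
    // Describe the on-disk format; for a compressed format only these
    // fields need to be filled in, the rest can stay zero.
    AudioStreamBasicDescription fileFormat = {0};
    fileFormat.mSampleRate       = pcmClientFormat->mSampleRate;
    fileFormat.mFormatID         = kAudioFormatMPEG4AAC;
    fileFormat.mChannelsPerFrame = pcmClientFormat->mChannelsPerFrame;

    ExtAudioFileRef file = NULL;
    OSStatus err = ExtAudioFileCreateWithURL(destURL, kAudioFileM4AType,
                                             &fileFormat, NULL,
                                             kAudioFileFlags_EraseFile, &file);
    if (err) return err;

    // Tell ExtAudioFile what we will feed it; the conversion happens on write.
    err = ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(*pcmClientFormat), pcmClientFormat);
    if (!err)
        err = ExtAudioFileWrite(file, frameCount, pcmBuffers);

    ExtAudioFileDispose(file);
    return err;
}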