Re: Getting raw audio data from an AIFF file
- Subject: Re: Getting raw audio data from an AIFF file
- From: Kurt Revis <email@hidden>
- Date: Sat, 8 Feb 2003 19:46:18 -0800
On Saturday, February 8, 2003, at 04:42 PM, Andrew GFunk wrote:
Hi! I'm a newbie to audio app development. Under OS X, is there an
easy way to get at the raw audio data (PCM) of an AIFF (or similar
type) file? All I need besides the raw audio data is the sample-rate,
resolution, and number of channels of the sound...
If you are only interested in running on 10.2 or later, and only want
to read AIFF or WAV files, I would recommend looking at the AudioFile
API in the AudioToolbox framework. Specifically:
/System/Library/Frameworks/AudioToolbox.framework/Headers/AudioFile.h
I don't know of any real documentation yet, but the comments in the
headers should get you started. You would use AudioFileOpen() to open
the file and get an AudioFileID. Then use AudioFileGetProperty with
kAudioFilePropertyDataFormat to find out the data format (in an
AudioStreamBasicDescription structure), and use
kAudioFilePropertyAudioDataByteCount to find out how much audio data is
present.
To get the data from the file into an NSData, you have a number of
options. The lazy approach is to read the whole file at once: create
an NSMutableData of the appropriate size, then call
AudioFileReadBytes(), giving it [yourMutableData mutableBytes] as the
'outBuffer' parameter.
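Put together, a rough, untested sketch of those steps might look like
this. (I've written it with AudioFileOpenURL, which in the current
headers replaces the FSRef-based AudioFileOpen; the property and read
calls are the same, and error handling is kept minimal.)

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Read a file's raw audio data, returning its format by reference.
static NSData *ReadRawAudio(const char *path,
                            AudioStreamBasicDescription *outFormat)
{
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);
    AudioFileID fileID = NULL;
    OSStatus err = AudioFileOpenURL(url, kAudioFileReadPermission, 0, &fileID);
    CFRelease(url);
    if (err != noErr)
        return nil;

    // The data format comes back as an AudioStreamBasicDescription.
    UInt32 size = sizeof(*outFormat);
    err = AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat,
                               &size, outFormat);

    // Total number of bytes of audio data in the file.
    UInt64 byteCount = 0;
    size = sizeof(byteCount);
    if (err == noErr)
        err = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount,
                                   &size, &byteCount);

    // The lazy approach: read everything at once into an NSMutableData.
    // (The cast to UInt32 is fine for files under 4 GB.)
    NSMutableData *data = nil;
    if (err == noErr) {
        data = [NSMutableData dataWithLength:(NSUInteger)byteCount];
        UInt32 bytesToRead = (UInt32)byteCount;
        err = AudioFileReadBytes(fileID, false, 0, &bytesToRead,
                                 [data mutableBytes]);
        if (err == noErr)
            [data setLength:bytesToRead];  // in case fewer bytes came back
        else
            data = nil;
    }

    AudioFileClose(fileID);
    return data;
}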
If you have the Dec 2002 developer tools, or earlier dev tools and the
CoreAudio SDK v1.0 (available at connect.apple.com), you will have a
bunch of examples in /Developer/Examples/CoreAudio/Services. At least
one of these uses the AudioFile API. (I don't have them right in front
of me to check...)
The AudioFile API is very new, but there are other, older APIs which can
do similar things. The Carbon Sound Manager has a few functions which
deal with AIFF files, but they don't do much for you. QuickTime can
also read audio files of many different formats (including compressed
formats like MP3), but it is difficult to learn and use.
Also, how would you go about splitting up the stereo data of an AIFF
file into 2 separate NSData objects (left and right)? Then how would
you recombine the 2 (after doing some DSP on each) to form a stereo
AIFF file again?
Generally AIFF files have interleaved data. So if your file contains
stereo 16-bit samples, you would have:
2 bytes -- sample 0 of channel 0
2 bytes -- sample 0 of channel 1
2 bytes -- sample 1 of channel 0
2 bytes -- sample 1 of channel 1
and so on. (I forget which channel is right and which channel is left.)
It shouldn't be hard to write a little loop to read each sample and
write it to a separate NSData object, and the opposite operation is
just as simple.
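For example, a sketch of both directions, assuming 16-bit samples (the
sample values are copied as-is, so whatever byte order the file uses is
preserved):

#import <Foundation/Foundation.h>

// Split 16-bit interleaved stereo data into two NSData objects, one
// per channel, following the sample layout shown above.
static void SplitStereo(NSData *interleaved,
                        NSData **outChannel0, NSData **outChannel1)
{
    const SInt16 *in = [interleaved bytes];
    NSUInteger frameCount = [interleaved length] / (2 * sizeof(SInt16));
    NSMutableData *ch0 = [NSMutableData dataWithLength:frameCount * sizeof(SInt16)];
    NSMutableData *ch1 = [NSMutableData dataWithLength:frameCount * sizeof(SInt16)];
    SInt16 *out0 = [ch0 mutableBytes];
    SInt16 *out1 = [ch1 mutableBytes];

    for (NSUInteger frame = 0; frame < frameCount; frame++) {
        out0[frame] = in[2 * frame];       // sample for channel 0
        out1[frame] = in[2 * frame + 1];   // sample for channel 1
    }
    *outChannel0 = ch0;
    *outChannel1 = ch1;
}

// The opposite operation: interleave two mono channels back into one buffer.
static NSData *InterleaveStereo(NSData *channel0, NSData *channel1)
{
    NSUInteger frameCount = [channel0 length] / sizeof(SInt16);
    NSMutableData *interleaved =
        [NSMutableData dataWithLength:frameCount * 2 * sizeof(SInt16)];
    const SInt16 *in0 = [channel0 bytes];
    const SInt16 *in1 = [channel1 bytes];
    SInt16 *out = [interleaved mutableBytes];

    for (NSUInteger frame = 0; frame < frameCount; frame++) {
        out[2 * frame]     = in0[frame];
        out[2 * frame + 1] = in1[frame];
    }
    return interleaved;
}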
If you can manage it, I recommend writing your processing code in such
a way that it can operate on the interleaved data directly. This could
be significantly faster and would use less memory, since you wouldn't
need to copy the data into separate buffers. Instead of having your
processing code advance directly from one sample to the next, give it a
number of samples to step over.
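For instance, a processing routine written with an explicit stride can
work on one channel of the interleaved buffer in place. Here is a
sketch using a made-up gain adjustment on 16-bit samples (it assumes
the samples are already in host byte order):

#import <Foundation/Foundation.h>

// Apply a gain to one channel of a buffer, in place. 'stride' is the
// number of samples to step over between successive samples of the
// same channel -- 2 for interleaved stereo, 1 for mono or a
// deinterleaved buffer.
static void ApplyGain(SInt16 *samples, unsigned long sampleCount,
                      unsigned long firstSample, unsigned long stride,
                      float gain)
{
    for (unsigned long i = firstSample; i < sampleCount; i += stride) {
        float value = samples[i] * gain;
        // Clip to the 16-bit range before storing back.
        if (value > 32767.0f)  value = 32767.0f;
        if (value < -32768.0f) value = -32768.0f;
        samples[i] = (SInt16)value;
    }
}

// Example: halve the volume of channel 1 in a stereo interleaved buffer.
// ApplyGain(buffer, totalSampleCount, 1, 2, 0.5f);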
--
Kurt Revis
email@hidden