Re: Decoding compressed to LPCM
- Subject: Re: Decoding compressed to LPCM
- From: Stephen Davis <email@hidden>
- Date: Mon, 21 Apr 2008 10:01:02 -0700
Your comment says you're setting the audio format to 44.1kHz, stereo,
16-bit native-endian integer but you're actually setting it to
44.1kHz, stereo, 32-bit native-endian floating point. I'd surmise
that might be the issue. :-)
Off the top of my head, I think you just need to change these lines:
clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mBitsPerChannel = 8 * sizeof(short);
Note that these flags leave out kAudioFormatFlagIsBigEndian, so the data is little-endian, which is what the WAV format requires.
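For reference, here's a sketch of a fully self-consistent 16-bit client format. I'm assuming interleaved stereo here; the point is that the derived fields (mBytesPerFrame in particular) have to agree with the rest, since a stale value left over from the compressed source format will also make the converter reject the ASBD:

// Sketch: a self-consistent 16-bit, stereo, interleaved LPCM client format.
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = 44100.0;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mBitsPerChannel   = 8 * sizeof(short);   // 16
clientFormat.mChannelsPerFrame = 2;
clientFormat.mFramesPerPacket  = 1;                   // always 1 for LPCM
clientFormat.mBytesPerFrame    = clientFormat.mChannelsPerFrame * sizeof(short); // 4
clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;                    // 4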
The format you were using should've worked since .WAV supports 32-bit
floating point files but it may have been failing for one or both of
the following two reasons:
1) You're on a PowerPC machine, and kAudioFormatFlagsCanonical specifies
native-endian data, which on PowerPC means big-endian; WAV files must be
little-endian.
2) The CoreAudio file component for .WAV doesn't support floating
point as an output format.
The CAStreamBasicDescription class really does help get these things
right, and no one will yell at you (too much) if you mix a little C++
in with your Objective-C. ;-)
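If you do go that route, here's a sketch using the PublicUtility class. This assumes the CAStreamBasicDescription.h header that ships in the SDK's PublicUtility folder; I'm recalling the all-fields constructor and Print() from memory, so double-check the exact signatures:

// C++ sketch using the PublicUtility helper class.
#include "CAStreamBasicDescription.h"  // from the SDK's PublicUtility folder

CAStreamBasicDescription clientFormat(44100.0,                // sample rate
                                      kAudioFormatLinearPCM,  // format ID
                                      4,                      // bytes per packet
                                      1,                      // frames per packet
                                      4,                      // bytes per frame
                                      2,                      // channels per frame
                                      16,                     // bits per channel
                                      kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked);
clientFormat.Print();  // dump the ASBD so you can eyeball every field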
hth,
stephen
On Apr 21, 2008, at 9:15 AM, Aurélien Ammeloot wrote:
Thanks, Jens, for these explanations. However, there's still a problem I can't solve.
Here's some code where an ExtAudioFile (inputFile) has already been
opened successfully. I want to set its "ClientDataFormat" to
linear PCM, for which I've created an AudioStreamBasicDescription.
I'm not using CAStreamBasicDescription because my code is in Objective-C.
============
UInt32 propSize;
AudioStreamBasicDescription clientFormat;

propSize = sizeof(clientFormat);
error = ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat, &propSize, &clientFormat);
if (error) { ... }

// Defining a 44100 Hz, 16-bit, stereo linear PCM client format
clientFormat.mSampleRate = 44100.;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagsCanonical;
clientFormat.mBitsPerChannel = 8 * sizeof(AudioSampleType);
clientFormat.mChannelsPerFrame = 2;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerPacket = 2 * sizeof(AudioSampleType);

propSize = sizeof(clientFormat);
error = ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat, propSize, &clientFormat);
if (error) { ... }
===========
ExtAudioFileSetProperty returns "fmt?".
What am I doing wrong here? Is there a way to find out what I'm
actually doing wrong?
I'm using Leopard.
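For what it's worth, "fmt?" is the four-char code the CoreAudio headers use for unsupported-data-format errors (kAudioFormatUnsupportedDataFormatError and friends), i.e. the converter rejected the ASBD. One way to make such codes readable is a small helper along these lines; PrintFourCC is a hypothetical name, not an Apple API:

#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>

// Sketch: print an OSStatus as a four-char code when it is one.
static void PrintFourCC(OSStatus err)
{
    UInt32 code = CFSwapInt32HostToBig((UInt32)err);  // big-endian so the bytes read left to right
    char str[5] = {0};
    memcpy(str, &code, 4);
    int printable = 1;
    for (int i = 0; i < 4; ++i)
        if (!isprint((unsigned char)str[i])) printable = 0;
    if (printable)
        fprintf(stderr, "error: '%s' (%d)\n", str, (int)err);
    else
        fprintf(stderr, "error: %d\n", (int)err);
}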
On 21 Apr '08, at 3:34 PM, Jens Alfke wrote:
On 21 Apr '08, at 6:48 AM, Aurélien Ammeloot wrote:
5. Creating a new ExtAudioFile at destination with the
AudioStreamBasicDescription created above as format.
Did you set its file format too? If you want a WAV file, you have
to set that explicitly; otherwise I think you just get a file of
raw PCM frames, which isn't a standard file format.
If that's not it, you might have to post your code, or a link to
it, here.
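A sketch of what setting the file format explicitly might look like on Leopard, using ExtAudioFileCreateWithURL with an explicit WAV file type; outURL and outputFormat here are placeholders for your own destination URL and on-disk data format:

// Sketch: create the destination as a real WAV container.
ExtAudioFileRef outputFile = NULL;
OSStatus err = ExtAudioFileCreateWithURL(outURL,              // CFURLRef to the destination
                                         kAudioFileWAVEType,  // explicit WAV container
                                         &outputFormat,       // the on-disk data format
                                         NULL,                // default channel layout
                                         kAudioFileFlags_EraseFile,
                                         &outputFile);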
Or the way other APIs can convert PNG to JPG images with a single
line of code.
I think the difference is that images are usually loaded, processed
and displayed all at once, whereas audio is nearly always streamed.
Adding the time element complicates the APIs.
Images also have a more standardized internal representation:
nearly always 8-bits-per-channel RGB, most often 72 dpi. With audio
it's as though you always had to take into account the color
depth, color space, resolution and so forth, and do it all without
loading more than a few scanlines into memory at once.
—Jens