Re: Decoding compressed to LPCM
- Subject: Re: Decoding compressed to LPCM
- From: Aurélien Ammeloot <email@hidden>
- Date: Mon, 21 Apr 2008 17:15:31 +0100
Thanks Jens for these explanations.
However, there's one problem I can't solve.
Here's some code where an ExtAudioFile (inputFile) has already been
opened successfully. I want to set its "ClientDataFormat" to linear
PCM, for which I've created an AudioStreamBasicDescription.
I'm not using the CAStreamBasicDescription C++ helper, as my code is in Objective-C.
============
UInt32 propSize;
AudioStreamBasicDescription clientFormat;

// Start from the file's own data format...
propSize = sizeof(clientFormat);
error = ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat,
                                &propSize, &clientFormat);
if (error) {........}

// ...then redefine it as 44100 Hz, 16-bit, stereo linear PCM for the client format
clientFormat.mSampleRate = 44100.;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagsCanonical;
clientFormat.mBitsPerChannel = 8 * sizeof(AudioSampleType);
clientFormat.mChannelsPerFrame = 2;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerPacket = 2 * sizeof(AudioSampleType);

propSize = sizeof(clientFormat);
error = ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                propSize, &clientFormat);
if (error) {.......}
===========
ExtAudioFileSetProperty returns "fmt?".
What am I doing wrong here? And is there a way to find out more about
what the error actually means?
Using Leopard.
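In case it helps, the "fmt?" above is just the OSStatus printed as a
four-character code (if I read the headers right, it corresponds to
kAudioFormatUnsupportedDataFormatError / kAudioConverterErr_FormatNotSupported).
A minimal sketch of how such a status can be dumped, assuming nothing
beyond plain C and CoreFoundation:
============
#include <CoreFoundation/CoreFoundation.h>   // CFSwapInt32HostToBig, UInt32/OSStatus via MacTypes
#include <stdio.h>
#include <ctype.h>

// Sketch: print an OSStatus as a four-character code when its bytes are
// printable (e.g. 'fmt?'), otherwise as a plain decimal number.
static void ReportStatus(const char *operation, OSStatus status)
{
    if (status == noErr) return;

    char code[7] = {0};
    *(UInt32 *)(code + 1) = CFSwapInt32HostToBig((UInt32)status);
    if (isprint(code[1]) && isprint(code[2]) && isprint(code[3]) && isprint(code[4])) {
        code[0] = code[5] = '\'';
        fprintf(stderr, "%s failed: %s\n", operation, code);
    } else {
        fprintf(stderr, "%s failed: %d\n", operation, (int)status);
    }
}

// e.g.  ReportStatus("ExtAudioFileSetProperty", error);
============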
On 21 Apr '08, at 3:34 PM, Jens Alfke wrote:
On 21 Apr '08, at 6:48 AM, Aurélien Ammeloot wrote:
5. Creating a new ExtAudioFile at destination with the
AudioStreamBasicDescription created above as format.
Did you set its file format too? If you want a WAV file, you have to
set that explicitly; otherwise I think you just get a file of raw
PCM frames, which isn't a standard file format.
If that's not it, you might have to post your code, or a link to it,
here.
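For the record, the container type is chosen when the destination
ExtAudioFile is created, separately from its data format. A rough
sketch of what that might look like for a 16-bit stereo WAV file;
"outputURL" is a placeholder CFURLRef, not something from the original code:
============
#include <AudioToolbox/AudioToolbox.h>

// Sketch: create the destination file as a real WAV container holding
// 44.1 kHz, 16-bit, interleaved stereo PCM. "outputURL" is hypothetical.
AudioStreamBasicDescription fileFormat = {0};
fileFormat.mSampleRate       = 44100.0;
fileFormat.mFormatID         = kAudioFormatLinearPCM;
fileFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fileFormat.mBitsPerChannel   = 16;
fileFormat.mChannelsPerFrame = 2;
fileFormat.mFramesPerPacket  = 1;
fileFormat.mBytesPerFrame    = 4;   // 2 channels * 2 bytes each
fileFormat.mBytesPerPacket   = 4;   // 1 frame per packet

ExtAudioFileRef outputFile = NULL;
OSStatus err = ExtAudioFileCreateWithURL(outputURL,              // destination CFURLRef
                                         kAudioFileWAVEType,     // the file format, set explicitly
                                         &fileFormat,            // the file's data format
                                         NULL,                   // no channel layout
                                         kAudioFileFlags_EraseFile,
                                         &outputFile);
============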
Or like other APIs would convert PNG to JPG images with a single
line of code.
I think the difference is that images are usually loaded, processed
and displayed all at once, whereas audio is nearly always streamed.
Adding the time element complicates the APIs.
Images also have a more standardized internal representation, nearly
always 8-bit-per-channel RGB, most often 72 dpi. With audio it's as
though you always have to take into account the color depth, color
space, resolution and so forth; and do this without loading more
than a few scanlines into memory at once.
—Jens