Re: Decoding compressed to LPCM
- Subject: Re: Decoding compressed to LPCM
- From: Jens Alfke <email@hidden>
- Date: Mon, 21 Apr 2008 07:34:17 -0700
On 21 Apr '08, at 6:48 AM, Aurélien Ammeloot wrote:
> 5. Creating a new ExtAudioFile at destination with the
> AudioStreamBasicDescription created above as format.
Did you set its file format too? If you want a WAV file, you have to
set that explicitly; otherwise I think you just get a file of raw PCM
frames, which isn't a standard file format.
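For reference, a minimal sketch of what setting the file format explicitly might look like with the Extended Audio File Services API. The function and variable names are illustrative, not taken from the original poster's code:

```c
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical sketch: create the destination ExtAudioFile as a WAV
// container, then set the client (write-side) format separately.
static OSStatus CreateWavDestination(CFURLRef url,
                                     const AudioStreamBasicDescription *pcmFormat,
                                     ExtAudioFileRef *outFile)
{
    // Passing kAudioFileWAVEType here is what makes the result a real
    // WAV file rather than headerless raw PCM frames.
    OSStatus err = ExtAudioFileCreateWithURL(url,
                                             kAudioFileWAVEType,
                                             pcmFormat,   // file data format
                                             NULL,        // default channel layout
                                             kAudioFileFlags_EraseFile,
                                             outFile);
    if (err != noErr) return err;

    // The client format is what you hand to ExtAudioFileWrite; making it
    // identical to the file format means no conversion happens on write.
    return ExtAudioFileSetProperty(*outFile,
                                   kExtAudioFileProperty_ClientDataFormat,
                                   sizeof(*pcmFormat),
                                   pcmFormat);
}
```

This only compiles against AudioToolbox on macOS/iOS, so treat it as a sketch of the call sequence rather than a drop-in snippet.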
If that's not it, you might have to post your code, or a link to it,
here.
> Or like other APIs would convert PNG to JPG images with a single
> line of code.
I think the difference is that images are usually loaded, processed
and displayed all at once, whereas audio is nearly always streamed.
Adding the time element complicates the APIs.
Images also have a more standardized internal representation, nearly
always 8 bits per channel RGB, most often 72dpi. With audio it's as
though you always have to take into account the color depth, color
space, resolution and so forth; and do this without loading more than
a few scanlines into memory at once.
—Jens
Coreaudio-api mailing list (email@hidden)