Re: AudioConverterFillComplexBuffer for transcoding streamed audio [iMac OSX10.5.6]
- Subject: Re: AudioConverterFillComplexBuffer for transcoding streamed audio [iMac OSX10.5.6]
- From: Brian Willoughby <email@hidden>
- Date: Thu, 10 Jun 2010 01:51:13 -0700
On Jun 10, 2010, at 00:02, Richard Dobson wrote:
On 10/06/2010 00:38, Brian Willoughby wrote:
On Jun 9, 2010, at 14:59, Abhinav Tyagi wrote:
1) I have noticed that the header is 4096 bytes if we use AudioFileWritePackets. I remember the header for a RIFF wave file is only 44 bytes. Why is the header 4096 bytes?
RIFF/WAVE does not have a header. RIFF is a sequence of chunks which can come in any order and have varying sizes. The term 'header' usually refers to a fixed length of data that always comes at the beginning of a file, but since WAVE does not work this way, I avoid using the term header.
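[For readers wondering where the oft-quoted 44 bytes comes from: it is the minimal canonical PCM layout (RIFF<WAVE> opening, a 16-byte fmt body, and the data chunk header), not a fixed header guaranteed by the format. A sketch in C; the function name and layout comments are mine, not from any API:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static void put_u32le(uint8_t *p, uint32_t v) {
    p[0] = v & 0xFF; p[1] = (v >> 8) & 0xFF;
    p[2] = (v >> 16) & 0xFF; p[3] = (v >> 24) & 0xFF;
}
static void put_u16le(uint8_t *p, uint16_t v) {
    p[0] = v & 0xFF; p[1] = (v >> 8) & 0xFF;
}

/* Minimal canonical PCM WAVE layout (a sketch, not a general writer):
 *   "RIFF" + size + "WAVE"          -> 12 bytes
 *   "fmt " header + 16-byte body    -> 24 bytes
 *   "data" chunk header             ->  8 bytes
 * Total before the first sample     -> 44 bytes.
 * Returns the number of bytes written before the audio data. */
static size_t write_min_wave_header(uint8_t *b, uint32_t rate,
                                    uint16_t ch, uint16_t bits,
                                    uint32_t data_bytes)
{
    uint16_t block = (uint16_t)(ch * (bits / 8));
    memcpy(b + 0, "RIFF", 4);
    put_u32le(b + 4, 36 + data_bytes);  /* RIFF size after this field */
    memcpy(b + 8, "WAVE", 4);
    memcpy(b + 12, "fmt ", 4);
    put_u32le(b + 16, 16);              /* PCM fmt body is 16 bytes */
    put_u16le(b + 20, 1);               /* wFormatTag = PCM */
    put_u16le(b + 22, ch);
    put_u32le(b + 24, rate);
    put_u32le(b + 28, rate * block);    /* average byte rate */
    put_u16le(b + 32, block);           /* block align */
    put_u16le(b + 34, bits);
    memcpy(b + 36, "data", 4);
    put_u32le(b + 40, data_bytes);
    return 44;                          /* samples start here */
}
```

An implementation that reserves 4096 bytes is simply leaving room for other chunks before the data; nothing in RIFF forbids that.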
I think this is a little misleading. The chunks do not come in just any order (that privilege is reserved for AIFF files). In a RIFF WAVE file, the fmt chunk ~must~ precede the data chunk. This makes the file, technically speaking, streamable, as all the required format information is supplied prior to the data to be rendered. It is true that one can never say WAVE has a fixed-size header - the best one can say is that there is a minimum header size corresponding to the RIFF<WAVE> fixed opening chunk header, a minimum 16-byte fmt chunk, the data chunk, and nothing else. It is really a variable-sized header, which ~necessarily~ precedes the audio data itself. Some systems insist on placing chunks after the audio data chunk, something which I have always thought a very bad idea!
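[The fmt-before-data rule is exactly what a streaming parser relies on: it can walk chunks as they arrive and be sure it has the format before the samples. A minimal sketch of such a walk over an in-memory RIFF image; the function name is hypothetical:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint32_t get_u32le(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Walk the chunks of a RIFF/WAVE image in memory.
 * Returns 1 if a "fmt " chunk appears before the "data" chunk -
 * the property that makes the stream decodable as it arrives. */
static int fmt_precedes_data(const uint8_t *buf, size_t len)
{
    if (len < 12 || memcmp(buf, "RIFF", 4) != 0 ||
        memcmp(buf + 8, "WAVE", 4) != 0)
        return 0;
    size_t pos = 12;
    int seen_fmt = 0;
    while (pos + 8 <= len) {
        uint32_t sz = get_u32le(buf + pos + 4);
        if (memcmp(buf + pos, "fmt ", 4) == 0)
            seen_fmt = 1;
        else if (memcmp(buf + pos, "data", 4) == 0)
            return seen_fmt;           /* stop at the audio data */
        pos += 8 + sz + (sz & 1);      /* chunk bodies are word-aligned */
    }
    return 0;
}
```

Note that the walker does not care what other chunks appear, only that fmt shows up before data.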
You are correct - my statement was a bit too brief to paint a full
picture.
I will return the favor and say that your "nothing else" comment is also incorrect. While fmt must precede data, there is still the opportunity for any other kind of chunk to come between RIFF<WAVE> and fmt, and again between fmt and data. Broadcast Wave (BWF) is a very good example of this. You mention your opinion that placing chunks after the audio data is bad, but that basically means the only thing left is to place them before the audio data, where they must separate the RIFF, fmt, and data chunks.
To keep this on topic, my intention was to determine whether Abhinav
was writing his own WAVE parser or using AudioFile. You'll note that
I recommended using AudioFile to remove most of the potential for
errors in parsing WAVE.
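[For completeness, a sketch of that recommendation - letting AudioFile do the chunk parsing and hand back the format. This assumes macOS with the AudioToolbox framework linked; error handling is trimmed, and `print_wave_format` is my name, not an API name:]

```c
#include <AudioToolbox/AudioFile.h>
#include <stdio.h>
#include <string.h>

/* Open a WAVE file with AudioFile and print its data format,
 * instead of hand-rolling a RIFF chunk parser. */
int print_wave_format(const char *path)
{
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);
    AudioFileID file;
    OSStatus err = AudioFileOpenURL(url, kAudioFileReadPermission,
                                    kAudioFileWAVEType, &file);
    CFRelease(url);
    if (err != noErr) return -1;

    AudioStreamBasicDescription asbd;
    UInt32 size = sizeof(asbd);
    err = AudioFileGetProperty(file, kAudioFilePropertyDataFormat,
                               &size, &asbd);
    if (err == noErr)
        printf("%.0f Hz, %u ch, %u bits\n",
               asbd.mSampleRate,
               (unsigned)asbd.mChannelsPerFrame,
               (unsigned)asbd.mBitsPerChannel);
    AudioFileClose(file);
    return (err == noErr) ? 0 : -1;
}
```

AudioFile finds the fmt chunk wherever it sits, so none of the ordering subtleties above become the application's problem.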
Brian Willoughby
Sound Consulting
Coreaudio-api mailing list (email@hidden)