
Re: Question about Default Output Unit Example


  • Subject: Re: Question about Default Output Unit Example
  • From: Jens Alfke <email@hidden>
  • Date: Sun, 6 Apr 2008 13:25:47 -0700


On 6 Apr '08, at 11:57 AM, Thorrin wrote:

I always wondered about sound formats but never really
took the time to investigate things. If I take an
audio file and look at it in a hex editor, are the
values in the file samples?

There are dozens of formats, in 4 categories that I just made up:

Uncompressed: AIFF, WAV and some older formats like SD2. These are what you're thinking of. Apart from headers and such, they consist of raw samples. But there are dozens of ways to represent samples! They can be 8, 16, 24 or more bits each (16 is most common), integer or floating point, and stored big- or little-endian. Most audio has at least two channels, and the channels can either have their samples interleaved or be stored one after the other. Most formats map sample values to amplitude linearly, but some use a logarithmic mapping.
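To make the "raw samples plus a header" point concrete, here is a minimal sketch using Python's standard struct and wave modules (an in-memory file, purely illustrative — not the CoreAudio APIs discussed here):

```python
import io
import struct
import wave

# A few 16-bit signed samples (one channel).
samples = [0, 1000, -1000, 32767, -32768]

# Pack them as little-endian 16-bit integers -- the raw bytes a WAV file stores.
raw = struct.pack("<5h", *samples)

# Write a minimal mono 16-bit 44.1 kHz WAV into memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 2 bytes = 16 bits per sample
    w.setframerate(44100)
    w.writeframes(raw)

# Read it back: past the header, the payload is exactly our raw samples.
buf.seek(0)
with wave.open(buf, "rb") as r:
    frames = r.readframes(r.getnframes())

print(frames == raw)
```

With two channels, the same module would interleave the bytes sample-by-sample (L R L R ...), which is the interleaved layout mentioned above.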

Simple compression: Mostly older formats like MACE and µ-law. These use some simple techniques to compress the data. A typical trick is to subtract adjacent sample values and encode those differences using a variable-length code where small numbers take fewer bits. These are generally obsolete, though, because the compression isn't very effective.
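A toy version of that delta-plus-variable-length trick, sketched in Python. Zigzag mapping plus LEB128-style varints are one common way to give small numbers fewer bytes; the actual older codecs named above differ in their details:

```python
import math

def zigzag(n):                      # map signed to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4
    return (n << 1) if n >= 0 else (((-n) << 1) - 1)

def unzigzag(u):
    return (u >> 1) if (u & 1) == 0 else -((u + 1) >> 1)

def varint(u):                      # LEB128-style: 7 payload bits per byte
    out = bytearray()
    while True:
        b = u & 0x7F
        u >>= 7
        out.append(b | 0x80 if u else b)
        if not u:
            return bytes(out)

def decode(data):                   # varints -> zigzag -> running sum of deltas
    out, acc, u, shift = [], 0, 0, 0
    for byte in data:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            acc += unzigzag(u)
            out.append(acc)
            u = shift = 0
    return out

# A smooth 16-bit signal: adjacent samples differ by only a few dozen counts.
samples = [int(3000 * math.sin(2 * math.pi * n / 512)) for n in range(512)]

# Encode the differences between adjacent samples instead of the samples.
deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
encoded = b"".join(varint(zigzag(d)) for d in deltas)

print(len(encoded), "bytes vs", 2 * len(samples), "bytes raw")
```

Because each delta fits in one varint byte here, the stream is about half the size of the raw 16-bit data — and, as the text says, that is roughly the ceiling for this family of tricks.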

Advanced lossy compression: MP3, AAC, Ogg, etc. These use very complicated digital signal processing to break the sound into frequency bands, and compress the data of each band in such a way that, even though a lot of information is lost, the human ear won't notice. (For example, if there's high volume at one pitch, your ear ignores a low volume at a nearby pitch, so the data for it can be tossed out.) They're the auditory equivalent of JPEG.
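A very crude illustration of the "throw away what you can't hear" idea, using a naive DFT in place of the filterbanks and psycho-acoustic masking models that real codecs like MP3 and AAC use: a loud tone survives, a much quieter one is discarded, and the reconstruction barely changes.

```python
import cmath
import math

N = 64
# A loud tone (amplitude 1.0) plus a very quiet one nearby (amplitude 0.01).
x = [math.sin(2 * math.pi * 5 * n / N) + 0.01 * math.sin(2 * math.pi * 9 * n / N)
     for n in range(N)]

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

X = dft(x)
# "Compress": zero every frequency bin whose magnitude is tiny.
kept = [c if abs(c) > 1.0 else 0 for c in X]
y = idft(kept)

nonzero = sum(1 for c in kept if c)
err = max(abs(a - b) for a, b in zip(x, y))
print(nonzero, "of", N, "bins kept; max reconstruction error", round(err, 4))
```

Only the loud tone's two bins survive, and the worst-case error equals the amplitude of the discarded quiet tone — information is lost, but very little of what you would actually hear.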

Advanced lossless compression: FLAC, Apple Lossless. These can't throw anything away, so the psycho-acoustic tricks don't apply. Instead they predict each sample from the previous few and entropy-code the small prediction errors, which lets the decoder reconstruct the audio bit-for-bit. The downside is that this typically only cuts the file size roughly in half.
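The prediction idea behind FLAC and Apple Lossless can be sketched with a toy order-1 predictor ("each sample equals the previous one"); real codecs use higher-order predictors and Rice/Golomb entropy coding, but the principle is the same:

```python
import math

# 16-bit samples of a smooth signal.
samples = [int(8000 * math.sin(2 * math.pi * n / 200)) for n in range(400)]

# Predict each sample as equal to the previous one; keep only the residuals.
residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

# Residuals are far smaller than the samples, so an entropy coder can
# store them in far fewer bits -- without losing anything.
print(max(abs(r) for r in residuals[1:]), "max residual vs",
      max(abs(s) for s in samples), "max sample")

# Decoding reverses the prediction exactly: bit-for-bit reconstruction.
decoded, acc = [], 0
for r in residuals:
    acc += r
    decoded.append(acc)
print(decoded == samples)
```

Unlike the lossy DFT sketch above the lossy codecs use, the roundtrip here is exact — that is the whole point of the lossless family.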

In all but the uncompressed formats, you need a decoder that reads a chunk of the file, decompresses it, and outputs an array of raw samples.

To generate (or parse) audio files, use the AudioFile, ExtAudioFile or AudioFileStream APIs in AudioToolbox.

(Wikipedia has good articles about all of these formats.)

—Jens


Coreaudio-api mailing list

References: 
 >Question about Default Output Unit Example (From: Thorrin <email@hidden>)
