
Re: Audio recording bitdepth


  • Subject: Re: Audio recording bitdepth
  • From: Brian Willoughby <email@hidden>
  • Date: Sat, 28 Nov 2009 16:13:16 -0800

This topic has been covered in past threads on the CoreAudio mailing list; search the archives for the full details.

The brief summary is as follows:

To convert from 16-bit to float, divide by 32,768.0
To convert from float to 16-bit, multiply by 32,768.0
To convert from 24-bit to float, divide by 8,388,608.0
To convert from float to 24-bit, multiply by 8,388,608.0

All values are signed, both fixed-point and floating-point.
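
In C, those formulas come out to something like the following sketch (the function names are illustrative, not CoreAudio API; clipping is added for the +1.0 edge case, which the plain multiply would push one step past the integer range):

#include <stdint.h>

/* 16-bit <-> float, pure power-of-two scaling */
static inline float int16_to_float(int16_t s)
{
    return (float)s / 32768.0f;
}

static inline int16_t float_to_int16(float f)
{
    float s = f * 32768.0f;
    /* exactly +1.0 would map to +32768, one past int16_t range */
    if (s >  32767.0f) s =  32767.0f;
    if (s < -32768.0f) s = -32768.0f;
    return (int16_t)s;
}

/* 24-bit <-> float; the 24-bit sample is assumed sign-extended
   into an int32_t */
static inline float int24_to_float(int32_t s)
{
    return (float)s / 8388608.0f;
}

static inline int32_t float_to_int24(float f)
{
    float s = f * 8388608.0f;
    if (s >  8388607.0f) s =  8388607.0f;
    if (s < -8388608.0f) s = -8388608.0f;
    return (int32_t)s;
}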

To summarize the debate: some folks were concerned that positive 16-bit integers never exceed 32,767 and wanted to alter the conversion to account for this. The problem is that doing so distorts the waveform and introduces quantization error. To avoid the quantization, always use a pure power-of-two factor.
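
You can convince yourself of this with a quick test harness (hypothetical, not from the thread): with a power-of-two scale, every 16-bit value maps to an exactly representable float, so the round trip is bit-transparent; with 32,767 the quotient must be rounded, and the program below simply counts how many of the 65,536 values fail to survive int -> float -> int under each factor, truncating on the way back:

#include <stdint.h>
#include <stdio.h>

/* Count 16-bit values that fail to round-trip through float
   for a given scale factor. */
static int roundtrip_failures(float scale)
{
    int failures = 0;
    for (int32_t s = -32768; s <= 32767; s++) {
        float f = (float)s / scale;
        if ((int32_t)(f * scale) != s)
            failures++;
    }
    return failures;
}

int main(void)
{
    printf("scale 32768.0: %d failures\n", roundtrip_failures(32768.0f));
    printf("scale 32767.0: %d failures\n", roundtrip_failures(32767.0f));
    return 0;
}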

Of course, the easiest way to handle this is to use an AudioConverter: you not only get the official CoreAudio conversions, they also run in highly optimized code.
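
Something along these lines (a sketch, not tested; assumes mono 44.1 kHz, SInt16 in and Float32 out, with error handling mostly elided):

#include <AudioToolbox/AudioToolbox.h>

/* Convert a buffer of 16-bit samples to Float32 via an
   AudioConverter, so Apple's own int<->float mapping is used. */
static OSStatus convert_int16_to_float(const SInt16 *in, UInt32 frames,
                                       Float32 *out)
{
    AudioStreamBasicDescription src = {0}, dst;

    src.mSampleRate       = 44100.0;
    src.mFormatID         = kAudioFormatLinearPCM;
    src.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                          | kAudioFormatFlagIsPacked;
    src.mBitsPerChannel   = 16;
    src.mChannelsPerFrame = 1;
    src.mBytesPerFrame    = 2;
    src.mFramesPerPacket  = 1;
    src.mBytesPerPacket   = 2;

    dst = src;                 /* same rate, channels, packeting */
    dst.mFormatFlags    = kAudioFormatFlagIsFloat
                        | kAudioFormatFlagIsPacked;
    dst.mBitsPerChannel = 32;
    dst.mBytesPerFrame  = 4;
    dst.mBytesPerPacket = 4;

    AudioConverterRef converter;
    OSStatus err = AudioConverterNew(&src, &dst, &converter);
    if (err != noErr) return err;

    UInt32 outSize = frames * sizeof(Float32);
    err = AudioConverterConvertBuffer(converter, frames * sizeof(SInt16),
                                      in, &outSize, out);
    AudioConverterDispose(converter);
    return err;
}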

Brian Willoughby
Sound Consulting


On Nov 28, 2009, at 14:02, Bjorn Roche wrote:
A recent discussion on another mailing list made me realize that I may not be recording (or playing back) audio in a bit-transparent way. Currently, my app interacts with CoreAudio via floating point numbers, and I'd like to keep it that way [1]. I would like to make sure that integer-based linear-PCM files recorded or played back (with the assumption of unity gain in between) on a device with the same or higher bit resolution remain bit-for-bit identical. For example, a 16-bit file should play back via a 16-bit S/PDIF interface exactly as it is in the file, despite having been converted to float and back to get into Core Audio. (Of course, dither may also be an issue; I see nothing in the docs about dither.)


Unfortunately, I cannot find any documentation about what int/float/int transform Core Audio uses, so I can't be sure (without extremely painful testing) which transform I should use as an inverse. If someone could let me know how Apple handles these conversions, so I can properly invert them, I'd really appreciate it.

	thanks,

[1] This is because my app uses PortAudio, a portable API built on top of APIs like CoreAudio. It interacts with the AUHAL layer and does its own conversions to non-float formats when required. My app just speaks to it using float and does its own conversions when reading files/playing back. This is not ideal, since CoreAudio could give it the raw int-based data, but that's the way things are.

bjorn


