
Re: Native Device Formats


  • Subject: Re: Native Device Formats
  • From: Brian Willoughby <email@hidden>
  • Date: Mon, 9 Jun 2008 13:25:49 -0700


On Jun 9, 2008, at 03:38, Mikael Hakman wrote:

> such as e.g. that using inclusive [-1.0, +1.0] range mapping when working with CD data would cause distortion and therefore we are forced to use [-1.0, +1.0) exclusive mapping. If you use lossless mapping from signed integers stored on a CD to float, which [-1.0, +1.0] mapping is, and you interpret the numbers in the same way as they are meant to be processed by a DAC, then no distortion will occur.

That is not true. There is no lossless mapping from CDDA to float [-1.0, +1.0], although it is sometimes possible to recreate the lost data under carefully controlled conditions.


I explained that your mapping would result in quantization noise. You must consider that float does not have infinite resolution. A 32-bit float has only 24 bits of mantissa. The standard mapping to [-1.0, +1.0) does not create any extra bits, thus none are lost in the conversion. Your mapping creates quantization noise because very long bit sequences are generated by the conversion, and the intermediate results are truncated in the float. You're lucky that the conversion back to integer involves rounding back to the original values, but any processing in the float world could easily bake in the quantization noise with no hope of reversing it.
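To make the resolution point concrete, here is a minimal C sketch (illustrative only, not code from this thread):

    #include <stdio.h>

    int main(void)
    {
        /* 2^24 is the last point at which every integer is exactly
           representable in a 32-bit float; 2^24 + 1 rounds back to 2^24. */
        float a = 16777216.0f;  /* 2^24     */
        float b = 16777217.0f;  /* 2^24 + 1 */
        printf("a == b: %d\n", a == b);  /* prints 1 */
        return 0;
    }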

The standard mapping used in the industry, and used by CoreAudio, involves dividing the integer values by a value like 0x800000, with no offset. The reason that there is no quantization noise introduced by the standard mapping is that there is only one significant bit in the conversion factor, and thus it is equivalent to a bit shift. This also happens to allow for faster optimizations of the conversion, but it's not the only reason to choose the standard mapping.
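As a rough illustration (a sketch; the helper names are mine, not CoreAudio API), the standard mapping round-trips every 24-bit code exactly, because dividing by 0x800000 = 2^23 only adjusts the float's exponent:

    #include <stdio.h>
    #include <stdint.h>

    /* Standard-style mapping: divide by 0x800000 (2^23). A 24-bit sample
       fits the float's 24-bit mantissa, and a power-of-two divisor changes
       only the exponent, so no bits are lost in either direction. */
    static float   int24_to_float(int32_t s) { return (float)s / 8388608.0f; }
    static int32_t float_to_int24(float f)   { return (int32_t)(f * 8388608.0f); }

    int main(void)
    {
        for (int32_t s = -8388608; s < 8388608; s++)
            if (float_to_int24(int24_to_float(s)) != s)
                printf("lossy at %d\n", s);  /* never prints */
        return 0;
    }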

Your mapping involves dividing by 0x7FFFFF as well as adding in an offset of 1.0/0x800000. Either or both of these operations result in more significant bits than will fit into a 32-bit float, and many sample values won't even fit into a 64-bit float. What results is a signal in the float representation which has quantization distortion, and very likely also has nonlinear distortion, depending upon whether you apply an offset and how you do that. Your testing only succeeds because you can recreate and cancel the quantization noise (and any nonlinear distortion) that you introduced, but even a simple gain fader applied to the float values would prevent your paired nonlinear conversions from remaining lossless. Your suggested mapping is incredibly fragile, and only works if no processing at all is done to the float values. There's no point in converting to float if you are not able to apply any processing without distorting the original signal. Keep in mind that performing math on a signal which has nonlinear distortion will not produce valid results - e.g. frequency modulation, FFT, etc.
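To see the truncation numerically, here is a sketch assuming one linear inclusive mapping consistent with the description above, f = (2s + 1) / (2^24 - 1); the exact constants in Hakman's proposal may differ:

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        int32_t s = 1234567;  /* an arbitrary 24-bit sample value */

        /* Hypothetical inclusive mapping: -0x800000 -> -1.0 and
           0x7FFFFF -> +1.0. The divisor is not a power of two, so the
           quotient must be rounded to fit a 24-bit mantissa. */
        float  f     = (2.0f * (float)s + 1.0f) / 16777215.0f;
        double exact = (2.0  * (double)s + 1.0) / 16777215.0;
        printf("quantization error: %g\n", (double)f - exact);  /* nonzero */

        /* The unprocessed round trip still recovers s, but only because
           the inverse conversion rounds the error away; any intermediate
           processing would bake it in instead. */
        long rt = lround(((double)f * 16777215.0 - 1.0) / 2.0);
        printf("round trip: %ld (recovered only by rounding)\n", rt);
        return 0;
    }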

> Likewise the assertion that suitability or convenience of a particular mapping depends on the source of digital audio is given without any rational explanation, verifiable support, or proof. It would be a good argument if it were true, and this is the reason why you use it.

I think you misunderstood the purpose of my three examples. They were not meant as proof of suitability of any given mapping. I was merely illustrating how to successfully use the standard mapping which CoreAudio requires.


The proof is in the mathematical representation. The algorithms I described are all based on the fact that we use the standard mapping, and once you accept the standard mapping for the reasons above, then you are forced to use these algorithms, or something basically equivalent.

My intent was to show that the standard mapping is only marginally unsuitable for digitally-generated signals and dynamics processors which attempt to avoid clipping. In those cases, I gave examples of algorithms which work within the constraints of the standard mapping. In other words, I suggest that you accept the standard mapping, and find ways to work within the standard without clipping.
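For instance, one such workaround in C (a sketch, not one of the original examples): scale a digitally-generated signal so its positive peak lands on the largest representable code instead of +1.0:

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Scale the nominal amplitude by 0x7FFFFF/0x800000 so the positive
       peak maps to the largest positive 24-bit code rather than clipping
       at the +1.0 that the standard mapping cannot represent. */
    static const double kMaxPositive = 8388607.0 / 8388608.0;

    void render_sine(float *out, size_t n, double freq, double sampleRate)
    {
        const double phaseInc = 2.0 * M_PI * freq / sampleRate;
        for (size_t i = 0; i < n; i++)
            out[i] = (float)(kMaxPositive * sin(phaseInc * (double)i));
    }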

> For the most part, you are describing how things are, not why they are this way, which was my original question. The closest thing to an explanation is Richard's assertion that this +1.0-exclusive mapping provides "a nice simple mapping between the integer form and the mantissa of the float", meaning, I suppose, that you can do the conversion by bit-twiddling (faster) instead of division and multiplication (slower). This is true, but an equally nice and simple mapping exists even when using the inclusive +1.0 mapping.

You're right, Richard does touch on some of the advantages. But those are mostly beneficial side-effects of the real reason for using the standard mapping, which I outlined above: To avoid quantization. (Your belief that your mapping does not introduce quantization noise is mistaken).


The important fact is that all DAC and A/D chips use two's-complement binary integers. As such, all signals have exactly one negative code which cannot be converted to a positive code. Your desire to rid yourself of this unfortunate fact while converting into the float representation is a futile and moot exercise. If anything, you are exchanging one problem for another, worse problem. Your mapping offers +1.0 as a valid sample value for synthesized signals, but loses a precise value for 0. You seem to think this isn't important, but you've not considered the machine code necessary to work with such numbers. Standard math libraries are just not going to work with a signal representation where zero does not have a code value of 0. That alone puts your mapping at a significant disadvantage to the standard mapping.
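As a small illustration (again assuming the hypothetical inclusive mapping f = (2s + 1) / (2^24 - 1) sketched earlier), integer silence no longer maps to float 0.0:

    #include <stdio.h>

    int main(void)
    {
        /* The integer silence code s = 0 maps to a small nonzero float: */
        float silence = 1.0f / 16777215.0f;  /* ~5.96e-8, not 0.0 */
        printf("silence = %g, exactly zero: %d\n",
               (double)silence, silence == 0.0f);
        return 0;
    }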

In other words, I don't see how you have clearly shown your mapping to be more "intuitive." At first, you asked for a simple [-1.0, +1.0] range, which I will admit is quite intuitive. But then you modified this to represent the integer range from 0x800000..0x7FFFFF, which results in a quite different float world. In the former example, zero is 0.0, but in your latter example zero is not 0. I don't see how the latter is intuitive in the least.

> Consider a series of numbers consisting of the following values: +1.0, -1.0, +1.0, -1.0, etc. What signal does this series represent?

That represents a signal which is completely impossible in the analog domain. All proper DACs must implement a filter which has no output above fs/2. Your digitally-generated signal has maximum amplitude at exactly fs/2, which would require an ideal filter that transitions from maximum amplitude to minimum amplitude instantaneously in the frequency domain. This is impossible in actual filters, in both the digital and the analog domain. Sorry.
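To see why, note that the alternating series is a cosine sampled exactly at the Nyquist frequency:

    x[n] = (-1)^n = cos(pi * n) = cos(2 * pi * (fs/2) * (n/fs))

Reproducing it would require the reconstruction filter to pass fs/2 at full amplitude while rejecting everything above it, i.e. an ideal brick-wall filter.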


> Now, consider the following series of signed 24-bit binary integers: 0x7fffff, 0x800000, 0x7fffff, 0x800000, etc. The values are the maximum and the minimum of a 24-bit number, and these are the values transmitted by S/PDIF and processed by a DAC. What does this signal represent?

Again, this is an impossible signal in the analog world. Thus you'll never see it on S/PDIF unless it is digitally-generated.


But if you were to reduce the frequency of this signal significantly, I still don't see the importance. The set of signals which can be represented only by [-1.0, +1.0], but not by [-1.0, +1.0), is infinitesimal compared to the vast set of real-world signals that people are working with. Gaining the ability to represent this infinitesimal set of marginally useful signals in a distorted format has no hope of outweighing the advantages of the undistorted standard mapping.

> I understand what you are saying, I agree with some of it, and I disagree with other parts. In particular I disagree with parts that are pure assertions and/or assumptions without any rational explanation or proof,

To a certain extent, I understand your desire for a rigorous proof. However, I hardly think that the CoreAudio mailing list is an appropriate place to expect such a multidisciplinary proof. It is not possible to prove this based on logic alone. An understanding of mathematics, binary representations, processor machine code, and even analog circuit analysis is required for a complete proof. You're on your own in those respects.




Brian Willoughby
Sound Consulting


