
Re: float sample conversion (was Re: Calculating peak level in db)


  • Subject: Re: float sample conversion (was Re: Calculating peak level in db)
  • From: Richard Dobson <email@hidden>
  • Date: Thu, 17 Oct 2002 10:42:33 +0100

I have been checking this as carefully as I can. The 24bit claim was indeed premature (my display doesn't give me enough decimal places for the sample value to see the differences!), but the difference is itself interesting. With the same process applied to a 24bit set, the 'kink' appears not at -6dB (the 0.5 mark) but at ~-2.5dB (the 0.75 mark). I haven't done the maths, but it is reasonable to presume that the "noticeable difference point" will be even closer to peak in a 32bit int system, and of course much lower down in an 8bit system, in which most medium to small level changes will make no difference at all!

So as I see it, this process does in fact demonstrate very interestingly what the effects of quantisation (and, in particular, rounding) are. The point is that the level change is very, very small. Going back to the 16bit case, the change is so small that all values below 16384 are changed so slightly that the rounding will just return them to their original values. That is what I meant when I said the 'distortion' falls below the quantisation noise. At 16385, the change is just enough to give a new value below 16384.5, which will now be rounded to 16384.


In the 24bit case, the difference is commensurately smaller (8388607/8388608), and only values above 6291456 will get changed.


So this illustrates the fact (which I suppose is intuitively obvious) that there are level changes too small to register 'cleanly' in an n-bit system, as the changes mostly get swallowed by the rounding. So, if an AudioUnit with a hi-res volume control sets a level change of 0.999969482 to a signal at 16bit full-scale, the output, via the rounding, will show exactly the same change (assuming no dithering) that has been demonstrated in this discussion.

So yes, in a sense this is distortion, but it is the same distortion that any process such as level change or filtering will introduce. I am not "shifting values" above some determined point; that is merely an emergent feature of applying a very small level change to a (relatively speaking) coarsely quantised signal. Which is of course why we should dither after any direct processing on a 16bit signal, and similarly dither floats before converting to 16bits.


Which brings me to my (hopefully!) final CoreAudio question on this thread (and not currently having a Mac to test on, or even study the new SDK): who is responsible for applying dither when sending floats to a 16bit output device? The driver writer, the CoreAudio engine itself (if so, what dither method?), or the general developer writing AudioUnits, etc.?



Richard Dobson


....


Your radical approach does more than drop the level and prevent peak clipping. What you've done is add quantization noise, and distorted the wave shape (by introducing a nonlinearity at the -6 dB transition, on both the positive and negative sides). You've effectively shifted the clip to the middle of the curve from the peak! The difference between the techniques being compared is that waveforms will rarely touch +1.0 precisely without going beyond that, but waveforms will frequently hit the -6 dB level and go beyond. So your approach will cause problems more frequently than the one used in CoreAudio. Studies have been done on human perception of hard clipping, but I don't know how bad it sounds to introduce nonlinearities in the middle of the wave.

...
You are not minimizing quantization errors. You are merely shifting them by 1/2 LSB in a signal-dependent fashion (i.e. whether the signal is above or below the -6 dB threshold). This is waveform distortion. You have in effect mixed in a square wave that tracks the incoming signal. Even though this square wave has an amplitude of 1/2 LSB, it's still noise that you've added! And all because 16 bit two's complement numbers have a missing code for +32768.


[ The scale factor 32767/32768 is so small that it in effect falls
[ inside the 16bit quantisation distance, except, as noted
[ elsewhere, for signals above -6dB.

(See above)
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.

References: 
 >Re: Calculating peak level in db (From: Kurt Bigler <email@hidden>)
 >Re: Calculating peak level in db (From: Richard Dobson <email@hidden>)
 >Re: Calculating peak level in db (From: Brian Willoughby <email@hidden>)
 >Re: Calculating peak level in db (From: Richard Dobson <email@hidden>)
 >float sample conversion (was Re: Calculating peak level in db) (From: Brian Willoughby <email@hidden>)
