Re: Calculating peak level in db
- Subject: Re: Calculating peak level in db
- From: Brian Willoughby <email@hidden>
- Date: Tue, 15 Oct 2002 05:45:52 -0700
[ Well, the trick I use is to use convergent rounding when
[ converting from floats to shorts, and my underlying concern is
[ with preserving exact symmetry of the waveform - that is the case
[ if I map floats to +- 32767 (so that 16bit zero = true DC).
Are you talking about preserving the exact symmetry of an externally sampled
waveform, or a purely software synthesized waveform?
By the way, "true DC" does not mean 0 Volts. A DC signal can have any
voltage. Besides, at true 0 Volts, there is no current flow, thus no "C" for
the "DC".
Scaling 16 bit samples by 32768 on input and output preserves the exact
waveform that was sampled, and recreates it on the output. If you're concerned
about DC biasing and true zero volt signals being sampled as 16 bit zero, you
need to examine any capacitive coupling on the analog inputs of your audio
interface, research the specifications of the ADC chip used in your interface,
and double check any reference voltages or bias adjustments. This is not
something that should be handled with asymmetrical conversion math in software,
because each ADC may be different with respect to the DC bias. If you divide
by 32768 and multiply by 32767, then I'm calling that asymmetrical conversion
(not to be confused with the asymmetrical conversion that I found in the old
CoreAudio sample code).
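To make the comparison concrete, here is a minimal sketch of the two
conversions in plain C (my own illustration, not code from CoreAudio or from
the original post; the function names are made up):

#include <stdint.h>

/* Symmetric conversion: the same power-of-two scale factor in both
 * directions.  Range checking is omitted; this only shows the round
 * trip for samples that started out as 16 bit values. */
static float short_to_float(int16_t s)
{
    return (float)s / 32768.0f;
}

static int16_t float_to_short(float f)
{
    return (int16_t)(f * 32768.0f);
}

/* The asymmetric variant argued against above would divide by 32768.0f
 * on input but multiply by 32767.0f on output, which rescales every
 * sample slightly and is no longer an identity on the 16 bit values. */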
[ A sampled sinusoid that touched both -32768 and +32767 would have
[ a true DC point 0.5 below 16bit zero, on rendering to analogue.
[ So an ADC that returned such values from an ideal input sinusoid
[ would be doing something wrong, in my view.
You've described two different things: an "ideal input sinusoid" would not
touch both -32768 and +32767 when sampled unless you clipped it or offset it
by half an LSB.
Rendering to analog does not create a DC biasing problem that was not present
when the original analogue signal was sampled to digital. If we were dealing
with software that kept the 16 bit samples all through the buffers (like the
original ProTools), nothing you've described would be any different, so I don't
see how the choice to use float in CoreAudio should require any special
treatment to eliminate bias.
Not to mention the fact that most consumer DAC circuits will be capacitively
coupled to the analog outputs, so your contrived sinusoid with a half-LSB DC
bias will be centered around 0 V anyway.
[ A sinusoid that somehow spanned +- 32768 (which would of course
[ need something larger than 16bits to hold it), would likewise have
[ true DC at zero.
Yes, but this sinusoid in the analog domain would be clipped when sampled by a
16 bit ADC, so nothing you can do in the int->float->int conversion will
resurrect the clipped peak values.
If you're talking about a software synthesized sinusoid, then this would be a
programmer error, considering that CoreAudio is going to clip the +32768 sample
to +32767 when played back on any 16 bit hardware. 24 bit hardware would
handle it fine, though.
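For illustration only, this is the kind of hard clamp a 16 bit output path
ends up applying (a sketch under my own assumptions, not CoreAudio's actual
output code):

#include <stdint.h>

/* Scale a float sample to 16 bit with a hard clamp.  A synthesized
 * sample of +1.0 scales to +32768.0 and is clipped to +32767 here,
 * which is why the over-range peak cannot survive on 16 bit hardware. */
static int16_t clamp_to_16bit(float sample)
{
    float scaled = sample * 32768.0f;
    if (scaled >  32767.0f) scaled =  32767.0f;
    if (scaled < -32768.0f) scaled = -32768.0f;
    return (int16_t)scaled;
}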
[ To cut a long story short (now there's a pun!), and excluding the
[ matter of dithering, I can read 16bit samples into floats, scaling
[ by 1/32768 (sadly, I don't know how to apply shift to a float yet,
[ in ANSI C), and convert said floats back to shorts using
[ convergent rounding (~not~ a 'simple' C cast - and on Intel it
[ requires but three _asm instructions) after scaling by 32767, and
[ amazingly enough, the output samples are bit for bit identical to
[ the input ones.
Even if you could prove that your convergent rounding is bit for bit
transparent across all 65536 possible sample values, I don't see why it's
worth the extra calculations/instructions when scaling by 32768 does the same
thing.
Amazingly enough, dividing incoming 16 bit samples by 32768 to convert to
float, and multiplying output samples by 32768 for a 16 bit DAC produces a bit
for bit identical PCM stream. It's a pretty simple mathematical proof to show
that dividing by 32768 and then multiplying by 32768 produces the original
value. If convergent rounding is bit for bit identical to this simple scaling
(handled mostly by shift operations), then I would pick the operation that is
less processor intensive. The only way that convergent rounding could be
superior is if it did something different than produce identity, which would
introduce distortion of the 16 bit stream.
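If anyone wants to check the identity exhaustively rather than take the proof
on faith, a short self-contained test (again my own sketch, not code from the
list) can walk all 65536 sample values:

#include <stdint.h>
#include <stdio.h>

/* Verify that short -> float -> short with a 1/32768 and 32768 scale
 * factor is bit for bit transparent.  Division and multiplication by a
 * power of two are exact in IEEE 754 single precision for these values,
 * so no sample should change. */
int main(void)
{
    long mismatches = 0;
    for (int32_t s = -32768; s <= 32767; s++) {
        float f = (float)s / 32768.0f;
        if ((int16_t)(f * 32768.0f) != (int16_t)s)
            mismatches++;
    }
    printf("%ld of 65536 samples changed\n", mismatches);
    return mismatches != 0;
}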
Brian Willoughby