> The range of interest is only 0dBFS and down
This is very misleading: dB doesn't really come into
it. Intermediate results can, and generally do, exceed 1. If
you look at the MAD fixed-point MP3 decoder, for example, it uses a 28.4 format
(i.e. fixed point with 28 fractional bits and 4 integer bits; there is no
mantissa or exponent involved) as its internal representation.
And if you do something like an FFT you need the full range offered by true
floating-point numbers. Many of the other algorithms I use need this
too; in fact you probably need it for any algorithm of any complexity.
A good way to look at (say) 32-bit floating point is that
it offers you 24 bits of precision whatever the magnitude of the number (unless
it overflows or underflows, of course). By that metric, 80-bit floating
point offers you 64 bits of 'mantissa precision', which exceeds anything you can
do in fixed point. And with the blinding speed of floating point on Intel
hardware there's no reason to look elsewhere.
Bonus question: which is faster? (Note that the two
buffers are actually the same size, 8192 bytes):
(a) char buf [8192];
memset (buf, 0, 8192);
(b) double buf [1024];
for (int i = 0; i < 1024; ++i) buf [i] = 0.0;
In fact it is (b), because the generated code uses 64-bit
loads and stores. Just goes to show there's no substitute for a
bit of experimentation.
Regards,
Paul Sanders.