Re: sizeof float in Xcode debugger
- Subject: Re: sizeof float in Xcode debugger
- From: Michael McLaughlin <email@hidden>
- Date: Tue, 24 Jun 2008 08:30:50 -0400
- Thread-topic: sizeof float in Xcode debugger
Greg Guerin wrote:
>> I doubt that a float could really have nine sig. figs. as Xcode indicates
>> and wonder if some garbage might be getting included somehow.
>You seem to be confusing "sig. figs." (i.e. significant figures, or
>significant decimal digits), with significant bits.
No, I meant significant figures (decimal digits) as usual.
In a 32-bit float, there are typically 24 bits of (unsigned) mantissa.
Therefore, the largest integer representable by that mantissa is
2^24 - 1 = 16777215, roughly 1.68e7 -- in other words, a *precision* of
roughly one part in 10^7, or about seven (decimal) digits. That is
traditionally referred to as seven significant figures (assuming that only
the last digit is uncertain).
Hence, there is no way to get nine decimal digits of precision from a normal
32-bit float, even though GDB displays nine digits.
Granted, I am speaking as a former chemistry professor, and that may be a
biased perspective in this forum, but the convention is quite standard
nonetheless.
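For what it's worth, a small plain-C sketch (my own example, assuming IEEE
754 single precision) makes the 24-bit limit visible:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_DIG is the number of decimal digits guaranteed to survive a
       decimal -> float -> decimal round trip: 6 for IEEE single precision. */
    printf("FLT_DIG     = %d\n", FLT_DIG);

    /* 2^24 - 1 is exactly representable; 2^24 + 1 is not, and rounds
       back to 2^24, so consecutive integers run out around 1.7e7. */
    printf("16777215.0f = %.1f\n", 16777215.0f);
    printf("16777217.0f = %.1f\n", 16777217.0f);   /* prints 16777216.0 */

    return 0;
}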
****
My original problem was that the nine digits shown in GDB appear to be
treated as genuine (unlike in our legacy CodeWarrior debugger). I was trying
to pin down the reason for the divergent results in the two cases.
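In case it helps, a second minimal sketch (again my own example, not the
code in question) shows that the extra digits GDB prints are just the
decimal expansion of the nearest float, not additional precision:

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;   /* nearest single-precision value to 0.1 */
    double d = 0.1;    /* nearest double-precision value to 0.1 */

    /* Printed to nine digits, as GDB does, the float shows 0.100000001;
       the trailing digits are representation artifacts, not genuine
       significant figures. */
    printf("float  : %.9g\n", f);
    printf("double : %.9g\n", d);

    return 0;
}

Whether that display issue accounts for the divergent results is, of course,
a separate question.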
--
Mike McLaughlin