
Re: sizeof float in Xcode debugger


  • Subject: Re: sizeof float in Xcode debugger
  • From: Greg Guerin <email@hidden>
  • Date: Tue, 24 Jun 2008 07:48:11 -0700

Michael McLaughlin wrote:

In a 32-bit float, there are typically 24 bits of (unsigned) mantissa.
Therefore, the largest integer representable by that mantissa is
2^24 - 1 = 16777215 = 1.68e7 (approximately) -- in other words, a
*precision* of something less than one part in 10^7, which is less than seven
(decimal) digits. The latter is traditionally referred to as seven
significant figures (assuming that only the last digit is uncertain).
Hence, there is no way to get nine decimal-digit precision with a normal
32-bit float even though GDB displays nine digits.
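
(Those limits can be read straight out of <float.h>; a minimal C sketch, noting that FLT_DECIMAL_DIG is a later C11 addition and won't be in 2008-era headers:)

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* FLT_MANT_DIG: mantissa bits of a float (24 for IEEE 754 single).
       FLT_DIG: decimal digits guaranteed to survive a
       decimal -> float -> decimal round trip (6). */
    printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);
    printf("FLT_DIG      = %d\n", FLT_DIG);
    printf("2^24 - 1     = %ld\n", (1L << 24) - 1);  /* 16777215 */
#ifdef FLT_DECIMAL_DIG
    /* C11 addition: digits needed so float -> decimal -> float is exact (9). */
    printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);
#endif
    return 0;
}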

Please read the URL I linked to. Quoting from the section I noted:

"To illustrate extended precision further, consider the problem of converting between IEEE 754 single precision and decimal. Ideally, single precision numbers will be printed with enough digits so that when the decimal number is read back in, the single precision number can be recovered. It turns out that 9 decimal digits are enough to recover a single precision binary number (see the section Binary to Decimal Conversion)."

And from the section Binary to Decimal Conversion:

"Since single precision has p = 24, and 2^24 < 1^08, you might expect that converting a binary number to 8 decimal digits would be sufficient to recover the original binary number. However, this is not the case. ..."


Also reread what I wrote about GDB:
"The GDB representation could simply be an artifact of how GDB displays float values."


It could be that GDB is using a single-to-double conversion and then printing 9 figures for its own reasons, or it could be that GDB's author knows about the 9-digit requirement and is Doing The Right Thing without explaining why.


Also, note that the value displayed by the CodeWarrior debugger, 0.802060 (six decimal digits), is inaccurate. Reading that decimal number back in produces 0x3f4d53ce as the bits of the float, which is not the value you have. The correct input decimal value is 0.8020601. You might also try 0.80206007 (8 significant figures), which is the input needed to produce the binary value that lies between 0.802060 and 0.8020601 (bit-patterns 0x3f4d53ce and 0x3f4d53d0).
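
That check is straightforward to reproduce (a sketch, assuming strtof on your system rounds to nearest, as IEEE 754 requires):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Parse a decimal string as a float and show the resulting bit pattern. */
static void show_bits(const char *text)
{
    float f = strtof(text, NULL);
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("%-11s -> 0x%08x -> %.9g\n", text, (unsigned)bits, f);
}

int main(void)
{
    show_bits("0.802060");    /* 0x3f4d53ce */
    show_bits("0.80206007");  /* 0x3f4d53cf, the value in between */
    show_bits("0.8020601");   /* 0x3f4d53d0 */
    return 0;
}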


If GDB prints additional digits, it does not follow that those digits are inaccurate, imprecise, or even insignificant. For example, the double representation my sample code printed is more precise, and it demonstrates that the value printed by GDB is indeed correct to 9 significant figures. Granted, that precision might not be carried through in calculations, but it is nevertheless correct to 9 significant figures.

If the float were converted to double, then you would have those 9 significant figures (and more), and the decimal value of that double would be closer to 0.802060127 than to 0.8020601 or 0.802060. That may seem wrong or impossible, but it's because IEEE-754 binary floating-point is not decimal floating-point.
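
Concretely (a sketch, assuming your float is the bit pattern 0x3f4d53d0 from above): rebuild the float from its bits, promote it to double, and print it.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t bits = 0x3f4d53d0;     /* candidate bit pattern from above */
    float f;
    memcpy(&f, &bits, sizeof f);

    double d = f;                   /* float -> double is exact, no rounding */
    printf("%.9g\n", d);            /* 0.802060127 -- nine significant figures */
    printf("%.17g\n", d);           /* the same binary value with full double-width output */
    return 0;
}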


My original problem was that it appears as though the nine digits shown in
GDB are being treated as genuine (relative to our legacy CodeWarrior
debugger). I was trying to pin down the reason for divergent results in the
two cases.

I doubt that it's the debugger. Or if it is the debugger, then it's CodeWarrior that's inaccurate by displaying 0.802060.


It's unlikely this value at this point is the cause, and the problem is likely somewhere else. You should look at any trig, power, rounding, or other functions you might be using. Pretty much any C library function that takes float or double args is suspect, because you don't necessarily know what the internals are doing, and a single-bit difference in the lsb can result in divergence.
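
As a toy illustration of that last point (not anything from your code; compile with -lm): start two floats one ulp apart and feed both through the same chain of math-library calls.

#include <math.h>
#include <stdio.h>

int main(void)
{
    float a = 0.8020601f;
    float b = nextafterf(a, 1.0f);   /* the adjacent float, one ulp higher */

    /* Run both through the same sequence of calls; the 1-ulp gap
       typically grows as the results feed back into further calls. */
    for (int i = 0; i < 30; i++) {
        a = sinf(3.0f * a) + 1.0f;
        b = sinf(3.0f * b) + 1.0f;
    }
    printf("a = %.9g\nb = %.9g\ndiff = %g\n", a, b, (double)a - (double)b);
    return 0;
}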

Finally, I assume the CodeWarrior code is PPC-native, as is the Xcode executable. If, however, one is PPC and the other is x86, then the divergent values may be due to differences in hardware precision.

  -- GG
