
Re: sizeof float in Xcode debugger


  • Subject: Re: sizeof float in Xcode debugger
  • From: Andy Lee <email@hidden>
  • Date: Tue, 24 Jun 2008 09:02:39 -0400

On Jun 24, 2008, at 8:30 AM, Michael McLaughlin wrote:
> In a 32-bit float, there are typically 24 bits of (unsigned) mantissa.
> Therefore, the largest integer representable by that mantissa is
> 2^24 - 1 = 16777215 = 1.68e7 (approximately) -- in other words, a
> *precision* of something less than one part in 10^7, which is less than
> seven (decimal) digits. The latter is traditionally referred to as seven
> significant figures (assuming that only the last digit is uncertain).
> Hence, there is no way to get nine decimal-digit precision with a normal
> 32-bit float even though GDB displays nine digits.
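
To see that roughly-seven-digit limit in practice, a minimal sketch along these lines (assuming a standard IEEE 754 single-precision float; the program and values are illustrative, not from the thread) shows where consecutive integers stop fitting in a float:

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // 2^24 is the last point at which every integer fits exactly in a
            // 24-bit mantissa; beyond it, odd integers collapse onto neighbors.
            float exact   = 16777215.0f;  // 2^24 - 1, exactly representable
            float rounded = 16777217.0f;  // 2^24 + 1, rounds to 16777216
            NSLog(@"16777215 -> %.1f", exact);
            NSLog(@"16777217 -> %.1f", rounded);   // prints 16777216.0

            // Likewise, nine decimal digits cannot all be significant:
            float nine = 123456789.0f;
            NSLog(@"123456789 -> %.1f", nine);     // prints 123456792.0
        }
        return 0;
    }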

I'm not too familiar with floating point, but from this it seems the exponent is base 2, not base 10:

<http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html>

This would add more decimal digits of precision.
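
For what it's worth, frexpf() from <math.h> makes the base-2 exponent visible directly. This small illustrative snippet (the value 123.456f is arbitrary) splits a float into a significand in [0.5, 1) and a power of two:

    #import <Foundation/Foundation.h>
    #include <math.h>

    int main(void) {
        @autoreleasepool {
            float value = 123.456f;
            int exponent;
            // frexpf() returns the significand m such that value == m * 2^exponent.
            float m = frexpf(value, &exponent);
            // Prints roughly: 123.456001 = 0.964500010 * 2^7
            NSLog(@"%.6f = %.9f * 2^%d", value, m, exponent);
        }
        return 0;
    }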

I wonder if it works to look at it another way? Consider the sequence 0.5, 0.25, 0.125, 0.0625, etc. Each bit of binary precision adds a digit of decimal precision, because halving a value that ends in ...25 produces one that ends in ...125. It seems to me you'd be able to express a number with 24 decimal digits after the decimal point with 24 bits of mantissa. Is my math off here?
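
One quick way to poke at that empirically (a rough sketch, not from the thread) is to print the halving sequence as it is actually stored in a float. Every term here is an exact power of two, so the float holds each one exactly, and the printed expansion does grow by one decimal digit per step:

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            float x = 0.5f;
            for (int i = 1; i <= 24; i++) {
                // 2^-i is exactly representable; %.30f shows the full expansion.
                NSLog(@"2^-%d = %.30f", i, x);
                x /= 2.0f;
            }
        }
        return 0;
    }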

> My original problem was that it appears as though the nine digits shown in
> GDB are being treated as genuine (relative to our legacy CodeWarrior
> debugger). I was trying to pin down the reason for divergent results in the
> two cases.

Did you try my suggestion to see what your *program* thinks the values are in the two IDEs?


   NSLog(@"float is %.12f", myFloat);

If this prints the same thing in both IDEs, then there is in fact no discrepancy, except in the amount of precision the respective debuggers choose to display.
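
If it helps, here is one self-contained way to run that check outside the debugger entirely (a sketch; myFloat and its value are placeholders for whatever the real program computes). Printing the raw bit pattern as well sidesteps any question about how many digits each debugger chooses to display:

    // build: clang -framework Foundation checkfloat.m -o checkfloat
    #import <Foundation/Foundation.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        @autoreleasepool {
            float myFloat = 0.1f;  // placeholder for the value under comparison

            // Same output from both IDEs' builds => no real discrepancy,
            // only a difference in how the debuggers format the value.
            NSLog(@"float is %.12f", myFloat);

            // Exact comparison: the stored 32-bit pattern.
            uint32_t bits;
            memcpy(&bits, &myFloat, sizeof bits);
            NSLog(@"raw bits: 0x%08X", (unsigned)bits);
        }
        return 0;
    }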

--Andy



  • Follow-Ups:
    • Re: sizeof float in Xcode debugger
      • From: email@hidden
  • References:
    • Re: sizeof float in Xcode debugger (From: Michael McLaughlin <email@hidden>)