
Re: sizeof float in Xcode debugger


  • Subject: Re: sizeof float in Xcode debugger
  • From: email@hidden
  • Date: Tue, 24 Jun 2008 09:47:44 -0400


On Jun 24, 2008, at 9:02 AM, Andy Lee wrote:

> On Jun 24, 2008, at 8:30 AM, Michael McLaughlin wrote:
>
>> In a 32-bit float, there are typically 24 bits of (unsigned) mantissa.
>> Therefore, the largest integer representable by that mantissa is
>> 2^24 - 1 = 16777215 = 1.68e7 (approximately) -- in other words, a
>> *precision* of something less than one part in 10^7, which is less than
>> seven (decimal) digits. The latter is traditionally referred to as seven
>> significant figures (assuming that only the last digit is uncertain).
>> Hence, there is no way to get nine decimal-digit precision with a normal
>> 32-bit float even though GDB displays nine digits.
>
> I'm not too familiar with floating point, but from this it seems the
> exponent is base 2, not base 10:
>
> <http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html>
>
> This would add more decimal digits of precision.
>
> I wonder if it works to look at it another way? Consider the sequence
> 0.5, 0.25, 0.125, 0.0625, etc. Each bit of binary precision adds a digit
> of decimal precision, because halving that last 25 results in a 125. It
> seems to me you'd be able to express a number with 24 decimal digits
> after the decimal point with 24 bits of mantissa. Is my math off here?


Yes, your math is off here. I suspect we're getting off topic, but Michael is correct that you get about 7 significant decimal digits with single precision. You get about 15 significant decimal digits with double precision. You should ignore any digits beyond that displayed by any program that uses the IEEE standard single- and double-precision formats.

Even worse, because only a limited number of values can be represented exactly (e.g., you can't represent 0.1 exactly in the IEEE format), you will get round-off errors, meaning your answer may not even be accurate to 7 or 15 digits, depending on your calculations.

Dave

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Xcode-users mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


  • Follow-Ups:
    • Re: sizeof float in Xcode debugger
      • From: Andy Lee <email@hidden>
  • References:
    • Re: sizeof float in Xcode debugger (From: Michael McLaughlin <email@hidden>)
    • Re: sizeof float in Xcode debugger (From: Andy Lee <email@hidden>)
