Re: FW: NSNumberFormatter: Display INT as decimal
- Subject: Re: FW: NSNumberFormatter: Display INT as decimal
- From: "Michael Ash" <email@hidden>
- Date: Tue, 4 Nov 2008 23:33:50 -0500
On Mon, Nov 3, 2008 at 8:30 PM, HAMILTON, Steven
<email@hidden> wrote:
> That would be an idea. I think my reluctance to do that initially was down to still not understanding floats and their imprecision. I use bindings for some columns directly into the Core Data INT attribute. I presume I can use a Value Transformer to do this last tweak of the data?
>
> Also, am I right in thinking that a float's imprecision only occurs when trying to use the least significant digit of the float? The one that's rounded up because we've run out of bits? And therefore all the values before that are expected to be accurate?
No, a float's imprecision depends entirely on where it came from.
For example:
float x = 5200;
float y = x / 100.0;
The value in y is exact, with no error. This is because 5200,
100, and 52 can all be represented exactly, and basic operations such
as division are specified by IEEE 754 to produce correctly rounded
results, so a quotient that is exactly representable comes out exact.
Now consider this example:
float x = 1;
float y = x + 1e38;
float z = y - 1e38;
The value of z will probably be 0, not 1 (I say probably because it
depends on the internal representation being used; with IEEE 754
single precision it is 0). This is because the float data type isn't
precise enough to hold 1e38 + 1, so the addition rounds the 1 away.
You have probably seen advice not to use floats for money. And this is
very good advice! Floats are imprecise, and doing money calculations
using floats will quickly give you fractional cents and other problems
in your calculation.
However, those problems stem from doing repeated calculations using a
data type which cannot exactly represent the values you want to use.
In the case you have here, you're performing a single operation. You
don't have a buildup of imprecision because you only do one thing. In
this case, you know that as long as the magnitude of your integer is
small enough to be represented exactly by the data type (the limit
is 2^24, about 16.8 million, for floats, and 2^53 for doubles), the
single division by 100 leaves only a very small amount of
imprecision, because the result can't always be represented
exactly. If your method for displaying the result then rounds to
two digits, you can be sure that what is displayed matches what was
computed.
Mike
_______________________________________________
Cocoa-dev mailing list (email@hidden)