Re: (was Int Function) mod bug
- Subject: Re: (was Int Function) mod bug
- From: Doug McNutt <email@hidden>
- Date: Mon, 19 Aug 2002 10:52:09 -0600
At 14:20 +0100 8/19/02, has wrote:
> Yeah. Well, you know that amazing, powerful, handsome hunk o' metal and
> plastic sitting on your desk there? It can't even add a couple simple
> numbers together without getting the answer wrong.
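Fair enough, as far as it goes. Two lines of AppleScript show the complaint in action; the example is mine, but any IEEE-double implementation of class real should answer the same way:

  set x to 0.1 + 0.2
  x = 0.3
  --> false: neither 0.1 nor 0.2 has an exact binary representation,
  --  so their sum misses 0.3 by a tiny amount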
It's all about cardinals and ordinals and has nothing to do with ordaining Cardinals. We once attempted to teach "new math" in elementary school but got nowhere because teachers couldn't understand it. We also have erudite folks who argue about the start of the millennium because they don't understand the difference.
Cardinal numbers are used to measure something, and they have a zero, as on the left end of a meter stick. No measurement is ever perfect, regardless of the numeric base in which it is recorded. If you're dealing with a measurement you should be using cardinals, which are floats in the computer world.
Ordinals are used for counting, be it times through a loop or dollars or pesos. Ordinals represent countable sets. In natural English they are first, second, and the like. In a manner of speaking there is a zero meaning "none", but the Romans got along quite well without it. C programmers start counting items in a list at zero because it allows efficient use of the bits in a word. It's really just a change of assignment between bit pattern and number (the pattern 000 refers to the first item).**
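AppleScript, for what it's worth, counts its items the ordinal way, starting at one, and even lets you say so in plain English (a throwaway example of mine):

  set fruits to {"apple", "banana", "cherry"}
  item 1 of fruits
  --> "apple"
  first item of fruits
  --> "apple", the same element addressed ordinally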
Analog computers use cardinals and are limited in precision by noise. Digital computers attempt, in the case of floats, to assign one of the 2^64 countable arrangements of 64 bits to the infinity of cardinals. It's impossible, but it's no different from attempting the same thing with, say, 15 decimal digits.
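A one-line consequence, in AppleScript or any other 64-bit float: an increment smaller than the local spacing of those 2^64 arrangements simply vanishes.

  (1.0 + 1.0E-16) = 1.0
  --> true: the addend is below the precision of a 64-bit real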
Apple had it right when, in the days of the Apple II, they introduced type comp which, at +/-(2^63), was "larger than the U. S. national debt expressed in Argentine pesos".* It would be an option for those cases where folks want to count higher than the limit of integer. Switching them to floats is a bit like the natives' counting scheme in Gamow's "One, Two, Three, Infinity".
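Gamow's natives show up as soon as you try it. Push a count up past 2^53 or so and hand it to a real, and consecutive counts stop being distinguishable (my numbers, not anything from the original thread):

  set bigCount to 2 ^ 60
  (bigCount + 1) = bigCount
  --> true: a 64-bit real keeps only 53 bits of precision, so it cannot
  --  count by ones up here; a 64-bit comp could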
The best way to fix AppleScript (and Excel, where folks complain about the same thing) is to educate users and allow them to choose the proper type for the task based on need, be it counting or measuring. In days of old, before floating-point hardware, considerations of speed suggested integers for some calculations, but that concern has gone away.
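AppleScript already has most of the vocabulary for making that choice; it just doesn't push anyone toward it. The same pair of numbers, counted and then measured:

  10 div 3
  --> 3, an integer: three whole times through the loop
  10 mod 3
  --> 1, an integer: one left over
  10 / 3
  --> a real, only an approximation of ten thirds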
Dim Y as Real
Dim J as Integer
are as much natural English as
global Z
and would push users toward what they should have learned in grade school. It would then be easy to understand that objects of class real come with error bars because they represent measurements of "real" things. Rounding a real would always return a real, and likewise for integer. It would be nice if class real could be implemented with a property called error, which would always go with the object and could be used during comparison for equality.
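Until the language grows such a property, a handler can fake the error bars. This is only a sketch, and the names are mine, but it is what comparison for equality could look like if a real carried its error along with it:

  on approxEqual(a, b, err)
      return (a - b) < err and (b - a) < err
  end approxEqual

  approxEqual(0.1 + 0.2, 0.3, 1.0E-9)
  --> true: equal to within the stated error
  (0.1 + 0.2) = 0.3
  --> false: today's exact comparison of two reals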
* Apple Numerics Manual, Addison-Wesley, 1986, ISBN 0-201-17741-2, page 13
** Just for completeness, there is another class, the rationals, which are the numbers representable as a ratio of two integers. But I've gone on too long.
--
--> If you are presented a number as a percentage, and you do not clearly understand the numerator and the denominator involved, you are surely being lied to. <--