Re: (was Int Function) mod bug
- Subject: Re: (was Int Function) mod bug
- From: bill fancher <email@hidden>
- Date: Tue, 20 Aug 2002 14:31:12 -0700
On Monday, August 19, 2002, at 09:52 AM, Doug McNutt wrote:
At 14:20 +0100 8/19/02, has wrote:
Yeah. Well, you know that amazing, powerful, handsome hunk o' metal and
plastic sitting on your desk there? It can't even add a couple simple
numbers together without getting the answer wrong.
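For readers who haven't watched this happen, here is a minimal AppleScript sketch of the kind of "wrong answer" being complained about (it assumes AppleScript's real is the usual 64-bit binary double; the displayed digits can vary by version):

    set x to 0.1 + 0.2
    x = 0.3 --> false: x holds the nearest representable binary fraction, which is not exactly 0.3

None of 0.1, 0.2, or 0.3 has an exact binary representation, so the sum and the literal land on slightly different doubles.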
It's all about cardinals and ordinals and has nothing to do with
ordaining Cardinals. We once attempted to teach "new math" in elementary
school but got nowhere because the teachers couldn't understand it. We also
have erudite folks who argue about the start of the millennium because they
don't understand the difference.
Cardinal numbers are used to measure something, and they have a zero, as on
the left end of a meter stick.
Cardinals are not reals. Cardinals are typically treated as equivalence
classes: two sets are cardinally equivalent if there is a 1-1
correspondence between their elements. Up to aleph 0, there's a 1-1
correspondence between cardinals and ordinals: The way integers are
normally constructed in set theory, the integer n has exactly n elements,
and is a member of the equivalence class representing the n-th cardinal.
Ordinals are like markers or ticks on a tally sheet: one way of counting.
Cardinals correspond to our idea of "the number of things in a set".
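For concreteness, the construction referred to here (the usual von Neumann one) builds each natural number as the set of all smaller ones, so the integer n really does have exactly n elements:

\[
0 = \varnothing,\quad 1 = \{0\},\quad 2 = \{0,1\},\quad 3 = \{0,1,2\},\quad \ldots,\quad n+1 = n \cup \{n\}
\]

So 3 = {0, 1, 2} has three elements, and it sits in the cardinality class of every three-element set.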
No measurement is ever perfect regardless of the numeric base in which
it is recorded. If you're dealing with a measurement, you should be using
cardinals, which are floats in the computer world.
This is incorrect. As mentioned, cardinals are not reals. Real numbers
"are floats in the computer world."
Ordinals are used for counting, be it times through a loop or dollars or
pesos. Ordinals represent countable sets.
This is wrong too, but I won't go into the details, since it would take us
too far afield.
In natural English they are first, second, and the like. In a manner of
speaking there is a zero, meaning "none", but the Romans got along
quite well without it. C programmers start counting items in a list at
zero because it allows efficient usage of the bits in a word. It's really
just a change of assignment between bit pattern and number. (000 refers
to the first item.) **
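As an aside, AppleScript itself sticks with the ordinary-English convention rather than the C one: list items are counted from 1, and there is no item 0. A quick illustration (any list would do):

    set xs to {"a", "b", "c"}
    item 1 of xs --> "a" (the first item; asking for item 0 raises an error)
    item -1 of xs --> "c" (negative indices count back from the end)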
Analog computers use cardinals and are limited in precision by noise.
Digital computers attempt, in the case of floats, to assign one of 2^64
countable arrangements of 64 bits to the infinity of cardinals. It's
impossible, but it's not any different from attempting the same thing with,
say, 15 decimal digits.
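A concrete way to see the finite-bits problem from inside AppleScript (whose real class is, in practice, a 64-bit IEEE double): above 2^53 the representable doubles are spaced two apart, so adding 1 can change nothing at all.

    set big to 2 ^ 53 -- exponentiation always yields a real
    (big + 1) = big --> true: big + 1 has no representation of its own and rounds back to big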
Apple had it right when, in the days of the Apple II, they introduced type
comp which, at +/-(2^63), was "larger than the U. S. national debt
expressed in Argentine pesos".* It would be an option for those cases
where folks want to count higher than the limit of integer. Switching
them to floats is a bit like the natives' counting scheme in Gamow's "One,
Two, Three, Infinity".
64-bit integers are better for counting big numbers than comps. Comps have
the same approximate nature that floats do. They don't address the issue
discussed here.
The best way to fix AppleScript (and Excel, where folks complain about the
same thing) is to educate users and allow them to choose the proper type
for the task based on need, be it counting or measuring. In days of old,
before floating point hardware, considerations of speed suggested
integers for some calculations, but that has gone away.
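As an illustrative aside: AppleScript already offers both styles without any declarations. The div and mod operators stay in the world of whole-number counting, while / measures and always hands back a real, even when the quotient is whole.

    7 div 2 --> 3 (integer division)
    7 mod 2 --> 1 (remainder)
    7 / 2 --> 3.5
    class of (7 div 2) --> integer
    class of (7 / 2) --> real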
Changing AS from untyped to typed will NOT fix rounding errors (and would
be a MAJOR architectural change). You end up with the same result; you
just have to write more code, such as the following bits, to get there.
Dim Y as Real
Dim J as Integer
are as much natural English as
global Z
and would push users toward what they should have learned in grade school.
Though rounding errors have nothing to do with the cardinal/ordinal
distinction you discussed. I won't comment on "what people should have
learned in grade school."
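For readers who don't use AppleScript daily, a note on where it stands today: variables are never declared with a type, but every value already carries a class at run time. A small sketch:

    set y to 3
    class of y --> integer
    set y to 3.0
    class of y --> real
    -- y itself was never declared; each value it holds has its own class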
It would then be easy to understand that objects of class real come with
error bars because they represent measurements of "real" things. Rounding
a real would always return a real, and likewise for integer.
I neither want nor expect reals when I round, thank you.
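For reference, the Standard Additions round command already behaves that way: it takes a number and returns an integer, with a choice of rounding directions. One wrinkle worth knowing is that the default "to nearest" rounds halves to the even neighbour rather than always up (recent versions also accept "rounding as taught in school" for the schoolbook behaviour):

    round 2.5 --> 2 (default "to nearest": halves go to the even neighbour)
    round 3.5 --> 4
    round 2.5 rounding as taught in school --> 3
    class of (round 2.5) --> integer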
It would be nice if class real could be implemented with a property
called error which would always go with the object and could be used
during comparison for equality.
And would the "error" itself have an error property so we could see how
far off IT is? And so on ad infinitum? Comparing infinitely many error
terms would be time consuming.
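In practice the usual compromise needs no new class at all: supply an explicit tolerance at the point of comparison. A minimal sketch, with a made-up handler name and a tolerance chosen purely for illustration:

    on closeEnough(a, b, tol)
        -- treat a and b as equal when they differ by less than tol
        set d to a - b
        if d < 0 then set d to -d -- absolute value by hand
        return d < tol
    end closeEnough

    closeEnough(0.1 + 0.2, 0.3, 1.0E-9) --> true

The burden of saying how close is close enough falls on the caller, which is where the knowledge about the measurement actually lives.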
* Apple Numerics Manual, Addison-Wesley, 1986, ISBN 0-201-17741-2, page 13.
** Just for completeness, there is another class, the rationals, which are
the numbers representable as a ratio of two integers. But I've gone on
too long.
Why snub complex numbers? Don't they, too, deserve at least a mention?
--
--> If you are presented a number as a percentage, and you do not clearly
understand the numerator and the denominator involved, you are surely
being lied to. <--
Is this supposed to be "wisdom" of some sort? I've seldom run across a
sillier claim. We might append "Or, you are uninformed, ill-educated, or
intellectually inadequate to grasp the concepts being discussed." But that
wouldn't help make us a nation of Yahoos, which, I suspect, is the desired
result of those who promulgate such nonsense.
--
bill
"In the welter of conflicting fanaticisms, one of the few unifying forces
is scientific truthfulness..." Bertrand Russell