Re: Dice
- Subject: Re: Dice
- From: deivy petrescu <email@hidden>
- Date: Sun, 31 Oct 2004 10:53:11 -0500
On Oct 26, 2004, at 4:56 PM, Bill Briggs wrote:
Deivy, where the hell are you? You should be all over this like a
starving squirrel on a peanut. And you probably have the variance test
in your head. Don't make me go look it up.
Okay, I've got to go to dinner.
- web
Sorry Bill, but sometimes we are called away for bureaucratic duties...
Actually, you did an excellent job and there was no need for any further
intervention.
However, there is apparently some confusion about some of the concepts.
As people have pointed out, random means unpredictable. So, among all
the possible events, any one event can happen, even if the events have
different probabilities.
Statistics is used to study events that are random, or that at least
behave as if random (that is, that do not allow us to predict the
outcome). Thus, any statistics one computes on a distribution assume the
distribution is random. So variance, mean, and standard deviation make
sense only for a random distribution. If the distribution is not random,
then throw these in the garbage. What is the point anyway?
A random distribution does not necessarily have to be a normal
distribution. It can be very general, and then our ideas of dispersion
around the mean do not follow the familiar pattern (roughly 68% of the
outcomes falling within one standard deviation of the mean).
And, yes, there is a measure of dispersion for an equiprobable
distribution.
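For a fair die, for instance, the mean is (1 + 2 + 3 + 4 + 5 + 6) / 6 =
3.5, the variance is ((1 - 3.5)^2 + (2 - 3.5)^2 + ... + (6 - 3.5)^2) / 6
= 17.5 / 6, about 2.92, and so the standard deviation is about 1.71.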
So basically, one uses the mean and standard deviation to determine the
distribution of a random variable (or variables).
To determine the randomness, I would check the sequence of outcomes
itself: if I can find a pattern, then it is not random.
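Since this is the AppleScript list, here is a minimal sketch of the
first idea, assuming Standard Additions' "random number" as the
generator under test (the 600 rolls are just an arbitrary choice of
mine). For a fair die the sample mean should land near 3.5 and the
sample standard deviation near 1.71:

-- roll the die 600 times
set rolls to {}
repeat 600 times
    set end of rolls to (random number from 1 to 6)
end repeat

-- sample mean; near 3.5 for a fair die
set total to 0
repeat with r in rolls
    set total to total + r
end repeat
set theMean to total / (count rolls)

-- sample standard deviation; near 1.71 for a fair die
set sumSq to 0
repeat with r in rolls
    set sumSq to sumSq + (r - theMean) ^ 2
end repeat
set theStdDev to (sumSq / ((count rolls) - 1)) ^ 0.5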
Dennis Manasco wrote:
But, a random sequence does not have a normal distribution and has, by
definition, no useful mean except in the infinite case where the mean
is, by definition, the mid-point of the chosen range and the
population distribution is flat over the entire range.
This is not true. A random sequence can have a mean, of course. Assign
to the outcome of rolling a die the number of dots on the face that
comes up. Say your event is rolling the die three times. What is wrong
with computing the mean of the numbers you rolled?
The distribution of the events is not normal; I do not know what it is.
But if I take the mean of the 3 rolls as my random variable, then in
this case, for a large number of experiments, the mean will have an
(approximately) normal distribution; this is essentially the Central
Limit Theorem. The mean, not the rolls...
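One can see this with another sketch (again mine, and the 1000
experiments are arbitrary): tally the sum of 3 rolls over many
experiments. The counts pile up in a bell shape around a sum of 10.5,
that is, a mean of 3.5:

-- counts for the sums 3 through 18
set tally to {}
repeat 16 times
    set end of tally to 0
end repeat
repeat 1000 times
    set s to (random number from 1 to 6) + (random number from 1 to 6) + (random number from 1 to 6)
    set item (s - 2) of tally to (item (s - 2) of tally) + 1
end repeat
tally -- roughly bell-shaped, peaking near a sum of 10 or 11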
Think of flipping a perfectly balanced coin. This is a random act:
past and future flips of the coin have no influence on the current
flip.
Exactly, this is randomness! One can *not* predict the outcome, even by
looking at what happened before.
It is possible to flip this perfect coin a million times and always
get "heads." The statistical probability that you will do so is
incredibly small, but that doesn't make the coin an invalid random
number generator.
Well, it is possible for a person to walk through a solid wall and come
out on the other side. But I'd advise against betting on that!
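For scale: even a mere 50 heads in a row from a fair coin has
probability (1/2)^50, about 1 in 10^15; a million heads in a row has
probability (1/2)^1000000, a number whose decimal expansion begins with
roughly 300,000 zeros.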
Bill Briggs wrote:
A lot of people have trouble understanding what random means, and the
consequences. We teach a graduate course here that is called
Probability, Random Variables, and Stochastic Processes, which is the
foundation course for the Statistical Theory of Communication, and
some students have difficulty with the concept "random". They make the
same error that is often made when the discussion comes up on this
list. They apply an inappropriate significance to the distribution of
the outcomes to the events. And they also think that a big (to them)
number of events should give a result close to that which would be
seen from an infinite number of events. This last bit is a failure to
understand the difference between finite and infinite orders.
I did not quite understand what you meant here. First, infinity in math
is kind of weird. In many cases 10 or fewer is good enough for infinity;
sometimes it (infinity) has to be really, really large.
However, "a big (to them) number of events should give a result close
to that which would be seen from an infinite number of events" might be
true. As far as I understand it, this is the "Law of Large Numbers":
the proportion of heads in n fair coin flips, for example, gets
arbitrarily close to 1/2 as n grows. That is, if by "result" one means
the value of a random variable associated with the event.
Finally, on checking the outcomes in the case of rolling dice: the
random variables associated with these rolls (the number of 1s, the
number of 2s, the number of 3s, the number of 4s, the number of 5s, and
the number of 6s) can be studied. The fact that these counts are as
close as they are measures the assumption of equal probability, not of
randomness.
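To make the tallying concrete, here is one more sketch (the 6000 rolls
are an arbitrary choice, giving an expected count of 1000 per face). For
a fair die the six counts should come out roughly equal; how far they
may wander from 1000 is exactly the kind of thing a variance
(chi-square style) test quantifies:

set faceCounts to {0, 0, 0, 0, 0, 0}
repeat 6000 times
    set f to random number from 1 to 6
    set item f of faceCounts to (item f of faceCounts) + 1
end repeat
faceCounts -- six roughly equal counts if the die (generator) is fair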
An unfair coin (say 70%/30%) does not have equal probabilities, but its
flips will form a random sequence nevertheless.
Regards
Greetings
deivy petrescu
http://www.dicas.com/
References:
- Re: Dice (From: "John C. Welch" <email@hidden>)
- Re: Dice (From: Bill Briggs <email@hidden>)