Re: rand() and srand() broken?
- Subject: Re: rand() and srand() broken?
- From: Derek Gaston <email@hidden>
- Date: Wed, 25 Nov 2009 11:53:47 -0700
Thank you for the great explanation... that really does help a lot.
I think what I'm going to do is just change my algorithm so that I
throw away the first entry (or maybe the first couple of entries) by
calling rand() directly after calling srand(). The second number will
still reliably be the same for a given seed... and it is better
spread out through the range.
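For concreteness, here is a minimal C sketch of that workaround (the
helper name seeded_rand and the choice to discard exactly one value
are mine, purely for illustration):

    #include <stdlib.h>

    /* Hypothetical helper illustrating the workaround: seed, throw
     * away the first entry, and use the second.  The second value is
     * still reproducible for a given seed, but is better spread out
     * through the range than the first. */
    unsigned int seeded_rand(unsigned int seed)
    {
        srand(seed);
        (void)rand();                /* discard the first entry */
        return (unsigned int)rand(); /* second entry, stable per seed */
    }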
Thanks again for all the prompt replies from everyone!
Derek
On Nov 25, 2009, at 11:31 AM, Terry Lambert wrote:
The relevant standards are FIPS 140-2, FIPS 186-2, NIST SP 800-90,
ANSI X9.17-1985, ANSI X9.31-1998, and ANSI X9.62-2005.
However, his expectations and conclusions in this case are both bogus.
On UNIX systems, rand() is a simple PRNG, not a CSPRNG, and has
historically used the 48 bit linear congruential PRNG algorithm
derived from BSD UNIX. The BSD algorithm is what glibc, which
houses Linux's implementation, still uses. In 10.4, Mac OS X went to
use of /dev/random, which uses a kernel implementation of a CSPRNG
based on the Yarrow algorithm. This was done so that people who were
incorrectly using rand() for things like nonce generation, as if it
were something more than a simple PRNG, would not introduce
weaknesses into code that was intended to be cryptographically
secure through their misunderstanding of the UNIX libraries. This
was not a general fix, but it provided a degree of hardening against
first-order attacks.
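To make the distinction concrete, here is the textbook linear
congruential generator given as the sample implementation in the C
standard; the constants are the standard's, not necessarily what any
particular libc (or Mac OS X) actually ships:

    static unsigned long next = 1;

    /* Sample rand()/srand() from the C standard: each output is pure
     * feedback from the previous state, so the entire sequence is
     * determined by the seed.  Fast and repeatable, but nothing like
     * a CSPRNG. */
    int lcg_rand(void)
    {
        next = next * 1103515245 + 12345;
        return (int)((next / 65536) % 32768);  /* 15-bit result */
    }

    void lcg_srand(unsigned int seed)
    {
        next = seed;
    }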
In any case, compliance with cryptographic standards is neither a
goal nor desirable for rand() and its family of functions.
By reseeding with monotonically increasing values each time, he is
resetting the sequence start value to a value from which Yarrow's
output is directly derived. Further, his seeds tend to consist
mostly of zero bits, affecting only the 7 least significant bits of
the seed space, and only a few of those at a time.
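For illustration, a reconstruction of the pattern under discussion;
the original code was not quoted here, so the loop bound of 100 and
the division by RAND_MAX are assumptions:

    #include <stdio.h>
    #include <stdlib.h>

    /* Reseed before every draw with seeds 0..99, which differ only
     * in their 7 low bits.  On an implementation whose first output
     * is derived fairly directly from the seed, these values will
     * visibly track the seed instead of spreading across [0, 1]. */
    int main(void)
    {
        for (unsigned int i = 0; i < 100; i++) {
            srand(i);
            printf("%f\n", rand() / (double)RAND_MAX);
        }
        return 0;
    }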
The conclusion of non-randomness drawn from a sample size of 100 is
also bogus and unjustifiable: a sample that small is not
statistically significant enough to support any mathematically valid
conclusions.
To make a valid argument here, he would minimally need to apply the
test suite provided in NIST Special Publication 800-22, which
requires on the order of hundreds of millions of sequence runs,
before drawing any conclusions.
PS: Minimally, unless the application is the intentional
construction of a logically and mathematically flawed strawman
argument, the reseeding must go. The initial value for a given input
seed says absolutely nothing about the randomness of the subsequent
sequence following from that seed; almost all PRNGs are based on
feedback from previous results, and by reseeding before every call,
the algorithm, whatever it is, is being denied its previous result.
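A sketch of the corrected usage, assuming a one-time seed from the
clock is acceptable for the application (time(NULL) is a stand-in
for whatever single seed is appropriate):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Seed exactly once per run, then let the generator feed back on
     * its own previous results for every subsequent draw. */
    int main(void)
    {
        srand((unsigned int)time(NULL));
        for (int i = 0; i < 100; i++)
            printf("%f\n", rand() / (double)RAND_MAX);
        return 0;
    }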
-- Terry
On Nov 25, 2009, at 9:47 AM, Jason Foreman <email@hidden> wrote:
On Nov 25, 2009, at 11:26 AM, Derek Gaston wrote:
This piece of code should generate 100 random numbers between 0
and 1... I am expecting them to be fairly spaced out as well...
What basis do you have for this expectation? Is it spelled out in
any specification or standard somewhere?
You're seeding the RNG with a monotonically increasing seed, and
getting what looks like monotonically increasing starting values
for your random sequences. The fact that you get what you expect
on Linux seems to be purely coincidental.
This is indicative of what we actually do in one of our
applications (I know it doesn't usually make much sense to reseed
all the time... but you're just going to have to trust me that
it's necessary in our case ;-)
I think you're going to have to explain why you feel reseeding is
necessary. You probably shouldn't be doing it.
Jason