Starting from http://www.befria.nu/elias/pi/binpi.html to get the value of pi in binary (binary digits are easier to pack into bytes than decimal digits), and then running the result through ent, I get the following analysis of the distribution of the bytes:
Entropy = 7.954093 bits per byte.
Optimum compression would reduce the size of this 4096 byte file by 0 percent.
Chi square distribution for 4096 samples is 253.00, and randomly would exceed this value 52.36 percent of the times.
Arithmetic mean value of data bytes is 126.6736 (127.5 = random).
Monte Carlo value for Pi is 3.120234604 (error 0.68 percent).
Serial correlation coefficient is 0.028195 (totally uncorrelated = 0.0).
So yes, using pi for random data would give you fairly random data... with the caveat that it is well-known random data: anyone else can reproduce exactly the same sequence.
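For reference, two of ent's measures are easy to approximate yourself. The entropy figure is the Shannon entropy of the byte histogram, and ent's documentation describes the Monte Carlo test as treating successive 24-bit values as (x, y) coordinates in a square and counting the fraction that fall inside the inscribed circle. A rough re-implementation (a sketch, not ent's exact code) in Python:

```python
import math
import os

def entropy_bits_per_byte(data):
    # Shannon entropy of the byte-value distribution, in bits per byte.
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def monte_carlo_pi(data):
    # Each 6-byte group gives a 24-bit x and 24-bit y coordinate;
    # the fraction of points inside the quarter circle estimates pi/4.
    r = 2 ** 24 - 1
    inside = total = 0
    for i in range(0, len(data) - 5, 6):
        x = int.from_bytes(data[i:i + 3], 'big')
        y = int.from_bytes(data[i + 3:i + 6], 'big')
        total += 1
        if x * x + y * y <= r * r:
            inside += 1
    return 4 * inside / total

data = os.urandom(4096)                 # stand-in for the pi bytes
print(entropy_bits_per_byte(data))      # close to 8 for random bytes
print(monte_carlo_pi(data))             # close to pi
```

Running this on the same 4096 bytes of pi should land close to the ent figures above.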
From a comment above...
Depending on what you are doing, but I think you can use the decimals
of the square root of any prime number as a random number generator.
These should at least have evenly distributed digits. – Paxinum
So, I computed the square root of 2 in binary to run the same set of tests. Using Wolfram's iteration, I wrote a simple Perl script:
#!/usr/bin/perl
use strict;
use Math::BigInt;

# Generate the binary digits of sqrt(2), one bit per iteration.
# The successive values of $v match OEIS A095804.
my $u = Math::BigInt->new("2");
my $v = Math::BigInt->new("0");
my $i = 0;
while (1) {
    my ($unew, $vnew);
    if ($u->bcmp($v) != 1) {               # $u <= $v: next bit is 0
        $unew = $u->bmul(4);
        $vnew = $v->bmul(2);
    } else {                               # $u > $v: next bit is 1
        $unew = ($u->bsub($v)->bsub(1))->bmul(4);
        $vnew = ($v->badd(2))->bmul(2);
    }
    ($u, $v) = ($unew, $vnew);
    #print $i, " ", $v, "\n";
    last if $i++ > 10000;
}

open(BITS, ">", "bits.txt") or die "cannot write bits.txt: $!";
print BITS $v->as_bin();
close(BITS);
The first 10 values from this matched A095804, so I was confident I had the right sequence. The value v_n, when written in binary with the binary point placed after the first digit, gives an approximation of the square root of 2.
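The same binary digits can be produced more directly with integer arithmetic: the first few binary digits of sqrt(n) are the binary digits of floor(sqrt(n * 4^nbits)). A minimal sketch using Python's math.isqrt:

```python
from math import isqrt

def sqrt_bits(n, nbits):
    # Binary digits of sqrt(n) for n >= 1: scaling n by 4**nbits shifts
    # the square root left by nbits bits, so the integer square root is
    # sqrt(n) with the binary point dropped after the leading digit.
    return bin(isqrt(n << (2 * nbits)))[2:]

print(sqrt_bits(2, 19))  # 10110101000001001111, i.e. 1.0110101000001001111...
```

This gives the same digit stream as the Perl iteration, so it is a quick cross-check of the output in bits.txt.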
Using ent against this binary data produces:
Entropy = 7.840501 bits per byte.
Optimum compression would reduce the size of this 1251 byte file by 1 percent.
Chi square distribution for 1251 samples is 277.84, and randomly would exceed this value 15.58 percent of the times.
Arithmetic mean value of data bytes is 130.0616 (127.5 = random).
Monte Carlo value for Pi is 3.153846154 (error 0.39 percent).
Serial correlation coefficient is -0.045767 (totally uncorrelated = 0.0).
The key to a truly random number is a truly random data source. Sometimes this is timing information, such as delays between keyboard or network events. Where high-quality random data is required, a physical source such as radioactive decay may be used. SGI implemented lavarand, which drew the seed for a random number generator from a digitized image of a lava lamp. This was good enough to be considered a true random number generator.
Outside of truly random data, one can work with a deterministic but chaotic system, such as the Mersenne Twister. In these situations, one seeds the generator with a number and then runs it forward to get pseudo-random numbers. These are sufficient for games and the like, where it isn't critical if someone can determine the seed (and hence the next number in the sequence).
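Python's random module happens to use the Mersenne Twister (MT19937), so it makes a convenient demonstration of the seeding behavior described above: re-seeding with the same value replays exactly the same "random" sequence.

```python
import random

random.seed(1234)
first = [random.randrange(256) for _ in range(8)]

random.seed(1234)   # re-seed with the same value
second = [random.randrange(256) for _ in range(8)]

print(first == second)  # True: same seed, same sequence
```

This determinism is exactly why such generators are fine for games but unsuitable on their own for cryptography.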
Consider reading patent 5,732,138 and http://www.lavarnd.org/ for the implementation details of such a generator.
Best Answer
The main issue with your approach: you ideally want a PRNG that produces new pseudo-random bits from an internal state, rather than replaying a fixed, publicly known sequence. Here's the one I use:
This is based on George Marsaglia's XORShift algorithm. It produces good pseudorandom numbers and is very fast (typically even faster than a Linear Congruential Generator since the xors and shifts are cheaper than multiplies and divides on most hardware).
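The answer's own code block is not reproduced here; as a sketch, one common formulation is Marsaglia's 64-bit xorshift with the shift triple (13, 7, 17). Which exact variant the answerer used is an assumption; the seed constant below is the example value from Marsaglia's paper.

```python
MASK64 = (1 << 64) - 1

def xorshift64(state):
    # One step of Marsaglia's 64-bit xorshift: three shift/xor rounds.
    # Python ints are unbounded, so left shifts are masked back to 64 bits.
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

s = 88172645463325252   # must be non-zero: 0 is a fixed point of the map
for _ in range(5):
    s = xorshift64(s)
    print(s & 0xFF)      # e.g. take the low byte as pseudo-random output
```

The whole state is just one 64-bit word, which is why the generator is so cheap: three shifts and three xors per output.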
Having said that, I wouldn't expect people to memorise this kind of algorithm for an interview unless you are specifically applying for a role as a crypto programmer!