It's simple: we test them before we sell them and throw out the bad ones.
There are lots of ways to do this; different people do different things, and often use a combination of:
- Some tests run at speed, to make sure the chips are fast enough.
- Other tests use a mode that turns some or all of the flip-flops in the chip into giant serial shift registers ("scan chains"). We clock known data into those chains, run the chip for one clock, then scan the new results back out and check that they match our predicted results. Automatic test tools generate a minimum set of "scan vectors" that will test every random gate or transistor on the chip; other vectors do special tests of RAM blocks. (A minimal sketch of the idea follows this list.)
- Others test that the external wires are all bonded correctly.
- We make sure the chip isn't pulling an unhealthy amount of current.
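Here's that minimal sketch of the scan-chain idea in Python. The "chip" here is a made-up four-bit toy, not a real netlist; real ATPG tools generate vectors against the actual gate-level design:

```python
# Toy scan-chain test. In scan mode the flip-flops form one serial shift
# register; in functional mode they feed the combinational logic.
# The "logic" below is an invented stand-in for a real gate network.

def logic(state):
    """Made-up combinational block: next state from current state."""
    a, b, c, d = state
    return [a ^ b, b & c, c | d, (a & d) ^ b]

def scan_test(stimulus, expected, stuck_at_fault=False):
    ffs = list(stimulus)          # 1. scan in: shift the stimulus in
    captured = logic(ffs)         # 2. one functional clock: capture outputs
    if stuck_at_fault:            #    model a manufacturing defect:
        captured[1] = 0           #    node 1 stuck at 0
    return captured == expected   # 3. scan out and compare to prediction

stimulus = [1, 1, 1, 0]
expected = logic(stimulus)        # golden simulation predicts the response
print(scan_test(stimulus, expected))                       # True: good die
print(scan_test(stimulus, expected, stuck_at_fault=True))  # False: reject
```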
Testing time costs money, so we sometimes do some simple testing to catch obviously dead chips before they are packaged, discard those, and then do more thorough testing after the packaging is done.
It wasn't even true back then. Well, maybe that's why Dawkins is a biologist and not an engineer. :-)
Today's processors pack billions of transistors on a die a few square centimeters in area and less than a millimeter high. Hundreds of them would fit in a skull, for maybe \$10^{12}\$ transistors.
Even if you look at discrete transistors, far more than just a few hundred would fit. I guess SOT-23 already existed in 1989, and then you would get \$10^5\$ to \$10^6\$ of them in a skull.
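As a rough back-of-envelope check (the volumes here are my own assumptions, not measured values):

```python
# Back-of-envelope transistor counts; every volume is a loose assumption.
skull_cm3 = 1400                    # roughly the volume of a human brain

package_cm3 = 5                     # packaged processor, ~4 x 4 x 0.3 cm
packages = skull_cm3 / package_cm3  # ~280 of them
print(packages * 4e9)               # ~1e12 transistors at a few 1e9 each

sot23_cm3 = 0.3 * 0.15 * 0.12       # SOT-23 outline, ~3 x 1.5 x 1.2 mm
print(skull_cm3 / sot23_cm3)        # ~2.6e5 discrete transistors
```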
edit (2011-06-13)
I own a copy of The Selfish Gene, and was curious what Dawkins had in mind, so I looked it up. Here's more from that paragraph:
> The basic unit of biological computers, the nerve cell or neurone, is really nothing like a transistor in its internal workings. Certainly the code in which neurones communicate with each other seems to be a little bit like the pulse codes of digital computers, but the individual neurone is a much more sophisticated data-processing unit than the transistor. Instead of just three connections with other components (sic), a single neurone may have tens of thousands. The neurone is slower than the transistor, but it has gone much further in the direction of miniaturization, a trend which has dominated the electronics industry over the past two decades. (The Selfish Gene, p. 49)
Somebody must have told Dawkins that a transistor has 3 pins :-).
Anyway, he doesn't only compare the number of neurons (or neurones, in British English?) to the number of transistors, but also points out that the neuron is a lot more complex, partly because of its thousands of connections. My guesstimate is that you'd need \$10^5\$ to \$10^6\$ transistors to emulate one such neuron (maybe as an analog rather than a digital circuit?). Which means that a skull stuffed with GPUs still wouldn't come close to the processing power of a brain.
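One way to rationalize that guesstimate (the per-synapse transistor budget is purely my assumption):

```python
# Rationalizing the 1e5-1e6 figure; per-synapse costs are assumptions.
synapses = 1e4                        # "tens of thousands" of connections
for t_per_synapse in (10, 100):       # transistors to store a weight and
    print(synapses * t_per_synapse)   # multiply-accumulate: 1e5 ... 1e6
```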
And then there's the problem of all these connections. They're the real power, not just the large number of neurons. We don't have the technology to build such complex systems, and IMO won't for a long time. And then I'm not even talking about the dynamic nature of these connections: they can rearrange themselves, making new connections and breaking others.
To put all these AI suckers in perspective, take a look at our vision system. In a second we can process a stereoscopic image of \$10^8\$ pixels, create a virtual 3D model of the scene, and identify objects in detail. Move half a meter to the right and you add lots of new data. There's still a long way to go...
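Taking those numbers at face value, even the raw input rate is substantial (the bit depth is my assumption), and that's before any 3D reconstruction or object recognition happens:

```python
# Raw data rate of the visual front end; bit depth is an assumption.
pixels_per_second = 1e8      # stereoscopic image processed each second
bits_per_pixel = 8           # assumed intensity resolution
print(pixels_per_second * bits_per_pixel / 8e6, "MB/s")  # ~100 MB/s raw
```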
Best Answer
Something very like this is already in use. All transistors generate noise, from a number of effects: http://www.nikhef.nl/~jds/vlsi/noise/sansen.pdf
Intel have a hardware random number generator that uses this: http://electronicdesign.com/learning-resources/understanding-intels-ivy-bridge-random-number-generator
It's still quite hard to eliminate various side channels and the effects of temperature and manufacturing variation, but that article gives a well-sourced discussion of why it's believed to be a good random number source.
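The raw bits from a noise source are biased and correlated, so they always get post-processed ("conditioned") before use. Intel's design uses a cryptographic (AES-based) conditioner; as a much simpler illustration of the idea, and not of Intel's actual algorithm, here's the classic von Neumann extractor:

```python
import random

def von_neumann(bits):
    """Von Neumann extractor: debias a stream of biased coin flips.
    Works on non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 -> discard.
    Assumes independent samples, which real noise only approximates."""
    it = iter(bits)
    return [a for a, b in zip(it, it) if a != b]

# Simulated noisy hardware source, heavily biased toward 1.
raw = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
white = von_neumann(raw)
print(sum(raw) / len(raw))      # ~0.80: biased input
print(sum(white) / len(white))  # ~0.50: debiased output
```

The price of that simplicity is throughput: most input pairs are discarded, which is one reason real hardware generators prefer cryptographic conditioning.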