Given the goals of the class, I think the TTL approach is fine, and I say this as an "FPGA guy". FPGAs are a sea of logic and you can do all sorts of fun stuff with them, but there's only so much that's humanly possible to do in a semester.
Looking at your syllabus, your class is a mix of the logic design and "machine structures" courses I took in undergrad. (Plus, it's for CS majors. I'm all for CS majors having to face real hardware--letting them get away with writing code seems like a step back.) At this introductory level, where you're going over how assembly instructions are broken down, I see no real benefit to having students do things in code versus by hand. Doing HDL means learning the HDL, learning how to write synthesizable HDL, and learning the IDE. That's a lot of added conceptual complexity and re-abstraction, plus all the usual software toolchain headaches.
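(For what it's worth, the "in code" version of that exercise is trivial anyway. Here's a rough Python sketch of extracting the fields of a MIPS-style R-type instruction; the field widths are standard MIPS, chosen just for illustration, not taken from your syllabus:)

```python
# Decode a MIPS R-type instruction word into its fields.
# Field layout, MSB to LSB: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6)
def decode_rtype(word):
    return {
        "op":    (word >> 26) & 0x3F,
        "rs":    (word >> 21) & 0x1F,
        "rt":    (word >> 16) & 0x1F,
        "rd":    (word >> 11) & 0x1F,
        "shamt": (word >> 6)  & 0x1F,
        "funct": word         & 0x3F,
    }

# add $t0, $t1, $t2 encodes as 0x012A4020:
# op=0, rs=9 ($t1), rt=10 ($t2), rd=8 ($t0), shamt=0, funct=0x20
fields = decode_rtype(0x012A4020)
print(fields)
```

The pedagogical point is exactly that this tells the student nothing about why the fields sit where they do; doing it by hand does.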
Generally the point of a course that uses FPGAs is to practice creating logic that is useful--useful for talking to peripherals, serial comms, RAM, video generators, etc. This is valuable knowledge to have, but it seems very much out of the scope of your course. More advanced classes in computer architecture have students implement sophisticated CPUs in FPGAs, but again, this seems out of the scope of your course.
I would at the very least devote a lecture to FPGAs. Run through a few demos with a dev board and show them the workflow. Since you're at Mills, perhaps you could contact the folks at Berkeley who run CS150/152 and go see how they do things.
Something very like this is already in use. All transistors generate noise, from a number of effects: http://www.nikhef.nl/~jds/vlsi/noise/sansen.pdf
Intel has a hardware random number generator that uses this: http://electronicdesign.com/learning-resources/understanding-intels-ivy-bridge-random-number-generator
The core of Ivy Bridge's ES is an RS-NOR latch with the set and reset inputs wired together (red). When the R/S input is de-asserted, the latch becomes metastable, and its output eventually settles to 0 or 1, depending on thermal noise.
It's still quite hard to eliminate various side-channels and effects of temperature and manufacturing variation, but that article gives a well-sourced discussion of why it's believed to be a good random number source.
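(One consequence of those temperature and manufacturing effects is that the raw bit stream is biased, which is why designs like Intel's run it through a conditioner before use. As a toy illustration of the idea only, not Intel's actual conditioning logic, here's a von Neumann extractor in Python: it models the latch as a coin that settles to 1 with some hypothetical bias and turns that into unbiased bits.)

```python
import itertools
import random

def biased_source(p_one=0.7):
    # Stand-in for the raw entropy source: the latch settles to 1
    # with probability p_one (0.7 is a made-up bias, for illustration).
    while True:
        yield 1 if random.random() < p_one else 0

def von_neumann(bits):
    # Pair up consecutive raw bits. Unequal pairs yield one output bit;
    # equal pairs are discarded. For independent flips P(0,1) == P(1,0),
    # so the output is unbiased regardless of the input bias.
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

debiased = list(itertools.islice(von_neumann(biased_source()), 1000))
print(sum(debiased) / len(debiased))  # close to 0.5 despite the 0.7 input bias
```

Note this only removes bias, not correlation; real designs assume much less about the source, which is part of why validating them is hard.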
If he is of a certain age, he may be half-remembering early 3-rail NMOS devices such as 16 kilobit (2 kilobyte) DRAMs. These required 5V for I/O, 12V for the storage cells (to get adequately large signals on the storage capacitors), and a -5V back bias to turn the MOS transistors off by default.
On these devices it was important that the -5V supply was present before the other supply rails. All the FETs were NMOS, acting as low-side switches, with either resistors between drain and VDD, or NMOS FETs configured as passive current sources.
With these chips, if the 12V supply came up first, over 16000 transistors all turned on at once, consuming enough power to destroy the device.
The next generation (64 kilobit) incorporated a charge pump on-chip to generate the -5V supply automatically.
I never encountered any actual CPUs with this requirement but it's possible the Intel 8008 (no that's not a typo for 8080) did.
Nowadays it sounds like a quaint myth ... and with rare exceptions (possibly some high performance FPGAs) it is.