CMOS – Soft Errors from SEUs/SETs in Early 8-bit Microprocessors

Tags: architecture, cmos, computer-architecture, microprocessor, vintage

Why is it that soft errors due to single-event upsets/transients never seemed to be a problem in early 8-bit microprocessors, like the MOS 6502 or the Zilog Z80? The microprocessors themselves were widely assumed to be deterministic, like TTL logic gates, had no sort of error detection/correction, and seemed to work with little issue from soft errors (there were plenty of worse problems) across a wide variety of applications.

When I say "never," I mean only microprocessors in consumer-level applications, excluding niche aerospace or other mission-critical uses, which I know have always relied on fault-tolerant and rad-hardened electronics. I am also not asking about memory issues, like the radioactive DRAM chips of the 1970s that pushed parity into wider use, but only about soft errors that may occur in the microprocessor chip itself, which seem to be an increasingly common problem today as feature size scales down and transistor density scales up. This is especially true for mission-critical applications, including servers and financial computers.
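(For context on the parity approach mentioned above, here is a minimal sketch of the even-parity detection scheme those memory subsystems used; the function names are illustrative, not from any particular system. Note that a single parity bit detects a single-bit flip but cannot correct it or locate it.)

```python
def parity_bit(byte: int) -> int:
    """Even parity: the stored bit makes the total count of 1s even."""
    return bin(byte & 0xFF).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """A flipped bit changes the count of 1s, so the parity no longer matches."""
    return parity_bit(byte) == stored_parity

word = 0b1011_0010                # data as written to memory
p = parity_bit(word)              # extra bit stored alongside it
corrupted = word ^ 0b0000_1000    # one bit flipped by, say, an alpha particle

assert check(word, p)             # clean word passes
assert not check(corrupted, p)    # single-bit upset is detected (not corrected)
```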

Was it because the transistors in those early microprocessors were microscopic instead of nanoscale? Was it because of the transistor density and processor complexity? Was it something else entirely?

It just seems that soft errors were a worry only in RAM with these computers in the majority of applications, like consoles or personal computers, and rarely in the microprocessor itself. There seems to be an elegance in the simplicity, determinism, and reliability of the early microprocessors.

Best Answer

Was it because the transistors in those early microprocessors were microscopic instead of nanoscale?

Mostly yes. Larger geometries and higher supply voltages are less susceptible: the critical charge a particle strike must deposit to flip a node scales roughly with node capacitance times supply voltage, and both were far larger then. Early processors ran on 5 V; modern processor cores are in the 1 V range.
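A back-of-envelope sketch of that scaling (Q_crit ≈ C_node · V_dd), where the capacitance values are assumed order-of-magnitude figures for illustration, not measurements of any specific process:

```python
# Back-of-envelope critical-charge comparison.
# Capacitances below are assumed order-of-magnitude values, not measured data.
def q_crit_fC(c_node_fF: float, vdd_V: float) -> float:
    """Q = C * V; femtofarads times volts gives femtocoulombs directly."""
    return c_node_fF * vdd_V

old = q_crit_fC(c_node_fF=500.0, vdd_V=5.0)  # assumed ~8 um-era node at 5 V
new = q_crit_fC(c_node_fF=1.0, vdd_V=1.0)    # assumed deep-submicron node at ~1 V

print(f"1970s-class node: ~{old:.0f} fC to upset")          # ~2500 fC
print(f"modern node:      ~{new:.0f} fC to upset")          # ~1 fC
print(f"ratio: ~{old / new:.0f}x more charge needed on the old part")
```

Under these assumptions a strike has to deposit thousands of times more charge to upset the old part, which is why cosmic rays and alpha particles that trouble modern logic simply didn't register.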

You also have to consider the consequence of an SEU. Most people were not doing serious work with the processors you mentioned; they were playing games.

For many applications, the probability of a software bug causing lost data is much higher than the probability of an SEU causing lost or corrupted data. As long as that holds, worrying about SEUs isn't the highest priority.
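To make that concrete, a toy comparison in FIT (failures per 10^9 device-hours); both rates below are assumptions chosen for illustration, not measured figures:

```python
# Toy FIT comparison: software-bug data loss vs SEU-caused data loss.
# Both rates are illustrative assumptions, not measured figures.
HOURS_PER_YEAR = 8766

seu_fit = 10         # assumed: SEU-caused corruption, failures per 1e9 hours
sw_bug_fit = 10_000  # assumed: software-bug data loss, failures per 1e9 hours

def mtbf_years(fit: float) -> float:
    """Mean time between failures implied by a FIT rate."""
    return 1e9 / fit / HOURS_PER_YEAR

print(f"SEU MTBF:          ~{mtbf_years(seu_fit):,.0f} years per unit")
print(f"software-bug MTBF: ~{mtbf_years(sw_bug_fit):,.0f} years per unit")
# With these assumptions the software failure rate dominates by 1000x,
# so hardening the CPU against SEUs buys almost nothing at the system level.
```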
