Electronic – Why would an Intel 8080 chip be destroyed if +12 V is connected before −5 V

8080, integrated-circuit, nmos, power-supply, semiconductors

The Intel 8080 is a classic microprocessor released in 1974. It was fabricated in an enhancement-mode NMOS process and has several characteristics peculiar to that process, such as the need for a two-phase clock and three power rails: −5 V, +5 V, and +12 V.

In the description of the power pins on Wikipedia, it says:

Pin 2: GND (VSS) – Ground

Pin 11: −5 V (VBB) – The −5 V power supply. This must be the first power source connected and the last disconnected, otherwise the processor will be damaged.

Pin 20: +5 V (VCC) – The +5 V power supply.

Pin 28: +12 V (VDD) – The +12 V power supply. This must be the last connected and first disconnected power source.

I cross-referenced the original datasheet, but the information seems a bit contradictory.

Absolute Maximum:

VCC (+5 V), VDD (+12 V) and VSS (GND) with respect to VBB (−5 V): −0.3 V to +20 V.

Even if VBB sits at 0 V while it is unconnected, VDD with respect to VBB would only be +12 V (and +17 V once VBB reaches −5 V), so the absolute maximum shouldn't be exceeded either way. Is the original Wikipedia claim that an Intel 8080 chip will be destroyed if +12 V is connected before −5 V actually correct?

If it is correct, what is the exact failure mechanism? Why would the chip be destroyed if +12 V is applied first without −5 V? I suspect it has something to do with the enhancement-mode NMOS process, but I don't know much about how semiconductors work.

Could you explain how the power supply is handled internally in the Intel 8080? Did the same problem exist in other chips of that era built on a similar process?

Also, if I need to design a power supply for the Intel 8080, say using three voltage regulators, how do I prevent damage to the chip if the +12 V rail ramps up before the −5 V rail?

Best Answer

In the process used for the 8080, +12 V provided the primary supply for the logic, +5 V supplied the I/O pin logic (which was intended to be TTL compatible and therefore limited to 0 to 5 V signals), and −5 V was connected to the substrate. The latter voltage ensured that all of the active devices on the IC remained isolated, by maintaining a reverse bias on the PN junctions that separated them from the common silicon substrate.

If any I/O signal went below the substrate voltage, it could drive the isolating junction into an SCR-like latch-up condition, and the resulting continuous high current could destroy the device. The required sequence for turning the three supply voltages on and off was intended to minimize this risk.

As a previous answer correctly pointed out, in practice system designers played fast and loose with this requirement. The most important thing was to power the rest of the system logic from the same +5 V supply that fed the CPU, so that the voltages applied to CPU input pins would never be higher than the CPU's +5 V supply or lower than its −5 V supply, and to ensure that the +12 V supply was equal to or greater than the +5 V supply at all times. A Schottky power diode was sometimes bridged between those two rails to maintain that relationship, e.g. during power-down.
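To make that rail relationship concrete, here is a minimal sketch (in Python, using an assumed 0.4 V Schottky forward drop and arbitrary example rail voltages, none of which come from the datasheet) of the constraint the clamp diode enforces: the +12 V rail should never sag more than one diode drop below the +5 V rail.

```python
# Sketch of the rail relationship the Schottky clamp diode maintains.
# V_F and the example rail voltages are assumptions, not datasheet values.
V_F_SCHOTTKY = 0.4  # assumed forward drop of the clamp diode, in volts

def rails_ok(v_cc, v_dd, v_bb):
    """Check the rail relationships described above."""
    dd_not_below_cc = v_dd >= v_cc - V_F_SCHOTTKY  # +12 rail never sags a full diode drop below +5
    bb_not_positive = v_bb <= 0.0                  # substrate bias present, or at worst 0 V
    return dd_not_below_cc and bb_not_positive

# During power-down the +12 V bulk cap may discharge faster than the +5 V cap:
print(rails_ok(v_cc=4.8, v_dd=4.5, v_bb=-4.9))  # True: the diode keeps +12 within a drop of +5
print(rails_ok(v_cc=4.8, v_dd=2.0, v_bb=-4.9))  # False: +12 has collapsed, the clamp is violated
```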

Typically, the electrolytic filter capacitor values for the three supplies were chosen so that −5 V and +12 V ramped up fairly quickly and +5 V lagged slightly behind.
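As a rough back-of-the-envelope illustration of that sizing, using the approximation t ≈ C·ΔV/I with made-up capacitor and charging-current values (not real design figures), one might estimate the ramp times like this:

```python
# Back-of-the-envelope ramp-time estimate per rail: t ≈ C * ΔV / I.
# Capacitances and charging currents are illustrative guesses, not design values.
rails = {
    # name:   (target V, bulk cap in F, available charging current in A)
    "-5 V":  (-5.0,  100e-6, 0.05),
    "+12 V": (12.0,  220e-6, 0.20),
    "+5 V":  ( 5.0, 4700e-6, 0.50),
}

for name, (v_target, cap, current) in rails.items():
    t_ramp = cap * abs(v_target) / current  # seconds to reach the target voltage
    print(f"{name:>6}: ~{t_ramp * 1e3:.0f} ms to reach {v_target:+.0f} V")
```

With these example numbers the −5 V and +12 V rails settle in roughly 10–15 ms while the heavily bypassed +5 V rail takes several times longer, matching the ordering described above.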

MOS process refinements allowed later IC designs to run from +5 V alone, and if a negative substrate bias was still needed it was generated on-chip by a small charge-pump circuit (e.g. the 2516 EPROM vs. the 2508, or the 8085 CPU vs. the 8080).
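For the curious, here is an idealized, hypothetical model of how such an inverting charge pump drags a node toward a negative voltage from a single +5 V supply; the capacitor values are arbitrary and switch losses and load current are ignored:

```python
# Idealized model of an inverting (negative) charge pump of the sort later
# NMOS parts used on-chip to bias the substrate from a single +5 V supply.
# Capacitor values are arbitrary; switch losses and load current are ignored.
C_FLY = 10e-12    # flying capacitor
C_RES = 100e-12   # reservoir capacitor on the substrate node
V_IN = 5.0        # the single positive supply

v_sub = 0.0  # substrate node starts at 0 V
for cycle in range(1, 11):
    # Phase 1: the flying cap charges to V_IN across the supply.
    # Phase 2: it is flipped onto the substrate node, sharing charge with the
    # reservoir cap and pulling the node toward -V_IN.
    v_sub = (C_RES * v_sub - C_FLY * V_IN) / (C_RES + C_FLY)
    print(f"cycle {cycle:2d}: substrate ≈ {v_sub:.2f} V")
# With no load, v_sub converges toward -V_IN = -5 V.
```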