Electronics – How was physical design verification accomplished before CAD?

integrated-circuit, microfabrication

I'm reading a history of EDA for ASICs because I'm curious about how older ICs were mass-produced.

The article explicitly mentions that in the early 1970s, SPICE was used to simulate circuit behavior, and layouts were hand-cut from rubylith, then checked against the schematic to ensure the ASIC design matched it correctly (layout versus schematic):

What we now know as physical design verification consisted of taking flatbed plots of the layouts, pinning them on the wall or laying them on a light table, and having people try to find errors. Hence, physical verification was one of the first businesses to be adopted in the emerging custom design space (see "The More Things Change, The More They Stay The Same,").

However, the article doesn't really go into how designers ensured that ASIC features, such as gates, would work correctly when the layout was shrunk to create a mask. Assuming a mask mistake was as comparatively expensive then as it is today, what techniques were used to minimize the risk of a defective mask due to improper physical dimensions of features such as a gate?

Considering that geometry is important for an ASIC, were predictions of field strength using Maxwell's equations typically used (analogous to a 2.5D field solver for PCBs today), or were simple transistor models based on length and width sufficient? Or were physical dimension tolerances large enough that they were not a significant source of error during manual layout?

To reduce the scope of the question, let's assume digital chips of the era, such as TTL parts or the 6502.

Best Answer

To give a device in an ASIC cell library the best chance of working as expected, we used to work in a system of increasing abstraction.

The SPICE models for the technology would be written based on MOS transistor theory and verified against a test chip which contained a few devices. The transistor theory always seemed to be fairly accurate, but we were not dealing with sub-micron technology with a raft of secondary effects at the time.
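The first-order MOS theory behind those early SPICE decks was the square-law (Shichman-Hodges, SPICE Level 1) model, in which drain current scales directly with the drawn W/L ratio. That is why getting the drawn geometry right mattered so much: the model only holds if the fabricated dimensions match. A minimal sketch, with purely illustrative parameter values rather than any real process data:

```python
def mos_drain_current(vgs, vds, vt=1.0, kp=2.0e-5, w=10e-6, l=5e-6):
    """Square-law (SPICE Level 1) drain current for an NMOS device.

    Current scales with the drawn W/L ratio, which is why matching
    drawn geometry to the model was the main layout concern.
    vt (threshold voltage) and kp (process transconductance) are
    illustrative values, not from a real process.
    """
    beta = kp * w / l          # transconductance scaled by geometry
    vov = vgs - vt             # overdrive voltage
    if vov <= 0:
        return 0.0             # cutoff: no channel formed
    if vds < vov:
        # triode (linear) region
        return beta * (vov * vds - vds * vds / 2.0)
    # saturation region
    return beta * vov * vov / 2.0
```

With only length, width, and a handful of process parameters in play, verifying a cell largely reduced to checking drawn dimensions against the model the cell was designed with.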

This SPICE model would be used to develop the rest of the cells in the library. These cells would be put on a test chip, which would be manufactured at the "corners" of the technology. This involves deliberately introducing manufacturing variation (doping, geometry, etc.) to give the fastest and slowest cases. The test chip would be evaluated to produce the digital simulator models which would be used for individual device design.
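The corner idea above can be sketched numerically: each cell gets a nominal delay, and the fast and slow corners bound it with derating factors measured from the corner test chips. The factor values here are hypothetical placeholders, not from any real characterisation:

```python
# Hypothetical corner derating factors (process, voltage, temperature).
# Real values came from measuring the corner-lot test chips.
CORNERS = {
    "fast":    {"process": 0.80, "voltage": 0.90, "temperature": 0.90},
    "typical": {"process": 1.00, "voltage": 1.00, "temperature": 1.00},
    "slow":    {"process": 1.30, "voltage": 1.15, "temperature": 1.10},
}

def corner_delays(nominal_ns):
    """Return {corner: delay in ns} for one cell's nominal delay."""
    return {
        name: nominal_ns * f["process"] * f["voltage"] * f["temperature"]
        for name, f in CORNERS.items()
    }
```

The digital simulator models then carried the slow-corner delay for worst-case timing and the fast-corner delay for hold/race checks.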

I don't go back to the days of rubylith, but I can remember doing manual checks of cell connectivity on coloured transparent prints (one per layer) before computerised connectivity verification. It took a few days for a 3k-gate ASIC.

In the earlier days, when the whole thing was done by cutting rubylith, getting and measuring the right dimensions was critical, but the devices were very simple. In the early days of CAD, the cells were placed in cookie-cutter form, and by that stage we were able to assume that the computer-generated geometries within a cell, and for connectivity, were correct.