When Intel / AMD chose their nanometer processes, why were the specific numbers 5, 7, 10, 14, 22, 32, 45, etc. chosen?

Tags: cpu, engineering, hardware

When looking at roadmaps for the CPU manufacturing process, such as this one:

https://wccftech.com/intel-expects-launch-10nm-2017/

the process nodes have historically been:

  1. 10 µm – 1971
  2. 6 µm – 1974
  3. 3 µm – 1977
  4. 1.5 µm – 1981
  5. 1 µm – 1984
  6. 800 nm – 1987
  7. 600 nm – 1990
  8. 350 nm – 1993
  9. 250 nm – 1996
  10. 180 nm – 1999
  11. 130 nm – 2001
  12. 90 nm – 2003
  13. 65 nm – 2005
  14. 45 nm – 2007
  15. 32 nm – 2009
  16. 22 nm – 2012
  17. 14 nm – 2014
  18. 10 nm – 2016
  19. 7 nm – 2018
  20. 5 nm – 2020
  21. 3 nm – ~2022
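
One pattern I did notice while staring at the list: from roughly 350 nm onward, each node is about 0.7× the previous one, and 0.7 ≈ 1/√2, so each step nominally halves the feature area and doubles transistor density. Here is the quick Python check I ran over the list above (my own back-of-the-envelope, nothing official):

```python
import math

# Node names from the roadmap above, in nanometers.
nodes_nm = [10000, 6000, 3000, 1500, 1000, 800, 600, 350,
            250, 180, 130, 90, 65, 45, 32, 22, 14, 10, 7, 5, 3]

print(f"1/sqrt(2) = {1 / math.sqrt(2):.3f}")
for prev, curr in zip(nodes_nm, nodes_nm[1:]):
    shrink = curr / prev      # linear shrink factor per generation
    area = shrink ** 2        # implied area scaling (~0.5 means density doubles)
    print(f"{prev:>6} nm -> {curr:>5} nm: linear x{shrink:.2f}, area x{area:.2f}")
```

Most steps since the mid-1990s land within a few percent of 0.7, which is part of what prompted this question.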

Why were these specific numbers chosen? I have looked around, and there are deviations, such as:

Samsung Electronics began mass production of 64 Gb NAND flash memory chips using a 20 nm process in 2010.

TSMC first began 16 nm FinFET chip production in 2013.

And many others.

Yet as far as Intel and AMD are concerned, they are both in lockstep. Is there something about these numbers that lends itself to the manufacturing process, or is the selection completely arbitrary?

Best Answer

There are a number of different reasons for this.

The numbers aren't chosen

Modern CPU manufacturing processes, at least for top-of-the-line mainstream CPUs such as Intel Xeon and Core, AMD Epyc and Ryzen, etc., are at the very edge of what is currently physically possible and economically viable.

Since the laws of physics and the laws of economics are the same for all players, it is to be expected that they all end up using the same technology. The only way this could be different is if one company managed a totally game-changing technological breakthrough without any other company noticing. Given the highly competitive nature of the industry, the amount of research and development invested by all companies, and the comparatively small community where everybody knows what the others are up to, this is highly unlikely.

So, in other words: Intel and AMD don't choose the process node size, they just use the best thing that is currently available, and that happens to be similar for both companies.

The numbers aren't real

The numbers are marketing terms chosen by an industry think tank, the consortium behind the International Technology Roadmap for Semiconductors (ITRS, more on that below). They don't accurately capture every detail of the various processes, and there may very well be differences between processes that have more impact than the nominal node size.

For example, Intel is currently using the improved second generation of its 10nm process. Yet, both the first generation and the improved second generation of this process are lumped together under the same name "10nm" in the roadmap in your question.

Which brings us to our next two points. The first is a throwback to point #1, the second to this very second point:

The numbers aren't chosen by Intel and AMD

As mentioned, the numbers are marketing terms chosen by an industry think tank. They aren't actually chosen by Intel and AMD.

The numbers are predictions

There is another way in which the numbers aren't real: not only are they marketing terms that don't fully capture all the details, they are also predictions.

Now, as you probably know, predictions are hard. Especially predictions of the future. Case in point: the roadmap you show in your question has a 5nm process node for 2020, but actually, the current top-of-the-line offerings are 10nm from Intel and 7nm from AMD, Apple, and Nvidia. IBM's current top-of-the-line is the POWER9, launched in 2017 on a 14nm process. The POWER10 will probably be available in 2021 and manufactured on either a 10nm or a 7nm process.

As you can see, the prediction is actually doubly wrong: it predicts that Intel and AMD will be in lockstep, and it predicts that the process node size will be 5nm, yet Intel and AMD are not in lockstep and neither of the two has hit 5nm yet.

The numbers are kind of a self-fulfilling prophecy

No company wants to be caught failing to hit the predicted process improvements. So, they work very hard to "hit the mark", but not harder, since these improvements are very expensive. (Moore's Second Law predicts that as chips get exponentially cheaper (for the same performance) or exponentially more performant (for the same price), chip fabrication gets exponentially more expensive.)
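
As a rough illustration of those opposing exponentials, here is a toy model. It assumes the four-year fab-cost doubling time commonly quoted for Rock's Law (another name for Moore's Second Law) and the two-year transistor doubling of Moore's First Law; all outputs are relative factors, not real-world figures:

```python
# Toy model: Moore's First Law (transistor density doubles every ~2 years)
# vs. Moore's Second Law / Rock's Law (fab cost doubles every ~4 years).
# All values are relative factors, not real dollar or transistor figures.
DENSITY_DOUBLING_YEARS = 2
FAB_COST_DOUBLING_YEARS = 4

for years in range(0, 21, 4):
    density = 2 ** (years / DENSITY_DOUBLING_YEARS)    # relative density
    cost_per_transistor = 1 / density                  # cheaper per transistor
    fab_cost = 2 ** (years / FAB_COST_DOUBLING_YEARS)  # pricier fabs
    print(f"t+{years:2d}y: density x{density:6.0f}, "
          f"cost/transistor x{cost_per_transistor:.4f}, fab cost x{fab_cost:4.1f}")
```

Per-transistor cost collapses while the price of the plant needed to make those transistors explodes, which is exactly why nobody aims past the mark.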

This is similar to what happened with Moore's Laws: Gordon Moore originally wrote them down as historical observations and projected their trend lines 10 years into the future without actually having solid statistical grounds to do so. 10 years later, he revised them (he had originally projected a doubling every year, which he revised to a doubling every two years). However, since then, Moore's Laws have morphed from historical observations to rough predictions to market expectations, where a manufacturer that doesn't hit the projected improvements will have to justify that failure to the market, the shareholders, and the stakeholders.
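
The gap between the 1965 and the 1975 version is enormous once compounded. A small sketch, using the Intel 4004's roughly 2,300 transistors (1971) as an illustrative baseline:

```python
# Compounding Moore's 1965 projection (doubling every year) against his
# 1975 revision (doubling every two years), from an illustrative baseline:
# the Intel 4004 (1971) with ~2,300 transistors.
BASE_YEAR, BASE_TRANSISTORS = 1971, 2300

def projected(year, doubling_years):
    """Projected transistor count at `year` for a given doubling period."""
    return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / doubling_years)

for year in (1981, 1991, 2001):
    print(f"{year}: yearly doubling -> {projected(year, 1):,.0f}, "
          f"two-year doubling -> {projected(year, 2):,.0f}")
```

Real chips of that era, such as the 80286 with roughly 134,000 transistors in 1982, track the two-year curve far more closely than the one-year curve.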

Also note that despite the ramifications of missing Moore's Law, actual development dropped below the curve predicted by Moore's Law around 2012 and seems to be flattening out.

The ITRS had a similar effect.

Note, however, that the industry think tank which published the ITRS has not updated it since 2017. They have created a new set of predictions, the International Roadmap for Devices and Systems (IRDS), which is based more on "pull" created by new applications than on "push" created by process improvements.