Recently, I heard that AMD has released the new Vishera series of FX processors, which run at 5 GHz. My question is whether there is any upper bound on the clock rate of a processor. That is, can we keep increasing the clock rate forever? What electrical problems will we face at higher clock rates?
Electronic – Maximum clock frequency of microprocessors
Related Solutions
Most microprocessors (along with countless other devices) undergo a process called binning: all similar products are manufactured at once and, depending on their measured performance, are placed into "bins" (groups) of similarly performing parts, then packaged and sold accordingly.
In the case of Intel processors (AMD is similar), processors within the same line are generally manufactured together and binned according to their stable clock frequency. You can tell when a processor is part of the same "line" by looking at the core codename or, if that is not specific enough, at the features of the core (as mentioned by embedded.kyle, the i5 doesn't have hyperthreading but the i7 does, even though both in question are "Sandy Bridge").
Sometimes a higher-end processor that fails validation can still be sold as a lesser model. An example I know first-hand is that the M0 steppings of the old Northwood-core Pentium 4's (130nm process) were actually failed Gallatin-core processors (Gallatin being the core of the P4 "Extreme Edition"). Similarly, a lot of people had/have luck unlocking extra cores, caches and shader units on various CPUs and GPUs. For example, it is quite common to be able to buy a mid-to-high range video card (take, for example, the AMD Radeon 6850) and flash it with the BIOS of the higher-level card (the Radeon 6870, in this case) to gain the extra features that card has (some extra shader cores). This, too, comes down to binning during the manufacturing process.
This sort of thing drives overclockers to take good note of the stepping, place of origin, and batch number of their processors. When word gets out that certain batches of processors are overclocking better than their not-as-potent brethren (same CPU, mind you, just made at a different time or place), they become more in demand.
If you're interested in more, definitely search for "CPU binning," or read up at some forums. I'm a member at www.overclockers.com, and the forum there is quite welcoming and has a wealth of past and current knowledge (along with an abundance of fantastic members).
Am I correct that the faster processor draws more power (and thus dissipates more heat) under a computational load?
Not necessarily. There are two major components of power dissipation: static power (leakage, burned whenever the chip is powered) and dynamic/switching power (burned by transistors toggling with the clock). While running the same chip at a higher frequency will result in more dynamic power dissipation, a chip's static power dissipation may be high enough that, combined with the dynamic power at the faster clock rate, the total fails to meet the bin requirements for the faster rating.
If so, is the power under computational load approximately proportional to the rated/clocked frequency? In other words, inasmuch as the one processor is clocked 8 percent faster than the other, does it run about 8 percent hotter under load? Another way to ask the same question is to ask: does each processor process about the same quantity of data per unit of energy? or, if battery powered, can each accomplish about as much before its battery dies?
For a given chip running identical calculations, the dynamic portion of the power consumption will be proportional to the clock frequency. The total power dissipation of the processor will increase a bit less than 8% for an 8% increase in clock frequency due to the static power dissipation.
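The "a bit less than 8%" claim follows directly from splitting total power into a static part and a frequency-proportional dynamic part. Here is a minimal sketch of that arithmetic; the static and per-GHz power figures are illustrative assumptions, not measurements of any real chip.

```python
# Dynamic power scales linearly with clock frequency; static (leakage)
# power does not. Both figures below are illustrative assumptions.

P_STATIC_W = 15.0        # leakage power, independent of clock (assumed)
P_DYN_PER_GHZ_W = 12.0   # dynamic power per GHz of clock (assumed)

def total_power(f_ghz):
    """Total dissipation: fixed leakage plus frequency-proportional part."""
    return P_STATIC_W + P_DYN_PER_GHZ_W * f_ghz

p_base = total_power(3.00)   # baseline clock
p_fast = total_power(3.24)   # 8% faster clock

increase = (p_fast - p_base) / p_base
print(f"{p_base:.1f} W -> {p_fast:.1f} W (+{100 * increase:.1f}%)")
```

The dynamic component rises by exactly 8%, but because the static component is unchanged, the total rises by less than 8%.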
When not under load, do the two processors idle equally cool, or are there practical or theoretical factors that make the one idle cooler than the other?
If you had two identical chips idling, the one with the lower clock frequency would dissipate less power. When the chips are idling, the static power becomes a much larger portion of the active power dissipation, so any differences there would be more noticeable.
Even if the processor's price were not the deciding factor, might one prefer the slower processor merely for the sake of cooler operation and extended battery life?
Possibly, but again, you have a lot less of a guarantee that this would be the case. If you bought chips with different rated TDPs, then you could safely make this argument. Otherwise, you're at the mercy of the binning algorithm and the consistency of the manufacturer's process. Also, note that we're talking about power dissipation, not energy consumption. A faster processor may be able to complete a computationally heavy task faster, and switch to a low power idle mode sooner than a slower processor.
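The power-versus-energy distinction above can be illustrated with a "race to idle" calculation: the faster chip burns more watts while working, but finishes the task sooner and spends the rest of the interval at low idle power. All figures here (cycle count, load and idle power) are illustrative assumptions.

```python
# "Race to idle": total energy over a fixed 10-second window in which
# a task of 30e9 cycles must be completed. All figures are assumed.

def energy_joules(f_ghz, p_load_w, p_idle_w, task_cycles=30e9, window_s=10.0):
    busy_s = task_cycles / (f_ghz * 1e9)   # time spent computing
    idle_s = window_s - busy_s             # remaining time spent idling
    return p_load_w * busy_s + p_idle_w * idle_s

slow = energy_joules(f_ghz=3.00, p_load_w=51.0, p_idle_w=5.0)
fast = energy_joules(f_ghz=3.24, p_load_w=53.9, p_idle_w=5.0)

print(f"slow chip: {slow:.0f} J, fast chip: {fast:.0f} J")
```

Under these assumptions the faster chip consumes less total energy despite its higher load power, because it reaches the low-power idle state sooner.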
Would the answers differ for embedded processors?
Yes. The static power dissipation is most significant on the bleeding edge processes that Intel, TSMC, IBM, and Global Foundries use. Embedded processors are often optimized for low static power dissipation and use larger processes where static power dissipation is a much smaller portion of power dissipation. The variation at those larger process nodes is much less, so microcontrollers are much less susceptible to variation in power and frequency performance.
Best Answer
EDIT: This question led to a long discussion. It is crucial to understand that the fact that CPU clock speeds haven't been increasing in recent years is related to commercial considerations, not directly to any engineering or physical problem. You can check this link for the highest frequencies achieved with existing CPUs through overclocking and supercooling.
From the first PCs until the early 2000s, the headline parameter of each CPU was its frequency (maximum frequency of operation). Manufacturers tried to come up with new technologies that would allow higher frequencies, and chip designers worked very hard to develop micro-architectures that would let the chip run at a higher frequency.
However, as chips became smaller and faster, the problem of heat dissipation arose: when the heat generated by the switching transistors couldn't all be dissipated, the chips were damaged. Engineers started attaching heat sinks to processors, then fans, but eventually they concluded that increasing CPU frequency was no longer practical in terms of added performance per added cost.
In other words: CPU frequencies can be raised, but this makes CPUs (in fact, not the CPUs but the cooling mechanisms) too expensive. Consumers won't buy expensive computers if there is an alternative.
In general, current process technologies allow operation at very high frequencies (well above the ~3 GHz Intel usually targets; even AMD's 5 GHz is not the ceiling). However, the cost of the cooling required at these high frequencies is too high.
I'd like to emphasize this: there is no physical effect that prevents development of 8-10GHz processors with current technology. However, you'll have to provide a very expensive cooling mechanism in order to prevent such a processor from burning out.
Moreover, processors usually work in "bursts": they have very long idle periods followed by short but very intensive (and therefore power-hungry) periods. Engineers could build a 10 GHz processor that runs at its highest frequency only for short periods (requiring no additional cooling because the periods are short), but this approach was also rejected as not worthwhile (high development investment for questionable gains). However, with future micro-architectural improvements, it may be reconsidered. It is my belief that this 5 GHz AMD processor does not run constantly at 5 GHz, but raises its internal clock to the maximum only during short bursts.
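The burst idea comes down to duty cycle: a chip that boosts to a high frequency only a small fraction of the time has a modest average power, even though its peak power would be unsustainable if held continuously. A minimal sketch, with both power figures as illustrative assumptions:

```python
# Average power of a hypothetical burst-mode CPU that boosts to 5 GHz
# only part of the time. Both power figures are assumed, not measured.

P_BOOST_W = 140.0   # power during a 5 GHz burst (assumed)
P_IDLE_W = 8.0      # power while idling (assumed)

def average_power(duty_cycle):
    """duty_cycle = fraction of time spent in the 5 GHz burst state."""
    return duty_cycle * P_BOOST_W + (1 - duty_cycle) * P_IDLE_W

for duty in (0.05, 0.25, 1.0):
    print(f"{100 * duty:3.0f}% burst time -> {average_power(duty):6.1f} W average")
```

At a 5% duty cycle the average power stays close to idle, which is why short bursts need far less cooling than sustained operation at the same peak frequency.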
PHYSICAL LIMIT: There is a physical limit to the maximum achievable clock rate for each process technology (it depends on the technology's minimum feature size); however, I think the last Intel processor that was really pushed to this limit was the Pentium 4. This means that today, as technology advances and the minimum feature size shrinks (still in accordance with Moore's law), the main benefit of this reduction is that you can fit more logic into the same area; engineers no longer push CPU frequency to the limits of the technology.
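The per-process limit mentioned above ultimately comes from the critical path: the clock period cannot be shorter than the sum of gate delays on the longest register-to-register logic path. A rough sketch of that relationship, with both delay figures as assumptions for a hypothetical process:

```python
# Maximum clock rate from critical-path delay: the clock period must
# cover the slowest register-to-register path. Delays are assumed.

GATE_DELAY_PS = 5.0      # delay per gate for a hypothetical process (assumed)
FLOP_OVERHEAD_PS = 20.0  # register setup + clock-to-q overhead (assumed)

def f_max_ghz(gates_in_critical_path):
    """Highest clock whose period covers the critical path, in GHz."""
    period_ps = FLOP_OVERHEAD_PS + gates_in_critical_path * GATE_DELAY_PS
    return 1000.0 / period_ps   # period in ps -> frequency in GHz

for depth in (10, 20, 40):
    print(f"{depth:2d} gate levels per stage -> f_max ~ {f_max_ghz(depth):.1f} GHz")
```

Fewer gate levels per pipeline stage allow a higher clock, which is the direction the deeply pipelined Pentium 4 pushed in.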
BTW, the above limit can't keep rising forever. Read about Moore's law and the problems associated with its continued application.