Electronics – How is the clock speed of devices determined?

clock, hardware, microprocessor

How does the hardware designer determine the suitable clock frequency for his device to work at? After that, he creates the timing diagram, which is then further used for interfacing.

All I need to know is the mechanism for determining the suitable clock and then creating the timing diagrams. My own logic is that the designer first creates the device, such as a microprocessor, then gives it different instructions and checks the results at different clock rates. But this is a hit-and-trial method; is there any specific algorithm to do that?

Best Answer

The frequency at which logic chips operate is determined by the technology used -- going from very old logic families like RTL (resistor-transistor logic), DTL (diode-transistor logic), and ECL (emitter-coupled logic), through TTL (transistor-transistor logic), which has several subtypes such as S (Schottky), LS (low-power Schottky), F (fast), and AS (advanced Schottky), to CMOS, with subtypes such as HC (high-speed CMOS) and HCT (TTL-compatible high-speed CMOS).

Each of these has a characteristic maximum clock rate: roughly 4 MHz for RTL, 25 MHz for TTL, 50 MHz for HC/HCT CMOS, 100 MHz for S and F TTL, 500 MHz for ECL, and up to several GHz for state-of-the-art CMOS designs in today's multi-core microprocessors.

So in each case, the logic designer first has to choose a logic family that is compatible with their requirements in terms of power, power-supply voltages, logic thresholds, and speed. For example, even though it is a very old family with a slower switching speed, 74LS00-series ICs are still widely available and used in new designs. LS TTL is good for 40 MHz and HCT CMOS can run at 50 MHz, and a lot of logic circuits don't require more than that. Take a look at the various circuits on this site and you will see a lot of LS and HCT chips used, with clock rates of only a few MHz. That's about the limit you can reliably prototype on a solderless breadboard.
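
To make that selection step concrete, here is a minimal sketch using the ballpark family speeds quoted above. The `pick_family` helper and its 2x margin factor are hypothetical illustrations, not a standard procedure:

```python
# Ballpark maximum clock rates per logic family, as quoted above (MHz).
FAMILY_MAX_CLOCK_MHZ = {
    "RTL": 4,
    "TTL": 25,
    "HC/HCT CMOS": 50,
    "S/F TTL": 100,
    "ECL": 500,
}

def pick_family(required_mhz, margin=2.0):
    """Pick the slowest family that still meets the clock with some margin.

    `margin` is a made-up safety factor; real designs derate for fan-out,
    capacitive loading, and temperature rather than using one fixed number.
    """
    fast_enough = [(f_max, name) for name, f_max in FAMILY_MAX_CLOCK_MHZ.items()
                   if f_max >= required_mhz * margin]
    if not fast_enough:
        raise ValueError("no listed family is fast enough")
    return min(fast_enough)[1]

print(pick_family(20))  # -> "HC/HCT CMOS" for a 20 MHz clock with 2x margin
```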

So you design a circuit first, determine what speed it needs to run at, and then choose the technology to use. Sometimes the speed of the circuit will be determined by the need to sample external data -- for example, sampling analog data using an ADC at a particular rate, say 1000 times per second. And then you may need to store data into a memory at a particular speed. So you look at what the fastest requirements are, and go from there. Often that will involve choosing a microcontroller to run everything. Microcontrollers can be clocked anywhere from 32.768 kHz (a watch crystal) or below, to run at very low power, all the way up to hundreds of MHz for 32-bit chips, although most smaller 8-bit microcontrollers use clocks in the tens of MHz or less. Larger microprocessors, such as those running Windows or Linux, typically have clock speeds in the 1 to 3 GHz range.
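
As a rough sketch of that "find the fastest requirement" step, assuming made-up cycle counts (real numbers would come from the datasheet of whichever microcontroller you are evaluating):

```python
# Estimate the minimum clock needed to keep up with the fastest requirement.
SAMPLES_PER_SECOND = 1000   # ADC sampled 1000 times per second (as above)
CYCLES_PER_SAMPLE = 200     # assumed cycles to read and process one sample
CYCLES_PER_STORE = 50       # assumed cycles to store one result in memory

cycles_per_second = SAMPLES_PER_SECOND * (CYCLES_PER_SAMPLE + CYCLES_PER_STORE)
min_clock_hz = cycles_per_second * 4    # assumed 4x headroom for everything else

print(f"Minimum clock: {min_clock_hz / 1e6:.1f} MHz")  # 1.0 MHz in this example
```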

The speed of each type of gate is given in its datasheet, either as a maximum switching frequency (as listed earlier) or as a propagation delay, typically in ns (nanoseconds) or ps (picoseconds). For a new IC being designed in-house, such as a microcontroller or memory, the company doing the design will have information for its logic designers regarding these parameters, based on the types of transistors and the process being used.

The following diagram shows the propagation delay from the rising edge of the input to the corresponding rising edge of the output, and vice versa, for a 74HCT00-series gate. \$t_{pd}\$ is given as 10 ns typical and 27 ns maximum for this gate.

[Timing diagram: propagation delay between input and output edges of a 74HCT00 gate]
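
Those delay numbers translate directly into a maximum clock rate. As a rough sketch, for a register-to-register path through a few gates, the clock period must cover the worst-case gate delays plus the capturing flip-flop's setup time; the path depth and setup figure below are hypothetical, while the 27 ns comes from the 74HCT00 figure above:

```python
# Rough worst-case clock estimate for a chain of gates between two
# flip-flops: the signal must ripple through every gate and still meet
# the flip-flop's setup time before the next clock edge arrives.
T_PD_MAX_NS = 27     # 74HCT00 worst-case propagation delay (from above)
T_SETUP_NS = 12      # hypothetical flip-flop setup time
GATES_IN_PATH = 3    # hypothetical depth of the logic path

period_ns = GATES_IN_PATH * T_PD_MAX_NS + T_SETUP_NS
print(f"Max clock ~ {1000 / period_ns:.1f} MHz")  # ~10.8 MHz for this path
```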

Propagation delays are important to keep in mind when designing, because if you have several signals that need to be read at a given time, you have to make sure they have all become stable. For example, when writing to a RAM chip, it is important for both the address lines and the data lines to be stable before a signal called the write strobe (-WR in the diagram below) is used to clock the data into the memory. This delay, from when stable data is first presented to the RAM to when it is clocked in, is called the "data setup time" and is shown in the following diagram:

[Timing diagram: RAM write cycle showing data setup time before the -WR write strobe]
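
A timing budget like this can be checked with simple arithmetic. Here is a minimal sketch; all of the nanosecond values are hypothetical stand-ins for datasheet numbers:

```python
# Minimal setup-time check for the RAM write described above.
T_ADDR_VALID_NS = 15   # when the address lines become stable (after the cycle starts)
T_DATA_VALID_NS = 30   # when the data lines become stable
T_WR_STROBE_NS = 60    # when -WR clocks the data into the RAM
T_SETUP_MIN_NS = 20    # RAM's required data setup time (datasheet minimum)

margin = T_WR_STROBE_NS - max(T_ADDR_VALID_NS, T_DATA_VALID_NS) - T_SETUP_MIN_NS
if margin < 0:
    print(f"Setup violation: short by {-margin} ns")
else:
    print(f"Setup met with {margin} ns of margin")  # 10 ns here
```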

You can observe the propagation delay of a gate using a multi-channel oscilloscope, with one or more channels on the input(s) and one or more on the output(s). For example, with a four-channel scope, one can observe the dynamic behavior of a half-adder with two inputs plus the sum and carry outputs.
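
As a toy model of what such a scope capture would show, here is a sketch that computes when each half-adder output settles after the inputs change; the two gate delays are assumed values, not from any particular datasheet:

```python
# When the inputs change at t = 0, each output settles one gate delay later,
# so the sum and carry traces on the scope move at different times.
T_XOR_NS = 14   # assumed XOR propagation delay (sum output)
T_AND_NS = 10   # assumed AND propagation delay (carry output)

def half_adder_timing(a, b, t_change_ns=0.0):
    """Return (value, time-valid) pairs for the sum and carry outputs."""
    s, c = a ^ b, a & b
    return (s, t_change_ns + T_XOR_NS), (c, t_change_ns + T_AND_NS)

(s, t_s), (c, t_c) = half_adder_timing(1, 1)
print(f"sum={s} valid at {t_s} ns, carry={c} valid at {t_c} ns")
```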

The designers that are going to use the chip simply use the propagation delay values as stated in the datasheet. But where do these numbers come from?

First of all, the designers of the chip will have built models that can be used to completely simulate the chip's internal workings before it is "taped out". Making a mistake at this stage can result in the chip having to go through a "re-spin", possibly costing millions of dollars. In building the model, they don't have to start from scratch each time, but can start with models from earlier designs based on the same technology. When first silicon arrives, the chip goes through both verification and characterization, in which the chip's various parameters are measured and compared with the model. All of this data is then used to generate the values for the datasheet. So it is a combination of theory and real-world measurements.

In general, as the density of transistors on ICs increases (Moore's Law), the speed increases. IC manufacturing processes have advanced from 10 µm in 1971, to 1 µm in 1985, 90 nm in 2004, and 14 nm in 2014.

The 8-bit 6502 microprocessor, used in the Apple ][ (1977), had 3,510 transistors, used an 8 µm process, and was clocked at just over 1 MHz. The 64-bit Apple A8X tri-core ARM microprocessor, used in the iPad Air 2, has 3 billion transistors (almost a million times more than the 6502), uses a 20 nm process, and is clocked at 1.5 GHz (almost 1,500 times faster than the 6502).
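
For what it's worth, the two ratios can be checked with back-of-the-envelope arithmetic (using 1.023 MHz, the Apple ]['s actual 6502 clock):

```python
# Back-of-the-envelope check of the two ratios quoted above.
transistor_ratio = 3_000_000_000 / 3510   # ~855,000x -- "almost a million times"
clock_ratio = 1.5e9 / 1.023e6             # ~1,466x   -- "almost 1,500 times"
print(f"{transistor_ratio:,.0f}x the transistors, {clock_ratio:,.0f}x the clock")
```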