# Electronics – Why do microcontrollers need a clock?

Tags: clock, microchip, microcontroller, microprocessor

Why do instructions need to be processed at set time intervals (i.e. with the use of a clock)? Can't they be executed sequentially – immediately after the previous instruction has completed?

An analogy for the necessity of clocks in microcontrollers would prove particularly useful.

An illustrative example or two may help here. Take a look at the following hypothetical circuit:

[Schematic, created using CircuitLab: inputs A and B feed an AND gate; the AND output and B drive the two inputs of an XOR gate.]

Suppose that, to start, both A and B are high (1). The output of the AND is therefore 1, and since both inputs to the XOR are 1, the output is 0.

Logic elements don't change their state instantly - there's a small but significant propagation delay as the change in input is handled. Suppose B goes low (0). The XOR sees the new state on its second input instantly, but the first input still sees the 'stale' 1 from the AND gate. As a result, the output briefly goes high - but only until the signal propagates through the AND gate, making both inputs to the XOR low, and causing the output to go low again.

The glitch is not a desired part of the operation of the circuit, but glitches like that will happen any time there's a difference in propagation speed through different parts of the circuit, due to the amount of logic, or even just the length of the wires.
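As a rough sketch of that timing behaviour (a hypothetical Python model, not part of the original circuit tool), we can step through time with the AND gate's output lagging its inputs by one step while the XOR sees B with no delay:

```python
# Discrete-time sketch of the glitch: the AND gate output is one time step
# "stale", while the XOR sees input B immediately.

def simulate(b_values):
    """Return the XOR output at each time step for the given B waveform."""
    a = 1                       # A is held high throughout
    and_out = a & b_values[0]   # assume the circuit starts settled
    outputs = []
    for b in b_values:
        xor_out = and_out ^ b   # XOR combines the stale AND output with fresh B
        outputs.append(xor_out)
        and_out = a & b         # the AND gate catches up one step later
    return outputs

# B stays high, then falls and stays low:
print(simulate([1, 1, 0, 0, 0]))  # [0, 0, 1, 0, 0] - the middle 1 is the glitch
```

The brief 1 at the step where B falls is exactly the glitch described above: it lasts only until the new value of B has propagated through the AND gate.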

One really easy way to handle that is to put an edge-triggered flipflop on the output of your combinatorial logic, like this:

[Schematic: the same AND/XOR logic, with an edge-triggered flip-flop on its output, clocked by CLK.]

Now, any glitches that happen are hidden from the rest of the circuit by the flipflop, which only updates its state when the clock goes from 0 to 1. As long as the interval between rising clock edges is long enough for signals to propagate all the way through the combinatorial logic chains, the results will be reliably deterministic, and glitch-free.
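Continuing the same hypothetical Python sketch, a D flip-flop that samples its input only on the rising clock edge never passes on a glitch that occurs between edges:

```python
# Sketch of an edge-triggered D flip-flop: the registered output q only
# updates when the clock goes from 0 to 1.

class DFlipFlop:
    def __init__(self):
        self.q = 0           # registered (visible) output
        self.prev_clk = 0

    def tick(self, clk, d):
        if clk == 1 and self.prev_clk == 0:  # rising edge only
            self.q = d
        self.prev_clk = clk
        return self.q

ff = DFlipFlop()
# Combinatorial output per step, including the glitch at step 2,
# which has settled back to 0 well before the next rising edge:
comb = [0, 0, 1, 0, 0, 0, 0, 0]
clk  = [1, 0, 0, 0, 1, 0, 0, 0]   # rising edges at steps 0 and 4
print([ff.tick(c, d) for c, d in zip(clk, comb)])
# → [0, 0, 0, 0, 0, 0, 0, 0] - the step-2 glitch never reaches q
```

Because the glitch has died out by the time the next rising edge arrives, the rest of the circuit only ever sees the settled, deterministic value.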