How are timers adjusted in a micro-controller?

timer

In a 555 timer, the resistors and capacitors determine the frequency and the duration of the timer pulses. How is this accomplished in the case of a microcontroller timer?

Best Answer

Microcontrollers generally use crystals or internal oscillators to generate a reference clock. That clock frequency can (on higher-end chips) be multiplied up using a PLL. The resulting clock is used to run the system.

At its simplest, a timer is a counter plus a comparison value (the period). In software, the user sets the period with a register and turns on the clock to the counter. When the counter value reaches the period value, the comparison logic generates a pulse. This can trigger a CPU interrupt or (on some MCUs) go out through a pin.
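To make that concrete, here's a rough software model of the counter-plus-compare idea. The names counter, period, and timer_tick are purely illustrative (they're not registers on any particular MCU): the counter advances on every clock tick, and when it reaches the period value the compare logic resets it and fires.

/* Illustrative model of a hardware timer: counter + compare value. */
#include <stdint.h>
#include <stdio.h>

static uint16_t counter = 0;
static uint16_t period  = 4;   /* what the user would write to a period register */

static void timer_tick(void)   /* called once per clock cycle */
{
    counter++;
    if (counter >= period) {   /* comparison logic */
        counter = 0;           /* start the next period */
        printf("pulse / interrupt\n");  /* would raise an IRQ or toggle a pin */
    }
}

int main(void)
{
    for (int i = 0; i < 12; i++)   /* 12 clock cycles -> a pulse every 4th cycle */
        timer_tick();
    return 0;
}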

Since the timer logic is purely digital, the period (in cycles) has to be calculated based on the clock frequency.
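As an example of that calculation, assume a hypothetical 16 MHz timer clock and a desired 1 kHz pulse rate; TIMER_CLOCK_HZ and DESIRED_HZ below are placeholder names, not macros from any real device header.

#include <stdint.h>
#include <stdio.h>

#define TIMER_CLOCK_HZ 16000000UL  /* assumed reference clock */
#define DESIRED_HZ     1000UL      /* assumed desired pulse rate */

int main(void)
{
    /* 16 000 000 / 1 000 = 16 000 cycles per period; a counter that starts
       at 0 typically needs (cycles - 1) written to its period register. */
    uint32_t period_cycles = TIMER_CLOCK_HZ / DESIRED_HZ;
    printf("period register value: %lu\n", (unsigned long)(period_cycles - 1));
    return 0;
}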

Edit: It seems like you're asking more about the low-level implementation. A counter can be implemented using edge-triggered flip-flops. Set up a flip-flop such that its output inverts on the falling edge of every clock cycle. (In a D flip-flop, you can connect the output to the input through an inverter.) Take two of these, and connect the output of the first to the clock input of the second. Then, supply a clock to the first flop. They'll toggle like this:

RefClk -> Q1 -> Q2
0         0     0
1         0     0
0         1     0  <-- falling clock edge inverts Q1
1         1     0
0         0     1  <-- falling clock edge inverts Q1, falling Q1 edge inverts Q2
1         0     1
0         1     1  <-- falling clock edge inverts Q1
1         1     1
0         0     0  <-- falling clock edge inverts Q1, falling Q1 edge inverts Q2

By connecting more flip-flops together in this way, you get more bits in your counter. Each additional flop toggles at half the rate of the previous flop.
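If it helps, here's a small C simulation of that two-flop ripple counter (C standing in for what would really be hardware): each flop toggles on the falling edge of its clock input, and Q1 drives Q2's clock, so it reproduces the table above.

#include <stdio.h>

int main(void)
{
    int refclk = 0, q1 = 0, q2 = 0;

    printf("RefClk Q1 Q2\n");
    for (int half_cycle = 0; half_cycle < 9; half_cycle++) {
        printf("  %d     %d  %d\n", refclk, q1, q2);

        int next_clk = !refclk;
        if (refclk == 1 && next_clk == 0) {   /* falling edge of RefClk */
            int next_q1 = !q1;                /* first flop toggles */
            if (q1 == 1 && next_q1 == 0)      /* falling edge of Q1 */
                q2 = !q2;                     /* second flop toggles */
            q1 = next_q1;
        }
        refclk = next_clk;
    }
    return 0;
}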