Electrical – Clocks, Processors and Timers on an MCU

8051, clock, microcontroller, microprocessor, timer

So I am going through the excellent book Patterns for Time-Triggered Embedded Systems and have a question about calculating values for hardware timers precisely. See the "Hardware Delay" pattern in that book.

My understanding is that a "clock source" (aka "System Clock") generates a steady series of ticks (say f1 kHz), which is fed through a "Prescaler" unit that divides it by a factor n (which can be 1), giving another frequency (f2 = f1/n kHz) that is then fed to the processor. Thus the "Processor Clock" has frequency f2 and period t2 = 1/f2 ms. To generate a precise time interval (e.g. a 100 ms timer) we just calculate the number of ticks in 100 ms, i.e. 100/t2 = f3 "processor clock ticks", and set up the appropriate timer registers with values corresponding to f3 to generate a 100 ms tick.
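To make the arithmetic concrete, here is a minimal C sketch of what I mean (the 12 MHz system clock and the divide-by-12 prescale factor are illustrative assumptions, chosen to match the classic 8051 I mention below):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t f1_hz    = 12000000UL; /* system clock (assumed: 12 MHz)    */
    uint32_t n        = 12UL;       /* prescaler divide factor (assumed) */
    uint32_t f2_hz    = f1_hz / n;  /* clock fed onward after prescaler  */
    uint32_t delay_ms = 100UL;      /* desired interval                  */

    /* ticks needed = interval x tick rate = 100 ms x f2 */
    uint32_t ticks = (f2_hz / 1000UL) * delay_ms;

    printf("f2 = %lu Hz -> %lu ticks for %lu ms\n",
           (unsigned long)f2_hz, (unsigned long)ticks,
           (unsigned long)delay_ms);
    return 0;
}
```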

However, it does not seem to be so straightforward (at least for the 8051) because of the number of ticks required for a "processor instruction cycle". A processor goes through its instruction cycle, i.e. Fetch -> Decode -> Execute -> Interrupt (interrupts are checked at the end of the current instruction), which takes a series of ticks. Ideally the entire instruction cycle would take just 1 tick (e.g. on a pipelined processor), and I could use the above calculation to set up timers. However, on the original 8051 the system clock ran at 12 MHz and each instruction cycle took 12 ticks. Since interrupts are checked only at the end of the instruction cycle, we now need a further division by 12 to get the correct "timer" clock tick, i.e. "processor clock"/12 = f2/12.
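If my understanding is right, then on a classic 12 MHz part each timer increment corresponds to one 12-tick instruction cycle (1 MHz, i.e. 1 µs per increment), so a 100 ms delay needs 100000 increments, which overflows a 16-bit timer. A sketch of a hardware delay in that spirit (assuming Keil C51-style SFR names from reg51.h) would poll two 50 ms periods instead:

```c
#include <reg51.h>  /* assumed Keil-style header declaring TMOD, TH0, ... */

#define OSC_HZ      12000000UL
#define TICK_HZ     (OSC_HZ / 12UL)         /* instruction-cycle rate: 1 MHz */
#define TICKS_50MS  (TICK_HZ / 20UL)        /* 50000 increments per 50 ms    */
#define RELOAD_50MS (65536UL - TICKS_50MS)  /* count up from here to rollover */

static void delay_50ms(void)
{
    TMOD &= 0xF0;                              /* clear Timer 0 mode bits  */
    TMOD |= 0x01;                              /* Timer 0, mode 1 (16-bit) */
    TH0 = (unsigned char)(RELOAD_50MS >> 8);   /* start value, high byte   */
    TL0 = (unsigned char)(RELOAD_50MS & 0xFF); /* start value, low byte    */
    TF0 = 0;                                   /* clear overflow flag      */
    TR0 = 1;                                   /* start the timer          */
    while (!TF0)                               /* hardware counts; we poll */
        ;
    TR0 = 0;                                   /* stop the timer           */
}

void delay_100ms(void)
{
    delay_50ms();
    delay_50ms();
}
```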

Is my above understanding correct? If so, how can I calculate precise timings when instructions can take different numbers of cycles (e.g. mixed 32- and 16-bit instructions)? Also, do timers have to be incremented via the processor instruction cycle, or is there a way in hardware to increment a timer register strictly in sync with the output of the "Prescaler" unit?

Best Answer

The short answer is "It depends." Different processor families use different approaches; there isn't a one-size-fits-all answer. Also, synchronous interrupts (those generated by internal hardware synchronized to some internal clock) may be acknowledged more predictably, and differently, than asynchronous events (external to the micro). As always, read the datasheet and keep an open mind.

Calculating precise timing depends on the processor, as well. Even when an event (timer counter match, external pin, whatever) occurs, if your processor is busy executing a multi-cycle instruction, it usually won't abort that instruction in order to start an interrupt routine. (But some processors WILL interrupt SOME instructions, even though I just said they usually don't. So even that isn't gospel, and you have to read the datasheet and family guides to be sure.) Worse, even if the processor is executing a single-cycle instruction, there may be some variability in the interrupt response. So you might find the docs saying "5 to 6 cycles later," for example. Even then, you aren't exactly sure.
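If you need to tame that jitter for a periodic event, one common trick is to schedule each event from the previous deadline rather than from "now", using a free-running counter with a compare register. A hedged sketch, with hypothetical helper functions standing in for whatever register access your part actually provides:

```c
#include <stdint.h>

#define TICKS_PER_PERIOD 50000UL  /* nominal period in timer ticks (assumed) */

/* hypothetical helpers -- replace with your MCU's real register access */
extern uint32_t timer_count_read(void);        /* free-running counter   */
extern void     timer_compare_write(uint32_t); /* next match value       */

static uint32_t next_deadline;

void timer_periodic_init(void)
{
    next_deadline = timer_count_read() + TICKS_PER_PERIOD;
    timer_compare_write(next_deadline);
}

void timer_match_isr(void)
{
    /* Advance from the previous *deadline*, not from the moment the ISR
       happened to run: variable interrupt latency then shows up only as
       per-event jitter and never accumulates into long-term drift. */
    next_deadline += TICKS_PER_PERIOD;
    timer_compare_write(next_deadline);
}
```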

On the other hand, some processors are as predictable as an atomic clock. The Analog Devices ADSP-21xx (single cycle for EVERY instruction word -- some of which can perform three operations in parallel) ALWAYS has exactly the same interrupt response time to a timer event, every time. You can almost set your atomic clock by it. No variation at all. Just clean, perfect, predictable responses. Every time.

But that's rare.

And if your processor has a nice, long pipeline, you might have to wait for it to "drain" before the interrupt is taken. That's done so there isn't a lot of internal state to restore when restarting multiple instructions in various stages of execution.

But even then, there are exceptions. The DEC Alpha might take some clocks just to get to your interrupt routine, and when it does, it will probably have several instructions at various stages in the various pipelines -- all of which are just sitting there waiting to continue. Your interrupt code will need to save all that state, do its thing, restore that state, and then restart ... with all the pipelines back where they were when interrupted. The interrupt code, if it needs to track back and find a faulting instruction, is PAINFUL to write. But that thing screamed. They wouldn't even do lane changes for byte selection, because that would add a combinatorial delay and reduce the clock rate.

So, that's rare. But yes, even that can happen.

SO...... READ THE DATASHEET AND FAMILY MANUAL. And be prepared for ANYTHING at all. Designers can be VERY CREATIVE at times.
