Electronic – Good approaches to implement more than one time-critical function using a microcontroller


What philosophy or approach, if any, is taken to implementing highly time-critical functions on microcontrollers?

I am working on a project that involves outputting a precise square wave of varying frequency. I have done this using a timer and an interrupt routine. However, even to get this right, I had to calibrate an offset for the number of clock cycles consumed by the interrupt service routine. I imagine this precision would be disturbed by running another such waveform alongside (say, if its frequency needed to change at the exact same time). Dedicating a microcontroller to every such time-critical function seems wasteful.

Take another example: implementing a clock (as in hh:mm:ss). I can't imagine that every high-level microcontroller/computer has a dedicated real-time-clock chip solely to keep track of the time. However, I find it hard to imagine time being measured accurately by the core processor, which is busy servicing a plethora of functions that arrive at asynchronous intervals in the meantime. I'd imagine the timekeeping would accumulate offset errors that change depending on which functions are running.

Is there a design process or approach for bounding, or assigning a tolerance to, the achievable precision? Or does anyone have pointers or suggestions on where I could find more information about this?

Best Answer

To output precise square waves, use the hardware. Most microcontrollers have PWM generators built in that can do this. You set the period and on-time in clock cycles, and the hardware does the rest. To change to a new frequency, write the new period into the period register and half of it into the duty-cycle register.

As for a real-time clock losing time due to other processor load: it doesn't work that way unless the firmware is very poorly written. Generally the hardware is used to create a periodic interrupt whose period divides evenly into one second, and the firmware divides down from there. This works regardless of how busy the processor is, since the interrupt runs whenever it needs to. As long as the interrupt routine takes a small fraction of the overall cycles, most of the processor is still available to the foreground task.
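A sketch of that divide-down, assuming a hardware timer configured to call this ISR at a fixed 1 kHz tick rate (the rate and variable names are illustrative):

```c
#include <stdint.h>

#define TICKS_PER_SEC 1000u   /* assumed hardware tick rate: 1 ms */

static volatile uint16_t tick;
static volatile uint8_t  ss, mm, hh;

/* Called by the hardware timer every tick.  Divides the tick rate down
 * to seconds, then minutes, then hours -- no RTC chip required. */
void timer_isr(void)
{
    if (++tick < TICKS_PER_SEC)
        return;
    tick = 0;
    if (++ss == 60) {
        ss = 0;
        if (++mm == 60) {
            mm = 0;
            if (++hh == 24)
                hh = 0;
        }
    }
}
```

The accuracy of this clock depends only on the timer's clock source (typically a crystal), not on how busy the foreground code is: a delayed interrupt still gets serviced, and no ticks are lost as long as interrupts are never disabled for longer than one tick period.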

There are also ways to keep time by polling at somewhat irregular intervals. You have the hardware keep a free-running count, and whenever you get around to updating the clock, you advance it by the total number of ticks elapsed since the last update. As long as this routine runs often enough that the counter cannot wrap more than once between runs, no time is lost.
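The polled scheme above might look like this. The `read_hw_counter()` stub stands in for reading a free-running 16-bit timer register (an assumption; real register access is part-specific), and unsigned subtraction handles the wrap automatically:

```c
#include <stdint.h>

/* Stub for reading a free-running 16-bit hardware counter; in real
 * firmware this would read the timer's count register. */
static uint16_t hw_count;
static uint16_t read_hw_counter(void) { return hw_count; }

static uint16_t last_count;
static uint32_t total_ticks;   /* software-extended running total */

/* Call at irregular intervals, but at least once per counter wrap period.
 * Modular (unsigned) subtraction yields the correct elapsed count even
 * when the 16-bit counter has rolled over since the previous call. */
void update_clock(void)
{
    uint16_t now = read_hw_counter();
    total_ticks += (uint16_t)(now - last_count);
    last_count = now;
}
```

No time is lost to scheduling jitter here: a late call simply accumulates more ticks at once, and `total_ticks` can then be converted to seconds by dividing by the tick rate.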