Electronic – CPU Utilization Methods

cpu, digital-logic, embedded, interrupts, microcontroller

To fulfill customer expectations, the customer report about the device also has to include a section on CPU utilization. Because I have never done such a task before, I have looked through some Google search results. A lot of the articles deal specifically with Linux programming, a lot of them discuss CPU utilization only in general (theoretical) terms, and I could not find much about how it is actually measured in practice .. okay, there are some, e.g. on embedded.com.

I am interested in how YOU have done such a timing job before. I am interested in the method and also in which tool was used: direct measurement on the oscilloscope (or logic analyzer), or capturing data from the oscilloscope and post-processing it? Also, which time frame should be used to calculate CPU utilization? The busiest moment, when all interrupts are active? In that case the CPU utilization is much higher than it is maybe 1 millisecond or 1 microsecond later, when only the background loop is executing.

For reference, here is my first CPU utilization approach (I don't know if it is the right one):
Each interrupt has a dedicated pin that goes high when the interrupt begins executing and low when it ends (there are also some propagation delays involved). I export these signals from the oscilloscope into a file and post-process them with Octave. The open issue is still which time frame to take. A minimal sketch of the pin toggling is shown below.
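
This is roughly what I do in each ISR. The register name GPIOA_BSRR, its address, the pin number, and the ISR name are all placeholders (STM32-style, purely as an example); on another MCU you would substitute its own GPIO registers or HAL calls.

    #include <stdint.h>

    /* Hypothetical memory-mapped set/reset register for the debug port
     * (STM32-style BSRR shown purely as an example). */
    #define GPIOA_BSRR   (*(volatile uint32_t *)0x48000018u)

    #define DEBUG_PIN_UART        (1u << 3)                  /* spare pin wired to the scope */
    #define DEBUG_PIN_SET(mask)   (GPIOA_BSRR = (mask))                  /* drive pin high   */
    #define DEBUG_PIN_CLR(mask)   (GPIOA_BSRR = (uint32_t)(mask) << 16)  /* drive pin low    */

    void UART_IRQHandler(void)                               /* hypothetical ISR name        */
    {
        DEBUG_PIN_SET(DEBUG_PIN_UART);   /* entry marker: goes high a few cycles after vectoring */

        /* ... the real interrupt work ... */

        DEBUG_PIN_CLR(DEBUG_PIN_UART);   /* exit marker: goes low just before returning          */
    }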

If you have any questions, please ask in the comments section.

Best Answer

CPU utilization is really only a crude measurement of the overall resiliency of a real-time system. Therefore, the answer to your question is that it is generally a long-term average value.

The real criterion is whether all of the software tasks meet their completion deadlines. Note that this includes both tasks triggered by interrupts and tasks triggered by other kinds of events. When CPU utilization begins to approach 100%, then the completion time of lower-priority tasks tends to become arbitrarily large.

Using GPIO pins to indicate the run time of individual tasks is one good way to check whether those deadlines are ever exceeded.

Another approach is to instrument the code itself. If you have access to a free-running counter (a spare hardware counter/timer module, perhaps), then you can take a snapshot of its value at the beginning of each task, and then at the end of the task, take another snapshot and compute the difference. If this ever exceeds the required value for that task, indicate an error.
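
A minimal sketch of that instrumentation idea, assuming a 32-bit free-running counter readable through a hypothetical read_free_running_counter() function (for example the DWT cycle counter on a Cortex-M) and an assumed per-task budget:

    #include <stdint.h>
    #include <stdbool.h>

    extern uint32_t read_free_running_counter(void);   /* e.g. DWT->CYCCNT on a Cortex-M */

    #define TASK_X_DEADLINE_TICKS  4800u    /* assumed budget: 100 us at a 48 MHz tick rate */

    static volatile uint32_t task_x_worst_case;         /* worst observed run time (high-water mark) */
    static volatile bool     task_x_deadline_missed;

    void task_x(void)
    {
        uint32_t start = read_free_running_counter();   /* snapshot at task entry */

        /* ... the task's real work ... */

        /* Unsigned subtraction handles counter wrap-around, as long as the
         * task runs for less than one full counter period. */
        uint32_t elapsed = read_free_running_counter() - start;

        if (elapsed > task_x_worst_case)
            task_x_worst_case = elapsed;

        if (elapsed > TASK_X_DEADLINE_TICKS)
            task_x_deadline_missed = true;              /* flag the error for later inspection */
    }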


A slightly different question would be to compute the expected CPU utilization of a system, before it is implemented.

In this case, you consider each task individually, coming up with estimates of how long it runs when triggered and how often it is triggered. The run time divided by the trigger period gives the CPU utilization for that task by itself.

If you add up all of the individual utilization values and get a value that approaches or exceeds 100%, then you need to think about ways to redistribute the work — faster CPU, more CPUs, dedicated hardware for some tasks, etc.
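
As a worked example of this budgeting, with made-up run times and trigger periods for a handful of hypothetical tasks (utilization of each task = run time / trigger period, total = sum over all tasks):

    #include <stdio.h>

    typedef struct {
        const char *name;
        double run_time_us;   /* estimated execution time per activation */
        double period_us;     /* how often the task is triggered */
    } task_budget;

    int main(void)
    {
        const task_budget tasks[] = {
            { "UART ISR",          12.0,    100.0 },   /* 12 % */
            { "ADC ISR",            5.0,    250.0 },   /*  2 % */
            { "1 ms control loop", 300.0,  1000.0 },   /* 30 % */
            { "10 ms housekeeping", 2000.0, 10000.0 }, /* 20 % */
        };

        double total = 0.0;
        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            double u = tasks[i].run_time_us / tasks[i].period_us;
            total += u;
            printf("%-20s %5.1f %%\n", tasks[i].name, 100.0 * u);
        }
        printf("%-20s %5.1f %%\n", "TOTAL", 100.0 * total);   /* 64 % here, leaving some headroom */
        return 0;
    }

With these example numbers the total comes to 64 %, which leaves margin; if the sum had come out near or above 100 %, that would be the signal to redistribute the work as described above.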