Electronic – arduino – Which is more efficient: every loop or afterwards

arduino · data acquisition · esp32 · microcontroller · signal processing

I'm planning to collect some sensor data with an Arduino (or ESP32). Looking through other people's code, I noticed two general variants for processing sensor data:

  1. Collect, average and subtract some initial calibration value within the loop.
  2. Collect all data, then process after the loop.

Some data qualities (in my example):

  • data sampled at 100 Hz
  • 2-3 channels (imagine a gyroscope; natural signal frequency maybe 1-10 Hz for anything useful?)
  • a baseline, taken from a calibration window recorded beforehand, is subtracted, e.g. 100 data points / 1 s

Pseudocode:

//variant 1) process within loop
loop() {
  read_data=readsensor();
  read_data=smooth(read_data);
  read_data=subtract(read_data,baseline);
  data_store[cnt]=read_data;
  cnt++;
}

//variant 2) after
loop() {
  read_data=readsensor();
  data_store[cnt]=read_data;
  cnt++;
}
process() { //called after the loop 'finishes'
  data_store=smooth(data_store);
  data_store=subtract(data_store, baseline);
}

I understand that variant 1 costs processing time during acquisition and variant 2 costs it afterwards. I'm interested in the fastest route (i.e. least time after recording ends) to the best version of the data (smoothed, baseline-subtracted, etc.).

Questions:

  • Is there some sort of optimization in Arduino that would make 2 faster than 1 – or can 2 fundamentally be implemented faster than 1? (That's my hunch.)
  • Are there fundamental engineering principles that would suggest 1 rather than 2 (based on the sampling theorem, etc.)?
  • Are there general rules of thumb on the Arduino(-compatible) platform?

Any other advice is very welcome!

Best Answer

OK, to answer your questions directly:

  • Version 2 (all else being equal, and assuming the computations take equal time) will always be slower to reach the final data, as it still has to do all the processing at the end. There's no optimization the compiler can apply here, and the Arduino environment doesn't change this either – it's just a thin wrapper over GCC in most cases. It's worth measuring how much slower it is; your computations may be trivial and the slowdown unnoticeable.
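To see why the compiler has nothing extra to exploit, note that for the same filter the streaming and batch formulations perform the identical sequence of operations, just at different times. A minimal sketch (the 1-pole smoothing filter and the names are my own, not from the question):

```cpp
#include <cstddef>

// Same smoothing kernel used both ways: a 1-pole IIR filter step.
static float ema_step(float state, float x) { return state + 0.2f * (x - state); }

const size_t N = 256;

// Variant 1: process each sample as it "arrives".
void process_streaming(const float* in, float* out, float baseline) {
    float state = 0.0f;
    for (size_t i = 0; i < N; ++i) {
        state = ema_step(state, in[i]);
        out[i] = state - baseline;
    }
}

// Variant 2: keep the raw samples, then run the identical computation afterwards.
void process_batch(const float* raw, float* out, float baseline) {
    float state = 0.0f;
    for (size_t i = 0; i < N; ++i) {
        state = ema_step(state, raw[i]);
        out[i] = state - baseline;
    }
}
```

Both produce bit-identical output; the only difference is *when* the CPU time is spent, which is exactly the trade-off in the question.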
  • as discussed in the comments, in some cases data has to be collected at a fixed sample rate (e.g. audio). That might not be the case for your application; it's your call. Also keep in mind that "variable loop time" can be misleading – the variability may be too small to matter. For data that needs a fixed sample rate, the approaches, sorted from best to worst, are:
    • using a timer/interrupt to trigger successive samplings
    • your approach #2
    • your approach #1
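The timer-paced option can be sketched as the classic non-blocking scheduling pattern. Below, a fake clock stands in for Arduino's `millis()` so the snippet compiles off-target; on real hardware you'd drop the stub and let the core provide `millis()` (or, better still, use a hardware timer interrupt for the least jitter). The constants are assumptions matching the question's 100 Hz.

```cpp
#include <cstdint>
#include <cstddef>

// Fake clock standing in for Arduino's millis(); delete on real hardware.
static uint32_t fake_now = 0;
static uint32_t millis_() { return fake_now; }

const uint32_t SAMPLE_INTERVAL_MS = 10;  // 100 Hz
uint32_t next_sample_ms = 0;
size_t samples_taken = 0;

// Called as often as possible; samples only when the interval has elapsed.
// Scheduling from the grid (next += interval), not from 'now', keeps slow
// in-loop processing elsewhere from shifting the sample instants.
void poll() {
    uint32_t now = millis_();
    if ((int32_t)(now - next_sample_ms) >= 0) {   // wraparound-safe compare
        next_sample_ms += SAMPLE_INTERVAL_MS;
        // readSensor() / store would go here
        ++samples_taken;
    }
}
```

Calling `poll()` more often than the interval costs nothing extra: it fires exactly once per 10 ms slot, which is what keeps the sample rate fixed.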
  • in general, data collected at a variable sample rate has a number of undesirable properties; even something as simple as averaging becomes conceptually harder. The samples represent differing amounts of time, so their relative weights are unequal. If you insist on mathematical correctness, you need to store a timestamp beside each sample and use the timestamps to compute the relative weights that go into the average. If you don't insist... it's your call, really, but it's harder to reason about the results you're getting.
  • "any generic rules of thumb" is a really vague question. Keep in mind that if you've written code for a PC using GCC, code for an MCU using GCC is not that different: the same programming concepts apply with regard to what the compiler sees and can optimize. There's no hidden ingredient in Arduino in general. You just need to be aware that RAM is much more limited, code space is limited, debugging is harder, and the MCU can be slow – especially at things like floating-point math and trig functions.