Seeing drift when using the Madgwick algorithm to correct for orientation with a 9 DOF IMU


I am trying to correctly track the orientation of a sensor with an accelerometer, gyroscope, and magnetometer using the open-source Madgwick algorithm. The algorithm returns quaternions which represent the rotation between the internal and external frames. However, the quaternions returned by this code appear to suffer from drift, and even after digging through it multiple times I cannot find the cause. On top is what is returned, and on the bottom are the approximately correct values (from an experiment with a different sampling frequency); see the "Quaternions" plot.
Could this be caused by too low a sampling frequency? Madgwick's internal report gives a lower bound of 10 Hz, and this sample was taken at 5 Hz. It is not clear to me how that issue would produce these results, but it is currently my best guess. I also think the issue could be the acceleration contribution from gravity, but my understanding is that gravity's influence can be removed once the quaternions have been calculated (roughly as sketched below), so if that is indeed the issue, how can I remove the linear acceleration before the quaternions exist in the first place?
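For reference, this is how I understand the gravity removal step once a quaternion is available. It is only a minimal numpy sketch: the function name is mine, and the [w, x, y, z] sensor-to-Earth convention is an assumption, since conventions differ between implementations.

    import numpy as np

    def remove_gravity(accel, q):
        """Subtract gravity (in g) from an accelerometer reading using the
        orientation quaternion q = [w, x, y, z] (sensor-to-Earth rotation).
        Returns the linear acceleration expressed in the sensor frame."""
        w, x, y, z = q
        # Earth's +Z (up) axis rotated into the sensor frame, i.e. the
        # direction in which a static sensor measures 1 g.
        g_sensor = np.array([
            2.0 * (x * z - w * y),
            2.0 * (w * x + y * z),
            w * w - x * x - y * y + z * z,
        ])
        return np.asarray(accel, dtype=float) - g_sensor

    # A sensor lying flat should report roughly zero linear acceleration.
    print(remove_gravity([0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0]))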

Edit: The accelerometer and gyro values themselves are not drifting; the effect only appears in the calculated quaternions.

Best Answer

@Olin Lathrop's answer is wrong. Unfortunately, I don't have the reputation to either downvote or comment on his answer. It's curious that he chose to claim that what is being achieved in the paper you link isn't physically possible...

As Madgwick's paper demonstrates, it is indeed possible to maintain a stable orientation estimate over long durations using only a 9 DOF MEMS MARG sensor. His algorithm was also benchmarked on a range of hardware by Kris Winer, who in particular demonstrates virtually drift-free tracking over a half-hour period that includes both intervals of lying still and intervals of being moved around. The relative error of the absolute orientation stays below a degree over that period. Impressively, this is achieved without resorting to a full Kalman filter: the update requires only scalar operations rather than the computationally expensive matrix inversions a Kalman filter needs.

Looked at through the lens of observability, orientation estimation is a problem with six coupled states: three for the orientation itself and three for its derivatives (i.e. the angular rates).

Obviously, the gyroscopes give a direct measurement of the latter three states, which leaves the three orientation states. Assuming a static gravitational field and a static magnetic field that are not aligned with each other, those two measurements fix two of the three directions. The structure of the coordinate system constrains the relationship between the three directions, so the third is fixed simply by construction (see the sketch below). The system is therefore observable, which is to say that, contrary to @Olin Lathrop's claim, the physics explicitly allows you to determine the complete orientation without depending on any integration at all, so his objection regarding integration error is unfounded.
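To make that concrete, here is a minimal sketch of recovering a full orientation from a single accelerometer and magnetometer sample using a TRIAD-style construction. This is not Madgwick's algorithm, only an illustration that the orientation is observable with no gyro and no integration; the function name and the NED (north-east-down) convention are my own choices.

    import numpy as np

    def orientation_from_accel_mag(accel, mag):
        """Build a sensor-to-Earth (NED) rotation matrix from one accelerometer
        and one magnetometer sample alone (no gyro, no integration)."""
        down = -np.asarray(accel, dtype=float)      # specific force points up, gravity down
        down /= np.linalg.norm(down)
        east = np.cross(down, np.asarray(mag, dtype=float))
        east /= np.linalg.norm(east)                # undefined if accel and mag are aligned
        north = np.cross(east, down)
        # Rows are the Earth axes (N, E, D) expressed in the sensor frame,
        # so this matrix rotates sensor-frame vectors into the Earth frame.
        return np.vstack((north, east, down))

    # Sensor lying flat, magnetic field pointing north and dipping downward.
    print(orientation_from_accel_mag([0.0, 0.0, 1.0], [0.4, 0.0, -0.9]))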

That said, integration is still a useful process because it is what allows sensor fusion: you are essentially performing a best fit between the incremental orientations implied by the angular rates and the directly measured absolute orientations. Since they measure three completely separate physical phenomena, their errors are likely uncorrelated, which lets you significantly improve the accuracy and robustness of the orientation estimate (a toy example of the idea follows below).
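As a toy illustration of why the fusion helps, here is a single-axis complementary filter. It is deliberately much simpler than Madgwick's quaternion filter, and the gain, bias, and noise figures are made up purely for the example.

    import numpy as np

    def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.9):
        """Fuse gyro-integrated angle increments with absolute angles derived
        from the accelerometer (single-axis tilt, for illustration only).
        alpha weights the integrated estimate; (1 - alpha) continually pulls
        it toward the drift-free but noisy absolute measurement."""
        angle = accel_angles[0]
        history = []
        for rate, absolute in zip(gyro_rates, accel_angles):
            angle = alpha * (angle + rate * dt) + (1.0 - alpha) * absolute
            history.append(angle)
        return np.array(history)

    # A stationary sensor with a biased gyro (0.5 deg/s offset), sampled at 5 Hz.
    t = np.arange(0.0, 60.0, 0.2)
    gyro = np.full_like(t, 0.5)                        # pure bias, true rate is zero
    accel_angle = np.random.normal(0.0, 1.0, t.size)   # noisy but unbiased tilt
    fused = complementary_filter(gyro, accel_angle, dt=0.2)
    print(fused[-1])   # stays bounded near ~1 degree instead of drifting to ~30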

This brings us to your question about where the lower bound on sampling frequency comes from. I'm not the author of the algorithm, and I have only ever used it as a component in a larger system where its performance was adequate, so I haven't personally dug through every line of every equation in the paper. That said, the temporal nature of the problem does indeed point to something related to the timestep.

The numerical integration itself might contribute, but it is likely not the main driver, since its performance should be reasonably robust to larger timesteps. I would instead look at the single-step gradient descent (second paragraph on page 7 of the linked paper) used as the optimisation algorithm to fit the sensor observations to the assumed fields. At higher sample rates this simplification is likely sufficient: there is little time for the estimate and the measurements to diverge, so one iteration already gives a reasonable estimate. As you slow the algorithm down, that likely no longer holds, and you would need to iterate until you get a reasonable error term, roughly as sketched below.
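To illustrate what iterating to a small error term might look like, here is a sketch of the accelerometer part of Madgwick's gradient-descent correction, repeated until the objective is small rather than taken as a single step. The step size, tolerance, and the use of the raw (unnormalised) gradient are my own choices for the illustration, not what the paper prescribes.

    import numpy as np

    def gradient_step(q, accel):
        """One gradient-descent step on Madgwick's accelerometer objective:
        the gravity direction predicted by q = [w, x, y, z] minus the measured
        one. Returns the (unnormalised) gradient and the objective magnitude."""
        w, x, y, z = q
        ax, ay, az = np.asarray(accel, dtype=float) / np.linalg.norm(accel)
        f = np.array([
            2.0 * (x * z - w * y) - ax,
            2.0 * (w * x + y * z) - ay,
            2.0 * (0.5 - x * x - y * y) - az,
        ])
        J = np.array([
            [-2.0 * y,  2.0 * z, -2.0 * w, 2.0 * x],
            [ 2.0 * x,  2.0 * w,  2.0 * z, 2.0 * y],
            [ 0.0,     -4.0 * x, -4.0 * y, 0.0],
        ])
        return J.T @ f, np.linalg.norm(f)

    def iterate_until_converged(q, accel, mu=0.1, tol=1e-4, max_iter=50):
        """Repeat the corrective step until the objective is small, instead of
        the single step the paper uses at high sample rates."""
        for _ in range(max_iter):
            grad, err = gradient_step(q, accel)
            if err < tol:
                break
            q = q - mu * grad
            q = q / np.linalg.norm(q)
        return q

    # Fit a 30-degree roll starting from the identity quaternion.
    q = iterate_until_converged(np.array([1.0, 0.0, 0.0, 0.0]),
                                [0.0, np.sin(np.radians(30.0)), np.cos(np.radians(30.0))])
    print(q)   # approaches [cos(15 deg), sin(15 deg), 0, 0]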

In summary, you can definitely track orientation using a 9 DOF MEMS sensor. I'm not clear on why you are running the algorithm as slowly as you are. There are places in the algorithm you could look at if you want to improve its lower frequency bound, but be aware that they come with significant overhead. If the reason for the low rate is that it is the best your hardware can achieve (perhaps because of other background tasks), I would recommend offloading those tasks or upgrading the hardware to improve the rate rather than tweaking the algorithm. Generally speaking, AHRS and IMU filters should be run at the highest rate you possibly can, on dedicated hardware, because their performance tends to have fairly critical implications for the rest of your system.