Electronic – How to improve resolution and precision of sensor reading

sensing, voltage measurement

I'm trying to measure temperature with an Arduino Uno and an LM335 temperature sensor.
The LM335 covers temperatures from -40 °C to 100 °C with an output of 10 mV per kelvin. I read the sensor with the onboard AVR ADC, which is 10-bit.

I use the following conversion to get Celsius from the Kelvin scale that the sensor provides:

\$\left( \frac{M_s \cdot V}{1024} \cdot \frac{1}{0.01} \right) - 273.15\$ where \$M_s\$ is the measured sample and \$V\$ is the voltage of my power supply.
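
In code, that conversion might look like the minimal sketch below. This is only an illustration: the reference voltage `VREF` and the analog pin are assumptions to replace with your own values. Note that with a 5 V reference one ADC count is about 4.9 mV, i.e. roughly 0.5 K, which already bounds the resolution of a single reading.

```cpp
// Minimal Arduino sketch (assumed wiring: LM335 output on A0, 5.0 V reference).
// Converts one raw 10-bit ADC sample to degrees Celsius using the formula above.
const float VREF = 5.0f;      // assumed ADC reference / supply voltage in volts
const int LM335_PIN = A0;     // assumed analog input pin

float readCelsius() {
  int sample = analogRead(LM335_PIN);      // Ms: 0..1023
  float volts = sample * VREF / 1024.0f;   // ADC counts -> volts
  float kelvin = volts / 0.01f;            // LM335 output scale: 10 mV per kelvin
  return kelvin - 273.15f;                 // kelvin -> degrees Celsius
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(readCelsius());
  delay(1000);
}
```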

I use the "uncalibrated" method of measurement. (currently I don't care about the calibration procedure)

My question is how to improve the measurement, perhaps to three decimal places. Is this possible with this sensor and ADC? Can I make multiple measurements, perhaps with 2-3 sensor units with different resistances, or with a single unit connected through a multiplexer to different resistance networks, so as to get better resolution?
What are the common procedures in cases like this?

EDIT: I would further appreciate it if the answers included some information about the difference between accuracy, precision and resolution.

For example, as I understand it, calibration of the sensor module would affect its accuracy, i.e. how close the sensor readings are to the "real" quantities, but not its precision. My question was aimed first at improving the "precision" (how close the readings are to the average value of the readings). I said that I don't care right now about the calibration scheme because I'm not really interested in whether the temperature is 28.4 while I read 29.4 in my measurement. I care that if the temperature rises 0.1 °C I am able to read it (so if I improve the precision of the readings, I will then calibrate the sensor to get better accuracy). Sorry for my poor explanation and English.

Best Answer

> Sorry for my poor explanation and English.

There are a number of important terms in English. But these would have the same scientific meaning in any language. So you'll need to work out the mental "simulation" in your own way of thinking about the world. If you understand the terms well, you will have in your head what others have in theirs and if you apply the terms to some specific situation you will make similar predictions to others seeing the same specific situation, regardless of language, culture, fad, idiom, or century.

These are:

  1. precision: (Sometimes called random error.) A measure of statistical variability of a group of measurements made under the same measurement circumstances. It is what remains after corrections are applied using extant science theory and mathematical modeling. The shape of the distribution may be meaningful, and is often left unstated. If measures about the shape are left unstated, it is usually taken to imply Poisson events surrounding some assumed true value in reality, with the usual Gaussian distribution (the large-count limit of the Poisson) and therefore the usual meaning of "standard deviation" and "variance" about that value.
  2. trueness: (Sometimes called systematic error.) A measure of statistical bias: the difference between the mean of a group of measurements and the best-known "true" value of the measurement circumstances.
  3. accuracy: (Sometimes conflated together with trueness.) This term combines precision and trueness, so that it is worse than either of those individually. The reason trueness is often conflated with accuracy is that if one knows any two of precision, accuracy, and trueness, then the third can be derived from them (so long as the random and systematic errors are Gaussian distributed, anyway); see the short formula after this list. So just be aware of the context to figure that out.
  4. repeatability: The measurement variation that remains after attempting to keep measurement conditions constant (known variables in control) while also using the same instrument and operator and over some "short-enough" time period to avoid long term drift.
  5. reproducibility: The measurement variation that remains after attempting to keep measurement conditions constant (known variables in control) but now using different instruments (of the same manufacturing batch or type, typically) and different operators with standardized training, and now over longer time periods where some long term drift may be present. Reproducibility is important in science, because if one researcher has described all of the circumstances needed to reproduce a result, another researcher must read the description and attempt to replicate the circumstances using their own instrumentation and capabilities. They won't have access to the same instruments. And the new researcher is obviously not the same individual, either, though it is reasonable to assume a standard training level between them. So reproducibility provides an idea of how well an experimental result might be replicated. (For example, the reproducibility of a pipette is typically considered to be one half of the smallest visible graduations marked on it -- which takes into account precision and accuracy of the marks as well as the varying ability of different researchers to use the pipette to measure out a quantity of fluid.)
  6. detectability: The smallest possible change in measurement statistics that can be used to distinguish one measurement from another measurement (sometimes, one of those measurements is simply "background fluctuations"). The exact meaning of "distinguished statistically" varies from field to field and may be more a matter of informed opinion. For example, in subatomic physics where statistics dominates discoveries, \$3\sigma\$ is considered "evidence for" and \$5\sigma\$ is usually considered "detectable." But don't expect the same standards in every field.
  7. dynamic range: The ratio between the largest and smallest measurements.
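
To make the relationship in item 3 concrete (a common operationalization, assuming Gaussian errors; not the only convention in use), accuracy expressed as RMS error decomposes into a trueness term and a precision term:

\$\$ \text{(RMS error)}^2 = \underbrace{(\bar{x} - x_\text{true})^2}_{\text{trueness (bias)}^2} + \underbrace{\sigma^2}_{\text{precision}^2} \$\$

where \$\bar{x}\$ is the mean of repeated readings, \$x_\text{true}\$ is the reference value, and \$\sigma\$ is the standard deviation of the readings. Knowing any two of the three terms fixes the third.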

Accuracy and trueness require references back to standards and need traceability to those standards. Once calibrated against a standard (and this may require just a few calibration points or it may require hundreds or even thousands of them), an instrument will still include both time drift and temperature drift with respect to that calibration event and the ambient temperature at calibration. So, the further away in time or the further away in ambient temperature that you use this instrument, the worse its accuracy and trueness become.

Even with an infinitely precise instrument, should that ever be considered possible, precision alone helps you in no way in terms of accuracy and trueness for that instrument. It could still be way off the mark, just infinitely precise at being way off. Precision does help you when it comes to calibration, though, because it tells you just how precisely you are accurate or true (consistent with the precision of the accuracy standard, of course).

Since you mentioned temperature, and that happens to be an area where I've spent a little time, let me put some of the above into context as well as talk a little about some of the ideas introduced already in other answers.

Accurate temperature measurement generally requires traceability to standards which can tell you the true value of a specific situation. In the US, this usually means being traceable to NIST's standards. (In Germany, DIN.)

Temperature happens to be quite difficult in terms of finding true values. For many decades, this was done using "freeze points," since the process of a pure material going from a melted state into a frozen state is sharper than the reverse transition (from frozen to melted) and because, when using very highly pure ingredients it is possible to make rather accurate theoretical predictions about the freeze point under a set of crafted conditions. These freeze points included copper, gold, and platinum, just to name a few. There are substantial limitations to using freeze points, though. One of them being the limited number of useful ones. The cheapest freeze point is, of course, water. And ice baths are commonly used in order to create one. But sadly, even under the best circumstances, that only provides one calibration point. And it is rarely enough, unless your needed dynamic range is quite narrow and near that freeze point.

NIST has replaced the use of freeze points as they now have better methods. But commercial companies usually use traceable methods, where they buy calibration of a device from NIST and then use it under the specified conditions and for the specified allowable duration before getting a new one or re-calibrating the old one. Tungsten strip lamps and radiation thermometers are examples of traceable standards. (A disappearing filament can often be used to make comparisons between a standard and a target situation to see if they are the same, but usually isn't used to make an absolute measurement.) Some companies will use a secondary standard -- one that is made and calibrated by a company that has purchased a NIST calibrated standard. (The number of "hops" from NIST to the actual calibration of an instrument is often related to its value as a product.)

A single ADC measurement, for example, includes both random and systematic errors. You can sum up ADC values (an average is the same thing as a sum, the only difference being a known factor used to multiply or divide) in order to improve the signal to noise ratio. But this really only works if the random error causes sufficient dithering near one or more ADC digitized values to cause different readings to occur. If the random error in the measurement process is too small, the ADC just reads the same value every time and this cannot be used to improve the signal to noise ratio. All it does is waste time. So if you intend on using this technique, you need to carefully arrange things so that the noise causes some dithering between ADC values. It's not uncommon to target an ADC to read about 2 or 3 bits "into the noise" in order to make this method work reasonably well.
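
As a rough sketch of that summing/averaging idea (assuming the same wiring as the earlier sketch, and crucially assuming there is enough noise to dither the ADC between adjacent codes), you might accumulate a block of raw samples before converting:

```cpp
// Oversampling sketch. Assumptions: LM335 on A0, 5.0 V reference, and enough
// noise to dither the ADC between adjacent codes -- without that dithering,
// summing identical readings only wastes time.
const float VREF = 5.0f;       // assumed ADC reference / supply voltage
const int LM335_PIN = A0;      // assumed analog input pin
const int NUM_SAMPLES = 256;   // assumed block size; larger blocks mean slower updates

float readCelsiusAveraged() {
  long sum = 0;                                // 256 * 1023 fits easily in a 32-bit long
  for (int i = 0; i < NUM_SAMPLES; i++) {
    sum += analogRead(LM335_PIN);              // accumulate raw counts
  }
  float avgCounts = (float)sum / NUM_SAMPLES;  // average of the dithered readings
  float volts = avgCounts * VREF / 1024.0f;    // counts -> volts
  return volts / 0.01f - 273.15f;              // 10 mV/K, then kelvin -> Celsius
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(readCelsiusAveraged(), 3);    // three decimals of display, not of accuracy
  delay(1000);
}
```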

Assuming that the random errors are Gaussian in distribution, the signal will increase by a factor of \$N\$ (the number of samples in the sum) while the random errors will increase by a factor of \$\sqrt{N}\$. (There is truncation of the noise by the ADC, so that impairs this simplistic calculation a bit.) But the signal also includes systematic errors and these also increase by a factor of \$N\$. So summing doesn't reduce the effect of systematic error on the measurement. To help handle systematic error, measurement devices will use more calibration points or else include additional information about the systematic errors between calibration points which can then be used to make additional corrections.
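
Written out under that Gaussian assumption (and ignoring the truncation effect just mentioned), summing \$N\$ samples gives

\$\$ \text{SNR}_N = \frac{N \cdot S}{\sqrt{N}\,\sigma} = \sqrt{N} \cdot \text{SNR}_1 \$\$

so each factor-of-4 increase in the number of summed samples buys roughly one extra bit of effective resolution, while the systematic part of the error is left untouched.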

In general, it's quite expensive to achieve accuracy in temperature measurements. (Excepting the case where an ice bath calibration point is used and you don't make measurements far from that calibration point.)

All the above said, it's true that it is important to improve precision if you expect to calibrate a device for accuracy (or trueness) later. That's obvious. And good precision can yield useful detectability, even if you don't know the true value of something.

If you make two of these devices, or more, there will be the issue of reproducibility between devices. I think there's an adage about this: "A man with one clock knows what time it is. A man with two clocks is never sure."

Keep this in mind as you move forward.