A confusion about error components in a data acquisition system

My question is related to my previous question: Accuracy of a data acquisition hardware

In a data acquisition system, absolute accuracy is defined as a measure of all error sources, as follows:

Absolute accuracy = gain (span) error + offset error + (noise error + quantization error)
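As a rough sketch of how those terms combine, here is a toy model in Python; all numeric values are purely illustrative assumptions, not values from any datasheet:

```python
import random

def adc_reading(v_true, gain_err=0.0005, offset=0.002, noise_sigma=0.001):
    """Toy error model: reading = v_true * (1 + gain_err) + offset + noise.

    gain_err    -- fractional gain (span) error, systematic
    offset      -- offset error in volts, systematic
    noise_sigma -- one-sigma random error (noise + quantization) in volts
    """
    noise = random.gauss(0.0, noise_sigma)  # random part, changes every sample
    return v_true * (1.0 + gain_err) + offset + noise

print(adc_reading(5.0))  # e.g. ~5.0045 V, dominated by the systematic terms
```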

In the following datasheet, "absolute accuracy" is defined on pages 30 and 31:

http://www.mccdaq.com/pdfs/manuals/PCI-DAS6034-35-36.pdf

My question is about interpreting these parameters.

Let's say I take five samples by applying a precisely known 5 V reference voltage to a channel:

I apply 5 V and the DAQ board reads 5.004 V.

I apply 5 V and the DAQ board reads 5.002 V.

I apply 5 V and the DAQ board reads 5.001 V.

I apply 5 V and the DAQ board reads 5.003 V.

I apply 5 V and the DAQ board reads 5.002 V.

Now, the readings above differ from the true 5 V value applied.

So the readings above include gain error + offset error + noise error.

Noise error is statistical in nature and won't affect the mean value, only the dispersion.

As far as I understand, what affects the mean value here is the "gain error" and the "offset error" (systematic errors).
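A quick check of that reasoning on the five readings above (a minimal sketch; with only five samples, the split between systematic and random error is of course rough):

```python
from statistics import mean, stdev

readings = [5.004, 5.002, 5.001, 5.003, 5.002]
m = mean(readings)  # 5.0024 V

# The mean's deviation from 5 V estimates the systematic (gain + offset) part;
# the sample standard deviation estimates the random (noise) part.
print(f"systematic bias (gain + offset): {(m - 5.0) * 1e3:.1f} mV")  # 2.4 mV
print(f"noise estimate (1 sigma): {stdev(readings) * 1e3:.2f} mV")   # ~1.14 mV
```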

My question is: are the gain and offset errors in the datasheet fixed values, or are they maximum values? Are they statistical, or do they indicate a range?

For example, if the datasheet says the offset error is x and my mean reading is A, should I then correct my reading as A - x? Or is x not constant?

I'm asking because, if the offset error is fixed and known for all measurements, why don't they compensate for it before sending the data to the serial port, instead of just documenting it?

Or, if it is not fixed, should I measure the offset before each measurement?

Best Answer

My question is: are the gain and offset errors in the datasheet fixed values, or are they maximum values? Are they statistical, or do they indicate a range?

As pointed out in the other answer, the actual error varies from device to device, with the temperature of the device, and possibly with other effects such as power supply variations.

For example, if the datasheet says the offset error is x and my mean reading is A, should I then correct my reading as A - x? Or is x not constant?

The datasheet normally says the maximum error is \$x\$. Then the actual error could vary between \$-x\$ and \$x\$. They might mean this as an absolute limit, or they might mean this as something like a 3-\$\sigma\$ or 6-\$\sigma\$ limit.
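To make those interpretations concrete, here is a small sketch assuming a hypothetical 4 mV spec (the value and the sigma readings are illustrative, not from the MCC datasheet):

```python
x = 0.004  # hypothetical 4 mV "maximum error" from a datasheet

# Hard bound:   every unit is guaranteed to stay within +/- x.
# 3-sigma spec: sigma = x / 3, so roughly 0.3% of units may exceed x.
# 6-sigma spec: sigma = x / 6, so only ~2 parts per billion exceed x.
print(f"3-sigma interpretation: sigma = {x / 3 * 1e3:.2f} mV")
print(f"6-sigma interpretation: sigma = {x / 6 * 1e3:.2f} mV")
```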

If the number were an exact statement of the error, constant across all devices, temperatures, etc., the manufacturer would have been able to adjust the device to eliminate that error and sell you a device with zero error instead of \$x\$.

Or, if it is not fixed, should I measure the offset before each measurement?

Measuring the offset might help you reduce the error in your measurement. This would be a form of calibration.

But remember not all errors are offsets. It's also possible there's an error term that's proportional to the measured value (a gain error).

If you have some "golden" target to measure, you might even be able to calibrate out the gain error as well as the offset.

That would likely reduce your measurement error considerably, but still leave you with errors caused by the nonlinearity of the measurement process, temperature changes between the time of calibration and the actual measurement, etc.
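As an illustration of that two-point idea, here is a minimal sketch, assuming you can present both a 0 V short and a precisely known 5 V "golden" reference; the raw readings used are hypothetical:

```python
def make_corrector(ref_lo, read_lo, ref_hi, read_hi):
    """Fit reading = gain * true + offset through two calibration points,
    then return a function that inverts that mapping."""
    gain = (read_hi - read_lo) / (ref_hi - ref_lo)
    offset = read_lo - gain * ref_lo
    return lambda reading: (reading - offset) / gain

# Hypothetical calibration: 0 V input read as 2.1 mV, 5 V read as 5.0024 V.
correct = make_corrector(0.0, 0.0021, 5.0, 5.0024)
print(correct(5.0024))  # ~5.000 V: offset and gain errors calibrated out
```

Note that this only removes the linear (offset and gain) part of the error; nonlinearity, drift since calibration, and noise remain, as described above.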