Confusion with the meaning of a component tolerance in terms of accuracy and standard deviation

components, tolerance

As far as I understand, a component's tolerance describes the difference between the actual component value and the nominal value given on the data sheet.

If a 1 kΩ resistor has a ±5% tolerance, is this 5% a measure of standard deviation or of accuracy? Is this tolerance related to systematic error or to random error?

I'm a bit confused about these two concepts and about how this tolerance is obtained in practice. It seems that if statistical methods are used, this percentage relates to the standard deviation, but in many calculations it is taken as an accuracy bound. Yet accuracy and standard deviation are not related(?). Can you give an example to illustrate the idea behind this?

Best Answer

Internally, the manufacturer will measure the standard deviation of what he produces, to control his process. However, when he sells you a '5%' component, he guarantees that the value you get will be within ±5% of the nominal value. How he achieves this, whether by measuring every component or by knowing that 5% is perhaps six sigma for his process, is up to him. 5% is the peak error you should ever see.
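
To make that concrete, here is a minimal sketch (the 1 kΩ value and the measured readings are purely illustrative): the tolerance is read as a pass/fail bound on each individual part, not as a statement about how the parts are distributed.

```python
# Tolerance as a guaranteed worst-case bound, not a standard deviation.
# A 1 kOhm +/-5% resistor may legitimately lie anywhere in [950, 1050] ohms;
# no particular probability distribution is implied.
nominal = 1000.0   # ohms (illustrative value)
tolerance = 0.05   # +/-5%

lower = nominal * (1 - tolerance)   # 950 ohms
upper = nominal * (1 + tolerance)   # 1050 ohms

def within_tolerance(measured_ohms):
    """True if a measured part honours the +/-5% guarantee."""
    return lower <= measured_ohms <= upper

print(within_tolerance(1012.0))  # True  -> in spec
print(within_tolerance(1060.0))  # False -> out of spec
```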

It's possible that the same line is producing 1% and 5% components, and the ±1% components are being selected out at test. So when you buy 5% components, they could easily have a bimodal distribution, and you could find that all your resistors have at least 1% error, but less than 5% error.
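
As a sketch of what such a screened batch could look like (the selection scheme below is an assumption for illustration, not anything a data sheet promises), you can model it as a uniform ±5% draw with the inner ±1% band removed:

```python
import random

def screened_5pct_resistor(nominal=1000.0):
    """Draw a '5%' resistor whose +/-1% core has been selected out (assumed model)."""
    while True:
        value = nominal * (1 + random.uniform(-0.05, 0.05))
        if abs(value - nominal) > 0.01 * nominal:   # reject the inner +/-1% band
            return value

sample = [round(screened_5pct_resistor(), 1) for _ in range(5)]
print(sample)   # every value is off by 1%..5%, yet all honour the 5% spec
```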

If you wanted to predict statistically how a large number of circuits would turn out, perhaps applying Monte Carlo techniques, then given that you have been told nothing about the distribution, your safest course of action would be to assume that the resistors are drawn from a uniform distribution of width ±5% about the nominal value. You may be tempted to do some sums to express that rectangular distribution as an equivalent mean and standard deviation. You should resist that temptation.
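
A minimal Monte Carlo sketch along those lines, assuming a simple two-resistor divider built from 1 kΩ and 2 kΩ parts and uniform ±5% draws (the circuit, part values and sample count are all illustrative choices, not anything specified above):

```python
import random

def uniform_5pct(nominal):
    """Conservative assumption: value uniformly distributed within +/-5% of nominal."""
    return nominal * (1 + random.uniform(-0.05, 0.05))

def divider_ratio():
    r1 = uniform_5pct(1000.0)   # illustrative 1 kOhm
    r2 = uniform_5pct(2000.0)   # illustrative 2 kOhm
    return r2 / (r1 + r2)       # nominal ratio = 2/3

ratios = [divider_ratio() for _ in range(100_000)]
print(min(ratios), max(ratios))   # observed spread of the divider output
```

The spread you see comes directly from the worst-case ±5% assumption; summarising it as a mean and standard deviation would smuggle a distributional assumption back in.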

Even if you've bought components from manufacturer X for years, measured their distribution, and found it's normal with a 1% standard deviation around the nominal, the next batch might be bimodal with the error distributed between 2% and 5%, because he now has a new customer for 2% components and is selecting them out. Or his good machine might have failed. Any number of reasons might change what you're getting from 'better than expected' to 'what you've contracted for'.
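
To see why betting on a measured distribution is fragile, here is a sketch that feeds the same (assumed) divider from two different batches that both honour the 5% contract: a well-behaved normal batch with 1% standard deviation, and a screened bimodal batch with 2% to 5% error. The batch models below are assumptions for illustration only.

```python
import random
import statistics

def normal_1pct(nominal):
    """Assumed 'good' batch: normal, 1% sigma, clipped to the +/-5% guarantee."""
    while True:
        value = random.gauss(nominal, 0.01 * nominal)
        if abs(value - nominal) <= 0.05 * nominal:
            return value

def bimodal_2_to_5pct(nominal):
    """Assumed 'screened' batch: error between 2% and 5%, either side of nominal."""
    sign = random.choice([-1, 1])
    return nominal * (1 + sign * random.uniform(0.02, 0.05))

def divider_ratio(draw):
    r1, r2 = draw(1000.0), draw(2000.0)
    return r2 / (r1 + r2)

for draw in (normal_1pct, bimodal_2_to_5pct):
    ratios = [divider_ratio(draw) for _ in range(50_000)]
    print(draw.__name__, statistics.mean(ratios), statistics.pstdev(ratios))
```

Both batches stay within the contract, but the output statistics differ noticeably; only the ±5% worst case holds in every case.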
