Numeric Precision – Why Normalization Improves Numerical Precision


I was reading the following article:

Polynomial interpolation of GPS satellite coordinates, Milan Horemuz and Johan Vium Andersson, 2006

and it states the following:

"The estimation procedure of the ai coefficients is done
with the Matlab function polyfit, which estimates the
coefficients in a least squares sense. To improve the
numerical precision, the dataset p is normalized
by
centering it at a zero mean; by subtracting its mean value
p; and scaling it to a unit standard deviation by dividing
each observation with the standard deviation σp as follows…"

My question is: how does normalizing the values improve the numerical precision of the computations?
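
For reference, the normalization the quote describes is the usual z-score transform. Below is a minimal sketch of it, assuming p is the fit's independent variable (the observation epochs); numpy.polyfit stands in for Matlab's polyfit, and the data and variable names are invented, not taken from the paper:

```python
import numpy as np

# Invented sample: observation epochs p (seconds, far from zero) and one
# satellite coordinate component x (metres) observed at those epochs.
p = np.linspace(172800.0, 173700.0, 31)
x = 2.65e7 + 3.0e3 * np.sin((p - p[0]) / 600.0)

# Normalization described in the paper: zero mean, unit standard deviation.
p_mean = p.mean()
p_std = p.std()
p_norm = (p - p_mean) / p_std

# Least-squares estimation of the polynomial coefficients on the
# normalized variable.
coeffs = np.polyfit(p_norm, x, deg=9)

# To evaluate the fit at a new epoch, apply the same transform first.
p_new = 173000.0
x_new = np.polyval(coeffs, (p_new - p_mean) / p_std)
print(x_new)
```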

Best Answer

Imagine for a moment that the quantity you are interested in has a value in the range 42.0 - 42.999.

Imagine further that you want as much precision as possible.

As it stands, you are spending a chunk of your available bits representing the value 42, and that leaves fewer bits available to represent the 0.000 - 0.999 part, which in some sense is what you are really interested in.

By subtracting out the constant 42, you can now spend all of your bits representing the 0.000 - 0.999 delta that you are really interested in.
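
You can see this directly by storing the same value in single precision with and without the 42 removed. The value 42.1234567 below is an arbitrary example; float32 is used to make the effect easy to print, but the same idea applies in double precision at larger magnitudes:

```python
import numpy as np

x = 42.1234567                     # arbitrary "true" value, known to many digits

as_is = np.float32(x)              # bits are spent on the 42 part as well
shifted = np.float32(x - 42.0)     # all bits go to the 0.123... delta

# float32 spacing near 42 is about 3.8e-6, but near 0.123 it is about 7.5e-9,
# so the delta can be resolved roughly 500 times more finely.
print(abs(float(as_is) - x))             # error on the order of 1e-6
print(abs(float(shifted) + 42.0 - x))    # error on the order of 1e-9
```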

Now, suppose you don't actually know that the values are all centered around 42.5, but you do know that they are all centered around SOME mean value. You can calculate that mean, subtract it out, and use all your bits to represent the delta from the mean.

This is the concept behind normalization about a mean. You spend your bits representing the quantity you are interested in, which is the delta from the mean, and you don't waste bits representing the mean itself.
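
The answer above is phrased in terms of the bits of a single stored number, but the same waste shows up in the least-squares system that polyfit solves: powers of raw, far-from-zero values are nearly indistinguishable from one another, while powers of centered and scaled values are not. A rough illustration (the condition-number viewpoint and the epoch values are mine, not part of the original answer):

```python
import numpy as np

# Invented epochs, hundreds of thousands of seconds from zero.
t = np.linspace(500000.0, 500900.0, 31)
t_norm = (t - t.mean()) / t.std()   # zero mean, unit standard deviation

# Design (Vandermonde) matrices for a degree-9 polynomial fit, i.e. what a
# least-squares routine like polyfit builds internally.
V_raw = np.vander(t, 10)
V_norm = np.vander(t_norm, 10)

print(np.linalg.cond(V_raw))    # enormous: a raw solve loses essentially all digits
print(np.linalg.cond(V_norm))   # small by comparison: fine in double precision
```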
