Electronic – Conversion of 12 bit ADC/DAC data to floating point for PID operation

pid controller

I am working with PID control for temperature. I have implemented the PID using floating point numbers, running on an 8051. I have two queries:
I have to apply the PID output to a 12 bit DAC. How do I convert a 32 bit float to 12 bits?
How do I convert 12 bit ADC data so that I can feed it to the PID input?
Both the ADC and DAC work with a 5 V reference, so the LSB size is 5 V/4096 = 1.22 mV.
Thanks.

Best Answer

Changing integers to floating point is easy. Just throw the integer into a floating point calculation that multiplies it by the scale factor and applies the appropriate offset (so your output centers around zero) if required.

Here are three different versions, depending on whether your integer/register value is signed or unsigned and whether you express the zero offset in integer counts or in real-world units:

\$Floating Point =(Unsigned.Int-uint.Zero.Offset)\times\frac{Some.Real.World.Unit}{LSB}\$

\$Floating Point = \left\{Unsigned.Int\times\frac{Some.Real.World.Unit}{LSB}\right\}-Real.World.Zero.Offset\$

\$Floating Point = Signed.Int\times\frac{Some.Real.World.Unit}{LSB}\$
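As a sketch, the three formulas might look like this in C. The function names are invented for illustration, and the scale factor assumes the 12-bit, 5 V converter from the question:

```c
/* Illustrative scale factor: 12-bit converter, 5 V reference -> volts per LSB */
#define VOLTS_PER_LSB (5.0f / 4096.0f)

/* Unsigned input, zero offset expressed in integer counts */
float from_unsigned_int_offset(unsigned int raw, unsigned int zero_offset)
{
    return ((float)raw - (float)zero_offset) * VOLTS_PER_LSB;
}

/* Unsigned input, zero offset expressed in real-world units (volts here) */
float from_unsigned_real_offset(unsigned int raw, float real_zero_offset)
{
    return (float)raw * VOLTS_PER_LSB - real_zero_offset;
}

/* Signed input, no offset needed */
float from_signed(int raw)
{
    return (float)raw * VOLTS_PER_LSB;
}
```

For example, `from_unsigned_int_offset(2048, 2048)` treats mid-scale as zero and returns 0.0 V.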

If there is an ADC, you still do exactly the same thing; the scale factor just decomposes into: $$ \frac{Some.Real.World.Unit}{LSB}=\frac{Some.Real.World.Unit}{Volt}\times\frac{Volt}{LSB}$$

LSB = Least Significant Bit = The binary step size

This is mathematically identical to dividing your register/integer value by the real-world value that the full-scale of the register/integer is supposed to represent (applying zero offsets as required).


But you probably don't want to use floating point on an 8051. Convert your floating point equations and values to fixed point and use that instead.

You need to choose real world measurements that your largest integer will represent and then convert your floats (which are in real world units) into integers that are proportions of these full scales. You are basically mapping one number line or scale to another.

Then you must adjust your equations to work with these scaled ratios, throwing in adjustment factors where necessary. You can't just blindly convert.

So if you were doing trig in 8-bits, you might define:

[0,255] -> [0, 360] degrees.

or

[-128,127] -> [-180, +180] degrees

or

[0,255] -> [0, \$2\pi\$] rad.

or

[-128,127] -> [-\$\pi\$, +\$\pi\$] rad

Then you scale all your integer equations that contain angles to interpret things that way.
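A pleasant side effect of the [0,255] -> [0,360) mapping is that 8-bit overflow wraps at exactly one full turn, so angle arithmetic needs no modulo. A sketch (the names here are invented):

```c
#include <stdint.h>

/* 8-bit angle: 256 counts = one full turn, so unsigned overflow
   wraps at 360 degrees for free -- no modulo operation needed. */
typedef uint8_t angle8_t;

angle8_t angle_add(angle8_t a, angle8_t b)
{
    return (angle8_t)(a + b);  /* wraps modulo 256, i.e. modulo 360 deg */
}

/* Convert an 8-bit angle back to degrees, e.g. for display */
float angle_to_degrees(angle8_t a)
{
    return (float)a * (360.0f / 256.0f);
}
```

For example, adding 200 counts (about 281 degrees) and 100 counts (about 141 degrees) wraps to 44 counts, i.e. about 62 degrees past a full turn, with no extra code.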

If you were doing temperature, you might pick that 0 = 0C and 255 = 100C and scale everything to that. If you have an ADC with n-bits of resolution, and a scaling value for your temperature sensor, that's a convenient place to start.
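With the 0 = 0C, 255 = 100C scale suggested above, the mapping in each direction might be sketched like this (the function names are illustrative):

```c
#include <stdint.h>

/* Scale from the text: integer 0 -> 0 C, 255 -> 100 C */
uint8_t celsius_to_scaled(float celsius)
{
    if (celsius < 0.0f)   celsius = 0.0f;    /* clamp to the chosen scale */
    if (celsius > 100.0f) celsius = 100.0f;
    return (uint8_t)(celsius * 255.0f / 100.0f + 0.5f);  /* round to nearest */
}

float scaled_to_celsius(uint8_t scaled)
{
    return (float)scaled * 100.0f / 255.0f;
}
```

Note the clamp: values outside the chosen full scale must be saturated, or the 8-bit result would wrap and be meaningless.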

It goes without saying that you should size your scale as close as possible to the largest and smallest values you expect to work with, for best precision. Don't make 255 = 1000C if the highest temperature you expect to measure is 50C.

This scale does not need to be the same for every number or throughout your calculations and can be adjusted for best precision depending on the calculation being done. But the different scalings do need to be accounted and compensated for whenever two numbers with different scales are used together.


The next step up from integers is fixed point, where you have integers but with a decimal point that you mentally and MANUALLY keep track of and interpret in your equations as you go along: shifting the binary integers so the decimal points are aligned before you add and subtract, keeping track of where the point lands after you multiply two integers, and compensating for it in the next equation you use the result in. It's tricky and tedious.
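As an illustration of that bookkeeping, here is a minimal sketch using a Q8.8 format (8 integer bits, 8 fractional bits); the type and names are invented for the example:

```c
#include <stdint.h>

typedef int32_t q8_8;               /* interpreted value = raw / 256 */

#define Q_FRAC_BITS 8
#define FLOAT_TO_Q(x) ((q8_8)((x) * (1 << Q_FRAC_BITS)))

/* Add/subtract: the decimal points are already aligned (same Q format),
   so these are plain integer operations. */
q8_8 q_add(q8_8 a, q8_8 b) { return a + b; }

/* Multiply: the raw product has 16 fractional bits, so shift back
   down to 8 -- this is the "compensation" step the text describes. */
q8_8 q_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int64_t)a * b) >> Q_FRAC_BITS);
}
```

For example, `q_mul(FLOAT_TO_Q(2.0), FLOAT_TO_Q(1.5))` gives the raw value 768, which is 3.0 in Q8.8; forgetting the shift would give 196608, an answer 256 times too large.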
