Electronic – PID controller implemented digitally


I've been reading articles about digital PID implementation.
So, in the discrete domain, we can write the parallel-form PID law as:

$$U(z) = E(z)\cdot\left[K_p + \frac{K_iT_s}{1 - z^{-1}} + K_d\frac{1 - z^{-1}}{T_s}\right]$$

Where \$T_s\$ is the sampling time. It's also said that \$K_p\$, \$K_i\$ and \$K_d\$ are related as:

$$K_i=\frac{K_p}{T_i}\\ K_d = K_p\cdot T_d$$

where \$T_i\$ is the integral time and \$T_d\$ is the derivative time.

So, substituting these into the above equation, I would get combined discrete gains of
$$K_i=K_p\frac{T_s}{T_i} \\ K_d=K_p\frac{T_d}{T_s}$$

Let's say I implement my control law, in discrete time (k), as:

$$u[k] = K_p\cdot e[k] + K_i\big(e[k] + e[k-1]\big) + K_d\big(e[k] - e[k-1]\big)$$
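Written out directly, that law is one small function per sample (a sketch; the function and argument names are mine):

```python
def control_law(e_k, e_km1, Kp, Ki, Kd):
    """u[k] = Kp*e[k] + Ki*(e[k] + e[k-1]) + Kd*(e[k] - e[k-1])

    e_k is the current error e[k], e_km1 the previous error e[k-1].
    """
    return Kp * e_k + Ki * (e_k + e_km1) + Kd * (e_k - e_km1)
```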

My questions:

  • are the integral time and derivative time (\$T_i\$ and \$T_d\$) defined by the clock period of the circuit that performs these computations?

  • if my error signal is sampled by the same clock that computes the integral and derivative parts of my law, is \$T_i=T_d=T_s\$?

  • If \$T_i=T_d=T_s\$, does \$K_p = K_i = K_d\$ follow? It sounds really odd.

Best Answer

are the integral time and derivative time (Ti and Td) defined by the clock period of the circuit that performs these computations?

The integral time and the derivative time are tied to the sample strobes used around those blocks, i.e. the rate at which new data is clocked into them, not necessarily the raw clock of the logic itself.

Digital electronics take a finite time to perform the required operations. Take a trapezoid discrete integrator whose difference equation looks like this:

\$y_n = y_{n-1} + K_i\cdot T_s\cdot\frac{x_n + x_{n-1}}{2} \$

There are two additions, two multiplications and one division (although a cheap shift-right division) per integration step. This must be completed before the next sample \$x_n\$ arrives. To save a multiplication, \$K_i \cdot T_s\$ can be combined into one scalar, which is standard practice, and this results in the gain variable being related to the sample time \$T_s\$.

What sets your \$T_s\$? Those multiplications and additions take a finite time, and they MUST be completed by the time the next sample \$x_n\$ is available.
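That folding of \$K_i \cdot T_s\$ into one constant can be sketched like so (values and names are illustrative):

```python
Ki = 0.5         # integral gain (illustrative value)
Ts = 1e-6        # sample period in seconds (illustrative value)
KiTs = Ki * Ts   # folded into one scalar once, outside the sample loop

def trap_step(y_prev, x_n, x_nm1):
    """One trapezoid step: y[n] = y[n-1] + Ki*Ts*(x[n] + x[n-1])/2."""
    return y_prev + KiTs * (x_n + x_nm1) / 2  # one multiply saved per sample
```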

if my error signal is sampled by the same clock that computes the integral and derivative parts of my law, Ti=Td=Ts?

Yes, it would be, provided Ts is the data-acquisition rate (or the interpolator rate, if one is used).

If Ti=Td=Ts, Kp = Ki = Kd? It sounds really odd.

No, because Ts is only one part of the discrete integral equation; there is still the gain factor. As previously mentioned, the gain factor AND the sample period can be pre-calculated into one scalar to save a multiplication step.
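A quick numeric sketch of that separation (values are mine): the tuning gains stay independent even though a single clock sets \$T_s\$, so the coefficients that actually multiply each term need not be equal.

```python
Ts = 1e-6                   # one clock sets the sample period (illustrative)
Kp, Ki, Kd = 2.0, 0.5, 0.1  # three independently chosen tuning gains
ci = Ki * Ts                # coefficient actually applied to the integral sum
cd = Kd / Ts                # coefficient actually applied to the difference
# Kp, ci and cd are all different, even though the same Ts appears in both terms.
```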

--edit--

To help clarify, below is a piece of Python code that implements a discrete integrator (a choice of three, but Trap is more often than not the one you want). The sample time \$T_s\$ here is 1 µs (and also 100 µs, to prove a point), so in this instance \$T_i = T_s = 1\,\mu s\$ (or \$100\,\mu s\$). Likewise the "clock" of the processor, the Python virtual machine's clock speed, can for simplicity be taken as 2.9 GHz. This is a reasonable stand-in for an FPGA/uP running at a high clock rate while the ADCs sample at a slower rate, but one slow enough for the main processor to complete its computational steps in time.

#!/usr/bin/env python
import numpy as np
from matplotlib import pyplot as plt


def fwdEuler(X=0, dt=0):
    '''y(n) = y(n-1) + [t(n)-t(n-1)]*x(n-1)'''
    x[0] = X*Ki # apply gain prior to integration
    y[0] = y[-1] + dt*x[-1] # forward Euler advances on the previous sample
    x[-1] = float(x[0])
    y[-1] = float(y[0])
    return y[0]

def revEuler(X=0, dt=0):
    '''y(n) = y(n-1) + [t(n)-t(n-1)]*x(n)'''
    x[0] = X*Ki # apply gain prior to integration 
    y[0] = y[-1] + dt*x[0]
    x[-1] = float(x[0])
    y[-1] = float(y[0])
    return y[0]

def Trap(X=0, dt=0):
    '''y(n) = y(n-1) + K*[t(n)-t(n-1)]*[x(n)+x(n-1)]/2'''
    x[0] = X*Ki # apply gain prior to integration 
    y[0] = y[-1] + dt*(x[0]+x[-1])/2
    x[-1] = float(x[0])
    y[-1] = float(y[0])
    return y[0]


f, p = plt.subplots(4)  # four stacked axes: the input plus three integrator runs
for ax in p:
    ax.grid(True)
x = [0,0]
y = [0,0]
Ki = 1

t = np.arange(0,1,1e-6)  # 1million samples at 1us step
data = np.ones(t.size)   # constant input of 1
p[0].plot(t,data)
p[0].set_title('Simple straight line to integrate')

int_1 = np.zeros(t.size)
for i in range(t.size):
    int_1[i] = Trap(data[i],t[1]-t[0])
p[1].set_title('Trap integration with Ki=1 and Ti=1u')
p[1].plot(t,int_1)

x = [0,0]
y = [0,0]
Ki = 1
t = np.arange(0,1,100e-6)  # 10k samples at 100us step
data = np.ones(t.size)     # constant input of 1
int_2 = np.zeros(t.size)
for i in range(t.size):
    int_2[i] = Trap(data[i],t[1]-t[0])
p[2].set_title('Trap integration with Ki=1 and Ti=100u')
p[2].plot(t,int_2)


x = [0,0]
y = [0,0]
Ki = 2
t = np.arange(0,1,1e-6)  # 1million samples at 1us step
data = np.ones(t.size)   # constant input of 1
int_3 = np.zeros(t.size)
for i in range(t.size):
    int_3[i] = Trap(data[i],t[1]-t[0])
p[3].set_title('Trap integration with Ki=2 and Ti=1u')
p[3].plot(t,int_3)

plt.show()

(figure: the four stacked plots produced by the script above)

As you can see, for a valid integral difference algorithm, the actual integral gain is unity and can thus be tuned via an additional gain, \$K_i\$. This is equally true for the derivative terms.
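For completeness, the derivative counterpart with the same pre-scaling trick might look like this (a sketch under my own naming, not the code above):

```python
Kd = 1.0          # derivative gain (illustrative value)
Ts = 1e-6         # sample period in seconds (illustrative value)
KdTs = Kd / Ts    # folded once: gain and 1/Ts in a single scalar

x_prev = 0.0      # holds x[n-1] between calls
def diff_step(x_n):
    """y[n] = Kd*(x[n] - x[n-1])/Ts, costing one multiply per sample."""
    global x_prev
    y_n = KdTs * (x_n - x_prev)
    x_prev = float(x_n)
    return y_n
```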