A brief overview of the most common closed-loop control techniques
Insight in Brief
This article provides a brief overview of the most widely used closed-loop control techniques in industry, including PID controllers, loop shaping with lead-lag compensators, and more advanced approaches such as optimal, model predictive, and robust control. Get a glimpse of the basics of closed-loop control and the tools used for its analysis and design.
Introduction
There are two fundamental types of control techniques:
- Open-loop control or feedforward control
- Closed-loop control or feedback control
In open-loop control, the control input to the system (plant) is independent of the system state or system output (measurements). No feedback is used to determine whether the desired output has been reached. A simple example of such a system is a light switch that is triggered manually by a person or by a motion-detection sensor.
In closed-loop control, on the other hand, the control input depends on the system state or system output. This feedback is used to achieve the desired objective and to minimize the deviation from it. A simple example is the cruise control of a car, which uses the measured velocity as feedback to determine the correct action to reach the desired velocity.
The goal of closed-loop control is to achieve the desired system behavior and to maintain stability even in the presence of disturbances and model mismatch. If the controller is not designed properly, feedback control performs poorly at best and can make a stable system unstable at worst. Closed-loop control therefore requires careful analysis to obtain the required system behavior.
Block diagram of the feedback loop. C: controller, P: plant, r: reference input, e: feedback error, u: controller output / plant input, y: plant output
1. Classic control theory
Classic control theory covers the most widespread control techniques. It owes its popularity to the fact that it is the easiest to understand and, in many cases, provides sufficiently good system behavior, so the additional cost and effort of more advanced techniques are not justified.
The vast majority of problems can be approximated with sufficient accuracy by a linear time-invariant (LTI) single-input single-output (SISO) system, which is the basis of classic control theory. Even if a system is nonlinear, it can often be linearized around the desired operating point, which then allows the use of classic control theory.
An LTI SISO system can be transformed into the frequency domain using the Laplace transform, and several theories and tools exist for analyzing the system and designing suitable controllers:
- Nyquist plot: This graphical tool plots the frequency response of the open-loop system and, using the Nyquist stability criterion, allows assessment of the stability of the closed-loop system. It serves as the basis for loop-shaping techniques.
- Bode plot: This graphical tool also shows the transfer function in the frequency domain, as magnitude and phase plots. It allows us to assess stability by looking at the gain margin and the phase margin of the open-loop system (see the sketch after this list).
- Root locus plot: Another graphical tool, showing how the poles of the closed-loop transfer function move as a controller gain is varied. The graph allows us to determine the stability of the closed-loop system.
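As a small illustration, the sketch below estimates the gain and phase margins from Bode data with SciPy. The open-loop transfer function L(s) = 4 / (s(s+1)(s+2)) is a hypothetical example chosen only for this sketch, not a system from this article.

```python
import numpy as np
from scipy import signal

# Hypothetical open-loop transfer function L(s) = 4 / (s (s+1) (s+2)),
# chosen only for illustration.
L = signal.TransferFunction([4], [1, 3, 2, 0])

w = np.logspace(-2, 2, 1000)
w, mag_db, phase_deg = signal.bode(L, w)

# Gain crossover: where |L| = 0 dB -> read the phase margin there
i_gc = np.argmin(np.abs(mag_db))
phase_margin = 180 + phase_deg[i_gc]

# Phase crossover: where the phase = -180 deg -> read the gain margin there
i_pc = np.argmin(np.abs(phase_deg + 180))
gain_margin_db = -mag_db[i_pc]

print(f"phase margin ~ {phase_margin:.1f} deg at w = {w[i_gc]:.2f} rad/s")
print(f"gain margin  ~ {gain_margin_db:.1f} dB at w = {w[i_pc]:.2f} rad/s")
```

A positive gain margin and a positive phase margin indicate that the closed loop around this open-loop transfer function is stable.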
1.1 PID
The PID controller is the standard control scheme used throughout the industry. The controller applies a correction to the plant input based on a term proportional to the error (P), the integral of the error (I), and the derivative of the error (D). It can be implemented both in analog and in digital form.
The big advantage of the PID controller lies in its simplicity. The influence of each control parameter on the system output can typically be understood easily. It can therefore be tuned on-site if the design parameters from the lab do not yield the desired behavior in the field. There are also heuristic methods such as Ziegler-Nichols or Skogestad that can be used to find acceptable control parameters without the need for a mathematical model of the system.
In practice, the PID controller provides sufficient performance and stability for a wide range of systems.
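To make the structure concrete, here is a minimal sketch of a discrete-time PID controller in Python. The gains, the sample time, and the first-order plant used in the example loop are arbitrary assumptions for illustration.

```python
# A minimal discrete-time PID sketch (parallel form, backward-difference
# derivative). The gains kp, ki, kd and the sample time dt are assumptions
# chosen by the user, not values from the article.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a first-order plant dx/dt = -x + u towards r = 1
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01   # simple Euler integration of the plant
print(f"output after 10 s: {x:.3f}")
```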
1.2 Loop shaping with lead-lag compensators
A lead-lag compensator is a controller that is designed in the frequency domain. In its simplest form, it is used together with a proportional controller, but it can also be used as an extension to a PID controller to optimize the behavior in the frequency domain.
The basic idea of the lead-lag compensator is to shape the Bode plot of the open-loop system so that it satisfies the design requirements on the crossover frequency, the gain margin, and the phase margin. For good command tracking and good disturbance rejection, the magnitude of the transfer function should be large at low frequencies. For good noise rejection, the magnitude should be small at high frequencies. Multiple lead and/or lag compensators can be added until the desired behavior is reached.
1.2.1 Lead element
A lead element is typically used to increase the phase margin. It has a lower magnitude gain at low frequencies and adds phase lead around a chosen frequency. A possible side effect is a magnitude increase at high frequencies, which makes it noise-sensitive. It can be seen as an approximation of a PD controller.
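The sketch below illustrates the effect of a lead compensator on the margins of the hypothetical open-loop system from the earlier Bode example; the compensator corner frequencies are assumptions chosen for illustration.

```python
import numpy as np
from scipy import signal

def margins(sys, w=np.logspace(-2, 2, 2000)):
    # Crude gain/phase margin estimate from Bode data (illustrative only)
    w, mag_db, phase_deg = signal.bode(sys, w)
    pm = 180 + phase_deg[np.argmin(np.abs(mag_db))]
    gm = -mag_db[np.argmin(np.abs(phase_deg + 180))]
    return gm, pm

# Hypothetical plant 4/(s(s+1)(s+2)) and a lead compensator
# C(s) = (s + 1) / (0.1 s + 1): the zero below the crossover adds phase lead,
# the pole a decade higher limits the high-frequency gain increase.
P_num, P_den = [4], [1, 3, 2, 0]
C_num, C_den = [1.0, 1.0], [0.1, 1.0]

L_plain = signal.TransferFunction(P_num, P_den)
L_lead = signal.TransferFunction(np.polymul(C_num, P_num),
                                 np.polymul(C_den, P_den))

print("margins without lead (gm [dB], pm [deg]):", margins(L_plain))
print("margins with lead    (gm [dB], pm [deg]):", margins(L_lead))
```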
1.2.2 Lag element
A lag element is typically used to improve command tracking (decrease the steady-state error) and to reject disturbances. It has a higher magnitude gain at low frequencies and adds phase lag around a chosen frequency. A possible side effect is therefore a reduction of the phase margin. A lag element can be seen as an approximation of a PI controller.
1.2.3 Pole-zero placement
The idea of adding lead and/or lag compensators can be taken one step further. Poles and zeros can be introduced in the controller to shape the open-loop frequency response of the system without the restriction of adding a pole and a zero together, as is done with lead and lag compensators. This method requires a good understanding of the impact of poles and zeros on the system behavior.
Poles and zeros are a characteristic of the frequency domain, but pole placement can also be done in the time domain on a state-space model, for example with Ackermann's formula.
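As a sketch of time-domain pole placement, the following example computes a state-feedback gain for a hypothetical double-integrator plant with SciPy's place_poles (which uses a numerically robust placement algorithm rather than Ackermann's formula itself).

```python
import numpy as np
from scipy import signal

# Hypothetical double-integrator plant in state-space form, used only to
# illustrate pole placement.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop poles (design choice)
desired_poles = [-2.0, -3.0]

K = signal.place_poles(A, B, desired_poles).gain_matrix

# The closed-loop dynamics x_dot = (A - B K) x should have the requested poles
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```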
1.3 Extensions
Anti-Reset Windup: Almost all real-life systems have actuator limitations; for example, a valve actuator can only move within certain limits. The controller output in its basic form, however, does not take such limitations into account. Consequently, the control structure has to be extended with anti-reset windup.
Reset windup is defined as any kind of unwanted controller behavior due to actuator limitations. In the case of a PI controller, for example, the controller output might require the valve to open more than 100%. The integrator part of the PI controller would then continue integrating (and increasing the output) while the actual output is saturated (limited), leading to large overshoot and slow recovery once the error changes sign. It is therefore important to take such limitations into account.
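A minimal sketch of one common anti-windup scheme, integrator clamping, is shown below for a PI controller; the actuator limits are hypothetical (for example, a valve restricted to 0..100 %).

```python
# Integrator clamping (conditional integration) added to a PI controller.
# The limits u_min/u_max are hypothetical actuator bounds.
class PIAntiWindup:
    def __init__(self, kp, ki, dt, u_min, u_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        u_unsat = self.kp * error + self.ki * (self.integral + error * self.dt)
        u = min(max(u_unsat, self.u_min), self.u_max)
        # Only commit the integrator update when the output is not saturated,
        # so the integral state cannot wind up beyond the actuator range.
        if u_unsat == u:
            self.integral += error * self.dt
        return u
```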
Gain Scheduling: Often used for non-linear systems that have been linearized around different operating points. For each operating point, a different linear controller or set of control parameters can be chosen. This simplifies the control design because a single controller does not have to cover the whole operating range.
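A minimal gain-scheduling sketch could look as follows; the operating-point breakpoints and gain sets are purely hypothetical.

```python
# Pick PID gains from a table based on the current operating point
# (here: a measured flow). Breakpoints and gains are hypothetical.
SCHEDULE = [                      # (upper limit of range, (kp, ki, kd))
    (10.0, (4.0, 2.0, 0.0)),
    (50.0, (2.0, 1.0, 0.0)),
    (float("inf"), (1.0, 0.5, 0.0)),
]

def gains_for(operating_point):
    for upper, gains in SCHEDULE:
        if operating_point <= upper:
            return gains

print(gains_for(30.0))            # -> gains for the mid operating range
```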
Feed-Forward: If the control input required for a desired output can be calculated with good accuracy, it is advisable to add a feed-forward part to the controller. Disturbances and setpoint changes often behave differently: the feed-forward part can be designed to react to setpoint changes, while the feedback part deals with disturbances. Without feed-forward, a compromise is required between rejecting disturbances and reacting appropriately to setpoint changes.
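The following sketch combines a feed-forward term based on an assumed static plant gain with a PI feedback part; the plant model, gains, and setpoint profile are illustrative assumptions.

```python
# Feed-forward plus feedback: the feed-forward part reacts immediately to
# setpoint changes using the (assumed known) static plant gain, while the PI
# feedback corrects model errors and disturbances.
k_plant = 2.0                            # assumed steady-state gain (u -> y)
kp, ki, dt = 1.0, 0.5, 0.01
integral = 0.0
y = 0.0

for step in range(1000):
    r = 1.0 if step >= 100 else 0.0      # setpoint change at t = 1 s
    u_ff = r / k_plant                   # feed-forward from the inverted gain
    e = r - y
    integral += e * dt
    u = u_ff + kp * e + ki * integral    # feed-forward + feedback
    y += (-y + k_plant * u) * dt         # first-order plant, Euler step
print(f"output after 10 s: {y:.3f}")
```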
2. Optimal control
Optimal control attempts to find a controller that satisfies an optimality criterion. This criterion is a cost function of the system states and inputs that is to be minimized:
$$ J = F\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} g\bigl(x(t),u(t),t\bigr)\,dt $$
Optimal control theory works in the time domain, in contrast to classic control theory, where system analysis and controller design are done in the frequency domain. For multiple-input multiple-output (MIMO) systems, it can be difficult or even infeasible to perform the analysis in the frequency domain, especially if an input has a strong influence on multiple outputs or an output is influenced by multiple inputs. For such systems, optimal control theory may provide a more suitable approach to finding a controller.
The basis for the optimization is a state-space representation of the system:
\( \dot{x}(t)=f(x(t),u(t),t) \)
\( y(t)=h(x(t),u(t),t) \)
2.1 LQR and LQG
The LQR (linear-quadratic regulator) controller is a state-feedback controller. It assumes that the system is linear and time-invariant and that all states of the system are measurable:
\( \dot{x}(t)=Ax(t) + Bu(t) \)
\( y(t)=Cx(t) + Du(t) \)
The resulting LQR controller is a linear, time-invariant state-feedback controller with no internal states: the control input is the state vector multiplied by a gain matrix, \( u(t) = -Kx(t) \). The gain matrix is the outcome of an optimization problem that minimizes a quadratic cost function, hence the name:
\( J = \int_{0}^{\infty} (x^T Qx + u^T Ru)dt \)
For single-input single-output systems, LQR controllers have an infinite (upward) gain margin and a guaranteed phase margin of at least 60°, so they have good stability properties.
There are versions for finite and infinite time horizons and for both the continuous-time and the discrete-time domain. For the continuous-time, infinite-horizon formulation with the cost function given above, the control law can be calculated by solving the algebraic Riccati equation.
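As an illustration, the sketch below computes a continuous-time, infinite-horizon LQR gain by solving the algebraic Riccati equation with SciPy; the double-integrator plant and the weighting matrices Q and R are assumptions chosen for the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant, used only to illustrate the
# continuous-time, infinite-horizon LQR computation.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights (design choice)
R = np.array([[1.0]])      # input weight  (design choice)

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal gain: u = -K x

print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```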
The LQG (linear-quadratic Gaussian) controller is an extension of the LQR controller. In reality, the states of a system are often not measurable and can therefore not be fed back directly. LQG adds a Kalman filter (observer) to the LQR controller, which estimates the states of the system.
In its basic form, the LQG controller is guaranteed to be asymptotically stable; however, there are no guarantees regarding robustness (the large gain and phase margins of the LQR controller are lost), and without integral action it does not reject constant disturbances. There are several extensions that can improve the controller:
- Feedforward to improve the speed
- Extension with an integrator to remove steady-state error and improve disturbance rejection
- Loop Transfer Recovery (LTR) to improve robustness.
3. Model predictive control
MPC also tries to find a control strategy that minimizes a cost function, so it is technically part of optimal control theory. MPC solves the optimization problem online over a receding time horizon. Only the first step of the resulting input sequence is applied, and the optimization is run again at the next time step.
As the name suggests, a plant model is used to solve the optimization problem, giving the controller its predictive ability. While LQR uses a quadratic cost function whose control law is found by solving the Riccati equation, MPC allows arbitrary cost functions and explicit constraints on states and inputs, and the resulting optimization problem is typically solved numerically.
MPC often requires significant implementation effort and a plant model that accurately describes the internal system dynamics. The optimization problem needs to be solved in real time at every sampling instant, which limits its application to relatively slow systems with long sampling periods.
It has found use especially in the process industry, where classic control theory does not always provide satisfying results and where processes are typically slow enough for the optimization problem to be solved at each controller step.
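The following toy sketch illustrates the receding-horizon idea for a discrete-time double integrator with an input limit. It uses a generic optimizer for readability, whereas real MPC implementations rely on dedicated QP solvers; all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy receding-horizon MPC: discrete double integrator, quadratic cost,
# input limited to |u| <= 1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
N = 10                                   # prediction horizon (steps)

def cost(u_seq, x0):
    x, total = x0.copy(), 0.0
    for u in u_seq:
        total += x @ Q @ x + u * R[0, 0] * u
        x = A @ x + B[:, 0] * u
    return total

x = np.array([1.0, 0.0])                 # start away from the origin
for _ in range(50):
    res = minimize(cost, np.zeros(N), args=(x,),
                   bounds=[(-1.0, 1.0)] * N)
    u0 = res.x[0]                        # apply only the first input
    x = A @ x + B[:, 0] * u0             # plant step, then re-optimize
print("final state:", x)
```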
Notable extensions are nonlinear MPC and robust MPC. The former allows nonlinear plant models in the optimization problem, which can make the problem non-convex and finding a numerical solution considerably more difficult; the latter explicitly accounts for model uncertainty in the optimization. Another popular approach is explicit MPC, which precomputes the control laws offline, so that the online controller only has to retrieve the control law from a lookup table, which is computationally trivial.
4. Robust control
Robust control explicitly addresses uncertainties in the plant. The controller should still work properly even if the plant parameters deviate from their expected values or disturbances occur.
4.1 H-Infinity
The H-Infinity controller is the solution to an optimization problem as described in the optimal control section. It seeks a controller that minimizes the infinity norm, i.e. the worst-case gain over all frequencies, of a weighted closed-loop transfer function. Its main difference from the LQR/LQG method is the extension of the problem with frequency-domain specifications.
Finding a solution to the H-Infinity optimization problem is not trivial, and no unique solution exists. However, it is often not necessary to design a truly optimal controller; it is sufficient to find a controller that is close to optimal but easier to compute.
4.1.1 Mixed Sensitivity Loop-Shaping
The mixed sensitivity approach introduces frequency-domain specifications for good disturbance rejection (small sensitivity) and good noise attenuation (small complementary sensitivity). Weights are introduced that are used to tune the controller towards the desired behavior. In each loop-shaping iteration, the controller is checked against the required criteria, and the weights are adjusted for the next iteration. Moreover, for SISO systems, if a multiplicative uncertainty weight is available, the robust performance conditions are closely approximated by the mixed sensitivity problem.
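As an illustration of such frequency-domain specifications, the sketch below checks point-wise whether a hypothetical loop satisfies weighted sensitivity and complementary sensitivity bounds. The loop transfer function and the weights W1 and W3 are assumptions chosen for this example; it is not a full H-Infinity synthesis.

```python
import numpy as np
from scipy import signal

# Point-wise check of mixed-sensitivity style specifications for a SISO loop:
# the sensitivity S = 1/(1+L) should be small where the performance weight W1
# is large, and the complementary sensitivity T = L/(1+L) should be small
# where the robustness/noise weight W3 is large.
w = np.logspace(-2, 3, 2000)
loop = signal.TransferFunction([40, 40], [0.1, 1.2, 2.0, 0])   # hypothetical L(s)
_, L = signal.freqresp(loop, w)

S = 1.0 / (1.0 + L)                      # sensitivity
T = L / (1.0 + L)                        # complementary sensitivity

W1 = 0.3 * (1.0 + 1.0 / (1j * w))        # large at low frequencies
W3 = 0.4 * (1.0 + 1j * w / 100.0)        # large at high frequencies

print("performance spec |W1*S| < 1 everywhere:", bool(np.all(np.abs(W1 * S) < 1)))
print("robustness spec  |W3*T| < 1 everywhere:", bool(np.all(np.abs(W3 * T) < 1)))
```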
Summary
The discipline of closed-loop control engineering provides a wide range of tools and theories to achieve the required system behavior. The ubiquitous PID controller is still the most popular basic control approach because it is simple to implement and can be tuned to work well in a wide range of applications. MPC is an active area of research and has gained a lot of attention, especially with the increasing availability of computing power, due to its potential to tackle complex problems.