
Iterative Learning Control for Systems with Iteration-Varying Trial Lengths

Dong Shen and Xuefang Li
Publisher: Springer
Publication Date: 2019
Number of Pages: 256
Format: Hardcover
Price: 169.99
ISBN: 978-981-13-6135-7
Category: Monograph
[Reviewed by Bill Satzer, on 08/31/2019]
This book is a monograph on control theory, and in particular on an approach to controlling systems called iterative learning control. Although control theory offers a variety of design tools for improving the performance of dynamic systems, it is not always possible to achieve the desired performance because of unmodeled dynamics or uncertainties that arise during actual system operation. Iterative learning control (hereafter ILC) offers a way to work around many of those issues.
 
ILC is a control technique designed for systems that operate repetitively. Its goal is to improve the tracking response of a system by exploiting that repetition: the inputs to the system are modified to correct for errors observed in earlier trials. Often the repetitions are carried out a fixed number of times until the performance is acceptable. The authors of the current book take ILC one step further by allowing the length of the repetition cycle to vary.
 
To make this more concrete with a simple example, consider a manufacturing environment with a robot that performs a pick-and-place operation.  A sequence of operations occurs: the robot begins at rest waiting for a workpiece to appear; when it appears, the robot moves to the location of the workpiece; it picks up the workpiece; it moves to a desired location; it puts the workpiece in place; then the robot returns to a rest position. Here, over a fixed number of trials, ILC issues a sequence of input commands to the robot and modifies them in order to improve performance.  Performance is determined by the difference between the actual motion of the robot and a desired reference motion.  The simplest version of this is a discrete time linear system of the form
 
\( x_{k}(t+1)=Ax_{k}(t)+Bu_{k}(t) \)
 
\( y_{k}(t)=Cx_{k}(t) \)
 
where \( x_{k}(t) \) is the system state at time \( t \) for iteration \( k \), \( u_{k}(t) \) is the input, \( y_{k}(t) \) is the output, and \( A \), \( B \), and \( C \) are constant matrices of appropriate dimensions that represent the dynamics. The system operates on a finite time horizon with \( t \in [0, T] \). The goal of the control algorithm is to drive the output \( y_{k}(t) \) to track the desired output \( y_{d}(t) \) over the time interval \( [0,T] \) as the iteration \( k \) increases. ILC is not limited to simple systems like this; it handles continuous-time and nonlinear systems as well.
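
To give a flavor of how the learning law works in practice, here is a minimal numerical sketch of the classical P-type update \( u_{k+1}(t) = u_{k}(t) + L e_{k}(t+1) \), where \( e_{k}(t) = y_{d}(t) - y_{k}(t) \). The system matrices, the reference trajectory, and the learning gain are illustrative choices for this sketch, not examples taken from the book.

    import numpy as np

    # Illustrative system matrices (assumed for this sketch, not from the book).
    A = np.array([[0.8, 0.1],
                  [0.0, 0.9]])
    B = np.array([[1.0],
                  [0.0]])
    C = np.array([[1.0, 0.0]])

    T = 50                                   # finite time horizon
    y_d = np.sin(2 * np.pi * np.arange(T + 1) / T)   # desired output y_d(t)
    L = 0.8                                  # learning gain, chosen so |1 - L*C*B| < 1

    def run_trial(u):
        """Simulate x_k(t+1) = A x_k(t) + B u_k(t), y_k(t) = C x_k(t)."""
        x = np.zeros((2, 1))                 # identical initial state every trial
        y = np.zeros(T + 1)
        for s in range(T + 1):
            y[s] = (C @ x).item()
            if s < T:
                x = A @ x + B * u[s]
        return y

    u = np.zeros(T)                          # input signal for one trial
    for k in range(30):                      # iterate over trials
        e = y_d - run_trial(u)               # tracking error e_k(t)
        u = u + L * e[1:]                    # P-type law: u_{k+1}(t) = u_k(t) + L e_k(t+1)

    print("max tracking error after learning:", np.max(np.abs(y_d - run_trial(u))))

Because \( CB \neq 0 \) here and the initial state repeats exactly from trial to trial, this simple law drives the tracking error toward zero; the interest of the book lies in what happens when such assumptions are relaxed.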
 
Although applications to robotic systems are the most common, the ILC approach has also been used in biomedical applications (for example, treatments for patients recovering from strokes, and monitoring positive pressure ventilation for people with respiratory insufficiency), as well as in a broad range of manufacturing activities, chemical processes, and energy generation.
 
The ILC approach was developed in the mid-1980s. It has been very successful, but it has some serious limitations. The iterations occur over a fixed time interval with fixed trial lengths; the initial state must be the same for each iteration; and the system dynamics must be deterministic and invariant through all iterations. For many real-world applications these restrictions are impractical. The authors of this book attempt to overcome some of these limitations by relaxing the requirement of perfectly repeating conditions across all repetitions. In particular, they explore the use of varying trial lengths. The trial length is assumed to vary randomly. When that length is equal to or longer than the nominal length, the redundant information is discarded. For shorter trial lengths the issue is to design learning algorithms that compensate for the missing information.
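
As a rough sketch of this idea, one simple compensation scheme (an illustration in the spirit of the book's setting, not one of its specific algorithms) updates the input only at the time steps where the error was actually measured in the current trial, and carries the previous input forward elsewhere. Continuing the illustrative model from above:

    import numpy as np

    rng = np.random.default_rng(0)

    # Same illustrative system and reference as before (assumed, not from the book).
    A = np.array([[0.8, 0.1],
                  [0.0, 0.9]])
    B = np.array([[1.0],
                  [0.0]])
    C = np.array([[1.0, 0.0]])

    T = 50
    y_d = np.sin(2 * np.pi * np.arange(T + 1) / T)
    L = 0.8

    def run_trial(u, N):
        """Run one trial that stops after N <= T steps; NaN marks unobserved samples."""
        x = np.zeros((2, 1))
        y = np.full(T + 1, np.nan)
        for s in range(N + 1):
            y[s] = (C @ x).item()
            if s < N:
                x = A @ x + B * u[s]
        return y

    u = np.zeros(T)
    for k in range(60):
        N_k = rng.integers(T - 15, T + 1)    # randomly varying trial length
        e = y_d - run_trial(u, N_k)
        observed = ~np.isnan(e[1:])          # indicator: was e_k(t+1) measured this trial?
        # Update only where the error was observed; keep the old input elsewhere.
        u[observed] += L * e[1:][observed]

    print("max error on a full-length trial:", np.max(np.abs(y_d - run_trial(u, T))))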
 
The authors consider linear and nonlinear systems in both discrete and continuous time with a variety of alternative algorithms. Convergence rates are important because they determine how fast a system can achieve the best tracking performance. The authors present five different algorithms for linear systems and six more for nonlinear systems. For each algorithm they provide a convergence analysis and simulation results.
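
To illustrate what governs such a rate (a standard argument for the simple linear model above, not one of the book's specific results): stacking one trial's inputs and errors into vectors turns the P-type law \( u_{k+1}(t)=u_{k}(t)+Le_{k}(t+1) \) into the linear recursion \( e_{k+1}=(I-LG)e_{k} \), where \( G \) is the lower-triangular matrix built from the Markov parameters \( CB, CAB, CA^{2}B, \ldots \). The spectral radius of \( I-LG \) is \( |1-LCB| \), so the tracking error converges to zero whenever \( |1-LCB|<1 \), and that quantity sets the asymptotic rate of decay in the iteration index \( k \).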
 
The authors suggest that their book is self-contained, but this would be true only for a very experienced reader. A solid background in control theory and some experience with iterative learning control would be highly desirable.

 

Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications ranging from speech recognition to optical films. He did his PhD work in dynamical systems and celestial mechanics.

See the publisher's web page.