Practical Augmented Lagrangian Methods for Constrained Optimization

E. G. Birgin and J. M. Martínez
Publisher: SIAM
Publication Date: 2014
Number of Pages: 220
Format: Paperback
Series: Fundamentals of Algorithms
Price: 65.00
ISBN: 9781611973358
Category: Monograph
[Reviewed by Brian Borchers, on 05/14/2015]

Nonlinear optimization problems with linear and nonlinear inequality constraints arise in many areas of applied mathematics and statistics. Methods for the solution of unconstrained problems, including the method of steepest descent, conjugate gradients, Newton's method, and quasi-Newton methods, are well developed and widely used. Problems with constraints are considerably harder to solve, particularly when they have thousands or millions of variables and constraints. Despite decades of work in this area, the development of efficient and robust methods for large-scale constrained optimization remains an active area of research.

One obvious way of solving constrained optimization problems is to move the constraints into the objective function by adding a penalty term proportional to the sum of squares of the constraint violations. By increasing the penalty parameter we can force the solution to the penalized problem to converge to a solution to the original constrained problem. However, the penalized problems become extremely ill-conditioned as the penalty parameter is increased, leading to numerical failures.
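To see where the ill-conditioning comes from, consider a problem of the form \(\min_x f(x)\) subject to \(g(x) = 0\) (the notation here is generic, not the book's). The quadratic penalty approach solves a sequence of unconstrained subproblems

\[
\min_x \; Q_{\rho_k}(x) = f(x) + \frac{\rho_k}{2}\,\|g(x)\|^2,
\]

driving the penalty parameter \(\rho_k\) to infinity. Near a solution the Hessian of \(Q_{\rho_k}\) has eigenvalues of order \(\rho_k\) in the directions normal to the constraints, so its condition number grows without bound as \(\rho_k\) increases.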

Another relatively simple approach is to construct the Lagrangian of the optimization problem and find a saddle point of the Lagrangian that is a maximum with respect to the Lagrange multipliers and a minimum with respect to the primal variables. In this approach we alternate steps that minimize the Lagrangian with respect to the primal variables with steps that update the Lagrange multipliers. This Lagrangian method has the desirable property of producing both primal and dual optimal solutions, but it is not very robust in practice.
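In the same generic notation, with equality constraints \(g(x) = 0\), the Lagrangian is

\[
L(x,\lambda) = f(x) + \lambda^T g(x),
\]

and the method just described alternates a primal minimization with a dual (multiplier) update:

\[
x^{k+1} \in \arg\min_x \; L(x,\lambda^k), \qquad \lambda^{k+1} = \lambda^k + \alpha_k\, g(x^{k+1}),
\]

where \(\alpha_k > 0\) is a step size. This is a sketch of the classical dual ascent scheme, not the book's exact formulation.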

In the late 1960s, Hestenes and Powell independently proposed methods that combined the quadratic penalty method with the Lagrangian method. As with the Lagrangian method, each major iteration consists of solving an unconstrained optimization problem, followed by a step in which the Lagrange multipliers and penalty parameter are updated. This augmented Lagrangian method has good theoretical properties and works well in practice.
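Combining the two ideas, the augmented Lagrangian for the equality-constrained problem above is

\[
L_\rho(x,\lambda) = f(x) + \lambda^T g(x) + \frac{\rho}{2}\,\|g(x)\|^2,
\]

and each major iteration minimizes \(L_\rho(\cdot,\lambda^k)\) over \(x\), then applies the first-order (Hestenes-Powell) multiplier update \(\lambda^{k+1} = \lambda^k + \rho\, g(x^{k+1})\). Because the multiplier estimates absorb information about the constraints, \(\rho\) need not be driven to infinity, which avoids the worst of the penalty method's ill-conditioning.

The following minimal Python sketch illustrates the iteration on a toy two-variable problem with a single equality constraint. It is meant only as an illustration of the scheme just described; the problem, the fixed penalty parameter, and the iteration count are invented for the example, and this is not the Algencan implementation.

import numpy as np
from scipy.optimize import minimize

def f(x):                  # objective
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def g(x):                  # equality constraint residual; feasibility means g(x) = 0
    return np.array([x[0] + x[1] - 2.0])

def aug_lag(x, lam, rho):  # the augmented Lagrangian L_rho(x, lambda)
    c = g(x)
    return f(x) + lam @ c + 0.5 * rho * (c @ c)

lam = np.zeros(1)          # initial multiplier estimate
rho = 10.0                 # penalty parameter (held fixed in this sketch)
x = np.zeros(2)            # starting point
for k in range(10):
    # inner step: unconstrained minimization in the primal variables
    x = minimize(aug_lag, x, args=(lam, rho)).x
    # outer step: first-order (Hestenes-Powell) multiplier update
    lam = lam + rho * g(x)

print("x =", x, "lambda =", lam, "violation =", np.linalg.norm(g(x)))

After a few outer iterations the iterates approach \(x = (0.5, 1.5)\) with multiplier \(\lambda = 1\), the point satisfying the optimality conditions; note that the penalty parameter never had to be increased.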

Although augmented Lagrangian methods were among the first really successful methods for constrained nonlinear optimization, in the 1970s and 1980s attention shifted first to sequential quadratic programming (SQP) methods and then to interior point (IP) methods, which performed very well on small to medium-sized problems. In recent years there has been renewed interest in augmented Lagrangian methods, particularly for very large-scale problems. The Lancelot software package of Conn, Gould, and Toint (1992), for example, is a widely used augmented Lagrangian solver.

This book provides a self-contained introduction to augmented Lagrangian methods. After a brief review of the Karush-Kuhn-Tucker optimality conditions, the authors introduce the augmented Lagrangian method and discuss its theoretical properties. They then introduce their open-source solver, Algencan. The remainder of the book provides detailed information on the use of Algencan, illustrated with several examples.
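For readers who want the statement in front of them, the first-order KKT conditions reviewed in the book's opening chapter take the following standard form (again in generic notation, for \(\min_x f(x)\) subject to \(g(x) = 0\) and \(h(x) \le 0\), under a suitable constraint qualification):

\[
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0,
\]
\[
g(x^*) = 0, \qquad h(x^*) \le 0, \qquad \mu \ge 0, \qquad \mu_j\, h_j(x^*) = 0 \text{ for all } j.
\]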

There are many other textbooks for introductory courses in nonlinear optimization that cover a broader range of topics in unconstrained and constrained nonlinear programming (e.g., Bazaraa et al., 2006; Nocedal and Wright, 2006). Although this book cannot be recommended for use in a first course in nonlinear programming, it is suitable for more advanced readers who need a focused introduction to augmented Lagrangian methods, and particularly for readers who wish to make use of the Algencan software.


References

M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms, 3rd ed. Wiley, 2006.

A. R. Conn, N. I. M. Gould, and Ph. L. Toint. Lancelot: A Fortran Package for Large-Scale Nonlinear Optimization (Release A). Springer-Verlag, 1992.

J. Nocedal and S. J. Wright. Numerical Optimization, 2nd ed. Springer, 2006.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.