This volume is a collection of four coordinated monographs on topics in numerical optimization. Each part is written by a different author, and the parts differ significantly in approach and notation. The book was translated from French, which becomes apparent at times. Nevertheless, the four parts fit together to provide a broad survey of methods for numerical optimization at an advanced level.
In part I, Claude Lemaréchal presents methods for unconstrained minimization, including linesearch techniques, Newton and quasi-Newton methods, the conjugate gradient method, methods for least squares problems, trust region methods, methods for quadratic programming, and limited memory methods. The convergence theory for these methods is presented in fairly dense theorem-proof form, with limited attention to implementation issues and no exercises. This part of the book provides a good summary of the topic for advanced readers, but it is not appropriate as an introduction for students new to optimization.
In part II, Claudia A. Sagastizábal presents an introduction to methods for the minimization of convex but potentially nondifferentiable functions. After a brief introduction to nonsmooth optimization, she discusses steepest descent, subgradient methods, cutting plane methods, and bundle methods. The convergence theory for these methods is worked out in considerable detail. This is followed by a chapter on applications of nonsmooth optimization and a chapter containing a very good collection of computational exercises to be implemented by the reader. Although nonsmooth optimization is an advanced topic not typically encountered in a first course in optimization, the material in this part of the book would be appropriate for an advanced course introducing nonsmooth optimization.
In part III, J. Charles Gilbert presents methods for constrained optimization problems. The focus here is on methods based on Newton's method, particularly the method of sequential quadratic programming (SQP). In addition to exercises at the end of each chapter, the author returns repeatedly to an extended project involving the analysis of a hanging chain. The material in this part of the book is traditionally presented in introductory courses on nonlinear programming, but the presentation here is in greater depth than in many other textbooks. On the other hand, the focus on the SQP method comes at the expense of other approaches to constrained optimization.
In part IV, J. Frédéric Bonnans presents an introduction to interior point methods for linear and quadratic programming and monotone linear complementarity problems. Convergence theory and computational complexity results are presented in careful detail. Little attention is paid to implementation issues, and there are no exercises.
There is a substantial gap between the discussion of SQP methods for constrained optimization in part III and the discussion of interior point methods for linear and quadratic programming in part IV. Interior point methods for constrained nonlinear programming are an important topic of current research interest, yet they simply do not appear in this book. The uneven provision of exercises across the four parts and the general lack of attention to implementation issues are also somewhat disappointing.
Despite these weaknesses, this book should be of interest to advanced graduate students and researchers working in numerical optimization.
Brian Borchers is a professor of mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and its applications to parameter estimation and inverse problems.