The authors of an introductory textbook in optimization have many important topics to choose from, ranging from applications and optimization modeling to algorithms for various classes of optimization problems and the more mathematical theory of convergence of algorithms and optimality conditions. Since there is simply too much material to cover comprehensively in a single course, textbooks in this area tend either to cover a broad range of material in a relatively shallow way or to focus selectively on a few topics. Although this textbook covers some topics very thoroughly, its treatment of others is weak. Instructors considering the use of this textbook will have to decide whether the book adequately covers the topics that they want to include in their course.

The Karush-Kuhn-Tucker optimality conditions for nonlinear optimization are one of the high points of this book. The authors provide careful statements of the optimality conditions together with complete proofs. The discussion of constraint qualifications is unusually thorough: the authors state and prove the connections between the Guignard, Abadie, Mangasarian-Fromovitz, and Slater constraint qualifications.
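For readers who want the statement in front of them, the KKT conditions that the book proves can be sketched as follows (the notation here is mine, not necessarily the book's): for the problem of minimizing $f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, a local minimizer $x^*$ at which a suitable constraint qualification holds admits multipliers $\lambda$, $\mu$ satisfying

```latex
% Stationarity and complementarity at a local minimizer x^*
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0,
\qquad \lambda_i \ge 0, \qquad \lambda_i\, g_i(x^*) = 0 \ \text{for all } i,
```

together with primal feasibility $g_i(x^*) \le 0$, $h_j(x^*) = 0$. The constraint qualifications named above are precisely the hypotheses under which such multipliers are guaranteed to exist.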

Another strong aspect of this book is the discussion of interior point methods for linear and semidefinite programming. Interior point methods are often not included in introductory optimization courses, and when they are included the discussion is typically limited to linear programming. Here, the authors have included an extended discussion of primal-dual interior point methods for linear programming, complete with analysis of both short-step and long-step path following methods. The authors then introduce semidefinite programming (SDP), develop duality theory for SDP, and discuss interior point methods for SDP.
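To fix ideas, the central path underlying primal-dual path-following methods for the standard-form linear program $\min c^{\mathsf T} x$ subject to $Ax = b$, $x \ge 0$ (again, notation mine rather than the book's) is the set of solutions, for each parameter $\mu > 0$, of

```latex
% Perturbed KKT system defining the central path for linear programming
Ax = b, \qquad A^{\mathsf T} y + s = c, \qquad x_i s_i = \mu \ \text{for all } i, \qquad x > 0, \ s > 0.
```

Roughly speaking, short-step and long-step methods differ in how tightly the iterates are confined to a neighborhood of this path as $\mu$ is driven to zero, which is what the convergence analysis in the book quantifies.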

The coverage of some other topics is much more limited. The discussion of the simplex method for linear programming is limited to ten pages on the primal simplex method. There is no mention of sensitivity analysis or of the dual simplex method. A short chapter on methods for nonlinearly constrained optimization includes a discussion of sequential quadratic programming, but no mention of interior point methods.

The book contains many illustrations, some of which are particularly helpful in visualizing the mathematical theory. An appendix discusses optimization software, including the MATLAB Optimization Toolbox, SeDuMi, and Maple. All of this software would be very appropriate for use in a course based on this textbook. There are many exercises, ranging from theoretical problems to computational exercises that require programming. The English is generally good, with a few exceptions, such as the definition of “the feasible direction” at a feasible point, which would seem to imply that each feasible point has a unique feasible direction.

Although this book has been published in Springer’s series of undergraduate texts in mathematics and technology, it is unlikely that this book could be used successfully with undergraduate students in the United States. The required level of mathematical maturity makes it more suitable for a first graduate course in optimization. This book may be of interest to instructors who are looking for a textbook that emphasizes the mathematical theory of optimization, optimality conditions, and interior point methods for linear and semidefinite programming.

Brian Borchers is professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.