Nonlinear Optimization

Francisco J. Aragón, Miguel A. Goberna, Marco A. López, and Margarita M. L. Rodríguez
Publisher: Springer
Publication Date: 2019
Number of Pages: 350
Format: Hardcover
Series: Springer Undergraduate Texts in Mathematics and Technology
Price: $69.99
ISBN: 978-3-030-11183-0
Category: Textbook

[Reviewed by Brian Borchers, on 01/20/2020]
Nonlinear Optimization is a textbook on theory and methods for smooth, convex nonlinear optimization problems. The book is aimed at advanced undergraduate students, so the authors have been careful to limit the required mathematical background to vector calculus, linear algebra, and basic analysis. Their goal is to introduce the subject with basic theoretical results and a representative collection of methods. They largely succeed in this limited aim, but I was ultimately disappointed by the book's lack of ambition.
 
In Part I of the book, the authors discuss optimality conditions and analytical solutions for linear least-squares problems, convex quadratic programming, smooth unconstrained convex optimization problems, and linearly constrained smooth convex optimization problems. They also introduce Lagrangian and Wolfe duality. Linear regression is the primary application discussed in this part of the book; the arithmetic mean-geometric mean inequality and a proof of the fundamental theorem of algebra also appear as applications.
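To give a sample of the kind of analytical solution treated in Part I, the linear least-squares problem has the familiar closed form (this is standard material, stated here for the reader's convenience rather than quoted from the book):

\[
\min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2
\quad\text{with solution characterized by}\quad
A^{\mathsf{T}} A\, x = A^{\mathsf{T}} b,
\]

which determines \(x\) uniquely whenever \(A\) has full column rank.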
 
In Part II, the authors move on to algorithms for the iterative solution of smooth unconstrained convex optimization problems, including steepest descent, conjugate gradients, Newton's method, and quasi-Newton methods. A chapter on methods for constrained problems introduces penalty function and barrier methods, including interior-point methods for linear programming. The book finishes with an introduction to the Karush-Kuhn-Tucker (KKT) conditions and the sequential quadratic programming (SQP) method.
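To give a flavor of the algorithms covered in Part II, here is a minimal sketch of steepest descent with an Armijo backtracking line search; the example is mine, not the book's, and the quadratic test problem and step-size constants are illustrative choices.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with Armijo backtracking line search."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Backtrack until the sufficient-decrease (Armijo) condition holds.
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x

# Illustrative convex quadratic: f(x) = 1/2 x'Qx - c'x, minimized where Qx = c.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - c @ x
grad = lambda x: Q @ x - c

x_star = steepest_descent(f, grad, np.zeros(2))
print(x_star, np.linalg.solve(Q, c))  # iterate vs. exact minimizer
```

On a strongly convex quadratic like this one, the iterates converge linearly to the unique minimizer, which is the sort of convergence result the book establishes.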
 
There is no discussion of important numerical issues in the implementation of the methods. For example, linear least-squares problems are solved using the normal equations, and the QR factorization is not mentioned. The authors have also avoided any discussion of nonsmooth convex optimization problems, which are becoming increasingly important in practice.
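The numerical point is easy to demonstrate. The following sketch (my example, not taken from the book) solves an ill-conditioned least-squares problem both by the normal equations and by QR; because the condition number of A'A is the square of the condition number of A, the normal equations lose roughly twice as many digits.

```python
import numpy as np

# Vandermonde matrices are a classic source of ill-conditioned
# least-squares problems.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 10, increasing=True)   # 50 x 10, badly conditioned
x_true = rng.standard_normal(10)
b = A @ x_true

# Normal equations: solve (A'A) x = A'b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization: A = QR, then solve the triangular system R x = Q'b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("cond(A) =", np.linalg.cond(A))
print("normal equations error:", np.linalg.norm(x_ne - x_true))
print("QR error:              ", np.linalg.norm(x_qr - x_true))
```

Running this shows the QR solution recovering several more correct digits than the normal equations, which is exactly the kind of implementation issue the book passes over.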
 
A number of other textbooks show that these issues can be addressed at the undergraduate level. Linear and Nonlinear Optimization, 2nd ed., by Igor Griva, Stephen Nash, and Ariela Sofer provides broad coverage of theory, methods, and numerical issues for smooth nonlinear optimization. Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares by Stephen Boyd and Lieven Vandenberghe presents important applications of least-squares problems in machine learning and includes coverage of numerical issues. Optimization Models by Giuseppe Calafiore and Laurent El Ghaoui introduces nonsmooth convex optimization problems and a large variety of applications.

 

Brian Borchers is a professor of mathematics at New Mexico Tech and the editor of MAA Reviews.