
Lectures on Convex Optimization

Yurii Nesterov
Publisher: Springer
Publication Date: 2018
Number of Pages: 589
Format: Hardcover
Edition: 2
Series: Springer Optimization and Its Applications 137
Price: 69.99
ISBN: 9783319915777
Category: Monograph
[Reviewed by Brian Borchers, on 03/15/2020]
Lectures on Convex Optimization is advertised as the second edition of the author's earlier Introductory Lectures on Convex Optimization: A Basic Course.  However, at roughly twice the length of the original, it constitutes a significant expansion and update of the earlier book.
 
The first half of the book is concerned with the analysis of the iteration complexity of methods for solving an unconstrained convex minimization problem
 
\( \min f(x) \)
 
with optimal value \( f^{*} \).
 
Algorithms that treat the function and its derivatives as a black box are considered.  That is, the algorithm has access only to values of the function and its derivatives at the query points \( x^{(k)} \).  First-order methods, such as gradient descent, compute a sequence of solutions \( x^{(k)} \) using only values of the function and its gradient (or a subgradient in the nondifferentiable case).  A second-order method, such as Newton's method, also has access to the Hessian of \( f \) at \( x^{(k)} \).  The material on first-order methods from Introductory Lectures has been updated, and the analysis of second-order methods is new in this second edition.
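
As a concrete illustration, the familiar gradient method queries the oracle for \( \nabla f(x^{(k)}) \) at each iteration and updates

\( x^{(k+1)} = x^{(k)} - h_k \nabla f(x^{(k)}) \),

where \( h_k > 0 \) is a step size.  The accelerated schemes analyzed in the book combine such gradient queries with extrapolation steps to improve the iteration complexity.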
 
A solution \( x^{(k)} \) is \( \epsilon \)-optimal if its objective value is within \( \epsilon \) of \( f^{*} \),
 
\( f(x^{(k)}) - f^{*} \leq \epsilon \).
 
The first goal of the iteration complexity analysis is to establish lower bounds on the number of calls to the black-box oracle required to obtain an \( \epsilon \)-optimal solution for various classes of problems.  The second goal is to develop iterative schemes that are optimal in the sense that they achieve the lower bound.  Both goals are addressed in this book.
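
To give the flavor of these results (stated loosely, without the book's exact constants): for the class of convex functions with an \( L \)-Lipschitz continuous gradient, any first-order black-box method must, in the worst case and as long as the iteration count is small relative to the problem dimension, satisfy

\( f(x^{(k)}) - f^{*} \geq \Omega\!\left( \frac{L \, \| x^{(0)} - x^{*} \|^{2}}{k^{2}} \right), \)

so on the order of \( \sqrt{L/\epsilon} \) oracle calls are unavoidable.  Nesterov's accelerated gradient method matches this rate up to constant factors and is therefore optimal for this class.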
 
Compare this with asymptotic convergence analysis, in which one tries to establish linear or quadratic convergence of the sequence of solutions \( x^{(k)} \) to the limit \( x^{*} \).  For an algorithm that will not be run all the way to a highly accurate solution, the iteration complexity of finding an \( \epsilon \)-optimal solution is more relevant than the asymptotic convergence rate.
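
For reference, a sequence \( x^{(k)} \rightarrow x^{*} \) converges linearly if

\( \| x^{(k+1)} - x^{*} \| \leq c \, \| x^{(k)} - x^{*} \| \) for some \( c \in (0,1) \),

and quadratically if \( \| x^{(k+1)} - x^{*} \| \leq C \, \| x^{(k)} - x^{*} \|^{2} \).  Both statements describe behavior only in the limit and by themselves say little about how many iterations are needed to reach a prescribed accuracy from a given starting point.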
 
The second half of the book deals with problems in which the objective function has a particular structure that can be exploited.  This includes problems such as linear and semidefinite programming, for which interior-point methods can exploit the problem structure to converge to an \( \epsilon \)-optimal solution in polynomial time.  The second edition considerably expands the discussion of polynomial-time interior-point methods.  There is also a chapter on minimization with a relative (percentage) optimality target.
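
Loosely speaking, instead of an absolute gap \( \epsilon \), the target in that chapter is a point satisfying

\( f(x^{(k)}) - f^{*} \leq \delta \, f^{*} \)

for a prescribed relative accuracy \( \delta > 0 \); this presupposes \( f^{*} > 0 \) and hence some additional structure in the problem.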
 
The results presented in this book are of great significance: they establish limits on the computational effort needed to solve convex optimization problems, and they show that some available methods achieve the optimal iteration complexity for various classes of convex optimization problems.  Researchers working in this field will find the book a useful reference for these results.
 
However, this is an advanced research monograph that would not be appropriate for use as a textbook.  For a course focusing on first-order methods for smooth and nonsmooth convex optimization problems, First-Order Methods in Optimization by Amir Beck would be a more reasonable starting point.  For a course focusing on modeling and applications of convex optimization, Optimization Models by Calafiore and El Ghaoui would be a good choice.

Brian Borchers is a professor of mathematics at New Mexico Tech and the editor of MAA Reviews.

See the table of contents on the publisher's webpage.