Introduction to Derivative-Free Optimization

Andrew R. Conn, Katya Scheinberg and Luis N. Vicente
Publisher: SIAM
Publication Date: 2008
Number of Pages: 277
Format: Paperback
Series: MPS-SIAM Series in Optimization
Price: 73.00
ISBN: 9780898716689
Category: Monograph
[Reviewed by Brian Borchers, on 05/18/2009]

Since its beginnings, most research on methods for nonlinear programming has been built around the assumption that the objective function to be minimized is sufficiently smooth. Theoretical conditions for optimality are expressed in terms of the gradient and Hessian of the objective function. The most commonly used methods, including Newton's method, quasi-Newton methods, and the conjugate gradient method, depend on the availability of derivatives. Depending on the application, these derivatives might be computed from analytical formulas, by automatic differentiation, or by finite difference approximation.

However, there are many cases in which we need to minimize a function without access to its derivatives. The objective function might simply be non-differentiable. Even when the function is differentiable, it may be impractical to compute derivatives. In some cases the objective function is the result of running a "black box" simulation that is not amenable to automatic differentiation. When function evaluations are expensive and there are many parameters, computing a finite difference gradient may take too long to be practical. Or, as is often the case with computational simulations, the implementation of the objective function might be noisy, producing function values that are unsuitable for use in finite difference computations.
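To make the last point concrete, the short Python sketch below (mine, not the book's; the noise level and step sizes are purely illustrative) shows how even tiny evaluation noise can overwhelm a forward-difference derivative estimate once the step size becomes small.

```python
# Illustrative sketch (not from the book): forward-difference derivatives
# of a noisy objective. Noise level and step sizes are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, noise=1e-6):
    """Smooth objective f(x) = x^2 corrupted by simulated evaluation noise."""
    return x**2 + noise * rng.standard_normal()

def forward_difference(f, x, h):
    """One-sided finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x)) / h

x = 1.0  # the true derivative of x^2 at x = 1 is 2
for h in (1e-2, 1e-4, 1e-6):
    est = forward_difference(noisy_f, x, h)
    print(f"h = {h:.0e}: estimate = {est:.4f}, error = {abs(est - 2.0):.2e}")
# The noise contributes an error of roughly noise/h, so shrinking h
# eventually makes the derivative estimate worse rather than better.
```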

An early development in this area was the simplex method (not to be confused with the simplex method for linear programming) published by Nelder and Mead in 1965. The Nelder-Mead algorithm evaluates the objective function at a set of nearby points in each iteration and uses this information to update the current solution and the pattern of points to be examined in the next iteration. Over the years this algorithm has been very widely used, despite the fact that there was no theoretical proof of convergence. In 1998, McKinnon exhibited a family of smooth, convex functions on which the Nelder-Mead algorithm converges to a nonstationary point. Since then, a number of variants of the Nelder-Mead simplex algorithm with provable convergence properties have appeared. This history highlights the need for methods that both work well in practice and have sound theoretical properties.
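As a quick illustration (my own, not the authors'), the snippet below runs SciPy's implementation of the Nelder-Mead method on the Rosenbrock test function; note that the search uses only function values, never gradients.

```python
# Illustration (not the book's code): SciPy's Nelder-Mead simplex method
# applied to the Rosenbrock function, using function values only.
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])  # a standard starting point for Rosenbrock
result = minimize(rosen, x0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})

print("minimizer:", result.x)               # should be close to (1, 1)
print("function evaluations:", result.nfev)
# Each iteration applies reflection, expansion, contraction, or shrink steps
# to a simplex of n + 1 points, driven entirely by objective comparisons.
```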

The authors also discuss other approaches to derivative-free optimization, including directional direct search methods, line search methods based on simplex derivatives, trust region methods based on models of the objective function, and interpolation methods. In each case the authors summarize theoretical results on the convergence of the methods. Derivative-free methods for nonlinear optimization problems with constraints are not nearly as well developed as methods for unconstrained problems. The authors include a brief chapter reviewing some of the work in this emerging area. An appendix lists available software implementations of the various methods.
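To give a flavor of the directional direct search methods mentioned above, here is a minimal coordinate (compass) search sketch. It is my simplified illustration rather than code from the book, and it omits the refinements the authors analyze, but it shows the basic poll-and-contract structure that requires only function values.

```python
# Simplified sketch of a directional direct search (coordinate/compass search).
# Illustrative only; the book analyzes more general and more careful variants.
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_evals=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):             # poll the 2n coordinate directions
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:            # accept simple decrease
                    x, fx = trial, f_trial
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # unsuccessful poll: contract the step
    return x, fx, evals

# Example: minimize a convex quadratic with no derivative information.
quad = lambda z: (z[0] - 3.0) ** 2 + 10.0 * (z[1] + 1.0) ** 2
x_best, f_best, n_evals = compass_search(quad, x0=[0.0, 0.0])
print(x_best, f_best, n_evals)
```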

This book provides a comprehensive and up-to-date survey of derivative-free algorithms for unconstrained optimization and of theoretical results on the convergence of these methods. It will be a useful reference for graduate students and researchers who want to use derivative-free methods. The book would also be suitable as a textbook for advanced graduate courses on derivative-free optimization.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.