Introduction to the Theory of Nonlinear Optimization

Johannes Jahn
Publisher: Springer Verlag
Publication Date: 2007
Number of Pages: 292
Format: Hardcover
Edition: 3
Price: 99.00
ISBN: 9783540493785
Category: Textbook
[Reviewed by Brian Borchers, on 04/10/2007]

There are many textbooks on nonlinear optimization: some focus on computational algorithms for solving particular classes of problems, some on the convergence analysis of these methods, and others on more mathematical issues, including the existence and uniqueness of solutions and necessary and sufficient conditions for optimality.

Most introductory courses in optimization try to cover all of these issues to some degree, with the primary focus on methods for the solution of smooth unconstrained and constrained optimization problems involving a finite number of variables. Students are typically introduced to the Karush-Kuhn-Tucker (KKT) optimality conditions, although these are often presented without proof. Understanding and applying the KKT conditions requires only a modest background in linear algebra and vector calculus.
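For orientation, and in my own notation rather than Jahn's, the finite-dimensional KKT conditions for minimizing $f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$ state that, under a suitable constraint qualification, an optimal point $x^*$ admits multipliers $\mu_i \ge 0$ and $\lambda_j$ with

\[
\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0,
\qquad \mu_i \, g_i(x^*) = 0 \ \text{for all } i,
\]

together with feasibility, $g_i(x^*) \le 0$ and $h_j(x^*) = 0$. Nothing beyond gradients and linear algebra is needed to state or apply these conditions, which is why they fit comfortably in a first course.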

However, optimization problems on function spaces are also of interest, particularly because of their applications in optimal control. In this more general setting, understanding and applying the optimality conditions requires many concepts and theorems from linear functional analysis, including the Gateaux and Fréchet derivatives and the Hahn-Banach theorem. For readers who are already familiar with functional analysis, Jahn's textbook provides a thorough development of optimality conditions for optimization problems on function spaces.
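To recall the key notions (again in my notation, which need not match the book's conventions): for a map $f : X \to Y$ between normed spaces, the Gateaux derivative of $f$ at $x$ in the direction $h$, when it exists, is

\[
f'(x)(h) = \lim_{t \to 0} \frac{f(x + t h) - f(x)}{t},
\]

while $f$ is Fréchet differentiable at $x$ if there is a continuous linear map $A : X \to Y$ with

\[
\lim_{\|h\| \to 0} \frac{\| f(x + h) - f(x) - A h \|}{\| h \|} = 0 .
\]

Fréchet differentiability implies Gateaux differentiability, but not conversely, and keeping the two notions straight is part of the price of working in this more general setting.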

In the first chapter, Jahn provides a number of examples of optimization problems. Existence theorems for solutions to minimization problems on normed linear spaces are given in Chapter 2. In Chapter 3, various generalizations of the derivative are reviewed, including the Gateaux and Fréchet derivatives and the Clarke derivative. The tangent cone is introduced in Chapter 4. These ingredients are brought together in Chapter 5 to prove a very general version of the Lagrange multiplier theorem and to prove the Pontryagin maximum principle. Duality theory is introduced in Chapter 6. The book concludes with two chapters in which the theory is applied to semidefinite optimization problems and optimal control. Each chapter is accompanied by a set of exercises. Answers are provided in an appendix.

In this third edition of the book, Jahn has added a chapter on semidefinite optimization problems. The theory developed earlier in the book is applied to some specialized optimization problems involving matrix variables, particularly matrices that are restricted to being positive semidefinite. This is an important topic of current interest, but the connection to the other material in the book is somewhat tenuous.
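For readers unfamiliar with the area, a semidefinite program in standard primal form (my formulation; Jahn's setting may differ in its details) is

\[
\min_{X \in \mathbb{S}^n} \ \langle C, X \rangle
\quad \text{subject to} \quad
\langle A_i, X \rangle = b_i, \ i = 1, \dots, m, \qquad X \succeq 0,
\]

where $\langle C, X \rangle = \mathrm{trace}(C X)$ and $X \succeq 0$ means that $X$ is symmetric positive semidefinite.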

The book will be of interest to suitably prepared graduate students and researchers who are working on problems in optimal control theory. It may also be of interest to analysts who want to learn something about how functional analysis is used in the theory of optimization. For readers who are interested only in optimality conditions in less general settings, or who do not have the required background in functional analysis, other books would be more appropriate.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.