
Optimization: Insights and Applications

Jan Brinkhuis and Vladimir Tikhomirov
Publisher: Princeton University Press
Publication Date: 2005
Number of Pages: 658
Format: Hardcover
Series: Princeton Series in Applied Mathematics
Price: 79.50
ISBN: 0-691-10287-2
Category: Textbook
[Reviewed by Brian Borchers, on 05/09/2006]

This book introduces the mathematical theory of continuous optimization, algorithms for solving optimization problems, and applications of optimization in economics and mathematics.

The book is very different from most existing introductory textbooks in continuous optimization. Traditionally, such textbooks give a brief treatment of necessary and sufficient conditions for optimality and then focus primarily on algorithms for finding local minima of convex and nonconvex optimization problems. A typical textbook of this kind includes sections or chapters on Newton's method, quasi-Newton methods, the conjugate gradient method, penalty function methods, logarithmic barrier methods, active set methods, and sequential quadratic programming methods. For example, Numerical Optimization, by Nocedal and Wright, follows this standard approach. In contrast, Brinkhuis and Tikhomirov skip many of these methods, focusing instead on the mathematical theory of conditions for optimality.
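
To make the contrast concrete, here is a minimal sketch of one of those algorithmic workhorses, Newton's method for unconstrained minimization. The helper newton_minimize, the test function, the starting point, and the tolerances are illustrative choices made for this review, not anything drawn from either book.

    import numpy as np

    def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
        # Pure Newton iteration: repeatedly solve hess(x) * step = -grad(x)
        # until the gradient (the first order necessary condition) is nearly zero.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x + np.linalg.solve(hess(x), -g)  # full Newton step, no line search
        return x

    # Illustrative smooth convex test problem: f(x, y) = exp(x + y) + x^2 + y^2.
    grad = lambda v: np.array([np.exp(v[0] + v[1]) + 2 * v[0],
                               np.exp(v[0] + v[1]) + 2 * v[1]])
    hess = lambda v: np.exp(v[0] + v[1]) * np.ones((2, 2)) + 2 * np.eye(2)

    print(newton_minimize(grad, hess, x0=[1.0, 1.0]))  # both coordinates approach about -0.2836

Each iteration solves a linear system built from the Hessian; it is exactly this kind of implementation detail that Brinkhuis and Tikhomirov largely set aside in favor of the underlying optimality theory.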

The central theme of the book is necessary conditions for optimality. The basic strategy that the authors advocate is to determine that an optimal solution exists, write down the first order necessary conditions for optimality as a system of equations, and analyze these equations to obtain a solution. The authors use this framework to explore applications of optimization in economics and mathematics and to explain interior point methods and the ellipsoid algorithm for convex optimization problems. The book also includes two chapters on dynamic programming in discrete and continuous time. There are extensive appendices reviewing prerequisite material in analysis and giving proofs of some of the theorems.
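
As a minimal worked illustration of this strategy (an example chosen here for illustration, not one taken from the book), consider minimizing f(x) = x^3 - 3x over the interval [0, 2]:

    1. Existence: f is continuous and [0, 2] is compact, so a global minimizer exists by the Weierstrass theorem.
    2. Necessary condition: at an interior minimizer, Fermat's theorem gives f'(x) = 3x^2 - 3 = 0, hence x = 1.
    3. Analysis: comparing the candidates f(0) = 0, f(1) = -2, and f(2) = 2 shows that the global minimum value -2 is attained at x = 1.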

The style of the book is informal and friendly, but the authors have also succeeded in carefully stating and proving theorems. In several cases the authors provide interesting alternative proofs of standard results. For example, they sketch the standard proof of the Lagrange multiplier theorem based on the implicit function theorem and then give an alternative proof based on the tangent space theorem.
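
For readers who want the statement in view, the first order condition at issue is the standard one, given here in its usual smooth form rather than quoted from the book: if x^* is a local minimizer of f subject to g_1(x) = \dots = g_m(x) = 0 and the constraint gradients are linearly independent at x^*, then there exist multipliers \lambda_1, \dots, \lambda_m with

    \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0.

The standard proof produces these multipliers via the implicit function theorem, while the tangent space argument first identifies the feasible directions h with \nabla g_i(x^*) \cdot h = 0 for all i and then uses stationarity of f along those directions.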

The greatest weakness of the book is that the individual chapters are only loosely connected: beyond the unifying theme of necessary conditions for optimality, there is little that ties the material in different chapters together. A further problem is that some of the material in the appendices would fit more naturally into the main body of the text.

The authors clearly had reasons for the unusual organization and selection of topics, but it is hard to understand why they included extensive exercises in some chapters and none at all in others. Although there is much interesting material in this book, it is unlikely to be successful as a textbook because of the unusual organization of the material, the lack of coverage of important topics, and the absence of exercises in most of the chapters.

Reference:

Jorge Nocedal and Stephen Wright, Numerical Optimization, Springer, 1999.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems. 

Preface xi
0.1 Optimization: insights and applications xiii
0.2 Lunch, dinner, and dessert xiv
0.3 For whom is this book meant? xvi
0.4 What is in this book? xviii
0.5 Special features xix
Necessary Conditions: What Is the Point? 1

Chapter 1. Fermat: One Variable without Constraints 3
1.0 Summary 3
1.1 Introduction 5
1.2 The derivative for one variable 6
1.3 Main result: Fermat theorem for one variable 14
1.4 Applications to concrete problems 30
1.5 Discussion and comments 43
1.6 Exercises 59

Chapter 2. Fermat: Two or More Variables without Constraints 85
2.0 Summary 85
2.1 Introduction 87
2.2 The derivative for two or more variables 87
2.3 Main result: Fermat theorem for two or more variables 96
2.4 Applications to concrete problems 101
2.5 Discussion and comments 127
2.6 Exercises 128

Chapter 3. Lagrange: Equality Constraints 135
3.0 Summary 135
3.1 Introduction 138
3.2 Main result: Lagrange multiplier rule 140
3.3 Applications to concrete problems 152
3.4 Proof of the Lagrange multiplier rule 167
3.5 Discussion and comments 181
3.6 Exercises 190

Chapter 4. Inequality Constraints and Convexity 199
4.0 Summary 199
4.1 Introduction 202
4.2 Main result: Karush-Kuhn-Tucker theorem 204
4.3 Applications to concrete problems 217
4.4 Proof of the Karush-Kuhn-Tucker theorem 229
4.5 Discussion and comments 235
4.6 Exercises 250

Chapter 5. Second Order Conditions 261
5.0 Summary 261
5.1 Introduction 262
5.2 Main result: second order conditions 262
5.3 Applications to concrete problems 267
5.4 Discussion and comments 271
5.5 Exercises 272

Chapter 6. Basic Algorithms 273
6.0 Summary 273
6.1 Introduction 275
6.2 Nonlinear optimization is difficult 278
6.3 Main methods of linear optimization 283
6.4 Line search 286
6.5 Direction of descent 299
6.6 Quality of approximation 301
6.7 Center of gravity method 304
6.8 Ellipsoid method 307
6.9 Interior point methods 316

Chapter 7. Advanced Algorithms 325
7.1 Introduction 325
7.2 Conjugate gradient method 325
7.3 Self-concordant barrier methods 335

Chapter 8. Economic Applications 363
8.1 Why you should not sell your house to the highest bidder 363
8.2 Optimal speed of ships and the cube law 366
8.3 Optimal discounts on airline tickets with a Saturday stayover 368
8.4 Prediction of flows of cargo 370
8.5 Nash bargaining 373
8.6 Arbitrage-free bounds for prices 378
8.7 Fair price for options: formula of Black and Scholes 380
8.8 Absence of arbitrage and existence of a martingale 381
8.9 How to take a penalty kick, and the minimax theorem 382
8.10 The best lunch and the second welfare theorem 386

Chapter 9. Mathematical Applications 391
9.1 Fun and the quest for the essence 391
9.2 Optimization approach to matrices 392
9.3 How to prove results on linear inequalities 395
9.4 The problem of Apollonius 397
9.5 Minimization of a quadratic function: Sylvester's criterion and Gram's formula 409
9.6 Polynomials of least deviation 411
9.7 Bernstein inequality 414

Chapter 10. Mixed Smooth-Convex Problems 417
10.1 Introduction 417
10.2 Constraints given by inclusion in a cone 419
10.3 Main result: necessary conditions for mixed smooth-convex problems 422
10.4 Proof of the necessary conditions 430
10.5 Discussion and comments 432

Chapter 11. Dynamic Programming in Discrete Time 441
11.0 Summary 441
11.1 Introduction 443
11.2 Main result: Hamilton-Jacobi-Bellman equation 444
11.3 Applications to concrete problems 446
11.4 Exercises 471

Chapter 12. Dynamic Optimization in Continuous Time 475
12.1 Introduction 475
12.2 Main results: necessary conditions of Euler, Lagrange, Pontryagin, and Bellman 478
12.3 Applications to concrete problems 492
12.4 Discussion and comments 498

Appendix A. On Linear Algebra: Vector and Matrix Calculus 503
A.1 Introduction 503
A.2 Zero-sweeping or Gaussian elimination, and a formula for the dimension of the solution set 503
A.3 Cramer's rule 507
A.4 Solution using the inverse matrix 508
A.5 Symmetric matrices 510
A.6 Matrices of maximal rank 512
A.7 Vector notation 512
A.8 Coordinate free approach to vectors and matrices 513

Appendix B. On Real Analysis 519
B.1 Completeness of the real numbers 519
B.2 Calculus of differentiation 523
B.3 Convexity 528
B.4 Differentiation and integration 535

Appendix C. The Weierstrass Theorem on Existence of Global Solutions 537
C.1 On the use of the Weierstrass theorem 537
C.2 Derivation of the Weierstrass theorem 544

Appendix D. Crash Course on Problem Solving 547
D.1 One variable without constraints 547
D.2 Several variables without constraints 548
D.3 Several variables under equality constraints 549
D.4 Inequality constraints and convexity 550

Appendix E. Crash Course on Optimization Theory: Geometrical Style 553
E.1 The main points 553
E.2 Unconstrained problems 554
E.3 Convex problems 554
E.4 Equality constraints 555
E.5 Inequality constraints 556
E.6 Transition to infinitely many variables 557

Appendix F. Crash Course on Optimization Theory: Analytical Style 561
F.1 Problem types 561
F.2 Definitions of differentiability 563
F.3 Main theorems of differential and convex calculus 565
F.4 Conditions that are necessary and/or sufficient 567
F.5 Proofs 571

Appendix G. Conditions of Extremum from Fermat to Pontryagin 583
G.1 Necessary first order conditions from Fermat to Pontryagin 583
G.2 Conditions of extremum of the second order 593

Appendix H. Solutions of Exercises of Chapters 1-4 601

Bibliography 645
Index 651