Optimization Theory and Methods: Nonlinear Programming, Volume I

Wenyu Sun and Ya-Xiang Yuan
Publisher: Springer-Verlag
Publication Date: 2006
Number of Pages: 687
Format: Hardcover
Series: Optimization and Its Applications
Price: $89.95
ISBN: 0387249753
Category: Textbook
We do not plan to review this book.

Preface

1 Introduction
1.1 Introduction
1.2 Mathematics Foundations
1.2.1 Norm
1.2.2 Inverse and Generalized Inverse of a Matrix
1.2.3 Properties of Eigenvalues
1.2.4 Rank-One Update
1.2.5 Function and Differential
1.3 Convex Sets and Convex Functions
1.3.1 Convex Sets
1.3.2 Convex Functions
1.3.3 Separation and Support of Convex Sets

1.4 Optimality Conditions for Unconstrained Case
1.5 Structure of Optimization Methods
Exercises

2 Line Search
2.1 Introduction
2.2 Convergence Theory for Exact Line Search
2.3 Section Methods
2.3.1 The Golden Section Method
2.3.2 The Fibonacci Method
2.4 Interpolation Method
2.4.1 Quadratic Interpolation Methods
2.4.2 Cubic Interpolation Method
2.5 Inexact Line Search Techniques
2.5.1 Armijo and Goldstein Rule
2.5.2 Wolfe-Powell Rule
2.5.3 Goldstein Algorithm and Wolfe-Powell Algorithm
2.5.4 Backtracking Line Search
2.5.5 Convergence Theorems of Inexact Line Search

Exercises
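
To give a flavor of the section methods of 2.3, here is a minimal golden section search sketch in Python; the function name, tolerance, and test problem are illustrative, not taken from the book:

```python
import numpy as np

def golden_section(phi, a, b, tol=1e-8):
    """Golden section search: minimize a unimodal phi on [a, b] by
    shrinking the bracketing interval by the golden ratio each step."""
    rho = (np.sqrt(5.0) - 1.0) / 2.0          # golden ratio conjugate, ~0.618
    c = b - rho * (b - a)
    d = a + rho * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):                   # minimizer lies in [a, d]
            b, d = d, c
            c = b - rho * (b - a)
        else:                                 # minimizer lies in [c, b]
            a, c = c, d
            d = a + rho * (b - a)
    return 0.5 * (a + b)

# Example: minimize (t - 2)^2 on [0, 5]; the minimizer is t = 2.
print(golden_section(lambda t: (t - 2.0) ** 2, 0.0, 5.0))
```

Each iteration discards a fixed fraction of the bracket, which is what gives the method its linear convergence rate.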

3 Newton’s Methods
3.1 The Steepest Descent Method
3.1.1 The Steepest Descent Method
3.1.2 Convergence of the Steepest Descent Method
3.1.3 Barzilai and Borwein Gradient Method
3.1.4 Appendix: Kantorovich Inequality

3.2 Newton’s Method
3.3 Modified Newton’s Method
3.4 Finite-Difference Newton’s Method
3.5 Negative Curvature Direction Method
3.5.1 Gill-Murray Stable Newton’s Method
3.5.2 Fiacco-McCormick Method
3.5.3 Fletcher-Freeman Method
3.5.4 Second-Order Step Rules

3.6 Inexact Newton’s Method
Exercises
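
The following sketch combines the Newton iteration of 3.2 with the backtracking line search of 2.5.4, falling back to steepest descent when the Hessian is indefinite, in the spirit of the modified methods of 3.3. The names, constants, and the Rosenbrock test problem are illustrative, not from the book:

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method with Armijo backtracking and a descent safeguard."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)     # Newton direction
        if g @ d >= 0:
            d = -g                           # not a descent direction: fall back
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                     # backtracking (Armijo condition)
        x = x + alpha * d
    return x

# Rosenbrock function: minimizer at (1, 1).
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
grad = lambda x: np.array([
    -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
    200.0 * (x[1] - x[0]**2)])
hess = lambda x: np.array([
    [1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
    [-400.0 * x[0], 200.0]])
print(damped_newton(f, grad, hess, [-1.2, 1.0]))
```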

4 Conjugate Gradient Method
4.1 Conjugate Direction Methods
4.2 Conjugate Gradient Method
4.2.1 Conjugate Gradient Method
4.2.2 Beale’s Three-Term Conjugate Gradient Method
4.2.3 Preconditioned Conjugate Gradient Method
4.3 Convergence of Conjugate Gradient Methods
4.3.1 Global Convergence of Conjugate Gradient Methods
4.3.2 Convergence Rate of Conjugate Gradient Methods

Exercises
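
As a concrete instance of 4.2.1, here is the linear conjugate gradient method for a strictly convex quadratic, i.e. for solving Ax = b with A symmetric positive definite; the names and the small test system are illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Linear CG: minimize (1/2) x^T A x - b^T x using only
    matrix-vector products with A."""
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                       # residual = negative gradient
    d = r.copy()
    for _ in range(len(b)):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)      # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves ratio
        d = r_new + beta * d               # next A-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # compare with np.linalg.solve(A, b)
```

In exact arithmetic the method terminates in at most n steps, one per conjugate direction.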

5 Quasi-Newton Methods
5.1 Quasi-Newton Methods
5.1.1 Quasi-Newton Equation
5.1.2 Symmetric Rank-One (SR1) Update
5.1.3 DFP Update
5.1.4 BFGS Update and PSB Update
5.1.5 The Least Change Secant Update
5.2 The Broyden Class
5.3 Global Convergence of Quasi-Newton Methods
5.3.1 Global Convergence under Exact Line Search
5.3.2 Global Convergence under Inexact Line Search
5.4 Local Convergence of Quasi-Newton Methods
5.4.1 Superlinear Convergence of General Quasi-Newton Methods
5.4.2 Linear Convergence of General Quasi-Newton Methods
5.4.3 Local Convergence of Broyden’s Rank-One Update
5.4.4 Local and Linear Convergence of DFP Method
5.4.5 Superlinear Convergence of BFGS Method
5.4.6 Superlinear Convergence of DFP Method
5.4.7 Local Convergence of Broyden’s Class Methods
5.5 Self-Scaling Variable Metric (SSVM) Methods
5.5.1 Motivation to SSVM Method
5.5.2 Self-Scaling Variable Metric (SSVM) Method
5.5.3 Choices of the Scaling Factor

5.6 Sparse Quasi-Newton Methods
5.7 Limited Memory BFGS Method
Exercises
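
As a sketch of the BFGS update of 5.1.4 in its inverse-Hessian form, assuming the usual notation s = x_{k+1} - x_k and y = g_{k+1} - g_k (the helper name and test vectors are illustrative):

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H, chosen so the
    quasi-Newton (secant) equation H_new @ y = s holds.  The curvature
    condition s^T y > 0, guaranteed by the Wolfe-Powell rule of 2.5.2,
    keeps H_new positive definite."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Quick check of the secant equation H_new @ y == s:
H = np.eye(2)
s = np.array([0.1, -0.2])
y = np.array([0.3, 0.05])
print(np.allclose(bfgs_update(H, s, y) @ y, s))   # True
```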

6 Trust-Region and Conic Model Methods
6.1 Trust-Region Methods
6.1.1 Trust-Region Methods
6.1.2 Convergence of Trust-Region Methods
6.1.3 Solving a Trust-Region Subproblem
6.2 Conic Model and Collinear Scaling Algorithm
6.2.1 Conic Model
6.2.2 Generalized Quasi-Newton Equation
6.2.3 Updates that Preserve Past Information
6.2.4 Collinear Scaling BFGS Algorithm
6.3 Tensor Methods
6.3.1 Tensor Method for Nonlinear Equations
6.3.2 Tensor Methods for Unconstrained Optimization

Exercises
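
One standard ingredient for the trust-region subproblem of 6.1.3 is the Cauchy point, the model minimizer along steepest descent within the region. A minimal sketch, with illustrative names and data:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point of the trust-region subproblem
        min  g^T d + (1/2) d^T B d   s.t.  ||d|| <= delta,
    i.e. the minimizer of the quadratic model along -g inside the region."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0.0:
        tau = 1.0                      # model decreases all the way to the boundary
    else:
        tau = min(gnorm ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

g = np.array([1.0, 2.0])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
print(cauchy_point(g, B, delta=0.5))   # a step of length 0.5 along -g
```

Requiring at least the Cauchy decrease from each trial step is the usual route to the global convergence results of 6.1.2.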

7 Nonlinear Least-Squares Problems
7.1 Introduction
7.2 Gauss-Newton Method
7.3 Levenberg-Marquardt Method
7.3.1 Motivation and Properties
7.3.2 Convergence of Levenberg-Marquardt Method
7.4 Implementation of L-M Method
7.5 Quasi-Newton Method
Exercises
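
A minimal Gauss-Newton sketch for 7.2, assuming residuals r(x) and Jacobian J(x) are supplied; the exponential-fit example is illustrative. (Levenberg-Marquardt, 7.3, would instead damp J^T J with a multiple of the identity.)

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton for min (1/2)||r(x)||^2: drop the second-order part of
    the Hessian and solve the linearized least-squares problem J d ~ -r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)   # G-N step
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Example: fit y = exp(a*t) to exact data; residuals r_i = exp(a*t_i) - y_i.
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)
r = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)        # dr_i/da
print(gauss_newton(r, J, np.array([0.1])))                 # -> a ~ 0.5
```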

8 Theory of Constrained Optimization
8.1 Constrained Optimization Problems
8.2 First-Order Optimality Conditions
8.3 Second-Order Optimality Conditions
8.4 Duality
Exercises

9 Quadratic Programming
9.1 Optimality for Quadratic Programming
9.2 Duality for Quadratic Programming
9.3 Equality-Constrained Quadratic Programming
9.4 Active Set Methods
9.5 Dual Method
9.6 Interior Ellipsoid Method
9.7 Primal-Dual Interior-Point Methods
Exercises
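
For the equality-constrained case of 9.3, the first-order conditions reduce to one linear system. A minimal sketch, with illustrative names and a two-variable example:

```python
import numpy as np

def eq_qp(H, c, A, b):
    """Equality-constrained QP:  min (1/2) x^T H x + c^T x  s.t.  A x = b.
    Solve the KKT system  [H A^T; A 0] [x; lam] = [-c; b]  directly."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n], sol[n:]            # primal solution x, multipliers lam

# Example: min x1^2 + x2^2  s.t.  x1 + x2 = 1   ->   x = (1/2, 1/2).
H = 2.0 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(eq_qp(H, c, A, b))
```

Active set methods (9.4) repeatedly solve subproblems of exactly this form on the currently active constraints.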

10 Penalty Function Methods
10.1 Penalty Function
10.2 The Simple Penalty Function Method
10.3 Interior Point Penalty Functions
10.4 Augmented Lagrangian Method
10.5 Smooth Exact Penalty Functions
10.6 Nonsmooth Exact Penalty Functions
Exercises

11 Feasible Direction Methods
11.1 Feasible Point Methods
11.2 Generalized Elimination
11.3 Generalized Reduced Gradient Method
11.4 Projected Gradient Method
11.5 Linearly Constrained Problems
Exercises
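
As a sketch of the projected gradient method of 11.4 on the simplest feasible set, a box, where the projection is just componentwise clipping (step size and test problem are illustrative):

```python
import numpy as np

def projected_gradient(grad, lo, hi, x0, alpha=0.1, iters=500):
    """Projected gradient method for min f(x) s.t. lo <= x <= hi:
    take a gradient step, then project back onto the box."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - alpha * grad(x), lo, hi)   # step, then project
    return x

# Example: min (x1-2)^2 + (x2+1)^2 on the box [0,1]^2  ->  x = (1, 0).
grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
print(projected_gradient(grad, 0.0, 1.0, np.array([0.5, 0.5])))
```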

12 Sequential Quadratic Programming
12.1 Lagrange-Newton Method
12.2 Wilson-Han-Powell Method
12.3 Superlinear Convergence of SQP Step
12.4 Maratos Effect
12.5 Watchdog Technique
12.6 Second-Order Correction Step
12.7 Smooth Exact Penalty Functions
12.8 Reduced Hessian Matrix Method
Exercises
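
The Lagrange-Newton view of 12.1 applies Newton's method to the KKT system; for equality constraints, each SQP step is one saddle-point solve. A minimal sketch, with illustrative names and test problem:

```python
import numpy as np

def lagrange_newton_step(grad_f, hess_L, c, A, x, lam):
    """One Lagrange-Newton / SQP step for  min f(x)  s.t.  c(x) = 0:
    solve  [W A^T; A 0] [d; dl] = [-grad_f - A^T lam; -c],
    where W is the Hessian of the Lagrangian and A the constraint Jacobian."""
    W, Ax = hess_L(x, lam), A(x)
    m = Ax.shape[0]
    K = np.block([[W, Ax.T], [Ax, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f(x) - Ax.T @ lam, -c(x)])
    sol = np.linalg.solve(K, rhs)
    return x + sol[:len(x)], lam + sol[len(x):]

# Example: min x1 + x2  s.t.  x1^2 + x2^2 = 2   ->   x = (-1, -1), lam = 1/2.
grad_f = lambda x: np.array([1.0, 1.0])
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
A = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
hess_L = lambda x, lam: 2.0 * lam[0] * np.eye(2)
x, lam = np.array([-1.2, -0.8]), np.array([0.4])
for _ in range(8):
    x, lam = lagrange_newton_step(grad_f, hess_L, c, A, x, lam)
print(x, lam)
```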

13 Trust-Region Methods for Constrained Problems
13.1 Introduction
13.2 Linear Constraints
13.3 Trust-Region Subproblems
13.4 Null Space Method
13.5 CDT Subproblem
13.6 Powell-Yuan Algorithm
Exercises

14 Nonsmooth Optimization
14.1 Generalized Gradients
14.2 Nonsmooth Optimization Problem
14.3 The Subgradient Method
14.4 Cutting Plane Method
14.5 The Bundle Methods
14.6 Composite Nonsmooth Function
14.7 Trust Region Method for Composite Problems
14.8 Nonsmooth Newton’s Method
Exercises
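
As a sketch of the subgradient method of 14.3 with the classic diminishing step sizes a_k = 1/(k+1); the names and the l1-norm example are illustrative:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=5000):
    """Subgradient method  x_{k+1} = x_k - a_k g_k,  g_k in the
    subdifferential of f at x_k.  The best iterate is tracked because
    f need not decrease monotonically along the iterates."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        x = x - subgrad(x) / (k + 1.0)     # diminishing, nonsummable steps
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x

# Example: f(x) = ||x||_1, minimized at the origin; sign(x) is a subgradient.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)
print(subgradient_method(f, subgrad, np.array([3.0, -2.0])))
```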

Appendix: Test Functions
§1. Test Functions for Unconstrained Optimization Problems
§2. Test Functions for Constrained Optimization Problems

Bibliography

Index