Linear and Nonlinear Programming

David G. Luenberger and Yinyu Ye
International Series in Operations Research and Management Science 116
Reviewed by Brian Borchers

Introductory textbooks on linear and nonlinear programming have traditionally focused on the simplex method for linear programming, Newton's method and variants for unconstrained nonlinear optimization problems, and optimality conditions and methods for constrained nonlinear optimization problems such as penalty and barrier function methods and the reduced gradient method. David Luenberger's textbook on Linear and Nonlinear Programming, which first appeared in 1973 with a second edition in 1984, has followed this general pattern. In this third edition, Luenberger and new coauthor Yinyu Ye have retained the overall structure and most of the material in the second edition while adding some new material on methods for linear and nonlinear programming that have been developed over the past 25 years.

Part I of the book covers linear programming, beginning with four chapters on the simplex method. The simplex method is presented first in tableau form, and then using matrix notation. The authors have added some additional examples and exercises, but this material is mostly unchanged from the second edition. A new chapter has been added to give a very brief introduction to interior point methods. The final chapter in this section on network flow problems is also largely unchanged from the previous edition.
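A small linear program makes the subject of Part I concrete. The sketch below (the problem data are invented for illustration and do not come from the book) solves a two-variable LP with SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP:  maximize 3x + 5y
#   subject to      x +  y <= 4
#                   x + 3y <= 6
#                   x, y >= 0
c = [-3.0, -5.0]                 # linprog minimizes, so negate the objective
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal vertex (3, 1); objective value 14 after undoing the sign
```

The optimum sits at the vertex where the two constraints intersect, which is exactly the geometric picture the simplex chapters develop.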

Part II of the book covers methods for unconstrained nonlinear programming, including steepest descent, Newton's method, quasi-Newton methods, and the conjugate gradient method. Limited memory quasi-Newton methods have become more prominent in recent years but are not mentioned here. Very little of the material in this part of the book is new, but there has been relatively little advancement in this area since the second edition was published in 1984.
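To give a flavor of the methods treated in Part II (the example is mine, not the authors'), here is steepest descent with exact line search on a convex quadratic, where the optimal step length has a closed form:

```python
import numpy as np

# Minimize f(x) = 0.5 x^T Q x - b^T x, a standard test problem for
# descent methods.  Q and b are illustrative choices.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)
for _ in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    # For a quadratic, the exact line search along -g is alpha = g.g / g.Qg
    alpha = (g @ g) / (g @ Q @ g)
    x = x - alpha * g

# The minimizer satisfies Q x = b.
print(x, np.linalg.solve(Q, b))
```

The zig-zagging convergence of this iteration, and its dependence on the condition number of Q, is precisely the behavior the book analyzes before motivating conjugate gradient and quasi-Newton methods.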

Part III covers constrained nonlinear optimization, including optimality conditions for constrained optimization problems, active set methods, reduced gradient methods, penalty and barrier methods, dual and cutting plane methods, and primal-dual interior point methods. The chapter on primal-dual interior point methods for nonlinear programming problems is a major addition to the third edition. It is surprising that there is no coverage of sequential quadratic programming (SQP) methods.
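To make the barrier idea of Part III concrete, here is a minimal one-dimensional sketch (my own toy example, not taken from the book) of the logarithmic barrier method:

```python
import math

# Minimize x^2 subject to x >= 1; the constrained minimizer is x = 1.
# The barrier subproblem  min  x^2 - mu * log(x - 1)  has the closed-form
# minimizer x = (1 + sqrt(1 + 2*mu)) / 2, obtained by setting the
# derivative 2x - mu/(x - 1) to zero.
mu = 1.0
for _ in range(40):
    x = (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0
    mu *= 0.5  # shrink the barrier parameter

print(x)  # approaches the constrained minimizer x = 1 as mu -> 0
```

Driving the barrier parameter to zero traces out the central path, the same object that underlies the primal-dual interior point methods added in this edition.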

This is a book that focuses primarily on the intuitive ideas underlying the various methods rather than on the details of convergence analysis and convergence rates. The strength of the book is in the way in which these ideas are presented. However, many of the exercises involve hand computations that seem anachronistic in an era when students are accustomed to symbolic and numerical computation packages such as MATLAB, Maple, and Mathematica. Many other textbooks on optimization now include example software and computer exercises to be completed by students.

Although about 20% of the material in this third edition of the book is new, the overall structure and approach of the book have remained largely unchanged. As a result, the book is noticeably old-fashioned in its approach to the subject. Readers looking for a more up-to-date treatment of much of the same material would be better served by the textbook of Nocedal and Wright (2005). For a more comprehensive and up-to-date treatment of both simplex and interior point methods for linear programming, see Vanderbei (2007). For an even more modern approach to optimization that focuses on convex optimization and its applications, consider the textbook by Boyd and Vandenberghe (2004).


S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

J. Nocedal and S. Wright. Numerical Optimization. Springer, 2005.

R. J. Vanderbei. Linear Programming, Foundations and Extensions, 3rd ed. Springer, 2007.

Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.

1. Introduction

Part I: Linear Programming
2. Basic Properties of Linear Programs
3. The Simplex Method
4. Duality
5. Interior-Point Methods
6. Transportation and Network Flow Problems

Part II: Unconstrained Problems
7. Basic Descent Methods
8. Conjugate Direction Methods
9. Quasi-Newton Methods

Part III: Constrained Minimization
10. Constrained Minimization Conditions
11. Primal Methods
12. Penalty and Barrier Methods
13. Dual and Cutting Plane Methods
14. Primal-Dual Methods

Appendix A: Mathematical Review
A.1. Sets
A.2. Matrix Notation
A.3. Spaces
A.4. Eigenvalues and Quadratic Forms
A.5. Topological Concepts
A.6. Functions

Appendix B: Convex Sets
B.1. Basic Definitions
B.2. Hyperplanes and Polytopes
B.3. Separating and Supporting Hyperplanes
B.4. Extreme Points

Appendix C: Gaussian Elimination