Linear Programming with MATLAB is a classroom introduction to linear programming (LP) for those with a basic familiarity with MATLAB, rather than an introduction to MATLAB for those who know LP. It developed from course notes for an undergraduate LP course for computer science students who have taken linear algebra as a prerequisite. Basic familiarity with MATLAB here means the ability to start it up, put function files in the proper directory, and execute basic keyboard commands. MATLAB is used in partnership with, rather than replacing, symbolic linear algebra.
The book focuses on theory and solution methods for LPs, with some coverage of quadratic programs (QPs) and linear complementarity problems (LCPs). It does not cover integer programming (IP) or nonlinear programming (NLP), nor does it focus on modeling issues. While the usual models and techniques are presented (blending, the diet problem, L1 regression using absolute values, minimax objective functions, classification problems, and minimum-cost network flows), the modeling examples are mostly abstract, with no example data values and no modeling exercises.
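To illustrate one of the modeling techniques mentioned above, L1 regression can be cast as an LP by introducing a variable t_i bounding each absolute residual. The sketch below is not from the book (whose code is in MATLAB); it uses Python with scipy.optimize.linprog, and the data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: four collinear points plus one outlier.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])          # columns: intercept, slope
b = np.array([0.0, 1.0, 2.0, 3.0, 20.0])
m, n = A.shape

# Minimize sum(t) subject to |A x - b| <= t, i.e.
#   A x - t <= b   and   -A x - t <= -b.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m  # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x = res.x[:n]   # fitted intercept and slope
```

Unlike least squares, the L1 fit here passes through the four collinear points and simply absorbs the outlier's residual, which is the robustness property that motivates the formulation.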
Since the book focuses on teaching LP theory and solution methods, it omits some practical matters that arise when using MATLAB to solve LPs beyond classroom examples. The examples and exercises mostly give the constraint matrix explicitly; there is no discussion of matrix generation, the (often tedious) work of putting the right numbers in the right locations in the constraint matrix according to the structure of the problem. Advanced free or commercial packages for solving LPs inside MATLAB, such as MATLAB’s own Optimization Toolbox, are also not discussed. Still, all of the usual LP topics are covered: Phase I versus Big-M, variables that are unrestricted in sign, variables with upper and lower bounds, the dual simplex method, and so on.
The book has some good features that do not appear in many other LP books aimed at undergraduates. For example, there is some discussion of how rounding errors can affect the ratio test and the stopping criteria. Advanced pricing rules such as steepest edge are also discussed. Regression using the Huber loss function (a hybrid of L1 regression and least-squares regression) is a welcome inclusion, as are support vector machines for classification.
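The rounding-error issue in the ratio test can be shown in a few lines. The sketch below is a hypothetical helper, not the book's code: comparing entries of the pivot column against exact zero lets rounding noise masquerade as a valid pivot, which can select a near-zero pivot element and destabilize the iteration; a small tolerance avoids this.

```python
import numpy as np

def ratio_test(d, xB, tol=1e-9):
    """Pick the leaving variable: argmin of xB[i]/d[i] over entries
    of the pivot column d that are 'positive'.  With tol=0, a rounding
    artifact like d[i] = 1e-13 can win the test and become a pivot,
    even though it should be treated as zero."""
    ratios = [(xB[i] / d[i], i) for i in range(len(d)) if d[i] > tol]
    if not ratios:
        return None  # no positive entries: unbounded direction
    return min(ratios)[1]

# d[0] is rounding noise; xB[0] = 0 makes its ratio an attractive 0.0.
d  = np.array([1e-13, 0.5])
xB = np.array([0.0, 2.0])
```

With the default tolerance the noisy entry is screened out and index 1 is chosen; with tol=0.0 the test pivots on the 1e-13 entry instead.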
The exercises are roughly evenly split between those that require just MATLAB, those that are just pencil-and-paper (e.g. proofs and symbolic manipulations), and those that require both. There is generally just one exercise of each type, rather than many variations for drill.
The MATLAB code and data files available from the authors’ web site also work well in Octave, a free software package that operates much like MATLAB. One exception is in the interior-point code, which needs an incomplete Cholesky decomposition function that Octave does not yet have (as of this review).
At some universities, this book will be usable with upper-division undergraduates; at others, it will be more appropriate for graduate students. The sections on LP duality, QP, LCP, and interior-point methods for LP are not shy about using a theorem/proof format or mathematical notation, rather than focusing mostly on MATLAB code. If an instructor wants a textbook that focuses on theory and solution methods as an introduction to LP, this is a good book to consider. If an instructor wants to focus on modeling and interpretation of LPs, this book may be good as a secondary resource.
Andrew M. Ross earned a PhD in Operations Research at the University of California, Berkeley. He is an assistant professor of mathematics at Eastern Michigan University. He works in many areas of operations research, including queueing theory, applied probability, and optimization, focusing on the telecommunications and electric power industries.