
Applied Linear Algebra

Pearson/Prentice Hall

I took linear algebra in my first quarter in college. I liked the professor pretty well, but I didn't like the textbook much, and one day, not knowing any better, I asked him why we were using something so uninspired. He told me that the best way to learn linear algebra was to go through it first with a bad book, and then again later with a good book. (I believe he was thinking of Hoffman and Kunze.)

For evidence that many of us have a similar view, we have only to consider the success of Sheldon Axler's Linear Algebra Done Right. The book under review is quite different from Axler's, but they have this in common: while they could, under very favorable circumstances, be used for a first course, they are best conceived of as a basis for advanced undergraduate courses that would revisit the ideas of the first course and build on them. Axler's preface says expressly that his book is intended for a second course. This book's preface says that it was designed for three possible audiences. It could be used for a first course, the authors say, but only one for which the words "in depth," "highly motivated" and "mathematically mature" would apply. With many linear algebra books one could get along perfectly well knowing little or no calculus; that is not the case here. Notions from calculus occur frequently, and one of the book's strengths is the way it blends them with linear algebraic ideas.

The other audiences are an advanced undergraduate course with emphasis on applications, and a beginning graduate course in linear mathematics for students in other disciplines. I believe that the book was developed in connection with the (junior/senior level) "Methods of Applied Mathematics" sequence at the University of Minnesota; Olver teaches there, and his wife Shakiban at the University of St. Thomas in St. Paul. It would certainly work very well in that context and others like it. It was originally conceived as part of a larger project, and there will eventually be a complementary book on other aspects of applied mathematics. The two books are unabashedly modeled on Gilbert Strang's Linear Algebra and its Applications and his Introduction to Applied Mathematics. Strang's newer Introduction to Linear Algebra is not mentioned.

Chapter 1 covers (what I think of as) the basics: linear systems, vectors, matrices, Gaussian elimination, LU factorization, inverses, transposes and determinants. (If you think of the vector space axioms as the basics, or even if you don't, they are at the beginning of chapter 2.) To my taste this chapter is somewhat underwritten, but probably most of this material will be review for most of the students who use it, and the preface says that it should be covered rapidly in that case. Moreover, there are many, many good problems in this chapter, as there are throughout the book, and in a first course some of them could be used in class to flesh out the exposition. For example, I always do part (e) of problem 1.2.32 in class — that if AB and BA are both defined, then they have the same trace — because it implies that a non-square matrix cannot have a two-sided inverse. One should come back to this question when one has the idea of rank, but it is nice to be able to dispose of it in the beginning.
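The trace fact from problem 1.2.32(e), and its consequence for non-square matrices, are easy to check numerically. A minimal sketch (in Python with numpy; the code is mine, not the book's):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # 2x5
B = rng.standard_normal((5, 2))   # 5x2

# AB is 2x2 and BA is 5x5, yet tr(AB) = sum over i,k of A[i,k]*B[k,i] = tr(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# If a 2x5 matrix A had a two-sided inverse B, then AB = I_2 and BA = I_5,
# whose traces are 2 and 5, contradicting tr(AB) = tr(BA).
```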

The section on determinants is particularly weak. It begins "You may be surprised that, so far, we have not mentioned determinants", which I would find off-putting in a first course. A few lines later we see that this is because "they are almost completely irrelevant when it comes to large scale applications and practical computations." As is well known (even if hard to believe when hearing it for the first time), determinants arose before matrices. This historical accident led to an overemphasis on determinants in linear algebra courses, which has slowly been corrected over time, but I wonder whether we are not in danger of going too far in this direction.

Teaching a first course in linear algebra without mentioning Cramer's Rule (which occurs in this book only in problem 1.9.22, without anything like an adequate proof) is something like teaching a survey of American history without mentioning slavery. Of course, this is not to say that it should be included in a second course.
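For readers who have not seen it: Cramer's Rule says that the solution of Ax = b, for square invertible A, has components x_i = det(A_i)/det(A), where A_i is A with its i-th column replaced by b. A small sketch (my own code, making no claim to match the book's problem 1.9.22):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b (square, invertible A) by Cramer's Rule:
    x_i = det(A_i) / det(A), with A_i being A with column i replaced by b.
    Impractical for large n, but a classical formula worth seeing once."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
# Agrees with Gaussian elimination:
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```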

A basic fact about determinants that does not appear in the book is A(adj A)=(det A)I, where adj A is the adjugate of A, namely the matrix whose ij entry is the cofactor of the ji entry of A. (Some people call this matrix the adjoint, but it seems better to reserve this word for its other uses in mathematics. Although the authors never formally define the adjugate, they make the same point when they define the adjoint of a linear system on page 114.) Recall that this is not a deep result: the diagonal elements are all det A because of cofactor expansion, and the off-diagonal elements are all zero because they are determinants with two equal rows. It gives an explicit, albeit usually impractical, formula for the inverse of A when it exists.
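The identity A (adj A) = (det A) I can be verified by direct computation. A sketch (Python with numpy; the function name is mine):

```python
import numpy as np

def adjugate(A):
    """adj(A): the transpose of the cofactor matrix, i.e. the matrix
    whose ij entry is the cofactor of the ji entry of A."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
# Diagonal entries are cofactor expansions of det A; off-diagonal entries
# are determinants with two equal rows, hence zero.
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
```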

In section 1.7 the Hilbert matrix (whose ij entry is 1/(i+j-1)) is mentioned, and in problem 1.7.25 an explicit formula for the inverse is given, but the student is warned not to try to prove it. If one had the above adjugate formula, one could outline a proof in three problems. The first would evaluate Cauchy's double alternant, the determinant whose ij entry is 1/(x_i - y_j). This is not much harder than a Vandermonde determinant (which is actually also due to Cauchy); it equals the difference product of the x's times the difference product of the y's, where the subscripts are in increasing order in one product and decreasing in the other, divided by all the factors x_i - y_j. The second problem would observe that every minor of this determinant is itself an instance of Cauchy's double alternant, so one can use the adjugate formula to write down the inverse of the corresponding matrix. The third problem would use the specializations x_i = i and y_j = 1 - j in the first two problems to obtain the determinant and the inverse of the Hilbert matrix, both of which are well worth knowing exactly, for aesthetic reasons and because of the difficulty of computing them numerically. These three problems would be no more challenging than many others in the book.
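The end result of that outline can at least be tested in exact arithmetic. The sketch below builds the Hilbert matrix over the rationals and checks it against the standard closed form for its inverse (my statement of the formula, which may differ in notation from the one in problem 1.7.25); note that every entry of the inverse is an integer:

```python
from fractions import Fraction
from math import comb

def hilbert(n):
    """The n x n Hilbert matrix, with exact rational entries 1/(i+j-1)."""
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def hilbert_inverse(n):
    """The standard closed form for the inverse; all entries are integers."""
    return [[(-1) ** (i + j) * (i + j - 1)
             * comb(n + i - 1, n - j) * comb(n + j - 1, n - i)
             * comb(i + j - 2, i - 1) ** 2
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 5
identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
# Exact arithmetic confirms H * H^{-1} = I, with no rounding anywhere.
assert matmul(hilbert(n), hilbert_inverse(n)) == identity
```

Floating-point inversion of even a modest Hilbert matrix loses most of its accuracy, which is exactly the reviewer's point about the value of knowing the inverse exactly.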

The authors describe chapter 2 as the crux of the course; it includes vector spaces, subspaces, basis and dimension. Even though this will also be review for many students, the authors advise that it be covered in detail, and recommend greater emphasis on infinite dimensional spaces than one would normally have in a first course. The authors use range, corange, kernel and cokernel instead of column space, row space, null space and left null space, since they want to have the same ideas in function spaces, but they follow Strang in referring to the relationships between the matrix subspaces as the Fundamental Theorem of Linear Algebra. The chapter concludes with an optional section on incidence matrices and graph theory.
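The dimension counts in the Fundamental Theorem (in the book's terminology: dim range = dim corange = r, dim kernel = n - r, dim cokernel = m - r for an m x n matrix of rank r) can be illustrated with a small rank computation; the matrix below is my own example, not the book's:

```python
import numpy as np

# A 3x4 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 1.0, 3.0],
              [3.0, 6.0, 1.0, 4.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

assert r == 2
# dim range = dim corange = 2, dim kernel = 4 - 2 = 2, dim cokernel = 3 - 2 = 1.
assert (r, n - r, m - r) == (2, 2, 1)
```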

Chapter 3 is on inner products and norms. The Cauchy-Schwarz inequality occurs here, and so do positive definite matrices (including Gram matrices). Complex vector spaces are discussed briefly in section 3.6. One might have expected more about Hermitian and unitary matrices, so important in quantum mechanics, from a book like this. But by now any criticism is in the nature of a quibble: I wish the first chapter were a little better, since it is not quite up to the level of the rest of the book, but otherwise I think the book is excellent.

Chapter 4 is on least squares, and might have been better placed after chapter 5, on orthogonality. The exercises in section 5.3 are especially good. Speaking as one whose training is in special functions, I was pleased to see orthogonal polynomials in section 5.4, but there is an overemphasis on monic polynomials. In particular, problem 5.4.19 introduces the "wrong" monic Hermite polynomials: the ones orthogonal with respect to exp(-t^2/2) on the real line are naturally monic and have many beautiful properties. The material on orthogonal projections could also be improved, but discrete Fourier analysis gets a very nice treatment in 15 pages at the end of this chapter.
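The monic ("probabilists'") Hermite polynomials in question satisfy He_0(t) = 1, He_1(t) = t, and the recurrence He_{n+1}(t) = t He_n(t) - n He_{n-1}(t), with squared norms sqrt(2 pi) n! with respect to the weight exp(-t^2/2). A numerical sketch (numpy; my code, not the book's problem 5.4.19):

```python
import numpy as np

def hermite_he(nmax):
    """Monic (probabilists') Hermite polynomials He_0..He_nmax via the
    recurrence He_{n+1}(t) = t*He_n(t) - n*He_{n-1}(t).
    Coefficients are highest-degree first, as np.polyval expects."""
    polys = [np.array([1.0]), np.array([1.0, 0.0])]   # He_0 = 1, He_1 = t
    for n in range(1, nmax):
        t_times = np.append(polys[-1], 0.0)                # t * He_n
        shifted = np.concatenate([[0.0, 0.0], polys[-2]])  # pad He_{n-1}
        polys.append(t_times - n * shifted)
    return polys

# Orthogonality w.r.t. exp(-t^2/2): <He_m, He_n> = sqrt(2*pi) * n! * delta_{mn}.
t = np.linspace(-12.0, 12.0, 200001)
dt = t[1] - t[0]
w = np.exp(-t**2 / 2)
He = [np.polyval(p, t) for p in hermite_he(4)]
ip = lambda f, g: np.sum(f * g * w) * dt   # Riemann sum; tails are negligible

assert abs(ip(He[2], He[3])) < 1e-8                           # distinct indices: orthogonal
assert np.isclose(ip(He[3], He[3]), np.sqrt(2 * np.pi) * 6)   # ||He_3||^2 = sqrt(2*pi) * 3!
```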

Although some applications occur earlier, chapter 6 is specifically devoted to them. The discussion of electrical networks in section 6.2 is superb, one of the highlights of the book. Chapter 7 is on linearity, including linear transformations and more general linear operators. The most abstract material in the book is in section 7.5, on adjoints.

Chapter 8 is on eigenvalues and eigenvectors, and this material is applied in both continuous (differential equations) and discrete contexts in chapters 9 and 10, respectively. Eigenvalues of (the adjacency or Laplacian matrices of) graphs would have been a nice addition — problems 8.2.47 and 8.2.49 treat the path and cycle of length n, respectively, without saying so. The book concludes with chapter 11, on boundary value problems, where the material on adjoints is revisited. There is plenty of material here for a year-long course. The authors have numerous suggestions for a semester course, any of which would involve many regrettable omissions.
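The point about problems 8.2.47 and 8.2.49 can be made concrete: the eigenvalues of the adjacency matrix of the path on n vertices are 2 cos(k pi/(n+1)), k = 1..n, and those of the cycle on n vertices are 2 cos(2 pi k/n), k = 0..n-1. A quick numerical check (numpy; my code, not drawn from the book):

```python
import numpy as np

n = 7

# Adjacency matrix of the path P_n: ones on the sub- and superdiagonal.
P = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
path_eigs = np.sort(np.linalg.eigvalsh(P))
path_exact = np.sort(2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
assert np.allclose(path_eigs, path_exact)

# Adjacency matrix of the cycle C_n: the path plus one wrap-around edge.
C = P.copy()
C[0, -1] = C[-1, 0] = 1
cycle_eigs = np.sort(np.linalg.eigvalsh(C))
cycle_exact = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(cycle_eigs, cycle_exact)
```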

I did not find very many mistakes. The only one that really annoyed me is in the preface, where the prime notation for derivatives is called "Newtonian". It was actually introduced by Lagrange in his celebrated Théorie des fonctions analytiques in 1797; the dot notation (also mentioned in the preface) is Newtonian. A list of errata (not very long, the last time I looked at it) is available at Olver's web site.

For many of us, I suspect, this will be a book in search of a course. The most natural setting would be either a Methods of Applied Math type of course, or a Linear Algebra 2 course much different from the Axler type. I would gladly use it in the former, and I would consider it if I were fortunate enough to teach the latter. One would truly be blessed to have a group of students good enough to use it for the first course.

Warren Johnson is visiting assistant professor of mathematics at Connecticut College.
Date Received: Friday, April 1, 2005
Authors: Peter J. Olver and Chehrzad Shakiban
Reviewer: Warren Johnson

Table of Contents

Chapter 1. Linear Algebraic Systems

1.1. Solution of Linear Systems

1.2. Matrices and Vectors

1.3. Gaussian Elimination-Regular Case

1.4. Pivoting and Permutations

1.5. Matrix Inverses

1.6. Transposes and Symmetric Matrices

1.7. Practical Linear Algebra

1.8. General Linear Systems

1.9. Determinants

Chapter 2. Vector Spaces and Bases

2.1. Vector Spaces

2.2. Subspaces

2.3. Span and Linear Independence

2.4. Bases

2.5. The Fundamental Matrix Subspaces

2.6. Graphs and Incidence Matrices

Chapter 3. Inner Products and Norms

3.1. Inner Products

3.2. Inequalities

3.3. Norms

3.4. Positive Definite Matrices

3.5. Completing the Square

3.6. Complex Vector Spaces

Chapter 4. Minimization and Least Squares Approximation

4.1. Minimization Problems

4.2. Minimization of Quadratic Functions

4.3. Least Squares and the Closest Point

4.4. Data Fitting and Interpolation

Chapter 5. Orthogonality

5.1. Orthogonal Bases

5.2. The Gram-Schmidt Process

5.3. Orthogonal Matrices

5.4. Orthogonal Polynomials

5.5. Orthogonal Projections and Least Squares

5.6. Orthogonal Subspaces

Chapter 6. Equilibrium

6.1. Springs and Masses

6.2. Electrical Networks

6.3. Structures

Chapter 7. Linearity

7.1. Linear Functions

7.2. Linear Transformations

7.3. Affine Transformations and Isometries

7.4. Linear Systems

7.5. Adjoints

Chapter 8. Eigenvalues

8.1. Simple Dynamical Systems

8.2. Eigenvalues and Eigenvectors

8.3. Eigenvector Bases and Diagonalization

8.4. Eigenvalues of Symmetric Matrices

8.5. Singular Values

8.6. Incomplete Matrices and the Jordan Canonical Form

Chapter 9. Linear Dynamical Systems

9.1. Basic Solution Methods

9.2. Stability of Linear Systems

9.3. Two-Dimensional Systems

9.4. Matrix Exponentials

9.5. Dynamics of Structures

9.6. Forcing and Resonance

Chapter 10. Iteration of Linear Systems

10.1. Linear Iterative Systems

10.2. Stability

10.3. Matrix Norms

10.4. Markov Processes

10.5. Iterative Solution of Linear Systems

10.6. Numerical Computation of Eigenvalues

Chapter 11. Boundary Value Problems in One Dimension

11.1. Elastic Bars

11.2. Generalized Functions and the Green's Function

11.3. Adjoints and Minimum Principles

11.4. Beams and Splines

11.5. Sturm-Liouville Boundary Value Problems

11.6. Finite Elements

Modify Date: Tuesday, June 6, 2006