Its title notwithstanding, this book offers more than what I would call an “introduction” to linear algebra. It covers all the standard introductory topics — vector spaces, matrices, linear transformations, inner products, eigenstuff and diagonalization, etc. — but also contains chapters on topics that may not typically get covered in a first course, such as dual spaces, the singular value decomposition, quadratic forms and positive-definite matrices, and the Moore-Penrose inverse. It does so at a fairly high level of generality (the authors work with arbitrary fields, for example), and, moreover, covers all this in the space of about 220 pages of text.

Given the shortness of the book and the amount of material covered, it is not surprising that the exposition is rather succinct. There is not a lot of hand-holding going on here. The authors don’t waste words, and somebody trying to learn from this book would be well advised to have some degree of experience reading mathematics texts.

The first chapter begins with the definitions of a field and a (general) vector space. Some standard examples are given with minimal discussion, and the chapter ends with the definition of a subspace and the concept of the space spanned by vectors. From here, the text takes off at a gallop and doesn’t look back.

There are 25 chapters, each one, we are told, corresponding to a lecture given by the first author to math majors and engineering students. I’m not sure, however, that I could cover the contents of any one of these chapters in a single lecture, and my guess is that this book contains substantially more material than could comfortably be covered in a one-semester course, at least at most universities.

After the first chapter on fields and vector spaces, there are six chapters on the basics of matrix theory: matrix arithmetic, linear equations, Gaussian elimination, various matrix factorizations, etc. The next block of chapters addresses linear independence, bases and dimension, and coordinates relative to an ordered basis. There are then chapters on linear transformations and their matrix representations, as well as the dual space of a vector space.

From here, the book discusses inner products and normed spaces, and basic eigentheory, including diagonalization. (One unusual feature is the inclusion of material on matrix norms and the Gershgorin disc theorem for location of eigenvalues.) There is also a chapter on the singular value decomposition.

There are then a couple of chapters on applications (to differential and difference equations, and to least squares approximation), followed by two on quadratic forms and positive-definite matrices. These are followed by a chapter on the Moore-Penrose inverse, and a final expository chapter on special types of matrices (irreducible, nonnegative, diagonally dominant, monotone and Toeplitz). A number of important results (such as the Perron-Frobenius theorem) are stated, but nothing in this chapter is proved; the chapter is essentially just a list of definitions and examples.

Every chapter ends with a selection of exercises, mostly fairly straightforward. Hints and solutions follow immediately, so the reader does not even have to flip to the back of the book to find them. I’m not sure that such immediate access to solutions or hints is a good idea; I think most students will just glance down the page for solutions rather than tackling the problem first.

I have some other nits to pick with the text. It seems to me that the authors don’t really make much use of their assumption of arbitrary fields (rather than the standard choices of the real and complex numbers); if that’s the case, why bother with the added generality? Also, there were notational touches (the use of superscripts rather than subscripts to denote a set of vectors, for example) that I didn’t especially like. And there were occasional moments of imprecision — as in Remark 14.2 on page 120, where the statement that any set of orthogonal vectors can be enlarged to a basis is obviously false unless the additional assumption that all vectors are nonzero is made.
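To make the problem with Remark 14.2 concrete (the following counterexample is mine, not the book’s): the zero vector is orthogonal to every vector, so any set containing it is an orthogonal set, yet such a set is linearly dependent and therefore cannot be enlarged to a basis. For instance, in \(\mathbb{R}^2\) with the standard inner product,
\[
\{(1,0),\,(0,0)\}
\]
is an orthogonal set, but no set containing \((0,0)\) is linearly independent. The remark holds only for orthogonal sets of nonzero vectors.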

At times this imprecision, combined with the authors’ zeal for conciseness, seriously interferes with the clarity of exposition. Chapter 16 is on eigenvalues and eigenvectors, and most of the chapter consists of a flurry of numbered paragraphs, densely packed one after the other (no spaces between them), introducing a series of important facts. Setting these off as theorems and giving proofs would have greatly enhanced the readability of this chapter; as it is, these facts blur together in a whirlwind of statements. And some of them are just downright confusing. For example, after correctly noting that if \(\lambda\) is an eigenvalue of \(A\) and \(p(x)\) is a polynomial, then \(p(\lambda)\) is an eigenvalue of \(p(A)\), the authors state: “In particular, the matrix \(A\) satisfies its own characteristic equation”. The phrase “in particular” makes it sound like the Cayley-Hamilton theorem is a consequence of the fairly trivial observation that precedes it, which is not the case at all, unless of course \(A\) is a diagonalizable matrix, which is not an assumption here. The reader is left with the false impression that the Cayley-Hamilton theorem is an obvious fact, which it is not.
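To spell out the gap (the following sketch is mine, not the book’s): if \(A = PDP^{-1}\) is diagonalizable and \(p\) is its characteristic polynomial, then
\[
p(A) = P\,p(D)\,P^{-1},
\]
and since each diagonal entry of \(p(D)\) is \(p(\lambda_i) = 0\), it follows that \(p(A) = 0\). But for a non-diagonalizable matrix such as \(A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\), with characteristic polynomial \(p(x) = x^2\), the authors’ observation tells us only that \(0\) is an eigenvalue of \(p(A) = A^2\), i.e., that \(A^2\) is singular — not that \(A^2 = 0\). That \(A^2 = 0\) here can be checked directly, but in general the Cayley-Hamilton theorem requires a genuine proof.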

A more substantial concern I have with the book is that I’m not quite sure just who it is intended for. The authors state in the preface that it is written for “senior undergraduates and for beginning graduate one-semester courses.” I am inclined to agree that an undergraduate would have to have reached senior status to get something out of a book that is this concisely written. However, most undergraduate mathematics majors take linear algebra well *before* their senior year; in fact, at many institutions, a linear algebra course is one of the first proof-oriented ones that a math major takes, and a solid grounding in linear algebra is useful in other courses taken subsequently. The problem here is that, given the succinctness of the exposition, the lack of any discussion of proof techniques, and the level of knowledge presumed on the part of the reader (at one point it is apparently assumed that the reader knows that \(\pi\) is transcendental; complex numbers are used from the outset; and the fundamental theorem of algebra is invoked to prove the existence of eigenvalues with no discussion of what that theorem says), this book seems too difficult for students just beginning the study of post-calculus mathematics. In other words, the undergraduate students for whom this book might be suitable are those who would likely have seen most of the topics in this book before.

Likewise, as a text for *graduate* students in mathematics, this book seems unsuitable: it spends a lot of time on topics that a graduate student in mathematics should already know, and at the same time doesn’t cover many topics that one *would* expect to see in any kind of good graduate linear algebra course, including canonical forms, bases for infinite-dimensional spaces, bilinear forms, operators on an inner product space, and proofs of some of the results that are stated without proof here, such as the Cayley-Hamilton theorem and the Perron-Frobenius theorem. The exercises are also not demanding enough for graduate mathematics students. For these reasons, this is also not a book that I would recommend as a text for a second undergraduate course in linear algebra. Finally, if intended for students in disciplines other than mathematics, the book seems too hard and theoretical, with not enough applications given.

Bottom line: Faculty members, and graduate students in engineering or related disciplines, might find it convenient to have a short and succinct summary of some of the facts of linear algebra close at hand, but I see serious problems with the use of this book as a text for a mathematics course at either the undergraduate or graduate level.

Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.