
Exploring Linear Algebra: Labs and Projects with MATLAB

Crista Arangala
Publisher: Chapman and Hall/CRC
Publication Date: 2019
Number of Pages: 156
Format: Hardcover
Edition: 1st
Series: Textbooks in Mathematics
Price: 149.95
ISBN: 9781138063518
Category: Textbook
[Reviewed by Mark Hunacek, on 05/12/2019]

Those who believe, as I do, that much of the charm of linear algebra lies in its myriad applications should find something to like in this slim volume, which discusses dozens of them: cryptography, graph theory, the Lights Out game, and predator-prey problems, to name just a few. Unfortunately, when the book departs from presenting examples and applications and attempts to actually teach some linear algebra, serious problems emerge.

This book contains six chapters, each broken up into three to six “labs” followed by a project section. Each lab addresses a topic in linear algebra (e.g., vector spaces) or an extended application of linear algebra (e.g., several labs discuss differential equations, and another is devoted to graph theory and the adjacency matrix). The projects consist of additional applications of the material taught in the labs for that chapter; they are do-it-yourself affairs, each broken into subparts that the reader is to work through, often using MATLAB.

The labs begin with definitions and then provide a number of problems that give the student practice in computational exploration, assisted by MATLAB. At the end of each lab there are more theoretical problems: the reader is given a list of statements marked “theorems” and “problems” and asked to prove the statements that are true and provide counter-examples to those that are not. Some of the statements marked “theorems” are in fact not theorems, often for fairly subtle reasons. For example, “theorem 69” asserts that any two eigenvectors of a symmetric matrix are orthogonal, a statement which is true if the eigenvectors correspond to distinct eigenvalues but which is obviously not true in general, even if “two” is interpreted to mean “two distinct”. I see no reason to label a false statement a “theorem”, even if it is accompanied by a warning that the reader may have to construct counter-examples; other statements in the group are labelled “problems”, and it seems to me advisable to use that term for all of them.
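To make the point concrete, here is a simple counterexample of my own (not one appearing in the book): the 2 x 2 identity matrix is symmetric, yet it has a pair of distinct eigenvectors that are not orthogonal.

% A counterexample of my own, not taken from the book under review.
\[
A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad
v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
v_1 \cdot v_2 = 1 \neq 0.
\]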

No solutions are provided in the text, and there is, to my knowledge, no solutions manual available for instructors.

The book begins with an introduction to MATLAB, and so is technically self-contained, but this introduction is sufficiently brief that, I suspect, it will not substitute for some prior experience in this programming language. Readers who are more familiar with Mathematica than MATLAB may wish to look at a very similar, indeed almost identical, text written by the author about five years ago using that language.

After an initial lab discussing MATLAB, the book covers all the standard topics in linear algebra: matrices and their arithmetic, determinants, vector spaces, linear transformations, inner product spaces, etc. A number of other topics that are not often covered in a first course (the Jordan canonical form, the singular value decomposition, the Cholesky decomposition) are also the subject of labs. The exposition is rapid: the Jordan canonical form, for example, is discussed in barely more than three pages.

A natural question is: what are the possible uses of a text like this?  I view the applications as the best feature of the text, and think that they might make this book suitable as a supplemental text or (better yet) as a reference for instructors who want to add some discussion of applications to their lectures.

Since many results are stated without proof and left to the reader to establish, another possible use of the book, I suppose, would be as an “inquiry-based learning” text in linear algebra, but, for several reasons, I cannot recommend it for that purpose. To begin with, the book (at 144 pages) is much too short, the exposition is too rushed, and crucial ideas are omitted. The concept of “linear independence”, for example, is discussed in connection with row and column vectors, but is never defined for arbitrary vector spaces, which are introduced only after this topic has been covered. The fact that any two bases for a finite-dimensional vector space have the same number of elements is stated but neither proved nor given as an exercise for the reader to prove. There is no discussion of how to prove that a vector space is not finite-dimensional, and, in fact, no real effort is made even to look at infinite-dimensional spaces.

I also thought that there were problems with the exposition in the text. Some of these may amount to nothing more than simple typos: a well-known quote by Stephen Hawking (“Intelligence is the ability to adapt to change”) is here attributed to a man named “Hawkings”, for example. Other problems are more significant, however.

For one thing, definitions can be strange: a linear transformation T between vector spaces is defined as a mapping which preserves addition and scalar multiplication, and which maps the 0 vector into the 0 vector. I have no idea why this last condition is stated as part of the definition rather than as an easily-proved theorem. I certainly see no pedagogical advantage in doing this, but do see a serious disadvantage: students should be taught the difference between a theorem and a definition, and examples like this blur that distinction.
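For what it is worth, the omitted condition follows from additivity in one line (a standard argument, not something quoted from the book):

% Standard one-line argument that additivity forces T(0) = 0; not quoted from the book.
\[
T(\mathbf{0}) = T(\mathbf{0} + \mathbf{0}) = T(\mathbf{0}) + T(\mathbf{0})
\;\Longrightarrow\; T(\mathbf{0}) = \mathbf{0}.
\]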

The definition of “eigenvalue” is also inadequate: “For each n x n matrix A, we can calculate the eigenvalues of A by finding the values for λ such that Ax = λx. The λ’s are the eigenvalues for A and each eigenvalue has a corresponding eigenvector x.” This “definition” omits the requirement that the eigenvector be nonzero, and also incorrectly suggests that there is only one eigenvector corresponding to an eigenvalue. The term “eigenspace” is used in the text, but never really defined at all. And speaking of eigenspaces, it is stated that “a basis for the eigenspace affiliated with an eigenvalue consists of the eigenvectors affiliated with that eigenvalue” (my emphasis), but that is obviously incorrect.
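To spell out why the nonzero requirement matters (this is my own illustration, not the book’s): if the zero vector were admitted as an eigenvector, then every scalar would be an eigenvalue of every matrix, since

% My own illustration of the omission, not taken from the book.
\[
A\mathbf{0} = \lambda\mathbf{0} \quad \text{for every scalar } \lambda .
\]

As for the quoted claim about eigenspace bases: the eigenspace of the 2 x 2 identity matrix for the eigenvalue 1 is the whole plane, which contains infinitely many eigenvectors but has bases consisting of only two vectors.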

As yet another example, the minimal polynomial of a square matrix A is defined as “a unique monic, strictly increasing or strictly decreasing, polynomial of least degree” for which p(A) is the zero matrix. Ignoring the trivial omission of the word “nonconstant”, there is the much more serious problem of the requirement that the polynomial be “strictly increasing or strictly decreasing”; I’m not even sure what the author means by this, but if she intends to suggest that the polynomial (when the coefficients are real) be an increasing or decreasing function, then that is, again, clearly incorrect. As is well known, any monic nonconstant polynomial is the minimal polynomial of some matrix.
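The standard witness for that last fact is the companion matrix, which has the given polynomial as both its characteristic and its minimal polynomial; a small example of my own (not taken from the book), for p(t) = t^3 + 2t + 5, is

% Companion matrix of p(t) = t^3 + 2t + 5; a standard construction, not an example from the book.
\[
C = \begin{pmatrix} 0 & 0 & -5 \\ 1 & 0 & -2 \\ 0 & 1 & 0 \end{pmatrix},
\qquad p(C) = C^3 + 2C + 5I = 0 .
\]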

For these reasons, I don’t believe this book can serve as a stand-alone text for a linear algebra course and it is certainly not a source that I would recommend to anyone for learning this material for the first time. Given, therefore, that it should ideally be used in conjunction with another text, it would seem unnecessary to spend any time at all on definitions or statements of theorems. Omitting these could free up more space for the applications, which, as I noted earlier, seem to me to be the best thing about this book.


Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.