Matrix Analysis for Scientists & Engineers

Alan J. Laub
Publisher: SIAM
Publication Date: 2005
Number of Pages: 157
Format: Paperback
Price: $36.00
ISBN: 0-89871-576-8
Category: Textbook
[Reviewed by William J. Satzer, on 03/16/2010]

Matrix Analysis for Scientists and Engineers is one of those books that make you say to yourself, “I wish I’d had this book when…” In my case it would have been when I was working with a group doing adaptive signal processing, and we were all learning the nuances of the Singular Value Decomposition at the instigation of a client who knew a bit more about numerical linear algebra than we did at the time.

This is not a book about numerical linear algebra, but an intermediate text in linear algebra designed to follow a basic undergraduate course and to prepare readers for more advanced work. The “advanced” could be nitty-gritty computational linear algebra (in the manner of Golub and Van Loan, for example) or more theoretical material (one of Horn and Johnson’s books). It is a short book that manages to cover a lot of topics in a few pages, but it is correspondingly light on examples and exercises. To use this text in a course would mean supplying a lot more examples and at least a few more exercises.

The author starts with a review of the basics: matrix arithmetic, vector spaces, and linear transformations. Then, out of order relative to the usual development, he discusses first the Moore-Penrose pseudoinverse and then the Singular Value Decomposition. Only a few pages are devoted to each topic, so the reader has to work to put meat on the bones, but these are important ideas and worth the effort. The author notes, and I would heartily concur, that while the linear independence of a set of vectors is theoretically a black-and-white question, in applications it can be a very subtle issue. If the vectors are “nearly” dependent, how “near” is that, and which subsets of the vectors are the most independent? If the set of vectors is linearly dependent, are there best independent subsets? These questions are particularly important in the context of solving systems of linear equations and linear least squares problems. I have seen a great deal of project effort devoted to determining the effective rank of a matrix, much of that effort extremely dependent on the application and on the details of computer arithmetic.
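To make the effective-rank question concrete, here is a minimal NumPy sketch; the test matrix and the data-accuracy tolerance of 1e-8 are my own illustrative choices, not anything prescribed by the book.

```python
import numpy as np

# Three column vectors in R^3; the third is a combination of the first two
# plus a tiny perturbation, so the set is "nearly" linearly dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1e-10]])

# Singular values reveal near-dependence: a tiny singular value means the
# columns almost lie in a lower-dimensional subspace.
s = np.linalg.svd(A, compute_uv=False)
print("singular values:", s)

# Effective rank = number of singular values above a tolerance.  At machine
# precision the rank is 3, but if the data are only trustworthy to about 1e-8
# the effective rank drops to 2.  Choosing the tolerance is exactly the
# application-dependent subtlety mentioned above.
eps_tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
app_tol = 1e-8
print("rank at machine precision:", int(np.sum(s > eps_tol)))   # 3
print("rank at data accuracy 1e-8:", int(np.sum(s > app_tol)))  # 2
```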

The longest sections of the book are devoted to various eigenvalue problems as well as to linear difference and differential equations. The author’s approach here, as elsewhere in the book, is to identify the central ideas clearly and succinctly, state the basic theorems, prove some of them, and provide a few examples.
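As a small illustration of how eigenvalues drive the solution of linear differential equations, here is a hedged NumPy sketch; the matrix A and initial condition x0 are invented for the example, and it assumes A is diagonalizable, which is of course not the only case the book treats.

```python
import numpy as np

# A small diagonalizable system x'(t) = A x(t); the matrix and initial
# condition are made up purely for illustration.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
x0 = np.array([1.0, 1.0])

# Eigendecomposition A = V diag(lam) V^{-1} gives the solution
# x(t) = V diag(exp(lam * t)) V^{-1} x0.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def x(t):
    # V * exp(lam * t) scales each eigenvector column by its growth factor.
    return (V * np.exp(lam * t)) @ (Vinv @ x0)

print(x(0.0))   # recovers the initial condition x0
print(x(1.0))   # solution at t = 1
```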

It would have been nice to see more attention given to numerical issues — even if only to identify the important tools — and perhaps a little more discussion of matrix factorizations. Overall, this is an attractive and well-written text for the classroom — aimed at senior undergraduates or beginning graduate students — or for self-study.


Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials. His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.

Preface; Chapter 1: Introduction and Review; Chapter 2: Vector Spaces; Chapter 3: Linear Transformations; Chapter 4: Introduction to the Moore-Penrose Pseudoinverse; Chapter 5: Introduction to the Singular Value Decomposition; Chapter 6: Linear Equations; Chapter 7: Projections, Inner Product Spaces, and Norms; Chapter 8: Linear Least Squares Problems; Chapter 9: Eigenvalues and Eigenvectors; Chapter 10: Canonical Forms; Chapter 11: Linear Differential and Difference Equations; Chapter 12: Generalized Eigenvalue Problems; Chapter 13: Kronecker Products; Bibliography; Index.