
Matrix Algebra: Theory, Computations, and Applications in Statistics

James E. Gentle
Publisher: Springer
Publication Date: 2017
Number of Pages: 648
Format: Paperback
Edition: 2
Series: Springer Texts in Statistics
Price: 89.99
ISBN: 9783319648668
Category: Textbook
[Reviewed by Peter T. Olszewski, on 01/2/2018]

James E. Gentle’s Matrix Algebra: Theory, Computations, and Applications in Statistics is divided into three main parts. Part I consists of chapters 1–7, which cover the fundamental material on vectors and matrices needed in linear algebra for statisticians. In Chapter 4, “Vector/Matrix Derivatives and Integrals,” Gentle assumes the student is familiar with the basics of partial differentiation and scalar functions. As this is a graduate-level text, concepts are summarized but there are no traditional worked examples. All the exercises are either proofs or self-discovery questions, which are excellent for problem sets or group projects.
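To give a flavor of the material in Chapter 4, a standard identity of the kind treated there (my own illustration, not an excerpt from the book) is the derivative of a quadratic form with respect to a vector: \[ \frac{\partial}{\partial x}\, x^{\mathrm{T}} A x = (A + A^{\mathrm{T}})x, \] which reduces to \(2Ax\) when \(A\) is symmetric.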

Chapters 5–7, “Matrix Transformations and Factorizations,” “Solution of Linear Systems,” and “Evaluation of Eigenvalues and Eigenvectors,” turn to applications and begin to give the student more problems involving computational methods. As mentioned before, the first part of the book develops the theory and the essential tools needed for the applications. The theory is introduced informally, with no traditional definitions typeset in boxes or highlighted. As Gentle points out in the Preface, page x,

Most of the facts have simple proofs, and most proofs are given naturally in the text. No ‘Proof’ and ‘Q.E.D.’ or ‘\(\blacksquare\)’ appear to indicate beginning and end; again, it is assumed that the reader is engaged in the development.

Once the fundamental chapters have been reviewed, Part II provides the reader with applications in data analysis and Part III covers the details of numerical computations in linear algebra.

The book is a bit wordy in the sections leading up to the exercises, but it does contain some great reminders of facts that may have fallen through the cracks for some of us professors. For example, on page 27, \(L_p\)-norms are discussed, denoted \(\|\cdot\|_p\) and defined for \(p\geq 1\) as \[ \|x\|_p=\left(\sum_i |x_i|^p\right)^{1/p}.\] This is sometimes called the Minkowski norm or the Hölder norm. Special cases are the Manhattan norm when \(p=1\), so called because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan (except Broadway and a few other streets and avenues), and the Euclidean norm when \(p=2\).
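These norms are simple enough to experiment with numerically. A quick sketch (my own illustration, not code from the book), using NumPy:

    import numpy as np

    def lp_norm(x, p):
        # Minkowski (Holder) norm ||x||_p, defined for p >= 1
        if p < 1:
            raise ValueError("p must be at least 1 for a true norm")
        return np.sum(np.abs(x) ** p) ** (1.0 / p)

    x = np.array([3.0, -4.0])
    print(lp_norm(x, 1))    # Manhattan norm: 7.0
    print(lp_norm(x, 2))    # Euclidean norm: 5.0
    print(lp_norm(x, 100))  # approaches the max norm, max |x_i| = 4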

There is a solid variety of exercises that help students make connections between concepts. For example, problem 2.1 in Chapter 2 (the first problem in the set) asks students to write out a step-by-step proof that the maximum number of \(n\)-vectors that can form a linearly independent set is \(n\). Reading further, the problems jump in difficulty. Problem 2.18 on pages 53–54 deals with convex cones: the student must show that if \(C_1\) and \(C_2\) are convex cones in the same vector space, then \(C_1\cap C_2\) is a convex cone. Students must also give a counterexample to show that \(C_1\cup C_2\) is not necessarily a convex cone.
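For the union, one standard counterexample (my illustration, not taken from the book) is a pair of coordinate rays in \(\mathfrak{R}^2\): \[ C_1 = \{(t,0) : t \geq 0\}, \qquad C_2 = \{(0,t) : t \geq 0\}. \] Each is a convex cone, but \((1,0)\) and \((0,1)\) lie in \(C_1\cup C_2\) while their sum \((1,1)\) does not, so the union fails to be convex.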

I particularly like how Gentle treats section 3.2.1 on matrix multiplication, or Cayley multiplication. The picture on page 75 clearly indicates how the multiplication is computed. As a follow-up, Gentle points out that Cayley matrix multiplication is a bilinear mapping \(\mathfrak{R}^{n\times m} \times \mathfrak{R}^{m\times p} \longrightarrow \mathfrak{R}^{n\times p}\).
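The definition \((AB)_{ij} = \sum_k a_{ik}b_{kj}\) behind that picture translates directly into a triple loop; a minimal sketch (mine, not the book's):

    import numpy as np

    def cayley_multiply(A, B):
        # Cayley product of an n x m matrix and an m x p matrix, giving n x p
        n, m = A.shape
        m2, p = B.shape
        if m != m2:
            raise ValueError("inner dimensions must agree")
        C = np.zeros((n, p))
        for i in range(n):
            for j in range(p):
                for k in range(m):
                    C[i, j] += A[i, k] * B[k, j]
        return C

Bilinearity is then easy to check numerically: scaling or adding matrices in either argument, with the other held fixed, scales or adds the product accordingly.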

Chapter 9, “Selected Applications in Statistics,” discusses structure in data, statistical data analysis, multivariate probability distributions, linear models, principal components, optimal design, multivariate random number generation, and stochastic processes. Again, since this is a graduate-level treatment, no worked-out examples are presented. Instead, proofs and in-depth developments of the material are given for the student. One of the most useful derivations is that of Cochran’s theorem in section 9.2.3, which can then be connected with the chi-squared distribution with \(2j + 1\) degrees of freedom.
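For readers who have not seen it, the standard statement of Cochran’s theorem (paraphrased here in a common form, not quoted from Gentle) is: if \(X \sim N_n(0, I_n)\) and \(A_1,\dots,A_k\) are symmetric matrices with \(\sum_i A_i = I_n\) and \(\sum_i \mathrm{rank}(A_i) = n\), then the quadratic forms \(X^{\mathrm{T}}A_i X\) are independently distributed as chi-squared with \(\mathrm{rank}(A_i)\) degrees of freedom.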

All these topics lay the groundwork for the technology used to solve and work with these derivations, results, and proofs in Chapter 12, “Software for Numerical Linear Algebra.” This chapter gives the student a preview of the various software packages that can be used for computations in linear algebra, in particular the IMSL™ libraries for Fortran and C, Octave or MATLAB®, and R or S-PLUS®. Gentle points out that each student and professor will have their own preference as to which program to use, but he also stresses that these are among the most current and useful for statistical applications and problems. For example, R is open source and freely distributed. Choices among other languages, such as Fortran versus C/C++ or Python, will depend on what the student enjoys most.

There is a wealth of self-discovery problems in the chapter. My two favorites are

12.2: Write a recursive function in Fortran, C, or C++ to multiply two square matrices using the Strassen algorithm. Write the function so that it uses an ordinary multiplication method if the size of the matrices is below a threshold that is supplied by the user.

and

12.3: Set up an account on GitHub, and upload the function you wrote in Exercise 12.2 to your account. Share this function with another person.

These two problems, out of the 13 in the set, give a sense of how challenging the exercises are and of the level of critical thinking that must be brought to them.
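To give a flavor of what Exercise 12.2 involves, here is a minimal sketch of the Strassen recursion with a user-supplied threshold, written in Python rather than the Fortran/C/C++ the exercise calls for, and assuming for simplicity that the matrices are square of power-of-two order:

    import numpy as np

    def strassen(A, B, threshold=64):
        # Multiply square matrices A and B by Strassen's algorithm,
        # falling back to ordinary multiplication below the threshold.
        # Assumes the order n is a power of two.
        n = A.shape[0]
        if n <= threshold:
            return A @ B  # ordinary multiplication below the cutoff
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        # The seven Strassen products (instead of the usual eight)
        M1 = strassen(A11 + A22, B11 + B22, threshold)
        M2 = strassen(A21 + A22, B11, threshold)
        M3 = strassen(A11, B12 - B22, threshold)
        M4 = strassen(A22, B21 - B11, threshold)
        M5 = strassen(A11 + A12, B22, threshold)
        M6 = strassen(A21 - A11, B11 + B12, threshold)
        M7 = strassen(A12 - A22, B21 + B22, threshold)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C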

This is a hard book; there is no other way I can put it. Gentle has put a lot of time and effort into writing this book, with careful attention to detail. It is over 600 pages long, so at first I thought there would be a lot of unnecessary elements. But it is all needed to make sure the student has a firm and solid understanding of matrix algebra at the graduate level. I would recommend this book to all those who teach graduate-level matrix algebra or, if you dare, to those undergraduate students who wish to take on an independent study.


Peter Olszewski is a Mathematics Lecturer at The Pennsylvania State University, The Behrend College, an editor for Larson Texts, Inc. in Erie, PA, and the 362nd Chapter Advisor of the Pennsylvania Alpha Beta Chapter of Pi Mu Epsilon. His research fields are mathematics education, Cayley color graphs, Markov chains, and mathematical textbooks. He can be reached at pto2@psu.edu. Webpage: www.personal.psu.edu/pto2. Outside of teaching and textbook editing, he enjoys playing golf, playing guitar and bass, reading, gardening, traveling, and painting landscapes.

See the table of contents on the publisher's webpage.