*Matrix Analysis and Applications* is a comprehensive study of the theory, methods, and applications of matrix analysis. The core topics include singular value analysis, the solution of matrix equations, and eigenanalysis. An in-depth treatment of gradient analysis and optimization plays a vital role in the text. These topics find use throughout science and engineering.

This is a graduate-level text, and it shows how matrix analysis serves as a powerful tool in computation. Professors who have taught matrix analysis know that it is one of the most powerful and flexible mathematical tools, playing important roles in physics, mechanics, signal and information processing, wireless communications, machine learning, computer vision, automatic control, systems engineering, aerospace, bioinformatics, medical image processing, and many other disciplines.

Over the years, new applications have entered matrix analysis, such as quadratic eigenvalue problems, joint diagonalization, sparse representation and compressed sensing, matrix completion, nonnegative matrix factorization, and tensor analysis. The goal of this book is to give the reader the skills and knowledge needed to understand these applications, along with a deeper appreciation of the power of matrix analysis.

The book is divided into three main parts. Part I is on matrix algebra. Part II, on matrix analysis, is the heart of the book; it deals with the topics most frequently needed for solving equations, including the Tikhonov regularization method. Finally, Part III is on higher-order matrix analysis. Here, matrix analysis is extended from the second-order case to higher orders via a presentation of the basic algebraic operations, representation as matrices, Tucker decomposition, parallel factor decomposition, and eigenvalue decomposition.
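To give a flavor of the material in Part II, here is a minimal sketch of Tikhonov-regularized least squares in NumPy. This example is mine, not the book's; the function name `tikhonov_solve` and the test matrices are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 via the regularized
    normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# An ill-conditioned problem, where plain least squares is unstable.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ np.diag([1.0, 1e-4, 1e-8])
b = rng.standard_normal(5)
x = tikhonov_solve(A, b, lam=1e-6)
```

Increasing `lam` trades fidelity to the data for a smaller-norm, better-conditioned solution, which is the essential idea behind the regularization methods the book develops.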

The book presents 80 algorithms, each with a summary that helps students learn to conduct computer experiments using the related matrix analysis. On page xix of the Preface, a diagram gives a schematic organization of the book. It is clear that Xian-Da Zhang has done a comprehensive and thorough job of putting this text together.

Zhang provides a complete list of notations, from \(\mathfrak R\) to \(\Phi_{X^\Gamma} (\omega)\), the characteristic function of the random vector \(X\). In addition, Zhang has compiled all abbreviations, from AB (alpha-beta) to VQ (vector quantization), as well as a list of algorithms from “1.1 Reduced row echelon form” to “10.16 SPC algorithms”, and has included 546 references. Zhang offers a clear and focused presentation of the material.

Attention to detail is evident throughout the text, especially in the derivations with complex gradient matrices on pages 157–174. Here, Zhang discusses the conjugate gradient (cogradient) vector, which equals the complex conjugate of the gradient (cogradient) vector, and the conjugate Jacobian (gradient) matrix, which equals the complex conjugate of the Jacobian (gradient) matrix. The rules of operation for the complex gradient are also presented. Tables 3.9, 3.10, and 3.11 give a complete list of complex gradient matrices of trace functions, complex gradient matrices of determinant functions, and complex matrix differentials and complex Jacobian matrices, which students can use as a reference sheet.
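For readers new to this machinery, the standard Wirtinger-calculus definitions underlying such tables can be summarized as follows (my paraphrase of the usual conventions, not a quotation from the book). Writing \(z = x + jy\),
\[
\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - j\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial z^{*}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + j\,\frac{\partial f}{\partial y}\right),
\]
where \(z\) and \(z^{*}\) are treated as formally independent variables. For example, for the real-valued scalar function \(f(z) = z^{H} A z\) one obtains \(\partial f / \partial z^{*} = A z\), which is the kind of entry tabulated in Tables 3.9–3.11.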

The most important part of the book begins in Chapter 10: Tensor Analysis. As Zhang points out, many disciplines need three or more subscripts to describe their data; such multi-channel data are naturally represented by tensors. Tensors generalize matrices: multilinear data analysis is a natural extension of linear data analysis. Typical applications of multi-way data models in various subjects are presented on pages 590–591, along with Figure 10.1, which shows examples of a three-way array and a five-way tensor modeling a face. The main question Zhang addresses in Chapter 10 is how to matricize (tensorize) data and how to exploit the inherent tensor structure of data in high-dimensional problems. Example 10.3 on pages 603–605 asks the student to consider two frontal slice matrices of the tensor \(A\in\mathfrak{R}^{3\times4\times5}\). The two main results from the Kolda longitudinal unfolding are:

- the difference between the same mode-\(n\) horizontal unfoldings of the different methods is in the order in which the column vectors are arranged;
- the difference between the same mode-\(n\) longitudinal unfoldings of the different methods is in the order in which the row vectors are arranged.
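The mechanics of mode-\(n\) unfolding can be sketched in a few lines of NumPy. This is my illustration, not the book's code; it uses one common column-ordering convention, and, as the two results above emphasize, other conventions differ only in how the columns (or rows) are permuted.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring axis `mode` to the front, then flatten the
    remaining modes into columns. Different unfolding conventions traverse
    the remaining modes in different orders, permuting the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A 3x4x5 tensor, matching the shape in Example 10.3.
T = np.arange(3 * 4 * 5).reshape(3, 4, 5)
print(unfold(T, 0).shape)  # (3, 20)
print(unfold(T, 1).shape)  # (4, 15)
print(unfold(T, 2).shape)  # (5, 12)
```

Each unfolding contains exactly the same entries as the tensor; only their two-dimensional arrangement changes, which is why the difference between methods reduces to a reordering of columns or rows.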

Chapter 10 goes on to discuss the vectorization and matricization of tensors, tensor completion, and available software, with a very in-depth treatment of tensor eigenvalue decomposition and Tucker decomposition.
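As a rough sketch of what a Tucker decomposition computes, here is a plain higher-order SVD (HOSVD) in NumPy. This is my own minimal implementation for illustration, not the book's algorithm: each factor matrix comes from the left singular vectors of a mode unfolding, and the core is the tensor projected onto those factors.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: axis `mode` to the front, remaining modes flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Mode-n product: multiply matrix M into tensor T along axis `mode`."""
    moved = np.tensordot(M, np.moveaxis(T, mode, 0), axes=1)
    return np.moveaxis(moved, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor per mode, plus a core tensor."""
    factors = [np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

T = np.random.default_rng(1).standard_normal((3, 4, 5))
core, factors = hosvd(T, (3, 4, 5))  # full ranks: lossless decomposition
```

With full ranks the factors are square orthogonal matrices, so multiplying the core back along every mode reconstructs the tensor exactly; truncating the ranks yields the compressed Tucker representations the chapter studies.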

At over 700 pages, this is a solid graduate-level text that presents far more than the fundamentals of matrix analysis. With the additional algorithms, applications both new and old, and the extension to tensors, the student has a wealth of material to study. Zhang has written a book for students who love matrix analysis and wish to learn more. For professors who want fresh ideas for teaching a matrix analysis class, I highly recommend considering this book.

Peter Olszewski is a Mathematics Lecturer at The Pennsylvania State University, The Behrend College, an editor for Larson Texts, Inc. in Erie, PA, and the 362nd Chapter Advisor of the Pennsylvania Alpha Beta Chapter of Pi Mu Epsilon. His research fields are mathematics education, Cayley color graphs, Markov chains, and mathematical textbooks. He can be reached at [email protected]. Webpage: www.personal.psu.edu/pto2. Outside of teaching and textbook editing, he enjoys playing golf, playing guitar and bass, reading, gardening, traveling, and painting landscapes.