In *Numerical Linear Algebra: An Introduction* Holger Wendland uses a *matrix-driven* approach to discussing Numerical Linear Algebra (NLA) instead of a *problem-driven* approach. The material presented in the text is based on courses he has taught at advanced BSc and early MSs levels at the University of Sussex (UK), the University of Oxford (UK) and the University of Bayreuth (Germany). While the book does cover traditional topics found in many NLA texts, additional and more specific techniques are presented such as mulitgrid method, the domain decomposition method, multipole expansions, hierarchical matrices and applications to compressed or compressive sensing.

Each section of the text contains some theoretical exercises. Most of the problems, however, are algorithm-based: students are given clean pseudo-code, and no particular programming language is preferred. The student must also have prerequisite knowledge of standard linear algebra topics such as linear spaces, linear mappings, and determinants.

As pointed out in the introduction, the main problem to be solved is a linear system *A***x** = **b**. Wendland addresses the fact that solving this fundamental problem requires various methods and results from linear algebra, beginning with the condition that *A* be invertible. Since a large focus of the book is on algorithms, the question becomes: when representing a matrix in a computer program, how accurate will the results be? The main questions Wendland is concerned with throughout the book are:

- How expensive is the algorithm?
- How stable is the algorithm?
- Can we exploit the structure of the matrix *A*, if it has a special structure?
- What happens if we do not have a square system, i.e. \(A\in\mathbb{R}^{m\times n}\) and \(\mathbf{b}\in\mathbb{R}^m\)?

In addition, as in Linear Algebra, Wendland also considers the computation of *eigenvalues* and *eigenvectors*.

There are three parts to the text: Preliminaries, Basic Methods, and Advanced Methods. The first two parts cover topics including basic notation, the singular value decomposition, back substitution, LU factorization, Cholesky factorization, Banach's fixed point theorem, the Jacobi method, and Householder reduction to Hessenberg form, to name a few.

Part III is the most interesting, as the topics come from Wendland's lectures. It contains chapters on Methods for Large Sparse Systems, Methods for Large Dense Systems, Preconditioning, and Compressed Sensing. The material found in these chapters has clearly been well researched by Wendland and is meant for a graduate-level treatment of Numerical Linear Algebra. For example, on page 352, Wendland discusses the approximate inverse preconditioner and provides a definition (8.18) of the sparse approximate *(SPAI, AINV) preconditioner*. A definition of a sparse matrix is given in the earlier part of the book (page 11, Definition 1.1), which states that a matrix is "*sparse*, if more than half of the entries are zero." Throughout the text, there are always nice connections made to previous parts of the book. On page 363, Algorithm 52 presents the preconditioned Lanczos method.
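Definition 1.1 is simple enough to state in code. The following minimal sketch (the function name and example matrix are my own illustration, not Wendland's) checks the definition against a tridiagonal matrix of the kind that motivates sparse methods:

```python
def is_sparse(matrix):
    """Definition 1.1: a matrix is sparse if more than half of its entries are zero."""
    entries = [entry for row in matrix for entry in row]
    zeros = sum(1 for entry in entries if entry == 0)
    return zeros > len(entries) / 2

# A 6x6 tridiagonal matrix (the 1-D discrete Laplacian): 20 of its 36 entries are zero.
n = 6
A = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(n)]
     for i in range(n)]
print(is_sparse(A))  # True
```

Note that the definition is about proportion, not pattern: a 4×4 tridiagonal matrix has only 6 of 16 entries equal to zero and would *not* count as sparse, while the 6×6 version above does.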

Chapter 9 addresses the main question of the text, which, as presented in the introduction, is to find a solution vector \(\mathbf{x}\in\mathbb{R}^n\) from linear observations of the form \(A\mathbf{x}=\mathbf{b}\). This is where the Null Space Property, the Restricted Isometry Property, and Theorem 9.33 are presented:

**Theorem 9.33:** Suppose \(A\in\mathbb{R}^{m\times n}\) has normalised columns. If the coherence \(\mu\) of \(A\) satisfies \(\mu<1/(3s-1)\) with \(s\in\mathbb{N}\), then each *s*-sparse vector \(\mathbf{x}\in\mathbb{R}^n\) can be recovered from its observation \(\mathbf{b}=A\mathbf{x}\) using at most *s* iterations of hard-thresholding pursuit (page 391, Algorithm 60).
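To make the theorem concrete, here is a minimal sketch of hard-thresholding pursuit; this is my own illustration, not Wendland's Algorithm 60, and the toy sensing matrix is invented for the example. The matrix \([I \mid H/4]\), with \(H\) a \(16\times 16\) Hadamard matrix, has normalised columns and coherence \(\mu = 1/4 < 1/2 = 1/(3s-1)\) for \(s=1\), so the theorem guarantees recovery of any 1-sparse vector:

```python
import numpy as np

def hard_thresholding_pursuit(A, b, s, max_iter=20):
    """Sketch of hard-thresholding pursuit: recover an s-sparse x from b = A x."""
    m, n = A.shape
    x = np.zeros(n)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Gradient step, then hard-threshold to the s largest entries in magnitude.
        g = x + A.T @ (b - A @ x)
        new_support = np.argsort(np.abs(g))[-s:]
        if set(new_support) == set(support):
            break                       # support has stabilised
        support = new_support
        # Debias: least-squares fit of b over the chosen support only.
        x = np.zeros(n)
        x[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    return x

# Toy sensing matrix [I | H/4] with H a 16x16 Hadamard matrix (coherence 1/4).
H = np.array([[1.0]])
for _ in range(4):                      # Sylvester construction of H
    H = np.block([[H, H], [H, -H]])
A = np.hstack([np.eye(16), H / 4.0])

x_true = np.zeros(32)
x_true[20] = 3.0                        # a 1-sparse signal
x_rec = hard_thresholding_pursuit(A, A @ x_true, s=1)
print(np.allclose(x_rec, x_true))       # True
```

As the theorem promises for \(s=1\), a single iteration already identifies the correct support, and the least-squares debiasing step then recovers the signal exactly.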

Wendland’s book provides the reader with rigorous and clean proofs throughout. Many new concepts are presented that can spark the interest of a student who wishes to take Numerical Linear Algebra, and the book can also serve as an excellent resource for an independent study. If you are considering a new text for your Numerical Linear Algebra class or wish to supplement with another resource, I would recommend giving this book a review.

Peter Olszewski is a Mathematics Lecturer at The Pennsylvania State University, The Behrend College, an editor for Larson Texts, Inc. in Erie, PA, and is the 362nd Chapter Advisor of the Pennsylvania Alpha Beta Chapter of Pi Mu Epsilon. His research fields are mathematics education, Cayley color graphs, Markov chains, and mathematical textbooks. He can be reached at pto2@psu.edu. Outside of teaching and textbook editing, he enjoys playing golf, playing guitar and bass, reading, gardening, traveling, and painting landscapes.