
Fundamentals of Matrix Computations

David S. Watkins
Publisher: Wiley
Publication Date: 2017
Number of Pages: 644
Format: Hardcover
Edition: 3
Series: Pure and Applied Mathematics
Price: $136.00
ISBN: 9780470528334
Category: Textbook

The Basic Library List Committee suggests that undergraduate mathematics libraries consider this book for acquisition.

[Reviewed by Brian Borchers, on 09/19/2017]

The typical first course in numerical analysis focuses on methods for numerical differentiation, integration, interpolation, and the solution of nonlinear scalar equations. In many cases, the focus is mostly on algorithms, with little attention paid to proofs of convergence and error bounds. A small portion of the course may be devoted to the solution of linear systems of equations, but numerical linear algebra is often left for later courses.

Numerical linear algebra often makes its first appearance in a graduate-level course on numerical methods for partial differential equations, where linear systems of equations and eigenvalue problems arise from the discretization of PDEs by finite difference or finite element methods. Since the matrices involved in the numerical solution of PDEs are typically large and sparse, such courses tend to focus on iterative methods. However, numerical linear algebra is much more broadly applicable in other areas of scientific computing, statistics, machine learning, and optimization.

Fundamentals of Matrix Computations provides a broad introduction to numerical linear algebra that covers the solution of linear systems of equations by direct and iterative methods, orthogonal factorizations, least squares problems, and methods for the solution of eigenvalue problems. Although the author uses very simple finite difference schemes in some examples, numerical methods for PDEs are not really covered in this book.
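
To give a concrete sense of the flavor (my own sketch, not code from the book): the banded positive definite systems of Chapter 1 arise from finite difference schemes such as the standard second-difference discretization of -u'' = f on (0,1).

    % Tridiagonal second-difference matrix for -u'' = f on (0,1),
    % with u(0) = u(1) = 0, at n interior points, spacing h = 1/(n+1).
    n = 100;
    h = 1 / (n + 1);
    e = ones(n, 1);
    A = spdiags([-e, 2*e, -e], -1:1, n, n) / h^2;  % sparse, tridiagonal, SPD

    x = (1:n)' * h;
    f = pi^2 * sin(pi * x);           % chosen so the exact solution is sin(pi*x)

    u = A \ f;                        % backslash exploits the banded SPD structure
    err = norm(u - sin(pi * x), inf)  % O(h^2) discretization error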

The book is written in a conversational style, with many helpful examples that clarify important concepts. For example, the presentation of the condition number of a linear system in the second chapter is much clearer than in many other books. Watkins also presents the implicitly shifted QR algorithm as a sequence of orthogonal transformations that never explicitly forms a QR factorization; he argues that the method should be called Francis's algorithm, since it does not actually depend on the QR factorization.
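
To make the point about conditioning concrete, here is a quick experiment of my own (not taken from the text) in the spirit of Chapter 2: the Hilbert matrix is notoriously ill-conditioned, so a computed solution can lose roughly log10(cond(A)) digits of accuracy even though its residual remains tiny.

    % Ill-conditioning in action: solve A*x = b with the Hilbert matrix.
    n = 12;
    A = hilb(n);                 % A(i,j) = 1/(i+j-1); cond(A) is about 1e16
    x_true = ones(n, 1);
    b = A * x_true;

    x = A \ b;                   % Gaussian elimination with partial pivoting
    kappa = cond(A)
    relerr = norm(x - x_true) / norm(x_true)  % large, despite...
    relres = norm(b - A*x) / norm(b)          % ...a residual near machine epsilon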

The book also discusses important computational aspects of the subject, including the time complexity of the methods, sparse and banded matrices, row- and column-based methods, and the importance of using cache-friendly blocked algorithms. The reader is encouraged to perform computational experiments. Although the author suggests the use of MATLAB and provides some sample MATLAB code, the text is not tightly linked to MATLAB and could be used with other languages such as Python/NumPy.
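
As a small illustration of the row/column orientation issue (my sketch; timings will vary by machine): MATLAB, like Fortran, stores matrices by column, so a column-oriented sweep touches contiguous memory and typically runs noticeably faster than the same arithmetic performed in row order.

    % Column-oriented vs. row-oriented access to the same matrix.
    n = 4000;
    A = rand(n);
    s = 0; t = 0;

    tic
    for j = 1:n                  % stride-1 access down each column
        s = s + sum(A(:, j));
    end
    toc

    tic
    for i = 1:n                  % stride-n access across each row
        t = t + sum(A(i, :));
    end
    toc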

The presentation is reasonably rigorous, but proofs are presented in enough detail, and with sufficient explanation, that the book is accessible to undergraduate students. Having taught a senior-level course using the book, I can highly recommend it for use with students who have a reasonable background in linear algebra, programming, and proof-based mathematics at the undergraduate level.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.

Preface.

Acknowledgments.

1 Gaussian Elimination and Its Variants.

1.1 Matrix Multiplication.

1.2 Systems of Linear Equations.

1.3 Triangular Systems.

1.4 Positive Definite Systems; Cholesky Decomposition.

1.5 Banded Positive Definite Systems.

1.6 Sparse Positive Definite Systems.

1.7 Gaussian Elimination and the LU Decomposition.

1.8 Gaussian Elimination and Pivoting.

1.9 Sparse Gaussian Elimination.

2 Sensitivity of Linear Systems.

2.1 Vector and Matrix Norms.

2.2 Condition Numbers.

2.3 Perturbing the Coefficient Matrix.

2.4 A Posteriori Error Analysis Using the Residual.

2.5 Roundoff Errors; Backward Stability.

2.6 Propagation of Roundoff Errors.

2.7 Backward Error Analysis of Gaussian Elimination.

2.8 Scaling.

2.9 Componentwise Sensitivity Analysis.

3 The Least Squares Problem.

3.1 The Discrete Least Squares Problem.

3.2 Orthogonal Matrices, Rotators and Reflectors.

3.3 Solution of the Least Squares Problem.

3.4 The Gram-Schmidt Process.

3.5 Geometric Approach.

3.6 Updating the QR Decomposition.

4 The Singular Value Decomposition.

4.1 Introduction.

4.2 Some Basic Applications of Singular Values.

4.3 The SVD and the Least Squares Problem.

4.4 Sensitivity of the Least Squares Problem.

5 Eigenvalues and Eigenvectors I.

5.1 Systems of Differential Equations.

5.2 Basic Facts.

5.3 The Power Method and Some Simple Extensions.

5.4 Similarity Transforms.

5.5 Reduction to Hessenberg and Tridiagonal Forms.

5.6 Francis's Algorithm.

5.7 Use of Francis's Algorithm to Calculate Eigenvectors.

5.8 The SVD Revisited.

6 Eigenvalues and Eigenvectors II.

6.1 Eigenspaces and Invariant Subspaces.

6.2 Subspace Iteration and Simultaneous Iteration.

6.3 Krylov Subspaces and Francis's Algorithm.

6.4 Large Sparse Eigenvalue Problems.

6.5 Implicit Restarts.

6.6 The Jacobi-Davidson and Related Algorithms.

7 Eigenvalues and Eigenvectors III.

7.1 Sensitivity of Eigenvalues and Eigenvectors.

7.2 Methods for the Symmetric Eigenvalue Problem.

7.3 Product Eigenvalue Problems.

7.4 The Generalized Eigenvalue Problem.

8 Iterative Methods for Linear Systems.

8.1 A Model Problem.

8.2 The Classical Iterative Methods.

8.3 Convergence of Iterative Methods.

8.4 Descent Methods; Steepest Descent.

8.5 On Stopping Criteria.

8.6 Preconditioners.

8.7 The Conjugate-Gradient Method.

8.8 Derivation of the CG Algorithm.

8.9 Convergence of the CG Algorithm.

8.10 Indefinite and Nonsymmetric Problems.

References.

Index.

Index of MATLAB Terms.