
Introduction to High Performance Scientific Computing

David L. Chopp
Publisher: SIAM
Publication Date: 2019
Number of Pages: 453
Format: Paperback
Price: 89.00
ISBN: 978-1-611975-63-5
Category: Textbook
[Reviewed by Brian Borchers, on 08/18/2019]
For students of mathematics who may only have experienced programming in higher-level languages such as MATLAB, Python, R, or Julia, learning how to write parallel programs for high performance computing systems can be very challenging.  In many cases, the easiest route into parallel computing is to make use of libraries that implement linear algebra, Fourier transforms, and other high-level operations on shared memory multiprocessors, distributed memory clusters, and graphics processing units (GPUs).  However, many users eventually find that they need to code algorithms that are not implemented in these libraries.
 
Introduction to High Performance Scientific Computing is primarily a textbook on parallel programming in C and extensions to C, including OpenMP for shared memory multiprocessors, MPI for distributed memory clusters, and CUDA and OpenCL for GPUs.  OpenACC, an alternative to OpenMP, is not included.  Readers who prefer C++ will find supplementary materials from the author showing how to use C++.  The later chapters of the book give a number of examples that show how scientific computing problems can be solved using the programming languages and libraries discussed earlier in the book.  I was disappointed that all of the applications, exercises, and projects for students are related to the numerical solution of differential equations.  Data science and machine learning applications are not included.
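For readers who have never seen these extensions, the following is my own minimal sketch (not an example taken from the book) of the kind of shared memory code the OpenMP chapters teach: an ordinary C loop parallelized with a single pragma.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double x[N], y[N];
    double a = 2.0;

    /* Fill the input vectors serially. */
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
    }

    /* The pragma splits the iterations across threads; compile with -fopenmp. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        y[i] = a * x[i] + y[i];
    }

    printf("y[0] = %f, using up to %d threads\n", y[0], omp_get_max_threads());
    return 0;
}

MPI and CUDA/OpenCL programs require considerably more setup than this, which is part of why the learning curve discussed below is steep.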
 
The book also has sections describing libraries for linear algebra, discrete Fourier transforms, and random number generation in the different computing environments.  Strangely, these sections appear at the end of each of the chapters on OpenMP, MPI, CUDA, and OpenCL.  It might have been better to start with chapters on the high-level libraries and then dive into lower-level programming.  
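To illustrate what using such a library looks like in practice, here is a short sketch of a matrix multiplication delegated to a BLAS implementation through the standard C interface (the choice of CBLAS here is my own illustration; the book's own library examples may differ). Linked against a threaded BLAS such as OpenBLAS or Intel MKL, the same call runs in parallel with no changes to the source.

#include <stdio.h>
#include <cblas.h>   /* C interface to BLAS; link with -lopenblas or a vendor BLAS */

int main(void) {
    /* Small 2x2 example: C = alpha*A*B + beta*C, computed by the library. */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* m, n, k */
                1.0, A, 2,      /* alpha, A, lda */
                B, 2,           /* B, ldb */
                0.0, C, 2);     /* beta, C, ldc */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}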
 
The focus is very much on programming rather than on numerical analysis or the architecture of high performance computing systems.  Although a background in numerical analysis is not necessary for reading this book, some understanding of computer architecture and parallel programming, including the memory hierarchy, message passing, and shared versus distributed memory, would be very helpful.  Students whose programming background is limited to a scripting language might find the learning curve very steep.
 
Compare this book with Introduction to High Performance Computing for Scientists and Engineers by Georg Hager and Gerhard Wellein.  Although the book by Hager and Wellein does not include material on GPU programming, it does provide a gentler introduction to OpenMP and MPI (in Fortran, although the concepts extend easily to C).  Hager and Wellein spend more time on the basics of computer architecture and also discuss techniques for optimization and performance measurement.

Brian Borchers is a professor of mathematics at New Mexico Tech and the editor of MAA Reviews.

See the publisher's web page.