
Parallel Scientific Computation: A Structured Approach Using BSP

Rob H. Bisseling
Publisher: Oxford University Press
Publication Date: 2020
Number of Pages: 416
Format: Hardcover
Edition: 2
Price: 95.00
ISBN: 9780198788348
Category: Textbook
[Reviewed by Bill Satzer, on 12/20/2020]
This is the second edition of a book first published in 2004. In the intervening years, parallel computing has become much easier, and parallel computers are now everywhere. Even the aging laptop I am using to write this review has four processing units, commonly called cores. The multicore revolution that occurred around 2007 eliminated the esoteric hardware and software systems that were once necessary for parallel computation. Although computations on even small systems can gain efficiency and speed from parallel operations, the benefit is considerably greater for high-performance systems that may have a million cores or more.
 
This book is designed to teach students of mathematics and computer science how to design and analyze parallel algorithms. The Bulk Synchronous Parallel (BSP) programming model provides a theoretical framework for connecting parallel hardware with software. The author uses BSP as a model for the parallel algorithms he describes in the book.
 
The BSP model is described in detail in a long introductory chapter. One of the model's main goals is to give algorithms a clean structure. In BSP terminology this means designing the algorithm as a sequence of “supersteps”: large steps that each incorporate many basic computation or communication operations and end with a global synchronization, in which the processors wait for each other to finish before proceeding to the next superstep. Within each superstep operations occur in parallel, but the global structure of the algorithm is sequential.
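To make this concrete (my illustration, not an excerpt from the book): in the standard BSP cost model, a superstep with at most w local operations and at most h data words sent or received per processor costs w + h*g + l, where g is the communication cost per word and l is the cost of the global synchronization; an algorithm's total cost is the sum over its supersteps. The superstep structure is also directly visible in BSPlib, the C communication interface used in the book. The following parallel-sum sketch assumes a BSPlib implementation (such as MulticoreBSP for C) is installed; the example itself is mine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <bsp.h>

    /* Each processor contributes one partial value; superstep 1
       exchanges the values, superstep 2 combines them locally. */
    void spmd(void) {
        bsp_begin(bsp_nprocs());
        int p = bsp_nprocs();            /* number of processors */
        int s = bsp_pid();               /* this processor's id  */

        double partial = (double) s;     /* stand-in for real local work */
        double *sums = calloc((size_t) p, sizeof(double));
        bsp_push_reg(sums, p * sizeof(double));
        bsp_sync();                      /* superstep 0 ends: registration done */

        /* Superstep 1: put my partial value into slot s on every processor. */
        for (int t = 0; t < p; t++)
            bsp_put(t, &partial, sums, s * sizeof(double), sizeof(double));
        bsp_sync();                      /* all puts are now delivered */

        /* Superstep 2: purely local combination of the received values. */
        double total = 0.0;
        for (int t = 0; t < p; t++)
            total += sums[t];
        if (s == 0)
            printf("total = %f\n", total);

        bsp_pop_reg(sums);
        bsp_sync();                      /* deregistration takes effect */
        free(sums);
        bsp_end();
    }

    int main(int argc, char **argv) {
        bsp_init(spmd, argc, argv);
        spmd();
        return 0;
    }

The bsp_sync() calls are the global synchronizations that close each superstep; communication between them is one-sided, and data is guaranteed to have arrived only after the sync.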
 
The author presents a detailed study of how parallel computation can be applied to a collection of numerical problems: LU decomposition of dense matrices, the fast Fourier transform (FFT), multiplication of a sparse matrix by a dense vector, matching of vertices in a sparse graph, and sorting. He uses these to teach the design and implementation of well-structured, efficient parallel algorithms, and he selected them to illustrate several different parallelization techniques.
 
The book begins with a relatively simple example and proceeds to problems of increasing irregularity. Dense LU decomposition has a regular computation pattern and uses communication patterns that are common in matrix computations. An optimization that the author describes should produce near-peak performance on a parallel computer. With the FFT, the computational pattern is also fairly regular, but the data flow is more complex.
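The regularity is easy to see in the sequential kernel. Here is a right-looking LU decomposition without pivoting (a simplified sketch of my own; the book's parallel algorithm adds partial pivoting and distributes the matrix over the processors), which performs the same kind of rank-1 update at every stage:

    #include <stddef.h>

    /* In-place LU decomposition of a row-major n-by-n matrix a,
       without pivoting: afterwards a holds the strictly lower part
       of L (the unit diagonal is implicit) and the upper part U. */
    void lu(double *a, size_t n) {
        for (size_t k = 0; k < n; k++) {          /* stage k */
            for (size_t i = k + 1; i < n; i++) {
                a[i*n + k] /= a[k*n + k];         /* multiplier l_ik */
                for (size_t j = k + 1; j < n; j++)
                    a[i*n + j] -= a[i*n + k] * a[k*n + j];  /* rank-1 update */
            }
        }
    }

In a parallel version, each stage maps naturally onto a few supersteps: broadcast the pivot row and the column of multipliers, then update the local part of the matrix.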
 
Communication among the processors becomes more complex with sparse matrix-vector multiplication. Although the dense input and output vectors can be stored in regular data structures, the communication pattern is irregular: exploiting the sparsity of the matrix means communicating only the vector components that correspond to nonzero entries, and which components these are depends entirely on the structure of the matrix. In matching vertices in a sparse graph, the computation itself is highly irregular, since the graph is sparse and changes as matched vertices and their incident edges are removed.
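A compressed row storage (CRS) kernel makes the source of the irregularity visible (a standard sequential sketch, not code from the book): the indirect index col[k] determines which components of x are needed for each row, so in a distributed setting the communication pattern follows the sparsity structure.

    #include <stddef.h>

    /* y = A*x for a sparse n-by-n matrix A in compressed row storage:
       row_start has n+1 entries, and col/val hold the column indices
       and values of the nonzeros, row by row. */
    void spmv(size_t n, const size_t *row_start, const size_t *col,
              const double *val, const double *x, double *y) {
        for (size_t i = 0; i < n; i++) {
            double sum = 0.0;
            for (size_t k = row_start[i]; k < row_start[i + 1]; k++)
                sum += val[k] * x[col[k]];  /* indirect access: this is what
                                               makes communication irregular */
            y[i] = sum;
        }
    }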
 
There are a number of exercises. Several of them involve taking standard sequential algorithms and converting them to parallel form. (One of the most amusing asks students to parallelize Strassen’s algorithm for fast matrix multiplication.) The book is best suited for a graduate course in parallel scientific computing for mathematics or computer science students. It would also be a good introduction for those in other disciplines who want a quick look at parallel computation. Desirable background includes linear algebra and sequential programming in C or Python. The author provides a very extensive list of references.

 

Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications ranging from speech recognition and network modeling to optical films, materials science, and the odd bit of high-performance computing. He did his PhD work in dynamical systems and celestial mechanics. His first experience with parallel computing, some years ago, required getting two processors to communicate.
 