Compressive sensing is the practice of recovering a signal or image from a small set of sampled measurements of the signal. In classical approaches to signal processing, the Nyquist sampling theorem tells us that for arbitrary signals of a given bandwidth, we must uniformly sample at a rate that is at least twice the bandwidth in order to accurately reconstruct the signal. Compressive sensing manages to work around the Nyquist limit in some situations by focusing on the recovery of signals that have some special structure that makes it possible to compress the signals.

The idea of representing a signal using an alternative basis has been widely used in the compression of digital signals and images. For example, in the JPEG image compression system, blocks of pixels within an image are represented in terms of their discrete cosine transform (DCT) coefficients. In many real-world images, most of these coefficients are effectively zero, and the image is said to be sparse with respect to the DCT basis. We can compress the image by discarding the small coefficients and storing only the larger ones.
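This transform-domain compression idea can be illustrated on a one-dimensional signal. The following sketch uses SciPy's DCT routines; the particular signal and the number of retained coefficients are arbitrary choices for illustration:

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth test signal: most of its energy concentrates in a few
# low-frequency DCT coefficients.
n = 128
t = np.linspace(0.0, 3.0, n)
signal = np.exp(-t) + 0.5 * np.cos(2 * np.pi * t / 3.0)

coeffs = dct(signal, norm="ortho")          # analyze in the DCT basis

# Keep only the 10 largest-magnitude coefficients ("compression").
k = 10
small = np.argsort(np.abs(coeffs))[:-k]
compressed = coeffs.copy()
compressed[small] = 0.0

reconstruction = idct(compressed, norm="ortho")
rel_error = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
print(f"relative error keeping {k} of {n} coefficients: {rel_error:.4f}")
```

Because the signal is smooth, keeping fewer than 10% of the coefficients still reconstructs it to within a small relative error.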

Note, however, that this approach to compressing a signal begins by sampling the signal at full bandwidth and only compresses it afterward. Compressive sensing goes further: it samples the signal at a rate below the Nyquist limit, acquiring compressed measurements directly, and the original signal is later reconstructed from these measurements.

Mathematically, the compressive sensing problem reduces to the problem of finding a sparse solution to an underdetermined linear system of equations \( A\mathbf{x}=\mathbf{y}\). Here the vector \(\mathbf{y}\) gives the sampled measurements of the signal, the vector \(\mathbf{x}\) represents the signal with respect to the sparse basis, and the matrix \(A\) describes how the signal is sampled. The matrix \(A\) must be designed so as to make it possible to recover a sparse signal \(\mathbf{x}\) even though the system of equations is severely underdetermined. Perhaps surprisingly, random matrices can be extremely effective in compressive sensing.
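The difficulty is that the familiar least-squares (minimum two-norm) solution of an underdetermined system is generically dense, so ordinary linear algebra does not find the sparse signal. A small sketch with a random Gaussian measurement matrix (the dimensions and the sparse signal are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 12, 20                      # m measurements of an n-dimensional signal, m < n
A = rng.standard_normal((m, n))    # random Gaussian measurement matrix

x_true = np.zeros(n)               # a 2-sparse signal
x_true[3], x_true[11] = 1.5, -2.0
y = A @ x_true                     # the compressive measurements

# The minimum two-norm solution of A x = y is A^T (A A^T)^{-1} y.
# It satisfies the equations exactly but is generically dense.
x_l2 = A.T @ np.linalg.solve(A @ A.T, y)

print("nonzeros in x_true:", np.count_nonzero(x_true))
print("entries of x_l2 above 1e-6:", np.count_nonzero(np.abs(x_l2) > 1e-6))
```

Recovering the sparse \(\mathbf{x}\) instead requires the specialized algorithms discussed next.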

There has been a long history of heuristic approaches to finding sparse solutions to linear systems. For example, in the 1970s, Jon Claerbout at Stanford used linear programming techniques to minimize the one-norm of \(\mathbf{x}\) subject to \( A\mathbf{x}=\mathbf{y}\) in attempting to recover the depths of reflecting surfaces in seismic surveys. Here the nonzero entries in \(\mathbf{x}\) correspond to the reflecting surfaces. This is an example of what is now known as basis pursuit.
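Basis pursuit can be posed as a linear program by splitting \(\mathbf{x}\) into positive and negative parts. A sketch using `scipy.optimize.linprog` (the problem sizes and the sparse test signal are arbitrary illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 12, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[3], x_true[11] = 1.5, -2.0
y = A @ x_true

# min ||x||_1 s.t. A x = y, rewritten as an LP with x = u - v, u, v >= 0:
#   minimize 1^T u + 1^T v  subject to  [A, -A][u; v] = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_bp = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_bp - x_true))
```

For this sparsity level and number of random measurements, the one-norm solution coincides with the sparse signal, even though the system is underdetermined.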

Also in the 1970s, an algorithm called CLEAN was developed for the recovery of images from interferometric data in radio astronomy. The CLEAN algorithm iteratively added point sources to an image with each iteration adding a point source at the image location that would most improve the fit to the data. This is an example of what is now known as matching pursuit. Similar ideas were developed repeatedly in different research disciplines.
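In the compressive sensing setting, matching pursuit greedily builds the support of \(\mathbf{x}\) one element at a time, at each step selecting the column of \(A\) most correlated with the current residual. A minimal sketch of the orthogonal matching pursuit variant in NumPy (the sizes and test signal are arbitrary illustrative choices):

```python
import numpy as np

def orthogonal_matching_pursuit(A, y, k):
    """Greedily seek a k-sparse approximate solution of A x = y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the chosen support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
x_rec = orthogonal_matching_pursuit(A, A @ x_true, k=2)
print("recovery error:", np.linalg.norm(x_rec - x_true))
```

Each iteration is analogous to CLEAN adding one point source: a single dictionary element is committed per step, chosen to best explain the remaining data.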

Although these sparse recovery approaches have been in use since at least the 1970s, for decades they lacked a strong theoretical justification. This changed in 2006 with the publication of the first papers on compressive sensing, which included theorems giving sufficient conditions under which a sufficiently sparse signal is correctly recovered. David L. Donoho, Emmanuel Candès, Justin Romberg, and Terence Tao have been credited with founding the new field of compressive sensing.

The first chapter of this book provides a broad overview of compressive sensing. In chapters 2 through 6, the authors introduce basic compressive sensing algorithms, including basis pursuit, matching pursuit, and thresholding. They also introduce the important concepts of coherence and the restricted isometry property, which make it possible to analyze the compressive sensing algorithms and give simple conditions under which compressive sensing will properly recover a sparse solution. The first six chapters might form the core of an introductory course on compressive sensing.
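The mutual coherence of a measurement matrix, the largest inner product in absolute value between any two of its normalized columns, yields the simplest of these recovery conditions: a classical result guarantees that basis pursuit recovers every \(k\)-sparse signal whenever \(k < (1 + 1/\mu(A))/2\). A sketch of the computation (the random matrix is an arbitrary example):

```python
import numpy as np

def mutual_coherence(A):
    """Largest |inner product| between distinct normalized columns of A."""
    cols = A / np.linalg.norm(A, axis=0)     # normalize each column
    gram = np.abs(cols.T @ cols)             # all pairwise inner products
    np.fill_diagonal(gram, 0.0)              # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
mu = mutual_coherence(A)
print(f"coherence: {mu:.3f}; guaranteed recovery for k < {(1 + 1 / mu) / 2:.2f}")
```

Coherence-based bounds are easy to compute but pessimistic; the restricted isometry property gives much stronger guarantees for random matrices, at the price of being intractable to verify for a given matrix.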

More advanced results in compressive sensing largely rely on the theory of random matrices. The authors introduce tools from probability theory in chapters 7 and 8. Chapters 9 through 14 discuss various advanced topics. Chapter 15 provides a brief introduction to computational algorithms for one-norm minimization. The book also includes extensive appendices on matrix analysis and convex analysis.
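Many practical one-norm minimization algorithms of the kind surveyed in chapter 15 solve the regularized problem \(\min \tfrac{1}{2}\|A\mathbf{x}-\mathbf{y}\|_2^2 + \lambda\|\mathbf{x}\|_1\); among the simplest is iterative soft thresholding (ISTA). A minimal sketch (the step size, \(\lambda\), and iteration count are arbitrary illustrative choices):

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft thresholding for min 0.5*||Ax-y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)     # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
print("largest entries of x_hat at indices:", np.argsort(-np.abs(x_hat))[:2])
```

Each iteration combines a gradient step on the quadratic data-fit term with a componentwise shrinkage that drives small entries exactly to zero, which is what produces sparse iterates.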

Research on compressive sensing makes use of highly specialized results from many areas of mathematics. Few graduate students or researchers are likely to have all of the specialized background in analysis, linear algebra, and probability theory that is required to read the research literature on compressive sensing. This book is notable because it provides a pathway into the field for readers who are sufficiently motivated to dive into the details. At this point there is no comparable reference for the mathematical theory of compressive sensing.

Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.