An Introduction to Compressed Sensing

M. Vidyasagar
Publisher: SIAM
Publication Date: 2019
Number of Pages: 341
Format: Paperback
Price: 89.00
ISBN: 978-1-611976-11-3
Category: Monograph

[Reviewed by Bill Satzer, on 03/29/2020]
Compressed sensing, according to the author, is the “recovery of high-dimensional but low complexity objects from a limited number of measurements”. Two canonical examples he offers are the recovery of high-dimensional but sparse vectors and the recovery of large but low-rank matrices. This tells us the “what” of compressed sensing, but gives us only a glimpse of the “why”. A short section at the end of Chapter 2 addresses application domains, but the rest of the book is a tightly focused look at the underlying mathematics. Its theoretical tone is emphasized by a traditional theorem-proof structure.
 
The “why” story is worth understanding. In 2004 Emmanuel Candès, a mathematician then at Caltech, was working on a problem in magnetic resonance imaging (MRI) when he discovered that he could recover an image exactly even when the amount of data appeared to be insufficient by the usual Nyquist sampling criterion. This turned out not to be a fluke, and he and others went on to establish the foundations of the theory. The MRI example is important because the standard MRI procedure required the patient to remain still for a long time while the image data was gathered. Compressed sensing makes it possible to shorten that time considerably and still get very high quality images.
 
That data can be compressed without appreciable loss of information is well known. Photographic images and musical recordings can be, and generally are, compressed significantly. But conventional compression starts by acquiring high-dimensional data, only to discard most of it. Compressed sensing turns that idea upside down: it shows how to collect surprisingly few measurements and infer from them a full reconstruction, with high probability.
 
A simplified form of compressed sensing works something like this: we want to recover a sparse vector \( x \in R^{N} \) from an underdetermined system \( \Phi x = b \), where the sensing matrix \( \Phi \) is \( n \times N \) with typically \( n \ll N \), and where \( b \in R^{n} \) consists of the actual measurements that we collect. Often the measurements are random projections of \( x \); the entries of \( \Phi \) are Gaussian or Bernoulli-distributed random variables. Under the assumption that the solution is \( k \)-sparse, with \( k \ll n \) (so that \( x \) has \( k \) or fewer nonzero entries), an optimization algorithm is used to find the solution of minimal \( l^{1} \) norm.
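To make the recipe concrete, here is a minimal Python sketch, assuming NumPy and SciPy are available. The dimensions, the Gaussian sensing matrix, and the linear-programming route to the \( l^{1} \) minimization (basis pursuit) are illustrative choices, not taken from the book.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 200, 60, 5  # ambient dimension, number of measurements, sparsity

# A k-sparse ground-truth vector: random support, Gaussian nonzero entries.
x_true = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Gaussian sensing matrix Phi (n x N) and the measurements b = Phi x.
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
b = Phi @ x_true

# Basis pursuit: minimize ||x||_1 subject to Phi x = b.
# Writing x = u - v with u, v >= 0 turns the l1 objective into a linear one.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

The split \( x = u - v \) is the standard device for expressing an \( l^{1} \) objective as a linear program; a dedicated solver would be used in practice, but the sketch shows the shape of the computation.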
 
This description skips over a lot of details. The vector \( x \) may not itself be sparse, but it may become sparse after an orthogonal change of coordinates. The sensing matrix \( \Phi \) must be chosen carefully. Under suitable conditions on the null space of \( \Phi \), and when a technical condition called the Restricted Isometry Property is satisfied, a unique \( k \)-sparse solution exists and the \( l^{1} \) minimization finds it.
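For reference, the Restricted Isometry Property has a short standard statement (the usual definition from the literature, not a quotation from the book): \( \Phi \) satisfies the RIP of order \( k \) with constant \( \delta_k \in (0,1) \) if

\[ (1 - \delta_k) \| x \|_2^2 \le \| \Phi x \|_2^2 \le (1 + \delta_k) \| x \|_2^2 \]

for every \( k \)-sparse vector \( x \); in other words, \( \Phi \) acts as a near-isometry on all sufficiently sparse vectors.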
 
An Introduction to Compressed Sensing develops the substance behind this simplistic picture. The author begins with mathematical preliminaries that include discussions of various norms for vectors and matrices, as well as background on the relevant aspects of probability and convexity theory. He goes on to develop a brief historical background for compressed sensing and to provide some preliminary formulations of the basic problems.
 
From here on the book deals with specific issues that arise in treating the subtleties of making compressed sensing work. Questions about the vector recovery problem and associated constraints on the null space are treated in some detail. These include an extensive discussion of the Restricted Isometry Property and of a more basic condition called the Robust Null Space Property. While many practitioners assemble the sensing matrix \( \Phi \) using random variables, the author believes that sparse binary sensing matrices will eventually prove more useful. That leads to a discussion of the relevant graph theory, in particular the use of vertex expander graphs for vector recovery and Ramanujan graphs for matrix recovery. An extended description of how sensing matrices can be assembled, using both deterministic and probabilistic constructions, follows. A final short section presents two case studies.
 
The author supplies a firm mathematical basis for compressed sensing. He is mindful of the importance of algorithms for carrying out the required computations and optimizations, and he provides estimates for important quantities such as the number of measurements needed to guarantee the recovery of a sparse vector or matrix. The work is grounded in theory but also offers a vision of what practical implementations will need.
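To give a flavor of such estimates (this is the standard result for Gaussian sensing matrices in the literature, not a quotation from the book): on the order of

\[ n \ge C\, k \log(N/k) \]

measurements, for a suitable absolute constant \( C \), suffice to recover every \( k \)-sparse vector in \( R^{N} \) with high probability.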
 
The book is written for readers with solid backgrounds in linear algebra, analysis, and probability. Readers looking for an easier introduction might consider the third chapter of Data-Driven Science and Engineering by Brunton and Kutz; it provides more context, motivation, and examples. The original paper by Candès, Romberg, and Tao is also very readable.

 

Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, did his PhD work in dynamical systems and celestial mechanics. He spent most of his career working on applications ranging from ceramic fibers to optical films.