A Course in Approximation Theory

Ward Cheney and Will Light
Publisher: American Mathematical Society
Publication Date: 2009
Number of Pages: 357
Format: Hardcover
Series: Graduate Studies in Mathematics 101
Price: $69.00
ISBN: 9780821847985
Category: Textbook
[Reviewed by John D. Cook, on 04/03/2009]

Approximation theory studies how functions from a large space can be approximated by simpler functions from a smaller space. Traditional examples include approximating continuous functions by polynomials or approximating smooth functions by splines. A more recent example would be approximating continuous functions by wavelets.
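
That first traditional example can be made concrete. Below is a minimal numerical sketch (my illustration, not from the book) using Bernstein polynomials, the classical constructive device for approximating continuous functions by polynomials; it approximates the continuous but non-smooth function f(t) = |t − 1/2| on [0, 1] and reports the uniform error.

    import numpy as np
    from math import comb

    def bernstein(f, n, x):
        # Degree-n Bernstein polynomial of f on [0, 1]:
        #   B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
        k = np.arange(n + 1)
        w = np.array([float(comb(n, int(i))) for i in k])
        x = np.asarray(x)[:, None]                 # broadcast over k
        return (w * x**k * (1 - x)**(n - k)) @ f(k / n)

    f = lambda t: np.abs(t - 0.5)                  # continuous, not smooth
    grid = np.linspace(0.0, 1.0, 1001)
    for n in (10, 50, 200):
        err = np.abs(bernstein(f, n, grid) - f(grid)).max()
        print(f"n = {n:3d}   uniform error = {err:.4f}")

The error does go to zero, but slowly (roughly like n^(-1/2) at the kink), and quantifying such rates is precisely the business of approximation theory.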

Approximation theory differs from "soft" analysis in its concern with constraints and quantitative bounds. For example, the Weierstrass approximation theorem is a typical soft analysis result: it states that the polynomials are dense in the space of continuous functions on a compact interval. However, the Weierstrass theorem is not typical of the kind of result one expects from approximation theory, because it approximates from a very large space, namely polynomials of arbitrary degree. A more typical approximation theory result is the following theorem of Erdős and Brutman.

Let C[−1, 1] be the space of continuous functions on the interval [−1, 1], and let Πₙ₋₁ be the space of polynomials of degree at most n − 1. Given n points −1 ≤ x₁ < x₂ < … < xₙ ≤ 1, let L: C[−1, 1] → Πₙ₋₁ be the linear projection that maps a continuous function to its polynomial interpolant at those points. Then L has operator norm bounded below by (2/π) log(n) + 0.5212.
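
The operator norm here is the Lebesgue constant Λₙ, the maximum over x of Σᵢ |ℓᵢ(x)|, where the ℓᵢ are the Lagrange basis polynomials for the chosen nodes. A small numerical sketch (mine, not from the book) estimates Λₙ on a fine grid and compares equally spaced nodes and Chebyshev nodes with the theorem's lower bound:

    import numpy as np

    def lebesgue_constant(nodes, grid):
        # Max over the grid of sum_i |l_i(x)|, where l_i is the i-th
        # Lagrange basis polynomial for the given interpolation nodes.
        total = np.zeros_like(grid)
        for i, xi in enumerate(nodes):
            li = np.ones_like(grid)
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (grid - xj) / (xi - xj)
            total += np.abs(li)
        return total.max()

    grid = np.linspace(-1.0, 1.0, 20001)
    for n in (5, 10, 20):
        equi = np.linspace(-1.0, 1.0, n)
        cheb = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
        bound = 2 * np.log(n) / np.pi + 0.5212
        print(f"n = {n:2d}   equispaced: {lebesgue_constant(equi, grid):10.2f}"
              f"   Chebyshev: {lebesgue_constant(cheb, grid):6.3f}"
              f"   lower bound: {bound:6.3f}")

Chebyshev nodes stay within a modest constant of the lower bound, while equally spaced nodes make the norm grow exponentially; by the theorem, no choice of nodes can do better than the bound.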

The above theorem is Theorem 2 of Chapter 3 of A Course in Approximation Theory. It is a typical result from approximation theory in that it deals with a specific set of approximants (polynomials of degree at most n − 1 that match function values at specified points) and gives a quantitative bound. The conclusion of the theorem is interesting: polynomial interpolation can produce progressively worse results as the number of interpolation points increases, as this article illustrates.
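
A quick demonstration of that failure mode (my sketch, using Runge's classic example rather than anything from the book): interpolate f(x) = 1/(1 + 25x²) at equally spaced nodes and watch the uniform error grow with n.

    import numpy as np

    def lagrange_interp(nodes, values, grid):
        # Evaluate the unique interpolating polynomial in Lagrange form.
        result = np.zeros_like(grid)
        for i, xi in enumerate(nodes):
            li = np.ones_like(grid)
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (grid - xj) / (xi - xj)
            result += values[i] * li
        return result

    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    grid = np.linspace(-1.0, 1.0, 2001)
    for n in (5, 11, 21, 31):
        nodes = np.linspace(-1.0, 1.0, n)
        err = np.abs(lagrange_interp(nodes, f(nodes), grid) - f(grid)).max()
        print(f"n = {n:2d}   max error = {err:.2e}")

Switching to Chebyshev nodes or to splines tames this example, which is part of the motivation for chapters such as "Moving the nodes" and the spline material later in the book.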

Cheney and Light's book is largely self-contained. The reader does not need prior knowledge of approximation theory per se. However, the book does assume a command of real analysis (Lebesgue integration, Lp spaces, Fourier series, etc.) as well as the basics of functional analysis. Some topics toward the end of the book require more mathematical background. The book deals with multivariate approximation throughout, though univariate versions of multivariate theorems are often presented first as an introduction. The material gets quite sophisticated at times, though the tough going is interspersed with expository remarks.

The book begins with classical approximation topics such as Lagrange interpolation. After some elementary remarks, these classical results are cast in a more modern abstract setting. Later chapters cover more recent topics such as tomographic reconstruction, artificial neural networks, and wavelets.

A Course in Approximation Theory contains hundreds of exercises. Some of the chapters, especially the early chapters, have motivational quotes before the exercises to encourage readers to dive in. For example, the first problem set is prefaced by this quote from Sophocles:

One must learn by doing the thing; for though you think you know it, you have no certainty until you try.

Approximation theory gathers a cross-section of results from other fields: digital signal processing, numerical analysis, functional analysis, topology, etc. (Of course one could as easily say that many other fields take results from approximation theory.) Working through this book provides an opportunity to review and apply areas of mathematics learned elsewhere, as well as to learn entirely new topics. The book does not emphasize algorithms or numerical analysis, though the subject matter is often directly applicable to numerical computing.

A Course in Approximation Theory was originally published by Brooks/Cole in 2000. This second printing appears as part of the AMS Graduate Studies in Mathematics series.


John D. Cook is a research statistician at M. D. Anderson Cancer Center and blogs daily at The Endeavour.

Table of Contents

  • Introductory discussion of interpolation
  • Linear interpolation operators
  • Optimization of the Lagrange operator
  • Multivariate polynomials
  • Moving the nodes
  • Projections
  • Tensor-product interpolation
  • The Boolean algebra of projections
  • The Newton paradigm for interpolation
  • The Lagrange paradigm for interpolation
  • Interpolation by translates of a single function
  • Positive definite functions
  • Strictly positive definite functions
  • Completely monotone functions
  • The Schoenberg interpolation theorem
  • The Micchelli interpolation theorem
  • Positive definite functions on spheres
  • Approximation by positive definite functions
  • Approximate reconstruction of functions and tomography
  • Approximation by convolution
  • The good kernels
  • Ridge functions
  • Ridge function approximation via convolutions
  • Density of ridge functions
  • Artificial neural networks
  • Chebyshev centers
  • Optimal reconstruction of functions
  • Algorithmic orthogonal projections
  • Cardinal B-splines and the sinc function
  • The Golomb-Weinberger theory
  • Hilbert function spaces and reproducing kernels
  • Spherical thin-plate splines
  • Box splines
  • Wavelets, I
  • Wavelets, II
  • Quasi-interpolation
  • Bibliography
  • Index
  • Index of symbols