The subject of this clearly written introductory approximation theory textbook is the approximation of functions on a closed interval by polynomials (and, in the last six chapters, more generally by rational functions). The book is based on Matlab, using a free package called Chebfun that was developed at Oxford. Happily, the book can be understood easily without knowledge of this software, but most of the exercises are experiments based on the use of Chebfun (a few ask for proofs). Unlike much work in numerical analysis, the book is slanted toward well-behaved (usually analytic) functions rather than pathological examples. It is aimed at upper-level undergraduates who have a good grounding in real analysis and preferably some exposure to complex variables.
The naive view is that sampling a function at equally spaced points would give the best polynomial approximations, but this turns out to be badly wrong. As the author puts it:
Polynomial interpolants through equally spaced points have terrible properties, as we shall see in Chapters 11–15. Polynomial interpolants through Chebyshev points, however, are excellent. It is the clustering near the ends of the interval that makes the difference, and other sets of points with similar clustering, like Legendre points (Chapter 17), have similarly good behavior. (p. 8)
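The contrast is easy to see numerically. The following sketch (my own illustration, not code from the book) interpolates Runge's classic example 1/(1 + 25x²) at 15 equispaced points and at 15 Chebyshev points, evaluating both interpolants with the barycentric formula; the choice of degree and the evaluation grid are mine.

```python
import numpy as np

def interp_max_error(f, x_nodes, n_eval=2001):
    """Max error on [-1, 1] of the polynomial interpolant of f
    through x_nodes, evaluated via the barycentric formula."""
    # Barycentric weights: w_j = 1 / prod_{k != j} (x_j - x_k).
    diff = x_nodes[:, None] - x_nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)

    xx = np.linspace(-1.0, 1.0, n_eval)
    c = xx[:, None] - x_nodes[None, :]
    hit = np.isclose(c, 0.0)      # evaluation points that land on a node
    c[hit] = 1.0                  # avoid division by zero there
    B = w / c
    p = (B @ f(x_nodes)) / B.sum(axis=1)
    rows = hit.any(axis=1)        # at a node, just return the node value
    p[rows] = f(x_nodes)[hit.argmax(axis=1)[rows]]
    return np.max(np.abs(f(xx) - p))

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 14
equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points

err_equi = interp_max_error(runge, equi)
err_cheb = interp_max_error(runge, cheb)
print(f"equispaced: {err_equi:.3f}   Chebyshev: {err_cheb:.3f}")
```

The equispaced interpolant oscillates wildly near the endpoints (its maximum error exceeds the function's own maximum), while the Chebyshev interpolant is already accurate to a couple of digits at this modest degree.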
Expansion in Chebyshev polynomials is the main subject of the book. Chebyshev polynomials are not always the best choice, but they have an important advantage: the coefficients of the expansion can be computed quickly from the sample values using the Fast Fourier Transform. There are strong parallels between expansions in Chebyshev polynomials, Fourier series, and Laurent series, and the book often uses these analogies as a guide.
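The FFT connection comes from the substitution x = cos θ, which turns a Chebyshev series into a cosine series. A minimal sketch of the standard trick (this is my own illustration, not Chebfun's implementation): mirror the samples taken at Chebyshev points so that the cosine transform becomes an ordinary discrete Fourier transform.

```python
import numpy as np

def cheb_coeffs(vals):
    """Chebyshev coefficients of the degree-n interpolant through
    vals[j] = f(cos(j*pi/n)), j = 0..n, computed with one FFT.

    Mirroring the n+1 samples into a sequence of length 2n turns
    the needed cosine transform (a DCT-I) into a plain DFT."""
    n = len(vals) - 1
    mirrored = np.concatenate([vals, vals[n - 1:0:-1]])   # length 2n
    c = np.real(np.fft.fft(mirrored)) / n
    c[0] /= 2.0
    c[n] /= 2.0
    return c[: n + 1]

# Sanity check on f(x) = x^3, whose expansion is 0.75*T_1 + 0.25*T_3.
n = 8
x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points
a = cheb_coeffs(x**3)
print(np.round(a, 12))
```

The cost is O(n log n) rather than the O(n²) of evaluating the cosine sums directly, which is what makes Chebyshev technology practical at very high degrees.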
Roughly the first half of the book is devoted to function approximation by Chebyshev polynomials. The remaining half covers a variety of topics based on these ideas, including an extensive look at the three competing quadrature methods of Clenshaw–Curtis (based on Chebyshev polynomials), Gauss (based on Legendre polynomials), and Newton–Cotes (interpolation through equally spaced points). As we should expect by now, the equal-spacing approach compares poorly in accuracy to the other two. There is an especially interesting chapter on “Two Famous Problems”: approximating the absolute value function on [–1, 1] and approximating the exponential function on the negative real line. These are classic cases where rational functions shine, far outperforming polynomials.
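To make the quadrature comparison concrete, here is a small numpy sketch (again my own illustration; the test integrand, point counts, and the moment-matching construction of the Newton–Cotes weights are my choices, not the book's). Clenshaw–Curtis integrates the Chebyshev interpolant using ∫T_k = 2/(1 − k²) for even k; Gauss uses Legendre nodes and weights; Newton–Cotes weights are obtained by requiring exactness on monomials.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 16.0 * x**2)   # analytic on [-1,1], poles at +/- i/4
exact = np.arctan(4.0) / 2.0              # exact value of the integral

# Clenshaw-Curtis with n+1 Chebyshev points: integrate the interpolant.
n = 40
j = np.arange(n + 1)
x = np.cos(np.pi * j / n)
scale = np.ones(n + 1); scale[0] = scale[n] = 0.5      # trapezoid-style end weights
a = (2.0 / n) * np.cos(np.pi * np.outer(j, j) / n) @ (f(x) * scale)
a[0] /= 2.0; a[n] /= 2.0                                # Chebyshev coefficients
k = np.arange(0, n + 1, 2)
cc = a[k] @ (2.0 / (1.0 - k**2))                        # integral of T_k, k even

# Gauss quadrature with n Legendre points.
xg, wg = np.polynomial.legendre.leggauss(n)
gauss = wg @ f(xg)

# Newton-Cotes weights on m+1 equispaced points, from the moment equations.
m = 20
xe = np.linspace(-1.0, 1.0, m + 1)
powers = np.vander(xe, increasing=True).T               # row p holds xe**p
moments = np.array([(1.0 + (-1.0) ** p) / (p + 1) for p in range(m + 1)])
wnc = np.linalg.solve(powers, moments)

print("CC error   :", abs(cc - exact))
print("Gauss error:", abs(gauss - exact))
print("NC weights : min =", wnc.min(), " sum|w| =", np.abs(wnc).sum())
```

Clenshaw–Curtis and Gauss both deliver many correct digits here, while the Newton–Cotes weights at this degree are already pathological: some are large and negative, and the sum of their absolute values is huge, so cancellation destroys the rule's accuracy.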
Although this is an introductory book, and assumes no previous experience in numerical analysis, it is not an introductory numerical analysis book and may have a hard time finding a place in undergraduate curricula. Introductory numerical analysis textbooks tend to be survey courses. The present book instead takes one idea and runs with it, touching on many aspects of numerical analysis along the way. In particular there is nothing here about roundoff error, about numerical linear algebra (although there are many references to eigenvectors and singular value decompositions), or about differential equations. A good modern introductory book, also based on Matlab and presenting a survey of the whole field, is Timothy Sauer’s Numerical Analysis.
Allen Stenger is a math hobbyist and retired software developer. He is webmaster and newsletter editor for the MAA Southwestern Section and is an editor of the Missouri Journal of Mathematical Sciences. His mathematical interests are number theory and classical analysis. He volunteers in his spare time at MathNerds.org, a math help site that fosters inquiry learning.