Publisher: Springer

Publication Date: 2009

Number of Pages: 625

Format: Hardcover

Edition: 3

Series: Texts in Applied Mathematics 39

Price: 79.95

ISBN: 9781441904577

Category: Textbook

The Basic Library List Committee suggests that undergraduate mathematics libraries consider this book for acquisition.

[Reviewed by Andrew Locascio, on 10/10/2010]

I have a terrible confession to make. I never really learned numerical analysis. Oh sure, if you check my academic record, you’ll see that I took a graduate-level course in numerical analysis and did quite well in it. But the sad truth is I never really *learned* it.

The course I took was notorious for being a “soft” course. All you had to do was program the appropriate algorithms into your calculator and you had a pretty good chance of passing. It’s a shame, because numerical analysis is one of the jewels of modern mathematics. Probably nowhere else does modern analysis with rigorous foundations demonstrate its true power in the solution of practical problems. When some neophyte who is struggling in undergraduate real analysis asks in exasperation, “What’s all this good for?” we could do a lot worse than to show them how to approximate the solution of a nonlinear system of ordinary differential equations arising in a turbulence problem, or a Newton-Cotes approximation to an integral that can’t be evaluated in closed form. This is “meaningful” mathematics at its clearest and most impressive, and none of it would be possible without a precise formulation of how real-valued maps behave on their underlying spaces.

Of course, numerical analysis has in recent years been very intimately tied to computer science, leading to advances in algorithm design and information coding in order to make feasible gargantuan calculations that would take eons for humans to even begin to complete. As a result, numerical analysis now is as much computer programming theory as mathematical analysis, and it certainly lends itself well to a number of perspectives and approaches in teaching. My original point was that I did myself a great disservice not taking it more seriously.

Unfortunately, the sheer vastness of the field provides a pedagogical justification for “soft” courses in numerical analysis (besides the usual “maximal enrollment, minimal student intellect” reason). How does one begin to design a numerical analysis course with backbone? Does one take the high-tech approach of functional analysis, describing full-fledged Banach spaces of operators and their associated approximation theorems and topologies? There are certainly many graduate courses that do exactly this. Indeed, many of the basic results of functional analysis were discovered precisely to provide a theoretical justification for numerical methods in the solution of differential and integral equations.

The problem with this high-tech approach is twofold. First, it requires considerable background in graduate-level analysis. This means many graduate students specializing in numerical methods cannot begin research immediately, which is problematic in many programs, where progress toward the doctorate hinges largely on research. Second, there are very real questions as to how much of that considerable background in analysis prospective researchers in numerical methods actually need. When I asked my professor why the course was so applied, he told me that many applied mathematicians get by in their research with just a good understanding of calculus and excellent computer skills.

I’m a bit old fashioned, as I’ve said several times in previous reviews — I believe in learning subjects in their fullest detail *alongside* their applications, both being of equal importance. Unfortunately, the practical restraints of graduate programs and actual research make this sadly unrealistic. Still, graduate students looking to specialize in numerical methods really should have a better depth of understanding than strong undergraduates do. Otherwise, how can we really expect them to advance the field’s foundations and develop new methods that are based on analytically sound principles?

Which brings me to a thick yellow volume called *Theoretical Numerical Analysis: A Functional Analysis Framework* by Kendall Atkinson and Weimin Han. This is the third edition, based on a graduate course taught by the authors at the University of Iowa for many years. Atkinson is an eminent numerical analyst and is the author, also with Han, of a very popular basic textbook on the subject. That book is noted for its careful development, requiring a strong background in calculus and basic approximation theory. The authors state their strategy in the preface as follows:

The material in the text is presented in a mixed manner. Some topics are treated with complete rigor, whereas others are simply presented without proof and perhaps illustrated (e.g. the principle of uniform boundedness)… We describe some of the standard material on measure theory and distribution theory in an intuitive manner, believing this is sufficient for much of the subsequent mathematical development. In addition, we give a number of deeper results without proof, citing the existing literature. Examples of this are the open mapping theorem, Hahn-Banach theorem, the principle of uniform boundedness, and a number of the results on Sobolev spaces.

This excerpt is somewhat misleading: they are *not* suggesting that students can avoid the standard first- and second-year graduate courses in analysis by taking their course. Indeed, they say elsewhere in the preface that they fully expect students to be taking these courses. The point is to give them the working knowledge of functional analysis, approximation theory, and harmonic analysis from a modern point of view needed to begin research immediately. The result is thus a supplement and reference for mathematicians and graduate students specializing in numerical methods. Such a book by its very nature is going to be a bit fractured and uneven in presentation. That being said, I found the book quite well written and informative.

Chapter 1 reviews the basic definitions of linear spaces and geometry and analysis on such spaces. Chapter 2 discusses linear operators and dual spaces, with emphasis on smooth mappings. Since the goal is partly to cover tools needed for an advanced study of the numerical solution of differential and integral equations, this is an appropriate choice. The essential function space tools of functional analysis are presented in detail in this chapter. The major result given and fully proved is the Riesz representation theorem. Chapter 3 gives a fairly complete introduction to approximation theory in function spaces. The Stone-Weierstrass Theorem, interpolation theory on Banach spaces and the best approximation problem via convex subsets and functions in linear spaces are covered. The discussions of these topics are particularly lucid and interesting. The authors then extend these results to best approximation on general inner product spaces.
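A small illustration of the best approximation idea (my own sketch, not code from the book; the function `l2_projection` and its setup are invented for the example): in an inner product space, the best approximation from a finite-dimensional subspace is the orthogonal projection, computed here onto span{1, x} using the first two Legendre polynomials.

```python
import math

def l2_projection(f, n=2000):
    """Best L^2(-1,1) approximation of f from span{1, x}: the orthogonal
    projection c0*P0 + c1*P1 onto the orthogonal Legendre polynomials
    P0 = 1, P1 = x, with inner products computed by the midpoint rule."""
    h = 2.0 / n
    xs = [-1.0 + (i + 0.5) * h for i in range(n)]
    c0 = sum(f(x) for x in xs) * h / 2.0            # <f,P0>/<P0,P0>, <P0,P0> = 2
    c1 = sum(f(x) * x for x in xs) * h * 3.0 / 2.0  # <f,P1>/<P1,P1>, <P1,P1> = 2/3
    return lambda x: c0 + c1 * x

# Project e^x: the known coefficients are c0 = sinh(1), c1 = 3/e.
p = l2_projection(math.exp)
```

Because P0 and P1 are orthogonal, each coefficient is an independent inner product; no linear system needs to be solved.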

Chapter 4 gives one of the most concrete and readable introductions to Fourier and wavelet analysis I’ve ever seen. There are numerous examples and graphs of Fourier and wavelet decompositions. There are lots of accessible introductions to this immensely important area of analysis now (e.g., Körner or Stein-Shakarchi), but this chapter may be the best short introduction in the literature.
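To make the wavelet idea concrete, here is a minimal sketch of my own (not from the book) of the discrete Haar decomposition: each pass splits the signal into pairwise averages (the coarse approximation) and pairwise differences (the detail coefficients).

```python
def haar_transform(v):
    """Full Haar wavelet decomposition of a vector whose length is a power
    of two. Returns [overall average, coarsest details, ..., finest details]."""
    v = list(v)
    details = []
    while len(v) > 1:
        avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
        det = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
        details = det + details  # coarser scales end up in front
        v = avg
    return v + details

# A constant signal has no detail at any scale.
print(haar_transform([5, 5, 5, 5]))  # [5.0, 0.0, 0.0, 0.0]
```

Smooth signals concentrate their energy in the coarse coefficients, which is the starting point for the multiresolution analysis the chapter develops.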

Chapter 5 is a long chapter on nonlinear equations and their solution by iterative methods, primarily the Banach fixed-point theorem and its various corollaries and analogues. More than half the chapter is a well-developed presentation of the differential calculus of nonlinear operators, with emphasis on Kantorovich’s theory of Newton’s method in Banach spaces.
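The contraction-mapping idea driving the chapter is easy to sketch in code (my own illustration, not the authors’): iterate x_{k+1} = g(x_k), which converges when g is a contraction, and recover Newton’s method by the particular choice g(x) = x − f(x)/f′(x).

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{k+1} = g(x_k); by the Banach fixed-point theorem this
    converges to the unique fixed point when g is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method viewed as fixed-point iteration with
    g(x) = x - f(x)/f'(x)."""
    return fixed_point(lambda x: x - f(x) / df(x), x0, tol, max_iter)

# Solve cos(x) = x: g = cos is a contraction near the root.
root = fixed_point(math.cos, 1.0)
# Solve x^2 - 2 = 0 by Newton's method.
sqrt2 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The same iteration, with x ranging over a Banach space of functions rather than the reals, is exactly the mechanism behind the existence theorems for integral and differential equations in this chapter.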

Chapter 6 concerns finite difference methods for solving equations. This rather cumbersome but very useful approximation method exploits the vector space structure of linear systems to reduce an analytic problem to a straightforward algebraic one whose solution is “close” to the actual one. Chapter 7 gives a brief foray into Sobolev spaces and their critical role in nonlinear analysis. This is probably the tersest chapter in the book. Its real strength is an enormous number of concrete examples and exercises that are usually missing from the standard abstract treatments, including a zigzag function, plane graphs of Lipschitz domains, and piecewise smooth continuous functions.
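A minimal instance of the finite difference idea (my own sketch; the function name and setup are invented for illustration): discretize −u″ = f on (0,1) with central differences, which replaces the differential equation by a tridiagonal linear system solvable in O(n) operations.

```python
import math

def solve_bvp_fd(f, n):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, by central differences on
    n interior points; the tridiagonal system is solved by the Thomas
    algorithm (forward elimination, back substitution)."""
    h = 1.0 / (n + 1)
    # (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f(x_i) at x_i = (i+1)h
    a = [-1.0] * n                                  # sub-diagonal
    b = [2.0] * n                                   # main diagonal
    c = [-1.0] * n                                  # super-diagonal
    d = [h * h * f((i + 1) * h) for i in range(n)]  # right-hand side
    for i in range(1, n):                           # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                                   # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# -u'' = pi^2 sin(pi x) has the exact solution u(x) = sin(pi x).
u = solve_bvp_fd(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
```

With mesh width h = 0.01 the computed values agree with the exact solution to roughly O(h²), the convergence order the chapter establishes for this scheme.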

Chapter 8 discusses weak formulations of elliptic boundary value problems and variational problems. The weak form of the Dirichlet boundary value problem for the Poisson equation is studied in depth, as are some related estimates for linear operators on normed spaces. An interesting presentation of the Lax-Milgram Lemma is given. The Lemma is then applied in a number of detailed examples, including a boundary value problem in linearized elasticity.
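For readers who want the statement at hand, the Lax-Milgram Lemma in its usual form:

```latex
\textbf{Lemma (Lax-Milgram).} Let $V$ be a real Hilbert space and let
$a : V \times V \to \mathbb{R}$ be a bilinear form that is bounded,
$|a(u,v)| \le M\,\|u\|\,\|v\|$, and coercive, $a(v,v) \ge \alpha\,\|v\|^2$
for some $\alpha > 0$. Then for every bounded linear functional
$\ell \in V'$ there exists a unique $u \in V$ with $a(u,v) = \ell(v)$
for all $v \in V$, and $\|u\| \le \|\ell\|_{V'}/\alpha$.
```

The weak Dirichlet problem fits this template with a(u,v) the Dirichlet integral and ℓ the load functional, which is why the Lemma delivers existence and uniqueness in one stroke.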

Chapter 9 gives a very complete development of the Petrov-Galerkin (PG) numerical methods for the linear operators in such problems and their generalizations, beginning with the Ritz minimization of the energy functional and generalizing successively to the PG methods via repeated applications of the Lax-Milgram lemma. This gives a fully rigorous presentation that’s more concrete than most other treatments and contains a large number of worked examples.

Chapter 10 gives a presentation of finite element methods. I particularly liked this chapter, as it stresses the intuition behind the method as basically a generalization of the PG methods. Sparse matrices are observed to give fairly stable approximations when the PG methods are applied. This leads to the idea of using piecewise linear basis functions of small support to minimize the “blowing up” of the approximations. This is a very useful perspective. The method is then described in detail, using one-dimensional examples first and then moving to higher dimensions, which require triangulations of the domains of the systems.
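The “small support” point is visible in a few lines of code (a sketch of my own, not the book’s; `assemble_stiffness` is an invented name): assembling the stiffness matrix for −u″ = f with piecewise linear hat functions on a uniform mesh, each basis function overlaps only its two neighbours, so only the tridiagonal band is ever touched.

```python
def assemble_stiffness(n):
    """Assemble the stiffness matrix for -u'' = f on (0,1) with n+1 uniform
    elements and piecewise-linear hat basis functions at the n interior
    nodes. Element-by-element assembly with the local stiffness matrix
    (1/h) * [[1, -1], [-1, 1]] produces a tridiagonal global matrix."""
    h = 1.0 / (n + 1)
    A = [[0.0] * n for _ in range(n)]
    for e in range(n + 1):                              # loop over elements
        for i_local, i in enumerate((e - 1, e)):        # the only two hats
            for j_local, j in enumerate((e - 1, e)):    # alive on element e
                if 0 <= i < n and 0 <= j < n:
                    A[i][j] += (1.0 / h) * (1 if i_local == j_local else -1)
    return A

A = assemble_stiffness(5)
# Only the tridiagonal band is populated: 3n - 2 nonzero entries.
nonzeros = sum(1 for row in A for v in row if v != 0.0)
```

The sparsity is what keeps the linear systems tractable as the mesh is refined; a globally supported basis would fill the whole matrix.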

Chapter 11 discusses elliptic variational inequalities and the convergence, existence and uniqueness of their numerical approximations in general spaces. The chapter concludes with a fascinating lengthy physical application to a rigid boundary elasticity problem with frictional contact at the foundation. Physics students will really like this part of the book. I wish there’d been more examples like this.

Chapter 12 is by far the longest chapter in the book. It discusses one of the most important classes of approximation problems in analysis: the numerical solution of Fredholm integral equations of the second kind. Here the authors discuss very fully the two most important classes of numerical methods for these equations: projection methods and Nyström methods. Along this lengthy road, many, many illustrative examples of these methods are given, including piecewise linear and trigonometric collocations, Galerkin methods with trigonometric polynomials, error analysis of a 3-point Nyström approximation, and optimization and linearization methods. This chapter will be particularly useful for supplying conceptual insight to graduate students studying the abstract theory of integral equations.
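The Nyström idea is short enough to sketch (my own illustration, not the book’s; the test equation and node count are invented for the example): replace the integral in u(x) − ∫ k(x,t)u(t)dt = g(x) by a quadrature rule and solve the resulting finite linear system for u at the quadrature nodes.

```python
def nystrom(k, g, a, b, n):
    """Nystrom method for u(x) - int_a^b k(x,t) u(t) dt = g(x): discretize
    the integral with an n-node trapezoidal rule, then solve the dense
    linear system (I - K) u = g at the nodes by Gaussian elimination."""
    h = (b - a) / (n - 1)
    x = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2                      # trapezoidal weights
    A = [[(1.0 if i == j else 0.0) - w[j] * k(x[i], x[j]) for j in range(n)]
         for i in range(n)]
    rhs = [g(xi) for xi in x]
    for col in range(n):                      # elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            rhs[r] -= m * rhs[col]
    u = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = (rhs[r] - s) / A[r][r]
    return x, u

# u(x) - int_0^1 x*t*u(t) dt = x has the exact solution u(x) = 3x/2.
x, u = nystrom(lambda s, t: s * t, lambda s: s, 0.0, 1.0, 41)
```

The error of the computed solution inherits the order of the underlying quadrature rule, which is precisely the kind of result the chapter proves in general.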

Chapter 13 follows up the finite element treatment of Laplace’s equation in Chapter 10 by reformulating the equation as a boundary integral equation. This is an important method: some of the most important boundary value problems for elliptic partial differential equations have been studied and solved numerically by this approach. The many historical digressions in this chapter are brief but enlightening.

The final chapter covers the extension of the approximation of univariate functions by polynomials and trigonometric functions to multivariable functions and multivariable polynomials. The chapter is quite brief and only gives an introduction to this vast and very difficult area of modern analysis, which is being very actively researched by both analysts and computer scientists. To maximize clarity and simplicity, Atkinson and Han wisely emphasize planar problems and include many pictures of such regions and their solution subspaces. In particular, they focus on the unit disk as the principal approximation domain of interest. This leads to many interesting pictures and concrete examples.

I’m surprised how much I liked this book. I generally don’t like “toolbox courses” for applied mathematicians because they reinforce the strict separation of pure theoretical mathematics and its applications — a concept I abhor. But in this case, it works. As the authors said, this book is not intended to *replace* the standard graduate courses in analysis, but to *supplement* them by gathering a very specific collection of results in a single source for students and researchers.

Even within the strict limitations the authors place on themselves, the result is a treasure trove of modern analysis and its methods. It is beautifully organized and written. The treatment is very rigorous despite the many omitted proofs — albeit much more concrete than the usual graduate texts on this material. That alone will make the book very valuable. The pace is quite leisurely, and as a result, the book is very pleasant to read. Given the range of the book, that’s pretty impressive.

It is clear that Atkinson and Han have been teaching this course for some time. They have developed in that time a very clear idea of exactly what they want to cover and how, which points to emphasize and which to defer to other sources. There are tons and tons of well-chosen explicit examples.

The exercises range in difficulty from very easy to research level, with a number being used to present related material not directly covered. Indeed, entire aspects of the theory are sometimes developed or reviewed in the exercises. For example, Chapter 5 contains a rapid course in advanced methods of ordinary differential equations in its exercises.

The book discusses a number of unusual topics not present in the standard analysis books. For example, I was surprised to find Atkinson and Han discussing the Fréchet and Gâteaux derivatives at length, including their associated mean value theorems and convex minimization — topics of great importance in functional analysis that rarely get a textbook treatment anymore. Chapter 7 concludes with a fascinating discussion of a generalized “integration by parts” theorem for functions in Sobolev spaces. I’d never seen this result before. Nuggets of important but uncommon topics like this make the book very useful.

The book’s most impressive quality is the very detailed references. I don’t think I’ve ever seen an advanced mathematics text with this level of diligence and completeness given to its source material. Even more amazing is that each reference is described by the authors in some detail, as if they were personally familiar with each book and paper! This gives the book a tremendous level of scholarship most texts don’t have.

A few quibbles: it’s not clear what the prerequisites for the book are. My impression is that since this is a course taken by graduate students (and very strong undergraduates, obviously), one would expect students to have command of a theoretical presentation of one-variable calculus à la Spivak or Ross. Since the properties of complete normed spaces are used repeatedly and linear mappings are emphasized, a good course in linear algebra would be required as well. The authors should make this clear in future editions. Lastly, since such a huge amount of material is covered, historical digressions would make a very nice addition. There are some chapters where this is done — Chapter 13 in particular — but more would make the presentation even smoother and would help unify all this material under a historical umbrella.

Those suggestions aside, I think this is an awesome book that will not only be a very useful textbook and reference, but a very *versatile* one as well. It can be used for a course of the kind Atkinson and Han teach; it can be used for reading courses in functional analysis for strong senior undergraduates who aren’t quite ready for a graduate analysis course; and it can be used as a reference for analysts of all stripes. I got a lot from reading it, and so will you.

Andrew Locascio is currently a man without a country as he prepares to finally take his Master’s qualifying exams in pure mathematics at Queens College of the City University of New York in January. He hopes to work as an adjunct and publish mathematical research next year while waiting for his PhD applications to be processed.

Series Preface

Preface

1 Linear Spaces

1.1 Linear spaces

1.2 Normed spaces

1.2.1 Convergence

1.2.2 Banach spaces

1.2.3 Completion of normed spaces

1.3 Inner product spaces

1.3.1 Hilbert spaces

1.3.2 Orthogonality

1.4 Spaces of continuously differentiable functions

1.4.1 Hölder spaces

1.5 L^{p} spaces

1.6 Compact sets

2 Linear Operators on Normed Spaces

2.1 Operators

2.2 Continuous linear operators

2.2.1 L(V,W) as a Banach space

2.3 The geometric series theorem and its variants

2.3.1 A generalization

2.3.2 A perturbation result

2.4 Some more results on linear operators

2.4.1 An extension theorem

2.4.2 Open mapping theorem

2.4.3 Principle of uniform boundedness

2.4.4 Convergence of numerical quadratures

2.5 Linear functionals

2.5.1 An extension theorem for linear functionals

2.5.2 The Riesz representation theorem

2.6 Adjoint operators

2.7 Weak convergence and weak compactness

2.8 Compact linear operators

2.8.1 Compact integral operators on C(D)

2.8.2 Properties of compact operators

2.8.3 Integral operators on L^{2}(a, b)

2.8.4 The Fredholm alternative theorem

2.8.5 Additional results on Fredholm integral equations

2.9 The resolvent operator

2.9.1 R(λ) as a holomorphic function

3 Approximation Theory

3.1 Approximation of continuous functions by polynomials

3.2 Interpolation theory

3.2.1 Lagrange polynomial interpolation

3.2.2 Hermite polynomial interpolation

3.2.3 Piecewise polynomial interpolation

3.2.4 Trigonometric interpolation

3.3 Best approximation

3.3.1 Convexity, lower semicontinuity

3.3.2 Some abstract existence results

3.3.3 Existence of best approximation

3.3.4 Uniqueness of best approximation

3.4 Best approximations in inner product spaces, projection on closed convex sets

3.5 Orthogonal polynomials

3.6 Projection operators

3.7 Uniform error bounds

3.7.1 Uniform error bounds for L^{2}-approximations

3.7.2 L^{2}-approximations using polynomials

3.7.3 Interpolatory projections and their convergence

4 Fourier Analysis and Wavelets

4.1 Fourier series

4.2 Fourier transform

4.3 Discrete Fourier transform

4.4 Haar wavelets

4.5 Multiresolution analysis

5 Nonlinear Equations and Their Solution by Iteration

5.1 The Banach fixed-point theorem

5.2 Applications to iterative methods

5.2.1 Nonlinear algebraic equations

5.2.2 Linear algebraic systems

5.2.3 Linear and nonlinear integral equations

5.2.4 Ordinary differential equations in Banach spaces

5.3 Differential calculus for nonlinear operators

5.3.1 Fréchet and Gâteaux derivatives

5.3.2 Mean value theorems

5.3.3 Partial derivatives

5.3.4 The Gâteaux derivative and convex minimization

5.4 Newton’s method

5.4.1 Newton’s method in Banach spaces

5.4.2 Applications

5.5 Completely continuous vector fields

5.5.1 The rotation of a completely continuous vector field

5.6 Conjugate gradient method for operator equations

6 Finite Difference Method

6.1 Finite difference approximations

6.2 Lax equivalence theorem

6.3 More on convergence

7 Sobolev Spaces

7.1 Weak derivatives

7.2 Sobolev spaces

7.2.1 Sobolev spaces of integer order

7.2.2 Sobolev spaces of real order

7.2.3 Sobolev spaces over boundaries

7.3 Properties

7.3.1 Approximation by smooth functions

7.3.2 Extensions

7.3.3 Sobolev embedding theorems

7.3.4 Traces

7.3.5 Equivalent norms

7.3.6 A Sobolev quotient space

7.4 Characterization of Sobolev spaces via the Fourier transform

7.5 Periodic Sobolev spaces

7.5.1 The dual space

7.5.2 Embedding results

7.5.3 Approximation results

7.5.4 An illustrative example of an operator

7.5.5 Spherical polynomials and spherical harmonics

7.6 Integration by parts formulas

8 Weak Formulations of Elliptic Boundary Value Problems

8.1 A model boundary value problem

8.2 Some general results on existence and uniqueness

8.3 The Lax-Milgram Lemma

8.4 Weak formulations of linear elliptic boundary value problems

8.4.1 Problems with homogeneous Dirichlet boundary conditions

8.4.2 Problems with non-homogeneous Dirichlet boundary conditions

8.4.3 Problems with Neumann boundary conditions

8.4.4 Problems with mixed boundary conditions

8.4.5 A general linear second-order elliptic boundary value problem

8.5 A boundary value problem of linearized elasticity

8.6 Mixed and dual formulations

8.7 Generalized Lax-Milgram Lemma

8.8 A nonlinear problem

9 The Galerkin Method and Its Variants

9.1 The Galerkin method

9.2 The Petrov-Galerkin method

9.3 Generalized Galerkin method

9.4 Conjugate gradient method: variational formulation

10 Finite Element Analysis

10.1 One-dimensional examples

10.1.1 Linear elements for a second-order problem

10.1.2 High order elements and the condensation technique

10.1.3 Reference element technique

10.2 Basics of the finite element method

10.2.1 Continuous linear elements

10.2.2 Affine-equivalent finite elements

10.2.3 Finite element spaces

10.3 Error estimates of finite element interpolations

10.3.1 Local interpolations

10.3.2 Interpolation error estimates on the reference element

10.3.3 Local interpolation error estimates

10.3.4 Global interpolation error estimates

10.4 Convergence and error estimates

11 Elliptic Variational Inequalities and Their Numerical Approximations

11.1 From variational equations to variational inequalities

11.2 Existence and uniqueness based on convex minimization

11.3 Existence and uniqueness results for a family of EVIs

11.4 Numerical approximations

11.5 Some contact problems in elasticity

11.5.1 A frictional contact problem

11.5.2 A Signorini frictionless contact problem

12 Numerical Solution of Fredholm Integral Equations of the Second Kind

12.1 Projection methods: General theory

12.1.1 Collocation methods

12.1.2 Galerkin methods

12.1.3 A general theoretical framework

12.2 Examples

12.2.1 Piecewise linear collocation

12.2.2 Trigonometric polynomial collocation

12.2.3 A piecewise linear Galerkin method

12.2.4 A Galerkin method with trigonometric polynomials

12.3 Iterated projection methods

12.3.1 The iterated Galerkin method

12.3.2 The iterated collocation solution

12.4 The Nyström method

12.4.1 The Nyström method for continuous kernel functions

12.4.2 Properties and error analysis of the Nyström method

12.4.3 Collectively compact operator approximations

12.5 Product integration

12.5.1 Error analysis

12.5.2 Generalizations to other kernel functions

12.5.3 Improved error results for special kernels

12.5.4 Product integration with graded meshes

12.5.5 The relationship of product integration and collocation methods

12.6 Iteration methods

12.6.1 A two-grid iteration method for the Nyström method

12.6.2 Convergence analysis

12.6.3 The iteration method for the linear system

12.6.4 An operations count

12.7 Projection methods for nonlinear equations

12.7.1 Linearization

12.7.2 A homotopy argument

12.7.3 The approximating finite-dimensional problem

13 Boundary Integral Equations

13.1 Boundary integral equations

13.1.1 Green’s identities and representation formula

13.1.2 The Kelvin transformation and exterior problems

13.1.3 Boundary integral equations of direct type

13.2 Boundary integral equations of the second kind

13.2.1 Evaluation of the double layer potential

13.2.2 The exterior Neumann problem

13.3 A boundary integral equation of the first kind

13.3.1 A numerical method

14 Multivariable Polynomial Approximations

14.1 Notation and best approximation results

14.2 Orthogonal polynomials

14.2.1 Triple recursion relation

14.2.2 The orthogonal projection operator and its error

14.3 Hyperinterpolation

14.3.1 The norm of the hyperinterpolation operator

14.4 A Galerkin method for elliptic equations

14.4.1 The Galerkin method and its convergence

References

Index
