The preface begins "Over the years, I have become more and more dissatisfied with our advanced calculus course [at the University of Maryland]. In most books used for this type of course, theorems are proved to prove more theorems. An 'application' of a theorem is either a trivial calculation or a piece of the proof for another theorem. There are no examples or exercises that use the methods of analysis to solve a real problem. The traditional advanced calculus course has little or no contact with the world outside of mathematics." After a paragraph about computing (Cooper is in favor of it), it continues "My goals in writing this book are to teach the techniques and results of analysis and to show how they can be applied to solve problems. I have gone outside of the usual range of applications in physics to include examples from biology, sociology, chemistry, and economics."
When I first started browsing through the book it occurred to me that I might find the preface to be the most interesting part, but the more closely I looked at it the more I came to admire it. The author's taste in analysis is not quite orthogonal to my own, but the projections of each on the other are not large. Cooper does what he does carefully and well, with a few exceptions, and I always like to see a mathematics book with a point of view, even one I do not share.
The book is divided into two parts, one for each semester. Part I presents the one-dimensional theory, while Part II treats functions of several variables. Most of the applications are in Part II, but Part I also has some unusual features. Chapter 1 discusses supremums and infimums, inequalities, and induction. The section on induction seems a little skimpy to me, but the author does derive one of his favorite tools there, the Bernoulli inequality: (1 + x)^n is at least as large as 1 + nx when n is at least 1 and x is at least -1.
Chapter 2 is about sequences. In Example 2.3 Cooper uses Bernoulli's inequality to get an upper bound on nr^n for 0 < r < 1.
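One derivation along these lines (my own reconstruction, not necessarily Cooper's): write r = 1/(1+h) with h > 0; Bernoulli gives (1+h)^n ≥ 1 + nh > nh, hence r^n < 1/(nh) and nr^n < 1/h = r/(1−r). A quick check:

```python
def bernoulli_bound(r):
    """Upper bound on n*r**n from Bernoulli's inequality:
    with r = 1/(1+h), (1+h)**n >= 1 + n*h > n*h, so n*r**n < 1/h = r/(1-r)."""
    return r / (1.0 - r)

for r in (0.1, 0.5, 0.9, 0.99):
    for n in range(1, 200):
        assert n * r**n < bernoulli_bound(r)
```

Applying the same bound with √r in place of r shows that nr^n tends to 0, since nr^n = (n(√r)^n)(√r)^n is then bounded by a constant times (√r)^n.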
A nice feature in chapter 2 is the arithmetic-geometric mean iteration, and Example 2.8 is also good. It contains a rare instance of the author using something that he doesn't prove until later, namely that |sin x – sin y| is at most |x – y|, which he wants to get from the mean value theorem. One might set x = u + v and y = u – v instead, to reduce to the statement that |(cos u)(sin v)| does not exceed |v|, and then recall the geometric proof that |sin x| does not exceed |x| that is usually done in first semester calculus. This would also come in handy in Example 3.2. Although the student is supposed to have had three semesters of calculus, Cooper actually assumes very little specific knowledge of it, perhaps because of his years of teaching experience. The finite geometric series, which one might ascribe to precalculus instead, is a rare exception. Another, in Example 1.4, is the sum of the first n positive integers.
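The suggested reduction is easy to sanity-check numerically. The script below (mine, purely illustrative) verifies the product identity sin x − sin y = 2 cos u sin v under the substitution x = u + v, y = u − v, and the resulting Lipschitz bound, at random points:

```python
import math
import random

random.seed(0)
for _ in range(10000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    u, v = (x + y) / 2, (x - y) / 2          # so that x = u + v and y = u - v
    lhs = math.sin(x) - math.sin(y)
    # Product identity: sin(u+v) - sin(u-v) = 2 cos(u) sin(v).
    assert math.isclose(lhs, 2 * math.cos(u) * math.sin(v), abs_tol=1e-9)
    # Since |cos(u) sin(v)| <= |v| and x - y = 2v, the Lipschitz bound follows.
    assert abs(lhs) <= abs(x - y) + 1e-9
```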
There are two minor but significant errors in chapter 4, on differentiation. (Chapter 3 is on continuity.) In Example 4.3 the statement that f'(x) > 0 for all x is obviously false at x = 0, and if it were true then the last sentence in the example would make no sense. In exercise 11 four pages later v' should be sin u, not –sin u. (As exercise 12 shows, it's really –(–sin u).) A highlight of this chapter is an unusually careful treatment of L'Hôpital's rule, including not only the 0/0 case (via Cauchy's extension of the mean value theorem) but also infinity/infinity, but there are only a few exercises on it.
Chapter 5 is on higher derivatives and polynomial approximation. I was happy to see Lagrange interpolation in section 5.3. Cooper says that one can't get closed form solutions of the pendulum equation x'' + sin x = 0, and thereby passes up an opportunity to mention elliptic functions. He is less than usually careful in Example 5.4, where he should say at the outset that λ is positive and, more importantly, he should have u'(0) = 0 rather than u(0) = 0.
Cooper discusses convex functions in section 5.4, but neglects to say that a calculus student would think of them as "concave up". What a calculus student would call "concave down" he calls "concave" in this section and "convex down" on page 207. In the absence of any explanation, I would find this pretty annoying if I were a student.
Cooper has much more on numerical analysis, a subject that has always left me cold, than most analysis textbooks. Chapter 6 is on solving equations numerically. Greater use is made of Lipschitz continuity here than in most books, but Definition 6.1 and the sentence that follows it define only a Lipschitz constant, not the Lipschitz constant, and Definition 10.5 (the multivariable case) has the same problem.
Chapter 7 is on integration, and about 23% of it is on numerical methods. Perhaps the biggest divergence between Cooper's tastes and mine is that he does not like to do integrals, although he is happy to estimate them. (He is also not very interested in the history of his subject.) Problem 7 in section 7.4 is substantial: Cooper defines a function f(c) as the integral on (0,1) with respect to x of 1/(1 + exp(cx)), and the point is to find, by numerical methods, a positive value of c such that f(c) = 1/4. No hint is given that the problem can be solved exactly. The integral evaluates easily after multiplication by exp(-cx)/exp(-cx), and one then has to solve u^5 – 2u^4 + u = 0, where u = exp(c/4). Dividing out the extraneous roots u = 0 and u = 1 we are left with u^3 – u^2 – u – 1 = 0, which has one real root, evidently positive, that may be found by whatever method you like for solving a cubic equation. Similarly, the double integral in Example 14.5, about which Cooper says only that it "can be estimated easily using a two-dimensional version of Simpson's rule", can be found exactly by elementary methods (granted that it takes a while). I understand that integration in closed form is not one of the central concerns of modern analysis, but why not at least add this as an exercise? Even if the author would never assign it, someone else might.
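For readers who want to see the numbers, here is a short script (mine, not from the book) that finds the real root of the cubic by bisection, recovers c = 4 log u, and confirms both the closed form and, by the midpoint rule, the original integral:

```python
from math import exp, log

# Real root of u**3 - u**2 - u - 1 = 0 by bisection; it lies between 1 and 2.
def cubic_root():
    p = lambda u: u**3 - u**2 - u - 1
    lo, hi = 1.0, 2.0          # p(1) = -2 < 0 and p(2) = 1 > 0
    for _ in range(100):
        mid = (lo + hi) / 2
        if p(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

u = cubic_root()
c = 4 * log(u)

# Multiplying the integrand by exp(-c*x)/exp(-c*x) gives the antiderivative
# -(1/c)*log(1 + exp(-c*x)), hence the closed form for f(c):
f_closed = (log(2) - log(1 + exp(-c))) / c

# Midpoint-rule check of the integral itself.
N = 100000
f_numeric = sum(1.0 / (1.0 + exp(c * (i + 0.5) / N)) for i in range(N)) / N

assert abs(f_closed - 0.25) < 1e-12
assert abs(f_numeric - 0.25) < 1e-6
```

The root is u ≈ 1.8393 (the so-called tribonacci constant), giving c ≈ 2.4376.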
Section 7.5 is on improper integrals. Part e of exercise 1 is very nice, and the hint in the back is fine (approximate the integrand near the doubtful endpoint by Taylor's theorem), but the integral can be evaluated exactly: it equals √2 log(tan u), from 0 to π/8 (after substituting x = (π/2) – 4u), so it is infinite. Exercise 5 is on the gamma function. Part d uses the fact that the integral of exp(–x^2) on the real line is the square root of π, but gives no hint of a proof. Cooper actually does prove this in section 14.6, but there is no reference from one section to the other.
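Since the fact is used without proof until section 14.6, a numerical illustration may reassure the skeptical reader. This check (my own) compares a midpoint-rule value of the Gaussian integral with √π, and with Γ(1/2) via Python's built-in gamma function:

```python
import math

# int_{-inf}^{inf} exp(-x**2) dx = sqrt(pi); truncating to [-10, 10] is safe,
# since the tails beyond contribute less than 1e-40.
N = 200000
a, b = -10.0, 10.0
h = (b - a) / N
gauss = sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(N)) * h

assert math.isclose(gauss, math.sqrt(math.pi), rel_tol=1e-6)
# Gamma(1/2) equals the same value, tying exercise 5 to section 14.6.
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi), rel_tol=1e-12)
```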
Except for an appendix on transcendental functions, Part I concludes with chapter 8, on series. I found this chapter to be dull, and this is the main reason why I had a negative first impression of the book. There is a brief treatment of Euler's constant on page 242, but the first displayed formula there is wrong (it could be fixed by inserting two sets of parentheses), and I would rather get Euler's constant from the standard integral test argument, as in section 8.1. Applied to 1/x on (1,n), it shows that
1 + 1/2 + … + 1/(n-1) > log n > 1/2 + 1/3 + … + 1/n for n>1
and that these expressions get farther apart (though not much farther) as n increases. This not only gives the result of exercise 7 on page 230, it implies that
1 + 1/2 + … + 1/n – log n is positive and decreasing for n>0, and
1 + 1/2 + … + 1/n – log(n+1) is positive and increasing for n>0.
Obviously the former is larger, and evidently they have the same limit, which is Euler's constant gamma. It follows that
log n + γ < 1 + 1/2 + … + 1/n < log(n+1) + γ for n>0,
and this is a better result than that of exercise 7 except when n = 1. (This is also a good excuse to mention Julian Havil's beautiful book Gamma.)
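These bounds are easy to confirm numerically. The following check (mine) verifies them for n up to 10000, taking the standard decimal value of γ:

```python
from math import log

GAMMA = 0.5772156649015329  # Euler's constant

H = 0.0
for n in range(1, 10001):
    H += 1.0 / n            # H is now the n-th harmonic number
    # The two-sided bound from the integral test argument:
    assert log(n) + GAMMA < H < log(n + 1) + GAMMA
```

The margins shrink like 1/(2n), so the bounds are tight as well as simple.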
The definition of interval of convergence on page 246 is sloppy. It could be improved by moving forward from page 247 a sentence about convergence at the endpoints.
The Weierstrass approximation theorem is mentioned already in chapter 5, but not proved until section 15.2. In the course of the proof Cooper derives the lower bound 2/(3√k) for a definite integral by using Bernoulli's inequality, which is good enough for his purposes, but he could have reduced the integral to essentially part c of exercise 9 in section 14.5 by substituting x= cos u. This gives the integral exactly, and by estimating this answer we could get the better bound 2/√(2k+1).
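If the integral in question is that of (1 − x^2)^k over [−1, 1], as in the usual Bernstein-polynomial proof (an assumption on my part; Cooper's normalization may differ by a constant), then the substitution x = cos u turns it into a Wallis integral with exact value given by the recurrence I_k = I_{k−1} · 2k/(2k+1), I_0 = 2, and the two lower bounds can be compared directly:

```python
from math import sqrt

# I_k = integral over [-1, 1] of (1 - x**2)**k, computed exactly via the
# Wallis recurrence I_k = I_{k-1} * 2k/(2k+1), starting from I_0 = 2.
I = 2.0
for k in range(1, 501):
    I *= 2 * k / (2 * k + 1)
    # The Bernoulli-style bound 2/(3*sqrt(k)) is weaker than 2/sqrt(2k+1),
    # and both lie below the exact value.
    assert 2 / (3 * sqrt(k)) < 2 / sqrt(2 * k + 1) < I
```

The middle inequality is elementary (9k > 2k + 1 for k ≥ 1), and the right-hand one can be proved by showing that I_k^2 (2k+1) increases with k.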
Something on the front cover bothers me. One of the key features claimed for the book on the back cover is "an informal and lively writing style." Cooper is, among mathematicians, a reasonably good writer, as the quotes from the preface show — not Halmos, Hardy, Rota or Truesdell, but comfortably above average, 80th percentile maybe. For a textbook author I would say he is about average. (Perhaps the worst writing in the book is in Example 1.4, where "formula" is used 4 times in the space of 19 words.) But "lively" is not the word that best describes the virtues of his style; "clear" is better. In a textbook, I would rather have clear: for example, Sylvester (who never wrote a book) is great fun to read in small doses, but he is often so excited about his ideas that he never gets around to explaining them very well.
"Informal" is nearly the last adjective I would use to describe the author's style. Proofs are always clearly set off, whereas in a less formal style they might often blend in smoothly with the rest of the exposition. (Many students prefer a little formality here.) They typically conclude with "The proof is complete" (11 times in the first 3 chapters) or "The theorem is proved" (14 times in the next 5 chapters). When they don't, it is often because Cooper (sensibly enough) proves only half of a theorem and says that the other half is similar. He is fond of certain constructions that are used almost exclusively in mathematical writing, such as "by an appeal to" Theorem X. I like this phrase, when used sparingly, but I wouldn't call it informal. He also typically writes "the Bernoulli inequality" or "the Euler constant" or "the Fubini theorem" or "the example of Runge" instead of using the possessive. He even refers to his own earlier book on partial differential equations as "the book of Cooper". The style is not as formal as it could be — when Cooper makes a remark, he doesn't say "Remark" — but I would say it is rather more formal than not.
This finally brings me back to the front cover, a (not unattractive) montage of numbers, text and figures. The numbers at the top come from page 414, the text below it from page 506, and the horrendous formula at the bottom is (11.16) from page 341. (At least two of the figures are also in the book, but I didn't notice the others.) The text from page 506 has been cut off at the left, and consequently Cooper's "Then by the Fubini theorem, we have" (followed by blank space and a display) appears as "by Fubini, we have" etc. I had not heard of the author before I read his book, but I am pretty sure that he would never write "by Fubini" in anything he intended to publish. I wouldn't either. Since the original sentence would fit, what am I to conclude, but that it was changed in an attempt to foster the illusion of an informal writing style?
On the other hand, I didn't find any applications to sociology in the book, so in that regard the back cover is more honest than the preface.
There is some small effort at informality in Cooper's treatment of epsilons and deltas. He sneaks in an epsilon already on pages 10-11, which I think is a good idea. Epsilons come in more formally with N's at the bottom of page 30, in the definition of a convergent sequence, and on page 35, he writes "Now we are ready for the ε-N drill" in the middle of a proof. By my count, this is the ninth ε-N argument since the top of page 31, so it was a little strange to see this phrase all of a sudden. The sentence "Let ε > 0 be given" occurs in most proofs involving ε from page 37 on, although Cooper seems to have tried to avoid it before that. Not infrequently, it is the first sentence of the proof.
Epsilon appears with delta for the first time at the top of page 61: "An alternate, equivalent, formulation of the limit of a function is stated in terms of the dread epsilons and deltas." This is not one of Cooper's better sentences, but even so, one is heartened; one expects (particularly if still looking for an informal style) that more than the usual amount of care will now be taken to motivate and explain these challenging ideas. But no, Cooper just dives right in. He is no worse than the average analyst at explaining epsilons and deltas, but his few attempts to be more engaging are so half-hearted as to be practically worthless.
The minimal amount of topology that one usually sees in analysis (open and closed sets, compactness, connectedness) occurs here in chapter 9, at the beginning of Part II, so if one were using the book for a one semester course then one might want to push this material forward.
Countability is not in the book at all. As I write this, I am just starting to teach out of Stephen Abbott's Understanding Analysis, which received an enthusiastic review from Steve Kennedy. Except for countability, the content of Abbott's first chapter and Cooper's is almost identical. While Cooper is not without his advantages (for example, Bernoulli's inequality), these chapters are a marvelous case study of the difference between competence and excellence in textbook writing.
Chapter 10 is on the derivative in several variables. Chapter 11 is on solving systems of equations, and contains the Contraction Mapping Theorem as well as the Inverse and Implicit Function Theorems. Chapters 12 and 13 are on optimization, with an unusually careful treatment of Lagrange multipliers in chapter 13; I found this to be the most interesting material in Part II. Chapter 14 is on integration in several variables, and chapter 15 on applications of integration to differential equations. From chapter 11 on, much of the book discusses applications.
This is not a book that I would be likely to select for my analysis course, but I could live with it, and there is something to be said for using a book with a different emphasis than one's own. Real analysis is such a vast subject that one has to pick and choose even in a two semester course, and Cooper has made choices that are interesting and defensible. If you found yourself in strong agreement with the quotations in my first paragraph, then his book is definitely worth looking at.
Warren Johnson (email@example.com) is visiting assistant professor of mathematics at Connecticut College.
Table of Contents

1.1 Ordered Fields
1.3 Using Inequalities
1.5 Sets and Functions
2. Sequences of Real Numbers
2.1 Limits of Sequences
2.2 Criteria for Convergence
2.3 Cauchy Sequences
3.1 Limits of Functions
3.2 Continuous Functions
3.3 Further Properties of Continuous Functions
3.4 Golden-Section Search
3.5 The Intermediate Value Theorem
4. The Derivative
4.1 The Derivative and Approximation
4.2 The Mean Value Theorem
4.3 The Cauchy Mean Value Theorem and l’Hôpital’s Rule
4.4 The Second Derivative Test
5. Higher Derivatives and Polynomial Approximation
5.1 Taylor Polynomials
5.2 Numerical Differentiation
5.3 Polynomial Interpolation
5.4 Convex Functions
6. Solving Equations in One Dimension
6.1 Fixed Point Problems
6.2 Computation with Functional Iteration
6.3 Newton’s Method
7.1 The Definition of the Integral
7.2 Properties of the Integral
7.3 The Fundamental Theorem of Calculus and Further Properties of the Integral
7.4 Numerical Methods of Integration
7.5 Improper Integrals
8.1 Infinite Series
8.2 Sequences and Series of Functions
8.3 Power Series and Analytic Functions
I.1 The Logarithm Functions and Exponential Functions
I.2 The Trigonometric Functions
9. Convergence and Continuity in R^n
9.2 A Little Topology
9.3 Continuous Functions of Several Variables
10. The Derivative in R^n
10.1 The Derivative and Approximation in R^n
10.2 Linear Transformations and Matrix Norms
10.3 Vector-Valued Mappings
11. Solving Systems of Equations
11.1 Linear Systems
11.2 The Contraction Mapping Theorem
11.3 Newton’s Method
11.4 The Inverse Function Theorem
11.5 The Implicit Function Theorem
11.6 An Application in Mechanics
12. Quadratic Approximation and Optimization
12.1 Higher Derivatives and Quadratic Approximation
12.2 Convex Functions
12.3 Potentials and Dynamical Systems
12.4 The Method of Steepest Descent
12.5 Conjugate Gradient Methods
12.6 Some Optimization Problems
13. Constrained Optimization
13.1 Lagrange Multipliers
13.2 Dependence on Parameters and Second-order Conditions
13.3 Constrained Optimization with Inequalities
13.4 Applications in Economics
14. Integration in R^n
14.1 Integration Over Generalized Rectangles
14.2 Integration Over Jordan Domains
14.3 Numerical Methods
14.4 Change of Variable in Multiple Integrals
14.5 Applications of the Change of Variable Theorem
14.6 Improper Integrals in Several Variables
14.7 Applications in Probability
15. Applications of Integration to Differential Equations
15.1 Interchanging Limits and Integrals
15.2 Approximation by Smooth Functions
15.4 Fluid Flow
A Matrix Factorization
Solutions to Selected Exercises