
Calculus Reordered

David M. Bressoud
Publisher: Princeton University Press
Publication Date: 2019
Number of Pages: 242
Format: Hardcover
Price: $29.95
ISBN: 9780691181318
Category: Monograph
BLL Rating: The Basic Library List Committee suggests that undergraduate mathematics libraries consider this book for acquisition.

Reviewed by Steven Deckelman on 08/24/2019.
It is an immense pleasure to be able to review David Bressoud’s new book, Calculus Reordered: A History of the Big Ideas. This is a book every teacher of calculus needs to read at least once. The book is first and foremost a book about the history of the big ideas that underlie calculus: Accumulation, Ratios of Change, Series, and Limits. Second, the book presents an argument for significantly revising the traditional calculus curriculum, or at least for being aware of the consequences of teaching a curriculum at variance with the historical sequence in which the big ideas of calculus came to be understood.
 
The two main takeaways from the book are:
  1. The creation of calculus was less the brainchild of a few big names like Newton and Leibniz than it was a collective effort by many people, going back to antiquity and spanning several cultures, to understand its underlying ideas. Newton and Leibniz came on the scene at a time when the new language of symbolic algebra (à la Viète and others) and Descartes’s analytic geometry enabled what came to be called the calculus to emerge. That said, it was Newton and Leibniz who both discovered and correctly recognized the significance of the inverse relationship between differentiation and integration, and it was this that unleashed the true power of calculus. By putting in place the final pieces of the puzzle of how the big ideas fit together, they pulled a vast history of at times seemingly disparate ideas into the unified whole whose refined form we today think of as calculus. In this sense, Newton and Leibniz did indeed invent calculus as a branch of mathematics.
  2. The order in which students learn ideas in the traditional calculus curriculum (Limits, Differentiation, Integration, and Infinite Series) is the reverse of the order in which these ideas came to be understood historically, as ideas about accumulation, ratios of change, series, and limits. Teaching calculus along traditional lines may yield certain efficiencies, but a great deal of the original intuition and heuristic insight is lost. This is especially problematic because the first topic, limits, is an exceptionally subtle idea.
 
Bressoud gives this example (attributed to David Tall) to illustrate the point. If \(\lim_{x\to a} f(x)=b\) and \(\lim_{y\to b} g(y)=c\), does it follow that \(\lim_{x\to a} g(f(x))=c\)? You can assume here that the composition is well defined, i.e. that the range of \(f\) is contained in the domain of \(g\). Neither \(f\) nor \(g\) is assumed continuous. Students will often conclude that it does follow, by reasoning along these lines. As \(x\to a\), \(f(x)\to b\). Writing \(y=f(x)\), we see that \(x\to a\) implies \(y\to b\). Since \(g(y)\to c\) as \(y\to b\), and \(y\to b\) happens as a consequence of \(x\to a\), the student concludes that \(\lim_{x\to a} g(f(x))=\lim_{y\to b} g(y)=c\). In brief: \(x\to a\) induces \(f(x)\to b\), \(y\to b\) induces \(g(y)\to c\), and since \(y=f(x)\), it seems \(x\to a\) must induce \(g(f(x))\to c\). But this can’t be right. A simple counterexample is to take \(f\) to be the constant function \(f(x)=b\) and \(g\) a function with \(\lim_{y\to b} g(y)=c\) but \(g(b)\neq c\). In that case \[ \lim_{x\to a} g(f(x))=\lim_{x\to a} g(b)=g(b)\neq c. \] Now here is the interesting question: what exactly is wrong with the student’s reasoning? What is the mistake or misconception? Not to be a spoiler, you can read Professor Bressoud’s book for an illuminating discussion of this example.
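For readers who want to see the failure concretely, here is a minimal numerical sketch of one such counterexample. The specific choices below (a = b = c = 0, the constant function f, and the jump in g at 0) are my own illustrative assumptions, not taken from the book.

```python
# Illustrative counterexample (my choices, not from the book): a = b = c = 0.
# f is constant, so f(x) -> b trivially; g has limit c = 0 at b = 0, but g(0) = 1.

def f(x):
    return 0.0                      # f(x) = b = 0 for every x

def g(y):
    return y if y != 0 else 1.0     # lim_{y -> 0} g(y) = 0, yet g(0) = 1

# Approach a = 0 along a shrinking sequence of x-values.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, g(f(x)))               # prints 1.0 every time: g(f(x)) -> 1, not c = 0
```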
 
To the extent that calculus is the culmination of human attempts to understand ideas like change, motion, space, time, and infinity, its roots are ancient and highlighted by the paradoxes of Zeno of Elea. As a follower of the Greek philosopher Parmenides, who was known for teaching that change is basically illusory, Zeno crafted a number of paradoxes in support of the argument that motion is an illusion. One, particularly reminiscent of calculus, is his paradox of the arrow. An arrow must always be in either a state of movement or a state of rest. At a single instant it cannot move, so it must be at rest. But if the arrow is at rest at every instant, then it must always be at rest and so cannot move. Thus motion is an illusion. This paradox has given beginning calculus students paroxysms ever since, and it wasn’t until the invention of calculus, and the rigorous foundation it was given in the nineteenth century, that the paradox was resolved, at least in mathematical circles. The problem, of course, has to do with the infinite and how to understand it properly. In the case of the arrow, it is natural to think of its whole motion as being the sum of its parts, no matter how we partition it. Euclid held that the whole is greater than the part. But what about the case when we have an infinite number of infinitely small parts? This notion of combining an infinite collection of infinitely small pieces, especially as it relates to geometry and motion, is the idea of accumulation.
 
Ratios of Change are precursors to derivatives and had their origin in attempts to understand how changes in two linked variables affect one another. For example, in interpolation: if we know that \(\sqrt{4}=2\) and \(\sqrt{9}=3\), what can we say about \(\sqrt{7}\)? We know that it lies in the interval \((2,3)\), but where? The question boils down to how much of a change in \(\sqrt{x}\) is induced by a change in \(x\). The problem of numerically interpolating trigonometric functions and logarithms leads to the same sort of questions. Investigations of these and similar questions long predated the problems of tangents and instantaneous velocity that are usually presented to students as the origin of derivatives.
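As a small illustration of the kind of reasoning involved, here is a sketch of the linear interpolation estimate for \(\sqrt{7}\); the code and the choice of Python are mine, not the book’s.

```python
# Estimate sqrt(7) from sqrt(4) = 2 and sqrt(9) = 3 by assuming the change in
# sqrt(x) is proportional to the change in x on the interval [4, 9].
import math

x0, y0 = 4.0, 2.0                  # sqrt(4) = 2
x1, y1 = 9.0, 3.0                  # sqrt(9) = 3
x = 7.0

rate = (y1 - y0) / (x1 - x0)       # ratio of change: 1/5 per unit change in x
estimate = y0 + rate * (x - x0)    # 2 + (1/5) * 3 = 2.6

print(estimate, math.sqrt(x))      # 2.6 vs. 2.6457...; the chord estimate undershoots slightly
```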
 
Infinite series (sequences of partial sums), while perhaps less controversial, had a historical significance that can be easily lost in the traditional curriculum. Euler asserted that any study of calculus must begin with the study of infinite summation, and his famous 1748 Introduction to the Analysis of the Infinite (his precalculus book) is teeming with infinite expansions of various sorts: series, products, continued fractions. Students are usually introduced to Taylor series as a device invented for approximation purposes. While this is an important application in numerical analysis, the book points out that their real significance lies in the ease with which they can be differentiated and integrated, and in their role as a representation of the emerging concept of function. It’s not hard to see why. Take, for example, the rational function \[ \frac{1}{1-x}. \] If we apply the usual (polynomial) long division algorithm to \(1\div (1-x)\), we get the familiar series \[ 1+x+x^2+x^3+\cdots. \] It is eminently natural to ask whether meaning can be attached to this expression and in what sense it might represent the rational function. An especially nice feature of the book is the wealth of historical examples, such as Leibniz’s 1693 use of power series to solve the differential equation \[ dy=\frac{a \; dx}{a+x} \] and Euler’s derivation of the Maclaurin series of logarithmic and exponential functions from the binomial theorem. Many of these examples have appeared in other books before, but here they are woven together into a unique tapestry that highlights the intuition that guided the many progenitors of calculus.
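A quick numerical check (my own sketch, not from the book) shows the sense in which the long-division series does represent \(1/(1-x)\), at least for \(|x|<1\): its partial sums approach the value of the rational function.

```python
# Partial sums of 1 + x + x^2 + ... compared with 1/(1 - x) at x = 0.5.
x = 0.5
for terms in (5, 10, 20):
    partial_sum = sum(x ** n for n in range(terms))
    print(terms, partial_sum, 1 / (1 - x))   # 1.9375, 1.998..., 1.999998... -> 2
```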
 
At last we get to limits, the bugbear of calculus. Limits are the most sophisticated and also the most pedagogically challenging of the four big ideas. The sad state of the way limits are usually taught is that they (quoting the author) “are either reduced to an intuitive notion with some validity but one that can lead to many incorrect assumptions, or their study devolves into a collection of techniques that must be memorized.” Citing Grabiner’s The Origins of Cauchy’s Rigorous Calculus, the book points out that the modern rigorous treatment of limits based on the algebra of inequalities came relatively late in the evolution of calculus. Bressoud argues, and this reviewer agrees, that expecting first-year students to absorb the \(\epsilon\)-\(\delta\) definition of limit is irresponsible, yet the ideas behind this formalization can be made accessible through the algebra of inequalities, by which is meant an emphasis on learning the language of approximations. The book does not offer many specific prescriptions for exactly how to do this, but it gives references to some incipient curriculum revisions that already exist.
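To give a sense of what “the algebra of inequalities” can look like in practice, here is an illustrative example of my own (not one drawn from the book). To show that \(\lim_{x\to 2} x^2 = 4\), one bounds the error directly: \[ |x^2-4| = |x+2|\,|x-2| \le 5\,|x-2| \quad \text{whenever } |x-2|<1, \] so for any tolerance \(\epsilon>0\), taking \(\delta=\min(1,\epsilon/5)\) guarantees that \(|x-2|<\delta\) implies \(|x^2-4|<\epsilon\). The estimate is nothing more than approximation language made precise with inequalities.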
 
Finally, the book delves into more advanced analysis and touches on some of the late nineteenth and early twentieth-century developments of concepts like uniform convergence, Riemann integration, Fourier series, and measure. There is a great deal of additional material in the book not mentioned in this review, including an appendix containing the author’s reflections on some of the pedagogical issues highlighted.

 

Steven Deckelman is a professor of mathematics at the University of Wisconsin-Stout, where he has been since 1997. He received his Ph.D. from the University of Wisconsin-Madison in 1994 for a thesis in several complex variables written under Patrick Ahern. Some of his interests include complex analysis, mathematical biology and the history of mathematics.

See the table of contents from the author's web page.