
The Strength of Nonstandard Analysis

Imme van den Berg and Vítor Neves, editors
Publisher: Springer Verlag
Publication Date: 2007
Number of Pages: 401
Format: Hardcover
Price: 129.00
ISBN: 9783211499047
Category: Anthology
[Reviewed by Michael Berg, on 08/22/2007]

Nonstandard analysis has its roots, or, more properly, its foreshadowing, in the formulation of the infinitesimal calculus, independently by Leibniz and Newton in the 17th century. Newton, performing a famous act of prudence, took great care to phrase his published arguments about fluxions and fluents (and other such calculus fauna) in purely geometric or physical terms. Was he trying to avoid the inherent difficulties he knew would be raised by the appearance of infinitesimals on the scene in this new way of doing mathematics?

Newton was right to be prudent, of course, much as Leibniz was perhaps less circumspect in this regard, for infinitesimals indeed caused controversy from the moment of their first appearance; one need but recall Bishop Berkeley’s allusion to “ghosts of departed quantities.” Scholars were immediately cognizant of the fact that the admittedly brilliantly successful new methods of Newton and Leibniz were based on notions that were in dire need of deep investigation, echoing nothing less than the concerns expressed already by the ancient Greeks in this connection. Zeno’s paradoxes come to mind right away, of course, and one is reminded of Euclid’s bizarre and unsatisfying phrasing of the notions of point and line in the Elements. Already at that early stage of history the dangers presented by the infinitely large and infinitely small were recognized and taken note of.

Still, even before the foundations were properly laid, the mathematics was right: Euclidean geometry was right, and so was the infinitesimal calculus. And it is irresistible to note that even in much more recent (and, presumably, rigorous) times the state of affairs is not dissimilar: in quantum mechanics Dirac’s delta function entered the scene well before Laurent Schwartz defined distributions.

In any case, the presence of infinities and infinitesimals in Mathematics had already been a bête noire throughout history when, in the face of the incomparable successes of 18th century analysis, this state of affairs grew nigh on intolerable. In the beautifully written foreword to the book under review, W. A. J. Luxemburg quotes Lagrange, who, in 1774, about a century after the birth of calculus, asked for “[a] clear and precise theory of what is known as ‘Infinity’ in Mathematics.” Lagrange was at the time the Head of the Mathematics section of the Berlin Academy of Sciences, and an explicit challenge was issued in the form of a prize contest to give “an explanation why it is that so many correct theorems have been deduced from [the] contradictory assumption [of the existence of an ‘infinite magnitude’].” The award was fifty ducats.

Luxemburg claims that the Berlin Academy’s contest was not properly settled until the appearance of Abraham Robinson’s famous paper, “Non-standard analysis,” in 1961, one hundred seventy-five years after the contest’s 1786 deadline. The fifty ducats did not go unclaimed, however: the Swiss mathematician Simon l’Huilier won with an essay titled “The infinite is the abyss in which our thoughts are engulfed.” But the prize committee noted explicitly that the pressing if subtle mathematical question of why so much real and correct mathematics had emerged, and was emerging still, from such ill-defined foundations had gone unaddressed in each of the essays submitted for the prize.

However, in due course, in the immensely fertile 19th century, classical analysis did of course receive a fully rigorous and gloriously elegant foundation. Says Luxemburg, “The construction of the real number system (linear continuum) by Cantor and Dedekind in 1872 and the Weierstrass ε–δ techniques gradually replaced the use of infinitesimals,” and we are on very familiar ground: this is the standard analysis we learn in school today, and it has been this way for on the order of a century. This raises the obvious question of why we should concern ourselves with non-standard analysis at this stage of mathematical history.

The question could — or should — be asked of Abraham Robinson himself, and Luxemburg provides the answer in connection with work by Thoralf Skolem concerning models of Peano arithmetic possessing “infinitely large numbers:”

Robinson, rereading Skolem’s paper, wondered what systems would emerge if he would apply Skolem’s method to the axiom system of the real numbers. In doing so, Robinson immediately realized that… in particular, the set of infinitesimals lacking a least upper bound was an external set [meaning that the set was not realizable by a formula in the corresponding formal language].

So, in a certain sense, nonstandard analysis is “natural.”
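The least-upper-bound claim deserves a word of explanation, since it is the hinge of Robinson’s observation. The following sketch is this reviewer’s gloss, not a quotation from the book: the set of infinitesimals is bounded above in the hyperreals, yet can have no supremum there, so it cannot be an internal set.

```latex
% Sketch: the set of infinitesimals is external.
% Let $\mu = \{\, x \in {}^{*}\mathbb{R} : |x| < r \ \text{for every real } r > 0 \,\}$.
% Clearly $\mu$ is bounded above (by $1$, say). Suppose $b = \sup \mu$ existed.
%
% Case 1: $b$ is infinitesimal. Then $2b$ is also infinitesimal, and $2b > b > 0$,
%         contradicting that $b$ is an upper bound for $\mu$.
%
% Case 2: $b$ is not infinitesimal. Then $b/2$ is positive and non-infinitesimal,
%         hence $x < b/2$ for every infinitesimal $x$; so $b/2$ is an upper bound
%         for $\mu$ smaller than $b$, contradicting leastness.
%
% Thus $\mu$ violates the least-upper-bound property that every internal bounded
% set must satisfy (by transfer), so $\mu$ is external: not definable by any
% formula of the formal language.
```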

We admittedly discover in Robinson’s nonstandard analysis a framework for calculus (and analysis) in a form that might particularly please Leibniz and, possibly to a lesser extent, Newton. But, being so deeply steeped in mathematical logic, nonstandard analysis has come to suffer the same status vis-à-vis mainstream mathematics as does logic itself. Luxemburg quotes Augustus De Morgan: “We know that mathematicians care no more for logic than logicians for mathematics. The two eyes of exact sciences are mathematics and logic; the mathematical sect puts out the logical eye; the logical sect puts out the mathematical eye; each believing it can see better with one eye than two.” Luxemburg observes that “[w]e owe Abraham Robinson a great deal for having taught us the use of both eyes… This book shows clearly that we have learned our lesson well.”

And this brings us, at last, to The Strength of Nonstandard Analysis, edited by Imme van den Berg and Vítor Neves, and suffice it to say right off that it is an interesting and valuable book, or, more properly, collection of articles. One is immediately struck by the introductory article by the prominent model theorist H. Jerome Keisler, (also) entitled “The strength of nonstandard analysis,” and the short note, “The virtue of simplicity,” by Edward Nelson, who, as Luxemburg notes in the foreword, gets credit for having given a full axiom system for Robinson’s “non-standard methodology” in 1977.

Keisler’s paper, which exemplifies hard-core mathematical logic with a vengeance, is concerned with a framework for “reverse mathematics,” a subject originating in the 1970s with Harvey M. Friedman and Stephen G. Simpson, about which Kazuyuki Tanaka says (MR1770738) that “the ultimate goal… is to answer the question: Which sets of logical principles are necessary and sufficient to prove the theorems of ordinary mathematics?”

Nelson strikes an entirely different note in his contribution, as is illustrated by his comment to the effect that “[a] prime example of unnecessary complication is, in my opinion, Kolmogorov’s foundational work on probability expressed in terms of Cantor’s set theory and Lebesgue’s measure theory.” Nelson goes on to provide nonstandard analytical proofs of some core theorems in the field, including Radon-Nikodym.

The two articles just mentioned belong to the section, “Foundations,” the first of five, with the other four being, in order, “Number theory,” “Statistics, probability, and measure,” “Differential systems and equations,” and “Infinitesimals and education.” The articles in these sections are all of interest, with some obviously being more eclectic (or even austere) than others. Here is a random sampling of topics touched on: the Erdős-Turán conjecture (that if {a(n)} ⊂ ℕ and ∑ a(n)⁻¹ = ∞, then {a(n)} contains arbitrarily long arithmetic progressions), quantum Bernoulli experiments and quantum stochastic processes, the Navier-Stokes equations, and (more on) Radon-Nikodym. The Strength of Nonstandard Analysis ends with two thought-provoking articles by Keith Stroyan and Richard O’Donovan, respectively, concerning the possible use of infinitesimals and nonstandard analysis in the teaching of calculus and pre-university analysis.
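As an aside on the Erdős-Turán conjecture mentioned above, a small illustrative sketch (this reviewer’s own, not from the book; the sieve helper and the sample progression are chosen for illustration): the primes are the canonical example of a set with divergent reciprocal sum, and the Green-Tao theorem (2004) confirmed that they do contain arbitrarily long arithmetic progressions, consistent with the conjecture.

```python
def primes_below(n):
    """Return all primes < n via a simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def reciprocal_sum(nums):
    """Partial sum of reciprocals, the quantity in the conjecture's hypothesis."""
    return sum(1.0 / a for a in nums)

# The reciprocal sum of the primes diverges (very slowly, like log log N),
# so the conjecture predicts arbitrarily long progressions of primes.
primes = primes_below(100_000)
print(reciprocal_sum(primes))  # keeps growing as the cutoff increases

# A length-5 arithmetic progression of primes: 5, 11, 17, 23, 29.
ap = [5 + 6 * k for k in range(5)]
print(all(p in set(primes) for p in ap))  # → True
```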

Although the reader should be cautioned that the level of difficulty in the articles varies considerably, the book contains interesting material for almost everyone and is well worth looking at. But the reader may need a good deal of preparation, especially as regards model theory. (The last two articles may be something of an exception.)

Nonstandard analysis is an intrinsically fascinating subject deserving a lot more airplay. The Strength of Nonstandard Analysis is a serious contribution to the cause.


Michael Berg is professor of mathematics at Loyola Marymount University in Los Angeles, CA.