Devlin's Angle

March 2009

What is Experimental Mathematics?

In my last column I gave some examples of mathematical hypotheses that, while supported by a mass of numerical evidence, nevertheless turn out to be false. Mathematicians know full well that numerical evidence, even billions of cases, does not amount to conclusive proof. No matter how many zeros of the Riemann Zeta function are computed and observed to have real part equal to 1/2, the Riemann Hypothesis will not be regarded as established until an analytic proof has been produced.

But there is more to mathematics than proof. Indeed, the vast majority of people who earn their living "doing math" are not engaged in finding proofs at all; their goal is to solve problems to whatever degree of accuracy or certainty is required. While proof remains the ultimate "gold standard" for mathematical truth, conclusions reached on the basis of assessing the available evidence have always been a valid part of the mathematical enterprise. For most of the history of the subject, there were significant limitations to the amount of evidence that could be gathered, but that changed with the advent of the computer age.

For instance, the first published calculation of zeros of the Riemann Zeta function dates back to 1903, when J.P. Gram computed the first 15 zeros (with imaginary part less than 50). Today, we know that the Riemann Hypothesis is true for the first ten trillion zeros. While these computations do not prove the hypothesis, they constitute information about it. In particular, they give us a measure of confidence in results proved under the assumption of RH.
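
To give a sense of what such a computation involves, here is a minimal sketch in Python, using the mpmath library (my own illustration, not part of the original column), that locates the first 15 nontrivial zeros (the ones Gram found by hand) and checks that the zeta function really does vanish there:

    # A minimal sketch of the kind of computation Gram carried out by hand in
    # 1903: locate the first 15 nontrivial zeros of the Riemann zeta function
    # and check that zeta really does vanish there (all 15 have imaginary part
    # below 50, as Gram found).
    from mpmath import mp, zetazero, zeta

    mp.dps = 25  # work with 25 significant digits

    for n in range(1, 16):
        rho = zetazero(n)                   # nth zero in the upper half-plane
        print(n, rho.imag, abs(zeta(rho)))  # |zeta(rho)| should be ~ 0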

Experimental mathematics is the name generally given to the use of a computer to run computations - sometimes no more than trial-and-error tests - to look for patterns, to identify particular numbers and sequences, and to gather evidence in support of specific mathematical assertions that may themselves arise by computational means, including search.
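
To illustrate the "identify particular numbers" part of that description, here is a small example of my own (Python again, using the PSLQ integer-relation routine provided by the mpmath library): fed nothing but high-precision numerical values, PSLQ rediscovers Machin's formula pi = 16 arctan(1/5) - 4 arctan(1/239).

    # A toy "experimental" computation: use the PSLQ integer-relation algorithm
    # (as implemented in mpmath) to rediscover Machin's formula
    #     pi = 16*arctan(1/5) - 4*arctan(1/239)
    # purely from high-precision numerical values of the three quantities.
    from mpmath import mp, mpf, pi, atan, pslq

    mp.dps = 50  # plenty of precision for PSLQ to work reliably

    values = [+pi, atan(mpf(1)/5), atan(mpf(1)/239)]  # unary + evaluates the constant
    print(pslq(values))  # expect [1, -16, 4] (up to sign):
                         # 1*pi - 16*atan(1/5) + 4*atan(1/239) = 0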

Had the ancient Greeks (and the other early civilizations who started the mathematics bandwagon) had access to computers, it is likely that the word "experimental" in the phrase "experimental mathematics" would be superfluous; the kinds of activities or processes that make a particular mathematical activity "experimental" would be viewed simply as mathematics. On what basis do I make this assertion? Just this: if you remove from my above description the requirement that a computer be used, what would be left accurately describes what most, if not all, professional mathematicians have always spent much of their time doing!

Many readers who studied mathematics at high school or university but did not go on to become professional mathematicians will find that last remark surprising. For that is not the (carefully crafted) image of mathematics they were presented with. But take a look at the private notebooks of practically any of the mathematical greats and you will find page after page of trial-and-error experimentation (symbolic or numeric), exploratory calculations, guesses formulated, hypotheses examined, and so on.

The reason this view of mathematics is not common is that you have to look at the private, unpublished (during their career) work of the greats in order to find this stuff (by the bucketful). What you will discover in their published work are precise statements of true facts, established by logical proofs, based upon axioms (which may be, but more often are not, stated in the work).

Because mathematics is almost universally regarded, and commonly portrayed, as the search for pure, eternal (mathematical) truth, it is easy to understand how the published work of the greats could come to be regarded as constitutive of what mathematics actually is. But to make such an identification is to overlook that key phrase "the search for". Mathematics is not, and never has been, merely the end product of the search; the process of discovery is, and always has been, an integral part of the subject. As the great German mathematician Carl Friedrich Gauss wrote to his friend Farkas Bolyai in 1808, "It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment."

In fact, Gauss was very clearly an "experimental mathematician" of the first order. For example, his analysis of the density of prime numbers, carried out while he was still a child, led him to formulate what is now known as the Prime Number Theorem, a result not proved conclusively until 1896, more than 100 years after the young genius made his experimental discovery.
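
Gauss's experiment is easy to replay today. The sketch below (Python; the particular cutoffs are my own choice, not Gauss's) tabulates the prime-counting function pi(x) against the approximation x/ln(x) suggested by his observation that the density of primes near x is roughly 1/ln(x):

    # Replaying Gauss's experiment with the density of primes: compare the
    # prime-counting function pi(x) with the approximation x/ln(x).
    import math

    def prime_count(limit):
        """Count the primes up to limit with a simple sieve of Eratosthenes."""
        sieve = [True] * (limit + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return sum(sieve)

    for x in (10 ** k for k in range(2, 7)):       # x = 100, 1000, ..., 1,000,000
        print(x, prime_count(x), round(x / math.log(x)))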

For most of the history of mathematics, the confusion of the activity of mathematics with its final product was understandable: after all, both were carried out by the same individual, using what to an outside observer looked like essentially the same process - staring at a sheet of paper, thinking hard, and scribbling on that paper. But as soon as mathematicians started using computers to carry out the exploratory work, the distinction became obvious, especially when the mathematician simply hit the ENTER key to initiate the experimental work and then went out to eat while the computer did its thing. In some cases, the output that awaited the mathematician on his or her return was a new "result" that no one had hitherto suspected and that no one had any inkling how to prove.

What makes modern experimental mathematics different (as an enterprise) from the classical conception and practice of mathematics is that the experimental process is regarded not as a precursor to a proof, to be relegated to private notebooks and perhaps studied for historical purposes only after a proof has been obtained. Rather, experimentation is viewed as a significant part of mathematics in its own right, to be published, considered by others, and (of particular importance) contributing to our overall mathematical knowledge. In particular, this gives an epistemological status to assertions that, while supported by a considerable body of experimental results, have not yet been formally proved, and in some cases may never be proved. (It may also happen that an experimental process itself yields a formal proof. For example, if a computation determines that a certain parameter p, known to be an integer, lies between 2.5 and 3.784, that amounts to a rigorous proof that p = 3.)
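
That parenthetical example is worth making concrete. The following sketch (Python; the quantity p is hypothetical, and the bounds are just the ones quoted above) shows the trivial but rigorous final step: once a certified computation brackets an integer-valued quantity tightly enough, the integer is determined.

    # The final step of the parenthetical example above: if a quantity p is
    # known to be an integer, and a certified computation shows that
    # 2.5 <= p <= 3.784, then p = 3 is the only possibility, so the numerical
    # enclosure is itself a rigorous proof.
    import math

    def integer_from_enclosure(lower, upper):
        """Return the unique integer in [lower, upper]; fail if it is not unique."""
        candidates = list(range(math.ceil(lower), math.floor(upper) + 1))
        if len(candidates) != 1:
            raise ValueError("enclosure does not pin down a unique integer")
        return candidates[0]

    print(integer_from_enclosure(2.5, 3.784))   # prints 3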

When experimental methods (using computers) began to creep into mathematical practice in the 1970s, some mathematicians cried foul, saying that such processes should not be viewed as genuine mathematics - that the one true goal should be formal proof. Oddly enough, such a reaction would not have occurred a century or more earlier, when the likes of Fermat, Gauss, Euler, and Riemann spent many hours of their lives carrying out (mental) calculations in order to ascertain "possible truths" (many but not all of which they subsequently went on to prove). The ascendancy of the notion of proof as the sole goal of mathematics came about in the late nineteenth and early twentieth centuries, when attempts to understand the infinitesimal calculus led to a realization that intuitive conceptions of such basic notions as function, continuity, and differentiability were highly problematic, in some cases leading to seeming contradictions. Faced with the uncomfortable reality that their intuitions could be inadequate or just plain misleading, mathematicians began to insist that value judgments were henceforth to be banished to off-duty chat in the university mathematics common room, and that nothing would be accepted as legitimate until it had been formally proved.

What swung the pendulum back toward (openly) including experimental methods was partly pragmatic and partly philosophical. (Note that word "including". The inclusion of experimental processes in no way eliminates proofs.)

The pragmatic factor behind the acknowledgment of experimental techniques was the growth in the sheer power of computers to search for patterns and to amass vast amounts of information in support of a hypothesis.

At the same time that the increasing availability of ever cheaper, faster, and more powerful computers proved irresistible for some mathematicians, there was a significant, though gradual, shift in the way mathematicians viewed their discipline. The Platonistic philosophy that abstract mathematical objects have a definite existence in some realm outside of Mankind, with the task of the mathematician being to uncover or discover eternal, immutable truths about those objects, gave way to an acceptance that the subject is the product of Mankind, the result of a particular kind of human thinking.

The shift from Platonism to viewing mathematics as just another kind of human thinking brought the discipline much closer to the natural sciences, where the object is not to establish "truth" in some absolute sense, but to analyze, to formulate hypotheses, and to obtain evidence that either supports or negates a particular hypothesis.

In fact, as the Hungarian philosopher Imre Lakatos made clear in his 1976 book Proofs and Refutations, published two years after his death, the distinction between mathematics and natural science - as practiced - was always more apparent than real, resulting from the fashion among mathematicians to suppress the exploratory work that generally precedes formal proof. By the mid 1990s, it was becoming common to "define" mathematics as a science - "the science of patterns".

The final nail in the coffin of what we might call "hard-core Platonism" was driven in by the emergence of computer proofs, the first really major example being the 1976 proof of the famous Four Color Theorem, a statement that to this day is accepted as a theorem solely on the basis of an argument (actually, today at least two different such arguments) of which a significant portion is of necessity carried out by a computer.

The degree to which mathematics has come to resemble the natural sciences can be illustrated using the example I have already cited: the Riemann Hypothesis. As I mentioned, the hypothesis has been verified computationally for the ten trillion zeros closest to the origin. But every mathematician will agree that this does not amount to a conclusive proof. Now suppose that, next week, a mathematician posts on the Internet a five-hundred page argument that she or he claims is a proof of the hypothesis. The argument is very dense and contains several new and very deep ideas. Several years go by, during which many mathematicians around the world pore over the proof in every detail, and although they discover (and continue to discover) errors, in each case they or someone else (including the original author) is able to find a correction. At what point does the mathematical community as a whole declare that the hypothesis has indeed been proved? And even then, which do you find more convincing: the fact that there is an argument - which you have never read, and have no intention of reading - for which none of the hundred or so errors found so far have proved to be fatal, or the fact that the hypothesis has been verified computationally (and, we shall assume, with total certainty) for 10 trillion cases? Different mathematicians will give differing answers to this question, but their responses are mere opinions.

With a substantial number of mathematicians these days accepting the use of computational and experimental methods, mathematics has indeed grown to resemble the natural sciences much more closely. Some would argue that it simply is a natural science. If so, it nonetheless remains, and I ardently believe always will remain, the most secure and precise of the sciences. The physicist or the chemist must rely ultimately on observation, measurement, and experiment to determine what is to be accepted as "true," and there is always the possibility of a more accurate (or different) observation, a more precise (or different) measurement, or a new experiment that modifies or overturns previously accepted "truths." The mathematician, however, has that bedrock notion of proof as the final arbiter. Yes, that method is not (in practice) perfect, particularly when long and complicated proofs are involved, but it provides a degree of certainty that the natural sciences rarely come close to.

So what kinds of things does an experimental mathematician do? (More precisely, what kinds of activity do mathematicians engage in that classify, or can be classified, as "experimental mathematics"?) Here are a few:

Want to know more? I did. As a mathematician who had not actively worked in an experimental fashion (apart from the familiar trial-and-error playing with ideas that is part and parcel of any mathematical investigation), I recently had an opportunity to learn more by collaborating with one of the leading figures in the area, the Canadian mathematician Jonathan Borwein, on an introductory-level book about the subject. The result was published recently by A.K. Peters: The Computer as Crucible: An Introduction to Experimental Mathematics. This month's column is abridged from that book.

We both hope you enjoy it.


Devlin's Angle is updated at the beginning of each month. Devlin's most recent book for a general reader is The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern, published by Basic Books.
Mathematician Keith Devlin is the Executive Director of the Human-Sciences and Technologies Advanced Research Institute (H-STAR) at Stanford University and The Math Guy on NPR's Weekend Edition.