## Devlin's Angle

generates prime numbers as values for *n* = 0, 1, 2, ... , 39, an unbroken run of the first forty natural number arguments. This seems to have been first noticed by Leonhard Euler in 1772. Knowing that mathematics is the science of patterns, it is tempting to conclude that the formula generates primes for all natural number arguments, but this is not the case: *f*(40) is composite, as are many other values.
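The formula itself was lost above, but the 0-to-39 behavior described is that of Euler's famous polynomial *f*(*n*) = *n*^2 + *n* + 41, so I will assume that is the one meant. A minimal check in Python:

```python
# Euler's prime-generating polynomial f(n) = n^2 + n + 41 (assumed to be the
# formula referred to above). It yields primes for n = 0..39 but fails at n = 40.

def is_prime(m: int) -> bool:
    """Trial division -- entirely adequate for numbers of this size."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def f(n: int) -> int:
    return n * n + n + 41

print(all(is_prime(f(n)) for n in range(40)))  # all forty values prime
print(f(40), is_prime(f(40)))                  # 1681 is composite: 41 * 41
```

Note why the streak must break at 40: *f*(40) = 40^2 + 40 + 41 = 41 x 41.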

A mathematician with some experience with prime numbers is unlikely to be "fooled" by the numerical evidence in this example, but even world-class mathematicians have on occasion been misled by far fewer cases, Pierre de Fermat among them.
The *n*th *Fermat number* *F*(*n*) is obtained by raising 2 to the power *n*, then raising 2 to that number and adding 1 to the result, i.e.,

*F*(*n*) = 2^(2^*n*) + 1

Thus *F*(0) = 3, *F*(1) = 5, *F*(2) = 17, *F*(3) = 257, *F*(4) = 65,537.

These numbers are called Fermat numbers because of a claim made by Fermat in a letter written to Mersenne in 1640. Having noted that each of the numbers *F*(0) to *F*(4) is prime, Fermat wrote: "I have found that numbers of the form *F*(*n*) are always prime numbers and have long since signified to analysts the truth of this theorem." For all his great abilities with numbers, however, Fermat was wrong. This was first shown conclusively by the great Swiss mathematician Leonhard Euler in 1732: *F*(5) = 4,294,967,297 is not prime. In fact, no prime Fermat number has been found beyond *F*(4).
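A direct check with exact integer arithmetic takes only a few lines (a sketch; Python's arbitrary-precision integers make it trivial):

```python
# Fermat numbers F(n) = 2^(2^n) + 1. The first five are prime; F(5) is not,
# as Euler showed by exhibiting the factor 641.

def fermat(n: int) -> int:
    return 2 ** (2 ** n) + 1

print([fermat(n) for n in range(5)])  # [3, 5, 17, 257, 65537]

F5 = fermat(5)
print(F5)             # 4294967297
print(F5 % 641)       # 0 -- Euler's factor divides it
print(641 * 6700417)  # 4294967297, the complete factorization
```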

There are many other examples where the numerical evidence can mislead us. If you have access to a computer algebra system or a multi-precision scientific calculator program (several of which you can download for free from the Web), try computing *e*^(π SQRT(163)). To twelve decimal places, the answer comes out as

*e*^(π SQRT(163)) = 262,537,412,640,768,744.000000000000

An integer! Amazing. Assuming you are aware of Euler's famous identity

*e*^(*i*π) = -1,

where exponentiation of a transcendental real number by a transcendental imaginary number yields an integer result, you might be surprised that you never came across this other one before. How come you missed it?

The answer is, the result is not an integer. Twelve decimal places all equal to zero is highly suggestive, but if you increase the precision of the calculation to 33 places, you find that

*e*^(π SQRT(163)) = 262,537,412,640,768,743.999999999999250072597198...

This is still an interesting answer, and you would be right to suspect that there is something special about that number 163 that makes this particular power of *e* so close to an integer, but I won't go into that here. My point is simply to illustrate that computation can sometimes lead to conclusions that turn out to be incorrect. In this case the error lies in mistaking for an integer a number that is in fact transcendental.
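You can reproduce this experiment with nothing beyond Python's standard `decimal` module (a sketch; the `pi()` helper is the recipe given in that module's own documentation):

```python
# High-precision check that e^(pi*sqrt(163)) is not an integer,
# using only the standard library's decimal module.

from decimal import Decimal, getcontext, ROUND_FLOOR

def pi() -> Decimal:
    """Compute pi to the current precision (recipe from the decimal docs)."""
    getcontext().prec += 2
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

getcontext().prec = 60                 # well beyond the deceptive 12 places
x = (pi() * Decimal(163).sqrt()).exp()
frac = x - x.to_integral_value(rounding=ROUND_FLOOR)
print(x)     # 262537412640768743.999999999999250...
print(frac)  # 0.999999999999250... -- not an integer after all
```

At 12 places of working precision the fractional part rounds away to zeros; at 60 places the deception collapses.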

Another coincidence, in this case most likely having no mathematical explanation, is that to ten significant figures

Here is another example where the numbers can be misleading. If you use a computer algebra system to evaluate the infinite sum

But this answer is only approximate. The series actually converges to a transcendental number, which to 268 decimal places is equal to 1/81. In other words, you would need to calculate the numerical value to at least 269 places to determine that it is not 1/81, although to ensure there were no rounding errors, you would have to carry out the calculation to an even greater accuracy.
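A well-known series with exactly this behavior (plausibly the one intended here, though I cannot confirm it from the text) is the Borweins' "high-precision fraud" SUM(*n* >= 1) FLOOR(*n* tanh π)/10^*n*, which shadows 1/81 for roughly 268 decimal places before parting company with it. A sketch, again with the standard `decimal` module:

```python
# The Borweins' series sum of floor(n*tanh(pi))/10^n for n >= 1 agrees with
# 1/81 for hundreds of decimal places, yet is not equal to 1/81.
# tanh(pi) = (e^(2*pi) - 1)/(e^(2*pi) + 1), computed in decimal arithmetic.

from decimal import Decimal, getcontext

def pi() -> Decimal:
    """Compute pi to the current precision (recipe from the decimal docs)."""
    getcontext().prec += 2
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, d + 32 - d + da  # see note below
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

getcontext().prec = 320
p = pi()
e2p = (2 * p).exp()
tanh_pi = (e2p - 1) / (e2p + 1)        # tanh(pi) = 0.99627...

# Partial sum; 300 terms settle far more digits than we need to compare.
s = sum(Decimal(int(n * tanh_pi)) / Decimal(10) ** n for n in range(1, 301))

digits_s = str(s)[2:]                  # decimal digits after "0."
digits_81 = str(Decimal(1) / 81)[2:]
match = 0
while match < min(len(digits_s), len(digits_81)) and digits_s[match] == digits_81[match]:
    match += 1
print(match)   # around 268 -- then the two expansions diverge
```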

Or try this one on for size (and what a size it is in terms of accuracy). The following "equality" is correct to over *half a billion* digits:

But this sum, far from being an integer (conceptually far, that is!), is provably irrational, indeed transcendental. As you have probably guessed, this is a "cooked" example, related to my second example. But what an example it is!

If you take any natural number *n* greater than 1, then, by the fundamental theorem of arithmetic, either *n* is prime or else it can be expressed as a product of a unique collection of primes. For instance, for the first five non-primes,

4 = 2 x 2,   6 = 2 x 3,   8 = 2 x 2 x 2,   9 = 3 x 3,   10 = 2 x 5.

Of these, 4, 8, and 9 have prime decompositions in which at least one prime occurs more than once, while in the decompositions of 6 and 10 each prime occurs once only. Numbers divisible by the square of a prime (such as 4, 8, 9) are called *square-divisible*. Numbers not so divisible are called
*square-free*. (Thus, in the prime decomposition of a square-free number, no prime will occur more than once.)

If *n* is a square-free natural number that is not prime, then it is a product of either an even number of primes or an odd number of primes. For example, 6 = 2 x 3 is a product of an even number of primes, while 42 = 2 x 3 x 7 is a product of an odd number of primes. In 1832, A.F. Moebius introduced the following simple function (nowadays called the Moebius function) to indicate what type of prime factorization a number *n* has.

Let m(1) = 1, a special case. For all other *n*, m(*n*) is defined as follows:

- If *n* is square-divisible, then m(*n*) = 0;
- If *n* is square-free and the product of an even number of primes, then m(*n*) = 1;
- If *n* is either prime, or square-free and the product of an odd number of primes, then m(*n*) = -1.

For example, m(4) = 0, m(5) = -1, m(6) = 1, m(42) = -1.

For any number *n*, let M(*n*) denote the result of adding together all values of m(*k*) for *k* less than or equal to *n*.

For example, M(1) = 1, M(2) = 0, M(3) = -1, M(4) = -1, M(5) = -2.
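These definitions take only a few lines of code. A sketch in Python, using trial-division factorization (fine for small *n*):

```python
# The Moebius function m(n) and its summatory function M(n), as defined above.

def mobius(n: int) -> int:
    """m(n): 0 if square-divisible, else (-1)^(number of prime factors)."""
    if n == 1:
        return 1
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:        # d^2 divided the original n: square-divisible
                return 0
            count += 1
        else:
            d += 1
    if n > 1:                     # one leftover prime factor
        count += 1
    return -1 if count % 2 else 1

def M(n: int) -> int:
    """Sum of m(k) for k = 1 .. n."""
    return sum(mobius(k) for k in range(1, n + 1))

print([mobius(k) for k in (4, 5, 6, 42)])  # [0, -1, 1, -1]
print([M(k) for k in range(1, 6)])         # [1, 0, -1, -1, -2]
```

The two printed lines reproduce the sample values of m and M given in the text.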

At this stage, you may like to investigate this question: What is the first value of *n* beyond 2 for which M(*n*) is zero again? Or positive again?

So far, everything looks like a nice example of elementary recreational mathematics. Matters take a decidedly more serious turn when you learn that the behavior of the function M(*n*) is closely related to the location of the zeros of the Riemann zeta function.

The connection was known to T.J. Stieltjes. In 1885, in a letter to his colleague C. Hermite, he claimed to have proved that no matter how large *n* may be, |M(*n*)|, the absolute value of M(*n*), is always less than SQRT(*n*).

If what Stieltjes claimed had been true, the truth of the Riemann hypothesis would have followed at once. Needless to say, then, Stieltjes was wrong in his claim, though at the time this was not at all clear. (For instance, when Hadamard wrote his now-classic and greatly acclaimed paper proving the prime number theorem in 1896, he mentioned that he understood Stieltjes had already obtained the same result using his claimed inequality, and excused his own publication on the grounds that Stieltjes' proof had not yet appeared!)

The fact that Stieltjes never did publish a proof might well suggest that he eventually found an error in his argument. At any rate, in 1897, F. Mertens produced a 50-page table of values of m(*n*) and M(*n*) for *n* up to 10,000, on the basis of which he was led to conclude that Stieltjes' inequality was "very probable." As a result Stieltjes' inequality became known as the Mertens conjecture. By rights, it should have been called the Stieltjes conjecture, of course, but Mertens certainly earned his association with the problem by hand-computing 10,000 values of the function.
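Mertens' hand computation is easy to redo by machine. A sketch using a sieve to tabulate m(*n*) in bulk and check Stieltjes' inequality |M(*n*)| < SQRT(*n*) across the same range of 10,000 values:

```python
# Recreating Mertens' check: |M(n)| < sqrt(n) for 2 <= n <= 10,000,
# with a simple sieve computing the Moebius function in bulk.

def mobius_sieve(limit: int) -> list:
    mu = [1] * (limit + 1)
    is_p = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_p[p]:
            for k in range(p, limit + 1, p):
                if k > p:
                    is_p[k] = False   # mark proper multiples composite
                mu[k] = -mu[k]        # one sign flip per prime factor p
            for k in range(p * p, limit + 1, p * p):
                mu[k] = 0             # square-divisible: m(k) = 0
    return mu

N = 10_000
mu = mobius_sieve(N)
ok, Mn = True, 0
for n in range(1, N + 1):
    Mn += mu[n]
    if n > 1 and Mn * Mn >= n:        # integer test for |M(n)| >= sqrt(n)
        ok = False
print(ok)   # True -- the inequality holds throughout Mertens' range
```

(The test starts at *n* = 2 because |M(1)| = 1 = SQRT(1) exactly; the interesting claim concerns larger *n*.)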

When mathematicians brought computers into the picture, they took Mertens' computation considerably further, eventually calculating 7.8 billion values, all of which satisfied Stieltjes' inequality. Given such seemingly overwhelming numerical evidence, one might be forgiven for assuming the Mertens conjecture to be true, but in October 1983, Herman te Riele and Andrew Odlyzko brought eight years of collaborative work to a successful conclusion by proving otherwise.

Their result was obtained by a combination of classical mathematical techniques and high-powered computing. The computer was not used to find a number *n* for which |M(*n*)| equals or exceeds SQRT(*n*). To date, no such number has been found, and the available evidence suggests that there is no such *n* below 10^30. Rather, they took a more circuitous route. Far too circuitous to present here, in fact. If you want to see exactly what they did, read the account in my book
Mathematics: The New Golden Age, Columbia University Press 1999, pp. 208-213.

On the other hand, mathematics is for the most part far less mischievous (and hence potentially far closer to physics) than my above examples might suggest. In general, provided we exercise some reasonable caution, we can, in the words of Yogi Berra, learn a lot by just looking. Looking for, and at, computational (often numerical) evidence, that is.

In today's era of fast, interactive computers, with computational tools such as Mathematica and Maple - not to forget search engines such as Google - we can approach some areas of mathematics in a way reminiscent of our colleagues in the natural sciences. We can make observations, collect data, and perform (computational) experiments. And we can draw conclusions based on the evidence we have obtained.

The result is a relatively new area (or form) of mathematics called Experimental Mathematics. Proof has a place in experimental mathematics, since computation can sometimes generate insights that lead to proofs. But the experimental mathematician is also willing to reach a conclusion based on the weight of evidence available. I shall say more about this new way of doing mathematics next month - when I will also note that it is actually not quite as new as might first appear.

Devlin's Angle is updated at the beginning of each month. Devlin's most recent book is The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern, published by Basic Books.