Devlin's Angle

May 2003

The shame of it

I am sure I am not the only mathematician who has had to hang my head in shame at the sloppy behavior of my colleagues who announce major results that they subsequently have to withdraw when it is discovered that they have made a mistake. We expect high school students to make mistakes on their math homework, but highly paid professionals with math Ph.D.s are surely supposed to be beyond that, aren't they?

The rot set in back in 1993, when Andrew Wiles was forced to withdraw his dramatic claim to have proved Fermat's Last Theorem. His subsequent discovery of a correct proof several months later hardly served to make up for the dreadful example he had set to an entire generation of potential future mathematicians, who followed the "has he, hasn't he?" activities from their high school math classrooms.

Then, this year, we have the entire mathematical profession admitting that, months after Russian mathematician Grigori Perelman first posted details of his claimed proof of the Poincare Conjecture on the Internet, they are still not sure whether it is correct. Surely, any math teacher can tell in ten minutes whether a solution to a math problem is right or wrong! What are my professional math colleagues playing at? Come on folks, it's a simple enough question. Is his math right or wrong?

Now we have American mathematician Daniel Goldston and Turkish mathematician Cem Yildirim admitting that their recently claimed major advance on the famous Twin Primes Conjecture has a flaw that they are not yet able to fix. This was a result that many of the world's leading mathematicians had already declared to be one of the most significant breakthroughs in number theory in the past fifty years. Can't all those experts tell whether a solution to a math problem is right or wrong any more? Have standards fallen so low, not only among students but among the mathematics professoriate as well?

If you have been nodding in increasing agreement as you read the above paragraphs, then I can draw two fairly confident conclusions about you. First, you don't know me very well. That's okay, I'm sure you can get along fine in life not knowing that I harbor a mischievous streak. More worrying, though, is that you have little understanding of the nature of modern mathematics. That's worrying because, in an era when a great deal of our science, technology, defense, and medicine is heavily dependent on mathematics, harboring a false, outdated (and dare I say hopelessly idealistic?) view of the subject is positively dangerous -- at least if you are a voter, and even more so if you are in a position of authority. (You might, for instance, be led to believe that a missile defense shield with a reliability factor of 95% is worth spending billions of dollars on. No, this is not a political comment. It's a mathematical issue. Think about that other 5% for a moment and ask yourself what 5% of, oh, let's say 500 incoming missiles would represent. After that, we're all free to make up our own minds.)

The fact is, during the 20th century, much of mathematics grew so complex that it really can take months of very careful scrutiny by a large number of mathematicians to check whether a purported new result is correct or not. That does reflect on mathematicians to some extent, but in a positive way, not a negative one. Mathematicians have pushed the mathematical envelope to such an extent that the field has gone well beyond problems whose solutions take up a page or two and can be checked in a few hours, or even a few weeks. Those who are familiar with the field know this. That is why Andrew Wiles would have been regarded as one of the greatest mathematicians of our time even if he had not been able to patch up his proof. No one who understood what he had done ever doubted that he had made a major breakthrough. What was in question for a while was whether his new methods really did prove Fermat's Last Theorem. Whether they did or not, his new methods were sure to lead to many further advances. In fact, from a mathematical point of view, the least significant aspect of his work was whether or not it solved Fermat's 350-year-old riddle. The importance of that particular aspect of what he had (or had not) done was cultural, not mathematical.

The Poincare Conjecture goes back to the start of modern topology a hundred years ago. Mathematicians in the 19th century had been able to describe all smooth, two-dimensional surfaces. Henri Poincare tried to do the same thing for three-dimensional analogues of surfaces. In particular, he conjectured that any smooth 3-surface that has no edges, no corkscrew-like twists, and no doughnut-like holes must be topologically equivalent to a 3-sphere. The two-dimensional analogue of this conjecture had been proved by Bernhard Riemann in the mid 19th century. In fact, at first Poincare simply assumed it was true for 3-surfaces as well, but he soon realized that it was not as obvious as he first thought. In recent years, the result has been shown to be true for surfaces of any dimension 4 or more, but the original 3-dimensional case that tripped Poincare up remains unproved.
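For readers who know a little of the modern terminology, the precise statement is brief. (The phrasing below is mine, in today's standard language, not Poincare's.)

```latex
% Poincare Conjecture, modern phrasing. "Closed" means compact and without
% boundary (no edges); "simply connected" means every loop can be shrunk to a
% point, which captures the informal "no doughnut-like holes".
\text{Every closed, simply connected 3-manifold is homeomorphic to the 3-sphere } S^3.
```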

The problem is regarded as so important that it is one of the seven problems chosen in 2000 by the Clay Mathematics Institute as Millennium Problems, for each of which a $1 million prize is offered to the first person to solve it. (For more details of this competition, see my recent book, details of which are given at the end of this article.) Over the years, several mathematicians have announced solutions to the problem, but on each occasion an error was subsequently found in the solution -- although such has been the complexity of the proposed proofs that it generally took several months before everyone agreed the attempt had failed.

Recognizing the difficulty of checking modern mathematical solutions to the most difficult problems, the Clay Institute will not award the $1 million prize for a solution to any of the Millennium Problems until at least two years have elapsed after the solution has (i) been submitted to a mathematics journal, (ii) survived the refereeing process for publication, and (iii) actually appeared in print for the whole world to scrutinize.

When Wiles proved Fermat's Last Theorem, he did so by proving a much stronger and far more general result that implied Fermat's Last Theorem as a corollary. The same is true for Perelman's claimed proof of the Poincare Conjecture. He says he has managed to prove a very general result known as the "geometrization conjecture", formulated by mathematician Bill Thurston (now at UC Davis) in the late 1970s. Roughly speaking, this goes part way to providing a complete topological description of all 3-D surfaces by saying that any 3-D surface can be manipulated so that it is made up of pieces each having a nice geometrical form.

In the 1980s, Richard Hamilton (now at Columbia University) suggested that one way to set about proving Thurston's conjecture was by analogy with the physics of heat flow. The idea was to set up what is called a "Ricci flow" whereby the given surface would morph itself into the form stipulated in the geometrization conjecture. Hamilton used this approach to reprove Riemann's 19th century result for 2-D surfaces, but was not able to get the method to work for 3-D surfaces. Last November, Perelman posted a message on the Internet claiming he had found a way to make the Ricci flow method work for 3-D surfaces to prove the geometrization conjecture.
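For the curious, Hamilton's flow fits on one line. In its standard form (supplied here as background; the equation is not spelled out above), the metric g, which records the shape of the surface, evolves in an artificial time t at a rate governed by its Ricci curvature R:

```latex
% Hamilton's Ricci flow: highly curved regions contract or spread so that the
% shape becomes steadily more uniform, much as heat flow evens out temperature.
\frac{\partial g_{ij}}{\partial t} = -2\, R_{ij}
```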

Mindful of the ever present possibility of a fatal flaw in his reasoning, Perelman has been careful not to go beyond claiming he thinks he has solved the problem. In addition to posting details of his argument on the Internet for anyone to examine, he has been lecturing about his proposed solution at major universities in the US, again inviting other mathematicians to scrutinize his argument in detail -- to try to find a major flaw. As was the case with Wiles' work on Fermat's Last Theorem, no one who has looked at Perelman's work doubts that he has made a major breakthrough in the field of topology that will significantly advance the subject. What no one is yet prepared to do is go on record as saying he has proved the Poincare Conjecture.

And so to Goldston and Yildirim's proof (or not) of a major result related to the Twin Primes Conjecture.

The Twin Primes Conjecture says that there are infinitely many pairs of primes that are just 2 apart: pairs such as 3 and 5, 17 and 19, or 101 and 103. Although computer searches have produced many such pairs, the largest found to date having over 50,000 digits, no one has been able to prove that there are an infinite number of them. Who cares? you might ask. And with some justification. The Twin Primes Conjecture is one of those mathematical riddles that, as far as we know, has no practical applications, and whose fame rests purely on the fact that it is easy to state and understand, has an intriguing name, and has resisted proof for several centuries.
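If you want to see the pairs for yourself, here is a small computational sketch (my own illustration, in Python, and no part of the mathematics under discussion) that lists the twin prime pairs below 200 by simple trial division:

```python
# List the twin prime pairs below 200 by trial division.

def is_prime(n):
    """Trial division primality test, fine for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

twins = [(p, p + 2) for p in range(2, 200) if is_prime(p) and is_prime(p + 2)]
print(twins)   # starts (3, 5), (5, 7), (11, 13), (17, 19), ... and ends (197, 199)
```

No amount of such searching can settle the matter, of course; the conjecture's fame rests, as I said, on being easy to state and stubbornly hard to prove.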

But then, the same can be said of Fermat's Last Theorem. The lasting significance of Wiles' proof of that puzzle was that the method he developed to solve the problem has had -- and will continue to have -- major ramifications throughout number theory. And the same can probably be said of the Goldston-Yildirim result, provided they are able to correct the error. For what they thought they had done was establish a deep result about how the primes are distributed among the natural numbers.

It has been known for over a century that as you progress up through the natural numbers, the average gap between one prime number p and the next is approximately the natural logarithm of p, written log p. (This is known as the Prime Number Theorem.) But this is just an average. By how much can the gap differ from that average?
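Before going on, readers with a computer can check that claim about the average for themselves. The sketch below (again my own illustration, in Python) sieves out the primes up to one million and compares the average gap between consecutive primes in the stretch from 900,000 to 1,000,000 with log p for a typical prime p in that stretch:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes up to n."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            for j in range(i * i, n + 1, i):
                flags[j] = False
    return [i for i, is_p in enumerate(flags) if is_p]

primes = primes_up_to(1_000_000)

# Restrict attention to a window near the top, where log p is nearly constant.
window = [p for p in primes if p >= 900_000]
gaps = [q - p for p, q in zip(window, window[1:])]

print(sum(gaps) / len(gaps))   # the observed average gap in the window
print(log(950_000))            # log of a typical prime there, about 13.76
```

The two printed values come out within about one percent of each other, which is the sort of agreement the theorem leads you to expect at this range. So much for the average; back to the question of how far individual gaps can stray from it.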

In 1965, Enrico Bombieri (then at the University of Pisa and now at Princeton) and Harold Davenport (of Cambridge) proved that gaps less than half the average (i.e., less than 0.5 log p) occur infinitely often. In subsequent years, other mathematicians improved on that result, eventually showing that gaps less than 0.25 log p crop up infinitely often. But then things ground to a halt -- until, a few weeks ago, Goldston and Yildirim presented a proof that the fraction could be made as small as you please. That is to say, for any positive fraction k, there are infinitely many primes p such that the gap to the next prime is less than k log p.
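In symbols (my paraphrase in standard notation, writing p_n for the n-th prime), the claimed result is

```latex
% The Goldston-Yildirim claim, restated: the gap to the next prime, measured in
% units of log p, gets arbitrarily small infinitely often.
\liminf_{n \to \infty} \frac{p_{n+1} - p_n}{\log p_n} = 0
```

The earlier results amounted to showing that this quantity is less than 0.5, and later less than 0.25; the new claim is that it is exactly 0.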

This is still well short of the Twin Primes Conjecture, which says that there are infinitely many primes p for which the gap to the next prime is exactly 2. But from a mathematical standpoint, the Goldston-Yildirim result is a major result with many ramifications. At least, it will be if it turns out to be true. In the meantime, everyone who has read the "proof," including Andrew Granville of the University of Montreal and Kannan Soundararajan of the University of Michigan, who uncovered the possibly fatal flaw in the reasoning, agrees that the new work is still an important piece of mathematics.

And there you have it. Three recent claims of major new advances, each of which has highlighted just how hard it can be to check whether a modern proof is correct. The score so far: one is definitely correct, for one the jury is still out, and for the third the current proof is definitely wrong, and although the authors may be able to patch it up, the experts think this is unlikely.

Shame? Is there anything for mathematicians to be ashamed of, as I jokingly began? Only if it is shameful to push the limits of human mental ability, producing arguments that are so intricate that it can take the world's best experts weeks or months to decide if they are correct or not. As they say in Silicon Valley, where I live, if you haven't failed recently, you're not trying hard enough. No, I am not ashamed. I'm proud to be part of a profession that does not hesitate to tackle the hardest challenges, and does not hesitate to applaud those brave individuals who strive to reach the highest peaks, who stumble just short of the summit, but perhaps discover an entire new mountain range in the process.



Mathematician Keith Devlin (devlin@csli.stanford.edu) is the Executive Director of the Center for the Study of Language and Information at Stanford University and "The Math Guy" on NPR's Weekend Edition. His most recent book is The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time, published last fall by Basic Books.
Devlin's Angle is updated at the beginning of each month.