A New Kind of Science

Stephen Wolfram
Publisher: Wolfram Media
Publication Date: 2002
Number of Pages: 1192
Format: Hardcover
Price: $44.95
ISBN: 978-1579550080
Category: General

[Reviewed by Henry Cohn, on 06/16/2002]

For a decade, people interested in cellular automata have been waiting for Stephen Wolfram to publish this book. Now that it has appeared, it's easy to see why it took so long: it's huge and packed full of information. Overall, it's a remarkable piece of experimental mathematics, together with considerable speculation about scientific applications (of varying degrees of plausibility). I greatly enjoyed reading it, and recommend it, although it has a number of flaws I'll discuss later.

One of the difficulties in reviewing a book of this scope is that it is too broad for almost any reviewer. For example, it devotes chapters to topics such as perception of images and fundamental physics, and shorter sections to everything from free will to thermodynamics. I have opinions on most of these topics, but they are generally not informed opinions, so I will not comment in detail on them here. Instead, I'll concentrate on a few of the mathematical aspects of the book. One should thus keep in mind that the issues I discuss do not represent anywhere near the full breadth of this book.

Wolfram's book begins with examples, and that is a good starting place for a review as well. The simplest example of his work is one-dimensional cellular automata. Imagine an infinite row of cells, each of which can be white or black (0 or 1, say — I may also call them "off" and "on"). Time jumps in discrete steps, and at each step, the world is updated using a fixed rule that determines the new color of each cell based only on its own old color and those of its immediate neighbors (the same rule for every cell). This is one of the simplest systems one can imagine. It sounds almost too simple to be interesting, but Wolfram discovered some amazing examples.

To talk about these examples, it is useful to have a classification system. Wolfram classifies his rules by associating them with 8-bit strings as follows. A rule must specify, given the colors of a cell and its neighbors, the new color of the cell. There are eight three-cell color patterns, 000 through 111, and we need one bit for each to specify the new color. The rightmost bit in the 8-bit string corresponds to 000, the next to 001, then 010, etc. For example, the rule associated with the 8-bit string 01101110 turns the central bit on in the patterns 110, 101, 011, 010, and 001, and turns it off otherwise.

One almost never refers to these 8-bit strings directly, but instead interprets them as binary expansions, since it's easier to talk about and remember decimal numbers than binary numbers. For example, the decimal number 110 has the binary expansion 110 = 2^6 + 2^5 + 2^3 + 2^2 + 2^1 = 01101110_2, so Rule 110 corresponds to the 8-bit string 01101110 discussed above. (The exponents of the powers of 2 correspond to the "on" three-cell patterns: 110_2 = 6, 101_2 = 5, 011_2 = 3, etc.)
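
To make the encoding concrete, here is a minimal Python sketch of my own (not anything from the book) that unpacks a rule number into its lookup table:

    # Bit k of the rule number gives the new color for the three-cell
    # pattern whose binary value is k (bit 0 is 000, bit 7 is 111).
    def rule_table(rule_number):
        """Map each (left, center, right) pattern to the cell's new color."""
        return {
            (l, c, r): (rule_number >> (4 * l + 2 * c + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)
        }

    # Rule 110 turns the center cell on exactly for the patterns
    # 110, 101, 011, 010, and 001, matching the 8-bit string 01101110.
    on_patterns = [p for p, bit in rule_table(110).items() if bit]
    print(sorted(on_patterns, reverse=True))
    # [(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)]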

What sorts of behavior can these automata exhibit? Of course, one can get trivial, repetitive behavior. It's even possible to get nested, recursive behavior. For example, here is a picture of what Rule 90 does when started with a single black cell. It essentially computes Pascal's triangle modulo 2: under Rule 90, each cell's new color is the sum modulo 2 of its two neighbors' old colors, which is exactly the recurrence that generates Pascal's triangle. The picture shows a trace over time: the first line is the initial conditions, the second is what happens next, etc. The cells are tiny (one pixel wide), so one can see the overall pattern.

Illustration for Rule 90

Wolfram discovered that this is far from the limit of how complex the automaton's behavior can be. Two especially nice examples are Rule 110 and Rule 30. They seem to behave in quite different ways, as one can see from a few illustrations. Here's a picture of Rule 110 with random initial conditions. (The picture shows a finite row of cells that wraps around at the left and right edges.)

Illustration for Rule 110

Here's a picture of Rule 30, starting with a single black cell.

Illustration for Rule 30

There are clear differences between the pictures. Despite starting with total order, Rule 30 rapidly produces apparent randomness (at least on the right side of the picture), while Rule 110 quickly moves from randomness to a steady background state with bizarre particles moving across it and interacting. In both cases, the behavior is much more complicated and interesting than one would expect, given that it is generated by such a simple rule.
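
These pictures are easy to reproduce. Here is a minimal Python sketch of my own (ASCII output stands in for the book's pixel images, and, as in the Rule 110 picture above, the row wraps around at the edges):

    def step(cells, rule_number):
        # One update of an elementary rule on a row that wraps at the edges.
        n = len(cells)
        return [(rule_number >> (4 * cells[i - 1] + 2 * cells[i]
                                 + cells[(i + 1) % n])) & 1 for i in range(n)]

    def run(rule_number, cells, steps):
        # Print the trace over time: '#' for black cells, '.' for white.
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule_number)

    # Rule 30 from a single black cell: apparent randomness from total order.
    width = 63
    row = [0] * width
    row[width // 2] = 1
    run(30, row, 32)

Replacing 30 with 110 and starting from a random row shows the particles, and Rule 90 from a single black cell gives the nested pattern above.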

The natural reaction to seeing these pictures is to want to experiment (e.g., to randomize the initial conditions for Rule 30), and I suspect such experimentation is necessary if one wants to develop a feeling for the material. At the end of the review I'll give a simple tool for experimenting with these 256 cellular automata.

These examples are just the start of the mathematics in Wolfram's book. He has countless intriguing examples and beautiful pictures. For the most part, his work is experimental: he searches for phenomena and describes them, but rarely gives proofs. I think many of the things he describes may simply not be susceptible to rigorous proof (some may be hard to formulate precisely; some may be independent of ZFC; some may have proofs but no humanly understandable proofs). However, they are still well worth investigating. Other things Wolfram describes may be provable, and will give mathematicians a lot to think about. For example, one natural conjecture is that Rule 110 is capable of universal computation, i.e., simulating any computer program. The idea is that the particles one sees in the picture can be organized to interact productively with each other. Matthew Cook has in fact proved this conjecture, and his proof is summarized in Wolfram's book (pages 678-689; see also page 1115).

Because I am interested in logic and theoretical computer science, for me universal computation is one of the most interesting topics Wolfram discusses. The book's final chapter introduces a "Principle of Computational Equivalence", which Wolfram considers to be of great originality and importance. I'll summarize it shortly, but first I should note that it can sometimes be hard to understand exactly what Wolfram means. His book is actually written for a general audience, which means that it is pleasantly free from arcane terminology, but also vague from time to time. Furthermore, he asserts on page 727 that "my guess is that there is basically no way to formulate an accurate statement of the principle except by using methods from the kind of science introduced in this book" (and that attempts to investigate it mathematically will be irrelevant). My understanding of the Principle of Computational Equivalence is roughly the following:

  • All the systems we see around us in nature follow computable rules.
  • There is a fundamental upper bound to their complexity, namely Turing's halting problem. All systems that reach this bound are equivalent.
  • Almost all systems that are not obviously weak reach the bound and are thus equivalent to the halting problem. (In fact, Wolfram says this happens for most initial conditions, not just some.)

The first point is controversial (e.g., it seems to imply that strong AI is in principle possible), but I believe it. The second has of course been well known since the 1930's, so Wolfram's contribution lies in the third point. There, he has certainly made some genuine intellectual progress. I would never have believed that systems as simple as Rule 110 were really capable of universal computation until I saw Wolfram's pictures.

Nevertheless, I think his book really misses one of the deep mysteries in the foundations of mathematics. For a long time, it's been apparent that the computational problems we can analyze mathematically (well, those that are "recursively enumerable") fall into two categories: some problems (like deciding the truth of statements of Euclidean geometry) can be solved algorithmically, and some (like classifying compact four-manifolds up to homeomorphism) are equivalent to the halting problem and therefore undecidable. The mystery is that theoretically there are many intermediate possibilities. Friedberg and Muchnik showed independently in the 1950's that some problems are easier than the halting problem yet still not algorithmically solvable. Recursion theorists went on to show that there is a huge profusion of such problems, with subtle relationships between them. However, they don't seem to arise in practice, just in intentional constructions.

Why don't we see them in practice? One possibility is that they aren't there, that "natural" mathematical problems just tend to be at one extreme or the other. Another is that we have already run across plenty of intermediate problems, and just haven't recognized them. For example, the problem of deciding whether a Diophantine equation has an integral solution is equivalent to the halting problem, but for rational solutions no equivalence is known. Could that be an example of an intermediate problem? (I personally doubt it, but nobody knows.) Perhaps someday we will have more powerful techniques that will let us recognize many problems in these "intermediate Turing degrees", or perhaps there are many examples but we will never be able to prove it.

Wolfram's Principle of Computational Equivalence amounts to a very strong rejection of these intermediate problems. He acknowledges that they exist (p. 734), but more or less dismisses them as contrived (because they are complicated and formed by explicitly weakening more powerful systems), and seemingly asserts that they will almost never come up, in randomly chosen systems or ones observed in nature. I was disappointed in his arguments. I had hoped to find an explanation of why they shouldn't come up, not just an assertion that he thinks they don't.

I also wonder whether Wolfram may be guilty of the same blindness he accuses others of. On page 644, he writes: "In the past it has tended to be assumed that universality is somehow a rare and special quality, usually possessed only by systems that are specifically constructed to have it. But one of the results of this chapter is that universality is a much more widespread phenomenon." Perhaps the intermediate Turing degrees are similar, in the sense that they were first constructed artificially, and will someday be recognized to be widespread.

On the whole, I didn't find the arguments in these sections of Wolfram's book convincing. However, the examples were wonderful, as they are everywhere in the book. Rule 30 particularly intrigued me. Wolfram conjectures that it, like Rule 110, supports universal computation (p. 701). Could that be true? My instincts tell me no, but Wolfram has far more experience with these objects than I do. I would be very interested to see this question resolved either way. Wolfram produces a few intriguing hints of Rule-110-style behavior in Rule 30, with particles moving against a fixed background. However, the particles he constructs are too limited to be useful in a proof of universality. I'm sure that resolving this question will be more difficult even than dealing with Rule 110, which already required years of work (p. 678). Even though the particular result would have little importance in the overall scheme of things, developing the ability to attack such problems more easily would be of considerable value. (Besides, there's always the possibility of a striking outcome: what if predicting Rule 30's behavior belonged to an intermediate Turing degree? I would consider such a theorem wonderful, partly because I have no idea how one might prove it.)

One of the disadvantages of such a broad book is that the author cannot be an expert in everything in it. There were a few cases where I thought Wolfram was just not sufficiently familiar with previous work. For example, on page 779 he writes: "And what I believe is that essentially the same phenomenon operates in almost every area of mathematics. Just like in multiway systems, one can always add axioms to make it easier to prove particular theorems. But I suspect that ultimately there is always computational irreducibility, and this makes it essentially inevitable that there will be short theorems that only allow long proofs." This phenomenon of short theorems with long proofs is of course well known in practice and well understood in mathematical logic. The main text of the book contains essentially no references to other work, but there are copious notes (349 pages of small print!). However, even in the notes I did not see any references for this claim. Wolfram's not a logician, so there is no reason he should know about the history of this idea, but if he had published his work in peer-reviewed journal articles, rather than a huge book published by his own company, surely someone would have pointed out to him which parts of his work were already known.

There are also a number of statements that may bother mathematicians. For example, on page 786 he writes: "The construction here can be viewed as providing a simple proof of Goedel's Theorem on the existence of unprovable statements in arithmetic." It's just not true: the proof depends on the undecidability of Rule 110, and is thus far more complicated than any other proof of Goedel's Theorem I've seen. There are other minor inaccuracies, especially in the notes. I did not see any I would consider important or likely to confuse an expert. However, it's really too bad they are there in a book meant to be readable by the general public. (Incidentally, I think it really is broadly accessible, and I can imagine that a non-expert might contribute something to this field after reading the book, for example by doing clever computer experiments.)

One frustrating aspect of this book is Wolfram's unusual arrogance and desire to praise his own accomplishments. How many people would say that part of their work "has vastly richer implications than the laws of thermodynamics — or for that matter, than essentially any single collection of laws in science" (p. 726)? Or that they have made "one of the more important single discoveries in the whole history of theoretical science" (p. 2)? By this book's standards, these quotations are not even extreme. Similar claims occur throughout, starting with the assertion that it contains not just a contribution to current science, or even a new branch of science, but a new kind of science altogether (which we learn on page 856 "will require an investment of years comparable to learning an area like physics"). It's hard to resist poking fun by including more quotations, but they would serve little useful purpose: the ones above already capture the feeling of the book.

Overall, the kind of science Wolfram proposes is based on the insight that very simple rules can produce complex behavior. When scientists observe something complex, it is tempting to believe that the underlying mechanism must be complex as well. In some cases, such as Rule 30, it would be hard to describe what is going on by looking at it on a large scale and formulating equations, despite the simplicity of the rule. Wolfram believes that scientists are often led astray when they attempt to formulate theories in terms of traditional mathematics, and that the sorts of models he considers (cellular automata and some of their relatives) are often much more suitable. He has some striking examples, such as pigmentation patterns on mollusc shells, which are remarkably like one-dimensional cellular automata, formed one line at a time as the shell grows (pages 423-425). This example was concrete and detailed, and seemed quite convincing, as did several other examples. However, I was skeptical of many of his more speculative applications.

Ultimately, I do not believe that Wolfram's book represents a new kind of science, or that future generations will believe it does. His book would be more pleasant to read if he were more modest: there's a reason why bragging is generally frowned upon (regardless of whether one's achievements are worthy of it). However, I don't want to seem too critical. Despite its flaws, I really enjoyed the book, and found it fascinating and thought-provoking. It has wonderful examples, and some of the best illustrations I've ever seen in a mathematics book. Everyone with any interest in cellular automata should take a look at it.


Experimental Tool: If you'd like to generate your own pictures of Wolfram's 256 simple automata, you can use these PostScript files for Rule 110 or Rule 30. You can view the pictures with a PostScript previewer or print them on a PostScript printer. The files were written to be straightforward to modify for other rules, initial conditions, system sizes, time scales, etc. Just look at them in a text editor (the comments give instructions).
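
If you'd rather avoid PostScript entirely, here is an equivalent minimal sketch of my own in Python (not derived from those files) that writes a trace as a plain PBM image, which most image viewers can open:

    import random

    def step(cells, rule_number):
        # One update of an elementary rule on a row that wraps at the edges.
        n = len(cells)
        return [(rule_number >> (4 * cells[i - 1] + 2 * cells[i]
                                 + cells[(i + 1) % n])) & 1 for i in range(n)]

    def write_pbm(filename, rule_number, cells, steps):
        # PBM "P1" format: a text header, then one row of 0/1 pixels per
        # time step (1 = black). Width 64 keeps each line within the
        # format's 70-character limit.
        rows = []
        for _ in range(steps):
            rows.append("".join(str(c) for c in cells))
            cells = step(cells, rule_number)
        with open(filename, "w") as f:
            f.write("P1\n%d %d\n%s\n" % (len(cells), len(rows), "\n".join(rows)))

    # Rule 110 from random initial conditions, as in the picture above.
    write_pbm("rule110.pbm", 110, [random.randint(0, 1) for _ in range(64)], 200)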


Henry Cohn is a researcher in the theory group at Microsoft Research, and an affiliate assistant professor in the University of Washington Department of Mathematics. His primary mathematical interests are number theory, combinatorics, and the theory of computation.

Table of Contents: Preface | The Foundations for a New Kind of Science | The Crucial Experiment | The World of Simple Programs | Systems Based on Numbers | Two Dimensions and Beyond | Starting from Randomness | Mechanisms in Programs and Nature | Implications for Everyday Systems | Fundamental Physics | Processes of Perception and Analysis | The Notion of Computation | The Principle of Computational Equivalence | Notes | Index