Although a bit more cumbersome, the title Games, Puzzles, and Computational Complexity would describe this quite intriguing book more precisely. Since its early days, computer science has interacted with the world of games. Several examples that quickly come to mind arise out of the artificial intelligence end of the field, from chess-playing computers to what Turing called the “imitation game,” now better known as the Turing test.
However, games and puzzles also have a long history in the area of computational complexity, the study of the time and memory resources required in computation. The work of Cook, Levin, and Karp in the early 1970s inaugurated the investigation of NP-completeness. Already by the end of that decade, the classic treatise by Garey and Johnson listed “Games and Puzzles” as one of twelve categories in its compilation of known NP-hard problems. These included problems relating to checkers, go, Hex, Instant Insanity, and crosswords. In the years since, many more complexity classifications of games and puzzles have appeared, running the gamut from chess to sudoku.
Typically, the hard part of such a classification is establishing a lower bound on the complexity. For example, one might easily show that a problem X lies in NP, while proving its NP-completeness demands more work. One often proceeds by taking a benchmark problem of known complexity and encoding it within X, a reduction showing that X is at least as hard as the benchmark.
In their book Hearn and Demaine present an elegant family of benchmarks they have developed, allowing them to settle open questions on the complexity of various games. Besides providing an interesting model of computation, their construction has already found applications in non-game contexts. Moreover, it can be stated, practically completely, in two sentences: Take a directed graph with weights assigned to the edges and vertices, subject to the constraint that the sum of the weights of the edges directed into each vertex is at least that vertex’s weight. Then reverse the directions of edges one at a time, keeping the constraint satisfied, until a target edge is reversed. A few variations on this very simple setting — can an edge be reversed more than once? does a single person reverse edges or do two competitors take turns? — yield natural, useful benchmarks at various complexity levels.
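The edge-reversal model just described is concrete enough to prototype directly. The following is only a minimal sketch, not the authors' formulation: the graph, weights, and target edge are invented toy data, and the brute-force search over orientations is exponential, suitable only for tiny instances.

```python
from collections import deque

def satisfied(edges, weights, min_inflow, orient):
    """Check the constraint: every vertex's total incoming edge
    weight must be at least its required minimum in-flow."""
    inflow = {v: 0 for v in min_inflow}
    for (u, v), w, fwd in zip(edges, weights, orient):
        inflow[v if fwd else u] += w  # edge points into v when unreversed
    return all(inflow[v] >= need for v, need in min_inflow.items())

def can_reverse(edges, weights, min_inflow, target):
    """Breadth-first search over edge orientations: is there a
    sequence of single-edge reversals, each preserving the in-flow
    constraint, that eventually reverses edge `target`?"""
    start = tuple(True for _ in edges)  # True = original direction
    if not satisfied(edges, weights, min_inflow, start):
        return False
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        for i in range(len(edges)):
            nxt = cur[:i] + (not cur[i],) + cur[i + 1:]
            if nxt in seen or not satisfied(edges, weights, min_inflow, nxt):
                continue
            if i == target:
                return True
            seen.add(nxt)
            queue.append(nxt)
    return False

# Toy instance: edge 0 (A -> B) cannot be reversed immediately, since B
# would lose its required in-flow of 2; reversing edge 1 (B -> C) first
# redirects weight 2 into B, after which edge 0 may be reversed.
edges = [("A", "B"), ("B", "C")]
weights = [2, 2]
min_inflow = {"A": 0, "B": 2, "C": 0}
print(can_reverse(edges, weights, min_inflow, target=0))  # True
```

Exhaustive search like this is for illustration only; the point of the benchmarks is precisely that such reachability questions, in their various single-player and two-player forms, are computationally hard in general.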
Part I of the book (about a hundred pages) sets up the graph framework and proves the appropriate complexity results for the benchmarks. Part II (about fifty pages) then applies the benchmarks to classify several puzzles and a few games. Additionally, a thirty-page appendix summarizes known complexity results for games and puzzles, providing a very useful update to the previously mentioned Garey-Johnson list. The authors also helpfully provide a ten-page appendix on fundamentals of complexity theory.
From time to time the text repeats itself, conveying a slightly padded feeling. This is a minor issue, however, and the authors certainly provide plenty to mull over. The publisher, A K Peters, has also done a nice job with the production. All in all, this is a book well worth looking into.
Although in recent years Leon Harkleroad has spent much time with mathematical aspects of music, he still enjoys exploring his original stomping grounds of computability theory and related areas.