The original book on the design of experiments carried exactly that as its title: R. A. Fisher’s *The Design of Experiments*, published in 1935. Fisher was responsible for much of the theory and methodology. His book treated mainly the methodology rather than the theory, and the more mathematically inclined were not impressed. Even so, the book was welcomed by research workers and was widely cited. In all of his writing, Fisher had a tendency to lose both experts and beginners. He appeared to work very intuitively and much of what he came up with seemed “obvious” to him — but to few others. Time has shown his intuition was accurate indeed — in some situations more on target than some later attempts at formal proof. Fisher spent some summers at (what is now) Iowa State University in the United States, and the faculty there produced books that were intended to be easier to follow. Of interest here is *Experimental Designs* (1950) by W. G. Cochran and Gertrude Cox. This largely replaced Fisher’s book as the standard reference.

In 1952 there came a second opinion from Iowa State in the form of Oscar Kempthorne’s *Design and Analysis of Experiments*. This took a more philosophical approach than Cochran and Cox. It raised questions not about the theory of analysis of variance but about why ANOVA is a reasonable way to approach experimental data. For the purposes of this review, the salient point is that Kempthorne offered permutation tests as a more satisfactory (if less practical in 1952) approach and saw the usual ANOVA *F*-tests as approximations to permutation tests. This was not an innovation; Fisher held this position before him, but it did not make its way into many other textbooks. In short, permutation tests model the random assignment process while the use of the F-distribution is based on random sampling, and an analogy between that and random assignment. Kempthorne’s text never caught on as far as adoptions are concerned, but it is cited to this day by people concerned with the underlying rationale for how we analyze experimental data.
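Kempthorne’s point can be made concrete with a small sketch (not from his book; the data and group sizes here are invented for illustration). Under the null hypothesis that treatment has no effect, the random assignment itself makes every re-labeling of the experimental units into “treatment” and “control” equally likely, so the p-value is simply the fraction of re-labelings that produce a group difference at least as large as the one observed — no appeal to random sampling or the *F*-distribution is needed.

```python
import itertools

# Hypothetical two-group experiment: responses under treatment vs. control.
# These numbers are invented for illustration, not taken from any real study.
treatment = [24.0, 27.5, 26.1, 29.3]
control = [22.4, 25.0, 21.8, 23.9]

observed_diff = sum(treatment) / len(treatment) - sum(control) / len(control)

# Under the null hypothesis the labels are arbitrary, so every way of
# choosing 4 of the 8 units to call "treatment" is equally likely.
pooled = treatment + control
n_t = len(treatment)
assignments = list(itertools.combinations(range(len(pooled)), n_t))

count_extreme = 0
for idx in assignments:
    t = [pooled[i] for i in idx]
    c = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = sum(t) / n_t - sum(c) / len(c)
    if diff >= observed_diff:
        count_extreme += 1

# One-sided permutation p-value: fraction of re-assignments at least as
# extreme as the assignment actually observed.
p_value = count_extreme / len(assignments)
```

With eight units split four and four there are only 70 possible assignments, so complete enumeration is trivial today; in 1952 this arithmetic burden was exactly what made the *F*-approximation the practical choice.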

The last classic to be discussed is George Cobb’s *Introduction to Design and Analysis of Experiments* (1998). This book takes the classic random sampling approach, mentioning permutation tests only in a short section at the very end of the book. It focuses on the logic of separating out the various effects and partitioning sums of squares. This is done through many simple numerical examples in which the reader is invited to do the arithmetic to see how things work. There are far fewer formulae than in any of the above texts, and far fewer prerequisites. This is the most pedagogically interesting book, and probably the best choice for someone who wants to understand what is going on at a practical, intuitive level.
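The kind of arithmetic Cobb asks the reader to do can be sketched in a few lines (the numbers below are invented, not taken from his book): in a one-way layout, the total sum of squares splits exactly into a between-groups piece and a within-groups piece.

```python
# Toy one-way layout: three groups of three observations each, with
# invented numbers chosen so the arithmetic is easy to follow by hand.
groups = [[3.0, 5.0, 4.0], [6.0, 8.0, 7.0], [9.0, 11.0, 10.0]]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Between-groups SS: how far each group mean sits from the grand mean,
# weighted by group size.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Within-groups SS: how far each observation sits from its own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# Total SS: how far each observation sits from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

# The partition SS_total = SS_between + SS_within holds exactly.
```

Here the group means are 4, 7, and 10 around a grand mean of 7, giving a between-groups sum of squares of 54, a within-groups sum of 6, and a total of 60 — the identity the whole ANOVA table rests on.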

By now the reader must be wondering when we will get to the book under review. Placed in the context of the above, it takes the sampling-based ANOVA models as given and does little to explain their relevance to the real world. There are justifications of another type. For example, “estimable functions” are mentioned often, but these seem of interest mainly to the mathematical statistician rather than the researcher. In addition, the formula density is high. Of the books mentioned above, this one is closest to Cochran and Cox, but with less for the researcher and more for the mathematician. No prerequisites are mentioned, but an introductory statistics course and at least a semester of mathematical statistics seem appropriate.

One innovation in comparison with the older texts is that there is much material up front on designing a study, *and this is reinforced throughout the text* by numerous examples of actual experiments, including not only the analysis but the design. These examples alone are worth the price of admission.

Another innovation is the inclusion of instructions for using two software packages, SAS and R. These are integrated into the text (which accounts for much of the large page count). This will be an advantage if you use these packages, but not if you don’t. However, these are top packages, and knowing either would be a highly marketable skill. For those who use this feature, the instruction is quite detailed — more like the separate software supplements one can often purchase to accompany a text than the sketchy software help included in many other texts.

For someone who just wants to do a traditional ANOVA, and not question the whys and wherefores, this book is an excellent reference. The advice given, especially that on design, is excellent. It has steeper mathematical prerequisites and a higher mathematical level than comparable textbooks, which may limit the audience that can profitably use it.

After a few years in industry, Robert W. Hayden ([email protected]) taught mathematics at colleges and universities for 32 years and statistics for 20 years. In 2005 he retired from full-time classroom work. He contributed the chapter on evaluating introductory statistics textbooks to the MAA’s *Teaching Statistics*.