This book discusses the mathematical, historical, philosophical, and even psychological aspects of probability and statistics. It’s a very nice book: mathematically rigorous, yet also reasonably accessible; informative, yet fun and entertaining to read. Both students and faculty should find reading this to be a rewarding experience.

The text has its origins in an interdisciplinary course taught at Stanford University by the two authors, one of whom (Diaconis) is a mathematician and the other a philosopher. The book, like the course, presupposes some prior exposure to either probability or statistics, but the authors have included, as an Appendix, a brief survey of the basics of elementary probability for readers whose exposure to these areas was some time ago.

There are ten chapters, each centered on a “great idea” in the history of probability and statistics. The first great idea is that it is possible to measure probability at all. So, chapter 1 provides a brief tour of the early history of probability theory, starting with Cardano in the 1500s and proceeding through Bernoulli, about two hundred years later.

The idea of measurement is also reflected in chapter 2, which discusses the idea that judgments can be measured. We can, for example, talk about the probability that a certain politician will be elected to a second term, or that a patient will survive an operation, even though we don’t have before us a finite collection of equally probable outcomes to assign probabilities to.

The third great idea is the realization that the human mind has some psychological difficulty dealing with probabilistic concepts. This chapter showcases the work of Kahneman and Tversky, but also discusses other psychological studies conducted over the years regarding probabilistic decision-making. To give one simple (and probably not surprising) example, the wording of a statement may influence the outcome: telling a patient that he has a 90% chance of surviving an operation, for example, is more likely to induce him to submit to surgery than telling him that he has a 10% chance of dying, even though both statements mean the same thing.

The fourth great idea refers to the fact that, although the classical definition of probability was not phrased in terms of frequency, there is a connection (expressed by the laws of large numbers) between probability and frequency. These fairly sophisticated results, in turn, point to the need for a proper mathematical context in which to study probability, and that is the fifth great idea and the subject of chapter 5: Kolmogorov’s axiomatization of probability in terms of measure theory.
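The probability–frequency connection the chapter describes is easy to see computationally. The following short simulation (my illustration, not an example from the book) shows the empirical frequency of a simulated event settling toward its underlying probability as the number of trials grows, which is what the law of large numbers guarantees:

```python
import random

def empirical_frequency(p, n, seed=0):
    """Simulate n independent trials of an event with probability p
    and return the observed fraction of successes."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# The law of large numbers: the observed frequency approaches p as n grows.
for n in (10, 1_000, 100_000):
    print(n, empirical_frequency(0.3, n))
```

With 100,000 trials the observed frequency lands very close to 0.3, while small samples can stray far from it.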

The next two chapters involve two aspects of Bayesian analysis. Chapter 6 discusses Bayes’ theorem and its relationship to “inverse inference”: using known frequencies to draw inferences about chances. This is parametric Bayesian analysis; chapter 7 then discusses subjective Bayesian analysis and a famous theorem of de Finetti. This chapter is titled “Unification”, because, in some sense (much too complicated to discuss here), it ties together the concepts of chance, probability, and frequency.
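To fix ideas about the “inverse inference” that chapter 6 treats, here is a standard worked application of Bayes’ theorem (my numbers, not the book’s): updating a prior belief about a hypothesis in light of a positive test result.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(hypothesis | positive test).

    P(H|+) = P(+|H) P(H) / P(+), where P(+) is computed by
    the law of total probability over H and not-H.
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 1% base rate, a 95%-sensitive test with a 5% false-positive rate:
print(round(posterior(0.01, 0.95, 0.05), 3))  # → 0.161
```

Even a fairly accurate test leaves the posterior probability at only about 16% when the prior is 1%, a classic illustration of why the prior matters in inverse inference.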

The next two chapters both relate probability theory to other disciplines. In chapter 8, the “great idea” is algorithmic randomness: using computers for random number generation. This chapter explores connections between probability and computability theory. Chapter 9 looks at probability and the physical world, i.e., the connections between probability theory and physics. (For a book-length examination of this topic, see *Reasoning About Luck* by Ambegaokar.)
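The tension at the heart of computer-generated randomness can be shown in a few lines. A pseudo-random number generator, sketched here as a classic linear congruential generator (my illustration; the book's own treatment of algorithmic randomness is more general), is a fully deterministic rule whose output nonetheless passes many statistical tests for randomness:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: x -> (a*x + c) mod m.
    Deterministic, yet its output looks statistically random."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale to the unit interval [0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(3)]
```

Rerunning with the same seed reproduces the sequence exactly, which is precisely why “random” output from a deterministic machine raises the conceptual questions that computability theory addresses.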

The final chapter of the book concerns Hume’s assertion that, in the words of the authors, “there is a problem of understanding and validating inductive reasoning.” This has been the subject of much scholarly philosophical work, and, as the authors point out, also motivated the work of Bayes and de Finetti.

As the above survey of this text’s contents should make clear, there is a lot of interesting material in this book. The course taught by the authors is undoubtedly unusual and not replicated at many universities, but one doesn’t need to teach a similar course to get something of value from this book. Any faculty member who teaches probability, even at the freshman level, should find something of interest here. Since the authors have taken pains to make the discussions as accessible as possible to a broad audience, students should enjoy reading the book as well.

This book is not intended for a purely lay audience, however. As noted earlier, some exposure to probability is a prerequisite. This implies, as a corollary, that the reader should be familiar with calculus; integral signs and limit arguments are used without apology. The level of difficulty also varies sharply from chapter to chapter: while much of the early material should be accessible to readers without much background, other chapters (such as the one on de Finetti’s work) are quite challenging. I doubt that most students will really understand that chapter; even non-specialist professionals might find it heavy going.

A nice feature of the text is the summary that concludes each chapter. Other nice features include an annotated bibliography and the appendices found in many of the chapters, in which the authors have placed some discussions that were deemed either too tangential or too technical for the body of the chapter.

One not-so-nice feature: there are no exercises in the text at all. Since the book is, we are told, based on a course, it would have been interesting to see what kinds of things the authors asked the students to do for homework.

To summarize: this is an interesting and unusual book. I don’t imagine that I will ever use it as an actual text, but I think it likely that I will take material from it for use in future courses.

Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.