
Bernoulli's Fallacy

Aubrey Clayton
Publisher: Columbia University Press
Publication Date: 2021
Number of Pages: 368
Format: Hardcover
Price: 34.95
ISBN: 9780231199940
[Reviewed by Sara Stoudt, on 05/09/2022]
Aubrey Clayton’s “Bernoulli’s Fallacy” is not here to make friends; the book does not pull punches when it comes to the personalities of the statistical forefathers or the sins of the frequentist methods they popularized. For some, this book will be preaching to the choir. For others, it may make them question their current approach to analyzing data or teaching statistics.
 
The main argument of the book is that frequentist statistics hinges on assumptions that aren’t justified in all settings. Two large claims dismantled over the course of the book are that probability can be defined in terms of empirical frequency alone and that “closeness” is symmetric (i.e., that “an observed sample statistic will be close to its corresponding population parameter with high probability” is the same claim as “the population parameter is close to what we observed with high probability”). Clayton recounts the history of how these assumptions came to be glossed over and then ingrained in the way many perform statistical analyses, and then unravels the consequences of these misconceptions.
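
To make that asymmetry concrete for myself (and maybe for students), here is a small simulation in the spirit of the book’s argument; the medical-testing setup and the numbers are my own illustration, not an example drawn from the text. The probability of a positive test given the disease is 0.99 by construction, yet the probability of the disease given a positive test comes out near 0.09 once a low base rate enters.

    import random

    # Illustrative numbers only (my choices, not Clayton's):
    # a rare condition and a test that is highly sensitive and specific.
    random.seed(0)
    base_rate = 0.001      # P(disease)
    sensitivity = 0.99     # P(positive | disease)
    specificity = 0.99     # P(negative | no disease)

    n = 1_000_000
    positives = 0
    true_positives = 0
    for _ in range(n):
        diseased = random.random() < base_rate
        positive = random.random() < (sensitivity if diseased else 1 - specificity)
        if positive:
            positives += 1
            true_positives += diseased

    # The frequency estimate and the exact Bayes' rule answer agree,
    # and neither is anywhere near 0.99.
    print("Estimated P(disease | positive) =", round(true_positives / positives, 4))
    exact = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    )
    print("Exact P(disease | positive)     =", round(exact, 4))  # about 0.09

Swapping those two conditional probabilities is exactly the symmetry assumption dismantled above, and it is the kind of inversion that Bayes’ rule, with a prior, is built to handle.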
 
The writing is readily accessible to a wide audience, and concrete examples help the reader navigate the most technical parts. The emphasis on the role of extra information in inference, beyond the current data, causes paradoxes to lose their mystique and smoothly motivates the use of prior information and conditional probability in the Bayesian framework, presenting this school of thought not as a convoluted stopgap but as a natural approach to answering questions using data.
 
As can happen with any book that takes a pointed stance, there were some elements that I bristled at (“frequentist jihad” seemed a bit much even as an allusion to a Galton quote). Other parts made me chuckle (that AI character, SuperFreq, is pretty wild now). I can see a lot of opportunities for using this book in the classroom as well. It could be assigned as a common reading for incoming graduate students in statistics, or any field that uses statistics, to prompt discussion amongst a new cohort. It could also serve as a supplementary text in an upper-level undergraduate or graduate course on Bayesian Statistics to help students understand the motivation for learning this school of thought and get a sense of the history. Probability and statistics instructors may also mine the book for examples and case studies to spur students to think outside of the frequentist box. I plan to use the extended story of SuperFreq and Jackie Bernoulli as an assigned reading for my undergraduate Statistical Inference Theory course as a light refresher of introductory statistics material.
 
At a meta level, this book can even be used to teach about making a statistical argument. Both Clayton and the statisticians mentioned throughout the book have a message to share, and seeing them in their roles as communicators as well as statisticians yields insight into the ethical implications of the work we do. What happens in the shades of gray where statistics becomes an art?
 
Tracking the history of eugenics as it pertains to the evolution of statistics is an important endeavor. Clayton shows that the connection is not merely a coincidence of personalities but part of a deeper strategy to gain an air of objectivity. Statistical terms like homogeneity, the average (as in the “average man”), and the extremes (as in “genius”) become more charged in this context. The smoke screen of worrying about the subjectivity of the choice of prior obscures the fact that we always face a choice of likelihood, and every choice has a consequence.
 
Clayton also connects this debate over statistical frameworks to other hot topics in statistics, like the replicability crisis and the misuse of p-values, showing that the subject has deep roots and wide relevance, not just for statisticians but for scientists in general. Overall, “Bernoulli’s Fallacy” is a timely story, well told. It makes a compelling case for a shake-up in the world of statistics that may just be strident enough to spark change.

Sara Stoudt (https://sastoudt.github.io/) is an assistant professor in the Department of Mathematics at Bucknell University. She is interested in applied statistics and the pedagogy of writing in the STEM fields.