Should We Risk It? is a book about the analysis of risk, but it is not merely a mathematical book. As the authors put it in the second paragraph, the book "is about modeling and calculating a variety of risks, understanding what we're trying to calculate, and why we would want to do so." It includes technical and analytical methods, puts the problems into a social context by discussing "social valuation," and connects the two within a discussion of decision theory. Because the book is a "first draft," because risk analysis is relatively new and constantly changing, and because there are few other sources or texts for a course like this, the authors maintain a web page for suggestions, alternate solutions, and contributions. Of course the mathematical or statistical field of "decision analysis" has been around for a long time, but I believe the science of risk analysis, especially one so closely linked with social ramifications, is fairly new.
In their "Preface and Acknowledgment" the authors describe the evolution of the book, and suggest that it is a text for a one- or two-semester undergraduate course in risk analysis, for a professional graduate program, or for doctoral students in various fields. So, the first thing one must remember is that this is a textbook, and not a mathematics book for general audiences. This reviewer concurs with the authors' judgment on how the book should be used, as long as one thinks of an upper-level undergraduate course. The analytical aspect of the book would frustrate many lower-level "non-technical" undergraduates, and the qualitative aspects of the book could surprise the technically oriented freshmen and sophomores. The second thing to remember is that, true to the authors' claim, the book is not a bunch of formulas and models. They "seek to bridge the gap between the qualitative 'discussion' books" and the "advanced modeling books and journal papers."
The book is divided into three parts (not including Chapter 1, the introduction). Chapters 2-4 discuss the "tools of the trade": basic models, basic statistics, and variability and uncertainty. Chapters 5-8 discuss methodologies: structural models, empirical models, exposure assessment, and technological risk assessment. Chapters 9 and 10 put the methods into context and explore how the "human agent" influences analyses and decisions.
In all of the chapters the examples come in the form of solved problems (which are numbered), while the "homework" exercises are labeled alphabetically. In Chapter 1 the numbered problems discuss "getting started," "data needs," and "data usage."
Because this is a 400-page textbook, this reviewer did not read the book from cover to cover, so all that remains in this review is a brief description of what is in each chapter. Therefore, if you think that you have learned enough already, you need not go beyond this paragraph. However, before you turn the "web page," take with you the following thought. This statistician found the book quite intriguing, and teaching a course in risk analysis out of this text would be valuable for an instructor not versed in risk analysis as well as for the students. Almost all of the examples are "real-world" in the truest sense. Of course, not being versed in risk analysis, I would want to team-teach the course with someone (a sociologist?) who was!
Chapter 2 discusses basic models and risk problems, including stock and flow models, cause and effect relationships (such as dose-response models), mechanistic models and curve fitting. Examples include wallpaper glue, indoor radon exposure, earthquake versus traffic risks, and pharmacokinetic models.
Chapter 3 reviews statistics from the typical one-semester, introductory, undergraduate course. Here the discussion covers the mean, median, measuring dispersion, normal, binomial and chi-square distributions, sample data, hypothesis tests and confidence intervals, and fitting curves to ordered-pair data. The whole discussion is done with radon exposure in mind.
Chapter 4 discusses uncertainty and the use of Monte Carlo methods and Bayesian statistics to evaluate uncertainty. Measuring the speed of light, radon concentrations, interpreting medical test results, and exposure to home tap water are used to illustrate the concepts.
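The Bayesian interpretation of a medical test result, one of the chapter's illustrations, can be sketched in a few lines. The prevalence, sensitivity, and specificity below are hypothetical numbers of my own choosing, not figures from the book:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem,
    assuming the test's error rates are known exactly."""
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# A rare condition (1% prevalence) with a fairly accurate test:
print(posterior_given_positive(0.01, 0.95, 0.95))  # roughly 0.16
```

Even with a 95%-accurate test, most positives for a rare condition are false positives, which is exactly the sort of counterintuitive result the chapter uses to motivate Bayesian thinking.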
Chapter 5 begins the treatment of basic risk analysis methodologies with toxicology, and with a comparison to epidemiology. There is a hypothetical pesticide example, a discussion of "one-hit, two-hit, two-stage" approaches to fitting data to models, and the "EPA approach" for non-carcinogenic effects.
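The "one-hit" model mentioned above is the standard low-dose-linear dose-response form P(d) = 1 - exp(-qd); a minimal sketch, with a hypothetical potency q and doses of my own choosing (not data from the book's pesticide example):

```python
import math

def one_hit_response(dose, q):
    """One-hit dose-response model: probability of a response
    at a given dose, with potency parameter q."""
    return 1 - math.exp(-q * dose)

# At low doses the model is approximately linear: P(d) ~ q * d.
q = 0.05  # hypothetical potency
for d in (0.1, 1.0, 10.0):
    print(d, one_hit_response(d, q))
```

Fitting q to bioassay data, and comparing the one-hit fit against two-hit and two-stage alternatives, is the kind of exercise the chapter's numbered problems work through.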
Chapter 6 is a collection of six examples (the numbered problems, recall) exploring the "strengths and weaknesses" of epidemiological studies: cigarette smoking, cholera, benzene, the common cold, AIDS, and double-blind studies. The examples comprise the mundane, the exotic, and the practical!
Chapter 7 discusses exposure assessment, a topic touched on earlier in the book at some level. The "ChemLawn Claim," contaminated milk, biomass fuels and childhood disease, heptachlor in beef, and trichloroethylene exposure (I have been exposed to that!) continue the parade of real-world examples.
Chapter 8 investigates the "power and limitation" of using risk analysis for technological systems and new technology. From plutonium to coal-burning power plants, from commercial nuclear safety and high-level nuclear waste to fighter aircraft and domestic airplane flights, techniques like event trees and fault trees are illustrated.
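The fault-tree arithmetic behind such analyses reduces to combining basic-event probabilities through AND and OR gates. A minimal sketch, assuming independent basic events; the redundant-pump system and its failure probabilities are hypothetical, not an example from the book:

```python
def and_gate(probs):
    """Gate output fails only if ALL inputs fail (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Gate output fails if ANY input fails (independent events)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1 - q)
    return 1 - p_none

# Hypothetical system: cooling fails if power fails OR both
# redundant pumps fail.
p_power = 1e-3
p_pump = 1e-2
p_system = or_gate([p_power, and_gate([p_pump, p_pump])])
print(p_system)  # dominated by the power-failure branch
```

The calculation shows why redundancy helps so much: the two-pump AND branch contributes only 10^-4, leaving the single power supply as the dominant cut set.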
Chapter 9, titled "Decision Making," integrates the risk analysis tools presented in the earlier chapters with "tools for selecting among the alternatives." The four examples that are supposed to integrate the analytical with the decision making (remember, I did not read cover-to-cover) are: using dollar values to compare different risk reduction measures, event trees for decision analysis, risk-time curves applied to Superfund remediation, and a hypothetical "hoof blister" example.
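The dollar-value comparison of risk reduction measures amounts to an expected-cost calculation. A minimal sketch under hypothetical numbers (the option names, probabilities, and costs are my own illustrations, not examples from the book):

```python
def expected_cost(upfront_cost, p_accident, accident_cost):
    """Upfront cost plus probability-weighted accident cost."""
    return upfront_cost + p_accident * accident_cost

# Two hypothetical options for the same hazard:
do_nothing = expected_cost(0, 1e-3, 5_000_000)
mitigate = expected_cost(2_000, 1e-4, 5_000_000)

best = min(("do nothing", do_nothing), ("mitigate", mitigate),
           key=lambda option: option[1])
print(best)  # under these assumptions, mitigation has the lower expected cost
```

Of course, the book's whole point is that such expected-value arithmetic is only one input to the decision; Chapter 10 takes up what the numbers mean to the public.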
Finally, Chapter 10 examines the relationship between risk assessment and communication of that risk. The authors state early in the chapter that it is not necessarily what you want to tell the public that is important, but what the public needs to know and will be willing to hear. The analysis and reporting of results is grounded in reality. Sections such as "Same Numbers, Different Stories," "Framing a Question: Loss or Gain?" "Can or Should 'Zero Risk' be a Goal" and "What Will They Think It Means" are just a few of the sections putting the final touches on this book. Electromagnetic fields, surfers, ranking potential hazards, saccharin, and Alar are the interesting examples used to illustrate the main points of this important chapter.
Dex Whittinghill (email@example.com) is assistant professor of statistics, and in fact an "isolated statistician," in the congenial Department of Mathematics at Rowan University (formerly Glassboro State College). As a member of the MAA and the ASA (American Statistical Association), and a member of the ASA/MAA Joint Committee on Undergraduate Statistics, he is increasingly involved in the statistics education "movement." After submitting this review to MAA Online, he will put the finishing touches on his first (co-authored) paper in the statistics education genre.