
Statistics: Unlocking the Power of Data

Robin H. Lock, Patti Frazer Lock, Kari Lock Morgan, Eric F. Lock, Dennis F. Lock
Publisher: Wiley
Publication Date: 2012
Number of Pages: 736
Format: Hardcover
Price: $195.95
ISBN: 978-0-470-60187-7
Category: Textbook
[Reviewed by Robert W. Hayden, on 03/08/2016]

This introductory statistics textbook has been creating quite a stir in the statistics education community. Robin Lock has long been highly regarded in that community, and the other authors are his wife and three of his offspring. The book might reasonably be described as similar to the best current mainstream texts, with the added novelty of integrating resampling methods throughout, which brings us to the main conundrum involved in reviewing it: while most statisticians are familiar with resampling methods, mathematicians teaching statistics in a mathematics department may not be. So when this review was first planned, the question arose as to the extent to which such methods should be explained within the review. Since then another option has presented itself. A manuscript your reviewer produced some time ago has appeared on the Internet, where it is available to all. It discusses current forces affecting the college introductory statistics course, and how these might in turn affect the teaching of statistics at the precollege level. In so doing, it includes explanations of resampling methods for a layperson, so I encourage readers who want an explanation to read it.

Returning to the book under review, there are two main reasons commonly given for including resampling methods for inference. One is that these are valuable methods in their own right that may be useful to students. The other is that many feel that students find them more direct and intuitive than the traditional methods that must be built up gradually over a period of weeks. Resampling methods can often be introduced, and inference done, in the first week of class. This is the first text that has won widespread acceptance for addressing both reasons to do resampling, so perhaps we should begin with a discussion of how successfully it does that.
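For readers unfamiliar with the bootstrap approach the review describes, the core idea can be sketched in a few lines of Python. This is my own minimal illustration, not code from the text, and the data values are hypothetical:

```python
import random
import statistics

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean.

    The resampling idea in miniature: draw samples of the same size
    *with replacement* from the observed data, compute the statistic
    on each resample, and read the interval off the percentiles of
    the resulting distribution of statistics.
    """
    means = sorted(
        statistics.mean(random.choices(data, k=len(data)))
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical data set for illustration only
sample = [23, 19, 25, 30, 21, 28, 24, 26, 22, 27]
low, high = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({low:.1f}, {high:.1f})")
```

No sampling distribution theory is required to follow this, which is exactly the pedagogical appeal the review mentions: a student can run it in the first week of class.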

Before we can say how well this book treats resampling, we need to mention briefly some of its limitations. Early proponents of resampling made many unsubstantiated claims on its behalf. Some of those claims have been repeated so often that many have come to believe they must be true. In fact, resampling is a valid and useful approach, but it is not without limitations. Contrary to many claims, the bootstrap does not work well with small samples, and is not a panacea for violations of the usual assumptions for traditional methods. Some bootstrap methods, however, do offer advantages for larger samples from skewed populations. The reader interested in details is pointed to the work by Diez and by Hesterberg cited in my own paper above. Both are likewise freely available.

The Locks do a good job of presenting resampling methods and illustrating over and over that they often lead to results very close to those obtained by traditional methods. While they do discuss the limitations of traditional methods, they say little about the limitations of the bootstrap (though they generally apply it only in situations where it works). They mention that resampling methods can be applied to other situations than those covered by the methods that are traditionally part of a first course, but give few examples, and no warnings of limitations. The authors must be granted a measure of lenience here as these issues would be added topics in a course that is already overweight.

The Locks also do a good job of carrying out the ideas that have long been under discussion for using resampling as a pedagogical tool. Again, the limitations of the bootstrap are an issue. Fans point to students’ ready willingness to accept such an approach, while skeptics might note that such willingness led early fans to vastly overgeneralize the strengths of the bootstrap. Will students do likewise? Your reviewer thinks the jury is still out, but thanks the Locks for bringing forth a test case.

Beyond the introduction of resampling methods, this book is good but breaks little additional ground. One possible exception is that resampling requires technology, and this book has much to say about technology and is accompanied by a web site with technological tools.

At the same time, one senses some ambivalence. Take the 30 exercises at the end of Section 6.5 on confidence intervals for a single mean. Almost all of them involve cranking out intervals from given summary statistics “by hand,” as one would see in textbooks from the 1950s. Your reviewer would eliminate all of these (though your reviewer admits to being a radical). It might be helpful to have two problems where the student begins with a small data set and carries out the entire process from that to a final interval, checking assumptions, interpreting the interval in context, and addressing the quality of the study design. A minority of students might learn from the calculation, while others will find doing the arithmetic a difficult diversion from learning statistics. Of the 30 problems supplied, only two ask the student to use statistical software to crunch the numbers, only two ask the student to carry out resampling themselves, and only two provide a display of the data whereby assumptions might be addressed. There are still courses where students do not have access to technology, and publishers want to sell textbooks for those courses, but perhaps it is time for exercises addressed to that situation to be in the minority — or relegated to a supplement the way software instruction has heretofore been.

This book shares with most current texts an only partial recognition of the differences among surveys, experiments, and observational studies. One form this takes is extreme leniency in what is allowed to pass as a “random” sample. These issues are discussed but not consistently applied to examples and exercises.

Despite some imperfections, this text should be familiar to everyone teaching an introductory statistics course or considering a textbook for one. Truly innovative textbooks are rare indeed, and I hope this one will stimulate much discussion and debate. Often innovative texts are strong on their innovation but not a viable text otherwise. This one is an exception, and a worthy candidate for adoption in most introductory statistics courses.


After a few years in industry, Robert W. Hayden (bob@statland.org) taught mathematics at colleges and universities for 32 years and statistics for 20 years. In 2005 he retired from full-time classroom work. He now teaches statistics online at statistics.com and does summer workshops for high school teachers of Advanced Placement Statistics. He contributed the chapter on evaluating introductory statistics textbooks to the MAA's Teaching Statistics.

Unit A: Data

Chapter 1: Collecting Data

Chapter 2: Describing Data

Unit B: Understanding Inference

Chapter 3: Confidence Intervals

Chapter 4: Hypothesis Tests

Unit C: Inference for Means and Proportions

Chapter 5: Approximating with a Distribution

Chapter 6: Inference for Means and Proportions

Unit D: Inference for Multiple Parameters

Chapter 7: Chi-Square Tests for Categorical Variables

Chapter 8: ANOVA for Comparing Means

Chapter 9: Inference for Regression

Chapter 10: Multiple Regression

Optional:

Chapter 11: Probability Basics
