
The Signal and the Noise: Why So Many Predictions Fail — but Some Don't

Nate Silver
Penguin Books

The Basic Library List Committee suggests that undergraduate mathematics libraries consider this book for acquisition.

Reviewed by Allen Stenger

This book is a series of detailed case studies of real-world prediction problems where statistical thinking is essential. It is designed as a popular book, and the mathematics is very simple, with no explicit formulas (but lots of graphs). It uses Bayesian methods throughout; there is a little chaos theory and a number of examples of power-law dependencies, but these are introduced very gently and would not scare off the average reader.
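The Bayesian updating that runs through the book can be sketched in a few lines. The example below is my own illustration, not one from the book: the coin, the prior, and the likelihoods are invented numbers.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical: prior 0.5 that a coin is biased toward heads (P(heads) = 0.75
# if biased, 0.5 if fair); we see heads three times and update after each flip.
p = 0.5
for _ in range(3):
    p = bayes_update(p, 0.75, 0.5)
print(round(p, 3))  # → 0.771
```

Each observation nudges the probability in small, auditable steps, which is exactly the temperament Silver recommends for forecasters.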

The author trained as an economist and is best known for his predictive work in baseball and in politics. He received the 2015 Joint Policy Board for Mathematics (JPBM) Communications Award for his writings. This book is not primarily about his own experience but about conspicuous failures (and some successes) in predictions made by others. The book’s flavor is similar to Ellenberg’s How Not to Be Wrong, although that book covers a much broader array of methods (not just statistics) and is more technical. A large part of the book’s message is that the difficulties often lie not in the mathematics but in areas such as data collection, modeling, bias, and public policy.

The title refers to the problem of discerning patterns when there is a large amount of random variation. A good example is Chapter 12 on climate change. The outside temperature varies from minute to minute and location to location, and even for fixed locations we don’t have a long history of records. Central Park in New York has one of the best records, with monthly averages recorded since 1869. The climate change problem is to discern a systematic shift among all the random variation, which is an excellent problem for statistics (if you can get the data) but puzzling for the average person. If the outside temperature shot up to 90 degrees and stayed there, everyone would be convinced that global warming was real, but the effect we are looking at is an observed average temperature increase of 1.5 degrees Celsius per century, which is certainly not perceptible without a lot of study.
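The statistical task described here, recovering a small systematic shift from large random variation, amounts to fitting a trend line to noisy data. The sketch below is purely illustrative (the baseline, trend, and noise level are invented, not the Central Park record): it buries a 1.5 °C-per-century trend in year-to-year noise and recovers it by ordinary least squares.

```python
import random

random.seed(0)
# Hypothetical series: 150 years of annual mean temperatures with a
# +0.015 °C/year trend (1.5 °C per century) hidden in ±1 °C noise.
years = list(range(150))
temps = [12.0 + 0.015 * t + random.gauss(0, 1.0) for t in years]

# Ordinary least-squares slope: cov(x, y) / var(x).
n = len(years)
mx = sum(years) / n
my = sum(temps) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, temps))
         / sum((x - mx) ** 2 for x in years))
print(f"estimated trend: {slope * 100:.2f} °C per century")
```

No single year looks unusual, yet the fitted slope comes out close to the true 1.5 °C per century, which is why the signal is invisible to casual observation but clear to statistics.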

Bias occurs in many areas. If we have a preference about the outcome, we tend to ignore, discount, or declare as an outlier any data that points in a different direction. The problem seems to be especially bad in political forecasts (one of Silver’s specialties; he is the founder of the polling website FiveThirtyEight). Everyone has a preference about the outcome of an election and tends to cherry-pick the data that supports that preference. The case of television pundits is especially bad, as they are rewarded for being entertaining and for reinforcing the audience’s bias, not for being accurate. Humans also have a strong bias toward overconfidence: we believe we understand things better than we do, and proceed boldly when we should be cautious. We also have a bias toward believing that unlikely events, and events we have never seen, are impossible. One corrective for this last bias is power-law models. Silver contends that rare events such as earthquakes and terrorist attacks follow a power-law distribution, and the data backs him up. In a power-law distribution, extremely large events such as devastating earthquakes or terrorist attacks that kill thousands occur rarely, but statistically they are guaranteed to occur and we can estimate how frequently; in the model there is no point beyond which they are impossible.
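The power-law idea can be made concrete with the Gutenberg-Richter relation for earthquakes, an empirical power law of the kind Silver discusses: the count of quakes at or above magnitude M falls off as log10(N) = a - b·M. The constants below are illustrative assumptions chosen for the sketch, not fitted values.

```python
# Gutenberg-Richter sketch: log10(N) = a - b*M, where N is the expected
# annual number of earthquakes of magnitude >= M. Illustrative constants.
a, b = 8.0, 1.0

def annual_rate(magnitude):
    """Expected number per year of quakes at or above `magnitude`."""
    return 10 ** (a - b * magnitude)

for m in (5, 6, 7, 8, 9):
    rate = annual_rate(m)
    print(f"M>={m}: ~{rate:g}/year (one every ~{1 / rate:g} years)")
```

With b = 1, each additional unit of magnitude makes an event ten times rarer but never impossible, which is the corrective the review describes: the model assigns a small, estimable frequency to catastrophes we have never personally seen.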

Now that we live in the age of Big Data, there is a temptation to believe that we can model and predict anything based on the data we have. A recurrent theme in the book is the dangers of overfitting and of developing pure correlation models divorced from cause-and-effect reasoning. If you have enough variables you can develop models that fit the existing data as closely as you want, but in Silver’s terminology you may be fitting the noise and not the signal. The danger seems to be especially great in economics, where there is a never-ending search for leading indicators that will predict the future direction of the economy or the stock market.
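The overfitting danger can be demonstrated directly: a model with enough free parameters fits existing data exactly, even when that data is pure noise. In this invented example, a degree-7 polynomial is threaded through eight noisy observations of a constant signal; it matches the "training" data perfectly but behaves wildly just beyond it, while the boring sample mean does not.

```python
import random

random.seed(1)
# Hypothetical "indicator" data: y is pure noise around a constant signal of 10.
xs = list(range(8))
ys = [10 + random.gauss(0, 1) for _ in xs]

def lagrange(x, xs, ys):
    """Evaluate the degree-(n-1) Lagrange polynomial through all n points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The flexible model reproduces the data exactly -- it has fit the noise.
print(max(abs(lagrange(x, xs, ys) - y) for x, y in zip(xs, ys)))  # → 0.0
# But it extrapolates absurdly one step past the data; the mean stays near 10.
print(lagrange(9, xs, ys), sum(ys) / len(ys))
```

A perfect in-sample fit here carries no information about the future, which is Silver's point about indicator-hunting in economics: the noise has been memorized, not the signal.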

Public policy is another challenging area. Weather prediction has improved tremendously over the past 50 years or so, and in particular we are now able to make good predictions of the paths of hurricanes. In 2005, weather scientists predicted that Hurricane Katrina was likely to strike New Orleans, and they publicized this five days before it hit. Two days before landfall they urged local officials to order evacuations; the Governor of Mississippi did so immediately, and the Governor of Louisiana had already declared a state of emergency. The Mayor of New Orleans delayed and did not issue a mandatory evacuation order until the day before Katrina hit, and by then it was too late for many people: they either did not hear the warnings, did not take them seriously, or simply could not move that fast. Katrina killed about 1,200 people in Louisiana and Mississippi.

Bottom line: a fascinating book about current real-life problems, with a strong statistical message.

Allen Stenger is a math hobbyist and retired software developer. He is an editor of the Missouri Journal of Mathematical Sciences. His mathematical interests are number theory and classical analysis.


1. A catastrophic failure of prediction

2. Are you smarter than a television pundit?

3. All I care about is W’s and L’s

4. For years you’ve been telling us that rain is green

5. Desperately seeking signal

6. How to drown in three feet of water

7. Role models

8. Less and less and less wrong

9. Rage against the machines

10. The poker bubble

11. If you can’t beat ’em ...

12. A climate of healthy skepticism

13. What you don’t know can hurt you