A Natural Introduction to Probability Theory

Ronald Meester
Publisher: Birkhäuser
Publication Date: 2003
Number of Pages: 191
Format: Paperback
Price: 34.95
ISBN: 3-7643-2188-1
Category: Textbook
[Reviewed by Raymond N. Greenwell, on 03/01/2005]

Most textbooks designed for a one-year course in mathematical statistics cover probability in the first few chapters as preparation for the statistics to come. This book in some ways resembles the first part of such textbooks: it's all probability, no statistics. But it does the probability more fully than usual, spending lots of time on motivation, explanation, and rigorous development of the mathematics. All the famous examples are presented, sometimes in the exercises: Simpson's paradox; the two-envelope paradox (including a continuous variable version in which it paradoxically pays to sometimes switch envelopes); the St. Petersburg paradox; the problem of three people trying to guess the color of their hats; and the problem of the careless hatcheck person (here called the secretary problem). There are other fascinating but less familiar examples, such as a counterintuitive result on random networks, illustrated on the cover of the book, with a very clever proof. Although the author assumes only calculus through multiple integration, he tries to bring the reader to the point of understanding why measure theory (alluded to without any details) is necessary for probability to be a fully developed field of mathematics.
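
As an aside not drawn from the book itself, examples like the hatcheck problem are easy to explore by simulation; the following minimal Python sketch of my own estimates the probability that nobody gets their own hat back, which settles near 1/e ≈ 0.368.

    import random

    def no_match_probability(n_hats=20, trials=100_000):
        """Estimate the chance that a uniformly random permutation of
        n_hats hats has no fixed point (nobody receives their own hat)."""
        no_match = 0
        for _ in range(trials):
            perm = list(range(n_hats))
            random.shuffle(perm)
            if all(perm[i] != i for i in range(n_hats)):
                no_match += 1
        return no_match / trials

    print(no_match_probability())  # typically about 0.368, i.e. roughly 1/e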

The first four chapters cover discrete probability. The first two include the usual topics of conditional probability, independence, Bayes' Theorem, random variables, expectation, and generating functions. Chapter 3 discusses random walks, not a standard topic in mathematical statistics texts. One result from this chapter is used in the next chapter (Limit Theorems) to prove a special case of the Central Limit Theorem in which all the random variables take only the values 1 and -1. Chapter 4 also contains the first of many laws of large numbers included in the book.
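
To make that special case concrete, here is a small simulation sketch of my own (not from the book): scaled sums of independent fair ±1 steps behave like a standard normal, so roughly 95% of them land within ±1.96.

    import math
    import random

    def scaled_sum(n):
        """Sum of n independent +/-1 steps, scaled by sqrt(n)."""
        return sum(random.choice((-1, 1)) for _ in range(n)) / math.sqrt(n)

    samples = [scaled_sum(1_000) for _ in range(20_000)]
    within = sum(abs(x) <= 1.96 for x in samples) / len(samples)
    print(f"fraction within +/-1.96: {within:.3f}")  # close to 0.95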

Before launching into continuous probability in the following chapters, the author provides a short chapter titled "Intermezzo," in which he explains why discrete probability is not enough for many purposes. He discusses uncountable infinities and even the Banach-Tarski paradox in very general terms.

The second part of the book begins with Chapter 5, where the author extends the ideas of the first four chapters to continuous random variables, univariate and multivariate. There is some discussion of random variables that are mixed continuous-discrete, as well as one example of a random vector that is neither continuous nor discrete nor mixed. Situations with infinitely many repetitions of an experiment are considered in Chapter 6. Returning to the random walks introduced in Chapter 3, the author shows that in a one-dimensional random walk, the probability of returning to the start is 1. The Poisson process is discussed in much greater detail in Chapter 7 than in the brief introduction to the Poisson distribution in Chapter 2.
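
That recurrence result also lends itself to a quick numerical illustration (again my own sketch, not the author's): the fraction of simulated walks that revisit the origin within a fixed horizon creeps toward 1 as the horizon grows, though slowly, since the expected return time is infinite.

    import random

    def returns_to_origin(max_steps):
        """Run a simple symmetric random walk from 0 and report whether
        it comes back to 0 within max_steps steps."""
        position = 0
        for _ in range(max_steps):
            position += random.choice((-1, 1))
            if position == 0:
                return True
        return False

    for horizon in (100, 1_000, 10_000):
        trials = 5_000
        hits = sum(returns_to_origin(horizon) for _ in range(trials))
        print(horizon, hits / trials)  # estimates climbing slowly toward 1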

Chapter 8 features another law of large numbers, but the high point is the Central Limit Theorem, presented here in the form usually seen in undergraduate textbooks, and proved using characteristic functions. Chapter 9, entitled "Extending the Probabilities," introduces sigma algebras and motivates further measure-theoretic study of probability.
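
For readers who have not seen the characteristic-function route, the standard argument runs roughly as follows (I summarize it in outline; the book's version may differ in its details). For i.i.d. random variables with mean 0 and variance σ²,

    \varphi_X(t) = \mathbb{E}\, e^{itX} = 1 - \frac{\sigma^2 t^2}{2} + o(t^2),
    \qquad
    \varphi_{S_n/(\sigma\sqrt{n})}(t)
        = \left( 1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right) \right)^{\! n}
        \longrightarrow e^{-t^2/2},

and since e^{-t^2/2} is the characteristic function of the standard normal, Lévy's continuity theorem yields convergence in distribution.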

There are two types of exercises: those scattered throughout the text, and those at the end of each chapter. The first type usually asks the reader to verify a result or finish a proof. (My favorite, after the proof of a lemma in Chapter 7, simply states: "Make sure you really understand this.") The second type contains further examples, applications, and results, but these tend to be plentiful only in the chapters with the standard material and sparse in the chapters that take interesting detours or focus on theorems. Specifically, Chapters 1, 2, and 5 have 36, 41, and 40 exercises, respectively, while Chapters 3, 4, 6, 7, 8, and 9 have three, two, six, nine, eight, and none, respectively. There is one page of answers to some of the exercises in an appendix.

The exposition is usually clear and eloquent. One exception is an abstruse example on p. 98 that tries to motivate the Cauchy distribution; it desperately needs a figure to be intelligible. The notation is also usually clear, except on p. 110, where the notation for an indicator function is used after being defined 58 pages earlier and not used in between.
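
For the curious reader, the usual geometric motivation (which may or may not be the route taken on p. 98) places a source one unit above the x-axis emitting a ray at an angle Θ uniform on (−π/2, π/2); the point X = tan Θ where the ray hits the axis then has

    F_X(x) = \mathbb{P}(\tan\Theta \le x) = \frac{1}{\pi}\arctan x + \frac{1}{2},
    \qquad
    f_X(x) = \frac{1}{\pi(1 + x^2)},

the standard Cauchy density, a construction that is far easier to grasp with a picture of the geometry.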

Overall, this is a five-star book on probability that could be used as a textbook or as a supplement. Another well-known book in this niche is A First Course in Probability, by Sheldon Ross, but Meester's book is much smaller and cheaper.


Raymond N. Greenwell (matrng@hofstra.edu) is Professor of Mathematics at Hofstra University in Hempstead, New York. His research interests include applied mathematics and statistics, and he is coauthor of the texts Finite Mathematics and Calculus with Applications, both published by Addison-Wesley.

Preface

Experiments

Random Variables and Random Vectors

Random Walk

Limit Theorems

Intermezzo

Continuous Random Variables and Vectors

Infinitely Many Repetitions

The Poisson Process

Limit Theorems

Extending the Probabilities

Interpreting Probabilities

Further Reading

Answers to Selected Exercises

Index.