Models for Probability and Statistical Inference was written to serve as a text for a two-semester sequence in probability and statistical inference offered at Michigan State University. It is informed by the author’s years of experience teaching this course: his comments on the importance of distinguishing between handwritten X and x, for instance, are clearly based on time spent in the College of Hard Knocks, Instructors Division. Stapleton covers the usual topics with admirable clarity but in rather compact fashion, which makes his book not always easy to read and perhaps intimidating to some students. On the other hand, graduate statistical study is not for the faint of heart, and most students using this text will have far younger eyes than do I.
The most promising new feature of Stapleton’s text is its attempt to integrate the teaching of probability and inference with the use of R and S-Plus, which are fast becoming the languages of choice for many statistical purposes. He includes simulations and graphs produced by S-Plus throughout the text, as well as S-Plus code and output for some examples. However, he assumes either student familiarity with these languages or an investment of instructor time to teach them: students who have not used them before will not learn how to do so from this book.
Each section of Models for Probability and Statistical Inference is followed by a number of problems, arranged from simple to complex, and selected answers are provided in the back of the text. There are numerous illustrations to clarify important concepts, although they often seem to have been crammed into pages that were already barely sufficient for the text and formulas. Thirty pages of statistical tables follow the main text, as does a brief (three-page) general bibliography.
The students at Michigan State for whom this text is intended will have had at least one course in linear analysis and two semesters of calculus; however, much of the text is understandable without that background. It can also serve as a reference book for people who have taken a similar course in the past and just need to refresh their memory on a topic.
James H. Stapleton received his PhD in mathematical statistics from Purdue University and served as professor in the Department of Statistics and Probability at Michigan State University for forty-nine years. He was also the chairperson of the department for eight years and served almost twenty years as graduate director. Stapleton is also the author of Linear Statistical Models (Wiley, 1995).
Sarah Boslaugh (email@example.com) is a Performance Review Analyst for BJC HealthCare and an Adjunct Instructor in the Washington University School of Medicine, both in St. Louis, MO. Her books include An Intermediate Guide to SPSS Programming: Using Syntax for Data Management (Sage, 2004), Secondary Data Sources for Public Health: A Practical Guide (Cambridge, 2007), and Statistics in a Nutshell (O'Reilly, forthcoming); she also served as Editor-in-Chief for The Encyclopedia of Epidemiology (Sage, 2008).
1.1 Discrete Probability Models.
1.2 Conditional Probability and Independence.
1.3 Random Variables.
1.5 The Variance.
1.6 Covariance and Correlation.
2. Special Discrete Distributions.
2.1 The Binomial Distribution.
2.2 The Hypergeometric Distribution.
2.3 The Geometric and Negative Binomial Distributions.
2.4 The Poisson Distribution.
3. Continuous Random Variables.
3.1 Continuous RV's and Their Distributions.
3.2 Expected Values and Variances.
3.3 Transformations of Random Variables.
4. Special Continuous Distributions.
4.1 The Normal Distribution.
4.2 The Gamma Distribution.
5. Conditional Distributions.
5.1 The Discrete Case.
5.2 Conditional Expectations for the Discrete Case.
5.3 Conditional Densities and Expectations for Continuous RV's.
6. Limit Laws.
6.1 Moment Generating Functions.
6.2 Convergence in Probability and in Distribution.
6.3 The Central Limit Theorem.
6.4 The Delta-Method.
7. Estimation.
7.1 Point Estimation.
7.2 The Method of Moments.
7.3 Maximum Likelihood.
7.5 The δ-Method.
7.6 Confidence Intervals.
7.7 Fisher Information, The Cramér-Rao Bound, and Asymptotic Normality of MLE's.
8. Testing Hypotheses.
8.2 The Neyman-Pearson Lemma.
8.3 The Likelihood Ratio Test.
8.4 The p-Value and the Relationship Between Tests of Hypotheses and Confidence Intervals.
9. The Multivariate Normal, Chi-square, t, and F-Distributions.
9.1 The Multivariate Normal Distribution.
9.2 The Central and Noncentral Chi-Square Distributions.
9.3 Student's t-Distribution.
9.4 The F-Distribution.
10. Nonparametric Statistics.
10.1 The Wilcoxon Test and Estimator.
10.2 One Sample Methods.
10.3 The Kolmogorov-Smirnov Tests.
11. Linear Models.
11.1 The Principle of Least Squares.
11.2 Linear Models.
11.3 F-Tests for H0.
11.4 Two-Way Analysis of Variance.
12. Frequency Data.
12.1 Logistic Regression.
12.2 Two-Way Frequency Tables.
12.3 Chi-Square Goodness of Fit Tests.
13. Miscellaneous Topics.
13.1 Survival Analysis.
13.3 Bayesian Statistics.