Advanced Statistics from an Elementary Point of View

Michael J. Panik
Publisher: Elsevier
Publication Date: 2005
Number of Pages: 802
Format: Hardcover
Price: $99.95
ISBN: 0-12-088494-1
Category: Textbook
[Reviewed by Darren Glass, on 04/11/2006]

According to the description on the back of the book, Michael J. Panik's Advanced Statistics from an Elementary Point of View "captures the flavor of a course in mathematical statistics without imposing rigor for its own sake." Your reaction to that quotation will pretty well summarize your feelings toward the book, depending on whether or not you appreciate rigor "for its own sake." If all blurb-writers were this honest about their books, then there would be no need for MAA Reviews. The book covers a wide range of topics in statistics: it doesn't even define the mean or median until the second chapter, yet it ends with chapters on contingency tables and bivariate linear regression. In between there are full chapters on parametric probability distributions, sampling, the Chi-Square distribution, point estimation, and tests of parametric statistical hypotheses.

Panik's goal is to make the book accessible, and in this he succeeds. His exposition is clear, and I found his many examples easy to follow. There are also a large number of exercises, many of which have solutions given and some of which are quite interesting. Panik certainly does not impose much rigor, however, and many of the explanations were not as fleshed out or precise as I would have liked. While I imagine that many of my students would appreciate the lack of formal proofs throughout the book, I know that many others, and certainly most mathematicians and statisticians I know, would find the book deficient for this very reason. Panik preemptively addresses this issue in his introduction, pointing out that many of the theorems have their proofs developed in the exercises, but this reader was still disappointed.

Panik is an economist, and I imagine that his book would work well for a statistics course aimed at economics majors, or for the similar courses in many departments on my campus that focus on actually using statistical techniques rather than on why they work. But while it did indeed "capture the flavor" of a course I would want to see in a mathematics department, it would need quite a bit of extra meat to be substantial enough for the whole meal that I would want from a textbook.


Darren Glass (dglass@gettysburg.edu) is an Assistant Professor at Gettysburg College.

1 Introduction
1.1 Statistics Defined
1.2 Types of Statistics
1.3 Levels of Discourse: Sample vs. Population
1.4 Levels of Discourse: Target vs. Sampled Population
1.5 Measurement Scales
1.6 Sampling and Sampling Errors
1.7 Exercises
2 Elementary Descriptive Statistical Techniques
2.1 Summarizing Sets of Data Measured on a Ratio or Interval Scale
2.2 Tabular Methods
2.3 Quantitative Summary Characteristics
2.3.1 Measures of Central Location
2.3.2 Measures of Dispersion
2.3.3 Standardized Variables
2.3.4 Moments
2.3.5 Skewness and Kurtosis
2.3.6 Relative Variation
2.3.7 Comparison of the Mean, Median, and Mode
2.3.8 The Sample Variance and Standard Deviation
2.4 Correlation between Variables X and Y
2.5 Rank Correlation between Variables X and Y
2.6 Exercises
3 Probability Theory
3.1 Mathematical Foundations: Sets, Set Relations, and Functions
3.2 The Random Experiment, Events, Sample Space, and the Random Variable
3.3 Axiomatic Development of Probability Theory
3.4 The Occurrence and Probability of an Event
3.5 General Addition Rule for Probabilities
3.6 Joint, Marginal, and Conditional Probability
3.7 Classification of Events
3.8 Sources of Probabilities
3.9 Bayes’ Rule
3.10 Exercises
4 Random Variables and Probability Distributions
4.1 Random Variables
4.2 Discrete Probability Distributions
4.3 Continuous Probability Distributions
4.4 Mean and Variance of a Random Variable
4.5 Chebyshev’s Theorem for Random Variables
4.6 Moments of a Random Variable
4.7 Quantiles of a Probability Distribution
4.8 Moment-Generating Function
4.9 Probability-Generating Function
4.10 Exercises
5 Bivariate Probability Distributions
5.1 Bivariate Random Variables
5.2 Discrete Bivariate Probability Distributions
5.3 Continuous Bivariate Probability Distributions
5.4 Expectations and Moments of Bivariate Probability Distributions
5.5 Chebyshev’s Theorem for Bivariate Probability Distributions
5.6 Joint Moment-Generating Function
5.7 Exercises
6 Discrete Parametric Probability Distributions
6.1 Introduction
6.2 Counting Rules
6.3 Discrete Uniform Distribution
6.4 The Bernoulli Distribution
6.5 The Binomial Distribution
6.6 The Multinomial Distribution
6.7 The Geometric Distribution
6.8 The Negative Binomial Distribution
6.9 The Poisson Distribution
6.10 The Hypergeometric Distribution
6.11 The Generalized Hypergeometric Distribution
6.12 Exercises
7 Continuous Parametric Probability Distributions
7.1 Introduction
7.2 The Uniform Distribution
7.3 The Normal Distribution
7.3.1 Introduction to Normality
7.3.2 The Z Transformation
7.3.3 Moments, Quantiles, and Percentage Points
7.3.4 The Normal Curve of Error
7.4 The Normal Approximation to Binomial Probabilities
7.5 The Normal Approximation to Poisson Probabilities
7.6 The Exponential Distribution
7.6.1 Source of the Exponential Distribution
7.6.2 Features/Uses of the Exponential Distribution
7.7 Gamma and Beta Functions
7.8 The Gamma Distribution
7.9 The Beta Distribution
7.10 Other Useful Continuous Distributions
7.10.1 The Lognormal Distribution
7.10.2 The Logistic Distribution
7.11 Exercises
8 Sampling and the Sampling Distribution of a Statistic
8.1 The Purpose of Random Sampling
8.2 Sampling Scenarios
8.2.1 Data Generating Process or Infinite Population
8.2.2 Drawings from a Finite Population
8.3 The Arithmetic of Random Sampling
8.4 The Sampling Distribution of a Statistic
8.5 The Sampling Distribution of the Mean
8.5.1 Sampling from an Infinite Population
8.5.2 Sampling from a Finite Population
8.6 A Weak Law of Large Numbers
8.7 Convergence Concepts
8.8 A Central Limit Theorem
8.9 The Sampling Distribution of a Proportion
8.10 The Sampling Distribution of the Variance
8.11 A Note on Sample Moments
8.12 Exercises
9 The Chi-Square, Student’s t, and Snedecor’s F Distributions
9.1 Derived Continuous Parametric Distributions
9.2 The Chi-Square Distribution
9.3 The Sampling Distribution of the Variance When Sampling from a Normal Population
9.4 Student’s t Distribution
9.5 Snedecor’s F Distribution
9.6 Exercises
10 Point Estimation and Properties of Point Estimators
10.1 Statistics as Point Estimators
10.2 Desirable Properties of Estimators as Statistical Properties
10.3 Small Sample Properties of Point Estimators
10.3.1 Unbiased, Minimum Variance, and Minimum MSE Estimators
10.3.2 Efficient Estimators
10.3.3 Most Efficient Estimators
10.3.4 Sufficient Statistics
10.3.5 Minimal Sufficient Statistics
10.3.6 On the Use of Sufficient Statistics
10.3.7 Completeness
10.3.8 Best Linear Unbiased Estimators
10.3.9 Jointly Sufficient Statistics
10.4 Large Sample Properties of Point Estimators
10.4.1 Asymptotic or Limiting Properties
10.4.2 Asymptotic Mean and Variance
10.4.3 Consistency
10.4.4 Asymptotic Efficiency
10.4.5 Asymptotic Normality
10.5 Techniques for Finding Good Point Estimators
10.5.1 Method of Maximum Likelihood
10.5.2 Method of Least Squares
10.6 Exercises
11 Interval Estimation and Confidence Interval Estimates
11.1 Interval Estimators
11.2 Central Confidence Intervals
11.3 The Pivotal Quantity Method
11.4 A Confidence Interval for µ Under Random Sampling from a Normal Population with Known Variance
11.5 A Confidence Interval for µ Under Random Sampling from a Normal Population with Unknown Variance
11.6 A Confidence Interval for σ² Under Random Sampling from a Normal Population with Unknown Mean
11.7 A Confidence Interval for p Under Random Sampling from a Binomial Population
11.8 Joint Estimation of a Family of Population Parameters
11.9 Confidence Intervals for the Difference of Means When Sampling from Two Independent Normal Populations
11.9.1 Population Variances Known
11.9.2 Population Variances Unknown But Equal
11.9.3 Population Variances Unknown and Unequal
11.10 Confidence Intervals for the Difference of Means When Sampling from Two Dependent Populations: Paired Comparisons
11.11 Confidence Intervals for the Difference of Proportions When Sampling from Two Independent Binomial Populations
11.12 Confidence Interval for the Ratio of Two Variances When Sampling from Two Independent Normal Populations
11.13 Exercises
12 Tests of Parametric Statistical Hypotheses
12.1 Statistical Inference Revisited
12.2 Fundamental Concepts for Testing Statistical Hypotheses
12.3 What Is the Research Question?
12.4 Decision Outcomes
12.5 Devising a Test for a Statistical Hypothesis
12.6 The Classical Approach to Statistical Hypothesis Testing
12.7 Types of Tests or Critical Regions
12.8 The Essentials of Conducting a Hypothesis Test
12.9 Hypothesis Test for µ Under Random Sampling from a Normal Population with Known Variance
12.10 Reporting Hypothesis Test Results
12.11 Determining the Probability of a Type II Error β
12.12 Hypothesis Tests for µ Under Random Sampling from a Normal Population with Unknown Variance
12.13 Hypothesis Tests for p Under Random Sampling from a Binomial Population
12.14 Hypothesis Tests for σ² Under Random Sampling from a Normal Population
12.15 The Operating Characteristic and Power Functions of a Test
12.16 Determining the Best Test for a Statistical Hypothesis
12.17 Generalized Likelihood Ratio Tests
12.18 Hypothesis Tests for the Difference of Means When Sampling from Two Independent Normal Populations
12.18.1 Population Variances Equal and Known
12.18.2 Population Variances Unequal But Known
12.18.3 Population Variances Equal But Unknown
12.18.4 Population Variances Unequal and Unknown
12.19 Hypothesis Tests for the Difference of Means When Sampling from Two Dependent Populations: Paired Comparisons
12.20 Hypothesis Tests for the Difference of Proportions When Sampling from Two Independent Binomial Populations
12.21 Hypothesis Tests for the Difference of Variances When Sampling from Two Independent Normal Populations
12.22 Hypothesis Tests for Spearman’s Rank Correlation Coefficient ρS
12.23 Exercises
13 Nonparametric Statistical Techniques
13.1 Parametric vs. Nonparametric Methods
13.2 Tests for the Randomness of a Single Sample
13.3 Single-Sample Sign Test Under Random Sampling
13.4 Wilcoxon Signed Rank Test of a Median
13.5 Runs Test for Two Independent Samples
13.6 Mann-Whitney (Rank-Sum) Test for Two Independent Samples
13.7 The Sign Test When Sampling from Two Dependent Populations: Paired Comparisons
13.8 Wilcoxon Signed-Rank Test When Sampling from Two Dependent Populations: Paired Comparisons
13.9 Exercises
14 Testing Goodness of Fit
14.1 Distributional Hypotheses
14.2 The Multinomial Chi-Square Statistic: Complete Specification of H0
14.3 The Multinomial Chi-Square Statistic: Incomplete Specification of H0
14.4 The Kolmogorov-Smirnov Test for Goodness of Fit
14.5 The Lilliefors Goodness-of-Fit Test for Normality
14.6 The Shapiro-Wilk Goodness-of-Fit Test for Normality
14.7 The Kolmogorov-Smirnov Test for Goodness of Fit: Two Independent Samples
14.8 Assessing Normality via Sample Moments
14.9 Exercises
15 Testing Goodness of Fit: Contingency Tables
15.1 An Extension of the Multinomial Chi-Square Statistic
15.2 Testing Independence
15.3 Testing k Proportions
15.4 Testing for Homogeneity
15.5 Measuring Strength of Association in Contingency Tables
15.6 Testing Goodness of Fit with Nominal-Scale Data: Paired Samples
15.7 Exercises
16 Bivariate Linear Regression and Correlation
16.1 The Regression Model
16.2 The Strong Classical Linear Regression Model
16.3 Estimating the Slope and Intercept of the Population Regression Line
16.4 Mean, Variance, and Sampling Distribution of the Least-Squares Estimators β̂0 and β̂1
16.5 Precision of the Least-Squares Estimators β̂0, β̂1: Confidence Intervals
16.6 Testing Hypotheses Concerning β0, β1
16.7 The Precision of the Entire Least-Squares Regression Equation: A Confidence Band
16.8 The Prediction of a Particular Value of Y Given X
16.9 Decomposition of the Sample Variation of Y
16.10 The Correlation Model
16.11 Estimating the Population Correlation Coefficient ρ
16.12 Inferences about the Population Correlation Coefficient ρ
16.13 Exercises
Appendix A
Table A.1 Standard Normal Areas
Table A.2 Cumulative Distribution Function Values for the Standard Normal Distribution
Table A.3 Quantiles of Student’s t Distribution
Table A.4 Quantiles of the Chi-Square Distribution
Table A.5 Quantiles of Snedecor’s F Distribution
Table A.6 Binomial Probabilities
Table A.7 Cumulative Distribution Function Values for the Binomial Distribution
Table A.8 Poisson Probabilities
Table A.9 Fisher’s ρ̂ (= r) to z Transformation
Table A.10 R Distribution for the Runs Test of Randomness
Table A.11 W+ Distribution for the Wilcoxon Signed-Rank Test
Table A.12 R1 Distribution for the Mann-Whitney Rank-Sum Test
Table A.13 Quantiles of the Lilliefors Test Statistic D̂n
Table A.14 Quantiles of the Kolmogorov-Smirnov Test Statistic Dn
Table A.15 Quantiles of the Kolmogorov-Smirnov Test Statistic Dn,m When n = m
Table A.16 Quantiles of the Kolmogorov-Smirnov Test Statistic Dn,m When n ≠ m
Table A.17 Quantiles of the Shapiro-Wilk Test Statistic W
Table A.18 Coefficients for the Shapiro-Wilk Test
Table A.19 Durbin-Watson DW Statistic
Table A.20 D Distribution of the Von Neumann Ratio of the Mean-Square Successive Difference to the Variance
Solutions to Selected Exercises
References and Suggested Reading
Index