Theory and Methods of Statistics

P. K. Bhattacharya and Prabir Burman
Publisher: Academic Press
Publication Date: 2016
Number of Pages: 515
Format: Paperback
Price: 150.00
ISBN: 9780128024409
Category: Textbook
[Reviewed by Robert W. Hayden, on 10/9/2017]

There exists a pretty standard syllabus for an introductory statistics course, so when reviewing an introductory statistics textbook one can simply discuss what, if anything, is unusual about the book at hand and then turn to whatever the text does particularly well or poorly among the standard topics. That is not possible with the book under review here. To a first approximation, it is a very theoretical, very traditional, and very mathematical “mathematical statistics” textbook. It is quite different from many popular textbooks of today and so might best be understood in historical context. Your reviewer first studied statistics from Mathematical Statistics (1962) by John Freund. That book was rather like an advanced calculus text, with a mix of theory and applications; just flipping pages, one sees more English than mathematics. In the text under review, by contrast, there is little English among the mathematical symbols. Someone interested in the applications of statistics who had taken only a first course would recognize much in Freund but very little in this book. Even such applications as are offered are presented in terms of the theory and would not look familiar to that veteran of a single prior course.

Another classic from the same era as Freund is Hogg and Craig’s Introduction to Mathematical Statistics (1958). It is more oriented toward theory and less toward applications than Freund, but not nearly as much so as the book under review. Hogg later teamed with Tanis on Probability and Statistical Inference (1977), which fell somewhere between Freund and Hogg’s earlier book with Craig in the balance between theory and applications. Its major innovation, however, was the inclusion of many examples of computer simulations done for pedagogical purposes. In addition, there was available a suite of Fortran programs and a 332-page lab manual. These supported using a computer to run simulations similar to those in the text, as well as to analyze the real data included in the lab manual. (This was before there was Minitab.) The book under review has very little raw data and largely ignores computers.

The 1980s saw a trend toward greater inclusion of applications and experience with real data. Some of the better known texts from that era include those by Mendenhall, Scheaffer and Wackerly, by Larsen and Marx, and by John Rice — all still in print.

Perhaps the most recent development in mathematical statistics textbooks is represented by Mathematical Statistics with Resampling and R (2011) by Chihara and Hesterberg. As the name suggests, in addition to lots of data, this book uses resampling methods and the R programming language, both for pedagogical purposes and for analyzing real data. The rapid growth of data science in recent years is likely to add pressure for more applications, more data, and more computing in all statistics courses. The book at hand seems to be bucking that trend. This does not mean it is a bad book, but it may be a niche book.
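For readers who have never met resampling, here is a minimal sketch of one such method, the percentile bootstrap. It is written in Python rather than the R that Chihara and Hesterberg use, and the sample values and the helper name bootstrap_ci are invented for illustration; none of this code is taken from their book.

    import random
    import statistics

    # Invented sample values, for illustration only.
    sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 4.7]

    def bootstrap_ci(data, stat=statistics.mean, reps=10000, alpha=0.05):
        """Percentile bootstrap confidence interval for a statistic."""
        boot = sorted(
            stat(random.choices(data, k=len(data)))  # resample with replacement
            for _ in range(reps)
        )
        # Keep the middle (1 - alpha) fraction of the resampled statistics.
        return boot[int(reps * alpha / 2)], boot[int(reps * (1 - alpha / 2))]

    print(bootstrap_ci(sample))  # approximate 95% interval for the mean

The point is the idea rather than the particular code: each resample with replacement stands in for a fresh sample from the population, so the spread of the recomputed statistic approximates its sampling distribution without any appeal to normal theory.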

There will always be a need for a small number of mathematical statisticians, and this book would be a viable choice for training those people. It covers most of the usual topics and some less usual ones. There is an entire chapter on convergence of sequences of random variables, and it includes a section on inequalities. (As every good analyst knows, these are often the key to proving a theorem about limits or convergence; a classic instance appears below.) There is an interesting chapter on curve estimation which includes regression as well as multiple other approaches, and unifies the lot. There are also chapters on robust estimators and time series. While there is not much English prose in this book, what there is is well written, if rather terse. Scattered throughout are a number of interesting examples and asides with bits of wisdom for the reader that give the book more personality than most textbooks. As might be expected, the exercises generally ask for proofs. There is a 13-page index.
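To make the aside about inequalities concrete (a textbook-standard example, not one taken from this book): Chebyshev’s inequality says that a random variable $X$ with mean $\mu$ and finite variance $\sigma^2$ satisfies

$$P\bigl(|X - \mu| \ge \epsilon\bigr) \le \frac{\sigma^2}{\epsilon^2},$$

and since the mean $\bar{X}_n$ of $n$ i.i.d. observations has variance $\sigma^2/n$, applying the inequality to $\bar{X}_n$ gives

$$P\bigl(|\bar{X}_n - \mu| \ge \epsilon\bigr) \le \frac{\sigma^2}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty,$$

which is precisely the weak law of large numbers: an inequality doing the work in a convergence proof.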

The authors concede that the book contains far more material than can be covered in a year. The book is based on lecture notes for four courses, and one suggested outline would require two full years. In addition to its use as a textbook, this volume would make a good reference on the theory behind many standard methods. The chapter on time series means it might be a particularly good reference (for the teacher) in a year-long course in business statistics. In short, this is a good and interesting book for a small market.


After a few years in industry, Robert W. Hayden (bob@statland.org) taught mathematics at colleges and universities for 32 years and statistics for 20 years. In 2005 he retired from full-time classroom work. He contributed the chapter on evaluating introductory statistics textbooks to the MAA’s Teaching Statistics.

  • Dedication
  • Preface
  • 1: Probability Theory
    • Abstract
    • 1.1 Random Experiments and Their Outcomes
    • 1.2 Set Theory
    • 1.3 Axiomatic Definition of Probability
    • 1.4 Some Simple Propositions
    • 1.5 Equally Likely Outcomes in Finite Sample Space
    • 1.6 Conditional Probability and Independence
    • 1.7 Random Variables and Their Distributions
    • 1.8 Expected Value, Variance, Covariance, and Correlation Coefficient
    • 1.9 Moments and the Moment Generating Function
    • 1.10 Independent Random Variables and Conditioning When There Is Dependence
    • 1.11 Transforms of Random Variables and Their Distributions
    • Exercises
  • 2: Some Common Probability Distributions
    • Abstract
    • 2.1 Discrete Distributions
    • 2.2 Continuous Distributions
    • Exercises
  • 3: Infinite Sequences of Random Variables and Their Convergence Properties
    • Abstract
    • 3.1 Introduction
    • 3.2 Modes of Convergence
    • 3.3 Probability Inequalities
    • 3.4 Asymptotic Normality: The Central Limit Theorem and Its Generalizations
    • Exercises
  • 4: Basic Concepts of Statistical Inference
    • Abstract
    • 4.1 Population and Random Samples
    • 4.2 Parametric and Nonparametric Models
    • 4.3 Problems of Statistical Inference
    • 4.4 Statistical Decision Functions
    • 4.5 Sufficient Statistics
    • 4.6 Optimal Decision Rules
    • Exercises
  • 5: Point Estimation in Parametric Models
    • Abstract
    • 5.1 Optimality Under Unbiasedness, Squared-Error Loss, UMVUE
    • 5.2 Lower Bound for the Variance of an Unbiased Estimator
    • 5.3 Equivariance
    • 5.4 Bayesian Estimation Using Conjugate Priors
    • 5.5 Methods of Estimation
    • Exercises
  • 6: Hypothesis Testing
    • Abstract
    • 6.1 Early History
    • 6.2 Basic Concepts
    • 6.3 Simple Null Hypothesis vs Simple Alternative: Neyman-Pearson Lemma
    • 6.4 UMP Tests for One-Sided Hypotheses Against One-Sided Alternatives in Monotone Likelihood Ratio Families
    • 6.5 Unbiased Tests
    • 6.6 Generalized Neyman-Pearson Lemma
    • 6.7 UMP Unbiased Tests for Two-Sided Problems
    • 6.8 Locally Best Tests
    • 6.9 UMP Unbiased Tests in the Presence of Nuisance Parameters: Similarity and Completeness
    • 6.10 The p-Value: Another Way to Report the Result of a Test
    • 6.11 Sequential Probability Ratio Test
    • 6.12 Confidence Sets
    • Exercises
  • 7: Methods Based on Likelihood and Their Asymptotic Properties
    • Abstract
    • 7.1 Asymptotic Properties of the MLEs: Consistency and Asymptotic Normality
    • 7.2 Likelihood Ratio Test
    • 7.3 Asymptotic Properties of MLE and LRT Based on Independent Nonidentically Distributed Data
    • 7.4 Frequency χ²
    • Exercises
  • 8: Distribution-Free Tests for Hypothesis Testing in Nonparametric Families
    • Abstract
    • 8.1 Ranks and Order Statistics
    • 8.2 Locally Most Powerful Rank Tests
    • 8.3 Tests Based on Empirical Distribution Function
    • Exercises
  • 9: Curve Estimation
    • Abstract
    • 9.1 Introduction
    • 9.2 Density Estimation
    • 9.3 Regression Estimation
    • 9.4 Nearest Neighbor Approach
    • 9.5 Curve Estimation in Higher Dimension
    • 9.6 Curve Estimation Using Local Polynomials
    • 9.7 Estimation of Survival Function and Hazard Rates Under Random Right-Censoring
    • Exercises
  • 10: Statistical Functionals and Their Use in Robust Estimation
    • Abstract
    • 10.1 Introduction
    • 10.2 Functional Delta Method
    • 10.3 The L-Estimators
    • 10.4 The M-Estimators
    • 10.5 A Relation Between L-Estimators and M-Estimators
    • 10.6 The Remainder Term Rₙ
    • 10.7 The Jackknife and the Bootstrap
    • Exercises
  • 11: Linear Models
    • Abstract
    • 11.1 Introduction
    • 11.2 Examples of Gauss-Markov Models
    • 11.3 Gauss-Markov Models: Estimation
    • 11.4 Decomposition of Total Sum of Squares
    • 11.5 Estimation Under Linear Restrictions on β
    • 11.6 Gauss-Markov Models: Inference
    • 11.7 Analysis of Covariance
    • 11.8 Model Selection
    • 11.9 Some Alternate Methods for Regression
    • 11.10 Random- and Mixed-Effects Models
    • 11.11 Inference: Examples From Mixed Models
    • Exercises
  • 12: Multivariate Analysis
    • Abstract
    • 12.1 Introduction
    • 12.2 Wishart Distribution
    • 12.3 The Role of Multivariate Normal Distribution
    • 12.4 One-Sample Inference
    • 12.5 Two-Sample Problem
    • 12.6 One-Factor MANOVA
    • 12.7 Two-Factor MANOVA
    • 12.8 Multivariate Linear Model
    • 12.9 Principal Components Analysis
    • 12.10 Factor Analysis
    • 12.11 Classification and Discrimination
    • 12.12 Canonical Correlation Analysis
    • Exercises
  • 13: Time Series
    • Abstract
    • 13.1 Introduction
    • 13.2 Concept of Stationarity
    • 13.3 Estimation of the Mean and the Autocorrelation Function
    • 13.4 Partial Autocorrelation Function (PACF)
    • 13.5 Causality and Invertibility
    • 13.6 Forecasting
    • 13.7 ARIMA Models and Forecasting
    • 13.8 Parameter Estimation
    • 13.9 Selection of an Appropriate ARMA Model
    • 13.10 Spectral Analysis
    • Exercises
  • Appendix A: Results From Analysis and Probability
    • A.1 Some Important Results in Integration Theory
    • A.2 Convex Functions
    • A.3 Stieltjes Integral
    • A.4 Characteristic Function, Weak Law of Large Numbers, and Central Limit Theorem
    • A.5 Weak Convergence of Probabilities on C[0,1]
  • Appendix B: Basic Results From Matrix Algebra
    • B.1 Some Elementary Facts
    • B.2 Eigenvalues and Eigenvectors
    • B.3 Functions of Symmetric Matrices
    • B.4 Generalized Eigenvalues
    • B.5 Matrix Derivatives
    • B.6 Orthogonal Projection
    • B.7 Distribution of Quadratic Forms
  • Bibliography
  • Index