Bayesian methods command a large community in data analysis and statistical modelling, with applications ranging from the medical to the social sciences. Advances in computing power have made Bayesian methods far more practical to apply. The author presents the freeware WINBUGS as the main software tool for the analyses.
The table of contents gives a clear idea of what the author intends to cover. The book is intended for readers who want to learn the methods presented, not merely to be reminded of them. It contains theory, examples, computer code and explanations, but it is grounded in the theoretical concepts.
The theoretical exposition is thorough and clear, and the presentation style is inviting. The author chooses to explain methods in an intuitive, narrative way rather than merely stating the important concepts. Examples form a major part of the text; without them it would be a dry account of the Bayesian framework. There are numerous examples in every chapter, accompanied by computer code (mostly WINBUGS) and the corresponding output. Each example is discussed and its output explained. The data and code used in the examples are also available for download (the web site is given in the book). In short, the examples provide a basis for further practical analysis and exploration. A large number of references is provided at the end of each chapter.
For anyone serious about learning Bayesian statistical methods, this book is a fine starting point, assuming some background knowledge. Appropriate prerequisites are upper-level undergraduate probability, some matrix algebra (not necessary, but helpful) and familiarity with WINBUGS, R or MATLAB.
Skipping the computer code, and with it the details of the examples, greatly diminishes the book's pedagogical effectiveness, since these are the main drivers for learning the theory presented. The exercises provided at the end of each chapter also greatly help in understanding the concepts. Most of the exercises are applied, i.e. they require the computer. The author provides hints and computer code for some of them.
Overall, I think this is an excellent book for anyone wishing to learn Bayesian modelling. It is suitable for researchers, graduate students and upper-level undergraduates.
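To give a flavour of the kind of worked example the book is built around, the following is a minimal sketch, in Python rather than WINBUGS, of conjugate Bayesian updating for a normal mean with known variance (the setting of section 3.2 below). The data and prior values here are illustrative inventions, not taken from the book.

```python
import math

def posterior_normal_mean(y, sigma2, m0, s02):
    """Posterior mean and variance of mu for y_i ~ N(mu, sigma2)
    with conjugate prior mu ~ N(m0, s02) and known sigma2."""
    n = len(y)
    ybar = sum(y) / n
    # Precisions (inverse variances) add; the posterior mean is a
    # precision-weighted average of the prior mean and the sample mean.
    post_var = 1.0 / (1.0 / s02 + n / sigma2)
    post_mean = post_var * (m0 / s02 + n * ybar / sigma2)
    return post_mean, post_var

# Illustrative data with known variance 4 and a diffuse N(0, 100) prior.
y = [4.1, 5.3, 3.8, 4.9, 5.0]
m, v = posterior_normal_mean(y, sigma2=4.0, m0=0.0, s02=100.0)
print(f"posterior mean {m:.3f}, posterior sd {math.sqrt(v):.3f}")
```

With a diffuse prior the posterior mean sits close to the sample mean, which is the behaviour the book's early examples demonstrate before moving on to MCMC for models without closed-form posteriors.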
Chapter 1 Introduction: The Bayesian Method, its Benefits and Implementation.
1.1 The Bayes approach and its potential advantages.
1.2 Expressing prior uncertainty about parameters and Bayesian updating.
1.3 MCMC sampling and inferences from posterior densities.
1.4 The main MCMC sampling algorithms.
1.4.1 Gibbs sampling.
1.5 Convergence of MCMC samples.
1.6 Predictions from sampling: using the posterior predictive density.
1.7 The present book.
Chapter 2 Bayesian Model Choice, Comparison and Checking.
2.1 Introduction: the formal approach to Bayes model choice and averaging.
2.2 Analytic marginal likelihood approximations and the Bayes information criterion.
2.3 Marginal likelihood approximations from the MCMC output.
2.4 Approximating Bayes factors or model probabilities.
2.5 Joint space search methods.
2.6 Direct model averaging by binary and continuous selection indicators.
2.7 Predictive model comparison via cross-validation.
2.8 Predictive fit criteria and posterior predictive model checks.
2.9 The DIC criterion.
2.10 Posterior and iteration-specific comparisons of likelihoods and penalised likelihoods.
2.11 Monte Carlo estimates of model probabilities.
Chapter 3 The Major Densities and their Application.
3.2 Univariate normal with known variance.
3.2.1 Testing hypotheses on normal parameters.
3.3 Inference on univariate normal parameters, mean and variance unknown.
3.4 Heavy tailed and skew density alternatives to the normal.
3.5 Categorical distributions: binomial and binary data.
3.5.1 Simulating controls through historical exposure.
3.6 Poisson distribution for event counts.
3.7 The multinomial and Dirichlet densities for categorical and proportional data.
3.8 Multivariate continuous data: multivariate normal and t densities.
3.8.1 Partitioning multivariate priors.
3.8.2 The multivariate t density.
3.9 Applications of standard densities: classification rules.
3.10 Applications of standard densities: multivariate discrimination.
Chapter 4 Normal Linear Regression, General Linear Models and Log-Linear Models.
4.1 The context for Bayesian regression methods.
4.2 The normal linear regression model.
4.2.1 Unknown regression variance.
4.3 Normal linear regression: variable and model selection, outlier detection and error form.
4.3.1 Other predictor and model search methods.
4.4 Bayesian ridge priors for multicollinearity.
4.5 General linear models.
4.6 Binary and binomial regression.
4.6.1 Priors on regression coefficients.
4.6.2 Model checks.
4.7 Latent data sampling for binary regression.
4.8 Poisson regression.
4.8.1 Poisson regression for contingency tables.
4.8.2 Log-linear model selection.
4.9 Multivariate responses.
Chapter 5 Hierarchical Priors for Pooling Strength and Overdispersed Regression Modelling.
5.1 Hierarchical priors for pooling strength and in general linear model regression.
5.2 Hierarchical priors: conjugate and non-conjugate mixing.
5.3 Hierarchical priors for normal data with applications in meta-analysis.
5.3.1 Prior for second-stage variance.
5.4 Pooling strength under exchangeable models for Poisson outcomes.
5.4.1 Hierarchical prior choices.
5.4.2 Parameter sampling.
5.5 Combining information for binomial outcomes.
5.6 Random effects regression for overdispersed count and binomial data.
5.7 Overdispersed normal regression: the scale-mixture Student t model.
5.8 The normal meta-analysis model allowing for heterogeneity in study design or patient risk.
5.9 Hierarchical priors for multinomial data.
5.9.1 Histogram smoothing.
Chapter 6 Discrete Mixture Priors.
6.1 Introduction: the relevance and applicability of discrete mixtures.
6.2 Discrete mixtures of parametric densities.
6.2.1 Model choice.
6.3 Identifiability constraints.
6.4 Hurdle and zero-inflated models for discrete data.
6.5 Regression mixtures for heterogeneous subpopulations.
6.6 Discrete mixtures combined with parametric random effects.
6.7 Non-parametric mixture modelling via Dirichlet process priors.
6.8 Other non-parametric priors.
Chapter 7 Multinomial and Ordinal Regression Models.
7.1 Introduction: applications with categorical and ordinal data.
7.2 Multinomial logit choice models.
7.3 The multinomial probit representation of interdependent choices.
7.4 Mixed multinomial logit models.
7.5 Individual level ordinal regression.
7.6 Scores for ordered factors in contingency tables.
Chapter 8 Time Series Models.
8.1 Introduction: alternative approaches to time series models.
8.2 Autoregressive models in the observations.
8.2.1 Priors on autoregressive coefficients.
8.2.2 Initial conditions as latent data.
8.3 Trend stationarity in the AR(1) model.
8.4 Autoregressive moving average models.
8.5 Autoregressive errors.
8.6 Multivariate series.
8.7 Time series models for discrete outcomes.
8.7.1 Observation-driven autodependence.
8.7.2 INAR models.
8.7.3 Error autocorrelation.
8.8 Dynamic linear models and time-varying coefficients.
8.8.1 Some common forms of DLM.
8.8.2 Priors for time-specific variances or interventions.
8.8.3 Nonlinear and non-Gaussian state-space models.
8.9 Models for variance evolution.
8.9.1 ARCH and GARCH models.
8.9.2 Stochastic volatility models.
8.10 Modelling structural shifts and outliers.
8.10.1 Markov mixtures and transition functions.
8.11 Other nonlinear models.
Chapter 9 Modelling Spatial Dependencies.
9.1 Introduction: implications of spatial dependence.
9.2 Discrete space regressions for metric data.
9.3 Discrete spatial regression with structured and unstructured random effects.
9.3.1 Proper CAR priors.
9.4 Moving average priors.
9.5 Multivariate spatial priors and spatially varying regression effects.
9.6 Robust models for discontinuities and non-standard errors.
9.7 Continuous space modelling in regression and interpolation.
Chapter 10 Nonlinear and Nonparametric Regression.
10.1 Approaches to modelling nonlinearity.
10.2 Nonlinear metric data models with known functional form.
10.3 Box–Cox transformations and fractional polynomials.
10.4 Nonlinear regression through spline and radial basis functions.
10.4.1 Shrinkage models for spline coefficients.
10.4.2 Modelling interaction effects.
10.5 Application of state-space priors in general additive nonparametric regression.
10.5.1 Continuous predictor space prior.
10.5.2 Discrete predictor space priors.
Chapter 11 Multilevel and Panel Data Models.
11.1 Introduction: nested data structures.
11.2 Multilevel structures.
11.2.1 The multilevel normal linear model.
11.2.2 General linear mixed models for discrete outcomes.
11.2.3 Multinomial and ordinal multilevel models.
11.2.4 Robustness regarding cluster effects.
11.2.5 Conjugate approaches for discrete data.
11.3 Heteroscedasticity in multilevel models.
11.4 Random effects for crossed factors.
11.5 Panel data models: the normal mixed model and extensions.
11.5.1 Autocorrelated errors.
11.5.2 Autoregression in y.
11.6 Models for panel discrete (binary, count and categorical) observations.
11.6.1 Binary panel data.
11.6.2 Repeated counts.
11.6.3 Panel categorical data.
11.7 Growth curve models.
11.8 Dynamic models for longitudinal data: pooling strength over units and times.
11.9 Area APC and spatiotemporal models.
11.9.1 Age–period data.
11.9.2 Area–time data.
11.9.3 Age–area–period data.
11.9.4 Interaction priors.
Chapter 12 Latent Variable and Structural Equation Models for Multivariate Data.
12.1 Introduction: latent traits and latent classes.
12.2 Factor analysis and SEMs for continuous data.
12.2.1 Identifiability constraints in latent trait (factor analysis) models.
12.3 Latent class models.
12.3.1 Local dependence.
12.4 Factor analysis and SEMs for multivariate discrete data.
12.5 Nonlinear factor models.
Chapter 13 Survival and Event History Analysis.
13.2 Parametric survival analysis in continuous time.
13.2.1 Censored observations.
13.2.2 Forms of parametric hazard and survival curves.
13.2.3 Modelling covariate impacts and time dependence in the hazard rate.
13.3 Accelerated hazard parametric models.
13.4 Counting process models.
13.5 Semiparametric hazard models.
13.5.1 Priors for the baseline hazard.
13.5.2 Gamma process prior on cumulative hazard.
13.6 Competing risk continuous-time models.
13.7 Variations in proneness: models for frailty.
13.8 Discrete time survival models.
Chapter 14 Missing Data Models.
14.1 Introduction: types of missingness.
14.2 Selection and pattern mixture models for the joint data-missingness density.
14.3 Shared random effect and common factor models.
14.4 Missing predictor data.
14.5 Multiple imputation.
14.6 Categorical response data with possible non-random missingness: hierarchical and regression models.
14.6.1 Hierarchical models for response and non-response by strata.
14.6.2 Regression frameworks.
14.7 Missingness with mixtures of continuous and categorical data.
14.8 Missing cells in contingency tables.
14.8.1 Ecological inference.
Chapter 15 Measurement Error, Seemingly Unrelated Regressions, and Simultaneous Equations.
15.2 Measurement error in both predictors and response in normal linear regression.
15.2.1 Prior information on X or its density.
15.2.2 Measurement error in general linear models.
15.3 Misclassification of categorical variables.
15.4 Simultaneous equations and instruments for endogenous variables.
15.5 Endogenous regression involving discrete variables.
Appendix 1 A Brief Guide to Using WINBUGS.
A1.1 Procedure for compiling and running programs.
A1.2 Generating simulated data.
A1.3 Other advice.