Publisher: Polka Dot Publishing
Publication Date: 2005
Number of Pages: 541
Format: Hardcover
Price: $39.00
ISBN: 9780970999559
Category: Textbook

[Reviewed by Allen Stenger, on 12/3/2013]

This is a cookbook with many interesting recipes. The frame story concerns Fred, a 6-year-old math genius, and is an extended narrative where he has many adventures and finds opportunities to use statistics. The story is engaging, if unbelievable, and shows statistics in everyday but somewhat contrived contexts. The book is one in a series of books designed for self-study and aimed mostly at homeschoolers, but including some college level topics, such as this one.

The organization of the book is unusual, grouping ideas according to how the data is gathered or presented rather than by statistical test. The book includes an Emergency Statistics Guide, similar to the symptom guides found in family medical guides, that uses a decision tree to guide you to the correct test based on the properties of the data and on what you are attempting to find out. Very Good Feature: the book constantly alerts you to the sample sizes needed for a test to work reliably.
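The decision-tree idea behind such a guide is easy to picture in code. The sketch below is a hypothetical, highly simplified chooser invented for illustration; the book's actual guide branches on many more properties (normality, known standard deviations, pairing, and so on).

```python
def choose_test(n_samples, data_type, paired=False, n=0):
    """Suggest a statistical test from a few properties of the data.

    A toy decision tree, not the book's actual guide: answer a few
    questions about the data and get back the name of a test.
    """
    if n_samples == 1:
        if data_type == "nominal":
            return "Sign test for nominal data"
        # Small samples fall back on Student's t; large ones use z.
        return "Student's t" if n <= 30 else "z-test (normal)"
    if n_samples == 2:
        if paired:
            return "Wilcoxon Signed Ranks test"
        return "Mann-Whitney test"
    # Three or more samples: parametric vs. nonparametric choice.
    return "One-Way ANOVA" if data_type == "interval" else "Kruskal-Wallis test"

print(choose_test(1, "interval", n=12))  # -> Student's t
print(choose_test(3, "ordinal"))         # -> Kruskal-Wallis test
```

A real guide of this kind is just a larger tree of the same shape, which is why it fits naturally into a flowchart in the book.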

Limitations: This is definitely a cookbook, with minimal discussion of mathematics and little attempt to justify the recipes. It is not calculus-based, and assumes nothing past high-school algebra. You don’t even need to understand *e*, logarithms, or square roots, because the book guides you to use a calculator to figure anything involving these. The exercises, although all word problems, are simple and are for drill rather than to expand understanding. The premise seems to be that the reader will use this book as a handbook while doing statistics, and therefore does not need to rely on memory; accordingly, the book does not include a lot of drill. There is no real-world data and there are no real-world examples, although the examples are very simplified versions of real-world problems.

This book lists for about one-third the price of other introductory statistics texts. This comes from several factors: (1) it has about half as many pages as the competition; (2) it is printed in black and white on ordinary book paper, not in full color on slick paper; (3) it is sold primarily at retail through the web and has essentially no marketing costs. The page count is kept low primarily by not having many worked examples; there is essentially one fully-worked example per statistical test, with some more sketched examples in the exercises. To put the cost in perspective, if this had been a Dover reprint, it would probably list for about $25 in softcover, so $39 for a hardcover of a current book seems reasonable.

Bottom line: an intriguing approach to introductory statistics at a bargain price that would be a good text for service courses. It should also work for intelligent readers who want to understand what statistics can do.

Allen Stenger is a math hobbyist and retired software developer. He is webmaster and newsletter editor for the MAA Southwestern Section and is an editor of the Missouri Journal of Mathematical Sciences. His mathematical interests are number theory and classical analysis. He volunteers in his spare time at MathNerds.org, a math help site that fosters inquiry learning.

**Chapter 1: Descriptive Statistics**

Frequency distributions, scatter diagrams, averages (mean, median, and mode), linear regression, populations vs. samples, histograms, range, percentiles, deciles, quintiles, quartiles, variance, sigma notation, standard deviation for populations and for samples, distributions (skewed, platykurtic, leptokurtic, bimodal)

**Chapter 2: Probability**

Outcomes, sample space, events (independent, complements, mutually exclusive), Venn diagrams

**Chapter 3: Conditional Probability**

\(\mathcal{P}(A\mid B)\) notation, definition of conditional probability, Bayes’ Theorem and its proof, generalized Bayes’ Theorem
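As a quick worked illustration of the chapter's central formula, Bayes' Theorem with the denominator expanded by total probability, \(P(A\mid B) = P(B\mid A)\,P(A) / \bigl(P(B\mid A)\,P(A) + P(B\mid \lnot A)\,P(\lnot A)\bigr)\). The numbers below are invented for the example, not taken from the book.

```python
def bayes(p_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from a prior P(A) and the two conditional likelihoods."""
    # Total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# A screening test that is 99% sensitive and 95% specific,
# applied to a condition with a 1% base rate:
posterior = bayes(p_a=0.01, p_b_given_a=0.99, p_b_given_not_a=0.05)
print(round(posterior, 4))  # -> 0.1667
```

Even with a very accurate test, the posterior is only about 17% because the prior is so small, exactly the kind of counterintuitive result that makes Bayes' Theorem worth a chapter.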

**Chapter 3½: Looking Forward to the Next Four Chapters**

The Future (zero samples), the Past (one sample), the Present (two samples), the Present (three or more samples)

**Chapter 4: The Future—Zero Samples**

Poisson distributions, *e*, factorial, continuous vs. discrete variables, exponential distributions (three forms), permutations and combinations, Bernoulli variables, binomial distributions, hypergeometric distributions, multinomial distributions, extended hypergeometric distributions, normal distributions (Gaussian distributions), normal curves to approximate binomial distributions

**Chapter 4½: The Art of the Sample**

Null hypothesis (\(H_0\)), the problem of induction (Hume’s problem), the problem of small samples, type I and type II errors, levels of significance, The Ten Rules of Fair Play, data mining, cherry picking, data snooping, pilot samples, alternative hypotheses, one-tail vs. two-tail propositions, dealing with sensitive questions in a survey, dealing with bad luck in surveys, simple random surveys, systematic samples, cluster sampling, stratified samples, outliers, statistical significance vs. actual significance, 13 alternatives to saying “\(H_0\) is tenable.”

**Chapter 5: The Past—One Sample**

Why no one knows what time it is, normal distributions (large samples, but a small part of the population), z-scores, determining sample size, confidence intervals, Central Limit Theorem, point estimates, Wald confidence intervals vs. Agresti-Coull confidence intervals, finite population correction factors, normal distributions (large samples that are a large part of the population), Student’s t-Distribution, Lilliefors test for normality, standardizing data, cumulative normal frequency, Wilcoxon Signed Ranks test (the Median test), uniform distributions, symmetric distributions, Sign test, power of a test, data (nominal, ordinal, interval, ratio), parametric vs. nonparametric statistics, Sign test for nominal data, Kolmogorov-Smirnov goodness-of-fit test for uniform distributions and for normal distributions, Chi-Squared test for goodness of fit, the Lie Detector test, the is-the-sample-too-variable test, sequences (random, cyclical, trends), Runs test
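The Wald vs. Agresti-Coull comparison mentioned above comes down to two short formulas, sketched here (standard textbook formulas, not the book's presentation): the Wald interval is \(\hat p \pm z\sqrt{\hat p(1-\hat p)/n}\), while Agresti-Coull first adds \(z^2\) pseudo-observations before computing the same form.

```python
import math

def wald_interval(x, n, z=1.96):
    """Wald confidence interval for a proportion, from x successes in n trials."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def agresti_coull_interval(x, n, z=1.96):
    """Agresti-Coull interval: add z^2 pseudo-trials, then use the Wald form."""
    n_adj = n + z * z
    p_adj = (x + z * z / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return (p_adj - half, p_adj + half)

# 8 successes in 10 trials: the Wald interval overshoots past 1.0,
# while Agresti-Coull stays inside [0, 1].
print(wald_interval(8, 10))
print(agresti_coull_interval(8, 10))
```

This small example shows why the distinction earns a mention: with small samples and proportions near 0 or 1, the Wald interval can extend beyond the possible range of a proportion.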

**Chapter 5½: Secrets of the Binomial Proportion**

Starting with a small sample of a Bernoulli variable, we determine the confidence interval for \(\pi\), the proportion of “good” items in the underlying population; a small history of the problem, the Monte Carlo method, the journal article (from *The Journal of Fredometrika*) which describes a new approach to the problem

**Chapter 6: The Present—Two Samples**

Paired samples, Two Paired Samples \( (\mu_1-\mu_2) \) test, Wilcoxon Signed Ranks test for two paired samples, Signs test for two paired samples, Signs test for paired samples of Hot & Cold, Two Proportions with 2 samples in 2 categories, independent samples, Two Large Independent Samples test, Two Independent Samples test when \(\sigma_1\) and \(\sigma_2\) are known, F-Distribution test, Two Small Independent Samples test where the populations are normal and the standard deviations are roughly equal, Two Small Independent Samples test where the populations are roughly normal but the standard deviations are quite different from each other (a.k.a. the Smith-Satterthwaite test), Mann-Whitney test, Chi-Squared test for 2 samples in many categories, contingency tables, one sample with two variables, Chi-Squared test with Yates Correction for 2 samples in 2 categories, Fisher’s Exact test for 2 samples in 2 categories

**Chapter 7: The Present—Many Samples**

One-Way ANOVA test for independent samples, weighted averages, Post-test for One-Way ANOVA for independent samples, One-Way ANOVA for matched samples (blocked samples), Post-test for One-Way ANOVA for matched samples, Two-Factor ANOVA with one observation per cell, Post-test for Two-Factor ANOVA, ANOVA tables, Two-Factor ANOVA with several observations per cell, Kruskal-Wallis test, Post-test for Kruskal-Wallis, Chi-Squared test for nominal data with three or more samples, correlation vs. causation

**Chapter 7½: Emergency Statistics Guide**

**Chapter 8: Finding Regression Equations**

Linear regression, prediction intervals, Pearson Product Moment Correlation Coefficient (r), coefficient of determination, multiple regression, normal equations, coefficient of multiple determination (R²), adjusted coefficient of multiple determination, design variables, dummy variables, saturated models, multicollinearity, step-down method, nonlinear regression, logarithmic curves, reciprocal curves, power curves, exponential curves, parabolic curves, two independent variables with possible interaction, logistic regression
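The simple linear regression that opens this chapter needs nothing past high-school algebra, consistent with the book's approach: the slope is \(S_{xy}/S_{xx}\) and the intercept is \(\bar y - b\bar x\). A minimal sketch of these standard formulas (the data points are invented for the example):

```python
def linear_regression(xs, ys):
    """Least-squares line through (xs, ys): returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # S_xy and S_xx: sums of products of deviations from the means.
    s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    s_xx = sum((x - mean_x) ** 2 for x in xs)
    slope = s_xy / s_xx
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(slope, intercept)  # -> 2.01 0.0 (approximately)
```

Everything here is arithmetic a calculator can do, which is presumably why regression fits the book's no-calculus premise.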

**The Field Guide**

Future — The population is known and you want to know what the sample will look like. You start with zero samples: Hypergeometric Distribution, Extended Hypergeometric Distribution, Binomial Distribution, Multinomial Distribution, Poisson Distribution, Exponential Distribution, Normal Distribution

Past — The sample is known and you want to know what the population was that gave this sample. You start with one sample: Normal Distribution (\(n > 30\) and the sample is small compared with the population), Normal Distribution (\(n > 30\) and the sample is large compared with the population), Student’s t-Distribution, Binomial Distribution (large sample, \(n > 30\)), Binomial Distribution (small sample, \(n \leq 30\)), Kolmogorov-Smirnov goodness-of-fit test, Lilliefors test, Wilcoxon Signed Ranks test, Sign test (Does the population have that median?), Sign test for Nominal Data, Chi-Squared test (goodness of fit), Chi-Squared test (Lie Detector), Chi-Squared test (Is the population too variable?), Runs test

Present — You start with two samples and want to know how they compare with each other: Two Paired Samples \((\mu_1-\mu_2)\), Wilcoxon Signed Ranks test, Sign test for two paired samples, Sign test for two paired samples of nominal data, Two Proportions in two categories, Two Large Independent Samples (\(n \geq 30\)), Two Independent Samples (\(\sigma_1\) and \(\sigma_2\) known), F-Distribution test, Two Small Independent Samples with roughly equal standard deviations, Two Small Independent Samples (Smith-Satterthwaite) with very different standard deviations, Mann-Whitney test (a.k.a. Wilcoxon Rank-Sum test), Chi-Squared test (\(\chi^2\)) for two samples of nominal data in multiple categories, One Sample with Two Variables, Chi-Squared test (\(\chi^2\)) with Yates Correction, Fisher’s Exact test

Present — You start with three or more samples and want to know how they compare with each other: One-Way ANOVA (independent samples), Post-test for One-Way ANOVA (independent samples), One-Way ANOVA (matched samples), Post-test for One-Way ANOVA (matched samples), Two-Factor ANOVA (one observation per cell), Post-test for Two-Factor ANOVA (one observation per cell), Two-Factor ANOVA (multiple observations per cell), Kruskal-Wallis test, Post-test for Kruskal-Wallis, Chi-Squared (\(\chi^2\)) for three samples of nominal data

**Tables**

Table A: Binomial Coefficients

Table B: Kolmogorov-Smirnov (one sample)

Table C: Standard Normal Curve (area from 0 to z)

Table D: Standard Normal Curve (area from \(-\infty\) to z)

Table E: Standard Normal Curve (area from –z to z)

Table F: Student’s *t*-Distribution

Table G: Lilliefors

Table H: Wilcoxon Signed Ranks

Table I: Sign test

Table J: Chi-Squared (\(\chi^2\))

Table K: Runs test

Table L: Mann-Whitney (Wilcoxon Rank-Sum)

Table M: Fisher’s Exact test

Table N: *F*-Distribution

Table O: Kruskal-Wallis test

Table P: Binomial Proportion Intervals

Index
