Mathematics faculty, especially those of us who teach at smaller institutions, increasingly find ourselves called upon not only to provide instruction in statistics, but also to act as statistical consultants for students and colleagues in other departments and to direct student research in projects related to data analysis. Some of us find ourselves designing and teaching a second course in statistics, or even teaching the elements of the much-ballyhooed field of “Data Science.”
Data analysis these days is often conceptualized as an activity that draws on three areas of competence: statistics, substantive knowledge of the field under study, and programming/hacking skills.
Although the first area (statistics) requires a substantial change in perspective — it is not really a branch of mathematics at all — at least the methodology of inferential statistics employs reassuringly heavy doses of familiar fields of mathematics such as linear algebra and probability theory. In addition, many mathematicians have gone before us into statistical territory, and are active participants in several energetic organizations (including the SIGMAA on Statistics Education and the United States Conference on the Teaching of Statistics) that help fellow mathematicians along. As for the area of substantive knowledge, we have learned to rely on our colleagues (and students) for guidance.
It’s really the third area — programming/hacking skills — that holds many of us back. These are not a timeless set of skills, and acquiring them is a matter of breadth, flexibility and patience, rather than depth, concentration and clever deduction. Hacking simply isn’t a part of traditional mathematical training. In many ways it runs counter to the mathematical temperament, so the sense of satisfaction attendant upon having, at last, wrangled one’s computer into Doing Something is very much an acquired taste, for most of us.
The tools with which we must acquire at least a passing familiarity are so many, and the initial hurdles (installation and configuration of free software, for instance) are potentially so frustrating, that we need a guide: we need someone who inspires us with what can be accomplished eventually, and who tries to help us past some of the initial points of blockage.
Christopher Gandrud, a political economist at the Hertie School of Governance in Berlin, aims to act as such a guide. His text is organized around the concept of reproducible research.
According to Roger Peng, a biostatistician at the Bloomberg School of Public Health at Johns Hopkins University, research in the computational sciences is reproducible provided that "the data and [computer] code used to make a finding are available and they are sufficient for an independent researcher to recreate the finding." Although this definition addresses the reliability of scientific research, it also has implications for the practical conduct of research: working reproducibly makes for better habits, easier collaboration, and a more efficient day-to-day workflow.
The three basic components of reproducible research in data analysis are: a programming environment in which to carry out the analysis, a system for storing and accessing data and code, and a means of communicating the results.
For each of these three components one can choose between several tools, but Gandrud emphasizes a set of options that has become overwhelmingly popular in statistics and the natural and social sciences.
For a programming environment he recommends R, along with the Integrated Development Environment made by RStudio. R is a statistical programming language that was built to resemble Bell Labs' S, but unlike S it is entirely free software and is extensible through contributed packages available from convenient web-based repositories such as CRAN.
To store and access data Gandrud distinctly favors the version control system known as Git along with the popular web service GitHub. Git and GitHub require a greater learning investment than standard cloud storage services such as Dropbox, but they offer considerably greater flexibility and power. The reproducible research novice may begin with Dropbox but will find herself moving to Git after a few months, especially since it has been integrated conveniently into RStudio. In addition, new contributed R packages permit applications written in R to be run directly from their GitHub repositories.
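As a sketch of that last point (the package names and functions are real, but the repository `USER/my-analysis` is hypothetical), one can install a package or launch an application straight from its GitHub repository from within an R session:

```r
# Sketch only: assumes the devtools and shiny packages are installed,
# and that a (hypothetical) repository USER/my-analysis exists on GitHub.
library(devtools)
install_github("USER/my-analysis")   # install an R package from its GitHub repo

library(shiny)
runGitHub("my-analysis", "USER")     # download and run a Shiny app from GitHub
```

In this way a colleague can run one's analysis application without ever downloading or unpacking the repository by hand.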
For communication of results Gandrud recommends the remarkable R package known as knitr, which implements the paradigm of literate programming — the weaving of text and computer code into a single source file that is then processed ("knit") into any of several chosen formats: HTML, PDF, or even a Word document. The source file is a data report that contains all of the code necessary to read in the data, run statistical routines, produce graphs, etc., but also includes one's analysis and interpretation. The knitted result is a complete and polished report, whereas the source file is a record that permits a colleague to see exactly how one arrived at one's numerical results, and to reproduce those results exactly.
The literate programming paradigm is remarkably convenient. If you find a mistake in your data, you do not have to re-run all of your routines, make all-new graphs and insert the results into your report. You simply edit the data file and push a button to re-knit your report. Reports can be distributed as hard-copies or published immediately to the web. They need not be static documents, either. With appropriate formatting they can be knit into interactive documents that can be read by anyone with a computer that runs R, or that can be hosted on the web by special “Shiny” servers that run R on the back end.
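As a sketch of what such a source file looks like (the file `measurements.csv` and its `height` column are hypothetical), an R Markdown document interleaves prose with executable R chunks:

````
---
title: "Height Measurements: A Hypothetical Report"
output: html_document
---

We read in the data and summarize the height variable:

```{r}
dat <- read.csv("measurements.csv")   # hypothetical data file
summary(dat$height)
```

The histogram below is regenerated from the data every time the
report is knit:

```{r}
hist(dat$height, main = "Distribution of Heights")
```
````

Pressing the Knit button in RStudio runs every chunk and assembles the finished report; if the data file is later corrected, re-knitting regenerates every number and figure automatically.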
Remarkably, all of the above tools are completely free. (R is free by design. RStudio exists in free and commercial versions. GitHub charges a monthly fee only for private repositories. Shiny servers come in free and commercial versions as well.) Students therefore have access to all of them. Nowadays when I advise an honors thesis or summer research project involving data analysis, I make sure that the student reviews R, and we always begin with a Git and GitHub tutorial.
I entered the world of data computing with great reluctance, and only because I saw the benefits of teaching statistics with R, even for elementary students. The first edition of Reproducible Research with R and RStudio was an invaluable companion in the early stages of my journey, and I trust that the second edition will be equally useful to aspiring data analysts.
Addendum: The following additional resources may be of interest to fellow instructors:
Homer White is Professor of Mathematics at Georgetown College, in Kentucky. A typical Jack-of-All-Trades small-college mathematician, he enjoys the teaching of statistics at all levels, statistical consultation, and even institutional research. His interests and occasional forays into research in the history of mathematics include the geometrical works of Leonhard Euler and the mathematics of classical India.
Introducing Reproducible Research
What Is Reproducible Research?
Why Should Research Be Reproducible?
Who Should Read This Book?
The Tools of Reproducible Research
Why Use R, knitr/rmarkdown, and RStudio for Reproducible Research?
Getting Started with Reproducible Research
The Big Picture: A Workflow for Reproducible Research
Practical Tips for Reproducible Research
Getting Started with R, RStudio, and knitr/rmarkdown
Using R: the Basics
Using knitr and rmarkdown: the Basics
Getting Started with File Management
File Paths and Naming Conventions
Organizing Your Research Project
Setting Directories as RStudio Projects
R File Manipulation Commands
Unix-Like Shell Commands for File Management
File Navigation in RStudio
Data Gathering and Storage
Storing, Collaborating, Accessing Files, and Versioning
Saving Data in Reproducible Formats
Storing Your Files in the Cloud: Dropbox
Storing Your Files in the Cloud: GitHub
RStudio and GitHub
Gathering Data with R
Organize Your Data Gathering: Makefiles
Importing Locally Stored Data Sets
Importing Data Sets from the Internet
Advanced Automatic Data Gathering: Web Scraping
Preparing Data for Analysis
Cleaning Data for Merging
Merging Data Sets
Analysis and Results
Statistical Modelling and knitr
Incorporating Analyses into the Markup
Dynamically Including Modular Analysis Files
Reproducibly Random: set.seed
Computationally Intensive Analyses
Showing Results with Tables
Basic knitr Syntax for Tables
Creating Tables from Supported Class R Objects
Showing Results with Figures
Including Non-Knitted Graphics
Basic knitr/rmarkdown Figure Options
Knitting R’s Default Graphics
Including ggplot2 Graphics
Presenting with knitr/LaTeX
Bibliographies with BibTeX
Presentations with LaTeX Beamer
Large knitr/LaTeX Documents: Theses, Books, and Batch Reports
Planning Large Documents
Large Documents with Traditional LaTeX
knitr and Large Documents
Child Documents in a Different Markup Language
Creating Batch Reports
Presenting on the Web and Other Formats with R Markdown
Further Customizability with rmarkdown
Slideshows with Markdown, rmarkdown, and HTML
Publishing HTML Documents Created by R Markdown
Citing Reproducible Research
Licensing Your Reproducible Research
Sharing Your Code in Packages
Project Development: Public or Private?
Is it Possible to Completely Future Proof Your Research?