David M. Bressoud, July 2009
I am pleased to announce that the MAA anticipates funding from the National Science Foundation to support a five-year study, Characteristics of Successful Programs in College Calculus, that will investigate the instruction of mainstream Calculus I courses with the following goals:
The principal and co-principal investigators for this project are myself, Marilyn Carlson, Michael Pearson, and Chris Rasmussen. To accomplish our goals, we will need help from and the cooperation of the mathematical community in the United States.
This project will be conducted in two phases. Phase I will entail a large-scale web-based survey to identify factors that are correlated with success in Calculus I. Phase II will identify eight highly successful calculus programs at various types of institutions. During Fall term 2012, we will send teams to explore what is happening at these institutions and to conduct explanatory case studies for the purpose of understanding what it takes to create a successful program of calculus instruction. An important component of this project is the dissemination through publications, presentations, and workshops of the information gathered in both phases. The ultimate goal is to help math departments determine what use of resources will have the greatest impact on student performance and retention.
The Phase I survey will be conducted in the Fall term of 2010. Late in Spring 2010 we will use stratified random sampling to choose approximately 600 colleges and universities whose mainstream Calculus I classes will be surveyed. Both instructors and students will be asked to respond to the survey, at the beginning and at the end of the term. We have begun to identify basic demographic questions as well as the variables for which we will need to control. These include type of institution, socioeconomic status of students, reasons for studying Calculus, year in college, and prior mathematical experience. We have also begun to specify the factors that may promote student success or, in the other direction, contribute to a decision to leave mathematics. Potential factors include class size, use of technology, instructor’s status and number of years teaching, instructor’s pedagogical content knowledge base, means of instruction including the mix of lecture and peer instruction or other active learning, and type and frequency of assessment including how homework is handled. Most of these are broad categories that will need to be refined. The intent of the Phase I study is not just to discover what is effective. We also want to learn whether there are practices that are commonly believed to be helpful but in fact do not correlate with improved performance.
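The stratified sampling step can be illustrated with a short sketch. The strata, institution identifiers, and frame sizes below are all invented; only the target of roughly 600 institutions comes from the text.

```python
import random

# Hypothetical sampling frame of institutions, stratified by type.
# The stratum names and counts are illustrative, not from the study.
frame = {
    "two_year":      [f"TY-{i}" for i in range(900)],
    "undergraduate": [f"UG-{i}" for i in range(1000)],
    "comprehensive": [f"CU-{i}" for i in range(400)],
    "research":      [f"RU-{i}" for i in range(200)],
}

def stratified_sample(frame, total, seed=0):
    """Draw a proportional random sample from each stratum."""
    rng = random.Random(seed)
    n = sum(len(schools) for schools in frame.values())
    sample = {}
    for stratum, schools in frame.items():
        k = round(total * len(schools) / n)   # proportional allocation
        sample[stratum] = rng.sample(schools, k)
    return sample

sample = stratified_sample(frame, total=600)
print({stratum: len(chosen) for stratum, chosen in sample.items()})
```

Proportional allocation is only one option; a real design might oversample small strata (such as research universities) to guarantee enough cases in each institution type.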
This coming academic year will be spent deciding what we want to measure and how we will measure it, keeping in mind that this must be done within the constraints of web-based surveys of reasonable length. I ask for your help in identifying factors that we should measure and questions that will help us to measure them. A website for your suggestions is available at www.maa.org/Surveys/TakeSurvey.aspx?SurveyID=86L1672. We are engaged in a literature review as preparation, and we also request that you list on this website any references to studies that you believe may be relevant to our work.
Phase I details
The immediate goal of the Phase I study is the construction of an epidemiological model of success in Calculus I. That is to say, think of success in Calculus I as the disease and various factors such as background of the instructor, size of class, prior AP experience, or homework policy as possible contributors to contracting the disease. The point of such a model is not to be able to identify with certainty what causes the disease, but rather to tease out correlations that suggest factors that are likely to be contributing, and also to identify factors that might have been thought to be relevant, but in fact show little correlation with success. This is the first step in understanding relevant factors, laying the groundwork for successive studies that can examine the case for actual cause and effect.
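The correlational screening described above can be sketched on synthetic data: generate toy class records in which one factor genuinely affects the pass rate and another does not, then see which factor survives a correlation check. Every factor name, effect size, and data point here is invented for illustration.

```python
import random
from math import sqrt

rng = random.Random(1)

def make_class():
    small_class = rng.random() < 0.5       # candidate factor 1
    uses_clickers = rng.random() < 0.5     # candidate factor 2 (no effect in this toy model)
    # In this synthetic model the pass rate depends only on class size.
    pass_rate = 0.55 + (0.15 if small_class else 0.0) + rng.gauss(0, 0.05)
    return small_class, uses_clickers, pass_rate

data = [make_class() for _ in range(500)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

small, clickers, rate = zip(*data)
print("small class vs pass rate:", round(pearson(small, rate), 2))
print("clickers    vs pass rate:", round(pearson(clickers, rate), 2))
```

In this toy run the class-size factor shows a strong correlation and the clicker factor shows essentially none, which is exactly the kind of separation the Phase I model is meant to tease out, while leaving questions of actual cause and effect to later studies.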
In developing our survey instruments and building the resulting model, we will rely heavily on the assistance of Philip Sadler and Gerhard Sonnert of the Science Education Department at the Harvard-Smithsonian Center for Astrophysics. Sadler and Sonnert have conducted a large-scale study, Factors Influencing College Success in Science (FICSS), that built such a model to examine the aspects of a student’s experience in high school that influence success in science courses taken in college. Among their discoveries was that there is little or no transfer between scientific disciplines: an advanced high school course in physics does not help with college chemistry. The exception is mathematics: an advanced high school course in mathematics contributes to improved performance in all of the sciences, including college biology, where, controlling for all other factors, advanced work in high school mathematics improves biology scores by an average of 0.14 standard deviations. Sadler and Sonnert are currently conducting a similar study, Factors Influencing College Success in Mathematics (FICS-Math), to determine the factors in high school preparation that predict success in college Calculus I. The MAA study is intended to complement theirs by controlling for the factors that they have identified and focusing on what happens within the college class.
The first task is to define success for the purposes of this survey. Because this will be a very large-scale, web-based survey that must be conducted entirely within a single term (semester or quarter), the measure of success for a given class is very simple: (1) the percentage of students who were enrolled in the second week and finish the course with a grade of C or higher, and (2) the percentage of students who begin the course intending to continue on to a second course in calculus compared with the percentage who still have that intention at the end of Calculus I.
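A minimal sketch of computing these two class-level measures for a hypothetical roster; the record layout and the grades are invented:

```python
# Each record: (final grade, intended to continue at start, intends to continue at end)
roster = [
    ("A", True, True),
    ("B", True, True),
    ("C", True, False),
    ("D", False, False),
    ("W", True, False),   # withdrew after the second week
]

passing = {"A", "B", "C"}

# Measure 1: fraction enrolled in week two who finish with C or higher.
completion_rate = sum(grade in passing for grade, _, _ in roster) / len(roster)

# Measure 2: of those who began intending a second calculus course,
# the fraction who still intend to continue at the end of the term.
started = [r for r in roster if r[1]]
persistence_rate = sum(r[2] for r in started) / len(started)

print(f"C-or-higher rate: {completion_rate:.0%}")
print(f"persistence of intention: {persistence_rate:.0%}")
```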
The first year of Phase I will begin by deciding what factors we need to measure and how to ask the questions that will enable us to measure them. We will then assemble and pilot test the survey instruments. The factors to be measured will be determined through literature searches, discussions with our advisory panel, and other input including the request put out in this article. As a preliminary pass, we have identified the following possible factors:
Instructor attributes: professional status (Assistant, Associate, or Full Professor, part- or full-time adjunct instructor, graduate student), professional preparation to teach calculus, awareness of common difficulties and misconceptions, and attitude toward institution, students, and teaching.
Departmental focus: use of placement exams, explicit learning goals, use of standardized exams, professional development opportunities, monitoring of student retention.
Classroom variables: size of class, text and curriculum, use of breakout recitation or laboratory sections, incentives for attendance, format and mix of presentation (lecture, small group interaction, question/answer), use of calculators in class and/or on exams, frequency and nature of assessments (exams, quizzes, gateway exams, homework, reading reflections), and use of pedagogical strategies (including clickers) to increase active participation by students.
Out of class expectations: homework policy (including use of web-based tools to grade and/or provide feedback on homework), exploratory assignments including use of graphing calculators and/or computers, hours spent studying each week, encouragement of study groups, use of Learning Center, writing assignments, group projects.
In addition, we will control for student variables including intended major, previous math courses, results of placement tests, gender, year in college, parental education, socioeconomic status, SAT/ACT scores, and high school type and grades.
This coming spring, we will recruit colleges and universities, selected via a stratified random sample, to participate in the survey, which will be conducted in the Fall term of 2010 with an initial survey in the second week of class and a final survey near the end of the term.
Instructor obligations. Each instructor will be asked to go to an online survey at the start of the Fall term to enter basic data about the class: initial enrollment, type of institution, placement policy, textbook and other supporting curricular materials, class format (use of lecture, group work, in-class use of calculators or computers, clickers), use of smaller recitation or laboratory sections, use of group work, projects, and/or writing assignments, whether and how students will be held accountable for reading the textbook, homework policy, use of web-based instruments for homework or other assessments, and the number, nature, and relative weights of assessment instruments including gateway tests, midterms, homework, and projects and whether graphing calculators are allowed for all, some, or none of the test questions. This survey will also gather data on the nature of institutional support for calculus instruction and data on the instructor: status (Assistant, Associate, or Full Professor, full- or part-time adjunct instructor, graduate student), nature of any training or preparation given by the department for teaching this course, awareness of common student misconceptions, and attitudes toward students and teaching.
At the semester’s end, the instructor will go online again to report the total number of students who completed the course as well as the number of students who received each of the possible grades. In addition, students will be surveyed twice, at the start and end of the semester. Instructors will agree to count participation in these surveys toward the final course grade.
Student survey administration. In the second week of class, students will be directed to a web-based survey that will collect information on basic demographic data, prior high school math experiences, intended major or majors, and motivations for taking Calculus I. Instructors will be asked to count survey participation as part of the total grade (as part of one homework grade, for example). After each student completes the survey, an individualized certificate of participation will be generated that will then be turned in to the instructor for appropriate credit. This enables the instructor to monitor who has taken the survey without breaking student anonymity on the survey.
Immediately before the final exam, students will again access a web-based survey that will collect data on their experience in this calculus class, whether and how their intended major has changed, whether they now intend to continue to the next course in calculus, and their predicted grade for the course. While this is not a highly reliable predictor of the actual grade, it provides the even more useful information of student self-appraisal of performance in this course. The individualized certificate of participation will again be turned in to count as part of the final grade. By comparing numerical answers such as birth date and home zip code, it will be possible to match the first and second student surveys without breaking student anonymity.
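The matching step might be sketched as follows, pairing beginning- and end-of-term responses on the numerical answers alone; the record layout and field names are invented for the illustration.

```python
# Hypothetical pre- and post-term survey responses. No names are collected;
# birth date and home zip code serve as the matching key.
pre = [
    {"birth": "1990-03-14", "zip": "55105", "intends_calc2": True},
    {"birth": "1991-07-02", "zip": "85281", "intends_calc2": True},
]
post = [
    {"birth": "1991-07-02", "zip": "85281", "intends_calc2": False},
    {"birth": "1990-03-14", "zip": "55105", "intends_calc2": True},
]

def match(pre, post):
    """Pair pre/post responses sharing the same (birth date, zip) key."""
    index = {(r["birth"], r["zip"]): r for r in pre}
    pairs = []
    for r in post:
        first = index.get((r["birth"], r["zip"]))
        if first is not None:
            pairs.append((first, r))
    return pairs

pairs = match(pre, post)
changed = sum(a["intends_calc2"] != b["intends_calc2"] for a, b in pairs)
print(f"matched {len(pairs)} students; {changed} changed their intention")
```

A real instrument would also have to handle key collisions (two students sharing a birth date and zip code) and nonresponse on one of the two surveys; this sketch ignores both.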
Sadler and Tai have pioneered the use of the web for follow-up to survey investigations, having found that students quite willingly add their email addresses. We will draw on this expertise and e-mail students after the semester is over to gather information that students would prefer their professor not see, or that benefits from extra time to reflect. We can also resolve ambiguities in student answers by contacting them directly, substantially increasing the yield of valid surveys.
Phase II details
Phase II will begin in the Fall of 2011 with the selection of eight departments, chosen across a variety of types of institutions, for in-depth case study analysis and the training of teams to conduct this analysis. We will select schools that have demonstrated success in Calculus I. As we begin to narrow the pool of colleges and universities from which we will make our selection, it will be possible to collect further information on the schools that are potential sites for the case studies. Once the case studies have begun, it will be possible to employ a rich definition of success. Among the attributes of success that we will study are
In addition, we will be particularly attentive to programs that have instituted changes to the Calculus I program that have resulted in significant improvement in these measures of success.
There will be four teams of researchers, each of which will conduct case study analyses at two colleges during the Fall of 2012. The 2-year college team will be led by Sean Larsen, the undergraduate college team will be led by Eric Hsu, Chris Rasmussen will lead the team looking at comprehensive universities, and Marilyn Carlson will lead the team studying research universities. These studies will develop a theoretical framework that will help us to understand the factors under which students are likely to succeed in calculus. They will also serve to demonstrate to the mathematical community how these insights can be implemented in a variety of contexts.
The ultimate purpose of all of this activity is to improve the instruction of calculus by helping departments and individuals understand what works, why it works, how it can or should be done, and where additional effort or resources will produce the greatest benefit. It is part of the MAA's effort to provide faculty with the information and resources that they need to continue the improvement of undergraduate mathematics education.
Philip M. Sadler and Robert H. Tai, "The Two High School Pillars Supporting College Science," Science, vol. 317, 27 July 2007, pp. 457–458.