David M. Bressoud March, 2009
The voting to which I refer is electronic voting, via clickers, in class. They are usually used in large enrollment classes as a means of monitoring student understanding of basic concepts, of actively engaging students in thinking about the material, and of facilitating small group interaction and peer instruction. Multiple choice questions are posed. Students respond instantaneously and anonymously (relative to their peers), and their responses are displayed in a histogram. This article will survey why one might use clickers, how they can be used, and what we know about their effectiveness.
Most of the evaluation of clicker effectiveness has been based on surveys of student attitudes [1,2,7,8]. Except for complaints about cost, these are overwhelmingly positive. Caldwell at West Virginia University does have evidence that they improve attendance and retention. Crouch and Mazur at Harvard have some comparative information illustrating their effectiveness, but the only large-scale controlled study of which I am aware was conducted at Ohio State in courses on Electricity and Magnetism. It showed that, when clickers are tied to peer instruction, there is a significant improvement in conceptual understanding, an improvement that was considerably more pronounced for female students than for male students. With clickers, male and female students performed at the same level. Without them, males did significantly better.
I also include an annotated bibliography of some of the more interesting recent articles on the use of clickers.
The February issue of Notices contains an article on their use in a calculus class at Northwestern. The first 2009 issue of Science contained two articles, one by Eric Mazur, the popularizer of this technology to facilitate peer instruction, on their use in physics classes at Harvard, the other on their use in an introductory genetics course at the University of Colorado, Boulder.
The Mazur article is a good introduction to the why and how of clickers. His piece in Science explains how he was led to begin using them. Though he is renowned as a lecturer, he was frustrated with the pure lecture format. This culminated in the moment when a student asked him, "How should I answer these questions? According to what you taught me or according to the way I usually think about these things?" That drove home for him the distinction between being able to take what has been presented in class and use it to answer questions, and having a working understanding of the ideas that lie behind what has been presented. The contrast is often much sharper in physics than in mathematics because students usually come to physics with preconceived, pre-Newtonian conceptions of physical interaction. But we do see this in mathematics. One simple example is the student who reverts to an assumption of linearity when at a loss as to how to proceed with the analysis of a function with which he (or she) is uncomfortable. Another is the student who ties positive slope to increasing quantity, regardless of the context, and therefore has a lot of trouble recognizing that the slope of the derivative of f informs us of the concavity of f, rather than whether the function f itself is increasing or decreasing.
The first reason to use clickers in class is to provide quick feedback on student understanding of such basic concepts. It is interesting, yet not too surprising, that in the survey of student attitudes toward clickers in the Northwestern calculus class, the advantage that was agreed on by the largest fraction of students (84.8%) was that they "[help] the teacher become more aware of student difficulties with the subject matter." Following the lead of the physicists who developed the Force Concept Inventory to reveal student misconceptions about physical mechanics, various Calculus Concept Inventories have been developed, identifying the basic components of a conceptual understanding of calculus. These can be used with clickers to probe student understanding and to identify weak spots in the conceptual framework that may need to be addressed before moving on to the next piece of mathematics.
The second reason is that using clickers forces students out of the purely passive role of transcribing the instructor's lecture into their notebooks. Professor Mazur identifies this problem when he talks about his experience providing lecture notes for his students. Their reaction was to complain that he was "lecturing straight from [his] lecture notes." This may sound strange until you realize that for most students, transcribing lecture notes is the only thing they know to do in lecture classes. When I surveyed students in a large lecture section of Calculus I at Penn State in Fall 1993 and asked what they do in class, 95% said that they take notes. Only 43% of the students added that they listened, paid attention, watched, tried to understand, or otherwise described an active engagement with the lecture. A representative response was from the student who described what happens in class as, "The prof gives notes and does a few examples. Most students pay attention and try to take notes, but he moves quickly. I usually end up behind and start doodling." The implication is that if one cannot write fast enough to transcribe the lecture, then there is no other way to use that class time productively.
Clickers engage all of the students and enable them, at least occasionally, to think about the mathematics that has just been presented. The problem is not that students don't want to think about the mathematics. It is that they do not know how. Clicker questions help structure these reflective moments and force students to think about a question enough to be able to make a choice among a variety of answers. In the Northwestern study, 70% of the students agreed with the statement, "I have to think more in classes with clickers than in traditional lecture classes."
The real power of clickers, however, lies in their potential for fostering student interaction. I saw this very dramatically at Macalester. For several years, we ran a large quantitative reasoning class. At least, with 120 students, it was large by Macalester standards. We enlisted faculty from many different disciplines who showed how basic quantitative methods were used in their disciplines. These were polished PowerPoint presentations with interesting material, but student response was disappointing. Those students who sat in the front half of the room were engaged, ready to ask questions, clearly interested. The other half were not. After a couple of years of trying to tweak the topics and presentations, we bought a set of thirty clickers, organized the students at tables, and incorporated clicker questions into each lecture. Students had to discuss the question with their tablemates and decide on a common answer from each table. The whole room would come to life at these clicker breaks, and the interest level shot up as students waited for the denouement.
The calculus classes at Northwestern used a similar format: A question would be presented, students would talk about it and decide on their group's answer, and then use the clicker to register the group's response.
At other places such as Harvard, the University of Colorado, and Ohio State, clickers are used in a more sophisticated fashion that requires each student to have his or her own clicker. In the Electricity and Magnetism class at Ohio State, they run three-question sequences: The first question is straightforward. Most students answer it correctly. The second question requires an application of this concept in an unfamiliar context. This is more challenging. Each student thinks about it and commits to an answer. Students then talk about the problem in small groups. They each answer the second question a second time, and the instructor deals with any remaining points of confusion. Then there is a third question, again an application of the same concept but in a different context. Again students answer, then discuss, then answer again. Complex as this sounds, less than 20% of class time was spent on these clicker questions. The key is not to give difficult problems, but rather questions that probe student ability to recognize how a basic concept plays out in an unfamiliar context.
The University of Colorado used a less intensive version that began with a challenging question followed by discussion, a second attempt at an answer, and then another question that applied the same concept in a different context. For the second question, there was no follow-up discussion among the students. The University of Colorado group also collected strong evidence of the importance of peer interaction in the process of student learning.
The Harvard, University of Colorado, and Ohio State methodologies are based on the fact that students do learn from each other. Research at the University of Colorado confirms that this was not just because students learned the right answer from the smartest student in that group. There is a group dynamic that can make the collective smarter than any of the individual members.
There may be some psychological benefit from clickers regardless of how they are used. Students find themselves more connected to the class. But the only significant improvements in student learning that have been documented have tied the use of clickers to peer instruction. Mazur and others have argued quite forcefully that if the technology does not affect the pedagogy, then it has no lasting effect. The real impact comes not from the technology itself but from the fact that it is an effective means of structuring small group interactions and holding these groups accountable, even within a large class format. One controlled experiment by Lasry suggests that colored cards can be just as effective. My own experience in classes of 30 students is that this is small enough that circulating among the groups, listening to the student interaction, and then calling on selected individuals from different groups is more effective than clickers, partly because it gives me much more flexibility in deciding when to engage students in small group discussions and how to focus these discussions.
As in every other application of technology that I have encountered, clickers are a tool that can be used well or poorly. Using them well is not easy. The article by Ding et al. suggests how much thought can and should be put into clicker questions. But as faculty are pressed to teach ever larger sections, clickers combined with peer instruction can make a significant difference to student learning.
 Bode, M., D. Drane, Y. B.-D. Kolikant, and M. Schuller. 2009. A Clicker Approach to Teaching Calculus. Notices of the AMS. 56:253–256. www.ams.org/notices/200902/rtx090200253p.pdf
This article explains how clickers were used in calculus classes at Northwestern, gives examples of sample questions, and reports the results of a survey of student attitudes gathered over three years from 348 students in six classes. To highlight a few of their findings: 79.0% of the students agreed with the statement, "Using the clickers helps me enjoy this class more than I enjoy a traditional lecture class"; 79.3% agreed that "Discussing clicker questions with other students in the class helps me to understand better the subject matter"; 80.5% agreed that "Team members were actively involved in solving the question"; and 79.2% agreed that "Collaborative work among group members contributed to a better quality solution to the problems."
 Boyle, J. T. and D. J. Nicol. Using classroom communication systems to support interaction and discussion in large class settings. www.ph.utexas.edu/~ctalk/bulletin/glasgow2.pdf
A description of the use of clickers and peer instruction in a mechanical engineering class at the University of Strathclyde. Evaluation was via a survey of student attitudes: 74% believed that their understanding was better than it would have been in traditional lectures, 75% said that the clickers helped them to understand the concepts, 91% reported that they had to think more than in a traditional class, and 95% said that they were more actively engaged than in a traditional class.
 Bressoud, D. M. 1994. Student Attitudes in First Semester Calculus. MAA Focus. 14:6–7. www.macalester.edu/~bressoud/pub/StudentAttitudes/StudentAttitudes.pdf
This presents the results of an open-ended survey of student attitudes and perceptions in a large (about 350 students) section of mainstream Calculus I at Penn State in the Fall semester of 1993.
 Caldwell, J. E. 2007. Clickers in the Large Classroom: Current Research and Best-Practice Tips. CBE–Life Sciences Education. 6:9–20. www.lifescied.org/cgi/reprint/6/1/9.pdf
A discussion of the use of clickers and peer instruction in biology and college trigonometry at West Virginia University. For two sections of trigonometry taught by the same instructor, one with clickers and the other without, grades were significantly better in the clicker section. Comparing sections of introductory Biology with and without clickers, clickers noticeably improved attendance (from 60% to 90%) and retention up to the final exam (from 90% to 95%). There is also a useful discussion of the drawbacks of clickers, including loss of lecture time, technical problems, and cost.
 Crouch, C. H. and E. Mazur. 2001. Peer Instruction: Ten years of experience and results. American Journal of Physics. 69:970–977. web.mit.edu/jbelcher/www/TEALref/Crouch_Mazur.pdf
This is primarily an explanation of how clickers and peer instruction have been implemented in physics classes at Harvard. Primary evidence for their effectiveness is a steady improvement in scores on both conceptual and quantitative questions over the ten years that the program had been in effect. There also was a comparison of a single quantitative question between a traditionally taught class in 1999 and a class taught in 2000 using peer instruction. Students in the peer instruction class did considerably better.
 Ding, L., N. W. Reay, A. Lee, and L. Bao. 2009 (in press). Are we asking the right questions? Validating clicker question sequences through student interviews. American Journal of Physics.
A discussion of the analysis and validation of clicker questions.
 Draper, S. W. and M. I. Brown. 2004. Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning. 20:81–94. www.psy.gla.ac.uk/~steve/ilig/papers/draperbrown.pdf
An account of the use of clickers and peer instruction at the University of Glasgow in a variety of classes: psychology, computer science, medicine, dental science, veterinary science, biology, philosophy, and statistics. Summative evaluation was based on student perception of whether they had benefited from the use of clickers. Most students either "definitely benefited" or considered that there was a net benefit. Only in philosophy did the number of students who were neutral or negative come close to 50%.
 Elliott, C. 2003. Using a personal response system in Economics teaching. International Review of Economics Education. 1:80–86. www.economicsnetwork.ac.uk/iree/i1/elliott.htm
An account of the use of clickers in a microeconomics class at the University of Lancaster, though apparently without peer instruction through class discussion of answers. Evaluation was conducted by a survey of student attitudes. Students were positive about the experience.
 Lasry, N. 2008. Clickers or Flashcards: Is There Really a Difference? The Physics Teacher. 46:242–244. scitation.aip.org/journals/doc/PHTEAH-ft/vol_46/iss_4/242_1.html
The author compared two sections of a mechanics class, one of which used clickers and the other of which used colored cards to respond to the "clicker questions." Both groups used the model of peer instruction. There was no significant difference in their improvement on concept questions.
 Mazur, E. 2009. Farewell, Lecture? Science. 2 January. 323:50–51. sciencemag.org/cgi/content/short/323/5910/50
A general description of why clickers help and how they can be used to greatest effect by the Harvard physics professor who originated the clicker-based pedagogy.
 Smith, M. K., W. B. Wood, W. K. Adams, C. Wieman, J. K. Knight, N. Guild, and T. T. Su. 2009. Why Peer Discussion Improves Student Performance on In-Class Concept Questions. Science. 2 January. 323:122–124. www.sciencemag.org/cgi/content/full/323/5910/122
A study of clickers in an introductory genetics class at the University of Colorado, Boulder that shows that group discussion does improve conceptual understanding. Students were asked to respond to a question, given an opportunity to discuss it in small groups, asked a second time to answer it, and then given a parallel question, all before getting any feedback from the instructor. Of those who answered incorrectly the first time, 78% answered the parallel question correctly. The conclusion is that students learn from peer discussion, and it is an important component of the effective use of clickers.
 Reay, N. W., P. Li, and L. Bao. 2008. Testing a new voting machine question methodology. American Journal of Physics. 76:171–178. link.aip.org/link/?AJPIAS/76/171/1
This is the only large-scale comparative study of the effectiveness of clickers and peer instruction of which I am aware, conducted at Ohio State in classes in Electricity and Magnetism. For each of three consecutive semesters, one section was taught with clickers and one without. Students were given common pre- and post-tests of concept questions. In the first two quarters, less than 20% of the time in the clicker sections was spent on clicker questions. In the third quarter, this rose to 50%. In the first two quarters, students in the clicker sections showed a considerably greater gain in understanding of the concept questions. The gain was much less pronounced in the third quarter. This may be because there was a greater discrepancy in their pre-test scores: Students in the clicker section in the third quarter did much better on the pre-test than their counterparts in the non-clicker section. Or, the less significant advantage may be because clickers were used much more extensively than in other quarters, suggesting that there may be an optimal amount of time to be spent on clicker questions. For all three quarters, being in the clicker section had a much more pronounced benefit for female students than for male students.
 Project MathQuest at Carroll College (http://mathquest.carroll.edu/ )Derek Bruff writes: "This is an NSF-funded project aimed at developing clicker question banks for linear algebra and differential equations. They’ve written and tested hundreds of clicker questions for those courses and made their question banks available on their Web site. They’ve also linked to other question banks and other useful resources on their resources page: http://mathquest.carroll.edu/resources.html. They have the most complete set of links to papers and Web sites regarding clickers in mathematics I know of. I’ve conducted two minicourses on teaching with clickers with the Carroll College faculty at the Joint Meetings in the last couple of years."
 GoodQuestions Project at Cornell University (http://www.math.cornell.edu/~GoodQuestions/index.html)
This project, headed by Maria Terrell, was aimed at developing a question bank of calculus clicker questions. Their question bank is available on their Web site.
 Bruff, D. O. 2009. Teaching with Classroom Response Systems. Jossey-Bass, San Francisco.
The author interviewed faculty in a variety of disciplines (not just mathematics) and put together this book featuring example clicker questions and activities from those interviews and from the literature. You can find out more about his work with clickers on his Web site, http://derekbruff.com/site/?page_id=6. He maintains a blog on teaching with clickers (http://derekbruff.com/teachingwithcrs) as well as an extensive bibliography (http://www.vanderbilt.edu/cft/resources/teaching_resources/technology/crs_biblio.htm).