In August 1990 at the Joint Mathematics Meetings in Columbus, Ohio, the Subcommittee on Assessment of the Mathematical Association of America's (MAA) Committee on the Undergraduate Program in Mathematics (CUPM) held its organizational meeting. I was subcommittee chair and, like my colleagues who were members, knew little about the topic we were to address. The impetus for assessment of student learning came from outside our discipline, from accrediting agencies and governing boards, and even the vocabulary was alien to most of our mathematics community.
We considered ourselves challenging as teachers and as student evaluators. We used high standards and rigorous tests in our courses. What else could assessment be? We were suspicious of evaluating student learning through group work, student-compiled portfolios, and opinion surveys. And we were uncertain about evaluating programs and curricula using data about student learning gathered in these unfamiliar ways. Although many of us believed that testing stimulated learning, we were not prepared to integrate more complex assessment schemes into our courses, curricula, and other departmental activities.
We began learning about assessment. In its narrowest sense, our charge was to advise the MAA membership on the assessment of student learning in the mathematics major for the purpose of improving programs. Sorting out distinctions among testing, student evaluations, program evaluations, assessment, and other recurring words and phrases was challenging, though committee members accustomed to precision of meaning readily agreed on them. We were to discover that assessment of student learning in a multi-course program was familiar to us, but not part of most of our departments' practices in undergraduate programs. In fact, departments and faculties had confronted a similar scenario in implementing a placement scheme for entering freshman students. The most comprehensive placement schemes used a variety of data about students' capabilities in pre-college mathematics to place them in the appropriate college course. However, many placement schemes were influenced by the practices of traditional testing in undergraduate courses and relied on a single measurement tool: a test, often multiple-choice.
Another place where we had used a multifaceted scheme for assessment of student learning was in our graduate programs, particularly the doctoral programs. Individual course grades are much less critical and meaningful in a doctoral program, where assessment of learning relies heavily on comprehensive examinations, interviews, presentations, and an unarticulated portfolio of interaction between the student and the graduate faculty. And, finally, there is the major capstone experience, the dissertation.
The subcommittee decided to draft a document that would outline what a program of assessment of student learning should be, namely a cycle of setting learning goals, designing instructional strategies, determining assessment methods, gathering data, and using the results to improve the major. Because of a lack of research-based information about how students learn mathematics, we decided not to try to address what specific tools measure what aspects of learning.
By 1993 we had a draft document circulating in the mathematics community asking for feedback. The draft presented a rather simple cyclical process with lists of options for learning goals, instructional strategies, and assessment tools. Some were disappointed that the draft did not address the more complex learning issues, while others failed to find the off-the-shelf scheme they thought they wanted. This search for an off-the-shelf product was similar to the circumstances in the 1970s and 1980s with placement schemes and, as then, reflected the reluctance of many mathematics faculty members to take ownership of these processes.
In spite of the suspicion and indifference of many in our mathematics community, the discipline of mathematics was further along than most collegiate disciplines in addressing assessment of student learning. Through attendance at and participation in national conferences on assessment, we soon learned that we were not alone in our confusion and that much of the rhetoric about assessment was fuzzy, redundant, and overly complicated.
The draft document received generally positive reviews and elicited few suggestions for improvement. Departmental faculties faced with a mandate to implement an assessment program were generally pleased to have a simple skeletal blueprint. Feedback did improve the document, which was published by the MAA in the June 1995 issue of Focus [1].
After publication of the report to the MAA membership in 1995, the subcommittee had one unfinished piece of business, compiling and distributing descriptions of specific assessment program experiences by departmental faculties. We had held one contributed paper session at the 1994 Joint Mathematics Meetings and another was scheduled in January 1996, to be organized by subcommittee members Barbara Faires and Bill Marion.
As a result of the contributed paper sessions, we concluded that the experience within departments was limited, but by 1996, there were encouraging reports. At the 1996 meetings Bill Marion, Bonnie Gold, and Sandra Keith agreed to undertake the compilation the subcommittee had planned and to address a broader range of assessment issues in a volume aimed for publication by the MAA. This volume is the result.
The articles on assessing the major in the volume make at least three points. First, the experiences related here show that attitudes have changed over the past seven years about various instructional and assessment strategies. Group work, comprehensive exams, focus groups, capstone courses, surveys, and portfolios are now widely discussed and used. This change has not been due to the assessment movement alone. Reform in teaching and learning, most notably in calculus over the past decade, has promoted these non-traditional teaching and learning methods, and, as a result, has called for new assessment strategies. The confluence of pressures for change has made these experiences more common and less alien.
Second, the articles show that our experience is still too limited to allow documentation of significant changes in learning. As the articles indicate, the signs are encouraging, but not yet conclusive.
Third, the articles make it clear that developing an assessment program requires intense faculty involvement at the local level and a strong commitment by individual faculty leaders. The variety of experiences described in the section on assessing the major spans several non-traditional methodologies: portfolios, capstone courses, comprehensive examinations, focus groups, and surveys of graduates and employers. The variety enriches the volume considerably.
Some readers of this volume may be disappointed that, once again, they are not led to a recipe or an off-the-shelf program of assessment. Only after one investigates the complexity of establishing a promising assessment program will one fully appreciate the work of those who relate their experiences here and those who have compiled and edited this volume. This volume contributes significantly to an ongoing process of improving teaching and learning in collegiate mathematics. No doubt, someday it will be valuable only as a historical document, as our experiences grow and we understand better how students learn and how to measure that learning more effectively.
But, for now, this is a valuable volume recounting well-thought-out experiences involving teachers and students engaged in learning together. The volume is probably just in time as well: too late for the experiences to be mistaken for off-the-shelf recipes, but in time to share ideas that make assessment programs better.
Reference
[1] Committee on the Undergraduate Program in Mathematics (CUPM). "Assessment of Student Learning for Improving the Undergraduate Major in Mathematics," Focus: The Newsletter of the Mathematical Association of America, 15 (3), June 1995, pp. 24-28. Reprinted on p. 313 of this volume.