Assessment: The Burden of a Name
Bernard L. Madison
University of Arkansas and Mathematical Association of America
A ballad by Johnny Cash, A Boy Named Sue, chronicles a boy's growing up and the hardships that ensue because of his name. Fighting in bars and taverns and withstanding the insults of detractors seemingly give the boy character and strength as he becomes a man. However, the ballad ends with the main character's avowal to name his own son "anything but Sue!" An analogous ballad might someday be written about assessment.
Thrust onto the US higher education scene in the final two decades of the twentieth century, assessment continues to suffer mightily from misunderstanding, much of it because of the burden of its name, with its multiple meanings [2] and interpretations. The other weighty contributor to this misunderstanding is assessment's cadre of early promoters - administrators, governing boards, accrediting agencies, and legislatures. Most college faculty believed that assessment was, as the name implied, only some kind of comprehensive evaluation. They knew, as did every farmer, that weighing one's produce did not hasten its readiness for market. They also knew that the motivations of the promoters of assessment were anchored in evaluation and accountability. So the lines were drawn, and assessment has struggled against these misunderstandings to gain both respectability and usefulness in US higher education.
Struggling With the Name
Efforts have been made to modify the assessment rubric to better convey meanings and purposes. We distinguished between summative assessment and formative assessment to try to clarify why assessment is done. We resorted to assessment cycles to imply that assessment was a continuous process rather than a discrete event. We added prepositional phrases to clarify the purpose when we talked of assessment of student learning and assessment in the service of learning. We tried to distinguish kinds of assessment by referring to classroom assessment, large-scale assessment, authentic assessment, and alternative assessment. Grant Wiggins authored a book [4] with a title that attempts to delineate the purpose of assessment, Educative Assessment. But the noun, and hence the center of attention, is assessment, and this word continues to convey misleading meanings and images in spite of modifying words or phrases. Choosing another noun will probably not help, though name changes are the order of the day in the "dot-com" world. Sometimes non-meaning is the key in these new name searches; many of us remember - from crossword puzzles, if nothing else - the search for Exxon to replace Esso. A nonsense rubric might be the solution for assessment, but my thesis here is that we already know what assessment should be and really is, and we just need to acknowledge that. In these few pages I will elaborate on this thesis.
Some History
Comprehensive assessment of individual student learning in an entire academic program is not new to US higher education. In the early years, end-of-program examinations, some using external examiners, were the norm for college degrees. Expanding enrollments in the twentieth century made large-scale assessment of learning in academic programs less practical. Consequently, most assessment of student learning was bound up in course grades, mainly using what we now call classroom summative assessment. Most course grades depended on a one-dimensional evaluation process - periodic in-class examinations - and some comprehensive final examinations over individual courses. Many current collegiate faculty grew up with this assessment scheme and found it reasonably satisfactory, so there was no groundswell for change from the faculty. Yet, through use in some academic programs, faculty acknowledged the value of comprehensive formative assessment using multi-dimensional measures of learning. The programs that attracted such assessment most often were the terminal graduate degree programs, typically doctoral programs.
Assessment under Other Guises
Consider how doctoral students and new doctorates are assessed, both for individual learning and for program evaluation and improvement. Often, course grades are not determinative; most grades are A's with a few B's. Doctoral students are judged by their participation in seminars, where they listen, discuss, and present. They are almost constantly in conversations with graduate faculty and potential thesis directors, being judged on how well they understand and being coached in areas where they need help. They are tested by faculty committees in presentations ranging from thesis design to oral examinations. They sit for written examinations over a range of courses and subject areas. Eventually they participate in a significant capstone experience, writing and defending a dissertation. The assessment of doctoral students' achievement continues beyond the doctoral degree, to their employment successes (e.g., achieving tenure) and their publishing records. Most discipline faculties have no doubt about the quality of their doctorates; elaborate assessment processes tell them. And with each doctoral student, the process of educating new doctoral students may be refined and improved. Thus the assessment can be formative - an assessment cycle. Perhaps this is one reason why US graduate education is indisputably the best in the world.
So, if discipline faculties use these comprehensive schemes for their doctoral students, why not use analogs to assess their undergraduates' learning in general education or study in depth? The major reason is that undergraduate students far outnumber doctoral students, and assessing the learning of a sample of students for the purpose of program improvement has not been widely adopted. Yet most faculty do practice formative assessment, albeit unknowingly and casually, in their classrooms.
Even in the outmoded and discredited lecture method that most of us still use, formative assessment is often very much present. As we lecture, we survey faces, looking for signs of understanding or puzzlement, and we adjust accordingly. Some of us sprinkle our lectures with generic questions such as "Do you see?" or "Is that clear?" I can remember professors of mine who inserted such a question randomly and frequently, to the point that counting the occurrences of the question in a lecture became an amusement. Oftentimes, though, these questions represented a subliminal obligation and were not asked to elicit an answer. They were, however, recognition that a part of teaching is gauging understanding and responding with changes in instructional methods. Perceived lack of time prevented a more substantial judgment of learning and more substantial analyses of how learning could be improved. And, of course, we were dealing with only one course, limiting our assessment accordingly. Furthermore, we knew, if we really thought about it, that feedback from expressions or head nodding was unreliable. Students, too, developed reflexive habits of behavior, like my professors who asked, "Do you see?"
Responses to the Assessment Movement
Even though collegiate faculty through their actions showed strong belief in assessment - even formative assessment - the way assessment came to most faculties created resistance or, at best, ritualistic compliance. Some faculties at some schools, e.g., Alverno College [1], had adopted assessment as an integral part of their instructional programs and were thriving. Yet most models of assessment seemed not to adapt to larger, more diverse institutions, so many administrations tried to build assessment from the top down - or the bottom up, depending on how one views the hierarchy in higher education institutions. Some created, for goodness' sake, vice presidents for assessment, giving it status parallel to fund-raising, computing technology, and fiscal affairs. This added fuel to the faculty belief that assessment belonged to others and that it was a waste of resources.
The assessment movement swept aside this faculty reluctance, and assessment programs for varying and often misunderstood purposes were mandated by governing boards, legislatures, and accrediting agencies. The American Association for Higher Education (AAHE) began holding annual Assessment Forums. I attended several of those in the early 1990s to try to learn about assessment. I had been appointed Chair of the Subcommittee on Assessment of the Committee on the Undergraduate Program in Mathematics of the Mathematical Association of America (MAA), and we were charged with advising MAA on assessment. Eventually, we did write guidelines [3] for mathematics departments to follow in setting up an assessment cycle for the purpose of program improvement and, hence, more student learning. We explained how one should set learning goals, devise and implement instructional strategies, measure learning, and then start all over again, using what had been learned from previous cycles. We were getting closer to the true meaning of assessment, but we were not there yet. Our assessment cycles were still described as add-ons to instructional programs.
My AAHE Forum Experiences
My experience at the AAHE Assessment Forums helped greatly with my understanding of assessment. Some of the presentations amazed me - among the most amazing were those describing graduate curricula on assessment in higher education. I saw little involvement by the disciplinary faculties. What I saw was a huge cottage industry on assessment being formed and thriving external to the very core activity to which it was presumably directed: teaching and learning in colleges. I was struck by the repetition in the presentations and, at the same time, puzzled by seemingly different meanings of assessment. I was struck by my familiarity with many of the ideas and techniques in assessment programs. I was struck by the use of language - words took on meanings different from how they were understood in my discipline of mathematics. The plenary speakers were inspiring, articulate, and memorable, clearly having thought deeply about something I believed I had just discovered, but also being very knowledgeable about higher education. The whole experience was perplexing, but I wasn't sure why. I had not yet mapped the assessment they were talking about onto my experience.
What Assessment Really Is - Or Ought To Be
I slowly began to realize that I had met assessment before, many times, but under different rubrics. Assessment was really a part of teaching and learning. It was just probing further along the lines of my professors' "Do you see?" It was finding complex answers to that question and going further to find ways to increase understanding. It was not something foreign or external to the teaching and learning process; it was an integral part. Therefore, its name was misleading, and imposing it from outside the teaching and learning process was at best misguided.
Assessment is neither new nor exotic. It is and has been a part of every faculty member's work. All that is new is going beyond one class and one professor to ask the question "Do you see?" over a broader range of material and to probe further into how learning can be improved. So why do we need another word - one that conjures up visions of tax bills - to describe a part of teaching? Assessment should be done to enhance teaching, increase learning, and improve programs because it is a part of those processes. Its identification as something external to the process of teaching and learning has greatly hindered implementation of the new and productive ideas of the assessment movement. So let's think of a better name and a better way to have disciplinary faculties claim ownership of something that is already theirs. Perhaps a name that suggests this would be helpful, such as responsive teaching. As the Johnny Cash ballad ends, "anything but assessment!"
References
[1] Alverno College Faculty (1979), Assessments at Alverno College. Milwaukee, WI: Alverno Publications.
[2] Ewell, P. T. (2001), An Emerging Scholarship: A Brief History of Assessment (Draft). Boulder, CO: NCHEMS.
[3] Mathematical Association of America (1995), Assessment of Student Learning for Improving the Undergraduate Major in Mathematics. Reprinted in Gold, B., S. Keith, and W. Marion (eds.), Assessment Practices in Undergraduate Mathematics, 279-284, MAA Notes #49 (1999), Washington, DC: MAA.
[4] Wiggins, G. P. (1998), Educative Assessment. San Francisco: Jossey-Bass.
Washington, DC
November 2001