Assessment of Undergraduate Mathematics

Bernard L. Madison
UNIVERSITY OF ARKANSAS

We are between the proverbial rock (externally designed assessment) and the hard place (internally designed assessment). We must either turn over the assessment business to those who don't teach mathematics or spend a great deal of time and effort to discover how to do something about assessment ourselves. My hope lies in the community of mathematicians, but there will be no three-line proofs that solve this problem. --Hank Frandsen, University of Tennessee, 1991

This paper is a result of an electronic mail discussion conducted in April and May, 1991. The fourteen participants in the electronic mail focus group consisted of twelve mathematics or mathematics education faculty members and two persons from outside mathematics. Four of the mathematics faculty members now hold university administrative positions outside the mathematics department. Experience in assessment ranged from near zero to several years of university-wide coordination of assessment activities. The institutions of the participants were at various stages of assessment, ranging from having established full-scale comprehensive programs to only thinking about and discussing assessment generally. Some participants felt that their institutions would not institute assessment programs unless they were mandated, and those who had comprehensive programs were operating under governing board mandates. The institutions represented included small private colleges, regional comprehensive public universities, and large public research institutions.

This paper is guided more by the content of the electronic mail discussion than by the current literature, theory, and practice in student outcomes assessment. The references at the end of the paper will lead the reader to other discussions.

Evolution of Assessment

Traditional assessment in U.S. higher education splits into two modes, more or less historically. In early U.S. education, comprehensive end-of-program examinations were common, essentially the rule. External examiners of students, following the European tradition, were used by most institutions. As both geographic separation and numbers of students increased, external examiners and comprehensive examinations became less common. Individual course examinations and grades became the common method of assessment. Programs were evaluated on reputation, curriculum, and available resources, under the assumption that these had a high correlation with student learning.

As school enrollments, particularly collegiate enrollments, increased in the decades following World War II, the diversity of student preparation and post-secondary programs created a need for assessment of minimum competencies. Standardized tests, mostly with multiple-choice answers, emerged as the tools for measuring lower-order skills for purposes of college admissions, course placements, or diagnostic teaching decisions. This situation was probably more a consequence of the available tools than of the effectiveness of the assessment using these tools. The focus was clearly on individual student assessment for making decisions about an individual student. Program evaluation and faculty evaluation were not central issues in minimum competencies assessments. Program evaluation continued using reputation, curriculum, available resources, and scattered information about successes in student preparation. The competencies of entering students, however, became an ingredient in program evaluation, but "value added" measures were not common.

The public has the right to know what it is getting for its expenditure of tax resources; the public has a right to know and understand the quality of undergraduate education that young people receive from publicly funded colleges and universities. They have a right to know that their resources are being wisely invested and committed. --John Ashcroft [1], Governor of Missouri, 1986

The second wave of assessment, coming mostly within the last decade, was principally based on using assessment of student learning as a means of program and faculty evaluation. In some cases, allocation of resources is affected by the results of assessments of student learning. Presumably, more student learning implies more resources in these cases. The major impetus for this type of assessment was accountability of institutions and programs. Much of the assessment has been mandated by governing units, from state governments down to university administrations. The Education Commission of the States reports that more than three-fourths of the states have a student assessment effort planned or in place. The American Council on Education reports that a majority of colleges and universities are in some stage of developing student assessment programs.

The future of assessment in U.S. higher education seems sure to bring new and different methods. Hardly anyone is content with our current ability to measure success in achieving educational goals. This is especially true in the liberal arts, where, as Grant Wiggins [9] has said, assessment needs to be built upon the distinction between wisdom and knowledge. (Wiggins' paper is reprinted as Appendix A to this report.) Mathematics faculty in higher education have little experience in setting the educational goals that are necessary for the current assessment movement. Our curricula are designed from courses, and educational goals, frequently unarticulated, are derived from courses and not the other way around. Either we have to come to grips with adapting our circumstances to the current assessment movement or we have to design new assessment tools that will be more adaptable to what we believe to be our (as yet unarticulated) educational goals in undergraduate mathematics.

The Current Assessment Movement

A simple and favorable description of the new assessment mode is the attempt by an institution or a program to answer three questions periodically.
One focus group participant expressed these questions as follows: "I describe [our assessment] program by asking three questions: (1) What does your department intend to do for its students? (2) What evidence is there that you do it well? (3) What could be done by you (and the Dean) to help you do it better?"

Of course, that is simpler sounding than it really is. The questions are enormously difficult to answer and, in the general setting, encompass all of educational theory and practice. However, for an individual program or collegiate major, partial answers are tractable and far better than no answers at all.

The nature of the evidence presented to answer the second question is what separates student outcomes assessment from the traditional program reviews. Traditional program reviews have focused on questions of how the local curriculum compares to a national or common curriculum, or whether the library resources are adequate. Student learning outcomes are a new ingredient in program reviews, although data on competencies of entering students have been considered in the past. Some now see assessment (testing) of student learning as an integral part of program evaluation. One participant wrote, "I bind student assessment with program evaluation. I'm not much of a believer in regularly giving students tests and then publishing the results of those tests without giving interpretations and conclusions based on those tests."

Where assessment measures have a regulatory, budgetary, or even public relations purpose, they are likely to develop ... disproportionate influence. --Ernst Benjamin [4], AAUP, 1990

Student outcomes assessment in higher education is conducted at several levels: basic skills, core curriculum, end-of-program, and in postgraduate workplaces. Assessment of basic skills is an extension of activity over the past two or three decades and is concerned with determining minimum competency for entry or as a benchmark for value-added testing. Assessment of learning in the core curriculum, or in general education, frequently occurs at the end of the first two college years and is sometimes a hurdle for entry into advanced courses. When it is such a hurdle, the tests are often labeled "rising junior" tests. End-of-program assessment concerns both a comprehensive assessment of undergraduate learning and an assessment of learning in a major program--that is, of study in depth. The relationship between study in depth and learning in other parts of the undergraduate program is also subject to assessment at this stage. Alumni follow-up through surveys aims at determining how undergraduate learning has enabled the graduate to succeed in the workplace. A comprehensive assessment program usually includes assessment of all four types, some attitude analysis, and integration of the results.

Assessment in Mathematics

Assessment of learning in undergraduate mathematics involves all four assessment areas: basic skills, core curriculum, major, and workplace. Assessment of the undergraduate mathematics major is of primary interest to most mathematics faculty members, to members of our focus group, and to the MAA. Nevertheless, separation of assessment of the major from other types of assessment is difficult, and many of the philosophical and operational issues are the same across all types of assessment of undergraduate student learning.

Assessment of learning in the mathematics major has generally been equated with assessment in the individual courses that make up the major. Some departments have had comprehensive examinations, capstone courses, or senior projects, but most departments do not attempt any comprehensive assessment of learning in the major. The Graduate Record Examination (GRE) has provided the only significant standardized assessment tool of learning in the mathematics major, and it is focused on preparation for graduate study. Within the past five years, the Educational Testing Service (ETS) has developed a Major Field Achievement Test in mathematics. Use of this test is not widespread, and many faculty members are skeptical about the independent value of a score on such a test. Little evidence of use was discovered during the focus group discussion. Said one participant, "We are not enamored of such standardized tests." Another added, "Everything I read says that these do not work. Each of our institutions is sufficiently different that there are problems with the packages."

The student outcomes assessment movement did not originate in mathematics departments, and among mathematicians there is a mixture of skepticism and optimism about the effort. Most are involved because of external mandates. In the words of one participant, "In all probability, we would not be in the assessment business had the state not mandated it." But some see the need and look forward to progress. "The purposes and motivation for assessment of undergraduate education and of the mathematics major ought to be to see whether we are making good use of the time and talents of our students and faculty to provide an education as defined in Grant Wiggins' paper [9]."
The Main Players

The players in assessment in undergraduate mathematics are students, faculty members, college and university administrators, institutional governing boards, state coordinating boards, state legislatures, and the public. Already, most states have mandated some type of assessment in higher education, and the others are likely to follow suit soon. By and large, mathematics faculty members are skeptical of this movement and see little or no benefit resulting. Students are largely unaware of the external movement and less than highly motivated to participate with best efforts.

As of now, the main players seem to be university or college administrators responding to mandates from state governments. "The main players in assessment on my campus right now are administrators above the college level," said one participant. "However, the main players should be the faculty." Most agree with that sentiment and further believe that unless faculty members get involved at the planning level, there will be trouble down the road and little benefit will accrue.

The roles of various administrators differ from program to program. In one state institution where end-of-program assessment is mandated, the dean is "... in charge of making sure that something grows out of the various forms of assessments that we conduct. In some cases that has meant new positions for a department, in others it has meant summer funds for renovating the curriculum, in others it has meant partial support for a department's attempts to improve the advising of majors." In other cases, it has meant saying "no" to requests that ran counter to indications of what needed to be done in the light of assessment.

Recognizing differences in institutions and allowing for flexibility is a key to the success of mandated assessment. As one participant reported, "Under the benevolent eye of the Higher Education Coordinating Board, the public institutions of higher education cooperate in developing assessment systems, but do not march to exactly the same drummer." Another commented, "Faculty attitude toward assessment here is not as negative as one might think. The state has paid for our assessment efforts and has allowed us to design a very flexible program that lets us do many of the things we want to do anyway."

There are indications that student involvement is a problem, especially when assessment of students' learning has no direct consequences for their receiving degrees. "One of the most difficult issues that we have faced ... is getting our students to participate, and to give the assessment tests, portfolios, etc., their best shot." When tests are not a graduation requirement, motivation is an issue. Some colleges require undergraduates to achieve a certain score on the GRE in order to graduate with a mathematics major, and some report requiring comprehensive examinations, capstone courses, or senior projects.

Establishing Goals for Assessment

If our testing encourages smug or thoughtless mastery--and it does--we undermine the liberal arts. --Grant Wiggins, 1990

There is general agreement that the most difficult step in establishing a program of assessment is determining educational goals. Repeatedly, throughout the focus group discussion, the MAA was urged to publish samples of goals statements. There is no need to identify departments; in fact, the feeling is that identification tends to canonize a few departments and discourage beginners.
Considerable discussion within the mathematics community over the past five years has at least laid the groundwork for formulating these goals statements. The question to answer is "What do we want our students to learn?" The report [2] of the joint MAA and Association of American Colleges (AAC) project on study in depth addressed this issue as follows:

Many would argue that goals for study in depth can be effective only if supported by a plan for assessment that persuasively relates the work on which students are graded to the objectives of their education. Assessment in courses and of the major as a whole should be aligned with appropriate objectives, not just with the technical details of solving equations or doing proofs. Many specific objectives can flow from the broad goals of study in depth, including solving open-ended problems; communicating mathematics effectively; close reading of technically-based material; productive techniques for contributing to group efforts; recognizing and expressing mathematical ideas embedded in other contexts. Open-ended goals require open-ended assessment mechanisms; although difficult to use and interpret, such devices yield valuable insight into how students think.

Relatively few mathematics departments now require a formal summative evaluation of each student's major. The few that do often use the Graduate Record Examination (or an undergraduate counterpart) as an objective test, together with a local requirement for a paper, project, or presentation on some special topic. Some institutions, occasionally pressured by mandates from on high, are developing innovative means of assessment based on portfolios, outside examiners, or undergraduate research projects. Here's one example that blends a capstone course with a senior evaluation:

The Senior Evaluation has two major components to be completed during the fall and spring semesters of the senior year. During the fall semester the students are required to read twelve carefully selected articles and to write summaries of ten of them. (Faculty-written summaries of two articles are provided as examples.) This work comprises half the grade on the senior evaluation. During the fall semester each student chooses one article as a topic for presentation at a seminar. During the spring semester the department arranges a seminar whose initial talks are presented by members of the department as samples for the students. At subsequent meetings, the students present their talks. Participation in the seminar comprises the other half of the grade for the Senior Evaluation.

Because of the considerable variety of goals of an undergraduate mathematics major, it is widely acknowledged that ordinary paper-and-pencil tests cannot by themselves constitute a valid assessment of the major. Although some important skills and knowledge can be measured by such tests, other objectives (e.g., oral and written communication; contributions to team work) require other methods. Some departments are beginning to explore portfolio systems in which a student submits samples of a variety of work to represent just what he or she is capable of. A portfolio system allows students the chance to put forth their best work, rather than judging them primarily on areas of weakness.

The recommendations from the National Council of Teachers of Mathematics for evaluation and assessment of school mathematics convey much wisdom that is applicable to college mathematics. Assessment must be aligned with goals of instruction. If one wants to promote higher order thinking and habits of mind suitable for effective problem solving, then these are the things that should be tested. Moreover, assessment should be an integral part of the process of instruction: it should arise in large measure out of learning environments in which the instructor can observe how students think as well as whether they can find right answers. Assessment of undergraduate majors should be aligned with broad goals of the major: tests should stress what is most important, not just what is easiest to test.

Obviously, one set of goals will not suffice for all programs, even for all undergraduate mathematics majors. Many have different tracks and emphases. Furthermore, mathematics faculty members are unaccustomed to viewing the undergraduate curriculum from a set of goals. That is not the way the curriculum has developed over the lifetimes of the current faculty, at least. Courses and topics are viewed as inherently belonging to an undergraduate program, and many have never questioned the purpose of certain topics or courses. One participant lamented this circumstance, saying, "It is hard for outsiders (or even insiders, come to think of it) to believe that there are parts of our curriculum whose purpose we do not exactly know." Others agreed: "I am not convinced that our curriculum is designed from goals to courses." In fact, the evidence is that it is the reverse, from courses to goals, if goals are even articulated. Another added, "I have been troubled for many years with the lack of systematic curriculum evaluation in our department. I once gathered the minutes of the committee meetings for a ten-year period looking for the rationale for our curricular decisions. Usually changes were due to young faculty trying to reform our curriculum to conform with one that they had seen elsewhere."

One participant outlined the general task by saying, "We need to identify what we want to teach--including the non-content ideas such as mathematical maturity, problem-solving ability, and ability to write and understand proofs." Some who have begun the process of articulating goals have encountered new problems. One reported, "My feeling is that the department here, after two years of discussion, is more polarized over philosophy than at any previous time. Mathematics faculty are not equipped to articulate assessment goals. The MAA has to step forward." Others pointed to other problems, observing, "We have too little knowledge about how well our service courses prepare students," and "Any type of student achievement assessment needs to take student characteristics into account, especially when it is used to compare programs."

Some who had worked on goals articulation had suggestions. One such suggestion was to look at all the final examinations a student takes over the degree program as a start: "I have often told beginning teachers who are trying to state goals to use the final examinations as a first approximation. Of course, we have many goals which do not appear on these exams. The lamentable but frequently asked question, 'Will this be on the exam?' is particularly nettlesome when the answer is 'no.' We all know this happens frequently." Another participant offered general goals: "One of our major goals is preparing our students to go out and continue learning on their own, whether in an academic setting or not. Another is enabling them to organize and communicate what they do know."

A recent AAUP Committee report on mandated assessment was not optimistic about being able to measure the goals in a student's major field of study. The following statement on assessment in a major field of study is taken from that report [3, p. 38]:
Most faculty members agree on the importance of assessing systematically a student's competence in the major, as shown by the multiplicity of forms of assessment that many departments employ. Yet even in this disciplinary context the range of possible student options after graduation makes it unlikely that an externally-mandated assessment instrument would do anything more than gauge the lowest common vocational denominator. The major is properly regarded as a vehicle for deepening the student's independent research and study skills, and thus standardized assessment of achievement in the major field raises precisely the same objections as it does in general education.

Learning for its own end--for the purpose of developing breadth, intellectual rigor, and habits of independent inquiry--is still central to the educational enterprise; it is also one of the least measurable of activities. Whereas professional curricula are already shaped by external agencies, such as professional accrediting bodies and licensing boards, the liberal arts by contrast are far more vulnerable to intrusive mandates from other quarters; for example, the governors' report professes to find evidence of program decline "particularly in the humanities." To be sure, even in the liberal arts a student's accomplishments in the major can be measured with relative objectivity by admission procedures at the graduate and professional level that include GRE scores as one of the bases for judgment. But a student majoring in English may wish to pursue a career in editing, publishing, journalism, or arts administration (to name only a few); a political science major may have in mind a career in state or local government or in the State Department. Either of them may have chosen his or her major simply out of curiosity, or perhaps out of a desire to be a well-educated citizen before going on to law school or taking over the family business.

For these reasons we suggest that the success of a program in the major field of study is best evaluated not by an additional layer of state-imposed assessment but by placement and career satisfaction of the student as he or she enters the world of work. Whereas imposed assessment measurements will at best--and rightly--attract faculty cynicism and at worst lead to "teaching to the test," no responsible faculty member will ignore the kinds of informed evaluation of a program available through a candid interchange with a graduating senior or recent graduate.

Worries About Assessment

Assessment is not just the average score of your majors on a multiple choice test. --David Lutzer, College of William and Mary, 1991

Two questions are confronted on campuses where mandated assessment is a fact or being considered.
These questions prompted a bit of debate among the participants. Faculty members are suspicious. How will the assessment data be used? Will honest assessments be rewarded or punished? Will assessments be consistent across departments? What is in it for faculty members except more work? The rewards system needs changing, according to one participant who suggested, "To motivate the faculty to take the lead we should redefine 'research' to include scholarly activities related to the teaching of mathematics and reward that research (when it is of high quality) as well as we reward the publication of new theorems."

One supporter of assessment countered the skeptics: "I cannot imagine a department that would say that 'we have educational goals, but we aren't really concerned about whether our students achieve them' ... Assessment should have consequences. A properly done assessment can be used to focus both departmental and administration attention on what must be done to improve a program. Self-knowledge on the part of a department should lead to self-improvement. Whether or not it does is the true test of a local administration's commitment to assessment."

Use of Assessment Data

Many faculty members believe that assessment is a prelude to cutting programs and that those who mandate assessment and publication of the results will not be content unless programs are cut. This is believed to be the only way that legislators and administrators can prove to a skeptical public that they are doing their jobs. It is comparable to the belief that the public will believe faculty evaluations are useful only if some faculty members are dismissed as a consequence. "Colleagues express concerns about how the administration will use such assessments. The fear is that they will be misused. Somebody is always ready to dictate what programs are of value and which don't deserve to be supported."

Generally, participants with less experience in assessment voiced more negative feelings:

The typical faculty member does not believe that there are any benefits to externally mandated assessments. The problem is that the purposes of assessment are not clear and different purposes are confused.

One has to overcome the faculty perception that assessment is a no-win situation. I have serious reservations about assessment of entire colleges and universities since it is external, and thus invites abuse, misuse of information, and use of very shallow measuring instruments. In addition, it is very difficult, if not impossible, to compare entire colleges and schools with a simple assessment procedure.

It is clear that any assessment statements must address at least two different kinds of mathematics departments--departments at undergraduate colleges and departments at universities. Changes are going to be easier to make at colleges, simply because of the size of the departments involved and the fact that all of the instruction is undergraduate.

These comments were offset by participants whose experience had generated support for assessment. One reasoned, "When the state began its mandated assessment program, the university responded with a locally controlled program of assessment which was largely ignored by faculty in most departments (including mathematics). We have learned very little about our curriculum or instruction from the decade of data gathering which has taken place .... Nevertheless, I am a believer in the need for some assessment. The article by Grant Wiggins [9] was most interesting and inspirational, and I would look there for where we should be headed."

In addition to general suspicions, there are some specific worries about assessment, especially externally mandated assessment. These include having assessment shape the curriculum, infringement on academic freedom, compromising the balance between academic integrity and democratic direction, and the costs of assessment programs.

Shaping the Curriculum

The most obvious and alarming worry is movement toward what has been called the positivist approach to the curriculum--aiming to teach only those things that are objectively assessable, or "teaching only to the test." In this way, assessment limits the curriculum negatively. One participant was certain: "The danger with assessment is exactly that it shapes the curriculum and the way in which courses are taught." Another added, "I fear that if assessment is mandated from the outside, the mathematics faculty will not be the ones to set the goals." Yet another braved the wrath of his colleagues, saying, "I support assessment shaping course content under the right circumstances. (I have colleagues that would string me up for that statement.) However, I believe the goals have to be set first."

In 1990 Ernst Benjamin [4] wrote in the Chronicle of Higher Education that "State mandated assessment is dangerous not because evaluation is inappropriate, but because the requirement that universities demonstrate their quality in politically acceptable or popular terms--unmediated by the expertise of an accrediting body or the systematic procedures of a governing board--deprives universities of the safeguards that insure a balance between academic expertise and democratic direction." One participant disagreed with Benjamin's thesis: "I see this as a red herring. If a department sets its own goals, evaluates its own success at achieving those goals, and uses that evaluation as a basis for proposing new strategies to reach the goals (and perhaps proposes changes in those goals), what could be the possible harm? The disagreement over this question seems to be growing out of a too narrow definition of assessment ..."

Academic Freedom

How might assessment, budgetary incentives, and public scrutiny threaten academic freedom and faculty control over the curriculum? The AAUP worries about threats to academic freedom and has stated a position on "Mandated Assessment of Educational Outcomes" in the November/December 1990 issue of Academe [3], which recommends "protections for the role of the faculty and for reasonable institutional autonomy" [3, p. 40].
There was some disagreement among the participants as to whether assessment, externally mandated or not, was a serious threat to academic freedom. If mathematics department faculty members controlled the assessment--both implementation and use of results--there would be little threat. Many faculty members are already accustomed to departmental syllabi and even department-wide examinations. One said, "There is a threat to academic freedom if and only if assessment is hopelessly misused." Another countered, "Assessment and the publication of the results can be a major threat to the balance between academic freedom and faculty control over the curriculum. This is particularly true when the assessment is simplistic and the results are used to make major budgetary decisions. For example, if our test scores don't 'improve' each time, we may lose a substantial amount of funding. This leads to a search for ways to improve scores rather than ways to educate our students."

The new wrinkle in the current assessment movement--using student performance to evaluate programs and teaching--raised serious concerns among the participants. Said one, "Implementing the use of student performances to evaluate teaching is very complicated. The true test of teaching is how much the students learn and how they can use what they have learned." What are the risks in using student performance to judge teaching practice, teacher performance, curriculum, programs, institutions, and systems? The risks are many, indeed, according to one participant: "Look at the situation in athletics where performance is judged by student performance. The recruitment scandals are just the tip of the iceberg, and the coaches are never asked to teach 'service' courses!" If student performances are used in program and teaching evaluation, then there will be the same incentives to "cheat" as are present in intercollegiate athletics. Some elaborate control mechanism like that imposed by the intercollegiate athletic associations may become necessary.

Costs

The costs of an effective assessment program are generally believed to be large, both in faculty time and in materials. However, those perceptions about materials costs may be inflated. At one institution, over the last four years, fifteen arts and sciences departments carried out reportedly effective assessment programs for an average annual cost of $3,000 each. That covered the cost of two external reviewers per department, senior and alumni surveys, special testing, and copying of portfolio materials. Nevertheless, participants voiced their concern about large costs. "The costs of a truly effective assessment program are probably enormous. In these days of financial shortages I doubt if we would be able to bear the costs of even a modestly effective program at campus levels without a reduction in the quality or quantity of education we provide." Concerns about the size of the costs are not the only worries. Where the money will come from also worries some people. An official of one state government was quoted as saying that the assessment program could be funded by the savings from eliminating inadequate programs.

Examples of Programs

What a liberal education is about--and what assessment must be about--is learning the standards of rational inquiry and knowledge production. --Grant Wiggins, 1990

According to the discussions of this focus group, there are not very many exemplary programs of assessment now, especially not programs of assessment in the undergraduate mathematics major. When asked if there are exemplary programs, one participant with particular insight responded cautiously: "One hopes that there are. But, as a resident of a campus which has acquired a reputation for leadership in assessment of general education due to ten years of effort by some pretty capable and devoted people, I would conclude that there are no such examples. If our program has gained as much credibility as it seems to, with as little impact as I can find, there must be very little high-quality competition."

Nevertheless, there are some programs that appear successful. Two participants described the programs at their institutions as follows.
In the department we have chosen to look at our program objectives and to develop means of assessing each of them. The result will be to require each senior to develop a portfolio containing (1) a report of their senior project, which might be any of a variety of things, from a presentation, paper, or course project to a teaching unit; (2) a collection of graded problems and proofs from upper-division courses; (3) a completed attitude instrument; and (4) scores from a standardized examination.

We have what is still called a comprehensive examination requirement in the major, which has evolved from a written examination covering all the major courses taken into a requirement for independent study (with a faculty advisor) and oral and written presentation of the results. Most of us think of this as more of a capstone experience, tying together a lot of loose threads and moving the student to the next stage of independence (usually), but it does give us a lot of non-quantified information about the strengths and weaknesses of our students as mathematical thinkers.

Portfolios of students' work are becoming a common part of assessment programs. These are generally viewed as snapshots of the students' work over a given program. The collection of a student's final examination papers would diverge a bit from the snapshot idea, but would provide a cross-sectional view of some goals of the program and how the student met those goals. However, not all goals are tested on examinations.

Course-embedded assessment is receiving increased attention. One major research institution, although reporting no significant activity in comprehensive assessment, reported institution-supported research projects in course-embedded assessment. Examples of areas of inquiry include acquisition of critical thinking skills as they relate to major concepts in a course, students' abilities to integrate content across topics, and the effectiveness of student learning experiences such as collaborative learning, use of technology, and internship placements.

What Should the MAA Do?

What should the MAA do? Publish some guidelines! This request came up over and over again. There was a similar refrain in the late 1970s about placement and diagnostic testing. The MAA responded then through the Committee on Placement Examinations, now the Committee on Testing, but never really gave the recipe that many people craved. That is probably how this new assessment effort will play out also. The situations at various institutions are so different that only very general guidelines can be recommended to apply to all. Further, participants believed that the MAA should state clearly that assessment must be a departmentally based process of goal statement, followed by evaluation of success in achieving goals, followed by a combined departmental/administration effort to improve vis-a-vis those goals.

Members want to know what kinds of tools and approaches are being used effectively. They want a thoughtful discussion of what assessment can mean to a department, and they especially want samples of learning goals statements (maybe for specific mathematics courses) from departments with assessment programs. One experienced assessment coordinator stated:

After two years of experience in helping departments with assessment, I think there are three things that would be useful: (1) sample statements of student learning goals--the most difficult thing to produce--or an MAA statement on student learning objectives in undergraduate mathematics; (2) some methods for determining student achievement; and (3) a rationale for assessment that would persuade MAA members that evaluating undergraduate education by focusing on what is happening to the students (specifically, what students are learning in relationship to what we would like them to learn) is a good idea.

Others want the MAA to serve the role that Ernst Benjamin [4] calls the mediation of the expertise of an accrediting agency or the systematization of a governing board's procedures in controlling the use of assessment. Several of the requests pointed in this direction.
Finally, one participant brought the problem home, saying, "A number of our committee members have talked about the MAA doing various grand and wonderful things with respect to assessment. From where I sit, we are that MAA committee. At this point there is certainly no consensus about what the terminology and problems are, let alone what the solutions can be."

Focus Group Participants

Ernst Benjamin, American Association of University Professors (aaupeb@gwuvm.gwu.edu)
Donald Bushaw, Washington State University (bushaw@wsuvm1.bitnet)
Stephen Comer, The Citadel (comers@citadel.bitnet)
Henry Frandsen, University of Tennessee (pa24948@utkvm1.utk.edu)
John Harvey, University of Wisconsin (harvey@math.wisc.edu)
David Lutzer, College of William and Mary (djlutz@wmvm1.bitnet)
Bernard Madison, Moderator, University of Arkansas (bmadison@uafsysb.uark.edu)
Richard Millman, California State University, San Marcos (richard_millman@csusm.edu)
Ned Moomaw, University of Virginia (wm@virginia.edu)
Charles Peltier, Saint Mary's College (cpeltier@bach.helios.nd.edu)
Billy Rhoades, Indiana University (rhoades@iubacs.edu)
Thomas Romberg, University of Wisconsin (romberg@vms.macc.wisc.edu)
James Stepp, University of Houston (math19j@jetson.uh.edu)
Jane Swafford, Illinois State University (joswaffo@ilstu.bitnet)

References