Curriculum Guide to Majors in the Mathematical Sciences

Assessment of the Mathematics Major

Do you want to do everything you reasonably can to ensure that the students you graduate with majors in the mathematical sciences have learned the central concepts and acquired the essential skills you are offering them?  Virtually every faculty member will reply, “Of course!”  This is the principal purpose of assessment.  And yet, assessment is viewed as an unwanted chore in almost all mathematics departments, done grudgingly and postponed for as long as possible.  The purpose of this short note is to show how assessment can yield significant program improvements without becoming a major burden.

First, an attitude change is needed.  Rather than doing assessment because it is imposed from the outside, approach it with one question in mind:  how can we improve the experience of our mathematics majors so that they better attain the goals of our program?

Goals and objectives

This leads immediately to the first step in good assessment:  articulate the goals of the program!  The goals should come from the department’s mission statement (if it has one) and be aligned with the institution’s mission.  Until the department agrees on what the goals are, you cannot coherently assess how well your students are achieving them.  Keep in mind as you do this that some goals are long-term; even as they graduate, students may still be in the early stages of attaining them.  Also, some goals – say, that students attain an understanding of the deep connections among mathematical subjects – may not be assessable without considerable effort.  But once you have articulated the overall program goals, you can usually find several more detailed learning objectives (or “learning outcomes” – the terminology varies a bit) that indicate whether students are achieving each major goal of the program.  To write a learning objective, finish the sentence “The student can….”  For example:  “The student can construct a correct proof by mathematical induction.”  The objectives should be measurable – not necessarily with a number, but with a clear concept of what it would mean to attain them, and to attain them partially.

This stage, articulating the program’s goals and objectives, will take several hours, perhaps even a department retreat, and it is extremely important that all full-time faculty involved in the program participate.  One immediate benefit of doing this is that the department often realizes how much agreement it has about its mission.  There may be a few goals that not everyone agrees on, or agrees are high priorities, but generally a department really does have a largely common purpose.

Revise courses

The next step is to determine where in its program the department is giving students the opportunity to work toward achieving its goals for them.  Since most complex goals are not attained when students first work on them, the department needs to look at how the program builds toward each goal.  Usually a serious analysis of students’ paths toward the goals leads the department to realize that there are gaps in the program.  Modifying or adding courses to fill these gaps is the first positive outcome of the assessment process.  More generally, when problems are found at any point during the assessment process, finding them is itself a result of assessment, and it is appropriate to make improvements at that time rather than waiting until the full assessment cycle has run its course.  Once your program is sufficiently coherent, it is time to start involving students in assessment.

Identify artifacts and develop rubrics

Involving students begins with identifying what artifacts should be collected from them to determine their degree of success in meeting a given objective.  At least one such artifact needs to be identified for each objective, though sometimes the same artifact may be used to assess multiple objectives.  The artifacts may come from activities that are already part of some courses.  (Often not all faculty teaching those courses use these activities; however, there has to be agreement that, if an activity is part of the assessment plan, all instructors will use it.)  Or the artifacts may be summative activities covering the student’s career at the institution, such as comprehensive examinations, exit interviews, or alumni surveys used toward the end of a student’s work or after graduation.  While eventually the whole department needs to agree on the assessment artifacts to be collected, the identification of appropriate artifacts for individual objectives can be split among small subcommittees charged with bringing proposals to the full department.

At the same time that artifacts are being chosen, drafts of rubrics should be developed for interpreting what is collected; otherwise, the department ends up with a huge mass of material and no idea what to make of it.  The point of developing rubrics is to establish a uniform standard that will enable those looking at student work to determine to what extent each student has met the specified learning outcome.
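To make this concrete, here is a minimal sketch of how a draft rubric might be written down, expressed in Python only for brevity; the objective, the four-point scale, and the descriptors are hypothetical illustrations rather than a prescribed standard, and a department would substitute its own wording.

    # Hypothetical draft rubric for the illustrative objective
    # "The student can write a clear, correct mathematical proof."
    # The scale and descriptors are examples only; each department writes its own.
    PROOF_WRITING_RUBRIC = {
        4: "Surpassed: proof is correct, complete, and clearly written",
        3: "Achieved: proof is correct and complete, with minor lapses in exposition",
        2: "Partially achieved: the main idea is sound, but the argument has gaps",
        1: "Not achieved: the argument is incorrect or missing essential steps",
    }

    def descriptor(level: int) -> str:
        """Return the agreed-upon description for a given rubric score."""
        return PROOF_WRITING_RUBRIC[level]

Whatever form the rubric takes, the essential feature is that the descriptors are written down in one shared place, so that everyone reading student work scores it against the same standard.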

Collect and analyze data

Of necessity, to assess student learning you need to collect samples of student work.  Most departments find that dividing the learning objectives into four to six clusters, and examining one cluster each year, makes the work manageable.  Students should be involved in the process and be told that the material is being gathered to improve the program for future students.  Generally a subcommittee of the department will do the initial analysis of the data that has been collected, using the agreed-upon rubrics, and then prepare a report for the department on what has been found.  Normally this would include how many students have achieved the given objective, how many have surpassed it, and how many have not achieved it (and perhaps how many have partially achieved it).  Ideally this report should be quite specific about any problems that emerge.  It should also summarize the areas in which the department is achieving its goals and the areas that require further work or discussion.
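As a sketch of what the subcommittee’s tallying step can amount to (the scores below are invented for illustration, and the four-point scale matches the hypothetical rubric sketched earlier), the summary is often little more than a frequency count of rubric levels for each objective:

    from collections import Counter

    # Hypothetical rubric scores for one learning objective, one score per
    # student artifact, on an illustrative 1-4 scale (4 = surpassed,
    # 3 = achieved, 2 = partially achieved, 1 = not achieved).
    scores = [4, 3, 3, 2, 4, 1, 3, 2, 3, 4]

    LABELS = {4: "surpassed", 3: "achieved", 2: "partially achieved", 1: "not achieved"}

    def summarize(scores):
        """Count how many students reached each level on this objective."""
        counts = Counter(LABELS[s] for s in scores)
        return {label: counts.get(label, 0) for label in LABELS.values()}

    print(summarize(scores), "out of", len(scores), "students")
    # {'surpassed': 3, 'achieved': 4, 'partially achieved': 2, 'not achieved': 1} out of 10 students

Keeping the tally this simple also makes it easy to repeat in later assessment cycles and to compare results over time.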

Report results to the department

Once the subcommittee has put together an initial report, the department needs to meet to discuss what is to be done.  Sometimes the data that has been collected is ambiguous:  it is not clear whether students are acquiring the desired skills and understandings.  Or a substantial proportion of students may not have achieved the objectives, but it is not clear why.  In such cases, additional work may need to be done before the department can decide on a remedy; data beyond what was called for in the initial assessment plan may need to be collected and analyzed.  Good assessment should lead to program improvements, but when problems become apparent, care needs to be taken to identify where they are coming from.

Identify changes to be made

When an objective is not being attained, there can be many causes.  Students may not have had enough opportunities to acquire the desired skills, the skills may not have been taught (or not taught well), or the expectations may be set too high for the students in the program.  The department thus needs to examine the program more closely to see where the problem is.  For example, if the first time students’ mathematical writing is examined critically is on the item used for assessment, they are not likely to do well.  When this happens, courses earlier in the program where students can begin work on the skill need to be identified.  On the other hand, sometimes the level of expected achievement is not reasonable for the students the institution attracts, and an adjustment of the department’s goals and objectives is called for.  Or the artifacts being collected may not be appropriate for the objectives they are supposed to measure, or may no longer be an appropriate activity because the course has changed for other reasons.  Often it is a combination of program issues and assessment practices that needs revision.  In any case, the purpose of assessment is to improve the program – that is, what students get out of their mathematics major.  So identifying the changes to be made – and implementing them – is the most important step of the assessment process, often referred to as “closing the loop.”

Assessment must be cyclic

Making changes based on assessment data is called “closing the loop” because assessment is not a “once-and-done” activity.  Every goal should be assessed at least once every four to six years.  When a department makes changes to improve student learning, it needs to check that those changes have worked.  In addition, changes in the outside culture, or in the students coming to an institution, mean that what was working five years earlier may no longer be working.  Perhaps a couple of crucial faculty members have retired or left, and students are not getting the same kinds of experiences through their studies.  So it is important that all the issues be re-examined periodically, and that the assessment process itself be revised as needed.  In addition, later iterations of the cycle provide longitudinal data that allow comparison of student outcomes over time.  Of course, if a problem becomes apparent at some point when the particular issue is not scheduled for assessment, it is appropriate to respond to the problem immediately rather than wait for its turn in the cycle – as long as sufficient care is taken to determine what the problem is and what possible remedies there might be!

To quote from the original CUPM assessment document, the questions to ask include:

First, about the learning strategies:

  • Are the current strategies effective?
  • What should be added to or subtracted from the strategies?
  • What changes in curriculum and instruction are needed?

Second, questions should be raised about the assessment methods:

  • Are the assessment methods effectively measuring the important learning of all students?
  • Are more or different methods needed?

Finally, before beginning the assessment cycle again, the assessment process itself should be reviewed:

  • Are the goals and objectives realistic, focused, and well-formulated?
  • Are the results documented so that the valid inferences are clear?
  • What changes in record-keeping will enhance the longitudinal aspects of the data?

[Figure: The Assessment Cycle]

Resources

The following publications are good sources for information about assessing the major:

A good introduction to the assessment process is the initial 1995 CUPM report, “Assessment of Student Learning for Improving the Undergraduate Major in Mathematics”, reprinted in the first volume mentioned below and available online.

B. Gold, W. Marion, and S. Keith, eds., Assessment Practices in Undergraduate Mathematics, MAA Notes #49, Washington, DC:  Mathematical Association of America, 1999.

L. Steen, ed., Supporting Assessment in Undergraduate Mathematics, Washington, DC:  Mathematical Association of America, 2006.