Have Our Students with Other Majors Learned the Skills They Need?

William O. Martin and Steven F. Bauman

University of Wisconsin-Madison and North Dakota State University

A large university begins by asking teachers in other disciplines not for a "wish list" but for a practical analysis of the mathematical knowledge required in their courses. Pretests for students reflect these expectations, and discussion of results encourages networking.

Background and Purpose

Quantitative assessment at Madison began for a most familiar reason: it, along with verbal assessment, was externally mandated by the Governor of Wisconsin and the Board of Regents. Amid increasing pressure for accountability in higher education ([3]), all UW system institutions were directed to develop programs to assess the quantitative and verbal capabilities of emerging juniors by 1991. Although the impetus and some support for the process of assessment were external, the implementation was left up to the individual institutions.

The University of Wisconsin-Madison has been using a novel assessment process since 1990 to determine whether emerging juniors have the quantitative skills needed for success in their chosen upper-division courses; a similar program began at North Dakota State University in 1995. A distinctive characteristic of both programs is their focus on faculty expectations and student capabilities across the campuses, rather than on specific mathematics or statistics courses.

The important undergraduate service role of most mathematics departments is illustrated by some enrollment data for the UW-Madison Department of Mathematics: in Fall 1994, the department had about 200 undergraduate majors and enrollments of about 6500 in courses at the level of linear algebra, differential equations, and below. Some of these students go on to major in a mathematical science; most are studying mathematics for majors in other departments. Mathematics faculty must perform a delicate balancing act as they design lower-division course work to meet the diverse expectations of "client faculties" across the campus.

Method

In a program of sampling from departments across the campus, we have gathered information about quantitative skills used in specific courses and the extent to which students can show these important skills at the start of the semester. Instructors play a key role in helping to design free-response tests reflecting capabilities expected of incoming students and essential for success in the course. Two important characteristics of this form of assessment are direct faculty involvement and close ties to student goals and backgrounds. We have found that the reflection, contacts, and dialogues promoted by this form of assessment are at least as important as the test results.

The purpose of assessment is to determine whether instructional goals, or expectations for student learning, are being met. We have found that explicit statements of goals, such as those in course descriptions, focus on subject content rather than on the capabilities that students will develop. Such statements either are closely tied to individual courses or are too broad and content-focused to guide assessment of student learning. Complicating the situation, we encounter diverse goals among both students and faculty. In response to these difficulties, we sample junior-level courses from a wide range of departments across the campus (e.g., Principles of Advertising, Biophysical Chemistry, and Circuit Analysis). Instructors are asked to identify the quantitative capabilities students will need to succeed in their course. With their help, we design a test of those skills that are essential for success in the course. We emphasize that the test should not reflect a "wish list" but the skills and knowledge that instructors realistically expect students to bring to the course.

By design, our tests reflect only material that faculty say students will use during the semester: content that the instructor does not plan to teach and assumes students already know. This task of "picking the instructor's brain" is not easy, but the attempt to identify specific, necessary capabilities, as opposed to a more general "wish list," is one of the most valuable parts of the assessment exercise.

A significant problem with assessment outside the context of a specific course is getting students (and faculty!) to participate seriously. We emphasize to participating faculty members the importance of the way they portray the test to students, and we ask them to make clear that the test covers skills they expect students to use in the course.

Each problem is graded on a five-point scale (from "completely correct" to "blank/irrelevant") and recorded on scantron sheets; information is also coded about the steps students took toward a solution (for example, by responding yes or no to statements such as "differentiated correctly" or "devised an appropriate representation"). Within a week, early in the term, the corrected test papers are returned to students along with solutions and references to textbooks that could be used for review.

Although we compute individual scores, our main focus is on the proportion of the class that could do each problem. Across a series of courses, patterns emerge in the results that are useful to departments and the institution; over time, test results provide insight into the service role of the calculus sequence. We also use university records to find the mathematics and statistics courses that students have taken. Without identifying individuals, we report this information, along with the assessment test scores, to course instructors.
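The tabulation behind these proportions is simple; the following sketch, in Python, shows how per-problem success rates might be computed from coded records of the kind described above. The field names, the example data, and the threshold for counting an answer as correct are illustrative assumptions, not the project's actual scoring procedure.

from collections import defaultdict

# Each record is one student's coded result on one problem.
# "score" uses the five-point scale (4 = completely correct ... 0 = blank/irrelevant).
# These example records and field names are hypothetical.
records = [
    {"problem": "label f, f', f''", "score": 4, "prior_course": "Calc III"},
    {"problem": "label f, f', f''", "score": 1, "prior_course": "Calc I"},
    {"problem": "estimate f'(4)",   "score": 4, "prior_course": "Calc III"},
    {"problem": "estimate f'(4)",   "score": 2, "prior_course": "Linear Algebra"},
]

def success_proportions(records, threshold=4):
    """Return the proportion of students scoring at or above `threshold` on each problem."""
    totals, successes = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["problem"]] += 1
        if r["score"] >= threshold:
            successes[r["problem"]] += 1
    return {p: successes[p] / totals[p] for p in totals}

if __name__ == "__main__":
    for problem, prop in success_proportions(records).items():
        print(f"{problem}: {prop:.0%} of the class answered correctly")

The same records could be grouped by prior mathematics course before tabulating, which is essentially the cross-referencing with university records described above.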

Findings

During the first five years of operation at UW-Madison nearly 3700 students enrolled in 48 courses took assessment project tests of quantitative skills. We have found that instructors often want students to be able to reason independently, to make interpretations, and to draw on basic quantitative concepts in their courses; they seem less concerned about student recall of specific techniques. Students, on the other hand, are more successful with routine, standard computational tasks and often show less ability to use conceptual knowledge or insight to solve less standard problems ([1]), such as:

Here are the graphs of a function, f, and its first and second derivatives, f′ and f″. (Graph omitted.) Label each curve as the function or its first or second derivative. Explain your answers.

(In one NDSU engineering class 74% of the students correctly labeled the graphs; 52% of them gave correct support. In another engineering class, which also required the three-semester calculus sequence, 43% of the students supported a correct labeling of the graphs.)
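A minimal sketch of the reasoning such a labeling problem calls for, stated as standard calculus facts (the specific graphs are not reproduced here):

\begin{align*}
  f'(a) &= 0 && \text{wherever } f \text{ has a local maximum or minimum at } x = a,\\
  f''(a) &= 0 && \text{wherever } f' \text{ has a local maximum or minimum at } x = a,\\
  f''(x) &> 0 && \text{on intervals where } f \text{ is concave up, i.e., where } f' \text{ is increasing.}
\end{align*}

One curve is therefore the derivative of another exactly when its zeros line up with the other's local extrema, which is the sort of explanation the "Explain your answers" prompt asks students to give.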

Here is the graph of a function y = f(x). Use the graph to answer these questions:

(a) Estimate f′(4). (On the graph, f has a local minimum at x = 4. 84% correct)

(b) Estimate f′(2). (On the graph, x = 2 is an inflection point where the slope is negative. 44% correct)

(c) On which interval(s), if any, does it appear that f′(x) < 0? (65% correct)

(Percentages are the proportion of students in a UW engineering course, which had three semesters of calculus as a prerequisite, who answered the question correctly.)
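Although the graph is not reproduced here, the expected estimates follow directly from the features described above; a brief sketch:

\begin{align*}
  \text{(a)}\quad f'(4) &\approx 0 && \text{because } f \text{ has a local minimum at } x = 4,\\
  \text{(b)}\quad f'(2) &< 0 && \text{because the tangent at the inflection point } x = 2 \text{ has negative slope},\\
  \text{(c)}\quad f'(x) &< 0 && \text{precisely on the interval(s) where } f \text{ is decreasing.}
\end{align*}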

These two problems illustrate common expectations; both have been chosen by instructors for use in many courses. Our experience suggests that many instructors want students to understand what a derivative represents; they have less interest in student recall of special differentiation or integration techniques. Few students with only one semester of calculus have answered either problem correctly. Even in classes where students have completed the regular three-semester calculus sequence, success rates are surprisingly low. Most students had reasonable mathematics backgrounds: more than half of the 87 students mentioned here had a B or better in their previous mathematics course, which was either third-semester calculus or linear algebra. Problem success rates are often higher when we simply ask students to differentiate or integrate a function. For example, over three-quarters of the students in the same class correctly evaluated a given definite integral. (Integral omitted.) (See [5] for a discussion of student retention of learned material.)

Indicative of the complex service role played by the lower-division mathematics sequence, we noted the differing balance of content required by faculty in the three main subject areas: (a) Mathematics (four distinct courses); (b) Physical Sciences (five courses); and (c) Engineering (six courses). We structured our problems to lie in four main groups: (A) non-calculus, (B) differential calculus, (C) integral calculus, and (D) differential equations. In mathematics courses, for example, 60% of the problems used were non-calculus; physical science courses drew heavily from differential calculus (56% of the problems), while engineering courses had a comparatively even balance of problems from the four groups.

Use of Findings

Important advantages of this assessment method include direct faculty involvement, close ties to students' own goals and backgrounds, and the prompt return of results to instructors and students.

Instructors have mostly reacted very favorably to the assessment process. Some report no need to make changes, while others, recognizing difficulties, have modified their courses by adjusting the curriculum, omitting reviews, or including additional work. Students report less influence, partly because many mistakenly see the test as a pretest of material that they will study in the course. Some fail to see the connection between a mathematical problem on the test and the way the idea is used in the course. In technical courses, typically around half the class reports studying both before and after the assessment test and claims that the review is useful. Most students, when questioned at the end of the semester, recognized that the skills were important in their course but still chose not to use the assessment information to help them prepare.

We report annually to the entire mathematics faculty, but we have probably had greater curricular influence by targeting our findings at individuals and committees responsible for specific levels or groups of courses, particularly precalculus and calculus. Findings from many assessed courses have shown, for instance, that faculty members want students to interpret graphical representations. This had not always been emphasized in mathematics courses.

An early, striking finding was that some students were avoiding any courses with quantitative expectations. These students were unable to use percentages or to extract information from tables and bar graphs. A university curriculum committee at UW-Madison, viewing these results, recommended that all baccalaureate degree programs include a six-credit quantitative requirement. The Faculty Senate adopted the recommendation, a clear indication that our focus on individual courses can produce information useful at the broadest institutional levels. Results of the assessment not only led to the policy but also aided in designing new courses to meet the requirement. We are now refining our assessment model on the Madison campus to help assess this new general education component of our baccalaureate program.

How do faculty respond when many students do not have necessary skills, quantitative or otherwise? Sometimes, we have found a "watering down" of expectations. This is a disturbing finding, and one that individuals cannot easily address since students can "opt out" of courses. Our assessment can help to stem this trend by exposing the institutional impact of such individual decisions to faculty members and departments.

Success Factors

Angelo and Cross, in their practical classroom assessment guide for college faculty [1], suggest that assessment is a cyclic process with three main stages: (a) planning, (b) implementing, and (c) responding (p. 34). Although we have cited several positive responses to our assessment work, there have also been instances where assessment revealed problems but no action was taken, breaking our assessment cycle after the second stage. We expect this to be an enduring problem for several reasons. First, our approach operates on a voluntary basis, and interpretation of and response to our findings are left to those affected. Second, the problems do not have simple solutions; some of them rest with mathematics departments, but others carry institutional responsibility.

Some of our projects' findings are reported elsewhere ([2], [4]). While they may not generalize beyond specific courses, or perhaps beyond our own institutions, the significance of this work lies in our methodology. Because each assessment is closely tied to a specific course, its impact can range from a particular focus on the mathematics department (actually a major strength) to a campus-wide effect on the undergraduate curriculum.

Assessment has always had a prominent, if narrow, role in the study of mathematics in colleges and universities. Except for graduate qualifying examinations, most of this attention has been at the level of individual courses, with assessment used to monitor student learning during and at the end of a particular class. The natural focus of a mathematics faculty is on their majors and graduate students. Still, their role in a college or university is much larger because of the service they provide by training students for the quantitative demands of other client departments. It is important that mathematicians monitor the impact of this service role along with their programs for majors.

References

[1] Angelo, T.A., & Cross, K.P. Classroom Assessment Techniques (second edition), Jossey-Bass, San Francisco, 1993.

[2] Bauman, S.F., & Martin, W.O. "Assessing the Quantitative Skills of College Juniors," The College Mathematics Journal, 26 (3), 1995, pp. 214-220.

[3] Ewell, P.T. "To capture the ineffable: New forms of assessment in higher education," Review of Research in Education, 17, 1991, pp. 75-125.

[4] Martin, W. O. "Assessment of students' quantitative needs and proficiencies," in Banta, T.W., Lund, J.P., Black, K.E., and Oblander, F.W., eds., Assessment in Practice: Putting Principles to Work on College Campuses, Jossey-Bass, San Francisco, 1996.

[5] Selden, A., & Selden, J. "Collegiate mathematics education research: What would that be like?" The College Mathematics Journal, 24, 1993, pp. 431-445.

Copies of a more detailed version of this paper are available from the first author at North Dakota State University, Department of Mathematics, 300 Minard, PO Box 5075, Fargo, ND 58105-5075 (email: wimartin@plains.nodak.edu).
