For students to believe that we expect them to understand the ideas, not just be able to do computations, our tests must reflect this expectation. This article discusses how this can be done.
Background and Purpose
Tests (timed, in-class, individual efforts) are still the primary assessment tool used by mathematics faculty at DeKalb College, but comparing current tests to tests of ten, or even five, years ago shows that perhaps these characteristics are the only things that are still the same. And even these three descriptors are often wide of the mark. A variety of other types of assessment are used on a regular basis at DeKalb College, but this essay is primarily a report on how tests have changed.
DeKalb College is a large, multi-campus, public, commuter, two-year college in the metropolitan Atlanta area. We have approximately 16,000 students, many of whom are non-traditional students. The curriculum is liberal arts based and designed for transfer to four-year colleges. There are only a handful of career programs offered by the college. The mathematics courses range from several low-level courses for the career programs through a full calculus sequence and include discrete mathematics, statistics, linear algebra, and differential equations courses. In each course we follow a detailed common course outline, supplemented by a course-specific teaching guide, and use the same textbook in each class. At regular intervals we give common final examinations to assess the attainment of course goals, but otherwise individual instructors are free to design their own tests and other assignments. A comprehensive final examination is required by the college in all courses. In practice, mathematics faculty often work together in writing tests, assignments, and final exams.
Mathematics teachers at all levels joke about responses to the dreaded question, "Will this be on the test?" But students are reminding us with this question that in their minds, tests are the bottom line in their valuation of material. Most students value ideas and course material only to the extent that they perceive the instructor values them, and that is most clearly demonstrated to them by inclusion of a topic on a test. Our students come to us with very strong ideas about what tests are and what they are not. Changing tests means changing our messages about what we value in our students' mathematical experiences. Acknowledging this means we have to think about what those messages should be. At DeKalb College, we see some of our entry-level courses primarily as skill-building courses, others as concept-development courses. Even in courses that focus on skill building, students are asked to apply skills, often in novel settings. We want all students to be able to use their mathematics. Also, fewer skills seem critical to us now; in time it will be hard to classify a DeKalb College mathematics course as skill building rather than concept building.
Technology was one of the original spurs for rethinking how tests are constructed; writing common finals was the other spur. One of the first issues we tackled in using technology was how to ask about things that faculty valued, such as students' knowing exact values of trig functions. Those discussions and their continuations have served us well as we use more and more powerful technology. Some of the questions that recur are: Is this something for the technology to handle? at this stage in the student's learning? ever? Is this topic or technique as important as before? worth keeping with reduced emphasis? Common finals mean we have to discuss what each course is about, what is needed for subsequent courses, and what can really be measured. The driving force now is our evolving philosophy about our courses and their goals. Changes in course goals have resulted in changes in instruction. We use lots of collaborative work, lab-type experiences, extended time out-of-class assignments, and writing in a variety of settings. All of these require appropriate types of assessment, and this affects what our traditional tests must assess.
When we do want to evaluate mathematical skills, we ask directly for those things we think are best done mentally. The experience of choosing appropriate technology, including mental computation and paper-and-pencil work, is a necessary part of testing in a technology-rich environment. We try to be careful about time requirements, as we want to assess skill, not speed. As skill assessments, the next two examples differ very little from textbook exercises or test items from ten years ago, but now items of this type play a very minor role in testing.
Example: Give the exact value of sin .
Example: (a) Give the exact value of .
(b) Approximate this value to three decimal places.
Another type of question for skill assessment that gives the student some responsibility is one that directly asks the student to show that he or she has acquired the skill.
Example: Use a 3 × 3 system to demonstrate that you know and can apply Cramer's rule.
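As a reminder of what such a demonstration involves, Cramer's rule for a 3 × 3 system can be stated as follows (this is the standard statement of the rule, not tied to any particular system a student might choose):

```latex
A\mathbf{x} = \mathbf{b}, \quad A \in \mathbb{R}^{3\times 3}, \ \det A \neq 0
\quad\Longrightarrow\quad
x_i = \frac{\det A_i}{\det A}, \qquad i = 1, 2, 3,
```

where A_i denotes the matrix A with its i-th column replaced by the vector b. A complete student response would exhibit a specific system, compute the four determinants, and verify the solution.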
Example: Describe a simple situation where the Banzhaf power index would be appropriate. Give the Banzhaf index for each player in this situation.
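For instructors who want to check a student's Banzhaf computation quickly, the index can be found by brute force for small games. The sketch below assumes a weighted voting game described by a list of weights and a quota; the three-player situation shown (votes of 3, 2, and 1, with 4 votes needed to pass a motion) is a hypothetical illustration, not an example from this article.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Raw swing counts and the normalized Banzhaf index for a
    weighted voting game (brute force over coalitions; small n only)."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                w = sum(weights[p] for p in coalition)
                # Player i is critical when the coalition loses
                # without i but wins once i joins.
                if w < quota <= w + weights[i]:
                    swings[i] += 1
    total = sum(swings)
    return swings, [s / total for s in swings]

# Hypothetical situation: three stockholders with 3, 2, and 1 votes;
# a motion passes with 4 or more votes.
counts, index = banzhaf([3, 2, 1], quota=4)
```

In this game the largest player is critical in 3 of the 5 total swings, so the index is (0.6, 0.2, 0.2), even though the vote weights are in the ratio 3 : 2 : 1.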
Assessing Conceptual Understanding
Good ways to measure conceptual understanding can be more of a challenge to create, but there are some question styles we have found to be useful. Here are examples of (1) creation items, (2) reverse questions (both positive and negative versions), (3) transition between representations, (4) understanding in the presence of technology, and (5) interpretation.
(1) Creation tasks are one way to open a window into student understanding. Often a student can identify or classify mathematical objects of a certain type but cannot create an object with specified characteristics.
Example: Create a polynomial of fifth degree, with coefficients 3, -5, 17, 6, that uses three variables.
Example: Create a function that is continuous on [-5,5], whose derivative is positive on [-5,3) and negative on (3,5], and whose graph is concave up on [0, 2] and concave down elsewhere.
(2) One thing we have learned is that students are always learning. Whether they are drawing the conclusions the instructor is describing is an open question, however. Reversing the type of question on which the student has been practicing is a good way to investigate this question. These two examples come from situations where exercises are mostly of the form: Sketch the graph of ... . (The graphs have been omitted in the examples.)
Example: Give a function whose graph could be the one given.
Example: Why can the graph given not be the graph of a fourth degree polynomial function?
(3) Being able to move easily between representations of ideas can be a powerful aid in developing conceptual understanding as well as in problem solving, but students do not make these transitions automatically. Test items that require connecting representations or moving from one to another can strengthen these skills as well as reveal gaps in understanding.
Example: What happens to an ellipse when its foci are "pushed" together and the fixed distance is held constant? Confirm this using a symbolic representation of the ellipse.
Example: The columns of the following table represent values for f, f′, and f″ at regularly spaced values of x. Each line of the table gives function values for the same value of x. Identify which column gives the values for f, for f′, and for f″. (Table omitted here.)
(4) When technology is used regularly to handle routine computations, there is a possibility that intuition and a "feel" for the concept are being slighted. If this is a concern, one strategy is to ask questions in a form that the available technology cannot handle. Another is to set a task where technology can be used to confirm, but not replace, intuition.
Example: Let f(x) = ax² and g(x) = bx³, where a > b. Which function has the larger average rate of change on the interval ? Support your conclusion mathematically.
Example: Change the given data set slightly so that the mean is larger than the median.
What is another result of the change you made?
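Since the article omits the data set for this item, here is a hedged illustration with a made-up set. Raising the largest value slightly pushes the mean above the median without moving the median, and a second consequence of the change (the range grows) answers the follow-up question. The data below are hypothetical.

```python
import statistics

# Hypothetical data set; the article's actual data are omitted.
data = [1, 2, 3, 4, 5]      # mean = 3, median = 3
changed = [1, 2, 3, 4, 6]   # a slight change: raise the largest value by 1

mean_after = statistics.mean(changed)      # 3.2, now larger than the median
median_after = statistics.median(changed)  # still 3

# Another result of the change: the range increases from 4 to 5.
range_after = max(changed) - min(changed)
```

Only the mean responds to the change because the median depends solely on the middle value, which is untouched; this is exactly the distinction the item probes.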
(5) Another source of rich test items is situations that ask students to interpret a representation, such as a graph, or results produced by technology or another person.
Example: The following work has been handed in on a test. Is the work correct? Explain your reasoning.
Example: The following graph was obtained by dropping a ball and recording its height in feet at times measured in seconds. Why do the dots appear to be farther apart as the ball falls? (The graph is omitted here.)
Assessing Problem Solving
Because developing problem-solving skills is also a goal in all of our courses, we include test items that help us measure progress toward this goal. Questions like these also tell our students that they should reflect on effective and efficient ways to solve problems.
Example: Our company produces closed rectangular boxes and wishes to minimize the cost of material used. This week we are producing boxes that have square bottoms and that must have a volume of 300 cubic inches. The material for the top costs $0.34 per square inch, for the sides $0.22 per square inch, and for the bottom $0.52 per square inch. (a) Write a function that you could use to determine the minimum cost per box.
(b) Describe the method you would use to approximate this cost.
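One possible setup for part (a), sketched here using the costs given in the item: let x be the side of the square bottom and h the height, both in inches. The volume constraint eliminates h, leaving a one-variable cost function:

```latex
x^2 h = 300 \;\Longrightarrow\; h = \frac{300}{x^2},
\qquad
C(x) = \underbrace{0.34\,x^2}_{\text{top}}
     + \underbrace{0.52\,x^2}_{\text{bottom}}
     + \underbrace{0.22\,(4xh)}_{\text{sides}}
     = 0.86\,x^2 + \frac{264}{x}, \quad x > 0.
```

For part (b), a student might describe graphing C and zooming in on the low point, building a table of values, or solving C′(x) = 0 numerically; the item deliberately leaves the choice of method open.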
Example: (a) Use synthetic division to find f(2.7) for the function f(x) = 2x⁵ - 3x³ + 7x - 11. (b) Give two other problems for which you would have done the same calculations as you did in part (a).
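Synthetic division at x = 2.7 carries out exactly the arithmetic of Horner's method, and by the remainder theorem the same numbers give the remainder on dividing f(x) by (x - 2.7), which is one answer to part (b). A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients in descending order) at x.
    The running values are exactly the bottom row of the synthetic-
    division tableau, and the final value is both f(x) and the
    remainder on dividing by (x - c)."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# f(x) = 2x^5 - 3x^3 + 7x - 11 (note the zero coefficients for x^4 and x^2)
f_27 = horner([2, 0, -3, 0, 7, -11], 2.7)  # 235.82914
```

The zero coefficients must be supplied explicitly, a point synthetic-division items routinely test.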
In addition to the broad categories described thus far, questions that require students to describe or explain, estimate answers, defend the reasonableness of results, or check the consistency of representations are used more often now. All these changes in our tests are part of a rethinking of the role of assessment in teaching and learning mathematics. We understand better now the implicit messages students receive from assessment tasks. For example, tests may be the best way to make it clear to students which concepts and techniques are the important ones. Assessing, teaching, and determining curricular goals form an interconnected process, each shaping and re-shaping the others. Assessment, in particular, is not something that happens after teaching, but is an integral part of instruction.
Use of Findings
As our tests have changed, we have learned to talk with students about these changes. Before a test, students should know that our questions may not look like those in the text and understand why they should be able to apply concepts and do more than mimic examples. Students should expect to see questions that reflect the full range of classroom and homework activities (computations, explorations, applications) and to encounter novel situations. We suggest that students prepare by practicing skills (with and without technology, as appropriate), by analyzing exercises and examples for keys to appropriate problem-solving techniques, and by reflecting on the "big" ideas covered since the last test. After a test, students should have opportunities to discuss what was learned from the test, thought processes and problem-solving strategies, and any misunderstandings that were revealed. These discussions, in fact, may be where most of a student's learning takes place.
The world provides many wonderful and clever problems, and newer texts often contain creative exercises and test questions. But our purpose is to learn what the student knows or does not know about the material, not to show how smart we are. So we examine course goals first, then think about questions and tasks that will support those goals. We also suggest reconsidering the grading rubric for tests. Using an analytic or holistic grading rubric can change the messages sent by even a traditional test. Consider the proper role of tests in a mix of assignments that help students accomplish course goals. And ask one final question: do you really need to see timed, in-class, individual efforts?