This pre- and post-testing system in a liberal arts mathematics course raises interesting questions about testing in general and about why students may sometimes appear to go backwards in their learning.
Background and Purpose
King's College is a liberal arts college with about 1800 full-time undergraduate students. Its assessment program has several components. The program as a whole is described in [2] and [3], and [4] outlines the program briefly and details a component relating to mathematics majors.
The component of interest here is assessment in core (general education) courses. The assessment program was initially spearheaded by an academic dean, who built upon ideas from various faculty members. Assessment in each core course was to consist of administering and comparing results from a pretest and post-test. By the time the program had matured sufficiently for the College's Curriculum and Teaching Committee and Faculty Council to endorse a college-wide assessment policy, it became clear that not all disciplines would be well served by the pre/post-testing approach; the college policy was written in a more liberal fashion, and the use of a pretest became optional.
The use of both a pretest and a post-test, however, has proved very useful in the quantitative reasoning course, which is required of all students who do not take calculus, i.e., humanities and social science majors. It has no prerequisite, nor is there a lower, remedial course. The course covers a selection of standard topics: problem solving, set theory and logic, probability and statistics, and consumer math. Recent texts have been [1] and [5].
Method
As with every core course, quantitative reasoning is loosely defined by a set of learning goals and objectives for the student. These are initially formulated by the Project Team for the course. (The team typically consists of the instructors teaching the course; it may also have members from other departments to provide a broader perspective, though that is not currently the case.) The set of goals and objectives must ultimately be approved by the College's Curriculum and Teaching Committee, which represents the faculty's interest in shaping the liberal education our students take with them into society.
Learning objectives are distinguished from learning goals in that the former are more concrete and more easily assessed. For example, one of the eight objectives for quantitative reasoning is: "to be able to compute measures of central tendency and dispersion for data." By contrast, the six goals for the course are more abstract, as in: "to become more alert to the misuses of statistics and of graphical representations of data." All goals and objectives are phrased in terms of student (not teacher) performance.
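To make the flavor of such an objective concrete, here is a minimal sketch, in Python with hypothetical data, of the computations the quoted objective names; none of these numbers come from our course.

```python
import statistics

# Hypothetical quiz scores; any small data set would do.
data = [72, 85, 85, 90, 68, 77, 95]

# Measures of central tendency.
print(statistics.mean(data))    # arithmetic mean
print(statistics.median(data))  # median
print(statistics.mode(data))    # mode (85, the repeated value)

# Measures of dispersion.
print(max(data) - min(data))    # range
print(statistics.stdev(data))   # sample standard deviation
```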
The learning goals and objectives guide what a Project Team decides should be taught and how an assessment instrument should be designed. Teams differ in their approaches to the various sections of a course; for example, different instructors may use different textbooks to teach the same core course. The Quantitative Reasoning Project Team has chosen to use a single textbook and a single assessment instrument. Unlike those in most core courses, the pre- and post-tests for quantitative reasoning are identical, and both are handed back at the same time. This approach limits how the pretest can be used as a learning tool during the course, but it provides the instructor with the cleanest before-and-after comparisons. By not returning the pretest early, he/she does not have to worry about whether the second test is sufficiently similar to the pretest on the one hand or whether students are unduly "prepped" for the post-test on the other.
While the quantitative reasoning course strives to provide the kind of sophistication we want each of our graduates to possess, the pre/post-test focuses even more intently on skills we might hope an alum would retain years after graduation or an incoming freshman would already possess! While the test is sufficiently comprehensive to span the full range of areas covered by the course, it does not evaluate the entire collection of skills taught in the course. It intentionally does not test for knowledge of the more complicated formulas (e.g., standard deviation). The pre/post-test also deliberately avoids the use of "jargon" that might be appropriate in the course but is unlikely to be heard elsewhere. For example, in the course we would discuss the "negation" of an English sentence; this term would be used in homework, quizzes, and tests. On the pre/post-test, however, a sentence would be given, followed by the query: "If this sentence is false, which of the following sentences must be true?"
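To illustrate the style with a hypothetical item (not one from the actual test): given the sentence "All of the buses run on Saturday," if that sentence is false, the statement that must be true is "At least one bus does not run on Saturday," not the tempting "No buses run on Saturday."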
The pre/post-test also makes a more concerted attempt to detect understanding of concepts than do traditional textbook homework problems. For example, given that the probability of rain is 50% on each of Saturday and Sunday, students are asked whether the probability of rain during the weekend is 50%, 100%, less than 50%, or in between 50% and 100%; a formula for conditional probability is not needed, but understanding is.
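For the curious reader, here is a short sketch of the reasoning behind that item, under the added assumption (not stated in the question) that rain on the two days is independent:

```python
# Probability of rain on each day, as given in the question.
p_sat, p_sun = 0.5, 0.5

# "Rain during the weekend" means rain on at least one day, i.e., the
# complement of staying dry on both days (independence assumed here).
p_weekend = 1 - (1 - p_sat) * (1 - p_sun)
print(p_weekend)  # 0.75 -- strictly between 50% and 100%
```

Under independence the answer comes to 75%; the test merely asks students to recognize that it must lie between 50% and 100%.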
The pre/post-test differs from the course's quizzes and tests in other ways. It consists of 25 short questions, about one-third of which are multiple-choice. No partial credit is awarded. The problems are planned so that computations do not require the use of a calculator. This latter feature is imposed because the pretest is given on the first day of class, when some students come sans calculator. The pretest does not contribute to the course grade; even so, our students seem adequately self-motivated to do the best they can on the test. Each student should be able to answer several pretest questions correctly. The pretest thus serves as an early-warning system, since any student who gets only a few answers right will invariably need special help. Time pressure is not an issue on the pretest because the only other item of business on the first day of class is a discussion of the course syllabus. The post-test likewise is untimed, since it is given on the same day as the last quiz, and nothing else is done that day. The post-test counts slightly more than a quiz. It serves as an immediate learning tool, as it is used in reviewing for the last test and the final exam.
Findings
Since the pre- and post-tests are identical, and questions are graded either right or wrong, it is easy to collect question-by-question before-and-after data. During our six years of pre/post-testing, the precise balance of questions has changed slightly each year as we continually strive to create the most appropriate test. Still, consistent patterns emerge with regard to the learning exhibited in different subject areas. For instance, our students tend to be more receptive to probability and statistics than to logic.
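The tabulation itself is trivial; the following sketch (with toy right/wrong data, not our actual results) shows the question-by-question comparison we have in mind:

```python
# Rows are students, columns are test items; 1 = right, 0 = wrong.
pre  = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]
post = [[1, 1, 1], [0, 1, 1], [1, 1, 0]]

# For each item, count how many students answered correctly before and after.
for q in range(len(pre[0])):
    before = sum(row[q] for row in pre)
    after = sum(row[q] for row in post)
    print(f"Q{q + 1}: {before} -> {after} students correct")
```

In this toy data the third item loses ground from pretest to post-test, exactly the kind of pattern discussed below.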
Another finding is that the pretest is, in fact, not a reliable predictor of success in the course except at the high and low ends of the scale; viz., students who score high on the pretest (better than 15/25) do not experience difficulties with the course, while students who score very low (worse than 7/25) do. Similarly, post-test scores do not closely correspond to final exam scores. This is partly because some students make better use of the post-test in preparing for the final exam. Also contributing to the discrepancy is the difference in focus between the post-test and the final exam. This raises the legitimate question of whether the post-test or the more traditional final exam provides the "better" way of producing a grade for each of our students.
The most startling pattern to emerge is a good-news-bad-news story that is somewhat humbling. Most of our students arrive already knowing the multiplicative counting principle of combinatorics. By the end of the course, they have made good progress mastering permutations and combinations, but at a cost: consistently fewer students correctly solved a simple pants-and-shirts problem on the post-test than on the pretest!
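To see the backsliding concretely, consider a hypothetical version of the item (the numbers are illustrative, not the test's): with 4 pairs of pants and 5 shirts, the multiplicative counting principle gives 4 times 5 outfits, yet students fresh from permutations and combinations reach for factorial-based formulas instead.

```python
from math import comb, factorial

pants, shirts = 4, 5

print(pants * shirts)                        # 20 -- the correct count of outfits
print(factorial(pants) * factorial(shirts))  # 2880 -- factorial "backsliding"
print(comb(pants + shirts, 2))               # 36 -- another tempting wrong formula
```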
Use of Findings
Even though students do not see the results of their pretests until late in the course, those results can be useful to the instructor. The instructor is forewarned as to the areas in which a class as a whole is strong or weak. Where a class is weak, additional activities can be prepared in advance. Where a class is strong, the instructor can encourage students by pointing out, in general terms, those skills which most of them already have. At the individual level, if a student has a very low pretest score, he/she may be advised to sign up for a tutor, enroll in a special section, etc. An obvious way to use before-and-after comparative data is for teachers and learners to see where they might have done their respective jobs better. For teachers, who typically get to do it all over again, comparing results from semester to semester can indicate whether a change in pedagogy has had the desired effect. Again, the news may be humbling.
A case in point: knowing that students were likely to "go backwards" in regard to the multiplicative counting principle, I attempted to forewarn students to be on guard against this combinatorial backsliding; pre/post-test comparisons revealed that my preaching had no impact! What was needed to overcome this tendency was a new type of in-class exercise designed to shake students' all-abiding faith in factorials.
Another potential use for pre/post-test data is to make comparisons among different sections of a course. Doing this to evaluate the effectiveness of faculty could be dangerous, as we all know how different sections of students can vary in talent, background, attitude, etc. But an important application of the data has been to compare two sections taught by the same professor. A special section of the quantitative reasoning course was set up for students self-identified as weak in math. The section required an extra class meeting per week and a variety of additional activities (e.g., journals). Pretest scores confirmed that, with one exception, the students who chose to be in the special section did so for good reason. Pre/post-test comparisons indicated that greater progress was made by the special section, which is not surprising since students who start with lower scores have more room to move upward. But, in addition, the post-test scores of the special section were nearly as high as those in the regular section taught by the same instructor.
Success Factors
Comparing results from two tests is the key to this assessment method. Some particulars of our approach (using the same test twice, using simple questions, giving no partial credit) simplify a process that, even so, tells us much we would not have known otherwise about the learning that is or is not taking place; these particulars are not essential to the success of testing students twice. However, pre/post-test comparative data reveals only what progress students have made. It does not reveal what brought about that progress or what should be done to bring about greater progress. In the case of the special section for weak students, the pre/post-test could not tell us to what extent the journal, the extra work at the board, and the extra homework assignments each contributed to the success of that section. Likewise, merely detecting negative progress in one area (combinatorics) was not enough to improve the teaching/learning process; a new pedagogical approach was needed.
Pre/post-testing does not provide the formula for improvement. It must be accompanied by a teacher's creativity and flexibility in devising new techniques. As with any powerful tool, it is only as good as its user! Ultimately the most important factor in the success of this assessment method is not how it is administered, but how it is used.
References
[1] Angel, A.R., and Porter, S.R. Survey of Mathematics with Applications (Fifth Edition), Addison-Wesley Publishing Company, 1997.
[2] Farmer, D.W. Enhancing Student Learning: Emphasizing Essential Competencies in Academic Programs, King's College Press, Wilkes-Barre, PA, 1988.
[3] Farmer, D.W. "Course-Embedded Assessment: A Teaching Strategy to Improve Student Learning," Assessment Update, 5 (1), 1993, pp. 8, 10-11.
[4] Michael, M. "Assessing Essential Academic Skills from the Perspective of the Mathematics Major," in this volume, p. 58.
[5] Miller, C.D., Heeren, V.E., and Hornsby, E.J., Jr. Mathematical Ideas (Sixth Edition), HarperCollins Publishers, 1990.