The author helped create the MAA guidelines for quantitative literacy and here spells out how that document was used at her large university. The school tested students in a variety of courses and graded the results; the findings led to curricular changes.
Background and Purpose
In the mid-eighties the faculty at Northern Illinois University (DeKalb, Illinois) reviewed the requirements for their baccalaureate graduates and determined that each should be expected to be at least minimally competent in mathematical reasoning. Setting up an appropriate means for students to meet this expectation required the faculty to look carefully at the diversity of student majors and intents for the undergraduate degree. The result of this examination was to establish multiple ways by which the competency requirement could be met, but that program carried with it the burden of showing that the various routes led to the desired level of competency in each instance. This article describes the quantitative literacy program at NIU and discusses some aspects of its assessment.
The author's involvement in this work stems from her service on the University's baccalaureate review committee, the subsequent establishment of a competency-related course at NIU, and the chairing of a national committee of the Mathematical Association of America (MAA) on quantitative literacy requirements for all college students. The latter resulted in a report published by the MAA, which has also been available at the MAA's Web site, "MAA ONLINE" (http://www.maa.org).
Northern Illinois University is a comprehensive public university with roughly 17,000 undergraduate students who come largely from the top third of the State of Illinois. The University is located 65 miles due west of Chicago's Loop and accepts many transfer students from area community colleges. For admission to the University, students must have three years of college preparatory high school mathematics/computer science (which means that most students arrive with at least two years of high school mathematics). Special programs are offered for disadvantaged students and for honors students. The University consists of seven colleges with 38 departments, offering 61 distinct bachelor's degree programs. Designing a quantitative literacy program to fit a university of this type represents both a challenge and a commitment on the part of the university's faculty and administrators.
The term "quantitative literacy program" is defined in the 1995 report of the Mathematical Association of America's Committee on the Undergraduate Program in Mathematics (CUPM) regarding quantitative literacy requirements for all college students (see Part III). The recommendations in that report are based on the view that a quantitatively literate college graduate should be able to:
1. Interpret mathematical models such as formulas, graphs, tables, and schematics, and draw inferences from them.
2. Represent mathematical information symbolically, visually, numerically, and verbally.
3. Use arithmetical, algebraic, geometric and statistical methods to solve problems.
4. Estimate and check answers to mathematical problems in order to determine reasonableness, identify alternatives, and select optimal results.
5. Recognize that mathematical and statistical methods have limits.
Such capabilities should be held at a level commensurate with the maturity of a college student's mind.
CUPM notes in Part III of the report that a single college course in reasoning is not sufficient to establish literacy for most students, because habits of thinking are established by introducing a habit and then reinforcing it over time as it is placed in perspective and sufficiently practiced. Thus, CUPM recommends that colleges and universities establish quantitative literacy programs consisting of a foundations experience and continuation experiences. The foundations experience introduces the desired patterns of thought and the mathematical resources to support them, while the continuation experiences give students the growing time essential to making new attitudes and habits an integral part of their intellectual machinery.
At Northern Illinois University the foundations experience for the quantitative literacy program consists of taking a specific course as part of the general education program set for all students. However, the specific course may vary from student to student depending on placement data and the probable major of the student.
Many major programs at NIU require students to study specific mathematical content. For students in major programs which do not require specific mathematics, the Department of Mathematical Sciences devised a foundations course called Core Competency. This course focuses on computational facility and facts, the interpretation of quantitative information, elementary mathematical reasoning, and problem solving. Topics in the course are taught in application settings likely to be part of the real-life experience of a college graduate and include some probability and statistics, geometry, logical arguments, survey analysis, equations and inequalities, systems of equations and inequalities, and personal business applications. Elementary mathematics is reviewed indirectly in problem situations.
In contrast, the courses offered to meet the needs of students whose programs require specific mathematical content often demand greater computational facility than is needed merely for admission to NIU, and their objectives relate to their service role. Thus, while a student taking one of these courses may have attained the computational facility which the core competency necessitates, the student may not have acquired the broad mathematical skills the core competency entails. In examining how its various service courses met the competency objectives, the Department concluded that six courses would satisfy those objectives, provided a student completed the course with a grade of at least a C: (1) Elementary Functions; (2) Foundations of Elementary School Mathematics (a course for future elementary school teachers, taken almost exclusively by students intending that career); (3) Introductory Discrete Mathematics; (4) Finite Mathematics; (5) Calculus for Business and Social Science; and (6) Calculus I.
The Calculus I course is always offered in small sections of about 25 students, and in a given semester some of the other courses are offered in small sections as well; others are offered as auditorium classes with associated recitation sections of 25-30 students. Since some students take more than one of the seven qualifying courses (the six above plus Core Competency), it is not easy to know precisely which course a given student is using to meet the competency requirement, but approximately 13% meet it by taking the Core Competency course; 16% by Elementary Functions; 8% by Foundations of Elementary School Mathematics; 1% by Introductory Discrete Mathematics; 25% by Finite Mathematics; 25% by Calculus for Business and Social Science; and 12% by Calculus I.
In executing the competency offerings, two questions emerged which subsequently became the framework about which an assessment plan was set.
1. What hard evidence is there that these seven routes (including the Core Competency route) each lead to at least a minimal competency?
2. Can we obtain information from a plan for measurement of minimal competency which can be used to devise better placement tests for entering students? (The Department has historically devised its own placement tests to mesh with the multiple entry points into the mathematics offerings which entering students may face.) Could this same information be used to construct an exit examination for degree recipients regarding quantitative literacy?
Tests were prepared for use in the Fall of 1993 and administered to a sample consisting of nearly all students in each of the courses taught that semester that were accepted as meeting the competency requirement. The tests were comparable in form and administered near the end of the semester so as to determine the extent to which students had acquired the desired skills. Because the tests had to be given on different dates in various classes, multiple forms had to be prepared. Each form included questions testing computational facility, questions involving problem solving, and questions requiring interpretation and mathematical reasoning: ten multiple choice questions, three open-ended interpretive questions, and three problems to solve. To be sure that students took their performance on these examinations seriously, each course attached some value to the examination score in computing the student's course grade. For example, in the Core Competency course the test score could be substituted for a poor score on a previous hour examination, while in the Calculus for Business and Social Science course points were added to the total used to compute the final grade.
A uniform grading standard was drawn up for the examinations, and a team of five graders evaluated all of the examinations except for the Calculus I papers, which were graded by the course instructors according to the set guidelines. Examinations were scored for 2358 students. (The course Introductory Discrete Mathematics was not offered in Fall 1993, so data from it had to be gathered in another semester.) Clearly many hours were devoted to grading! The examinations were designed so that "passing" meant attaining at least 50% of the 100 possible points, a standard set with reference to the Core Competency students. Test scores were divided into the computational facility section (the multiple choice part) and the remainder, which was viewed as the problem solving component.
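The scoring scheme just described can be sketched in code. The point weights per item below are illustrative assumptions (the text fixes only the question counts and the 100-point total), not the Department's actual rubric:

```python
# Illustrative sketch of the scoring scheme described above.
# Known from the text: 10 multiple choice questions (computational
# facility), 3 interpretive questions and 3 problems (problem solving),
# 100 points total, passing = at least 50 points.
# Assumed: the per-item point values (4 per multiple choice item,
# 60 points total for the open-ended portion).

def score_exam(mc_correct, interp_points, problem_points):
    """Return (computational score, problem-solving score, passed)."""
    computational = mc_correct * 4                    # assumed weight
    problem_solving = interp_points + problem_points  # open-ended portion
    total = computational + problem_solving
    passed = total >= 50                              # 50% of 100 points
    return computational, problem_solving, passed
```

For example, a student with 8 multiple choice items correct, 15 interpretive points, and 20 problem points would score 32 on the computational component and 35 on the problem-solving component, passing with 67 points under these assumed weights.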
To interpret the scores on the graded examinations, a comparison had to be made with the grades the students received in the courses they took. The table below shows the resulting pass rates for the courses used to satisfy the foundations experience requirement.
A comparison of the means on the various test forms suggested that two of the seven forms were harder than the others (these forms were given in sections of more than one course, so their results could be compared). Both of these forms were taken by students in the Elementary Functions course. For those students in the Elementary Functions course who did not take the harder forms, the pass rate was 71%. Looking at the total performance of all students in the Elementary Functions course who took an assessment test, they scored high on the computational facility portion of the examination but relatively low on the problem solving component. In contrast, the students in the Foundations of Elementary School Mathematics course showed weak scores on the computational facility portion and relatively high scores on the problem solving component. Students in the remaining courses averaged a respectable passing percentage on each examination component.
| Course | Number of students taking test | Pass percentage of those who received competency credit |
| --- | --- | --- |
| Foundations of Elementary School Mathematics | 122 | 64 |
| Calculus for Business and Social Science | 398 | 80 |
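The tabulation behind pass rates of this kind can be sketched as follows. The records and course names here are hypothetical; the one detail taken from the text is that only students who received competency credit (a grade of at least a C) enter the denominator:

```python
# Sketch of a pass-rate tabulation like the one in the table above.
# Each record is (course, passed_the_assessment_test, received_credit);
# the data below are hypothetical, not the Fall 1993 results.

from collections import defaultdict

def pass_rates(records):
    """Percent of credit-receiving students in each course who passed."""
    passed = defaultdict(int)
    credited = defaultdict(int)
    for course, passed_test, got_credit in records:
        if got_credit:                 # only students with competency credit count
            credited[course] += 1
            if passed_test:
                passed[course] += 1
    return {c: round(100 * passed[c] / credited[c]) for c in credited}

sample = [
    ("Finite Mathematics", True, True),
    ("Finite Mathematics", False, True),
    ("Finite Mathematics", True, False),  # no credit: excluded from the rate
    ("Calculus I", True, True),
]
```

With this sample, `pass_rates(sample)` reports 50 for Finite Mathematics (one of the two credited students passed) and 100 for Calculus I.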
Use of Findings
Overall analysis of the assessment data led to the conclusions that:
Other conclusions about the study were that such an analysis should be repeated periodically across all of the courses mentioned, though it need not be done in every course during the same semester. Also, devising many comparable tests is likely to produce some unevenness across test forms, a situation which needs to be carefully monitored. (Administering a given test form in more than one course and using statistics to compare results among the different forms should "flag" a bad form.) And although it was not ideal, multiple choice questions were used on the computational facility part of the test merely to save grading time. (When fewer students are examined in a short period, multiple choice questions can be eliminated.)
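One simple way to carry out the form-comparison flagging mentioned above is to compare each form's mean score with the pooled mean and flag forms that fall well below it. This is a sketch under stated assumptions: the cutoff (half a pooled standard deviation below the overall mean) and the scores are illustrative, not the method or data actually used in 1993:

```python
# Sketch of a "flag a bad form" check: a form whose mean score falls
# more than a chosen margin below the pooled mean is flagged as
# possibly harder than the others. The margin (0.5 pooled standard
# deviations) is an assumed threshold, and the scores are made up.

from statistics import mean, pstdev

def flag_hard_forms(scores_by_form, margin_sds=0.5):
    """scores_by_form: dict mapping form name -> list of test scores."""
    pooled = [s for scores in scores_by_form.values() for s in scores]
    cutoff = mean(pooled) - margin_sds * pstdev(pooled)
    return [form for form, scores in scores_by_form.items()
            if mean(scores) < cutoff]

forms = {
    "A": [62, 70, 66, 71],
    "B": [60, 68, 65, 69],
    "C": [45, 50, 48, 52],  # noticeably lower scores: should be flagged
}
```

Here `flag_hard_forms(forms)` flags only form "C". In practice one would also want comparisons restricted to sections of the same course, since course populations differ.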
This assessment process took considerable effort on the part of the author (a seasoned faculty member) and two supported graduate assistants. Two others were brought in to help with grading. However, the assessment process was organized so as to be minimally intrusive for the individual course and instructor.
This assessment process grew out of the Department of Mathematical Sciences' ongoing concern for the success of its programs and was furthered by the University's General Education Committee which then recommended the plan the Department proposed regarding assessment of the core competency requirement. Consequently the University's Assessment Office partially supported the two graduate assistants who assisted in the construction of the examinations, the administration of the examinations, and the grading of the examinations.
Now the Department needs to turn its attention to the follow-up uses noted above and to an assessment of the role of the continuation experiences. At present these continuation experiences vary from student to student also in accordance with the student's major. But there has been discussion within the University of having a culminating general education experience for all students which might involve a project having a quantitative component. In either case a current intent is to analyze student achievement based on the follow-up test noted above, where possible, and on the faculty judgment of student attainment of the five capabilities listed in the CUPM report (and listed above). The appendix of that CUPM report has some scoring guides to try.
Committee on the Undergraduate Program in Mathematics. Quantitative Literacy for College Students, MAA Reports 1 (New Series), Mathematical Association of America, Washington, DC, 1996.