In this article, a "less" comprehensive exam to assess student learning in the core courses taken by all undergraduate mathematics majors at a regional, comprehensive university in the Midwest is discussed. The reader is guided through the process of developing the assessment instrument, which is used in all four tracks of the mathematical sciences program: Actuarial Science, Mathematics, Mathematics Education, and Statistics.
Background and Purpose
Ball State University is a comprehensive university with approximately 17,000 undergraduates. The University offers a strong undergraduate liberal and professional education and several graduate programs. The Department of Mathematical Sciences includes about 40 faculty and graduates about 50 undergraduate majors each year.
Assessment activities for mathematics majors began at Ball State University in 1988 with data collection using the Educational Testing Service (ETS) Major Field Test in Mathematics. Coincidentally, in the same year our department initiated a new core curriculum to be completed by all of our undergraduate majors. These students pursue one of four majors: Actuarial Science, Mathematics, Mathematics Education, or Statistics. The core of courses common to these programs has grown and now includes Single- and Multi-variable Calculus, Linear Algebra, Discrete Systems, Statistics, Algebraic Structures (except for Actuarial Science majors), and a capstone Senior Seminar.
Experience with the ETS examinations and knowledge of the mathematical abilities needed by successful graduates suggested that a specifically focused evaluation of the new core curriculum was both appropriate and desirable. It became apparent that there were several expectations associated with the core courses: the ability to link and integrate concepts, the ability to compare and contrast within a conceptual framework, the ability to analyze situations, the ability to conjecture and test hypotheses, the formation of reasoning patterns, and the gaining of insights useful in problem solving. That is, the core courses (and thus the assessment) should focus on student cognitive activity and growth attributable to exposure to the concepts, independent of the course instructor.
Method
Development and construction of a pilot instrument (i.e., a set of questions to assess the core curriculum) were initiated during 1991 and 1992. We independently formulated a combined pool of 114 items in various formats: true/false, multiple choice, and free response. We then refined several of the items and selected which to retain. Selection was based on two criteria: there should be a mix of problem formats, and approximately half of the questions should be nonstandard (not typically used on in-class examinations). We constructed two parts: Part I, containing 21 items covering functions and calculus concepts, and Part II, containing 18 items covering linear algebra, statistics, discrete mathematics, and introductory algebraic structures. Each part was designed to be administered in 90 minutes.
The nonstandard questions asked students to find errors in sample work (e.g., to locate an incorrect integral substitution), extend beyond typical cases (e.g., to maximize the area of a field using flexible fencing), or tie together related ideas (e.g., to give the function analogue of matrix inverse). These questions often asked for justified examples or paragraph discussions.
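To make the flavor of such items concrete, the fencing-type item invites a comparison like the following (the specific formulation is our illustration, not the instrument's wording): with a fixed length P of flexible fencing, a circular field encloses more area than a square one.

```latex
% Illustration only: a fixed perimeter P encloses more area as a circle
% than as a square (or any rectangle).
\[
  A_{\text{square}} = \left(\frac{P}{4}\right)^{2} = \frac{P^{2}}{16},
  \qquad
  A_{\text{circle}} = \pi \left(\frac{P}{2\pi}\right)^{2} = \frac{P^{2}}{4\pi}
  > \frac{P^{2}}{16}.
\]
```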
Departmental faculty members other than the investigators were assigned to select subjects and administer the instruments in Fall 1991 (Part I) and Fall 1992 (Part II). Subjects were students who had completed the appropriate core components and were encouraged by the department to participate on a voluntary basis during an evening session. The students were not instructed to study or prepare for the activity, and there was no evidence that special preparation had occurred. While the student population for each part was relatively small (between 10 and 15), each represented a good cross-section of the various departmental majors. Subjects were encouraged to give as much written information as possible in their responses, including on the true/false and multiple choice items. Responses proved to be quite authentic and devoid of flippant, inappropriate comments. The subjects gave the instruments serious consideration and gave fairly complete responses.
The responses were evaluated by the investigators independently, using a scoring rubric developed for this purpose. This rubric is presented in Table 1. Differences in interpretation were resolved jointly through discussion and additional consideration of the responses. However, we did not formally address the issues of inter-rater reliability and statistical item analysis.
Table 1: Scoring Rubric
| score | criteria |
|-------|----------|
|       | conceptual understanding apparent; consistent notation, with only an occasional error; logical formulation; complete or near-complete solution/response. |
|       | conceptual understanding only adequate; careless mathematical errors present (algebra, arithmetic, for example); some logical steps lacking; incomplete solution/response. |
|       | conceptual understanding not adequate; procedural errors; logical or relational steps missing; poor response or no response to the question posed. |
|       | does not attempt problem or conceptual understanding totally lacking. |
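Although inter-rater reliability was not formally addressed, agreement between two raters using a rubric like the one above could be quantified with Cohen's kappa. The following Python sketch is illustrative only; the scores shown are hypothetical and assume a four-level (0-3) numeric scale for the rubric.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of responses both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's score marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[k] / n) * (freq_b[k] / n)
              for k in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (0-3) assigned by two raters to ten responses.
rater_1 = [3, 2, 2, 1, 0, 3, 2, 1, 1, 0]
rater_2 = [3, 2, 1, 1, 0, 3, 2, 2, 1, 0]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```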
Findings
Data collected suggested that subjects were most comfortable with standard items from the first portion of the Calculus sequence: functions, limits, and derivatives. Subjects performed marginally better on nonstandard items than on standard items in the subsequent Calculus topics: integration, sequences, and series. In fact, considering the frequencies associated with the Calculus items gave a clearer picture of subject responses. Of the items attempted by the subjects, slightly more than half of the responses were minimally acceptable. The remainder of the responses did not reflect adequate conceptual understanding. About two-thirds of those giving acceptable responses presented complete or near-complete responses. The data suggested that most inaccurate responses were due not to carelessness, but rather to incomplete or inaccurate conceptualization.
For the non-Calculus items, subjects demonstrated great variability in response patterns. A few subjects returned blank or nearly blank forms, while others attempted a few scattered items. When only a few items were attempted, subjects tended to respond to nonstandard items more often than to standard items. It appeared that subjects were more willing to attempt freshly posed, nonstandard items than familiar but inaccessible standard items. A few subjects attempted most or all of the items. Of the non-Calculus items attempted by the subjects, about half of the responses were minimally acceptable.
Subsequently, revised (shortened) instruments were administered to a second set of approximately 15 subjects in Spring 1996. The resulting data suggested that the same basic information could be obtained using instruments with fewer items (12 to 15 questions, balanced between standard and nonstandard items) and a 60-minute examination period. Response patterns on both the original and revised instruments showed that significant numbers of subjects did not explain their answers even when specifically asked to do so. Judging from the students' responses, attitudinal factors may have interfered with the assessment process. It was perplexing to the investigators that departmental majors appear to possess some of the same undesirable attitudes toward mathematics that cause learning interference in non-majors. We are in the midst of a pre/post administration of a Likert-type instrument developed to assess our majors' attitudes toward mathematics.
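As a rough sketch of how pre/post data from such a Likert-type instrument might be scored (the five-item instrument, the reverse-coded item positions, and the responses below are hypothetical, not the department's actual instrument):

```python
# Hypothetical scoring of a Likert-type attitude instrument (1-5 scale).
# Negatively worded items are reverse-coded so that a higher score always
# indicates a more positive attitude toward mathematics.
REVERSE_CODED = {1, 3}  # indices of hypothetical negatively worded items

def score_survey(responses, scale_max=5):
    adjusted = [(scale_max + 1 - r) if i in REVERSE_CODED else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# One hypothetical student's pre- and post-course responses to five items.
pre  = [2, 4, 3, 5, 2]
post = [4, 2, 4, 2, 4]
print(f"pre mean = {score_survey(pre):.2f}, post mean = {score_survey(post):.2f}")
```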
It should be noted that all core courses during the period of assessment were taught by standard methods. Because course syllabi did not require a technology component at the time, the degree of use of technology such as graphing calculators or computer software varied widely among instructors and courses.
Use of Findings
Our students need experience pushing beyond rote skills toward a more mature perspective on the subject. To help our students develop better discussion and analysis skills, we began restructuring the core courses in 1994. We expect these changes to introduce students to more nonstandard material and to nonstandard approaches to problem solving. These changes should lead to stronger problem-solving abilities than are currently apparent in our students.
Restructured versions of our Calculus, Discrete Mathematics, and Algebraic Structures courses came online in Fall 1996. Our Calculus sequence has been restructured and a new text selected so that the organization of the course topics and their interface with other courses such as Linear Algebra is more efficient and timely. More discretionary time has been set aside for individual instructors to schedule collaborative laboratory investigations, principally using Matlab and Mathematica. The extant Discrete Mathematics course has been restructured as a Discrete Systems course and expanded to include logic, set theory, combinatorics, graph theory, and the development of number systems. The Algebraic Structures course now builds on themes introduced in the Discrete Systems course. A restructured Linear Algebra course with integrated applications using graphing calculators or computer software is in progress. We anticipate that these curricular refinements will be evaluated during the 1998-99 academic year.
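As an illustration of the kind of short laboratory investigation an instructor might schedule (the department's labs use Matlab and Mathematica; this Python sketch, with an assumed example integrand, is only a stand-in), students might compare Riemann-sum approximations against a known integral:

```python
def riemann_sum(f, a, b, n, rule="midpoint"):
    """Approximate the integral of f on [a, b] using n equal subintervals."""
    h = (b - a) / n
    if rule == "left":
        points = [a + i * h for i in range(n)]
    elif rule == "midpoint":
        points = [a + (i + 0.5) * h for i in range(n)]
    else:
        raise ValueError("rule must be 'left' or 'midpoint'")
    return h * sum(f(x) for x in points)

# Investigation: how quickly does each rule converge for x^2 on [0, 1]?
exact = 1 / 3  # integral of x^2 from 0 to 1
for n in (10, 100, 1000):
    left = riemann_sum(lambda x: x ** 2, 0, 1, n, "left")
    mid = riemann_sum(lambda x: x ** 2, 0, 1, n, "midpoint")
    print(f"n={n:5d}  left error={abs(left - exact):.2e}  "
          f"midpoint error={abs(mid - exact):.2e}")
```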
Success Factors
This project forced us to grapple with two questions: "What do we really expect of our graduates?" and "Are these expectations reasonable in light of current curricula and texts?" This concrete approach to these questions helped us to focus on our programmatic goals, present these expectations to our students, and gauge our effectiveness to this end. The project has guided us to redefine our curriculum in a way that will better serve our graduates' foreseeable needs. As their professional needs change, our curriculum will need continued evaluation and refinement.
Developing a custom instrument can be time-intensive and requires university support and commitment. Proper sampling procedures and a larger number of student subjects should be used if valid statistical analyses are desired.