This article presents an administrative look at the ramifications of accommodating various departments' views of how quantitative literacy is to be defined. The issue is: what are the students telling us, and how do we interpret the answers they provide to the questions we've asked? The value of "fuzzy" assessment is discussed through the interpretation of a simple survey that helped move a collective bargaining institution on track.
Background and Purpose
St. Cloud State University is a medium-sized university of about 15,000 students, the largest of the state universities in Minnesota. As a public university within a statewide public university system, it operates with strained resources and constraints on its independence, and serves at the public whim to an extent that is at times unnerving. In 1992 a decision was made to merge the four-year universities with the community college and technical college systems, and the state legislature has been demanding maximal transferability across the public higher education system. This process has developed with minimal attention to the issue of assessment: assessment functions have largely been assigned to individual universities, so that each institution assesses its own transfer curriculum for students moving to other institutions.
St. Cloud State has math-intensive major programs in science, engineering, education and business, but it also has many largely or totally "mathasceptic" major programs in arts, humanities, social science and education. The general education program has until now had no math requirement, only an optional mathematics "track" within the science distribution: students may count one of three non-major mathematics courses as part of the science distribution or (under charitable dispensation!) use two advanced mathematics courses at the calculus level or higher as a substitute. Until recently, admission standards permitted high school students with little grounding in, and less beyond, elementary algebra to enroll. Looking at the national literature on the importance of mathematical "literacy" for career success and citizenship awareness, we felt this state of affairs was problematic.
The challenge of assessing quantitative literacy across a total general education curriculum lies in the amorphousness of the problem. Traditional testing trivializes the problem because a restricted set of testable objectives in mathematics fails to take account of the variety of mathematical needs students have in terms of their goals and expectations. Our purpose in doing assessment here is thus not to validate academic achievement, but to provide a rough overview of what is happening in the curriculum in order to identify general needs for future academic planning.
Rather than use expensive testing methods, we decided to settle for a cheap and simple way of getting some sense of what was going on with regard to general education learning objectives. General education courses can be approved only if they meet three of the following five criteria: basic academic skills (including mathematical skills), awareness of interrelation among disciplines, critical thinking, values awareness, and multicultural/gender awareness. A 1987 accreditation review report had expressed concern that we had not been monitoring the student learning experience, but had settled merely for "supply side" guarantees that were not guarantees of learning at all. To provide more of a guarantee, we established a five-year course revalidation cycle under which all general education courses had to be reviewed and shown to be meeting at least three of the five objectives above.
Method
We developed a simple survey instrument that asked students in all general education classes whether they were aware of opportunities to develop their level of performance or knowledge in these criterion areas: 1) basic academic skills, including mathematics; 2) interdisciplinary awareness; 3) critical thinking; 4) values awareness; and 5) multicultural, gender, and minority-class awareness. The survey asked the students to rate the opportunity for development in each area on a 1-5 scale. The widespread surveying in conjunction with the revalidation process gave us a fuzzy snapshot of how the program was working, with which we could persuade ourselves that change was needed. Such a "rhetorical" approach to assessment may seem questionable to mathematicians, but I would argue that such techniques have their value when used formatively.
Institutional assessment at St. Cloud State has always been problematic. The unionized faculty have been extremely sensitive to the dangers of administrative control, and to the need for faculty control over the learning process. This sensitivity has translated into broad faculty ownership of assessment processes. Thus, the changes in the general education program developing out of the assessment process have been very conservative, maximizing continuity. The faculty have also claimed the right to be involved in the design and approval of even so simple a thing as a student survey whose already determined purpose was to ask whether already defined general education objectives were being emphasized in a particular class. The assessment committee and assessment coordinator have authority only to make recommendations for the faculty senate to approve. While this governance process is restrictive, it permits a fairly broad degree of faculty ownership that creates a positive atmosphere for assessment operations.
We wanted to use the survey instrument in a way that would provide feedback to the instructor, to the department for course revalidation, and to the general education program to help trace how well our program criteria were operating. The surveying system has worked as follows. When we began the process in 1992, we received approval for a general survey of all general education classes taught during the spring quarter. Because many lower-level students are enrolled in several general education classes, this meant we would need to be able to process something like 20,000 forms. We developed a form with questions based on the program criteria, had the students fill it out, and then had their responses coded onto computer-scannable forms. These forms were processed by our administrative computing center, and data reports were returned to the appropriate departments for their information. Thereafter, the survey was available for use as departments prepared to have their general education courses revalidated for the next five-year cycle. A requirement for revalidation was that survey data be reported and reflected upon.
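For readers who want to picture the data handling, the reporting step amounts to simple aggregation. Below is a minimal sketch in Python with pandas; the column names and the one-row-per-form layout are assumptions of mine, since the article does not describe the actual scannable-form fields or the computing center's software.

```python
import pandas as pd

# Hypothetical layout: one row per completed survey form. Column names are
# assumptions; the article does not specify the actual form fields.
CRITERIA = ["basic_skills_math", "interdisciplinary", "critical_thinking",
            "values_awareness", "multicultural_gender"]

def per_course_means(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the 1-5 ratings on each criterion, course by course."""
    return responses.groupby(["department", "course"])[CRITERIA].mean().round(2)

def department_report(course_means: pd.DataFrame, dept: str) -> pd.DataFrame:
    """Slice out one department's courses for its revalidation packet."""
    return course_means.loc[dept]
```

A department's data report is then just its slice of the per-course table, which is roughly what the revalidation process asks faculty to reflect upon.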
At the end of the first year, a data report of the whole program was used to identify criteria that were problematic. In 1996, as part of a program review for converting from a quarter calendar to a semester calendar, we reviewed and revised the program criteria, and created a new survey that reflected the new criteria. Again, we requested and gained approval for a broad surveying of all general education courses. This new survey has proved a more varied and useful instrument for assessing general education. However, it remains a student-perception instrument of fairly weak validity and reliability, not an instrument for measuring academic achievement. Even so, it has proven useful, as the following focus on the quantitative literacy criterion illustrates.
Findings
Figure 1 is a graph of the results from the first round of surveying, 1992 through 1995. It shows averages for classes in each of the program areas: communication, natural science distribution, social science distribution, arts and humanities distribution, and the electives area (a two-course block that allows students to explore possible majors in more pre-professional fields). What the picture showed was that very little mathematics was happening anywhere in the general education program. In fact, we found only 3 classes in the whole program that "scored" at a mean level over 3.5 on a scale of 1 to 5 (with 5 high). The courses in the science distribution showed the strongest mathematical content; the social science courses showed the weakest.
We found it easy, of course, to identify partial explanations: the serious mathematics courses with large enrollments were not in the general education program, the question was framed narrowly in terms of mathematical calculation rather than quantitative thinking, and so forth. But none of that got around the fact that the students were telling us that they were aware of having precious little mathematical experience in the general education program. The effect was to put quantitative literacy in a spotlight as we began monitoring the program.
Use of Findings
The information provided by the graph was widely discussed on campus. Our concern about mathematical awareness in general education became a major issue when the Minnesota legislature decided to require that all public universities and colleges convert from a quarter to a semester calendar. This required a reshaping of the general education program and, in particular, reestablishing a quantitative literacy requirement as part of a general education core. In addition, the program criteria have been revised to provide a stronger basis for program assessment. The "mathematical component," rephrased as the "quantitative literacy" component, has been redefined in terms of quantitative and formal thinking, and additional basic-skills criteria have been added relating to computer experience and science laboratory experience.

Results from the new survey, first used in the winter of 1996, indicate that students are seeing more mathematics content everywhere (see the Quantitative Thinking results in Figure 2), but particularly in the social science distribution block, as understanding data and numerical analyses become defining criteria. We have tried a small pilot of a quantitative literacy assessment of upper-division classes across the curriculum using a test, admission data, mathematics course grades, and a survey. We found no significant correlations, probably an indication of the roughness of the testing instruments, but we did get a strong indication that the general education level math courses were not effectively communicating the importance of mathematical knowledge and skills to success in the workplace.
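The pilot's correlation check is easy to picture in code. The sketch below assumes one row of measures per student; the column names, scales, and sample values are all illustrative assumptions, since the article does not report the pilot's actual instruments or data.

```python
import pandas as pd

# Illustrative records only; the article does not report the pilot's actual
# measures, scales, or sample. One row per student in the pilot.
pilot = pd.DataFrame({
    "ql_test_score":   [62, 71, 55, 80, 67, 58],  # hypothetical QL test result
    "admission_math":  [19, 24, 17, 27, 21, 20],  # e.g., an ACT math subscore
    "math_course_gpa": [2.3, 3.1, 2.0, 3.7, 2.7, 2.4],
    "survey_rating":   [3, 4, 2, 5, 3, 2],        # 1-5 perceived QL emphasis
})

# Pairwise Pearson correlations. With rough instruments and a small pilot,
# coefficients of this kind can easily fail to reach significance.
print(pilot.corr(method="pearson").round(2))
```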
We will continue the surveying process, anticipating that as all students work through the new semester quantitative thinking requirement, they will be able to deal with more sophisticated numerical analyses in other general education courses. This improvement should be visible in individual courses, in the course revalidation review process, and in the program as a whole through the survey results. Since our goal under the North Central Association assessment requirement is to be able to document the improvement of student learning in the program, we look forward to such results.
Success Factors
The first success factor has been the way a simple survey has generated meaningful discussion of the definitions and learning expectations behind the general education criteria. Whereas course evaluation has tended to generate anxiety and obfuscation in reporting, the survey has provided a vehicle for productive collegial discussion within the faculty at different levels. For example, the revalidation process requires that faculty justify the continuation of a particular social science course using survey results. Suppose the math result for that course was 3.8 out of 5, with a large variance and 25% of the students indicating that this goal seemed not applicable to the course. That might reflect the extent to which understanding data analyses and measuring social change is basic to the course, but it certainly leads to some thinking about just how quantitative-thinking expectations for a course are to be defined, and just where the numbers are coming from. This would then be discussed in the application for revalidation, and decisions would probably be made to include some alternative reading and perhaps some work with a statistical analysis package. Admittedly, quantitative literacy here is loosely defined by non-mathematicians, but once it is defined, however loosely, it becomes discussible at the program level. Part of the five-year program review involves reviewing overall survey results and course descriptions to see what "quantitative thinking" means in various departments and how it is taught across the program curriculum.
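To make the hypothetical concrete: summary figures of the kind quoted above (a mean near 3.8, a sizable variance, a 25% "not applicable" share) fall out of a few lines of arithmetic. The sketch below assumes a coding scheme in which 0 marks "not applicable"; both that coding and the twelve sample responses are invented for illustration, not taken from the survey.

```python
import statistics

# Hypothetical coding: ratings 1-5, with 0 standing in for "not applicable."
# These twelve responses are invented to approximate the figures in the text.
raw = [4, 5, 0, 3, 5, 0, 4, 2, 5, 0, 4, 3]

rated = [r for r in raw if r != 0]
na_share = (len(raw) - len(rated)) / len(raw)

print(f"mean = {statistics.mean(rated):.2f}")          # ~3.9, cf. the 3.8 above
print(f"variance = {statistics.variance(rated):.2f}")  # sample variance
print(f"'not applicable' = {na_share:.0%}")            # 25%, as in the example
```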
A second success factor is the current framework of institutional and system expectations concerning assessment. Our faculty have been aware that assessment reporting will be required in the long term to justify program expenditures and that, to meet accreditation expectations, assessment reporting needs to include data on academic achievement. Student learning data are thus required in the second revalidation cycle, and survey results have been providing student input that helps define critical assessment questions. In the case of the hypothetical social science course, the survey score is something of an anomaly: in spite of the emphasis in the class on the use of data to describe and evaluate social realities, half of the 75% who even thought mathematical thinking was relevant to the course (that is, over a third of all respondents) rated the emphasis from moderate to none. However "soft" that result is from a validity perspective, it provides something of a shock that would motivate faculty interest in emphasizing mathematical thinking and in tracking student ability and awareness of mathematical reasoning in tests, papers, and other class activities. In this sense, the survey has served as a crude engine for moving us along a track we are already on.
Editor's Note
When asked if the numbers could be used to present more specific results, Philip Keith replies, no, because, "We are dealing with a huge program that has been flying in the dark with pillows, and the point is to give questions like 'why did the students give this score to this item and that to that' some valid interpretation. For example, students in Geography XXX feel that writing is more important than other students do in English Composition. How is this possible? Well, this Geography class serves a small band of upperclassmen who work continually on research reports which count toward their major in chemistry and meteorology. In other words, the numbers don't determine or judge anything, they point! When they are counterintuitive, then you look for explanations. And that's a useful way to get into assessment in murky areas."
[Figure 1: Average survey ratings of mathematics emphasis by general education program area, 1992-1995.]

[Figure 2: Quantitative Thinking ratings from the revised survey, winter 1996 onward.]