Bloom's Taxonomy in Developing Assessment Items

Author(s): 
Draga Vidakovic, Jean Bevis, and Margo Alexander

The end of the twentieth century was marked by a change in education systems at K-12 levels across the country and abroad. Our government agencies at every level have initiated efforts to improve the education offered to all students by holding schools, teachers, and students accountable for academic learning and achievement (Rothman, 1995). In the field of mathematics education, concerns about curricula, instructional practices, and levels of student achievement led to publication of standards for curriculum and evaluation, teaching, and assessment by the National Council of Teachers of Mathematics (1989, 1991, 1995, 2000). These documents outline a vision for mathematics education that shifts away from the computation-laden curriculum of the past and toward a challenging, concept-driven curriculum that empowers students to solve problems and reason logically. The standards articulate five general goals for all students (NCTM, 1989, p. 5):

  1. learn to value mathematics;
  2. become confident in their ability to do mathematics;
  3. become mathematical problem solvers;
  4. learn to communicate mathematically; and
  5. learn to reason mathematically.

Draga Vidakovic, Jean Bevis, and Margo Alexander are in the Department of Mathematics and Statistics at Georgia State University.

Starting with the ‘Calculus Reform’ efforts of the late 1980s and the 1990s, educational attention has broadened to include the development of standards for intellectual development, content, pedagogy, and assessment beyond the K-12 level. These standards will provide college and university systems and individual instructors with a focus for their efforts to assist students in reaching their academic potential.

Whether the accountability systems tied to standards will result in better instruction and academic success for all learners remains open to question. There are indications of positive outcomes of the reform effort, but we need much more evidence before we call it a success. Nevertheless, effects of the new accountability systems include increased emphasis on the importance of student assessment and an interest in integrating the processes of teaching, learning, and assessment. Pre-reform systems focused on instruction -- and educators perceived assessment as a necessary evil -- but the push for standards and for tests aligned with those standards has placed assessment in a prominent position in the instructional process. Instructors are more often focusing on students’ learning and conceptual understanding and are asking themselves early and often, "How will I know if students know, understand, and are able to apply the content of this discipline?"

In this paper we describe the development of an initial database for online formative assessment to be used as independent or supplemental material for a precalculus course. We developed our questions in WebCT, a system for easy online course management, using Bloom’s Taxonomy as their framework. This is an ongoing process, so you will not find a wide spectrum of questions in our database at this point. We believe that what we have so far is worth sharing, and we expect publication of our work to initiate collaboration with faculty from other institutions. Our intention in the development so far has been to emphasize the use of assessment items as "reflective tools" for students and as informative tools for instruction, not as "testing tools" -- we will explain the difference later. In the following sections we describe the course and the online assessment tools, with special focus on the types of questions used in the assessment.

As you review our progress on the database, please keep in mind the Nine Principles of Good Practice for Assessing Student Learning (American Association for Higher Education, undated):

  1. The assessment of student learning begins with educational values.

  2. Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.

  3. Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.

  4. Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes.

  5. Assessment works best when it is ongoing and not episodic.

  6. Assessment fosters wider improvement when representatives from across the educational community are involved.

  7. Assessment makes a difference when it begins with issues of its use and illuminates questions that people really care about.

  8. Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change.

  9. Through assessment, educators meet responsibilities to students and to the public.

We are currently involved in two national projects established with the goal of helping university instructors make the transition toward standards-based teaching and learning. One of these is Quality in Undergraduate Education (QUE), a national project of faculty at selected four-year public institutions and their partners at two-year colleges, who are working together to establish content standards and develop aligned assessments to guide undergraduate education. Projects in many states, supported by the Education Trust under the P-16 umbrella, use standards-based teaching and learning as a tool for integrating grade school and college curricula. In Georgia, we are involved in PACTS (Performance Assessment for Colleges and Technical Schools), a project to develop free-response assessment items that may be used in college admission decisions.

Published September, 2003
© 2003 by  Draga Vidakovic, Jean Bevis, and Margo Alexander

Bloom's Taxonomy in Developing Assessment Items - Course Content

The catalog description of the precalculus course we teach at Georgia State lists the content areas: trigonometric functions, identities, inverses, and equations; vectors; polar coordinates; conic sections. The current textbook is Barnett, Ziegler, and Byleen (2001).

The content standards (CS) developed by our department and used as guidelines for teaching the course are organized into eight groups:

  • CS1. Quantitative Reasoning
  • CS2. Abstract and Algebraic Functions
  • CS3. Defining the Trigonometric Functions
  • CS4. Use of the Trigonometric Functions
  • CS5. Mathematical Proofs
  • CS6. Analytic Geometry
  • CS7. Vectors
  • CS8. Applications

Each CS contains detailed requirements regarding the acquisition of knowledge and skills that students are expected to meet upon the completion of the course. These standards move beyond simple computations to understanding and relating concepts. When possible, problems are based on the use of multiple representations. For example, consider the problem of conversion between rectangular and polar coordinates for a point P. Instead of using only algebraic formulas, an item may encourage or even require students to also use a diagram with rectangular and polar coordinate systems overlaid, as shown in Figure 1. After plotting the point in one system, the coordinates can be read from the other.


Figure 1: Multiple representations
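The algebraic side of this conversion can be sketched in a few lines of Python. This is a minimal illustration using the standard formulas (r = sqrt(x² + y²), θ = atan2(y, x), and x = r cos θ, y = r sin θ); the sample point P(1, 1) is our own hypothetical example, not one of the course items:

```python
import math

def to_polar(x, y):
    """Rectangular (x, y) -> polar (r, theta), theta in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def to_rect(r, theta):
    """Polar (r, theta) -> rectangular (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

# A point on the 45-degree ray: P(1, 1) has r = sqrt(2) and theta = 45 degrees.
r, theta = to_polar(1, 1)
print(round(r, 3), round(math.degrees(theta), 1))   # 1.414 45.0
```

Plotting the point on the overlaid diagram of Figure 1 and reading off the other set of coordinates should agree with these formulas.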

The concepts and problems of a precalculus course can be very different from those of the prerequisite algebra courses. In preceding courses, students may have seen computation and algebraic manipulations emphasized. By contrast, this course should be highly visual. Our materials emphasize the geometry of triangles and circles. Properties of functions such as increasing, slope, asymptotes, foci, vertex, directrix, periodicity, and amplitude are illustrated through their graphs. In addition, many concepts are understood, explored, and manipulated using diagrams. This includes diagrams for winding functions, transformations of graphs, transformations of coordinate systems, vector addition, and vector resolution.

In the next section we describe the course design and organization of the WebCT-based Precalculus as taught by four faculty in our department. We refer to these classes as the "experimental sections."

Bloom's Taxonomy in Developing Assessment Items - Course Organization/Design

Our precalculus course carries three semester-hours credit. Classes meet for 150 minutes per week in two or three meetings per week for 15 weeks. In the WebCT-based Precalculus sections, a complete set of course materials is available online, including syllabus, schedule, assignments, content/lessons, quizzes, and tests. In addition to providing links to these materials and tools for tracking student performance on quizzes, WebCT provides communication tools such as discussion boards, chat rooms, and e-mail. Students in these sections may complete the course entirely in class, entirely online, or through a combination of classes and online work. Currently, all students complete online quizzes, and most of them come to class on a regular basis.

The course materials are organized into approximately 35 lessons and are available both online and in print. An online lesson contains various questions and tasks for students to work on along with their reading. They may check their answers in a pop-up window, which has the advantage of giving every online student an opportunity to "reply," in contrast to a classroom situation in which only a few students may be participating. Typically, lessons may be shortened and made more interesting to some students by asking questions such as "Would you like another problem?" or "Would you like more explanation?". Students choose whether to answer these questions -- the additional material is provided in pop-up windows for those who request it. Java applets provide additional interactivity and animated illustration of concepts.

There are online quizzes for each lesson. These comprise a significant online aspect of the course. While they are only a minor part of the course grade, they provide major benefits, including immediate feedback on student progress to both the student and the instructor. Pedagogical values of the quizzes include reinforcement and engagement for concepts recently covered in the corresponding lesson. Administered after the small-group work in the classroom, they give additional focus for students’ interaction during the following class period. The challenge to instructors is to develop quiz questions that will engage the students most effectively.

As developers, we also use a technique of "cycling" to reinforce concepts. Whenever possible, topics, diagrams, methods, and computations are revisited. But each time they are approached in a different setting or from a different direction. For example, variations in Figure 1 (repeated below) are used for winding functions, polar coordinate systems, vector components, relating arc length to central angle, and applications involving clock faces, circular tracks, and Ferris wheels.


Figure 1 (repeated): Multiple representations
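The relation between arc length and central angle (s = rθ) that underlies several of these variations can be sketched in Python. The minute-hand parametrization below is only our illustrative guess at the kind of clock-face model such applications use (hand pointing straight up at t = 0, turning clockwise through one revolution per 60 minutes); the actual course items may set it up differently:

```python
import math

def wind(t, radius=1.0):
    """Point reached by wrapping an arc of signed length t
    counterclockwise around a circle of the given radius,
    starting from (radius, 0).  Uses s = r * theta."""
    theta = t / radius                      # central angle from arc length
    return radius * math.cos(theta), radius * math.sin(theta)

def minute_hand_tip(t_minutes, length=1.0):
    """Assumed setup: the hand points straight up at t = 0 and
    turns clockwise through a full revolution every 60 minutes."""
    theta = math.pi / 2 - 2 * math.pi * t_minutes / 60
    return length * math.cos(theta), length * math.sin(theta)

x, y = wind(math.pi / 2)          # quarter turn on the unit circle
print(round(x, 6), round(y, 6))   # 0.0 1.0
x, y = minute_hand_tip(15)        # quarter past: hand points along +x
print(round(x, 6), round(y, 6))   # 1.0 0.0
```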

Bloom's Taxonomy in Developing Assessment Items - Assessment

In developing a WebCT-based Precalculus course, we gave an important place to assessment of student learning and understanding. WebCT has a friendly "Quiz" environment -- a program for generating online quizzes, tests, and surveys that students can take on their own computers, on a campus computer network, or anywhere else with Internet access. Features of the WebCT Quiz program include

  • developing and using a large database of questions;
  • randomly selecting or ordering questions from the database;
  • timing or not timing tests;
  • allowing or not allowing tests to be re-taken;
  • restricting test access to specific students.

The program offers immediate score feedback to the student, with a "flag" feature showing the status of each question (answered, unanswered). A report on individual or class performance can be downloaded by the instructor and converted to Microsoft Excel.
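WebCT's internals are not public, so the following Python sketch is purely illustrative of two of the listed features: drawing a random selection of questions from a database and giving immediate score feedback. The question bank, record layout, and function names are all our own invention:

```python
import random

# Hypothetical question records: (stem, choices, index of the correct choice).
BANK = [
    ("sin(pi/6) = ?", ["1/2", "sqrt(3)/2", "1", "0"], 0),
    ("cos(0) = ?",    ["0", "1", "-1", "1/2"],        1),
    ("tan(pi/4) = ?", ["0", "1/2", "1", "sqrt(2)"],   2),
]

def draw_quiz(bank, n, seed=None):
    """Randomly select n questions from the bank."""
    return random.Random(seed).sample(bank, n)

def score(quiz, answers):
    """Immediate feedback: the fraction of answers that match the key."""
    right = sum(1 for (_stem, _choices, key), a in zip(quiz, answers) if a == key)
    return right / len(quiz)

quiz = draw_quiz(BANK, 2, seed=7)
print(score(quiz, [q[2] for q in quiz]))   # answering with the key scores 1.0
```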

Among various assessment tools, traditional formats play important roles. For example, if the teacher's goal is to evaluate students' recollection of facts, understanding of algorithmic processes such as mathematical computation, or ability to retell a story in a written narrative, then traditional assessments are most appropriate (Resnick and Resnick, 1996). In these cases, traditional assessment formats might have greater validity because of the opportunity to sample student learning with more items and a broader range of questions than alternative assessments allow.

We agree with Kulm (1990) in believing that assessment should be a continuous, ongoing process that involves examining and observing students’ behaviors and developing questions to promote their conceptual understanding. When assessment is integrated with instruction, it informs teachers about which activities and assignments are most useful and what level of teaching is most appropriate (Shepard, 2000; Black and Wiliam, 1998). For instance, during instruction, informal and formative assessment helps teachers know when to move on, when to ask more questions, when to give more examples, and which responses to student questions are most appropriate.

We developed online assessment items with two objectives: to inform and guide students’ learning, and to inform and guide our teaching practice. Our intention was to minimize the use of online assessment for the purpose of computing a "grade." For that reason, students' performances on the online assessments count for only a small percentage of their final course grades. Rather, we use this information as a vehicle to empower our students to be self-reflective learners who monitor and evaluate their own progress. This claim is supported by numerous anecdotal reports from class meetings following the quizzes, by e-mail dialogs between instructor and students, by conversations in office hours, and by questions on the WebCT bulletin board. Additionally, a doctoral student in mathematics education is currently studying aspects of WebCT that support students' active, self-reflective learning. Her preliminary findings provide evidence that students in the experimental sections are reflective learners. At the same time, we use information obtained through online assessments to evaluate our teaching strategies and to modify and develop our curriculum and classroom activities. This kind of assessment is known as formative assessment (Black and Wiliam, 1998).

Guiding assumptions for the changes in assessment procedures contrast "instruction focused on learning to name concepts and follow specific procedures" with higher order thinking, which is non-algorithmic and complex (Assessment Reform Group, 1999). Traditional assessment items emphasize computing a numerical result or simplifying an algebraic expression. Often the alternatives in multiple-choice and matching questions are simply numbers or algebraic expressions reflecting this emphasis. We wished to continue using these question formats since, among other advantages, they are easily graded by an automated process. To move beyond the previous emphasis on computation, we made every effort to design questions in these formats with varied lists of alternatives (Angelo and Cross, 1993). As a result, we have questions in which the list of alternatives consists of parts of various diagrams or graphs, parts of a proof, or justifications of steps of a proof. The following sections will focus on the types of cognitive tasks we used as a guide when creating multiple-choice, matching, short answer, and essay types of questions in our online testing tools.

The NCTM Standards emphasize the importance of comprehensive and continuous assessment of students’ learning. They also emphasize the importance of "essay" (or free response) questions, without neglecting the advantages of multiple-choice, true/false, short answer, and matching formats. We believe that providing students with the opportunity to gauge how much they understood from each lesson may serve as a guide for their studies. The WebCT quizzes fulfill this role and also supplement other testing and assessment tools. So far, there are 35 WebCT quizzes (one for every lesson), two WebCT tests, and one WebCT survey.

Bloom's Taxonomy in Developing Assessment Items - A Framework for Developing Online Assessment Items

We utilize Bloom's taxonomy (Krathwohl, Bloom, and Masia, 1964; see also Krumme, 2001) as a framework for developing online assessment questions. The framework is based on a hierarchy of thought processes and consists of six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation.

Each category requires more complex thinking than the one preceding it and also builds on or incorporates the preceding types of thought in order to proceed to the "higher" levels. Assessment items developed using this framework should include a range of levels and thinking processes, with the majority of them accessing a higher level of thinking. Thus, in developing questions, we keep in mind that we want students to think, make connections, question the information included in the problem, process the information, and reflect on their answers.
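One way to keep track of the range of levels in a question database is to tag each item with the highest Bloom level it demands. The following Python sketch is our own hypothetical bookkeeping, not a WebCT feature; the item records are invented for illustration:

```python
# Bloom's six cognitive levels, lowest to highest.
LEVELS = ["knowledge", "comprehension", "application",
          "analysis", "synthesis", "evaluation"]

# Invented item records, each tagged with the highest level it demands.
items = [
    {"stem": "State the definition of a periodic function.", "level": "knowledge"},
    {"stem": "Convert the point P to polar coordinates.",    "level": "application"},
    {"stem": "Justify each step of an identity proof.",      "level": "evaluation"},
]

def at_or_above(items, level):
    """Items whose tagged level is at least the given Bloom level."""
    cutoff = LEVELS.index(level)
    return [it for it in items if LEVELS.index(it["level"]) >= cutoff]

print(len(at_or_above(items, "application")))   # 2 of the 3 sample items
```

A tally like this makes it easy to check whether the majority of a quiz's items reach the higher levels.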

Depending on their background and motivation, individual students may approach a problem at different levels of Bloom's hierarchy. This is more likely true of students at different colleges, with different instructors, using different texts, than of students in the same class using the same text. Please keep this in mind as we illustrate how these levels apply to typical students in our classes.  When you think about these items in the context of your own students, you may well disagree with our classification. More importantly, we find that we can use Bloom's taxonomy as a guide to help us construct more stimulating assessment items. We believe that other instructors can do the same.

Our pedagogical goals are to:

  1. encourage students' thought processes to move from simple to complex;

  2. generate cognitive conflicts;

  3. foster a sense of student-student and student-teacher interactions; and

  4. help students draw connections to their own mathematical experiences.

Bloom's Taxonomy in Developing Assessment Items - Sample Assessment Items

Our assessment plan includes development of multiple choice, matching, short answer, and essay types of questions and is based on using the WebCT program. We will describe each type of question by the way we use it, illustrate it with examples, and classify it in the highest possible category in Bloom’s taxonomy framework.

Multiple-choice questions. Typically this type of question takes the form of a short question or implied question (the stem) followed by four or five optional answers, with at least one correct answer, and all of the others wrong (the distracters). Our goal was to use this type of question to assess higher levels of thinking. We often found that such questions also require activities at lower levels of Bloom’s taxonomy. Thus, tasks requiring lower-level thinking are used as bridges to higher-level tasks. For example, we use tasks that emphasize memorization as a part of the problem in a multiple-choice question, a part that leads to more comprehensive -- higher level -- activities.

With multiple-choice questions we use the following types of tasks:

  1. Calculation. The purpose of this kind of task is to ensure that students are learning necessary computational skills in addition to conceptual understanding. Example 1 illustrates a question from this category. We assume the student’s approach would be to find that the coordinates of vector u are (1,3) and then to add those coordinates to the coordinates of the given point, P(2,2), obtaining (3,5) as the coordinates of the displaced point [i.e., (1,3) + (2,2) = (3,5)]. Alternatively, the student may solve the problem graphically. Ideally, the student would work the problem both ways and compare the results. By providing an algebraic representation of P and a graphic representation of u, the problem requires the student to convert between the two. Of course, with a given list of choices, the student may not do any calculation at all, but simply pick an answer at random. We hope students are honestly using these quizzes as learning tools, not simply as work required for the course.
  2. Graphic representation. Students are required to use given information and a graphical representation in order to answer the question. In Example 2, students need to use the given graphical representation to identify vectors u and v, use the rule for geometric addition of two vectors, and then find the coordinates of the vector that represents the sum. In this particular case, students may also choose to solve the problem algebraically by adding the two given vectors to obtain the resultant vector, i.e. (3,1) + (-1,2) = (2,3), and then identify the case on the graph with all three vectors. A graphical representation problem with multiple responses (not just a single correct answer) is shown in Example 3.
  3. Algebraic manipulation. Students are expected to use algebraic formulas or factoring polynomials in order to find the answer. In Example 4, students need to factor a polynomial or use the binomial formula to simplify the left side of the equation and then proceed to solve the given trigonometric equation.
  4. Mathematical modeling. Students are expected to translate a word problem into a corresponding mathematical model. Example 5 illustrates a task in which students are expected to recognize and use the concepts of trigonometric functions to express parametrically the position of the free end of the minute hand on a clock. In doing so, they need to relate the given information about the rate of the minute hand to the angle that describes the position of the free end with respect to the positive x-axis.

All five examples of multiple choice questions
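The componentwise arithmetic behind Examples 1 and 2 can be sketched in a few lines of Python (the helper name `add` is ours; the numbers are those quoted above):

```python
def add(u, v):
    """Componentwise addition of two plane vectors."""
    return (u[0] + v[0], u[1] + v[1])

# Example 1: displacing P(2, 2) by u = (1, 3) gives (1, 3) + (2, 2) = (3, 5).
print(add((1, 3), (2, 2)))    # (3, 5)

# Example 2: the resultant of u = (3, 1) and v = (-1, 2) is (2, 3).
print(add((3, 1), (-1, 2)))   # (2, 3)
```

The graphical solutions described above should land on the same coordinates.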

Matching questions. WebCT allows lists of matching items to be of different lengths. All that is required is a correspondence between the first list and a subset of the second list. We use this type of question in the following ways:

  1. Matching unknowns to items from a list of answers, or matching the answers to a list of unknowns. That is, students are given a list of possible answers and required to match them to a list of unknown variables or select NONE if there is no corresponding match. This kind of question may involve calculations, graphing or observation from a given graph, or algebraic manipulations and/or transformations.
  2. Matching the steps constituting a proof of a trigonometric identity to their justification, purpose, or order of appearance.

In these kinds of tasks, students need to identify all items relevant to their task and then match the corresponding items from the lists.

Four examples of matching questions

Short answer questions. This is the type of question we use least in our quizzes, and we use it mainly for short, simple tasks.

Example of a short answer question

Essay questions. The WebCT quiz tool has an option for creating an essay type of question. Our example illustrates a task that incorporates an interactive environment (applet) with student manipulation, observation, and writing. This item is not graded automatically. It has great value for students and instructors in fostering interactivity and developing observational and writing skills. The instructor may leave comments or feedback to the student on an "essay" type of answer, including challenging questions in the context of the student’s answer. This allows the student to access the question a second time to respond to those questions. This cycle could be repeated multiple times. It is also suitable for challenging tasks to be completed in cooperative learning groups.

Example of an essay question

Bloom's Taxonomy in Developing Assessment Items - Discussion, Teaching Implications, and Conclusion

A good understanding of the concepts included in a precalculus course is crucial for building students’ mathematical understanding, confidence, and success in all subsequent undergraduate mathematics courses. We have observed that the majority of students enrolled in our course have weak mathematical backgrounds and low motivation; consequently, the course has been identified as one with a high "drop" rate. We are using WebCT to develop quizzes, tests, and online materials that emphasize conceptual understanding of functions, as well as numeric and algebraic manipulation and algorithms. Typically, students beginning a precalculus course try to solve problems using simple rules without seeking any understanding of the related concepts. Popular rules include (-)(-) = (+), "and means plus," "the slope is the coefficient of x," and similar shortcuts. Such rules may be incomplete or even incorrect, but they are still adopted by students who avoid conceptual understanding. By using tasks at higher levels of Bloom’s taxonomy, we force students to move beyond the uninformed use of such rules. We hope this will help students retain knowledge as well as improve their understanding and attitudes.

We found Bloom’s taxonomy to be a useful framework for developing multiple-choice, short-answer, matching, and essay questions that can involve students in complex cognitive tasks. We emphasize that we classify the task/item in a certain level of Bloom’s taxonomy based on the highest level of cognitive task posed to the student. That is, if the highest level of expectation for the student is to remember some facts, definitions, terminology, symbols, etc., such a task is at the knowledge level. If the student is expected to translate, illustrate, extrapolate, estimate, predict, identify/distinguish, interpret -- without necessarily relating it to other material or seeing its fullest implications -- the task is classified at the comprehension level. A task that requires students to use abstraction and apply it in particular and concrete situations is classified at the application level. When the student is expected to break down information into its constituent parts, considering their relationships and organizational principles, the task represents the analysis level. When the situation/task is opposite to analysis -- the student is expected to put together elements and parts to form a whole -- it is said to be at the synthesis level. And finally, when the student is required to use criteria and judgment to justify something based on internal/external evidence, the task is at the evaluation level.

We use online assessment in our teaching in several ways:

  1. to observe individual student learning and to identify the concepts and issues with which students have difficulty;
  2. to modify old assessment items or to develop new ones and to adapt our teaching strategies to include more discussions on those concepts with which students have difficulty;
  3. as a vehicle for student-teacher interactions.

As an example of the third use, we encourage students to ask questions about quizzes at the beginning of each class period. Very often a class starts with a student statement such as "I have a question about an item on the last quiz." The instructor encourages those students to present the problems or items together with their solutions. Usually more than one student has had trouble with a particular item, which motivates them to listen and participate. Meanwhile, students who were successful with those items engage by sharing their solutions.

We believe that class discussions initiated by students are valuable because the students become active, motivated, and reflective participants in their own learning. In the assessment literature, this kind of assessment is referred to as formative assessment (see, for example, Black and Wiliam, 1998). Assessment is called formative when the information it produces is used as feedback by students about their own understanding (and sometimes skills) and by teachers to inform their practice, e.g., to evaluate and modify teaching strategies or to gauge students’ understandings and misunderstandings throughout the teaching process.

Recommendations from professional organizations encourage teachers to use formative assessment in their classrooms. At the K-12 level, teachers are advised to use already developed and tested assessment items -- items developed or approved by testing agencies. But there is a shortage of databases with appropriate testing items at the undergraduate level. Teaching mathematics at the undergraduate level may be quite variable due to differences in instructor style. However, there is anecdotal evidence that many instructors rely solely on problems drawn from their textbooks.

In developing our assessment items, we were guided by the objectives we set for our students in each lesson. Additionally, we have observed students’ performance on all items and modified the items as necessary. For instance, items that all students answered correctly were either revised to incorporate more complex questions or assigned a different weight. Conversely, items that only a few students answered correctly were modified after class discussions in which we identified the causes. This process contributes to the validity (the degree to which an item measures what it is supposed to measure) and reliability (the consistency of item results) of the items in our database.
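The screening step described above can be sketched as a simple item-difficulty calculation: compute the proportion of students answering each item correctly, then flag items that nearly everyone got right or that very few got right for revision. The thresholds (0.9 and 0.2) and the data are illustrative assumptions, not values from the original text:

```python
# Hedged sketch of the item screening described above, based on the
# classical item difficulty index (proportion of correct responses).

def item_difficulty(responses):
    """responses: list of 0/1 scores for one item across all students."""
    return sum(responses) / len(responses)

def flag_items(score_matrix, too_easy=0.9, too_hard=0.2):
    """score_matrix: {item_id: [0/1 per student]} -> items needing review."""
    flagged = {}
    for item, responses in score_matrix.items():
        p = item_difficulty(responses)
        if p >= too_easy:
            flagged[item] = ("too easy", p)   # revise upward in complexity
        elif p <= too_hard:
            flagged[item] = ("too hard", p)   # discuss in class, then revise
    return flagged

quiz = {"q1": [1, 1, 1, 1, 1], "q2": [1, 0, 1, 0, 1], "q3": [0, 0, 0, 0, 1]}
print(flag_items(quiz))  # q1 flagged as too easy, q3 as too hard
```

In practice such flags would only trigger the human steps the text describes -- rewriting the item or discussing it in class -- rather than replace them.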

It is worth mentioning that the experience of one of the authors on the University System of Georgia Regents Mathematics Test Development Committee -- developing a rising junior examination in mathematics suitable for all University System colleges and universities -- has been useful in developing our items. Additionally, all of the authors are members of the Quality in Undergraduate Education committee for developing standards and assessment for undergraduate mathematics courses.

We started development, implementation, and revision of the online assessment a few semesters ago. The results of implementation, including students’ reactions, have been positive so far. There are indications, based on the attitude pre- and post-tests, that students’ attitudes tend to change over the semester in at least three areas. Specifically, at the end of the semester:

  1. a smaller percentage of students believe that "memorizing is the most important in learning mathematics";
  2. more students believe that they "can learn mathematics"; and
  3. a larger number of students feel comfortable working with computers.

Although these results represent our individual observations in our own classrooms -- they are not the results of a well-designed study -- they are nevertheless of significant value to us. They motivated us to devote numerous hours to developing and revising assessment items. The assessment outcomes have informed us about each individual student’s progress and have helped us reshape our instruction. For example, each of us has been in the situation of revising a planned daily class activity to accommodate class discussion of issues or concepts that appeared on a quiz.

In conclusion, our process of developing and using online course material and assessment items in the WebCT environment has been useful for us, both as authors and instructors, and for our students in the following ways:

  1. We are motivated to set and state clearly the objectives for the course and for each individual lesson.
  2. We can follow closely the progress of each student.
  3. We have immediate feedback on the concepts that give our students difficulty, and we can revise our daily teaching plan to intervene right away.
  4. The system gives our students immediate feedback and opportunity for self-reflection.
  5. The system gives students and instructors additional motivation to interact.

References

American Association for Higher Education (undated). Nine Principles of Good Practice for Assessing Student Learning. Available online at http://www.aahe.org/assessment/principl.htm (accessed 8/15/03).

Angelo, T. A., and K. P. Cross (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Jossey-Bass.

Assessment Reform Group (1999). Assessment for Learning: Beyond the Black Box. Cambridge: University of Cambridge School of Education.

Barnett, R. A., M. R. Ziegler, and K. E. Byleen (2001). Precalculus: Functions and Graphs, Fifth Edition. New York: McGraw-Hill.

Black, P., and D. Wiliam (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80 (2), 139-148.

Krathwohl, D. R., B. S. Bloom, and B. B. Masia (1964). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook II: Affective Domain. New York: David McKay Co., Inc.

Krumme, G. (2001). Major Categories in the Taxonomy of Educational Objectives (Bloom 1956). Available online at http://faculty.washington.edu/krumme/guides/bloom.html (accessed 8/22/03).

Kulm, G. (Ed.) (1990). Assessing Higher Order Thinking in Mathematics. Washington, D.C.: American Association for the Advancement of Science.

Means, B., and K. Olson (1997). Technology and Education Reform. Volume 1: Findings and Conclusions. Studies of Educational Reform. Menlo Park, CA: SRI.

National Council of Teachers of Mathematics. Commission on Standards for School Mathematics (1989). Curriculum and Evaluation Standards for School Mathematics. Reston, VA.: The Council. Available online at http://standards.nctm.org/Previous/CurrEvStds/index.htm (accessed 8/15/03).

National Council of Teachers of Mathematics. Commission on Teaching Standards for School Mathematics (1991). Professional Standards for Teaching Mathematics. Reston, VA: The Council. Available online at http://standards.nctm.org/Previous/ProfStds/index.htm (accessed 8/15/03).

National Council of Teachers of Mathematics (1995). Assessment Standards for School Mathematics. Reston, VA: The Council. Available online at http://standards.nctm.org/Previous/AssStds/index.htm (accessed 8/15/03).

National Council of Teachers of Mathematics (2000). Principles and Standards for School Mathematics. Reston, VA: The Council. Available online at http://standards.nctm.org/document/index.htm (accessed 8/15/03).

Piaget, J. (1963). The Origins of Intelligence in Children. New York: Norton.

Piaget, J., B. Inhelder, and A. Szeminska (1960). The Child's Conception of Geometry (E. A. Lunzer, Trans.). New York: W. W. Norton & Company.

Resnick, D. P., and L. B. Resnick (1996). Performance Assessment and the Multiple Functions of Educational Measurement. In Implementing Performance Assessment: Promises, Problems, and Challenges, edited by M. B. Kane and R. Mitchell. Mahwah, NJ: Erlbaum.

Rothman, R. (1995). Measuring Up: Standards, Assessment, and School Reform. San Francisco: Jossey-Bass.

Shepard, L. A. (2000). The Role of Assessment in a Learning Culture. Paper presented at the Annual Meeting of the American Educational Research Association. Available at http://www.aera.net/pubs/er/arts/29-07/shep01.htm (accessed 8/15/03).