Developing a Departmental Assessment Program: North Dakota State University Mathematics

William O. Martin, Doğan Çömez

Abstract. This article describes the process used to develop assessment in the mathematics programs at North Dakota State University (NDSU). The Mathematics Department has developed a comprehensive assessment process that examines student learning in (a) service courses, (b) the major, and (c) the master's and Ph.D. programs. The most ambitious component,
established with external funding, examines the introductory mathematics
courses in conjunction with the NDSU general education program. Assessment of
the undergraduate and graduate programs involves many of the Department’s
faculty. All components of the project are designed to minimize extra demands
on participants, to provide useful information for participants as well as the
Mathematics Department and University, and to focus on assessment as an
integrated part of departmental activities rather than an “add-on” activity
done primarily for external purposes.

1. Context and Setting.

North Dakota State University is a land-grant, Doctoral I research university, and is the top institution in the state
for graduating agriculture, engineering, mathematics and science students with
baccalaureate through doctorate degrees. In Fall 2002 NDSU enrolled 9,874 undergraduate students and 1,272 graduate students. The average ACT composite score of entering students (Fall 1997) was 23.1 (the national average was 21.0). The average student-to-teacher ratio is 19 to 1. Most of the
classes specifically relating to the majors typically have fewer than 25
students, and mostly research faculty with terminal degrees teach those
courses. The normal teaching load for research faculty is four courses per
year. The Department of
Mathematics at NDSU offers BS (mathematics and secondary mathematics
education), MA, and PhD degrees. The Department also has a major service role
for other science and mathematics-intensive programs in the institution,
particularly in the Colleges of Science and Mathematics, Engineering, Business
Administration, and Pharmacy. The Department offers a broad and balanced
curriculum of courses with 15 tenure-track faculty and about 10 lecturers
(Computer Science and Statistics are separate departments). In Fall 2002 there
were 38 mathematics majors in sophomore-senior standing among 83 undergraduate
majors. Many talented students participate in the EPSCoR-AURA program;
mathematics faculty members frequently supervise the undergraduate research
projects of talented mathematics students. The undergraduate mathematics
major’s degree program culminates with a capstone course, usually completed
during the senior year. The Department, as the largest service department on the
campus, enrolls 300-400 students each in calculus I and calculus II every
semester (taught in a large lecture and recitation format) and 150-300 students
per semester in each
of calculus III and differential
equations (taught in classes of about 35 students). The Department
provides free tutoring services for all 100-300 level mathematics courses,
staffed mostly by graduate students and talented undergraduate mathematics and
mathematics education majors.

2. Assessment Project Goals.

Our goal is to develop and conduct a
comprehensive assessment program to monitor the impact of all of our
instruction on student learning of mathematics. We focus on three components of
our instructional role: (a) Service courses through the first two undergraduate
years, (b) the undergraduate program for mathematics majors, and (c) the
graduate program. The assessment program is designed to involve many
departmental faculty in our activities and to coordinate our departmental
efforts with the work of the University Assessment Committee.

3. Assessment Program Description.

3.1. Development of the program.

Two components of our departmental assessment
activities have been developed separately: (a) a campus-wide quantitative
assessment project focusing on first- and second-year service courses through
multi-variable calculus and differential equations and (b) departmental
assessment of our undergraduate major and graduate programs. The campus-wide
quantitative assessment project uses a model first developed by Martin and
Bauman at the University of Wisconsin-Madison (Bauman and Martin, 1995; Martin,
1996) that originally was funded at NDSU by the Office of Assessment and
Institutional Research. A recent, more extensive implementation occurred with
support from the Bush Foundation of Minneapolis. The departmental
degree program assessment activities were developed to make use of existing
instructional activities, reducing financial costs and time demands on faculty.
Data is obtained from specific courses required of all undergraduate students,
graduate program written and oral examinations, and advisor reports.
Additionally, the Department has developed and begun to implement a peer review
of teaching program, which will provide additional information about
instruction and student learning.

3.2. Departmental service role assessment.

The most ambitious component of our
assessment activities is the quantitative assessment project. Briefly, the
purpose of the project is to gather information about (a) quantitative skills
used in specific beginning upper-division courses and (b) the extent to which
students can show these important skills at the start of the semester.
Instructors play a key role in helping to design free-response tests reflecting
capabilities expected of students from the first week and essential for success
in the course. Two important characteristics of this form of assessment are (a)
direct faculty involvement and (b) close ties to student goals and backgrounds.
We have found that the reflection, contacts and dialogs promoted by this form
of assessment are at least as important as the test results. The process begins
with the selection of beginning upper-division courses across the campus. These
courses are selected either (a) by the Department Assessment Committee or (b) by
the instructors themselves. Course instructors, selected from a range of
departments, identify the specific quantitative skills their students need. The
students are then given a test at the start of the semester designed to
determine whether they have these skills. The tests, given early in the term,
assess the extent to which students possess those quantitative skills that
their instructors (a) identify as essential for survival in the course, (b)
expect students to have from the first day of class, and (c) will not cover
during the course. The tests are intended to be neither “wish lists” nor
comprehensive examinations of the content of prerequisite mathematics courses.

A sample report
for Mathematics 265 (University Calculus III, Spring 2002) is attached as
Appendix A. This report was provided to the course instructors, the Department
of Mathematics, and the University Assessment Committee. The report includes a
copy of the two test versions that were used. For each test we report the success rate on each problem, that is, the proportion of test takers who successfully answered the question (Report, pages 6 ff.). The report also provides (a) information about the performance of students on each test version (Report, page 1), (b) a ranking of problems by their success rates (Report, page 2), and (c) information about the grades students earned in previous mathematics and statistics courses (Report, pages 3 and 4).

Corrected test
papers are returned to students, along with solutions and specific references
for remediation, within one week. Instructors receive information about the
students’ test performance a few days later. Thus, early in the semester both
students and instructors possess useful information about instructor
expectations, student capabilities, and the need for any corrective action. We
do not prescribe any specific action in relation to the test results, leaving
those interpretations and decisions to the course instructor and students. We do indicate where each type of problem is covered in the textbooks used in NDSU mathematics courses, so that instructors and students can review the material if necessary.

We have developed
a reliable grading system that allows mathematics graduate students, after limited training, to quickly record information about the students’ work and
their degree of success on each problem. The coding system provides detailed
data for later analysis while allowing the quick return of corrected papers to
the students. The sample report in Appendix A includes summary comments about
students’ performance on the tests (for example, on the first and second pages
of the report).
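To make the mechanics concrete, the following sketch shows how per-problem letter codes could be tallied into success rates and scoresum totals. This is illustrative code only, not the project's actual tooling: the A-E codes and 0-4 point values follow the sample report in Appendix A, and treating codes A and B as "success" follows the ranking convention described there, but the data are invented.

    # Hypothetical tally of grading codes; the A-E codes and 0-4 point values
    # follow the sample report in Appendix A, but the data here are invented.
    POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}

    # One row per student: letter codes for the eight problems on a test version.
    papers = [
        ["A", "B", "C", "E", "A", "D", "B", "E"],
        ["B", "B", "A", "C", "D", "E", "A", "C"],
        ["C", "A", "B", "B", "E", "D", "C", "A"],
    ]

    def success_rates(papers):
        """Per-problem proportion of students coded A or B (at least essentially correct)."""
        n = len(papers)
        return [sum(codes[i] in ("A", "B") for codes in papers) / n
                for i in range(len(papers[0]))]

    def scoresum(codes):
        """Partial-credit total for one paper: each problem scored 0-4 (E=0 ... A=4)."""
        return sum(POINTS[c] for c in codes)

    print(success_rates(papers))          # e.g. problem 1: 2 of 3 students succeeded
    print([scoresum(p) for p in papers])  # scoresum distribution across papers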
Information of two kinds is generated by this assessment process: (a) a detailed picture of those
quantitative skills needed for upper-division course work in other departments
and (b) an assessment of the quantitative capabilities of emerging juniors
outside the context of specific mathematics courses. The first comes from
personal contacts with faculty members as we design the test and interpret the
results; the second is provided by analysis of students’ performance on the
assessment project tests and their quantitative backgrounds as shown by
university records.

3.3. Mathematics degree programs assessment.

We have also developed a process for assessing learning in the Department's three degree programs: bachelor's, master's, and doctoral. Because we have extensive contact with our majors and graduate students over more extended periods than with students in service courses, a priority was to make
better use of existing data rather than developing new, specialized assessment
instruments. Faculty members reviewed the Department’s instructional
objectives, which had been prepared as part of early assessment activities for
the university, and identified existing opportunities to assess learning in
relation to these stated objectives. We were able to locate evidence related to
all objectives. The evidence was obtained from three main sources: (a) The
undergraduate introductory proof course (Math 270, sophomore level) and our
capstone course (Math 490, senior level); (b) Graduate qualifying and final
examinations; and (c) Graduate student advisors. We developed forms to be
completed by faculty members (a) teaching targeted courses, (b) preparing and
grading departmental examinations, and (c) advising graduate students. Sample
rating forms from each of the three programs are attached to this report in
Appendix D.

3.4. Department Instructional Objectives.

The Department had previously adopted a list
of objectives for student learning in its three degree programs (see Appendix
B). We designed rating forms that list objectives that might be assessed
through observations in a particular context (for example, the master’s comprehensive exam or the capstone course; see Appendix D). Faculty are asked to rate students as fail, pass, or high pass on each outcome. They are then asked to provide descriptive comments on student performance, as shown by the assessment or activity, that support their evaluations and expand on the ratings. These forms are available for faculty members to
complete while they conduct the targeted activities. Faculty ratings and
comments are based on the standard tools of measurement used to assess and evaluate student performance in a class, such as classroom tests, quizzes,
written assignments, and group work reports. The Department has course
descriptions (called TACOs for Time
Autonomous Course Outlines)
for instructors in all undergraduate courses and uses common exams and grading
in most introductory courses. These are designed to help ensure a degree of
uniformity for sections taught by different instructors and from semester to
semester. Completed forms
are returned to the Department Assessment Committee, which analyzes results and
prepares a summary report to the Chair, Graduate and Undergraduate Program
Directors, and the Department. This process has ensured that a large majority
of our Department’s faculty are involved in assessment activities each year. At the same time, the extra demands that assessment makes on individuals are minimized: most faculty are only asked to provide information they obtained for other reasons and to review and react to the summary assessment report. This is
a welcome change for the Chair, in particular, who formerly took responsibility
mostly alone for preparing the annual assessment report for the University
Assessment Committee and university administration.

4. Implementation.

The assessment program is being implemented on an ongoing basis: each year we focus on one or more new courses while continuing to gather data in the courses where assessment began earlier. To illustrate our implementation process, we describe the assessment activities for the academic year 2002-2003.

4.1. Aspect of Program to be Assessed.

We chose to focus this year on the
three-semester engineering-calculus sequence, introductory linear algebra, and
differential equations. The guiding question for our work was “Do students
develop the quantitative skills they need for success in later studies in their
chosen field?” To respond to this question we investigated three things:

1. What are the existing goals for our introductory service courses?
2. What are the quantitative expectations of our clients (for example, later mathematics courses, engineering programs, physical science programs)?
3. To what extent do students meet the expectations of our clients?

4.2. Status of learning goals for this subprogram.

We have two kinds of goals for this program.
The Department has an explicit objectives statement that covers the undergraduate
program, including these courses. These objectives are described in Appendix B
as:

1. Students will be able to analyze problems and formulate appropriate mathematical models.
2. Students will understand mathematical techniques and how they apply.
3. Students will recognize phenomena and be able to abstract, generalize, and specialize these patterns in order to analyze them mathematically.
4. Students will be able to express themselves in writing and orally in an articulate, sound, and well-organized fashion.

This project additionally identifies implicit
objectives for the introductory sequence of service courses. Part of the data
analysis includes a review of the items that appear on tests. This analysis
identifies implicit goals and objectives for the service program. An important part
of the project is for the Mathematics Department to review and respond to the
findings, including these implicit goals. This took place at assessment
committee and departmental meetings during April and May.

4.3. Activities during 2002-03.

Following the guidelines we set for this year’s assessment program, we completed the following activities:

4.3.1. Quantitative Assessment of general education and service courses.

This
continues the assessment process we started seven years earlier, which is ongoing for the regular calculus sequence, and initiates assessment of this year's focus courses (the three-semester engineering-calculus sequence, introductory linear algebra, and differential equations). This part of the program implementation involved more
assessment and reporting than analysis and response, particularly for the new
courses.

4.3.2. Undergraduate majors.

We had faculty members rate student
performance in the introductory proof course and in the senior seminar capstone
course.

4.3.3. Graduate students.

Faculty members and advisors rated student
performance on exams and progress toward their degree (see forms in Appendix
D).

4.3.4. Increased involvement of faculty.

We have wanted to increase faculty
involvement in the assessment program for many years. It seemed that having the
same small group of faculty conducting the assessment activities did not
promote wider faculty involvement, since most assumed the people who had done
it before would continue to take care of the work. Working with the Department
administration, we adopted a new strategy to increase faculty involvement: Each year a new group of 4-5 faculty (which
includes at most two faculty from the previous year) would conduct assessment
activities. This strategy worked well. The new members of this year’s
assessment committee took ownership of the program, carrying the bulk of the
activities, but they were not intimidated by the task since they had a good
model to use as a template for their activities and reports and experienced
faculty members to provide guidance. Formation of the committee for the next
year’s assessment activities has been significantly easier since more faculty
were willing to participate, recognizing that the task did not impose onerous
expectations for additional work.

4.3.5. Peer review of teaching.

Several faculty developed a proposal for a
departmental peer review of teaching program to complement the limited
information provided by student course evaluations. The committee that
developed this program began their planning in Fall 2001. The program was
adopted by the Department in Fall 2002 and has been piloted by four pairs of
faculty or lecturers during 2002-03 (Appendix E).

4.3.6. Connections to University Assessment Committee (UAC) activities.

One Department member, Bill Martin, has been actively involved
in NDSU assessment activities as a member of the UAC steering committee, the
University Senate Executive Committee, and the Senate Peer Review of Teaching
Board. This institutional involvement has contributed to the integration of
Department assessment activities with the assessment work being conducted at
NDSU. Consequently, activities conducted in the Mathematics Department have
helped to shape the assessment strategies adopted at the university level.

5. Insights and Lessons Learned.

5.1. Findings and success factors.

The process we have developed takes an
ongoing, integrated approach that seeks to embed assessment activities in our
instruction. We believe the process provides useful insights into the learning that takes place in our programs. To illustrate the sort of information we obtain, a recent summary report described findings of the annual quantitative assessment project, which focuses on service courses, in this way:

The tests of
greatest interest to the Department of Mathematics were given in [Calculus III]
(235 students, four instructors), [Calculus III with vector analysis] (47 students,
one instructor), and [Differential Equations] (264 students, five instructors).
These courses include many students who are majoring in technical programs
across the campus, including physical sciences, mathematics, and engineering.
All require students to have successfully completed the first year regular
calculus sequence. A sample course report, giving detailed information about
the outcomes, is attached as Appendix A. Faculty members discussed reports of
the Fall 2001 tests during a December faculty meeting. The discussions ranged
over the nature of the assessment program (for example, whether the tests were
appropriate) and the success rates. While there was a range of opinions
expressed at the meeting, it
was agreed that the program was potentially very useful and should continue.
These initial results did not lead to specific proposals for course changes
this year. Individual
faculty who taught the courses in which assessments were given were asked for
their reactions to the test results. The tests revealed areas of strength in
student performance along with weaknesses that concern faculty. These patterns
were reflected both in the comments at the meeting and in written responses to
the reports. There was agreement by many that the information was useful as an
indicator of program strengths and weaknesses. More specific information about
success rate patterns and their perceived significance is provided in the
reports themselves. So far, our
assessment findings have not led to major changes in courses or programs at
NDSU. A current focus of our work is on making better use of the information
obtained from assessment activities. We plan to have a more extensive review
and discussion of findings by departmental faculty, now that we have data from
several years. The purpose of the discussion is to address several questions:

1. What do the findings show about student learning and retention from our courses?
2. What might account for these patterns? In particular, why do students seem to have specific difficulties?
3. What could and should the Department do to address areas of weakness?
4. Are we satisfied with the Department’s stated goals and our assessment procedures, having attempted to assess student achievement in relation to the stated goals for several years?

While the focus of
each test is on a particular course, we are able to gain a broader perspective
on faculty expectations and student achievement by pooling results from
different assessments and over several years. The table below illustrates the
patterns that can be discerned in the results. The table also summarizes some
generalizations we can make based on tests administered by the project. We have
found three levels of mathematics requirements or expectations in courses
across the campus. Within each level, patterns of students’ success rates have
become apparent over the years. The course level
is based on mathematics prerequisites. For example, Level 2 courses require
just one semester of calculus (examples include Finance and Agricultural
Economics courses). The success rates range from High (where more than two-thirds of the tested
students in a class are successful) down to Low (when
under one-third of the students are able to solve a problem correctly). Each
cell reports a general trend we have observed. For example, typically any
calculus problem administered to students in a Level 2 course will have a low
success rate. The cell also mentions a specific problem to illustrate the
trend. The example problem for typically low success rates in a Level 2 course
is asking students to estimate the value of a derivative at a point given a
graph of the function. The most important characteristic of this table is that
it illustrates how the use of tests that are custom-designed for particular
courses can still provide detailed and useful information about mathematics
achievement on a much broader scale at the institution.
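As a worked illustration of this banding rule, the sketch below (our own illustrative code; the label "Medium" for the middle band is our naming, not the report's) classifies a problem's success rate:

    def success_band(successful: int, tested: int) -> str:
        """Band a problem's success rate as described in the text:
        High when more than two-thirds of tested students succeed,
        Low when under one-third do, and Medium in between (our label)."""
        rate = successful / tested
        if rate > 2 / 3:
            return "High"
        if rate < 1 / 3:
            return "Low"
        return "Medium"

    print(success_band(24, 30))  # 0.80 -> "High"
    print(success_band(9, 30))   # 0.30 -> "Low"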
Appendix C
contains a more complex table that illustrates how even more detailed
information can be extracted from a large number of tests administered across
many departments and years. The table illustrates that not only success rates on particular problem types but also the distribution of problem types can be analyzed to help identify how mathematics is used across the campus in different programs. This table compares the nature of tests and patterns of
success rates in mathematics, engineering, and physical science courses, all of
which require the full three-semester normal introductory calculus sequence.

The table is based on 240 individual problem success rates (PSR: the success rate for each time a problem was used on a test). The three groups of courses were: (a) Mathematics (four distinct courses, including a differential equations course that was tested in successive semesters; 58 PSR); (b) Physical Sciences (five distinct courses, including a two-course atmospheric science sequence with retested students in successive semesters; 68 PSR); and (c) Engineering (six distinct courses, two of which, electrical and mechanical engineering, were tested in successive semesters; 114 PSR). The table has been included not so
much for detailed analysis of its content in this paper, but to illustrate the
detailed information that can be provided by this assessment process.

For example, the table illustrates quite different patterns of mathematics usage across the three disciplinary areas: Mathematics courses emphasized non-calculus material (60% of the problems that appeared on tests in those courses), Science courses drew most heavily on differential calculus material (56% of problems), while Engineering courses had a more balanced mix of problems from across all the introductory areas (22% non-calculus, 31% differential calculus, 16% integral calculus, 26% differential equations, and 5% probability and statistics).
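The Appendix C table can be thought of as a grouped aggregation over the pooled PSR records. The sketch below is illustrative only: the records are invented, and the quartile convention (second quartile = 25-50%) simply mirrors the usage described in the next paragraph.

    from collections import defaultdict
    from statistics import median

    # Invented PSR records: (discipline, problem type, success rate for one administration).
    psrs = [
        ("Mathematics", "Graph Interpretation", 0.42),
        ("Mathematics", "Graph Interpretation", 0.35),
        ("Mathematics", "Integration", 0.71),
        ("Engineering", "Differential Equations", 0.55),
        ("Engineering", "Graph Interpretation", 0.30),
        # ... the actual project pooled 240 such records
    ]

    def quartile(rate):
        """Map a success rate to a quartile band: 1 = 0-25%, 2 = 25-50%, ..., 4 = 75-100%."""
        return min(int(rate * 4) + 1, 4)

    by_group = defaultdict(list)
    for discipline, ptype, rate in psrs:
        by_group[(discipline, ptype)].append(rate)

    totals = defaultdict(int)
    for discipline, _, _ in psrs:
        totals[discipline] += 1

    for (discipline, ptype), rates in sorted(by_group.items()):
        share = len(rates) / totals[discipline]   # share of the discipline's problems
        med_q = quartile(median(rates))           # median success rate as a quartile
        print(f"{discipline:12s} {ptype:24s} {share:5.0%}  median quartile {med_q}")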
Much more detailed information is included about specific types of problems and typical success rates. For example, the first entry for Mathematics is Graph Interpretation problems, which appeared on two different tests in one math course (2-1). These problems represented 3% of all problems that appeared on math course tests, and the median success rate across all problems of this type administered in a math course fell in the second quartile (2), representing success by 25-50% of the students taking those tests.

5.2. Dissemination of Findings.

Our assessment findings have been shared with
four distinct groups: (a) Mathematics faculty at NDSU, (b) NDSU departments that depend on mathematics, (c) other NDSU faculty interested in departmental
assessment, and (d) mathematics faculty from other institutions involved in the
MAA Assessment Project SAUM. The first two groups are most interested in
student performance and its implications for their courses and programs. The
second pair are interested in the assessment methods employed by our project. A goal of our
work, both in the design of assessment activities and the strategies used to
involve faculty and disseminate results, has been to only do things that have
value for participants. For example, when we ask students to take tests, we
want it to have personal value for them at that time rather than just appealing
for their participation for the good of the department or institution.
Similarly, when we ask faculty to conduct an assessment in their class or to
review reports, they should feel they have gained valuable insights as a result
of their work rather than submitting a report because it is required for some
external purpose.

6. Next Steps and Recommendations.

Some of our work requires the assistance of a
graduate student to help with test administration and data analysis and some
financial support for duplication and test scoring. We have found support for
this work through external grants and are working to institutionalize this
support as a part of the University’s institutional assessment and
accreditation activities. The work is valued at the institutional level because
the extensive service role played by mathematics is well recognized.
Consequently, we expect to receive some level of institutional support for our
general education assessment activities, the ones that require the most extra
work to conduct and analyze. We recognize that
we have to date had more success gathering and disseminating assessment data
than getting faculty to study and respond to the findings. This partly reflects the natural inclination of faculty to focus on their own courses rather than on the broader picture of how programs are working to develop student learning. We
plan to concentrate our efforts now on ensuring that assessment findings are
regularly reported and discussed by faculty, both in participating departments
and in the Mathematics Department. We believe that regular conversations about
the patterns of results will lead to the formulation and implementation of
responses to shortcomings revealed by assessment activities. Our approach
reflects the belief that faculty are in the best position to respond to
findings and that our most important role is in providing accurate information
about student achievement. Consequently, our reports focus on providing
descriptive statements about student performance, rather than making detailed
recommendations for changes in courses and instruction. We also believe
that widespread faculty involvement in assessment activities is a necessary
condition for an effective assessment program. Our strategy has been to adopt a
nonjudgmental approach that seeks to minimize the special effort required of
participants and to ensure that participants clearly see that they stand to benefit
from the activities in which they are involved. Our efforts to increase
departmental and university faculty involvement and impact will continue. The
strategies initiated during the last academic year seem to work. The
Department’s assessment committee will continue to work with the UAC and the General Education Committee to extend the impact of the departmental assessment activities to a broader audience.

References

Bauman, S. F., & Martin, W. O. (1995). Assessing the quantitative skills of college juniors. The College Mathematics Journal, 26(3), 214-220.

Martin, W. O. (1996). Assessment
of students’ quantitative needs and proficiencies. In T. W. Banta, J. P. Lund,
K. E. Black, & F. W. Oblander (Eds.), Assessment in Practice: Putting
Principles to Work on College Campuses. San Francisco: Jossey-Bass.

Appendix A. Sample report for Math 265 (Calculus III), Spring 2002.

Preliminary Assessment Results
Mathematics 265: Calculus III, Spring 2002

Sixty-eight students took two versions of an
eight-item free-response test in Mathematics 265 (Professor Sherman) during the
second week of the Spring 2002 semester. The test was designed to see the
extent to which students had quantitative skills required for success in the
course. Students were not allowed to use calculators while completing the
assessments. Graduate students from the Department of Mathematics graded the
papers, recording information about steps students had taken when solving the
problems. The graders also coded the degree of success achieved on each problem
using the following rubric:
[Rubric table and the first pair of summary charts not reproduced in this extract.]

The second pair of charts gives the distribution of partial-credit scores, called scoresum (each problem was awarded 0-4 points, E=0 to A=4). It appears that many students will need to
review some mathematics covered on the test, since a majority were successful on no more than half the problems. Almost two-thirds of the students (44 of the 68) achieved overall success on four or fewer of the eight problems.

The problems are ranked according to the degree
of success students achieved on each problem in the following table.
The problems are primarily sorted in this table by the proportion of students who received a code of A or B, indicating work that was at least essentially correct. For reference, the second and third columns report the proportion of students whose work was completely correct (A, column 2) and the proportion who made good progress (C, column 3).

The problems have been divided into three
groups. At least two-thirds of the students could integrate using the Fundamental Theorem of Calculus and solve equations using properties of logarithms. About three-quarters of the students successfully set up a definite integral to compute the area enclosed by a parabola and a line. Fewer than three-fifths of the students completed the square to find the center and radius of a circle from its equation, or used substitution to evaluate a definite or indefinite integral. Similar proportions successfully used a sign table of a function and its derivative to sketch a graph of the function, and estimated the derivative of a function at a point from its graph. Under a quarter of the students could solve a problem using implicit differentiation or calculate the area enclosed by one loop of a four-leaved rose.

Mathematics Backgrounds

University records provided information about
the mathematics courses that had been taken by students in these classes. The
following tables report up to the four most recent mathematics courses recorded
on each student’s transcript. Every student with available records had taken at least one mathematics course. The median grade for Math 166 was a B, but almost half of the students received an A in the course. Calculus III was a retake for seven of the students tested; of these seven, two had no record of taking the prerequisite.
These histograms help to illustrate possible connections between test score and grade in the most recently completed mathematics or statistics course. On version one, students with higher grades in their most recent course (lighter shades in the divided frequency bars) generally scored somewhat higher on this assessment test, as one might expect.

Reactions

We asked the instructor five questions about
the test results. Her responses are summarized below.

Instructor:
Department:
Reactions to these results:
Suggested responses (action plans):
Appendix B. Department mission statement and program objectives.

Mission Statement.

The mission of the Department of Mathematics
is teaching, research and other scholarly activities in the discipline;
providing quality education to our B.S., M.S. and Ph.D. students and
post-doctoral associates; and influencing the mathematical climate of the
region positively. The Department strives for excellence in teaching its majors
and service courses, while providing stimulating and informative courses. The
Department’s research activities include pure and applied mathematics.

Program Objectives.

A. Bachelor’s program
B. Master’s program
C. Doctoral program
Appendix C. Patterns of Test Results in Mathematics, Science, and Engineering

Appendix D. Sample Rating Forms

Appendix E. Mathematics Department Peer Review of Teaching Program

Peer Evaluation of Teaching Proposal, NDSU Department of Mathematics

The Department of Mathematics believes that
the purpose of peer evaluation is to help faculty recognize and document both
strengths and weaknesses in their teaching. The word “peer” means that this
activity should involve reciprocal observation and discussion of teaching and
learning by small groups of 2-3 faculty who exchange visits in each other's
classes. The committee believes that the members of the department have all the
qualifications necessary to make this process reach its intended goal. The committee proposes that:

1. Tenure-track faculty be reviewed at least once each year; tenured associate professors be reviewed at least once every other year; tenured full professors be reviewed at least once every three years.

2. The process begin with the chair’s identification of the faculty member to be evaluated. Then the faculty member identifies
his/her teaching goals and strategies (in writing). These objectives are
discussed with a peer colleague or colleagues, with a view to developing
evidence that supports the individual's claims. This evidence could come from
classroom observations, student evaluations, and review of written course
materials, such as tests and assignments. It should include multiple sources
(i.e., not a single classroom observation). After reviewing this evidence, the
group prepares a report that describes the activities and the extent to which
the evidence supports the original claims. The report should include plans for
future teaching strategies, including possible changes or enhancements that the
faculty member plans to try.

3. A team of 2-3 faculty members will complete the work described in (2) for each
member of the team. This helps to ensure that peer evaluation does not become a
one-way process that involves one person observing and evaluating another
primarily for external purposes. Instead, the process is designed primarily to
increase collegiality and reflective practice within the department, while
providing documentary evidence of the regular review of teaching that can be used
for external purposes (such as annual reviews, PT&E).

4. Observers of a faculty member should include
at least one member of the department PT&E committee.

5. The observation process should always include
a Pre-Observation Conference between the observee and observer to discuss the
objectives of the class to be observed and other relevant issues (see Peer
Review Observation Instrument). Following the in-class observation, a
Post-Observation Conference must also be held to discuss the observations as documented
by the Peer Review Observation Instrument.

Attachments: Peer Review Observation Instrument (sample), Characteristics of Effective Observers/Things to Avoid (sample guide), Tips for Observers and Observees (sample)