SAUM Bibliography
- Annotated Bibliography of Related Research. An on-line bibliographic database with abstracts of over 100 research documents related to undergraduate mathematics education that is searchable by issue, indicator, author, and title. Part of the Indicators of Quality in Undergraduate Mathematics Education project described below.
- Internet Resources for Higher Education Outcomes Assessment. North Carolina State University. A rich web resource featuring links to other web sites with general resources, assessment handbooks, and assessment of specific skills. Special feature: hundreds of links to assessment-related pages at individual colleges and universities, as well as links to web sources of student assessment of courses and faculty.
- Library of Assessment Resources for Mathematics Teaching. The Math Forum, Drexel University. Provides links to nearly 200 on-line assessment resources inventoried by The Math Forum.
- The National Postsecondary Education Cooperative (NPEC) is a partnership of postsecondary institutions, associations, government agencies, and organizations devoted to providing "better data" to higher education in the service of "better decisions." The site provides information relevant to student access, student outcomes, quality data practices, as well as state and institutional indicators. One product is the Sourcebook on Assessment cited below.
- Texas Higher Education Assessment
- Alfred, R., Peter Ewell, J. Hudgins, & Kay McClenney (1999). Core Indicators of Effectiveness for Community Colleges. Second Edition. Washington, DC: Community College Press, American Association of Community Colleges. Addresses policymakers' concerns for "high performance" and provides a model that institutions can use to assess their effectiveness. The report examines various components of "effectiveness" and then offers 14 core indicators, organized according to the various missions of community colleges: (1) student goal attainment; (2) persistence; (3) degree completion rates; (4) placement rate in the workforce; (5) employer assessment of students; (6) licensure/certification pass rates; (7) client assessment of programs and services; (8) demonstration of critical literacy skills; (9) demonstration of citizenship skills; (10) number and rate who transfer; (11) performance after transfer; (12) success in subsequent related coursework; (13) participation rate in service area; and (14) responsiveness to community needs.
- Ball, Lynda and Kaye Stacey (2003). "What Should Students Record When Solving Problems with CAS?," in Computer Algebra Systems in Secondary School Mathematics Education, James T. Fey, et al., editors. Reston, VA: National Council of Teachers of Mathematics, pp. 289-304. An account of experiences in Australia where the authors worked to find ways to encourage students to record the reasoning used in applying technology to mathematical tasks and to assess that reasoning.
- Bass, Hyman (1993). "Let's Measure What's Worth Measuring." Education Week, October 27, p. 32. An "op-ed" column supporting Measuring What Counts from the Mathematical Sciences Education Board (MSEB). Stresses that assessments should (a) reflect the mathematics that is most important for students to learn; (b) support good instructional practice and enhance mathematics learning; and (c) support every student's opportunity to learn important mathematics.
- Cannon, Raymond and Bernard L. Madison (2003). "Testing with Technology: Lessons Learned," in Computer Algebra Systems in Secondary School Mathematics Education, James T. Fey, et al., editors. Reston, VA: National Council of Teachers of Mathematics, pp. 305-328. An account of lessons learned from 1982 to 1998, when the use of various types of calculators was allowed or required on the AP Calculus examinations.
- Charles, R., Lester, F., and O'Daffer, P. (1987). How to Evaluate Progress in Problem Solving, Reston, VA: National Council of Teachers of Mathematics.
- Committee on the Undergraduate Program in Mathematics (CUPM) (1995). "Assessment of Student Learning for Improving the Undergraduate Major in Mathematics," Focus: The Newsletter of the Mathematical Association of America, 15 (3), June, pp. 24-28. Recommendations from the Mathematical Association of America (MAA) for departments of mathematics to develop a regular "assessment cycle" in which they (1) set student goals and associated departmental objectives; (2) design instructional strategies to accomplish these objectives; (3) select aspects of learning and related assessments in which quality will be judged; (4) gather assessment data, summarize this information, and interpret results; and (5) make changes in goals, objectives, or strategies to ensure continual improvement.
- Crosswhite, F.J. (1972). "Correlates of Attitudes toward Mathematics," National Longitudinal Study of Mathematical Abilities, Report No. 20, Stanford University Press.
- Dossey, John A. and Kenneth J. Travers (2001). "Evaluating Undergraduate Programs: Indicators of Departmental Health." Focus: The Newsletter of the Mathematical Association of America, 21:6 (August/September) pp. 18-19. A brief introduction to issues unfolded in the Undergraduate Mathematics Education Indicators Project at the University of Illinois.
- Dossey, John and Alan Schoenfeld (2003). "Student Outcomes and Assessment." Undergraduate Mathematics Education Indicators Project, Chapter 4. Urbana, Ill: University of Illinois, Office of Mathematics, Science and Technology Education (MSTE).
- Ewell, Peter T. with Janet Ray (2003). "Institutional and Systemic Issues." Undergraduate Mathematics Education Indicators Project, Chapter 6. Urbana, Ill: University of Illinois, Office of Mathematics, Science and Technology Education (MSTE).
- Ewell, Peter T. and Lynn A. Steen (2003). "The Four A's: Accountability, Accreditation, Assessment, and Articulation." Focus: The Newsletter of the Mathematical Association of America, 23:5 (May/June), pp. 6-8.
- Fennema, Elizabeth and J. Sherman (1976). "Fennema-Sherman mathematics attitudes scales: Instruments designed to measure attitudes toward the learning of mathematics by females and males," JSAS Catalog of Selected Documents in Psychology, 6 (Ms. No. 1225), p. 31.
- Fuller, Milton (1997). "The Impact of Graphics Calculators on Undergraduate Mathematics: Is Assessment a Barrier to Progress?" Mathematics Learning Center, Central Queensland University. Analysis (from Australia) of how tertiary mathematics assessments reduce students' momentum and enthusiasm for learning mathematics by barring graphing calculators which students have learned to use in the secondary schools.
- Gold, Bonnie, et al., editors (1999). Assessment Practices in Undergraduate Mathematics. Washington, DC: Mathematical Association of America. A collection of over seventy brief reports from dozens of different U.S. colleges and universities, providing a wide variety of methods of assessing the major, teaching, classroom practice, the department's role, and calculus reform.
- Hagedorn, Linda Serra (1997). "Success in College Mathematics: Comparisons between Remedial and Non-Remedial First Year College Students." Paper presented at the Annual Meeting of the American Educational Research Association. A study based on data from the National Center on Postsecondary Learning and Assessment (NCPLA). The analysis indicates that non-remedial students in this sample have parents with a higher education, come from families with a higher total income, received more encouragement to pursue higher education, and reported spending more time studying in high school.
- Hilton, Peter (1993). "The Tyranny of Tests." American Mathematical Monthly, April, pp. 365-369. Several suggestions for "reducing the distorting effect" which tests exert, principally on undergraduate mathematics.
- Houston, Ken (2001). "Assessing Undergraduate Mathematics Students." In The Teaching and Learning of Mathematics at the University Level, Derek Holton, Editor. Dordrecht: Kluwer Academic Publishers, pp. 407-422.
- Houston, S.K., C.R. Haines, A. Kitchen, et al. (1994). Developing Rating Scales for Undergraduate Mathematics Projects, University of Ulster.
- Hurtado, Sylvia and Eric L. Dey (2003). "A Framework for Monitoring and Increasing Undergraduate Student Participation in Mathematics Education." Undergraduate Mathematics Education Indicators Project, Chapter 5. Urbana, Ill: University of Illinois, Office of Mathematics, Science and Technology Education (MSTE).
- Joint Policy Board for Mathematics (1994). Recognition and Rewards in the Mathematical Sciences. Providence, RI: American Mathematical Society. Discussion of faculty expectations in relation to institutional rewards. Findings include a general dissatisfaction with current methods of evaluating teaching as well as uncertainty about the weight of effective teaching in college expectations and rewards.
- Jones, Elizabeth A. and Steve Richard (2003). The NPEC Sourcebook on Assessment: Definitions and Assessment Methods for Communication, Leadership, Information Literacy, Quantitative Reasoning and Quantitative Skills. Washington, DC: National Postsecondary Education Cooperative (NPEC), National Center on Education Statistics, (forthcoming). Descriptions of tests used to assess skills in four critical areas, including conceptual and methodological considerations for selection of assessment methods. Intended to help institutions determine whether available tests really measure these skills. A summary is available at http://nces.ed.gov/npec/pdf/ps_JonesSA2002.pdf.
- Keith, Sandra Z. (1996). "Self-Assessment Materials for Use in Portfolios," Primus, 6 (2), pp. 178-192.
- Kloosterman, P. (1988). "Self-Confidence and Motivation in Mathematics," Journal of Educational Psychology 80, pp. 345-351.
- Kulm, Gerald (1994). Mathematics Assessment: What Works in the Classroom, San Francisco: Jossey-Bass.
- Lester, F. and D. Kroll (1991). "Evaluation: A New Vision," Mathematics Teacher 84, pp. 276-283.
- Madison, Bernard (1992). "Assessment of Undergraduate Mathematics." In Heeding the Call for Change: Suggestions for Curricular Action, Lynn A. Steen, editor. Washington, DC: Mathematical Association of America, pp. 137-149. Analysis of issues, benefits, worries, and pressures associated with the increasing demand for assessment of undergraduate mathematics. A background paper preceding release of the CUPM report on assessment.
- Madison, Bernard (2002). "Assessment: The Burden of a Name." Project Kaleidoscope. By likening assessment to the browbeaten subject of Johnny Cash's ballad "A Boy Named Sue," this brief essay traces the recent history of assessment in higher education and discusses its various forms and labels.
- Mathematical Association of America (1993). Guidelines for Programs and Departments in Undergraduate Mathematical Sciences. Washington, DC: Mathematical Association of America.
- Mathematical Sciences Education Board (1993). Measuring What Counts: A Conceptual Guide for Mathematics Assessment. Washington, DC: National Research Council. Intended primarily as advice for K-12 mathematics assessment, this report stresses the need for assessment to measure good mathematics, to enhance learning, and to promote access for all students to high quality mathematics.
- McKnight, Curtis, John Dossey, and Kenneth Travers (2003). "Charting the Course: A Conceptual Framework for Developing a National System of Quality Indicators for Undergraduate Mathematics Education." Undergraduate Mathematics Education Indicators Project, Chapter 1. Urbana, Ill: University of Illinois, Office of Mathematics, Science and Technology Education (MSTE).
- McMullin, Lin (2003). "Traditional Assessment and Computer Algebra Systems," in Computer Algebra Systems in Secondary School Mathematics Education, James T. Fey, et al., editors. Reston, VA: National Council of Teachers of Mathematics, pp. 329-336.
- National Council of Teachers of Mathematics (1995). Assessment Standards for School Mathematics. Reston, VA: National Council of Teachers of Mathematics. This third and final volume in NCTM's original set of standards for school mathematics focuses on six standards: effective assessment should reflect appropriate mathematics, enhance learning, promote equity, be based on an open process, promote valid inferences, and fit together coherently.
- National Council of Teachers of Mathematics (1999). Mathematics Assessment: A Practical Handbook for Grades 9-12. Reston, VA: National Council of Teachers of Mathematics. A "how-to" book based on the experiences of classroom teachers. Five chapters cover how to get started, assessment tools to use, putting a program together, using the results, and exemplary assessment tasks.
- National Council of Teachers of Mathematics (2000). Mathematics Assessment: Cases and Discussion Questions for Grades 6-12. Reston, VA: National Council of Teachers of Mathematics. A collection of stories written by mathematics teachers and other educators describing experiences with classroom assessment.
- National Science Foundation (1996). Shaping the Future: New Expectations for Undergraduate Education in Science, Mathematics, Engineering, and Technology. Washington, DC: National Science Foundation. Final report of an intensive review of the state of undergraduate education in science, mathematics, engineering and technology (SMET) in America. The year-long review revealed that measurable improvements have been achieved in the past decade but that further improvement will require greater engagement of students in their own learning.
- Schoenfeld, Alan (1997). Student Assessment in Calculus: A Report to the NSF Working Group on Assessment in Calculus. Washington, DC: The Mathematical Association of America. Report of an NSF working group convened to support assessment of calculus reform projects by providing a conceptual framework together with extensive examples. Grounded in the assumption that assessment requires an understanding of what it means to understand, the report focuses on two major changes related to calculus instruction: revised instructional goals and a growing research base on students' understandings of mathematical concepts. Emphasizes the "fundamental tenet" that, since tests are statements of what is valued, new curricula need new tests.
- Steen, Lynn Arthur (1999). "Assessing Assessment." Preface to Assessment in College Mathematics, Bonnie Gold, et al., (Editors). Washington, DC: Mathematical Association of America. An exploration of issues, principles, and options available to address the wide variety of assessment challenges facing college mathematics departments.
- Stenmark, Jean K., ed. (1991). Mathematics Assessment: Myths, Models, Good Questions, and Practical Suggestions. Reston, VA: National Council of Teachers of Mathematics.
- Travers, Kenneth, et al. (2003). Charting the Course: Developing Statistical Indicators of the Quality of Undergraduate Mathematics Education. American Educational Research Association and the Office of Mathematics, Science and Technology Education (MSTE), University of Illinois. A "synthesis report" of the Indicators project intended to identify questions and related statistics that form a "web of definition" for the status and direction of a mathematics department's program. Designed to provide a framework for collecting data in a systematic way that will enable mathematics departments to make informed decisions for improving effectiveness of their programs.
- Travers, Kenneth J., et al. (2003). Indicators of Quality in Undergraduate Mathematics. Urbana-Champaign, IL: University of Illinois Office for Mathematics, Science, and Technology Education. A detailed report (available both on CD-ROM and on an almost identical website) of an NSF project intended to help mathematics departments monitor the quality of their lower division undergraduate program. Major goals were (a) to devise statistical measures that document the characteristics of mathematics programs and practices in a climate of change and (b) to gain experience in ways to carry out a data-based self-assessment study effectively. The report, based on pilot studies at three very different kinds of institutions, identifies ten issues with sixty associated statistical measures (indicators).
- Tucker, Alan C. and James R.C. Leitzel (1995). Assessing Calculus Reform Efforts: A Report to the Community. Washington, DC: Mathematical Association of America. A "mid-term" review of the NSF-supported calculus reform movement in the United States, providing background on the motivation and goals of the movement, as well as evidence of changes in content, pedagogy, impact on students, faculty, departments, and institutions.
- Wiggins, Grant (2003). "'Get Real!': Assessing for Quantitative Literacy," in Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, Bernard L. Madison and Lynn Arthur Steen, editors. Princeton, NJ: National Council on Education and the Disciplines, pp. 121-143. An informed view of the difficulties of finding authentic assessment items for assessing quantitative literacy. Grant Wiggins is the President of Grant Wiggins & Associates, an educational organization that consults with schools, districts, and state education departments on a variety of issues, notably assessment and curricular change. Wiggins is the author of Educative Assessment (1998), Assessing Student Performance (1999), and (with Jay McTighe) Relearning by Design (2000). Wiggins' many articles have appeared in such journals as Educational Leadership and Phi Delta Kappan.
- Williams, C.G. (1998). "Using Concept Maps to Assess Conceptual Knowledge of Function." Journal for Research in Mathematics Education, 29, pp. 414-421. Examines the value of concept maps as instruments for assessment of conceptual understanding, using the maps to compare the knowledge of function held by experts and by two groups of students--traditional and nontraditional--enrolled in university calculus classes. Discusses the differences between the student and expert groups as well as differences between the two student groups.
- Adams, Thomasenia Lott (1997). "Technology Makes a Difference in Community College Mathematics Teaching." Community College Journal of Research & Practice, 21(5), pp. 481-91. A study of three areas of student assessment in a college algebra classroom--oral interactions, observations, and problem-solving--before and after the use of graphing calculators in class activities. Concludes that the use of the calculators enhanced the teacher's assessment practices in all three areas.
- Alexander, E.H. (1997). "An Investigation of the Results of a Change in Calculus Instruction at the University of Arizona." PhD thesis, The University of Arizona. A study of the effects of change in calculus instruction at the University of Arizona during 1991-93, using concept maps to determine if there was a difference in retained knowledge in students using the Harvard (consortium) materials. Findings: Consortium (reform) students showed slightly improved retention, although the differences were not statistically significant. Consortium students somewhat outperformed traditional students in both retention and grades in subsequent calculus-dependent mathematics, science, and engineering courses, but patterns within the comparisons suggested that these differences were more likely due to better teaching than to the reform materials. Students' reports of attitude towards mathematics showed no statistically significant differences.
- Armstrong, S.M. (1997). "A Multivariate Analysis of the Dynamics of Factors of Social Context, Curriculum, and Classroom Process to Achievement in Calculus at the Community College." PhD thesis, The University of Rochester. Analysis of survey and outcome data (supplemented with in-class participant observation) from calculus students drawn from a stratified sample of community colleges in New York and New Jersey, two-thirds of which used the Harvard (CCH) material. Findings: students who receive a high course grade are more likely to have a strong algebra background and a positive attitude towards mathematics, to have taken their pre-calculus courses in high school, and to have engaged positively in the calculus course. The data did not support the hypothesis that students with low pre-calculus backgrounds can succeed in calculus with the aid of a graphing calculator. The findings also suggest that non-Asian minority students were more likely to be hindered than helped by enrollment in a reform curriculum.
- Baranchik, A. & Barry Cherkas (1998). "Supplementary methods for assessing student performance on a standardized test in elementary algebra." In A. Schoenfeld, J. Kaput, & E. Dubinsky (Eds.), Research in Collegiate Mathematics Education III, pp. 216-233. Providence, RI: American Mathematical Society. A study of partial credit assignment (in elementary algebra) in relation to partial understanding, overall score levels, and incorrect alternatives selected by higher scoring students.
- Barnett, J. (1996). "Assessing Student Understanding Through Writing." Primus, 6(1) (March) pp. 77-86. Describes writing assignments designed to encourage students' logical analysis skills, as well as instructional consequences and practical concerns which arise when writing is used as an assessment tool.
- Bauman, Steven F. & William O. Martin (1995). "Assessing the Quantitative Skills of College Juniors." College Mathematics Journal 26:3, pp. 214-220. Describes a campus-wide project anchored in an item bank of mathematics and statistics problems from which course instructors create prerequisite quantitative skills assessments administered early in the semester to alert students about instructor expectations and their quantitative readiness for different courses. Discusses departmental needs and student capabilities revealed by this assessment; three levels of quantitative expectations; patterns of student performance on quantitative tasks; and the impact of assessment on participants, the mathematics department, and the entire campus.
- Beins, B. C. (1993). "Writing assignments in statistics classes encourage students to learn interpretation." Journal of Educational and Behavioral Statistics, 20(3) pp. 161-164. Comparative study of the effects of different intensity of writing assignments in introductory statistics for psychology majors. Findings: (a) No significant differences emerged regarding conceptual knowledge; (b) Students in the class with a heavy emphasis on writing scored significantly better on computation than did one of the moderate-emphasis classes; (c) Students' ability to interpret increased with the increased emphasis on writing.
- Bergsten, Christer (2003). "Critical Factors and Prognostic Validity in Mathematics Assessment." Proceedings of the 2nd International Conference on the Teaching of Mathematics (at the undergraduate level), University of Crete, 1-6 July 2002. New York: John Wiley. Report on the construction and evaluation of a prognostic test designed for entering college students that is designed not to assess their past learning (which is more procedural than conceptual in character) but to predict performance in beginning college mathematics courses. The test was built on ten factors that were found to be critical for college mathematics: conceptual depth, control, creativity, effort, flexibility, logic, method, organization, process, and speed. These critical factors cut across the content-process distinction and are expressions of a holistic view of mathematical performance in which many of the critical factors are involved in each problem solving process and must be combined for success.
- Bonsangue, M. (1992). The effects of calculus workshop groups on minority achievement and persistence in mathematics, science, and engineering, PhD thesis, Claremont, CA: Claremont Graduate School.
- Bonsangue, M. (1994). "An efficacy study of the calculus workshop model," CBMS Issues in Collegiate Mathematics Education, 4, Providence, RI: American Mathematical Society, pp. 117-137.
- Bookman, Jack and Charles P. Friedman (1994)."A comparison of the problem solving performance of students in lab based and traditional calculus" in Dubinsky, E., Schoenfeld, A.H., Kaput, J., eds., Research in Collegiate Mathematics Education I. Providence, RI: American Mathematical Society, pp. 101-116.
- Bookman, Jack and L.D. Blake (1996). "Seven Years of Project CALC at Duke University--Approaching a Steady State?" Primus, September, pp. 221-234.
- Bookman, Jack and Charles P. Friedman (1994). Final report: Evaluation of Project CALC 1989-1993, unpublished manuscript.
- Bookman, Jack and Charles P. Friedman (1998). "Student Attitudes and Calculus Reform." School Science and Mathematics, March, pp. 117-.
- Chance, B.L. (1996). "Experiences with Authentic Assessment Techniques in an Undergraduate Introductory Statistics Course." In American Statistical Association, Proceedings of the Section on Statistical Education, pp. 36-44. Examination of the effect of journal writing on students' understanding of introductory statistics by comparison of students in two matched sections, in one of which students were required to keep journals. Overall student achievement and course satisfaction were the same for both groups of students. However, there was more variability among the journal-writing group, suggesting that better students developed deeper understanding whereas weaker students became overwhelmed by the requirement and gave up on the course.
- Ellington, Aimee J. (2003). "An Assessment of General Education Mathematics Courses' Contributions to Quantitative Literacy at Virginia Commonwealth University." (Preprint).
- Emert, J.W. and C.R. Parish (1996). "Assessing Concept Attainment in Undergraduate Core Courses in Mathematics" in Banta, T.W., Lund, J.P., Black, K.E., and Oblander, F.W., eds., Assessment in Practice: Putting Principles to Work on College Campuses, San Francisco: Jossey-Bass, pp. 104-107.
- Ferrini-Mundy, Joan (1994). CCH Evaluation and Documentation Project, Durham, NH: University of New Hampshire.
- Fisher, Gwen Laura (1996). "The Validity of Pre-Calculus Multiple Choice and Performance-Based Testing as a Predictor of Undergraduate Mathematics and Chemistry Achievement." Concern over the validity of the Algebra Diagnostic Test (ADT) used to determine student preparation for calculus at UC-Santa Barbara led to a suggestion that performance-based questions may provide a better assessment of students' readiness for newer ("reform") calculus courses. In this study two different diagnostic tests were compared to find relationships between test scores and subsequent grades in algebra, calculus for the hard sciences, calculus for the social sciences, and chemistry. Result: the performance-based test had significant correlations with grades in all four classes, although multiple-choice testing had a higher correlation, and a combination of both provided the best prediction. Symbolic manipulation skills were statistically significant predictors of grades in all four classes.
- Fullilove, R.E., and Philip Uri Treisman (1990). "Mathematics Achievement among African American Undergraduates at the University of California, Berkeley: An Evaluation of the Mathematics Workshop Program," Journal of Negro Education, 59 (3), pp. 463-478.
- Ganter, Susan L. (1997). "Ten Years of Calculus Reform and its Impact on Student Learning and Attitudes," Association for Women in Science Magazine, 26(6).
- Iozzi, Fabrizio (2003). "Collaboration and Assessment in a Technological Framework." Proceedings of the 2nd International Conference on the Teaching of Mathematics (at the undergraduate level), University of Crete, 1-6 July 2002. New York: John Wiley. Investigation of how calculus students in Milan use interactive software to collaborate among themselves and with their lecturers. The first part looks at which topics are preferred among students and why, which kinds of discussions are more popular, how students discuss the subjects, and the impact of the discussions on their performance; the second discusses details of the collaborative software and its role in assessment.
- Loud, B.J. (1999) "Effects of Journal Writing on Attitudes, Beliefs, and Achievement of Students in College Mathematics Courses." Dissertation Abstracts International, Volume: 60-03, Section: A, page: 0680. A controlled study showing that weekly structured journal writing in a college mathematics course is effective in enabling students to achieve greater success in learning mathematics. Students in the journal writing sections achieved significantly higher grades on the course final examination and exhibited improved beliefs and attitudes about mathematics. Two journal tasks--explaining concepts to others and documenting solution steps--correlate significantly with achievement in mathematics.
- Maura Santos, Ana, et al. (2003). "On-Line Assessment in Undergraduate Mathematics." Proceedings of the 2nd International Conference on the Teaching of Mathematics (at the undergraduate level), University of Crete, 1-6 July 2002. New York: John Wiley. Report of two "very convincing" experiments at Instituto Superior Tecnico, Lisbon, each involving 300 students, with automatic generation and grading of multiple choice questions used to assess students weekly. The goal was not so much to assess students as to provide a weekly stimulus to learning; students are given a week to work on each exercise list, and usually discuss their questions with teachers and fellow-students. The computer-generated system provides each student with unique questions, and automates grading.
- National Science Foundation (1991). Undergraduate Curriculum Development: Calculus, Report of the Committee of Visitors, P. Treisman, Chair. Washington, DC: National Science Foundation.
- Penn, Howard (1994). "Comparisons of Test Scores in Calculus I at the Naval Academy," in Focus on Calculus, A Newsletter for the Calculus Consortium Based at Harvard University, 6, Spring, p. 6.
- Rash, A.M. (1997). "An Alternate Method of Assessment Using Student-Created Problems." Primus, March, pp. 89-96.
- Rodgers, Kathy V. & William G. Wilding (1998). "Studying the Placement of Students in the Entry-Level College Mathematics Courses." Primus, 8(3), pp. 203-08. Findings from a study of students enrolled in college algebra. Concludes that the setting of placement cut-off scores to optimize predicted success is related to the mission and philosophy of the college or university.
- Schwingendorf, K.E., G.P. McCabe, and J. Kuhn (to appear). "A Longitudinal Study of the Purdue C4L Calculus Reform Program: Comparisons of C4L and Traditional Students," Research in Collegiate Mathematics Education, CBMS Issues in Mathematics Education.
- Snook, K. (1998). "Toward Accurately Assessing Students' Understanding in Calculus." The Association for Research in Undergraduate Mathematics Education. Analysis of the evolution of types of assessment activities in relation to students' understanding. Preliminary studies suggested that assessments using only problems categorized as traditional or algorithmic or only problems categorized as nontraditional or relational may not accurately indicate a student's level of understanding. The current study, based on data from the United States Military Academy at West Point, uses a "talk-aloud" problem solving interview that allows students more opportunity to reveal their depth of understanding than do written instruments.
- Stage, F. and P. Kloosterman (1995). "Gender, Beliefs, and Achievement in Remedial College Level Mathematics." Journal of Higher Education, 66 (3), pp. 294-311.
- Vallecillos, Angustias (2003). "Framework for Instruction and Assessment on Elementary Inferential Statistics Thinking." Proceedings of the 2nd International Conference on the Teaching of Mathematics (at the undergraduate level), University of Crete, 1-6 July 2002. New York: John Wiley. Report of an empirical investigation of a framework for assessing the learning of elementary statistical inference (e.g., populations and samples; inferential processes; sample sizes; sampling types and biases) in three different contexts: concrete, narrative, and numeric.
- West, Richard D. (1995). Evaluating the Effects of Changing an Undergraduate Mathematics Core Curriculum which Supports Mathematics-Based Programs, Ann Arbor, MI: UMI.
- Academic Quality Improvement Project (AQIP). (2002). Principles and Criteria for Improving Academic Quality. Chicago, IL: The Higher Learning Commission.
- Adelman, C. (2000). A Parallel Postsecondary Universe: The Certification System in Information Technology. Washington, DC: OERI, U.S. Department of Education.
- Alfred, R., Ewell, Peter, Hudgins, J., & McClenney, Kay (1999). Core Indicators of Effectiveness for Community Colleges: Toward High Performance, Second Edition. Washington, DC: Community College Press, American Association of Community Colleges. This update to the first (1995) edition addresses policymakers' concerns regarding "high performance" and provides a model for institutions assessing their effectiveness. Presents new directions in assessment entailed by the changing contexts in which colleges operate.
- Alverno College Faculty (1979). Assessments at Alverno College. Milwaukee, WI: Alverno Publications.
- Alverno College Faculty (1994). Student Assessment-as-Learning at Alverno College, 3rd ed., Milwaukee, WI: Alverno Publications.
- American Association of University Professors (1990). "Mandated Assessment of Education Outcomes." Academe, Nov./Dec. 1990. Discusses impact of mandated assessment on traditional arenas of professorial autonomy; focuses on five assessment issues (institutional diversity, skills, majors, value-added, and self-improvement). Concludes with recommendations for learning to live with mandated assessment.
- American Association for Higher Education (1992). Principles of Good Practice for Assessing Student Learning. Washington, DC: AAHE.
- American Association for Higher Education (1994). CQI 101: A First Reader for Higher Education. Washington, DC: AAHE.
- American Association for Higher Education (1997). Learning Through Assessment: A Resource Guide for Higher Education. Washington, DC: AAHE.
- Angelo, Thomas A. and Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed. San Francisco: Jossey-Bass.
- Arenson, Karen W. (2006). "Panel Explores Standard Tests for Colleges." The New York Times, February 9.
- Assessment Update, Trudy Banta, editor. San Francisco: Jossey-Bass. A bimonthly journal on assessment in higher education.
- Astin, Alexander W. (1977). Four Critical Years. San Francisco: Jossey-Bass.
- Astin, Alexander W. (1985). Achieving Educational Excellence. San Francisco: Jossey-Bass.
- Astin, Alexander W., et al. (1992). "Principles of Good Practice for Assessing Student Learning." Washington, DC: American Association for Higher Education. Nine principles for assessing student learning developed by the long-standing annual Assessment Forum of the American Association for Higher Education (AAHE).
- Astin, Alexander W. (1993). Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. Old Tappan, NJ: Macmillan, 1991; Oryx Press, Phoenix, AZ, 1993. Argues for a comprehensive "talent development" view of assessment that takes account of entering student characteristics, educational experiences, and learning outcomes. Contends that the principles of assessment are simply those of doing good research, applied to the specific topic of student learning and development.
- Baird, L.L. (1988). Value Added: Using Student Gains as Yardsticks of Learning. In C. Adelman (ed), Performance and Judgment: Essays on Principles and Practice in the Assessment of College Student Learning, 205-216. Washington, DC: US Government Printing Office.
- Baker, Mike (2001). "Accountability vs. Autonomy." Education Week, October 31. A warning to the US about negative impacts of testing (in the K-12 setting) based on experiences in England. "Before you get out the measuring stick, you must know what it is you want to measure."
- Banta, Trudy W. (1985). Use of Outcomes Information at the University of Tennessee, Knoxville. In P.T. Ewell (Ed), Assessing Educational Outcomes, New Directions for Institutional Research #47, pp. 19-32. San Francisco: Jossey-Bass.
- Banta, Trudy W., Lambert, E.W., Pike, G.R., Schmidhammer, J.L., and Schneider, J.A. (1987). Estimated Score Gain on the ACT COMP Exam: Valid Tool for Institutional Assessment? Research in Higher Education, 27, 195-217.
- Banta, Trudy W. (1988). Implementing Outcomes Assessment: Promise and Perils. San Francisco: Jossey-Bass.
- Banta, Trudy W. and Associates. (1993). Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco: Jossey-Bass. A collection of essays documenting the kinds of benefits that have been realized and the curricular changes which campuses and programs have made through the use of assessment results.
- Banta, Trudy W. (1996). The Power of a Matrix. Assessment Update, 8, 4, pp. 3-13.
- Banta, Trudy W.; Lund, Jon P.; Black, Karen E.; and Oblander, Frances W. (1996). Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass. Consists of 82 documented cases of successful applications of assessment in a variety of disciplinary and campus settings, presented in a common format; cases are cross-referenced according to a number of topical variables to enable them to be compared. Draws lessons from these cases to support and illustrate nine principles of good practice.
- Banta, Trudy W. and Associates. (2002). Building a Scholarship of Assessment. San Francisco: Jossey-Bass. An edited collection of essays on the history and application of assessment in higher education by many of its leading spokespeople. Argues that assessment is best seen as a scholarly activity undertaken reflectively by teaching faculty to more systematically understand and improve teaching and learning.
- Benjamin, Ernst (1990). "The Movement to Assess Students' Learning will Institutionalize Mediocrity in Colleges." Chronicle of Higher Education, July 5, p. A40. A brief "op-ed" column criticizing the "indefensible consequences" for higher education of rapidly spreading accountability systems that rely on narrow tests.
- Benjamin, Ernst (1994). "From Accreditation to Regulation: The Decline of Academic Autonomy in Higher Education." Academe, July/Aug., pp. 34-36. A worried analysis by the retired general secretary of the American Association of University Professors (AAUP) concerning the impact of increased regulation based on accountability.
- Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco: Jossey-Bass.
- Bok, Derek (1986). "Toward Higher Learning: The Importance of Assessing Outcomes." Change, Nov./Dec., pp. 18-27. A classic short essay by then Harvard President Derek Bok outlining the benefits to higher education of assessing the accomplishments and value of a college education.
- Borden, Victor M.H. with Jody L. Zak Owens (2001). Measuring Quality: Choosing Among Surveys and Other Assessments of College Quality. Washington, DC: American Council on Education and the Association for Institutional Research (AIR). A guide summarizing characteristics of several dozen national instruments designed to assess various aspects of institutional quality.
- Boyer, C.M., Ewell, P.T., Finney, J.E., and Mingle, J.R. (1987). Assessment and Outcomes Measurement: A View from the States. AAHE Bulletin, March, pp. 8-12.
- Braskamp, L.A. and Ory, J.C. (1994). Assessing Faculty Work: Enhancing Individual and Institutional Performance, San Francisco: Jossey-Bass.
- Braskamp, L.A., Brandenburg, D.C., and Ory, J.C. (1984). Evaluating Teaching Effectiveness: A Practical Guide. Beverly Hills, CA: Sage.
- Callan, Patrick M., William Doyle, & Joni E. Finney (2001). "Evaluating State Higher Education Performance: Measuring Up 2000." Change, March/April, pp. 10-19. Summary of findings from Measuring Up, the November 2000 report of the National Center for Public Policy and Higher Education.
- Callan, Patrick M. and Joni E. Finney (2002). "Assessing Educational Capital: An Imperative for Policy." Change, July/August, pp. 25-31. Reflections on the implications of Measuring Up, especially about the troubling dearth of information about collegiate learning.
- Cohen, David (2001). "Quality Control or Hindering Quality?" The Chronicle of Higher Education, October 26. Tribulations of the British Quality Assurance Agency for Higher Education, created in 1997 to review the quality of colleges. The agency attempted to require colleges to provide objective evidence that their teaching is effective, that students fare well, and that good management is in place, but encountered great resistance from the colleges they were expected to review.
- Cook, C.E. (1989). FIPSE's Role in Assessment: Past, Present, and Future. Assessment Update, 1, 2, pp. 1-3.
- DeZure, D., ed. (2000). Learning from Change: Landmarks in Teaching and Learning in Higher Education from Change Magazine 1969-1999. Washington, DC: AAHE.
- Doherty, Austin; Tim Riordan; and James Roth (2002). Student Learning: A Central Focus for Institutions of Higher Education. Milwaukee, WI: Alverno College Institute. Report of a collaboration among representatives of twenty-six baccalaureate institutions concerning a variety of initiatives focused on student learning. Assessment is a part of many of the institutions' reports.
- Eaton, Judith S. (2001). "Regional Accreditation Reform: Who is Served?" Change Magazine, 33, 2, pp. 38-45.
- Edgerton, Russell (1990). "Assessment at Half Time." Change, Sept./Oct., pp. 4-5. A brief summary of the political landscape of assessment in higher education by the president of the American Association for Higher Education (AAHE). Claims that state pressures for accountability will continue, but that if institutions define assessment in worthy terms the faculty will find the effort worthwhile.
- Embretson, Susan L. (2003). The Second Century of Ability Testing: Some Predictions and Speculations. Princeton, NJ: Educational Testing Service.
- Enthoven, A.C. (1970). Measures of the Outputs of Higher Education: Some Practical Suggestions for their Development and Use. In G.B. Lawrence, G. Weathersby, and V.W. Patterson (eds), Outputs of Higher Education: Their Identification, Measurement, and Evaluation, pp. 51-58. Boulder, CO: WICHE.
- Erwin, T. Dary (1991). Assessing Student Learning and Development: A Guide to Principles, Goals, and Methods of Determining College Outcomes. San Francisco: Jossey-Bass. A one-volume "primer" on assessment techniques with particular emphasis on developing faculty-made examinations and scoring guides to apply to authentic student work. Treats basic psychometric principles in an effective but "user-friendly" fashion.
- Erwin, T. Dary (2000). The NPEC Sourcebook on Assessment. Volume 1: Definitions and Assessment Methods for Critical Thinking, Problem Solving, and Writing. Volume 2: Selected Institutions Utilizing Assessment Results. Washington, DC: National Postsecondary Education Cooperative (NPEC), National Center on Education Statistics. Volume 1 is a compendium of information about tests at the postsecondary education level used to assess three skills: critical thinking, problem solving, and writing. The Sourcebook itself offers comparative data about the policy-relevance of student outcomes measured in these three skill areas. Volume 2 summarizes and compares results from studies on eight named campuses.
- Ewell, Peter T., and Jones, D.P. (1986). The Costs of Assessment. In C. Adelman (ed) Assessment in American Higher Education, pp. 33-46. Washington, DC: U.S. Government Printing Office.
- Ewell, Peter T. (1988). Implementing Assessment: Some Organizational Issues. In Trudy Banta (Ed), Implementing Outcomes Assessment: Promise and Perils, New Directions for Institutional Research #59, pp. 15-28. San Francisco: Jossey-Bass.
- Ewell, Peter T. (1989). Hearts and Minds: Some Reflections on the Ideologies of Assessment. In Three Presentations from the Fourth National Conference on Assessment in Higher Education, 1-26. Washington, DC: AAHE.
- Ewell, Peter T., Finney, J.E., and Lenth, C. (1990). Filling in the Mosaic: The Emerging Pattern of State-Based Assessment. AAHE Bulletin, 42, pp. 3-7.
- Ewell, Peter T. (1991). "To Capture the Ineffable: New Forms of Assessment in Higher Education." American Educational Research Association (AERA), Review of Research in Education 17, pp. 75-125.
- Ewell, Peter T. (1993). The Role of States and Accreditors in Shaping Assessment Practice. In T.W. Banta and Associates, Making a Difference, pp. 339-356. San Francisco: Jossey-Bass.
- Ewell, Peter T. & Jones, Dennis P. (1994). "Data, Indicators, and the National Center for Higher Education Management Systems." New Directions for Institutional Research, 82, pp 23-35. Views the development of institutional performance indicators in higher education as part of a broader approach to management information and decision making.
- Ewell, Peter T. and Jones, Dennis P. (1996). Indicators of "Good Practice" in Undergraduate Education: A Handbook for Development and Implementation. Boulder, CO: National Center for Higher Education Management Systems (NCHEMS). Intended to provide colleges and universities with guidance in establishing an appropriate system of indicators of the effectiveness of undergraduate instruction, and to build on this foundation by cataloging a range of exemplary indicators of "good practice" that have proven useful across many collegiate settings.
- Ewell, Peter T. (1997). "Accountability and Assessment in a Second Decade: New Looks or Same Old Story?" In AAHE, Assessing Impact, Evidence and Action, pp. 7-22. Washington, DC: AAHE.
- Ewell, Peter T. (1997) "Strengthening Assessment for Academic Quality Improvement." In Planning and Management for a Changing Environment: A Handbook on Redesigning Postsecondary Institutions, Marvin W. Peterson, David D. Dill, and Lisa A. Mets (Editors). San Francisco, CA: Jossey-Bass, 1997, pp. 360-381. Historical survey of assessment efforts in the U.S. during the last decade in the context of increased accountability requirements, decreased financial resources, and increased experience with assessment on college campuses. Discusses relation of assessment to academic planning and syndromes to avoid.
- Ewell, Peter T. (2001). "Statewide Testing in Higher Education." Change 33:2 (March/April), pp. 21-27. Seeking alternatives to the "extraordinarily limited" repertoire of standardized testing for assessing outcomes of higher education.
- Ewell, Peter T. (2003). "An Emerging Scholarship: A Brief History of Assessment." Denver CO: National Center for Higher Education Management Systems (NCHEMS).
- Farmer, D.W. (1988). Enhancing Student Learning: Emphasizing Essential Competencies in Academic Programs. Wilkes-Barre, PA: King's College Press.
- Farmer, D.W. (1993). "Course-Embedded Assessment: A Teaching Strategy to Improve Student Learning," Assessment Update, 5 (1), pp. 8, 10-11.
- Feldman, K.A., and Newcomb, T.M. (1969). The Impact of College on Students. San Francisco: Jossey-Bass.
- Ferren, Ann (1993). "Faculty Resistance to Assessment: A Matter of Priorities and Perceptions." Commissioned paper prepared for the American Association for Higher Education. Analyzes faculty priorities to help understand why assessment is rarely valued by faculty. Argues that assessment must derive from widely agreed goals, must be connected to clear outcomes that the faculty see as beneficial, and must not be simply added to already overburdened faculty loads.
- Field, Kelly. (2006). "Panel to Give Colleges 'Gentle Shove' Toward Testing." The Chronicle of Higher Education, April 7.
- Forrest, A.W., and Steele, J.M. (1978). College Outcomes Measures Project. Iowa City: ACT.
- Frechtling, Joy A. (1995). Footprints: Strategies for Non-Traditional Program Evaluation. Washington, DC: National Science Foundation. A series of papers suggesting diverse strategies for assessing the impact of funded programs both short- and long-term, both intended and unintended.
- Gaither, Gerald H. (1995). Assessing Performance in an Age of Accountability: Case Studies. San Francisco, CA: Jossey-Bass. Case studies from several states and public institutions about the shift from campus-based assessment in the 1980s to state-based accountability systems in the 1990s.
- Gardiner, L. F. (1994). Redesigning Higher Education: Producing Dramatic Gains in Student Learning. Washington, DC: ERIC Clearinghouse, George Washington University.
- Gardiner, Lion F., Caitlin Anderson, and Barbara L. Cambridge, editors (1995). Learning through Assessment: A Resource Guide for Higher Education. Washington, DC: American Association for Higher Education.
- Gladwell, Malcolm. (2001). "Examined Life." The New Yorker, December 17, 86-92.
- Glassick, Charles E., et al., (1997). Scholarship Assessed: Evaluation of the Professoriate. Carnegie Foundation for the Advancement of Teaching. San Francisco, CA: Jossey-Bass. Companion to Ernest Boyer's widely-cited Scholarship Reconsidered: Priorities of the Professoriate, this report outlines standards for evaluating scholarship that transcend differences among disciplines: clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique.
- Guba, E.G., and Lincoln, Y.S. (1981). Effective Evaluation: Improving the Usefulness of Evaluation Results through Responsive and Naturalistic Approaches. San Francisco: Jossey-Bass.
- Hanson, G.R. (1988). Critical Issues in the Assessment of Value Added in Education. In T.W. Banta (ed), Implementing Outcomes Assessment: Promise and Perils, New Directions for Institutional Research #59. San Francisco: Jossey-Bass.
- Harris, J. (2001). Discerning is More than Counting. Forthcoming.
- Harvey, L. and Knight, P.T. (1996). Transforming Higher Education. London, UK: The Open University Press.
- Heffernan, J.M., Hutchings, P., and Marchese, T.J. (1988). Standardized Tests and the Purposes of Assessment. Washington, DC: AAHE.
- Herman, J.L., Morris, L.L., Fitz-Gibbon, C.T. (1987). Evaluator's Handbook. Newbury Park, CA: Sage Publications.
- Hersh, Richard H. and John Merrow (editors). (2005). Declining by Degrees: Higher Education at Risk. New York: Palgrave Macmillan.
- Higher Learning Commission (2002). "Assessment of Student Academic Achievement: Levels of Implementation." In Addendum to Handbook of Accreditation, Second Edition, Chapter Reference A, p. 21. Chicago, IL: North Central Association. A tool suggested by the North Central regional accreditation agency to assist institutions in strengthening programs for assessment of student academic achievement.
- Huba, M.E., & Freed, J.E. (1999). Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Boston: Allyn & Bacon.
- Hutchings, Pat (1995). From Idea to Prototype: The Peer Review of Teaching. Washington, DC: American Association for Higher Education.
- Hutchings, Pat (1996). Making Teaching Community Property: A Menu for Peer Collaboration and Peer Review. Washington, DC: American Association for Higher Education.
- Johnson, Valen E. (2002). "An A is an A is an A ... And That's the Problem." New York Times, April 14.
- Katz, Stanley N. (1994). "Defining Education Quality and Accountability." Chronicle of Higher Education, November 16, p. A56. An "op-ed" statement by the president of the American Council of Learned Societies (ACLS). Urges that colleges and universities "heed the wake-up call" of assessment from elementary and secondary schools and figure out how to define educational quality in terms that are worthy of higher education.
- Kellogg, Alex P. (2001). "Harvard Professor Becomes a Guru on Helping Students: Colleges nationwide turn to his book and his ideas." The Chronicle of Higher Education, August 17. Profile of Harvard professor Richard J. Light, his well-known assessment seminars, and the 1,600 student interviews that led to his recent book Making the Most of College: Students Speak Their Minds.
- Kuh, George D. (2001). "Assessing What Really Matters to Student Learning: Inside the National Survey of Student Engagement." Change, 33:3 (May/June) pp. 10-17, 66. Introduction to NSSE, the new national survey that examines what comes between input and output--namely, the process of learning.
- Kuh, George D. (2003). "What We're Learning About Student Engagement from NSSE: Benchmarks for Effective Educational Practices." Change, 35:2 (March/April) pp. 24-32. Analysis of the first three years of data from a new national survey on dimensions of undergraduate students' engagement in their learning.
- Lenning, O.T., Beal, P.E., and Sauer, K. (1980). Retention and Attrition: Evidence for Action and Research. Boulder, CO: NCHEMS.
- Lenning, O.T., Lee, Y.S., Micek, S.S., and Service, A.L. (1977). A Structure for the Outcomes and Outcomes-Related Concepts. Boulder, CO: NCHEMS.
- Light, Richard J., Judith D. Singer, and John B. Willet (1990). By Design: Planning Research on Higher Education. Cambridge, MA: Harvard University Press. A guide to doing research on college impact based primarily on the experiences of the path-breaking Harvard Assessment Seminar in the 1980s. Argues for a number of principles of sound assessment research design that are presented understandably and are applicable to a wide range of situations.
- Light, Richard (1990, 1992). Harvard Assessment Seminars. Cambridge, MA: Harvard University.
- Light, Richard (2001). Making the Most of College: Students Speak Their Minds. Cambridge, MA: Harvard University Press.
- Lingenfelter, Paul E. (2003). "Educational Accountability: Setting Standards, Improving Performance." Change, March/April, pp. 19-23. Suggestions for how to establish an effective accountability system focused on improving student learning.
- Linn, Robert L. and Joan L. Herman (1997). A Policymaker's Guide to Standards-Led Assessment. Denver, CO: Education Commission of the States. Analysis of policy implications involved in shifting from norm-referenced assessments (which compare each student's performance to that of others) to standards-led assessments which incorporate pre-established performance goals, many of which are based on real-world rather than "artificial" exercises.
- Loacker, G., Cromwell, L., and O'Brien, K. (1986). Assessment in Higher Education: To Serve the Learner. In OERI, Assessment in American Higher Education: Issues and Contexts. 47-62. Washington, DC: OERI, U.S. Department of Education.
- López, C.L. (1997). The Commission's Assessment Initiative: A Progress Report. Chicago, IL: NCA.
- López, Cecilia L. (2000). "Assessing Student Learning: Using the Commission's Levels of Implementation." Chicago, IL: North Central Association of Colleges and Schools, Commission on Institutions of Higher Education.
- López, Cecilia L. (1999). "Assessing Student Learning: Why we need to Succeed." Assessment and Accountability Forum: Journal of Quality Management in Adult-Centered Education. Special Edition: Regional Accrediting Bodies, 9:2, pp. 5-7,18.
- López, Cecilia L. (1999). "A Decade of Assessing Student Learning: What We Have Learned; What's Next?" Commission on Institutions of Higher Education.
- Maki, Peggy (2002). "Moving from Paperwork to Pedagogy: Channeling Intellectual Curiosity into a Commitment to Assessment." AAHE Bulletin, 54:9, May. The author, director of assessment at AAHE, argues that the thread connecting faculty members' lives inside and outside the classroom is intellectual curiosity about the kinds of learning that students should and do achieve, about the nature of evidence required to understand this learning, and about the habits of mind that characterize different professions.
- Maki, Peggy (2002). "Using Multiple Assessment Methods to Explore Student Learning and Development Inside and Outside of the Classroom." NetResults, NASPA's E-Zine for Student Affairs Professionals.
- Maki, Peggy (2002). "Developing an Assessment Plan to Learn about Student Learning." Journal of Academic Librarianship, January.
- Marcus, Dora, et al. (1993). Lessons Learned from FIPSE Projects II. Fund for the Improvement of Postsecondary Education. Washington, DC: US Department of Education. Descriptions of thirty programs funded by FIPSE from 1989 to 1991, ten of them focused on assessment; these include an assessment resource center, assessment seminars, measures of general education goals, comprehensive assessment in academic disciplines, and a regional assessment network.
- Martin, W. O. (1996). "Assessment of students' quantitative needs and proficiencies," in Banta, T.W., Lund, J.P., Black, K.E., and Oblander, F.W., eds., Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass.
- Mathews, Jay (2004). "How to Measure What You Learned in College." Washington Post, September 21.
- McClain, C.J. (1984). In Pursuit of Degrees with Integrity: A Value-Added Approach to Undergraduate Assessment. Washington, DC: AASCU.
- McClain, C.J., and Krueger, D.W. (1985). "Using Outcomes Assessment: A Case Study in Institutional Change." In P.T. Ewell (ed.), Assessing Educational Outcomes, New Directions for Institutional Research #47, pp. 33-46. San Francisco: Jossey-Bass.
- Mentkowski, M., and Rogers, G.P. (1988). Establishing the Validity of Measures of Student Outcomes. Milwaukee, WI: Alverno Publications.
- Mentkowski, M., Astin, A.W., Ewell, P.T., Moran, E.T., and Cross, K.P. (1991). Catching Theory Up with Practice: Conceptual Frameworks for Assessment. Washington, DC: AAHE.
- Mentkowski, M., and Associates (2000). Learning That Lasts: Integrating Learning, Development, and Performance in College and Beyond. San Francisco: Jossey-Bass.
- Merrow, John. "Grade Inflation: It's Not Just an Issue for the Ivy League." The Carnegie Foundation for the Advancement of Teaching.
- Messick, S. (1988). Meaning and Values in Test Validation: The Science and Ethics of Assessment. Princeton, NJ: ETS.
- Miller, Charles and Geri Malandra (2006). "Accountability/Assessment." Issue paper prepared for the Commission on the Future of Higher Education. US Department of Education.
- National Center for Education Statistics (1992). National Assessment of College Student Learning: Issues and Concerns. Washington, DC: US Department of Education.
- National Center for Public Policy and Higher Education (2000). Measuring Up 2000: The State-by-State Report Card for Higher Education. San Jose, CA: NCPPHE.
- National Commission on Excellence in Education (1983). A Nation at Risk: The Imperative for Educational Reform. Washington, DC: US Department of Education.
- National Education Goals Panel (1991). The National Education Goals Report. Washington, DC: National Education Goals Panel.
- National Governors' Association (1986). Time for Results. Washington, DC: NGA.
- National Institute of Education, Study Group on the Conditions of Excellence in American Higher Education (1984). Involvement in Learning: Realizing the Potential of American Higher Education. Washington, DC: US Government Printing Office.
- Nichols, J.O. (1989). Institutional Effectiveness and Outcomes Assessment Implementation on Campus: A Practitioner's Handbook. New York: Agathon Press.
- Nichols, James O. (1995). A Practitioner's Handbook for Institutional Effectiveness and Student Outcomes Assessment Implementation (2nd Edition). New York: Agathon Press. A self-described "cookbook" for organizing assessment efforts at the campus and department level. Includes planning charts, workbooks, and other aids for those charged with leading an assessment effort. The discussion is particularly oriented toward meeting assessment-related accreditation requirements.
- O'Banion, T. (1997). A Learning College for the 21st Century. Washington, DC: ACE, Oryx Press.
- Pace, C.R. (1979). Measuring the Outcomes of College. San Francisco: Jossey-Bass.
- Pace, C.R. (1990). The Undergraduates: A Report of their Activities and Progress in the 1980s. Los Angeles: Center for the Study of Evaluation, UCLA.
- Palomba, Catherine A. and Banta, Trudy W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass. Probably the most comprehensive current one-volume introduction to assessment techniques in higher education. Critically reviews and notes the limits of extant assessment methods, including tests, performances, portfolios, surveys, and other methods. Also treats implementation issues associated with assessment, such as organizational approaches and the utilization of assessment data.
- Pascarella, E.T. (1987). "Are Value-Added Assessments Valuable?" In Assessing the Outcomes of Higher Education, Proceedings of the 1986 ETS Invitational Conference, pp. 71-92. Princeton, NJ: ETS.
- Pascarella, Ernest T. "How Does College Influence Learning and Cognitive Development?" National Study of Student Learning (NSSL).
- Pascarella, Ernest T. and Terenzini, Patrick T. (1991). How College Affects Students: Findings from Twenty Years of Research. San Francisco: Jossey-Bass. The most comprehensive synthesis available of the scholarly literature on college impact. Presents and analyzes the results of over 2,600 studies of college students conducted during the preceding twenty years.
- Perry, William G. (1970). Forms of Intellectual and Ethical Development in the College Years. New York: Holt, Rinehart and Winston.
- Peters, R. (1994). "Some Snarks are Boojums: Accountability and the End(s) of Higher Education." Change, 26:6, pp. 16-23.
- Romer, Roy (1995). Making Quality Count in Undergraduate Education. Denver, CO: Education Commission of the States. Report by the then-Governor of Colorado on behalf of all US state governors concerning what parents and students expect of higher education and what research says about the characteristics of high-quality undergraduate education. Concludes with recommendations for steps to make higher education more accountable to its public purposes.
- Sax, Linda J. (1996). The American College Teacher: National Norms for the 1995-96 HERI Faculty Survey. Los Angeles, CA: Higher Education Research Institute, University of California at Los Angeles. Summarizes demographic, biographic, professional, and personal characteristics of college faculty based on a survey of 60,000 faculty members at nearly 400 different institutions of higher education.
- Schilling, Karen Maitland and Schilling, Karl L. (1993). "Professors Must Respond to Calls for Accountability." Chronicle of Higher Education, March 24, p. A40. An op-ed column arguing that faculty must take seriously the public's demand for evidence that students are learning, and learning the "right things." Suggests portfolio assessment as an effective strategy.
- Seldin, Peter (1993). "The Use and Abuse of Student Ratings of Professors." Chronicle of Higher Education, July 21, p. A40. An op-ed column lamenting the propensity of colleges to misuse student evaluations of faculty. Gives research-based advice for how to use such ratings intelligently and effectively.
- Seymour, D.T. (1991). On Q: Causing Quality in Higher Education. New York: ACE/Macmillan.
- Shulman, L.S. (1993). "Teaching as Community Property." Change, 15:6, pp. 6-7.
- Smith, M.K., Bradley, J.L., and Draper, G.F. (1994). Annotated Reference Catalog of Assessment Instruments, Catalogs A-G. Knoxville, TN: Assessment Resource Center, University of Tennessee Knoxville.
- Stevens, Floraline, et al. (1993). User-Friendly Handbook for Project Evaluation. Washington, DC: National Science Foundation. A "how-to" guide to effective assessment for project directors who have neither experience in, nor enthusiasm for, evaluation.
- Swing, Randy L., editor, (2001). Proving and Improving: Strategies for Assessing the First College Year. National Resource Center for the First-Year Experience and Students in Transition, University of South Carolina. "First-year seminars and other programs serving large numbers of first-year students are asked to prove their value more frequently than high status, discipline-based programs." First-year programs that thrive have strong outcome assessments that are closely connected to program goals. "Simply put, assessment findings provide protection and leverage in hard times and guidance for improvement anytime."
- Terenzini, P.T., Pascarella, E.T., and Lorang, W. (1982). "An Assessment of the Academic and Social Influences on Freshman Year Educational Outcomes." Review of Higher Education, 5, pp. 86-109.
- Terenzini, P.T. (1989). "Assessment with Open Eyes: Pitfalls in Studying Student Outcomes." Journal of Higher Education, 60, pp. 644-664.
- Thornton, G.C., and Byham, W.C. (1982). Assessment Centers and Managerial Performance. New York: Academic Press.
- Tinto, V. (1975). "Dropout from Higher Education: A Theoretical Synthesis of Recent Research." Review of Educational Research, 45 (Winter), pp. 89-125.
- Trombley, William (2001). "Trying to Measure Student Learning." National Center for Public Policy and Higher Education, National CrossTalk, 9:3 (Summer), pp. 1, 7-9. Report on efforts of Missouri's Coordinating Board of Higher Education to persuade colleges to test all students in public postsecondary education in general education, academic majors, and technical specialties.
- Walvoord, Barbara E. and Virginia J. Anderson (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass. Describes how faculty can create their own scoring and assessment rubrics through the technique of Primary Trait Scoring, which enables them to use the regular classroom grading process to generate useful assessment data.
- Warren, J. (1984). "The Blind Alley of Value Added." AAHE Bulletin, 37:1, pp. 10-13.
- Wiggins, Grant (1989). "A True Test: Toward More Authentic and Equitable Assessment." Phi Delta Kappan, May, pp. 703-713. Argues that misunderstanding about the relation of tests to standards impedes progress in educational improvement. Suggests that only tests that require the "performance of exemplary tasks" can truly monitor students' progress towards educational standards.
- Wiggins, Grant (1993). Assessing Student Performance: Exploring the Purpose and Limits of Testing. San Francisco: Jossey-Bass. Argues that good assessment systems should not just audit summative performance but should also provide feedback useful to students and information useful to teachers. Although intended primarily for a K-12 audience, this is one of the best extant discussions of the limits of testing theory and the merits of authentic assessment approaches.
- Wiggins, Grant (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance. San Francisco: Jossey-Bass. In this volume Wiggins puts the "current craze for 'assessment' of students on a higher, more commonsensical and moral ground," writes Theodore R. Sizer, chairman of the Coalition of Essential Schools. Wiggins, Sizer continues, "brings the sharp eye of a philosopher and the constructive humility of a veteran teacher to the tasks of explaining what we must know about our students' knowing and what this means for practice. This is a constructively unsettling book on our attitudes about learning and about our all-too-familiar and challengeable habits of assessing for it."
- Wiggins, Grant (1990). "The Truth May Make You Free, but the Test May Keep You Imprisoned: Toward Assessment Worthy of the Liberal Arts." In The AAHE Assessment Forum, Assessment 1990: Understanding the Implications, pp. 17-31. Washington, DC: The American Association for Higher Education. (Reprinted in Heeding the Call for Change: Suggestions for Curricular Action, Lynn A. Steen, editor. Washington, DC: Mathematical Association of America, 1992, pp. 150-162.) Philosophical reflections on the purposes of education in the liberal arts--or in basic (rather than applied) science or mathematics. Focuses on ten principles of education that testing tends to destroy (e.g., justifying one's opinions; known, clear, public standards and criteria; self-assessment in terms of standards of rational inquiry; challenging authority and asking good questions).
- Williams, C. G. (1998). "Using concept maps to assess conceptual knowledge of function." Journal for Research in Mathematics Education, 29, pp. 414-421. Examines the value of concept maps as instruments for assessing conceptual understanding, using the maps to compare the knowledge of function held by experts and by two groups of students (traditional and nontraditional) enrolled in university calculus.
- Zernike, Kate (2002). "Tests Are Not Just for Kids." New York Times, August 4. Analysis of the accountability pressures pushing politicians to demand K-12-style tests in higher education as a means of assessing the performance of postsecondary institutions.
- Zwick, Rebecca (2001). "What Causes the Test-Score Gap in Higher Education?" Change, 33:2 (March/April), pp. 23-37. Discussion of the US Department of Education's Office of Civil Rights' guidebook on high stakes testing, first released in April 1999 under the title Nondiscrimination in High-Stakes Testing: A Resource Guide. Key issue: "disparate impact."