Evaluation of Teaching Performance for Improvement and Accountability
1. Evaluation of Teaching Performance for Improvement and Accountability
Center for Teaching Excellence and Learning Technologies: A Colloquium
November 1, 2007
Larry Gould
2. The Current State of Affairs
1. The primary instrument is less than useful for its many purposes (e.g., formative feedback, personnel evaluation, program evaluation, advisement; who is it for, and for what?)
2. Poor applicability to the virtual learning environment: what gets evaluated?
3. Instruments inconsistent with policy
4. Administration of TEVAL does not inspire confidence in the results
5. Less than efficient processing and analysis
3. An Alternative Future: Pedagogical Responsibility
1. Only the things that matter*
• Were exams and other graded materials returned on a timely basis?
• Was there sufficient feedback on tests and papers?
• Were students tested on material covered in the course?
• Were course materials well prepared?
• Did the course unfold as promised in the syllabus?
• Was the instructor accessible?
• No more than ten questions related to pedagogical responsibility, plus comments for improvement
2. Virtual learning environment – support systems, use of technology, receipt of materials, etc.
*Adapted from Stanley Fish, “Who’s in charge here?” Chronicle of Higher Education, 2/5/2005.
4. Abuses and Misuses
1. Beyond student input: overreliance on ratings in the evaluation of teaching
2. Making too much of too little
• The relationship between teaching and learning: how does a student know?
• Biases (gender, foreign-born instructors, ethnicity, attractive professors, easy graders, untenured professors, personality, class size, type of class, subject area, required courses, instructor contamination, etc.)
• Cutting the log with a razor: is a 3.0 really different from a 3.1?
3. Not enough information to make a good judgment (one course does not a teacher make)
5. Abuses and Misuses (continued)
4. Questionable administration of ratings
5. Using the elements of the instrument inappropriately (instructional delivery skills vs. content expertise questions)
6. Confusion about, and lack of attention to, purpose, learning environment, and efficiencies in design processes
7. Failure to conduct research to assess validity and reliability
8. Inadequate attention to the method of administration (online vs. paper, timing, who administers, etc.)