Report of the Teaching Conference on Evaluation of Teaching


REPORT FROM THE NBFC RU TEACHING CONFERENCE: Responding to “A Proposal to Improve the Evaluation of Teaching at Rutgers University,” held on October 27, 2017, at the Busch Campus Center.


Introduction

The attached draft report “A Proposal to Improve the Evaluation of Teaching at Rutgers University” was prepared by the Task Force on Evaluating Teaching, commissioned by the Senior Vice President for Academic Affairs, Barbara Lee. Unlike earlier reports from the University Senate and the New Brunswick Faculty Council, this proposal sets up a timetable for implementing a wide-ranging set of procedures to evaluate teaching across all units in the university. In addition to proposing various processes for evaluating teaching, the report proposes that an array of administrative offices be responsible for setting up and administering the evaluation procedures.

The NBFC teaching conference (including both panelists and discussion groups) focused on four areas of the report: each of the three modes of evaluating teaching—student surveys, peer observations, and teaching portfolios—and the implementation process itself. There is little question that evaluating teaching is an important task; the question is how to do so in a meaningful way. And, as one panelist suggested, as important as the instrumentality might be, a qualitative objective is to create a culture within departments and the university where teaching is part of an ongoing conversation, and not merely a summative statement on Form 1A.

Summative Evaluations of the Proposal

SIRS: It should not be used in summative evaluations for rehiring, retention, or promotion; if used for formative evaluation, it needs to be corrected for its many biases.

PEER OBSERVATION: If done adequately (with faculty trained as observers and a systematic set of questions and areas to examine), peer observations are useful for formative evaluations; they are not appropriate for summative evaluations.

TEACHING PORTFOLIO: A good way to see into a teacher’s work is to see a portfolio—e.g., teaching philosophy statement, syllabi, assignments, student work with comments, etc. Evaluating the portfolio would require a flexible set of criteria, and training of the evaluators.

IMPLEMENTATION: The timetable for the proposal is too rapid—there need to be intermediate stages to allow development of instruments and methods of evaluation and evaluative criteria.

1. SIRS

The two discussion groups that focused on SIRS, as well as the panelist who discussed SIRS, were nearly unanimous in rejecting the use of SIRS for any summative evaluation for rehiring, retention, promotion, or tenure. Student surveys show a marked bias against women and minorities—biases that are part of everyday cultural prejudices. There are circumstantial biases as well—large classes do less well; STEM courses and writing courses, likely to be very demanding in particular ways, also do less well. To utilize this blunt instrument for retention and promotion processes is to further entrench these cultural prejudices. Both discussion groups pointed out the special vulnerability of PTLs in relation to SIRS. The “yes” and “no” questions of the proposal did not pass muster. On the positive side of SIRS: the written comments are the most useful part of the survey for faculty.

If SIRS is used in formative evaluations, it should be further refined to yield useful data; there should be a way to correct for bias; and there must be a method to bring the return rate close to 100%. Departments should not be charged (as in the proposal) with improving rates of return; that should be done at the top of the chain. One way to begin refining the evaluations would be to find out what students think constitutes good teaching in general—one panelist reminded us that at Rutgers, prior to SIRS, students evaluated teachers and published the results on their own, a precursor to ratemyprofessors.com.

2. Peer Observations

Comment: The Proposal uses the term “Peer Review” to cover two separate forms of evaluation—a) peer observation of classes, and b) a teaching portfolio. We note that “Peer Review” or “Peer Evaluation” is also the method of examining the research portfolio. At the conference, we considered “peer observations” and “teaching portfolios” as two separate modes of evaluation, and we note that in both cases “peer review” cannot be done without training and the development of standards.

Although peer observations can be very helpful for formative evaluations, all agreed that peer observations should not be used for summative evaluation. In order to be effective and uniform, those who observe would need to be trained to do so; there must be a clear description of what activities and materials are observed and what criteria (standards) are used to evaluate and rate the observation; and the frequency of observations also needs to be determined. For example, in one department where peer observation has been part of evaluating teaching for a decade or more, a junior faculty member is observed on four occasions by a pair of observers over the course of the tenure-track process. If the proposal to implement peer observations is mandated, then it was deemed essential that the resources required to fully comply with the proposed policy be earmarked. Peer observation will be especially difficult for departments with large numbers of PTLs, particularly because of the proposed requirement that PTLs be evaluated for every course, every semester—this seemed excessive, especially for PTLs who teach regularly. Some participants also indicated that there were not sufficient numbers of senior faculty to do this work.

3. Teaching Portfolios

Teaching portfolios seemed to be the most promising way to see into and evaluate the work of a teacher. A checklist of possible items for inclusion would aid a teacher in gathering materials—from syllabi to assignments to student work with comments. Developing a portfolio is an individual’s task, but evaluating it would be the task of colleagues. Here, too, training and resources would be necessary to carry out the work of evaluating portfolios. In addition, in some departments where there is a division of labor (research and teaching) there is also a division of faculty position (tenured and NTT), such that NTTs feel it would be inappropriate for non-teaching faculty to evaluate the teaching of NTT faculty. One panelist with extensive experience evaluating portfolios for both formative and summative purposes indicated just how labor-intensive the process of review is. This panel met separately from the SIRS and PEER OBSERVATION panels and did not have the benefit of their judgments; it discussed the use of both instruments for inclusion in the teaching portfolio at the discretion of the faculty member under review.

4. The Implementation Process

Participants in discussion found the planned rollout much too rapid and were concerned that this would endanger its mission. There are several pilots underway, we learned, and there probably should be pilots in each school—this would ease the pressure of the proposed deadlines. Participants felt that it was especially unrealistic to propose that departments craft an evaluation plan by March 1, 2018, because 1) some department chairs are overwhelmed with other tasks and may delay the formation of the proper committees, and 2) it will take time for departments to investigate best practices and give due diligence to the various other intermediary steps involved in crafting a plan and developing the standards by which to evaluate teaching. Participants are concerned that the March 1st deadline will lead departments to develop a “cavalier attitude” toward this very important task and may lead them to gravitate conservatively toward the flawed and biased practices that are already in place.

There was broad agreement that the methods and criteria used to assess good teaching should be applied without regard to rank. However, participants stipulated that the frequency of teaching evaluation depends on the nature of the appointment. In principle, all professors should undergo multi-methodological teaching evaluations in accordance with the task force proposal. However, inadequate resources will likely preclude thorough and frequent teaching evaluations of all faculty, necessitating that some faculty (e.g., full professors) be at least partially exempt.

Respectfully submitted to the NBFC by Martin Gliserman

Attachment: TeachingTaskForceDraft.pdf (85.31 KB)