Rutgers University Senate
Best Practices in Assessment of Teaching

As Revised May 3, 2002


Input was received from the following:

Professor O'Donnell stated that, although student course evaluation data are problematic (for example, they are anonymous, there is no distinction between students who attended class and those who did not), when they are correlated with student learning as measured by test performance, they are, in general, more reliable than other mechanisms, including peer review of teaching. In particular, peer review, if conducted without guidelines and agreed-upon standards, is very unreliable because each individual uses his/her own personal teaching style to judge the teaching ability of others.

Professor O'Donnell stated that the most reliable use of student course evaluations is when faculty who teach different sections of the same course with a common syllabus are compared. Comparing evaluation scores from a large lecture course with scores from a small upper-level or graduate seminar is not reliable. Furthermore, evaluation scores for courses with large enrollments, required courses, and courses involving quantitative material tend to be lower than scores for other courses. She added that the current language on the standard student course evaluation form stating that the written comments are for the instructor's own use is misleading, as those comments have been used for personnel decisions in some departments.

Professor Clemens discussed the History Department's outstanding and, perhaps, unique approach to mentoring and evaluating teaching. The Department's "Statement of Responsibilities for the Teaching Effectiveness Committee" is attached as an appendix to this report. A Departmental Teaching Effectiveness Committee has been created that assigns a senior faculty member to mentor each untenured assistant professor. In addition, a visitation committee, consisting of two tenured faculty members and a graduate student, is created for each untenured faculty member. The visitation committee visits the untenured faculty member's class four times before reappointment and four more times before the promotion/tenure recommendation is made. The graduate student member of the committee has completed a four-credit course on teaching offered by the History Department. The Department has developed guidelines for how the evaluation and visitation process should be conducted.

The visitation committee writes a report after observing the faculty member's class twice; the report also includes a statement from the individual being evaluated. Four such reports are produced, and are used during the department's consideration of the individual for promotion and tenure. The teaching mentor does not participate in the department's discussion during the reappointment or promotion/tenure review process, but is eligible to vote. Some new associate professors hired with tenure are assigned a mentor as well. According to Professor Clemens, the results of the peer evaluation correlate highly with student course evaluation results.

Professor Gigliotti discussed how student course evaluation results are analyzed and reported. There is no database in which faculty course evaluation scores are reported over time; reports are sent to department chairs and are posted on a University website. He stated that departmental means are very consistent over time; the critical issue is what the department does with the course evaluation scores.

Dr. Devanas discussed the advantages of creating a teaching portfolio. The faculty member describes his/her teaching philosophy and includes information on courses, teaching methods, and advising, along with copies of graded or edited papers, tests, assignments, and study questions. Creating the portfolio encourages the faculty member to reflect on what he/she is doing, and provides decision-makers with helpful information on the quality of the individual's teaching. According to Dr. Devanas, Cook College requires each faculty member to develop a teaching portfolio. The Teaching Excellence Center provides help, instruction, and support to faculty members who wish to develop a teaching portfolio; its website lists resources and other information.

Best Practices and Recommendations

The University Senate made the following recommendations, based on the above input:

  1. The statement on the reverse side of the student course evaluation form that "This information is intended to be used by the instructor to modify or improve the course" should be deleted.
  2. The process of mentoring, peer observation and peer evaluation used by the History Department-New Brunswick is commended. Departments should assign a teaching mentor to every first year untenured faculty member in consultation with that faculty member. Mentoring activities may include meeting periodically to discuss teaching, visiting each other's classes, co-teaching courses, reviewing instructional materials, and other aspects of teaching and student advising. All departments should conduct peer evaluations, taking steps to ensure that there are consistent guidelines and procedures for this process.
  3. Departments should encourage faculty to develop a teaching portfolio for use in evaluations for reappointment, promotion and tenure.
    1. Written comments from students can be included in the portfolio. All written student comments should be available, at least in the supplementary materials, to every level of the reappointment, promotion and tenure process.
    2. A personal statement concerning teaching philosophy and accomplishments as well as scholarship and service should be included with the reappointment or promotion/tenure packet.
  4. Each department should securely keep on file all of the information contained on the completed student course evaluation forms for at least ten years, or since an individual faculty member's last academic promotion, whichever is longer.
  5. Candidates for promotion should be able to list student course evaluation scores for years prior to their last promotion, particularly to demonstrate any changes in student course evaluation scores prior to and after the last promotion.
  6. The Teaching Excellence Center should be asked to maintain a database for each faculty member of student course evaluation scores and summary statistics that will be provided to the individual faculty member, the department chair, and the dean upon request.
  7. The University should report to each department the distribution of raw scores for each course and instructor, as well as the mean scores, as now reported.

Since student-evaluation scores are most useful when comparisons are made among similar types of courses, a best practice would be for each department to divide its courses into appropriate categories of comparable courses for teaching-evaluation or comparison purposes. These categories could include, for example, large lecture courses, laboratory courses, studio courses, seminar courses, honors courses, required courses, graduate courses, and research courses. Each department, from the distributions of raw scores provided by the University, would calculate the departmental mean for each category of courses appropriate to that department. It would then normalize the student course evaluation scores for each course and instructor against the appropriate departmental mean, i.e., divide the raw scores by the appropriate departmental mean scores. The department would use and report these normalized scores, as well as the raw scores for each instructor, in its personnel decisions and recommendations. In addition, each department might also wish to compute, use, and report the full score distributions and/or other statistical quantities, such as the grouped median or standard deviation.
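The normalization described above (dividing each instructor's raw score by the departmental mean for the relevant course category) could be computed as in the following sketch. The course categories, instructor names, and scores here are purely hypothetical illustrations, not actual University data.

```python
# Sketch: normalize raw course-evaluation scores against departmental
# category means, as recommended above. All data below are hypothetical.
from collections import defaultdict
from statistics import mean

# (course category, instructor, raw mean evaluation score) -- illustrative only
raw_scores = [
    ("large lecture", "Instructor A", 3.8),
    ("large lecture", "Instructor B", 4.2),
    ("seminar",       "Instructor A", 4.6),
    ("seminar",       "Instructor C", 4.4),
]

# Step 1: group raw scores by category and compute each category's
# departmental mean.
by_category = defaultdict(list)
for category, _, score in raw_scores:
    by_category[category].append(score)
category_means = {cat: mean(scores) for cat, scores in by_category.items()}

# Step 2: normalized score = raw score / departmental mean for that category.
normalized = [
    (instructor, category, score / category_means[category])
    for category, instructor, score in raw_scores
]

for instructor, category, norm in normalized:
    print(f"{instructor} ({category}): normalized score {norm:.3f}")
```

A normalized score near 1.0 indicates an instructor scoring at the departmental mean for that category of course; the department would report these alongside the raw scores.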