As faculty evaluation deadlines approach, implicit bias may play a more significant role than students expect. Early this October, the journal Assessment & Evaluation in Higher Education published the research of Hamilton Professors Ann L. Owen, Erica De Bruin and Stephen Wu. The report, “Can you mitigate gender bias in student evaluations of teaching? Evaluating alternative methods of soliciting feedback,” was also recently the subject of a Forbes article that outlines the professors’ research and conclusions. The project was an interdisciplinary endeavor, as Professors Wu and Owen are members of the Economics Department and De Bruin is a member of the Government Department.
The research grew out of a larger committee effort within Hamilton’s faculty. As Owen told The Spectator, “Student voice is an important part of that process, but also important are peer reviews and self-evaluation. Professor De Bruin, Professor Wu and I were on [an ad hoc committee of the faculty, the Committee on Evaluation of Teaching] and the research project was an outgrowth of the committee work.”
In their article, the professors note that while students may be aware of bias in general, they are often unaware of its impact on faculty evaluations or of how the way these surveys are conducted may be shaped by it. “Moreover, to our knowledge,” they write, “no studies have tested interventions to mitigate bias in qualitative comments, where it may be most acute.”
To conduct their research, the professors organized a randomized controlled trial to see how two interventions could improve the utility of the student feedback provided. One of these interventions “varies the instrument that solicits feedback from students,” while the second “delays the timing at which the feedback is solicited.” In this second technique, students would be asked about their faculty experiences at the start of the next semester, rather than at the end of the current one. These ideas are the product of scholarship showing that bias can be heightened in ambiguous situations and as a result of overwork and cognitive exhaustion, such as during the finals period.
In the discussion portion of their research, the professors write, “we find that neither intervention significantly reduced the bias against female faculty.” While the research of De Bruin, Owen and Wu did not identify a survey format that mitigates bias, there are several important conclusions to draw from their work, chief among them “the difficulty of removing bias from student feedback.” This finding, in turn, highlights the need for further efforts to capture student experiences and to survey students in ways that reduce bias, an ongoing project that would support both faculty and administrators.
When asked about the validity of student evaluations following the outcome of the research, Owen told The Spectator, “I think that students have an important perspective on teaching that nobody else has and we should ask students about their experience in the class. However, we need to use this feedback carefully. We should be cognizant of the bias in both positivity and specificity when interpreting student feedback, try to corroborate the evidence in student feedback with other types of evidence, and be clear about how we are using student feedback in the teaching evaluation process. It is not acceptable to simply declare a person a good or bad teacher based exclusively on student feedback.”