by Jonathan Robe
Ever since Forbes and CCAP teamed up to rank colleges and universities, our use of student evaluations posted on ratemyprofessors.com (RMP) has been severely criticized (see, for instance, this article from InsideHigherEd or this piece in Change). The New York Times ran a piece earlier this year restating the common, skeptical view of RMP. My colleague Jonathan Leirer has responded to some of these criticisms in the past here.
There’s an interesting new article in the May 2010 edition of the electronic journal Practical Assessment, Research & Evaluation dealing with the validity of the RMP data as we use it in our rankings. The authors of this study, April Bleske-Rechek and Kelsey Michels, test three common criticisms made against RMP data: 1) only students with highly negative or highly positive comments post on RMP, 2) students who post on RMP are not typical and representative of the student body as a whole, and 3) student postings exhibit a positive relationship between easiness and quality.
While this analysis was restricted to only one school (and one not included in our rankings at that), the results offer additional support for our use of these data. As far as the three common assumptions are concerned, Bleske-Rechek and Michels conclude that their "findings put each of these notions in doubt." They found that the distribution of RMP ratings at the professor level is near-normal (rather than bimodal) and that students who post on RMP are typical students (in terms of GPA, year in school, and "similar in their focus on grading versus learning"). Furthermore, while the study confirms a positive relationship between quality and easiness at the instructor level, the authors warn that "it is misguided to jump to the conclusion that the association between easiness and quality is necessarily a product of just bias" rather than remaining open to the possibility that the RMP data may reflect that "quality instruction facilitates learning."
In other words, this study potentially undermines the argument that RMP data are widely susceptible to student misuse and therefore an improper measure of students' actual perception of the education they receive in the college classroom. While this study is itself somewhat limited (the authors call for further, expanded research), it reaffirms our view that student ratings of professors are genuinely important to other students, that the RMP data are a good measure of student satisfaction with the instruction they receive in college, and that RMP can be refined into an even better measure.