Thursday, May 13, 2010

RateMyProfessor Rears Its Head Yet Again

by Jonathan Robe

Ever since Forbes and CCAP teamed up to rank colleges and universities, our use of student evaluations posted on ratemyprofessor.com (RMP) has been severely criticized (see, for instance, this article from InsideHigherEd or this piece in Change). The New York Times ran a piece earlier this year restating the common, skeptical view of RMP. My colleague Jonathan Leirer has responded to some of these criticisms in the past here.

There’s an interesting new article in the May 2010 edition of the electronic journal Practical Assessment, Research & Evaluation dealing with the validity of the RMP data as we use them in our rankings. The authors of the study, April Bleske-Rechek and Kelsey Michels, test three common criticisms of RMP data: 1) only students with highly negative or highly positive opinions post on RMP, 2) students who post on RMP are not representative of the student body as a whole, and 3) the positive relationship between easiness and quality in student postings shows that the ratings reward easy graders rather than good teaching.

While the analysis was restricted to a single school (and one not included in our rankings, at that), the results offer additional support for our use of these data. As for the three common criticisms, Bleske-Rechek and Michels conclude that their "findings put each of these notions in doubt." They found that the distribution of RMP ratings at the professor level is near-normal (rather than bimodal) and that students who post on RMP are typical students: comparable in GPA and year in school, and "similar in their focus on grading versus learning." Furthermore, while the study confirms a positive relationship between quality and easiness at the instructor level, the authors warn that "it is misguided to jump to the conclusion that the association between easiness and quality is necessarily a product of just bias" rather than remaining open to the possibility that the RMP data may reflect that "quality instruction facilitates learning."
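To make those two statistical checks concrete, here is a minimal sketch in Python of how one might test whether professor-level ratings look near-normal rather than bimodal, and whether easiness and quality are positively correlated. The simulated ratings below are hypothetical stand-ins for RMP data; this is not the authors' code or dataset.

```python
import numpy as np
from scipy import stats

# Hypothetical professor-level ratings on RMP's 1-5 scale (illustrative only).
rng = np.random.default_rng(0)
quality = np.clip(rng.normal(loc=3.6, scale=0.8, size=500), 1, 5)
easiness = np.clip(0.5 * quality + rng.normal(loc=1.5, scale=0.7, size=500), 1, 5)

# D'Agostino-Pearson test of normality: a large p-value is consistent
# with a near-normal (rather than bimodal) distribution of ratings.
stat, p_norm = stats.normaltest(quality)
print(f"normality test: stat={stat:.2f}, p={p_norm:.3f}")

# Pearson correlation between easiness and quality at the instructor level.
r, p_corr = stats.pearsonr(easiness, quality)
print(f"easiness-quality correlation: r={r:.2f}, p={p_corr:.3f}")
```

As the study notes, a positive correlation on its own cannot distinguish rating bias from the possibility that easier-seeming courses are easier because they are well taught.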

In other words, this study potentially undermines the argument that RMP data are so widely susceptible to student misuse as to be an improper measure of how students actually perceive the education they receive in the college classroom. While the study is itself somewhat limited (the authors call for further, expanded research), it does reaffirm our view that student ratings of professors genuinely matter to other students, that RMP data are a good measure of students' satisfaction with the instruction they receive in college, and that RMP can be refined into an even better measure.

1 comment:

Overlook said...

I'm wondering if an old fart alumnus can rate their professors retroactively. I have at least one that I'd like to rate.

The detractors of the Forbes/CCAP rankings like to flap their jaws and offer no solutions to address the problems that they have with the rankings. Why aren't they also pointing out the problems with the USN&WR rankings?

And while they complain about two criteria - Who's Who and Rate My [liberal indoctrinating] Professor - I think the real problem is that the critics haven't bothered to find out how the rankings are done.

If CCAP took those ingredients out of their ranking method, the same sorry jackasses would find something else to complain about.

I wonder if Patricia McGuire thinks the image of God has been replaced by a mirror.

"Once again, greed trumps truth while masquerading as a consumer service." Huh? Isn't CCAP a research organization that provides research results at no cost to the consumer. So it becomes quit evident that Patricia McGuire does not check facts before publishing her fiction.

"Why not add a category for the number of campus sluts outed on JuicyCampus.com? It is my understanding that the "slut" category was considered, but there was a problem Patricia. With the way you get around, the rankings would be terribly skewed.