Wednesday, August 11, 2010

In Defense of RMP, Part Two

By: Ryan Brady

In past years, CCAP has taken much criticism for using data from RateMyProfessors.com as a metric in our annual college rankings published with Forbes (see our previous defense of RMP here). For those unfamiliar, RateMyProfessors.com (RMP) is a website that lets students evaluate their professors on a number of criteria, most prominently overall quality. Many critics of the website argue that a professor who easily hands out good grades will probably be rewarded with higher ratings from students, and that the data the website provides are therefore useless. Recognizing that this criticism very likely has some validity, we have always inversely weighted the “ease” category reported by students and folded it into the professor’s RMP score that goes into a school’s final ranking.
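To make the inverse weighting concrete, here is a minimal sketch (in Python) of how such a composite score might be computed. The 75/25 weight split, the 1-to-5 rating scale, and the simple linear inversion are assumptions for illustration only; they are not CCAP’s published formula.

    # Illustrative sketch only: the 0.75/0.25 weights and the linear
    # inversion below are assumptions, not CCAP's actual methodology.
    def composite_rmp_score(overall: float, ease: float) -> float:
        """Combine RMP 'overall quality' and 'ease' ratings (both assumed
        to be on a 1-5 scale) so that easier graders are not rewarded."""
        inverted_ease = 6.0 - ease  # an ease of 5 (easiest) becomes 1, and vice versa
        return 0.75 * overall + 0.25 * inverted_ease

    # A highly rated but very easy professor ends up scoring lower than a
    # slightly lower-rated professor who grades harder.
    print(composite_rmp_score(overall=4.8, ease=4.9))  # ~3.88
    print(composite_rmp_score(overall=4.5, ease=2.0))  # ~4.38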

However, we wanted to test this criticism more carefully, and a relatively new website, Campusbuddy.com, provides data that allow us to do exactly that. Campusbuddy.com reports the aggregate grades given by individual professors at hundreds of public institutions. Since the data are available at the individual-professor level, they can be compared with the same professor’s student evaluations on RMP.

Using both data sources, we took a sample of over 1,500 professors from 10 randomly selected public schools that appear in the CCAP/Forbes annual listing of America’s Best Colleges. For these 10 schools, we obtained data for professors in four disciplines: Economics, Education, English, and Physics. Comparing each professor’s RMP overall rating with the average G.P.A. of his or her students, we found a correlation of 35.86% between overall rating and average G.P.A. The same correlation was computed separately for each department; the results are as follows:

Economics: 26.13%
Education: 17.28%
English: 33.40%
Physics: 34.33%
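For readers curious about the mechanics, the sketch below shows one way the pooled and per-department correlations could be computed. The file name and the column names (“overall”, “ease”, “avg_gpa”, “department”) are hypothetical, since the merged RMP/Campusbuddy data set is not public; substituting the “ease” column for “overall” produces the second set of correlations reported further down.

    # A minimal sketch, assuming a merged CSV of RMP ratings and
    # Campusbuddy G.P.A.s with hypothetical column names.
    import pandas as pd

    profs = pd.read_csv("rmp_campusbuddy_merged.csv")

    # Pooled correlation between overall rating and average G.P.A.
    pooled = profs["overall"].corr(profs["avg_gpa"])
    print(f"All professors: {pooled:.2%}")

    # The same correlation computed within each department
    by_dept = profs.groupby("department").apply(
        lambda d: d["overall"].corr(d["avg_gpa"])
    )
    print(by_dept.map("{:.2%}".format))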

There is no doubt that a correlation does in fact exist between the two variables, but it is much lower than one might have expected. Most of the disciplines do not differ much from the overall correlation, although Education’s correlation is noticeably lower than the others. (This is likely because Education professors are notorious for giving out inflated grades: the average G.P.A. for Education professors in this sample was 3.6, compared with 2.69 in Economics.)

When we instead correlate RMP’s “ease” ratings with average G.P.A.s, the relationship is much stronger. For all professors, the correlation rises to 44.15%. The results for each department are as follows:

Economics: 43.55%
Education: 33.53%
English: 39.55%
Physics: 47.09%

These results indicate that the “ease” rating on RMP is a fairly accurate indicator of the grades these professors give out. Our methodology for the CCAP/Forbes ranking therefore does account for professors’ tendency to “buy” good evaluations with high grades: because the “ease” factor is inversely weighted, the ranking does not encourage grade inflation.

While our ranking attempts to minimize the influence of grade inflation, grade inflation itself remains a problem for higher education. Colleges and universities exist to transmit knowledge to students and to prepare them to be productive citizens. Yet a 2010 National Bureau of Economic Research (NBER) study finds that students spend far fewer hours studying today than they did fifty years ago: the average college student spent 40 hours per week on academics in 1961, but by 2004 that figure had fallen to 27 hours.

Why are students working less? The advent of professor evaluations probably has much to do with it. These evaluations create incentives for both students and professors to inflate grades: as professors lower standards and assign higher grades, students in turn reward them with better evaluations. George Leef calls this the “student-professor non-aggression pact.” The pact, however, is detrimental to student learning and to the creation of human capital. Students need to actually learn, not just receive subjectively high grades, for human capital to develop. Learning is what truly matters when students hit the workforce, and our higher education system badly needs serious leadership to tackle the grade inflation problem.
