by Jonathan Robe
The annual college rankings published by US News and World Report (USNWR) have generated heated debate in the academic world over the past several decades. While the most serious flaw in the USNWR methodology is its focus on the inputs rather than the outputs of a college education, the rankings do demonstrate the usefulness of an outside assessment of higher education's performance.
But does the data used by USNWR really support the hierarchy of schools the rankings present? To compute each school's overall score, USNWR relies on seven separate categories of data, each assigned a certain weight by the magazine. As Science News reports, a new study by two mathematicians concludes that the USNWR ranking is highly arbitrary, since there is no "defensible empirical or theoretical basis" for the choice of weights for the seven components. Although the two authors acknowledge that the data published by USNWR has some value to prospective students and their parents, the weights assigned by the magazine reflect nothing more than individual preferences, so presenting only one possible weighting scheme is highly misleading.
This study analyzes the effect different weighting schemes have on the ranking of the 130 schools included in the USNWR Best National Universities list for 2008. Using high-dimensional geometry, the authors demonstrated that the schools' ranks varied widely depending on the weighting scheme selected. For instance, although Harvard, Yale, and Princeton almost always topped the list regardless of the weighting scheme, the ranks of all the other schools depended heavily on the component weights. A total of twenty-seven schools could appear in the top four, the study concluded, with one scenario putting Penn State at number 1.
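The basic mechanism here is simple: a school's overall score is a weighted sum of its component scores, so changing the weights can reorder the list. A minimal sketch in Python, using three made-up schools and three made-up components (the actual USNWR data uses seven components and different names):

```python
# Toy illustration of a rankings score as a weighted sum of components.
# School names, component names, and scores are invented for illustration;
# they are not the actual USNWR data.

schools = {
    "School A": {"reputation": 95, "graduation": 80, "resources": 90},
    "School B": {"reputation": 80, "graduation": 95, "resources": 70},
    "School C": {"reputation": 85, "graduation": 90, "resources": 65},
}

def rank(weights):
    """Return school names ordered by weighted score, best first."""
    scores = {
        name: sum(weights[k] * v for k, v in comps.items())
        for name, comps in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Emphasize reputation: School A tops the list.
print(rank({"reputation": 0.6, "graduation": 0.2, "resources": 0.2}))
# → ['School A', 'School C', 'School B']

# Emphasize graduation outcomes: School B jumps from last to first.
print(rank({"reputation": 0.2, "graduation": 0.6, "resources": 0.2}))
# → ['School B', 'School A', 'School C']
```

Even with only three schools, shifting weight from one component to another flips the order below the top spot, which is the study's point on a much larger scale.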
Interestingly enough, the researchers published the ranking interval for each school (the range of ranks the school receives across 95% of the possible weighting choices). The average ranking interval for the 130 schools is 35.5 (41.1 for those schools not in the top 25), indicating that, in the words of the authors, a school's "specific placement by USNWR is essentially arbitrary."
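The ranking-interval idea can be approximated by sampling many random weight vectors, ranking the schools under each, and recording the range of ranks each school receives. A hedged sketch, again with invented data (the authors' actual method uses high-dimensional geometry over the full weight space, not sampling):

```python
import random

# Approximate a "ranking interval" by Monte Carlo sampling of weight
# vectors. All school data is invented for illustration.

schools = {
    "School A": (95, 80, 90),
    "School B": (80, 95, 70),
    "School C": (85, 90, 65),
    "School D": (70, 75, 95),
}

def random_weights(n, rng):
    """Draw a random weight vector whose entries sum to 1."""
    raw = [rng.random() for _ in range(n)]
    total = sum(raw)
    return [x / total for x in raw]

rng = random.Random(0)
ranks = {name: [] for name in schools}
for _ in range(10_000):
    w = random_weights(3, rng)
    ordered = sorted(
        schools,
        key=lambda s: -sum(wi * ci for wi, ci in zip(w, schools[s])),
    )
    for pos, name in enumerate(ordered, start=1):
        ranks[name].append(pos)

for name, rs in ranks.items():
    print(name, "rank interval:", min(rs), "-", max(rs))
```

Wide intervals mean a school's published rank says more about the magazine's weight choices than about the school, which is the authors' conclusion.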
While concluding that selecting one individual preference over another for a ranking system of colleges is not useful to students and their parents, the authors do suggest that college rankings adopt an approach that allows individual users to access the data, choose their own weights, and develop their own ranking of schools based on the factors most important to them.
Of course, implementing such an approach to college rankings would require that all of the data be made publicly available (USNWR publishes data for only some of its components). Such a move would certainly be for the better, since it would increase transparency in higher ed for college-bound students and their parents. After all, students should be given as much information as they deem necessary as they make one of the biggest decisions of their lives.
Any ranking, including our recent one in Forbes, is guilty of having an "arbitrary weighting scheme," so the question becomes: which weights really matter to students? We think that our variables, which focus on educational outputs as opposed to the inputs used by USNWR, are better. But this paper is a useful reminder that rankings are always going to be somewhat arbitrary.
*Jonathan Robe is a research assistant for the Center for College Affordability and Productivity and an undergraduate student at Ohio University.