by Luke Myers
On Monday we had a very interesting and engaging conversation about the role of college rankings in higher education (you can find video here and coverage here). While most of the presenters supported college rankings on the whole, Cliff Adelman of the Institute for Higher Education Policy denied they had any use. He argued that college rankings are not taken seriously by anyone but “insecure academic administrators,” are nothing more than entertainment that belongs on the sports page, and have no effect on the lives and learning of college students. He rejected the idea that rankings can help encourage quality in higher education and called instead for a Bologna Process-inspired system of accountability.
While we at CCAP wholeheartedly support Adelman’s call for better assessment of learning outcomes in higher education, I believe he is dead wrong about the role college rankings can play. Numerous empirical studies suggest that college rankings have tangible and quantifiable effects on students’ college choice and universities’ management practices, as discussed in our study. While Adelman may dismiss these rankings as mere “entertainment,” thousands of high school students and parents rely on them as part of their college search process. These documented effects cannot be dismissed by mere assertion.
There are two problems with claiming college rankings have no role in providing accountability for higher education. First, for any form of accountability to be useful to the consumers of higher education and the general public, it must be easily understood. The question-and-answer period after Adelman’s 20-minute presentation demonstrated that even the highly educated members of the audience were not clear on how the Bologna Process system of accountability operates. The process involves qualification frameworks within which universities are supposed to operate and “diploma supplements” that list the exact skills a recipient has demonstrated. Yet with thousands of institutions of higher education in the United States, it is unclear how this complex information, however important to collect, would be useful to high school students who may receive e-mails from over 40 schools in a single day.
Adelman’s support of the Bologna Process represents an ideal of accountability that is by academics, for academics. High school students and their parents do not have time to sort through in-depth information about the hundreds of universities they may be considering. As Adelman struggled to clarify the complex process to the audience, another presenter at the conference captured this sentiment best when he leaned over to me and said, “This is why people like rankings.” Rankings distill information into a form the average college-bound student can use to compare many different institutions.
Second, in light of this demand for easily understood comparative information, Adelman’s remarks ignore the power of incentives. It is true that rankings themselves do not directly affect the lives and learning of college students, but they can provide the incentives for colleges to engage in practices that do. Consumers do pay attention to rankings, and schools’ resulting desire to move up does affect students’ educational experiences on campus.
I applaud Adelman’s desire to focus on the learning outcomes of higher education, such as applying knowledge, using complex data, and communicating to various audiences. These are the criteria by which institutions should be judged. Focusing on learning outcomes and using rankings are not mutually exclusive. If rankings were based on these criteria, schools’ desire to move up in the rankings would provide the necessary incentive for institutions to spend their resources on practices that promote these learning outcomes.
But if the information about whether institutions actually deliver these outcomes is too unwieldy for consumers to quickly compare the quality of one school with another, there is no way to hold institutions accountable by these definitions of quality. Where is the accountability if the information is presented in a way that neither consumers of higher education nor employers can comprehend? One can create the best system for gathering data on the most important learning outcomes of higher education, but that data is useless for accountability unless it is easily accessible to and understood by the public doing the accounting.
Rankings are not a panacea for the current accountability gap in higher education. Current rankings may actually be providing perverse incentives to administrators and encouraging inefficient use of resources. The focus on learning outcomes stressed by Adelman is vital to improving quality at colleges and universities. But this better source of data must be presented in a way that allows one of the most important forms of accountability to function: consumer choice. If there is one thing Americans appreciate and can easily understand, it is rankings, and as a result they will continue to find them an important part of their college search process, even if they are published on the sports page.