by Luke Myers
An insightful op-ed and an important news article appeared on Inside Higher Ed yesterday, and the connection between them deserves comment. First, Jason Solomon and Nancy Rapoport criticize the criteria of the U.S. News college rankings, particularly the peer “quality assessment” survey, which relies on schools’ reputations among university administrators. The authors note, “There’s an information vacuum on relative educational quality, and no real incentive for any university administrator, acting alone, to figure it out.” The result? Colleges do not compete on educational quality but instead on the criteria available to and reported by U.S. News—criteria that push institutions to focus on “raising the incoming credentials of their students by throwing money at the crème de la test-takers.”
Unlike most critics of U.S. News, however, Solomon and Rapoport do not call for a boycott of the rankings but rather for institutions to work with the magazine to “come up with a more rational means of doing comparative assessment of educational quality.” They acknowledge that rankings are not going anywhere, so what we need are better rankings. It was this philosophy that drove the Center for College Affordability and Productivity’s entrance into the field of college rankings, with the second annual national ranking being released later today in conjunction with Forbes. At CCAP, however, we will be the first to admit that our rankings are still very imperfect, mostly due to the lack of data that would be necessary to truly judge the educational outcomes produced at most schools.
However, a piece by Doug Lederman, also posted yesterday, reveals that some institutions are already providing the kind of information that could help fix the problems noted by Solomon, Rapoport, and CCAP. Lederman reports on the Transparency by Design program, developed by a group of primarily online institutions serving adult learners, which he calls “perhaps the most aggressive and expansive” accountability program of the past few years. Participants aim to provide tangible information about their graduates’ learning outcomes and have agreed to use common definitions and measures in doing so. The program’s website reports statistics for each school on student engagement, general educational learning outcomes, and graduates’ satisfaction with their education, drawing on nationally normed surveys and common questions asked of all graduates. The group is currently exploring ways to gather and report information about employers’ satisfaction with their institutions’ graduates.
The information being collected and reported by the institutions participating in Transparency by Design may not be a panacea for ailing quality in higher education, but it is a step in the right direction. If this information were made public by all colleges and universities, college rankings could judge schools on a more rational criterion: whether institutions accomplish what should be their primary goal, providing students with the skills necessary to live their lives to their satisfaction after graduation. Only then will institutions have an incentive to compete to better accomplish this goal, rather than competing, as they do now, on the basis of wealth and selectivity.