Tuesday, August 12, 2008

Time to Measure What Matters

By Jim Coleman

Do current college rankings really measure the quality of schools? There’s reason to think not. Currently, the most widely used college ranking is published by U.S. News and World Report (USNWR). Purportedly, these rankings aim to identify the “best” colleges in the country. However, there is likely a large gap between how USNWR defines “best” and what consumers take “best” to mean.

It’s reasonable to assume that typical high school seniors and their parents regard a good college as one that is, among other things, reasonably affordable and offers a quality education. Most students are looking for schools that will increase their human capital (in the form of knowledge and skills) without breaking the bank. In essence, consumers judge a school to be good if it offers value added (the benefit gained minus the cost paid).
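To make the arithmetic concrete, here is a minimal sketch in Python of the value-added calculation described above. The school names and dollar figures are entirely hypothetical, and a real “benefit” figure would of course be far harder to estimate; the point is only to show how the comparison would work once outcomes are measured.

```python
# A minimal sketch of the value-added idea described above.
# All figures are hypothetical: "benefit" stands in for the
# estimated gain in knowledge, skills, and earnings, and
# "cost" for the total price of attendance.

schools = {
    "School A": {"benefit": 400000, "cost": 180000},
    "School B": {"benefit": 350000, "cost": 90000},
    "School C": {"benefit": 300000, "cost": 60000},
}

# Value added = benefit gained minus cost paid.
value_added = {
    name: data["benefit"] - data["cost"]
    for name, data in schools.items()
}

# Rank schools from highest to lowest value added.
for name, va in sorted(value_added.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${va:,} value added")
```

Notice that in this toy example the school with the largest benefit figure does not come out on top once cost is subtracted, which is exactly the distinction a ranking built on inputs alone misses.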

This differs markedly from how USNWR seems to define a “good” school. USNWR derives 50% of a school’s score from the characteristics of its students as graduating high school seniors, from selectivity, and from other school inputs (e.g., spending). More importantly, USNWR’s rankings take no account of students’ final outcomes. Without any measurement of final outcomes, it is impossible for USNWR to compare schools on a value-added basis.

So what do the USNWR rankings really measure? Two things: first, how much money the school spends, and second, where the school stands in the sorting hierarchy. I’ll address each of these in turn.

Spending is important to education; obviously, some resources must be used to create value, but high spending in and of itself does not indicate a quality education. All too often, a school’s financial resources are directed toward creating a country-club atmosphere with luxury dorms and climbing walls, and even when the money goes into educational activities such as classrooms and faculty hiring, it may have only a marginal impact. World-class faculty are of little value to students if they are preoccupied with research and classes are taught mainly by grad students.

By “sorting hierarchy,” I mean the type of students a school identifies and admits. For example, Harvard is extremely selective and admits only students with outstanding academic records. In doing so, it sorts talented individuals out of the rest of the population. USNWR tells consumers what caliber of student a school sorts out of the population by focusing on measures like selectivity and student demographics (SAT scores, class rank, etc.). This may be useful information to some, but it does little to tell us about the actual quality of the school. True, a lot of very successful people come out of Harvard, but when the incoming class is intellectually in the top percentile of the population, we should expect them to do well regardless of what school they attend. What really matters to consumers is how much the school itself, rather than students’ individual characteristics, had to do with their success. On this issue, USNWR is silent.

All things considered, USNWR provides some interesting, but ultimately not very useful, information to consumers. What is really needed is a more consumer-oriented method of evaluating colleges, and that is what we are trying to provide at CCAP with our new rankings. We are taking the first step in creating a dialogue and a movement away from the current, outdated methods of evaluating colleges and toward a system that emphasizes value added. For too long, USNWR and other publications have measured only what was easy to measure. It’s time to start measuring what really matters.
