By Richard Vedder
The Chronicle of Higher Education had a great piece the other day about a new method of rating graduate programs. In the past, rankings have come from the National Research Council (NRC) as well as the ubiquitous US News & World Report (USN&WR), and they have been based largely on the reputations of schools as assessed by senior scholars, department chairs, etc. Now a private firm, Academic Analytics (AA), has come up with a Faculty Scholarly Productivity Index that evaluates the research output of each faculty member within an institution, based on publications in respected journals, books, research grants, etc. Presumably, the higher the research output of the faculty, the better the graduate program. This measure is strictly objective, not based on subjective opinions.
It is interesting how this approach leads to sharply different results from the reputation approach used by the NRC and USN&WR. Take English. Only 3 of the top 11 schools in the AA rankings (there was a tie for 10th place) appear in both the most recent NRC and USN&WR rankings -- Cal-Berkeley, Stanford, and Chicago. The AA evaluations rank the Universities of Georgia and Illinois, Penn State, and Washington University in St. Louis very highly -- none of which is in the top 10 of either of the other rankings. The AA rankings give less emphasis to the prestigious old-line private schools, resting more on merit criteria than on the "old boys' network" approach to evaluation. In my field of economics, the University of California at San Diego is ranked higher than UC rivals Berkeley and UCLA, unlike in virtually all other surveys. If the AA rankings catch on, schools will find it harder to coast on their reputations for decades after scholarly output has peaked.
Three cheers for AA. Some 30 schools are now paying for detailed information regarding their own programs relative to peer institutions. This is an honest attempt to create some sort of bottom line that is objectively determined. It is far from perfect, and far more important than the quality of the inputs (faculty productivity) is the output that results. Are the newly minted Ph.D.s from these programs good researchers themselves? And, dare I ask, can they teach? Do they get top jobs?
Much more needs to be done to truly measure outcomes in higher education, and this new evaluation does not even attempt that. But it points the way to solutions. Private entrepreneurs can make money, I think, developing truly objective measures of what students learn or derive from college -- information that, provided in user-friendly form, would help parents make better choices and instill more competition in higher education. I would love to see a "learning per dollar spent" measure developed, for example. Along with the National Technical Institute for the Deaf's use of Social Security earnings data, this is a promising approach to finding true bottom lines in higher education.