Friday, May 25, 2007

A New Way of Ranking Colleges

By Richard Vedder

US News & World Report, in the best capitalistic tradition, has made some money while serving an important human need: an evaluation of the quality of colleges and universities. The USN&WR rankings are the closest thing to a bottom line in higher education today.

Yet the rankings are badly flawed in that they are mostly based on inputs and opinion, not on outcomes and results. Variables relating to the revenues and spending of resources account for about 30 percent of the rankings, and "peer assessment" for another one-fourth. Denying students access, thus increasing exclusivity, is worth at least 15 percent. The one reasonable component is the school's graduation rate compared with the predicted graduation rate given the resources (including student quality) available to it --but it counts for only 5 percent --less than average faculty salaries. Thus the rankings give schools enormous incentives to raise and spend money --increasing tuition fees in the process.

My student Bob Arnold reminded me of a great study that I have not promoted in this space, namely the College Rankings Reformed piece done for the Education Sector last fall. Written by Kevin Carey, the study calls for putting a good deal of emphasis (35 percent in the rankings) on the National Survey of Student Engagement (NSSE) and the Collegiate Learning Assessment. These instruments tell us what students do in college, and whether their ability to think critically has improved (assuming the test is administered at the beginning and end of the college career). He proposes putting a 30 percent weight on post-graduate achievement and satisfaction --job earnings, alumni satisfaction with their lives, their job placement, etc. And he proposes a much higher weight on the actual vs. predicted graduation rate variable. I think Carey's proposal is excellent and some variant of it should be adopted. The Spellings Commission did not explicitly endorse this proposal, but it is certainly consistent with its recommendations of more outcomes-based assessment.

Persons with a conservative or libertarian bent are loath to require colleges to meet a new federal mandate, and many in the liberal academic establishment do not want to report any more information than at present, because, frankly, some of the information is embarrassing. Yet colleges are up to their eyeballs in governmental involvement already, and a huge number of schools (over 1,000) already use the NSSE and hundreds are using the CLA. Requiring schools that wish to receive federal funds to use and report results on these instruments would go a long way toward building alternative rankings. The Social Security Administration has earnings histories on most American workers. The National Institutes of the Deaf have already reported how their students (collectively) do in terms of earnings. Minor changes in the law would allow (require?) the Social Security Administration to report average earnings for each class of graduates, and to relate that to some national average. A lot can be done to get an outcomes-based assessment system without a huge amount of interference in the ways colleges work. If a school like Hillsdale College wants to opt out because it takes no federal money, so be it, but if you want the federal dollars, you should have to show how those dollars serve students.

Other testing devices exist. I love the Intercollegiate Studies Institute (ISI) work on testing students on knowledge of civic institutions and history at the beginning and ending of their college careers. Some expansion and modification of that would be another good testing instrument --measuring actual knowledge, not just thinking skills and student engagement.

For a modest sum of money, we could have a rating system that sharply reduces colleges' incentives to raise costs -- and strengthens their incentives to pay more attention to undergraduate instruction, which is still the bread and butter of most institutions of higher education in the United States.


Ken D. said...

Actually, for the better colleges and universities, a good ranking system is in their own self-interest. Increasingly, traditional colleges and universities are going to face more competition from lower-cost private companies. If you are a strong traditional institution, it's in your own self-interest to be able to document your better outcomes.

So we shouldn't expect the better traditional colleges and universities to have a knee-jerk negative reaction to college ranking systems per se. Really, they should favor reliable ranking systems. Perhaps it was for this reason that the National Association of State Universities and Land-Grant Colleges has come out in favor of the accountability movement.

Saxon said...

I attended JMU's doctoral program in Assessment in 1999 (and was dismissed for being unprofessional! i.e. having unorthodox views) with the goal of developing a method of evaluating nonprofits, a goal that I narrowed to universities. So much social science validates the common-sense assertion that student learning and critical thinking have very little to do with the college and very much to do with the student (SES, IQ, etc.). I concluded that teaching and learning are not two sides of one coin but two different coins, and since the universities' mandate is to teach, evaluating the teaching process is the only valid way to proceed.
Think of it as an advanced measure of statistical process control (advanced because it deals with sentient beings). Teaching comprises a distinct set of processes. All else being equal, better process should produce better outcomes, however defined.