Thursday, October 30, 2008

Colleges' Perverse Incentives

by Luke Myers

Imagine if Standard & Poor's stock ratings were heavily based on how much companies spent on capital, labor and technology, with no attention to earnings per share or debt-to-equity. Its ratings would, with good reason, be ridiculed or simply ignored. Investors know that the money spent by a firm tells little, if anything, about the quality of its returns.

Yet the most prominent source for ratings in the higher education industry does exactly this: 50% of a college's ranking in U.S. News and World Report (USNWR) is determined solely by input factors. This means that simply spending more money will increase a school's score. A demonstrable increase in educational quality as a result of this spending is not necessary to move up in the rankings.

However, we do not ignore USNWR. In fact, studies have shown that a change in rank has real, statistically significant effects on schools' admissions outcomes. One found that a less favorable ranking in one year is followed by an increase in the school's admittance rate, a decrease in the yield rate and a lower average SAT score for the next year's incoming class.(1) These effects, combined with the fact that high-achieving students find rankings very important in deciding where to matriculate,(2) suggest that colleges have tangible difficulty recruiting higher-quality students when their rankings become less favorable.

Thus, colleges and universities face disincentives to operate efficiently. A school that produces the same educational quality at a lower price than another school is punished in the USNWR rankings: all else equal, the more efficient (and presumably cheaper) institution, providing the same level of quality, would be ranked lower. Considering the tangible effects of a drop in rank, how many colleges are going to put efficiency-improving, cost-cutting measures at the top of their to-do list?

Some higher education administrators may see this as another argument for the abolition of ranking systems. Rankings are not going to disappear, though. They satisfy a demand for information about institutions in which parents and students are making one of the largest investments of their lives. As tuition continues to rise, this demand will only grow. However, a properly structured ranking system, one based on the quality of outputs, can put an end to the "academic arms race" that promotes profligate spending that does not improve education. Such a ranking system would reward universities that provide a high-quality education at an efficient price.

The Center for College Affordability and Productivity partnered with Forbes this past summer to produce an outcomes-based ranking system. While it was a step in the right direction, the effort was handicapped by one problem: data on the outcomes of higher education institutions that are reliable, reported by a third party and publicly available are difficult to find.

The ability of third parties to reliably measure outcomes exists. The National Survey of Student Engagement (NSSE) measures students' exposure to activities and teaching practices that lead to improved learning. Given to freshmen and seniors, it measures academic challenge, active learning and student-faculty interaction, among other things. The Collegiate Learning Assessment (CLA) is a test designed to measure students' critical thinking, analytical and communication skills. Again, it is given to both freshmen and seniors, allowing for the observation of an institution's "value added" in education. Furthermore, CLA scores are highly correlated with ACT scores and can therefore be predicted using a model based on incoming students' scores on the latter test, allowing identification of those institutions that under- or outperform their predicted outcomes.(3)

With such data available, college rankings could be greatly improved. They would be more informative for those students investing in a higher education and they would reward colleges for the efficient provision of educational quality rather than for spending more of other people’s money. Unfortunately, the condition on which most schools participate in both the NSSE and CLA is that the results are not made public.

This has to change. If colleges and universities are going to continue to ask for more money from their students in the form of rising tuition, those students should be given information about the effects of an institution's spending habits. Not all spending is inappropriate, and increased spending might result in better educational outcomes. But the outcome of the money spent should be what is rewarded, not the spending of money itself. The tools for such measurements exist, but only a few colleges release the data. Imagine investing in a company that refused to release information on its earnings; there is a reason such practices were made illegal in the stock market.

Luke Myers is a research assistant for the Center for College Affordability and Productivity and a senior at Ohio University.

1. Monks, James and Ronald G. Ehrenberg. “The Impact of U.S. News & World Report College Rankings on Admissions Outcomes and Pricing Policies at Selective Private Institutions.” Cambridge, MA: National Bureau of Economic Research. Working Paper 7227, July 1999.

2. McDonough, Patricia M., et al. "College Rankings: Democratized Knowledge for Whom?" Research in Higher Education, Vol. 39, No. 5, 1998.

3. Carey, Kevin. "College Rankings Reformed: The Case for a New Order in Higher Education," Education Sector Reports, September 2006.