By Richard Vedder
One of the best things -- arguably the best thing -- that the Spellings Commission recommended was the development of value-added measures of learning by colleges, and the publishing of those results in an easy-to-understand format on the Internet and in other forms to aid parents, students, taxpayers, and policymakers. There are three possible responses from the academy to this recommendation: one, we will not do it because it is inappropriate or impossible to do; two, we agree with you and will work to implement the recommendation; or three, we are already doing it, so this is much ado about nothing.
In today's INSIDE HIGHER ED, Scott Jaschik reports on a meeting of the National Symposium on Postsecondary Student Success, where a number of administrators claimed that the third option above applies. "All these national calls would make you think nothing is happening," according to Jon Young of Fayetteville State University. Well, if it IS being done, why are we not hearing about it? Why are we not working to get a relatively small number of standardized instruments to measure "value added" and report the results IN AN EASY-TO-UNDERSTAND fashion to real people, not academic administrators? I think Margaret Spellings is absolutely right to champion this issue, and I am underwhelmed by what I read about what is really going on.
To be sure, there are issues. I think we should measure the "value added" of what students learn rather than how they feel or how "engaged" they are. There is a case for making critical thinking a second metric worth evaluating. Also, I was pleasantly surprised to read that the North Central Association, a major regional accreditor, is offering an alternative accreditation review that seems to emphasize regular evaluation of "value added"-type measures. Although the devil is in the details, this is an excellent move in principle. I hope this idea spreads.
1 comment:
The article below comes from my favorite publication to despise. Nonetheless, they managed to eke out a good article - though it is dated.
I recall my freshman year in college when the Commandant of the dormitory summoned us all to the first floor lounge for a sermon on what was going to happen to us. It was like a very wimpish form of your first day at boot camp with the new DI. But the guy was right about the one thing that struck me as a challenge that I was not about to fail. He said, "Look at the person on your right and then look at the person on your left. One of them will NOT graduate from college." And he was right. The person on my left was my roommate - he made it. The guy on my right turned out to be a goof and he didn't come close. I made it. There is a reference to the success rate, if you will, in the following article. And I thought the irony was worth posting.
August 16, 2006
David Leonhardt
Rank Colleges, but Rank Them Right
EARLY this morning, U.S. News & World Report will send e-mail messages to hundreds of college administrators, giving them an advance peek at the magazine’s annual college ranking. They will find out whether Princeton will be at the top of the list for the seventh straight year, whether Emory can break into the top 15 and where their own university ranks. The administrators must agree to keep the information to themselves until Friday at midnight, when the list goes live on the U.S. News Web site, but the e-mail message gives them a couple of days to prepare a response.
By now, 23 years after U.S. News got into this game, the responses have become pretty predictable. Disappointed college officials dismiss the ranking as being beneath the lofty aims of a university, while administrators pleased with their status order new marketing materials bragging about it — and then tell anyone who asks that, obviously, they realize the ranking is beneath the lofty aims of a university.
There are indeed some silly aspects to the U.S. News franchise and its many imitators. The largest part of a university’s U.S. News score, for instance, is based on a survey of presidents, provosts and admissions deans, most of whom have never sat in a class at the colleges they’re judging.
That’s made it easy to dismiss all the efforts to rate colleges as the product of a status-obsessed society with a need to turn everything, even learning, into a competition. As Richard R. Beeman, a historian and former dean at the University of Pennsylvania, has argued, “The very idea that universities with very different institutional cultures and program priorities can be compared, and that the resulting rankings can be useful to students, is highly problematic.”
Of course, the same argument could be made about students. They come from different cultures, they learn in different ways and no one-dimensional scoring system can ever fully capture how well they have mastered a subject. Yet colleges go on giving grades, drawing fine lines that determine who is summa cum laude and bestowing graduation prizes — all for good reason.
HUMAN beings do a better job of just about anything when their performance is evaluated and they are held accountable for it. You can’t manage what you don’t measure, as the management adage says, and because higher education is by all accounts critical to the country’s economic future, it sure seems to be deserving of rigorous measurement.
So do we spend too much time worrying about college rankings? Or not nearly enough?
Not so long ago, college administrators could respond that they seemed to be doing just fine. American universities have long attracted talented students from other continents, and this country’s population was once the most educated in the world.
But it isn’t anymore. Today the United States ranks ninth among industrialized nations in higher-education attainment, in large measure because only 53 percent of students who enter college emerge with a bachelor’s degree, according to census data. And those who don’t finish pay an enormous price. For every $1 earned by a college graduate, someone leaving before obtaining a four-year degree earns only 67 cents.
Last week, in a report to the Education Department, a group called the Commission on the Future of Higher Education bluntly pointed out the economic dangers of these trends. “What we have learned over the last year makes clear that American higher education has become what, in the business world, would be called a mature enterprise: increasingly risk-averse, at times self-satisfied, and unduly expensive,” it said. “To meet the challenges of the 21st century, higher education must change from a system primarily based on reputation to one based on performance.”
The report comes with a handful of recommendations — simplify financial aid, give more of it to low-income students, control university costs — but says they all depend on universities becoming more accountable. Tellingly, only one of the commission’s 19 members, who included executives from Boeing, I.B.M. and Microsoft and former university presidents, refused to sign the report: David Ward, president of the nation’s largest association of colleges and universities, the American Council on Education. But that’s to be expected. Many students don’t enjoy being graded, either. The task of grading colleges will fall to the federal government, which gives enough money to universities to demand accountability, and to private groups outside higher education.
“The degree of defensiveness that colleges have is unreasonable,” said Michael S. McPherson, a former president of Macalester College in Minnesota who now runs the Spencer Foundation in Chicago. “It’s just the usual resistance to having someone interfere with their own marketing efforts.”
The commission urged the Education Department to create an easily navigable Web site that allows comparisons of colleges based on their actual cost (not just list price), admissions data and meaningful graduation rates. (Right now, the statistics don’t distinguish between students who transfer and true dropouts.) Eventually, it said, the site should include data on “learning outcomes.”
Measuring how well students learn is incredibly difficult, but there are some worthy efforts being made. Researchers at Indiana University ask students around the country how they spend their time and how engaged they are in their education, while another group is measuring whether students become better writers and problem solvers during their college years.
As Mr. McPherson points out, all the yardsticks for universities have their drawbacks. Yet parents and students are clearly desperate for information. Without it, they turn to U.S. News, causing applications to jump at colleges that move up the ranking, even though some colleges that are highly ranked may not actually excel at making students smarter than they were upon arrival. To take one small example that’s highlighted in the current issue of Washington Monthly, Emory has an unimpressive graduation rate given the affluence and S.A.T. scores of its incoming freshmen.
When U.S. News started its ranking back in the 1980’s, universities released even less information about themselves than they do today. But the attention that the project received forced colleges to become a little more open. Imagine, then, what might happen if a big foundation or another magazine — or U.S. News — announced that it would rank schools based on how well they did on measures like the Indiana survey.
The elite universities would surely skip it, confident that they had nothing to gain, but there is a much larger group of colleges that can’t rest on a brand name. The ones that did well would be rewarded with applications from just the sort of students universities supposedly want — ones who are willing to keep an open mind and be persuaded by evidence.