By Richard Vedder
Whenever I am speaking to some group on higher education, at some point in my talk I usually ask the question "Did Stanford (or Northwestern, or the University of Georgia, etc.) have a good year in 2005? Who in the heck knows? We know how the football team did, but do we know how the school did in its most important mission, educating students?" The answer, of course, is no.
The Spellings Commission talked about the need for more measurement and greater transparency, including developing measures of what the "value added" is from the college experience. While the commissioners debated and pontificated on this (myself included), the Intercollegiate Studies Institute was doing something about it, administering a pretty good 60-question test to 14,000 students at 50 schools that gives us some good information on the likely amount of learning with respect to civic, historical, and economic understanding. I wrote about this a couple of days ago.
Why not take a modified version of the ISI test and give it to far larger numbers of students at more institutions? Suppose the per-student cost of testing is $25, and that a sample averaging 300 students is considered necessary to get statistically reliable results at the individual school level. It would cost $7,500 to survey one school. For $3 million, one could survey 400 schools -- say the 250 schools with the largest enrollments, 50 top national universities and 50 top liberal arts colleges using the US News & World Report rankings, and several dozen schools with at least 750 students chosen on a random basis (some schools would fit in more than one category); I would also be sure that at least two schools were selected in each state. For another $100,000 or so, a national report could be prepared and publicized, giving consumers a pretty good value added measure on schools that educate a very significant portion of all undergraduate four-year college students. This should spur even more comprehensive measures of value added down the road, and lead to better outcome-based evaluations of schools.
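The back-of-the-envelope budget above can be sketched in a few lines of arithmetic (the dollar figures and sample sizes are the ones assumed in the paragraph, not actual survey costs):

```python
# Back-of-the-envelope budget for the proposed national testing project,
# using the figures assumed in the paragraph above.

COST_PER_STUDENT = 25       # assumed dollars per test administered
SAMPLE_PER_SCHOOL = 300     # assumed students sampled at each school

cost_per_school = COST_PER_STUDENT * SAMPLE_PER_SCHOOL  # $7,500 per school

TESTING_BUDGET = 3_000_000  # dollars available for testing
schools_covered = TESTING_BUDGET // cost_per_school     # 400 schools

REPORT_COST = 100_000       # preparing and publicizing a national report
grand_total = TESTING_BUDGET + REPORT_COST

print(f"Cost per school:    ${cost_per_school:,}")   # $7,500
print(f"Schools covered:    {schools_covered}")      # 400
print(f"Total project cost: ${grand_total:,}")       # $3,100,000
```

At roughly $3.1 million all-in, the whole exercise costs less than many single research grants, which is the author's point about it being a "relative pittance."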
Schools might not want to participate, but so what? Do what ISI did. Go to public sites where kids hang out and select participants in an unbiased manner. Or, tell the schools "we are going to test your kids anyhow -- do you want to give us some testing rooms and we will say you cooperated with the survey?"
As for the test, I might expand it to, say, 75 questions, with perhaps 30 or so in history, civic institutions, and economics as in the ISI survey, perhaps 20 or so in mathematics, 15 or so in English language and literature, and maybe 10 on a miscellany of other areas, particularly the sciences. It would not be a truly comprehensive test, but it would examine students in a significant number of areas where college grads historically have been expected to have some level of literacy. The tests would be administered to roughly equal numbers of freshmen and seniors.
Who would fund this? I think it would be a marvelous project for the deep pockets of the Lumina Foundation, or for a coalition of foundations. For a relative pittance, one could offer significant new information to prospective students and devise better measures of university productivity, assessing how much learning occurs per dollar spent. To be sure, this might lead schools to require work in the core subjects named above, in effect defining a general education component of the curriculum. However, this general education core is the stepchild at most schools anyhow, so even "teaching to the test" might be viewed as a positive development.