National Research Council Finally Releases Its Ratings of Doctoral Programs
Wednesday, September 29, 2010
by Jonathan Robe
Yesterday, after years of delay, the newest (and long anticipated) installment of the National Research Council's ratings of Research-Doctorate Programs finally came out and is generating some buzz in higher ed circles. Both the Chronicle and InsideHigherEd have great overviews of the new rankings, though from reading those stories, it almost seems that the NRC is embarrassed to use the "R-word" to describe what it developed and has even declined to defend its work, despite the fact that it has been 15 years since the last NRC ratings and that the entire project cost over $4 million.
While these rankings tend toward the excessively complicated (I love the headline of Times Higher Education's story), in one sense I don't think this will wind up being too big of a problem, at least in terms of providing information to prospective students. Because these rankings focus only on doctoral programs, the students who would be best served by them probably have sufficiently developed intellectual ability to sift through the abstruse systems (yes, the NRC saw fit to release not one but two distinct rating systems) and interpret the data as they need, though there is no guarantee that anyone will ever fully comprehend the NRC's systems. What is puzzling to me is why the NRC went to all the trouble of constructing such complicated systems without producing a full-fledged "do-it-yourself" ranking tool (i.e., letting each individual user pick the factors and weights as he wishes). Such an approach would strike the best balance among the competing goals of flexibility, precision, and usefulness in the rankings, and it would also allow the ratings to be published in a more timely manner with more up-to-date data.
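To make the idea concrete, here is a minimal sketch of what such a do-it-yourself tool could look like: the user supplies whatever weights he finds sensible, and the programs are re-ranked on the spot by the resulting weighted score. The factor names, scores, and weights below are purely hypothetical illustrations, not actual NRC variables or data.

```python
# A minimal sketch of a "do-it-yourself" ranking tool: the user picks the
# factors and weights, and programs are re-ranked by the resulting weighted
# score. Factor names, scores, and weights are hypothetical, not NRC data.

def rank_programs(programs, weights):
    """Sort programs from best to worst by a user-weighted sum of factor scores."""
    def score(program):
        return sum(weights.get(factor, 0.0) * value
                   for factor, value in program["factors"].items())
    return sorted(programs, key=score, reverse=True)

# Hypothetical programs with a few normalized (0-1) factor scores.
programs = [
    {"name": "Program A", "factors": {"publications_per_faculty": 0.9, "completion_rate": 0.5}},
    {"name": "Program B", "factors": {"publications_per_faculty": 0.6, "completion_rate": 0.9}},
    {"name": "Program C", "factors": {"publications_per_faculty": 0.7, "completion_rate": 0.7}},
]

# The user decides what matters: here, completion rate counts twice as much.
user_weights = {"publications_per_faculty": 1.0, "completion_rate": 2.0}

for place, program in enumerate(rank_programs(programs, user_weights), start=1):
    print(place, program["name"])
```

Change the weights and the ordering changes with them, which is exactly the point: no single committee-chosen set of weights would need to be baked into the published results.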
One positive feature of the NRC ratings, however, is the focus on comparisons across departments and disciplines rather than across entire institutions. Of course, the reason for this focus is that potential doctoral students are choosing a specific field of study to pursue rather than merely a particular institution at which to study. Although it would be more difficult to implement this kind of departmental comparison in undergraduate rankings, it would be nice to see the feature extended to those rankings in the future as well.
The NRC has also made some progress on its methodology since its 1995 ratings. This time there are two different ratings methodologies: the "R" ratings, which rely partially and indirectly on program reputation, and the "S" ratings, which rely upon supposedly more objective measures. Reputational rankings are, in my view, one of the best ways to maintain the status quo in higher ed and prevent innovation or reform; therefore, the move away from reputation in the "S" ratings is certainly to be applauded.
Yet the rankings still rely heavily upon inputs (publications per faculty, % of new students with full support, % of interdisciplinary faculty, etc.), while the output measures included (average completion rate, % of PhDs who go on to academic positions, etc.) are very scant. And some of the input variables have no discernible relationship to the quality of a doctoral education (e.g., % of non-Asian minority faculty, % of foreign students, etc.). This inordinate reliance upon inputs continues to be the fatal flaw of the NRC rankings. After all, given that undergraduate tuition and fees are used to subsidize graduate education at many schools, a fixation on inputs at the graduate level not only encourages the "academic arms race" but does so in a way that adds to the financial burden already borne by undergraduate students while keeping the rise in cost almost invisible to graduate students.
Another, albeit far less pressing, problem with the NRC rankings is the breadth of the published ranking ranges. Each program is given a range of ranks designed to reflect a 90% confidence interval. While the use of a range of rankings rather than a single, precise ordinal rank is arguably more appropriate for college rankings, the range must be meaningful--that is, not inappropriately broad--in order for the rankings system as a whole to convey a realistic picture of educational quality. The fact that, say, Case Western's doctoral nutrition program is rated anywhere between the 38th and the 69th percentile (and this is by no means the most extreme example) suggests that the "range" may in fact be too vague to be meaningful. Janet A. Weiss, graduate dean and vice provost at the University of Michigan, is entirely right to question these ranges, as she did in the InsideHigherEd story.
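For readers wondering where a range like that comes from: the ranges reportedly arise from recomputing the rankings many times under varying weights and reporting the span of ranks a program receives. The sketch below illustrates that general idea with hypothetical data and a simple random perturbation of weights; it is not the NRC's actual procedure, variables, or weighting scheme.

```python
# A minimal sketch of how a rank *range* (rather than a single rank) can arise:
# recompute the ranking many times with randomly perturbed weights and report
# the 5th-95th percentile of the ranks each program receives. All data, weights,
# and the perturbation scheme are hypothetical illustrations only.
import random

def rank_draws(scores, base_weights, draws=500, noise=0.25):
    """Collect the rank each program receives under each random draw of weights."""
    ranks = {name: [] for name in scores}
    for _ in range(draws):
        # Jitter each weight to mimic uncertainty about how factors should count.
        weights = {f: w * random.uniform(1 - noise, 1 + noise)
                   for f, w in base_weights.items()}
        ordering = sorted(scores,
                          key=lambda name: sum(weights[f] * scores[name][f] for f in weights),
                          reverse=True)
        for position, name in enumerate(ordering, start=1):
            ranks[name].append(position)
    return ranks

def rank_range(rank_list, lo=0.05, hi=0.95):
    """Return the (5th percentile, 95th percentile) of the collected ranks."""
    ordered = sorted(rank_list)
    last = len(ordered) - 1
    return ordered[round(lo * last)], ordered[round(hi * last)]

# Hypothetical programs with normalized factor scores; the closer the scores,
# the wider (and less informative) the resulting rank ranges tend to be.
scores = {
    "Program A": {"pubs": 0.80, "completion": 0.55},
    "Program B": {"pubs": 0.65, "completion": 0.85},
    "Program C": {"pubs": 0.72, "completion": 0.70},
}

draws = rank_draws(scores, {"pubs": 1.0, "completion": 1.0})
for name, rank_list in draws.items():
    print(name, rank_range(rank_list))
```

When the underlying scores are close, the simulated ranges tend to overlap heavily, which is the complaint here in miniature: a range broad enough to cover much of the field tells a prospective student very little.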
All in all, while the NRC has made some improvements to its ratings, and while the new rating systems do, at some level, increase institutional transparency, I remain troubled by the shortcomings inherent in a system based on inputs, and I doubt that these rankings will, in the long run, provide a truly valuable service to students and the public at large.