Monday, July 13, 2009

Evaluating Stuff

by Andrew Gillen

At a discussion a while back, Kevin Carey took some heat for suggesting that we should be trying to evaluate and measure teaching, with opponents saying that no system of evaluation could get at what's important. He responded with something along these lines (this is from memory):
Evaluating the potential of 18 year old kids is hard too. But because deciding who to admit is important to colleges, they've come up with ways to do so. These processes aren't perfect, but it's important to them, so they try. But when it comes to evaluating teaching, they don't even try.
This is a very good point.* Even an imperfect system of evaluating teaching could, and probably would, be much better than simply assuming that every teacher is doing a terrific job. So I was pleased to stumble across a couple of interesting projects that evaluate things that typically haven't been evaluated.
  1. A Human Capital Score calculator - tells you how much you can expect to earn based on SAT score, major, and college. HT: EduBubble
  2. Analyzing STEM candidates - "looks at roughly 200 variables to judge the likelihood a student will graduate with a degree in one of the 'STEM' subjects"
Both of these will have their issues, but it's cool that people are thinking of ways to measure these things, however imperfectly.

*Yes, I'm aware that not everything worth knowing is measurable, and not everything measurable is worth knowing.

1 comment:

capeman said...

This kind of stuff is not so new. We got a great reminder when Robert McNamara passed away last week. He had a mania for measuring things and using the results to make predictions in circumstances that weren't typical. As we know, it worked out splendidly. And that was 40 years ago.