Thursday, February 19, 2009

Replicable Science

by Andrew Gillen

Felix Salmon has some thoughts on the necessity of being able to replicate research.

Spurring the post was this paper by B. D. McCullough and Ross McKitrick which has this hilarious/scary line:
Labour Economics called for replication papers in 1993, but dropped the section after 1997 because they had received no submissions.
Felix's overall take is that
If you're running an empirical study, and your results aren't replicable, your study is largely worthless.

I'm sympathetic to his point in general, but I'm far more forgiving, to the point of disagreement. Empirical research (in economics anyway) is plagued by messy data, imperfect proxies, simplifying assumptions, methodological controversy, etc. Thus, you can always accuse the researcher of impropriety (something some of the commenters on this blog take glee in doing to us). But I think we'd be better off giving researchers the benefit of the doubt while at the same time (and this is crucial) recognizing that empirical research is almost by definition imperfect.

Because any given empirical work is imperfect, I'm not going to be much more convinced by seeing the same thing done the same way multiple times. The best use of researchers' time is not to replicate already imperfect studies, but to use other methods, other data, other proxies, and other assumptions to help determine just how imperfect the original study was. If we've got multiple studies all done differently and all pointing to the same thing, we can be pretty confident in the result, even though none of the studies inspires much confidence individually.
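The intuition here can be sketched with a toy simulation (this is my illustration, not anything from the original post or paper): if every study built on the same method inherits that method's systematic bias, then replicating it five times never averages the bias away, while five independently-designed studies, each with its own bias, tend to land closer to the truth on average. All the numbers below (bias and noise scales, five studies) are arbitrary assumptions for illustration.

```python
import random

random.seed(0)
TRUE_EFFECT = 1.0   # the quantity every study is trying to estimate
TRIALS = 10_000     # Monte Carlo repetitions to average over

def study(bias, noise=0.5):
    """One imperfect study: its estimate carries a systematic
    method bias plus ordinary sampling noise."""
    return TRUE_EFFECT + bias + random.gauss(0, noise)

rep_err = 0.0  # error from replicating one method 5 times
tri_err = 0.0  # error from 5 independently-designed studies

for _ in range(TRIALS):
    # Replication: all 5 studies share one method's fixed bias.
    shared_bias = random.gauss(0, 0.4)
    rep = [study(shared_bias) for _ in range(5)]

    # Triangulation: each of the 5 studies draws its own bias.
    tri = [study(random.gauss(0, 0.4)) for _ in range(5)]

    rep_err += abs(sum(rep) / 5 - TRUE_EFFECT)
    tri_err += abs(sum(tri) / 5 - TRUE_EFFECT)

print(f"avg error, 5 replications of one method:   {rep_err / TRIALS:.3f}")
print(f"avg error, 5 independently-done studies:   {tri_err / TRIALS:.3f}")
```

Under these assumptions the pooled replications stay about as wrong as the original method's bias, while the varied studies' independent biases partly cancel, which is the point about confidence coming from diversity of approach rather than repetition.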
