Wednesday, May 27, 2009

Perhaps I Was Wrong

by Andrew Gillen

A few months back, I entered the fray over the replicability of scientific papers. Some, like Felix Salmon, took the view that if "your results aren't replicable, your study is largely worthless." (Market Movers has since been taken over by Ryan Avent, whose name now appears as the author, but the real author was Felix.)

I took an opposing view, arguing that
Because any given empirical work is imperfect... The best use of researchers' time is not to replicate already imperfect studies, but to use other methods, other data, other proxies, and other assumptions to help determine just how imperfect the original study was.
But perhaps I'm too willing to give the benefit of the doubt. A post by Kevin Drum draws my attention to this graph from a paper by Alan Gerber and Neil Malhotra. It shows the number of published studies whose results have a particular z-score. Anything with a z-score above 1.96 is considered to be statistically significant.
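(For readers who haven't seen the cutoff before: 1.96 is simply the two-sided 5% critical value of the standard normal distribution. A minimal sketch, using scipy and an illustrative helper function of my own, shows why that number marks the conventional significance line:)

```python
from scipy.stats import norm

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal."""
    return 2 * (1 - norm.cdf(abs(z)))

print(two_sided_p(1.96))  # ~0.05: right at the conventional threshold
print(two_sided_p(1.90))  # ~0.057: just barely "not significant"
```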

Kevin:
if journals accepted articles based solely on the quality of the work, with no regard to z-scores, you'd expect the z-score of studies to resemble a bell curve. But that's not what Gerber and Malhotra found... this is unsurprising. Publication bias is a well-known and widely studied effect.
He continues:
But take a closer look at the graph. In particular, take a look at the two bars directly adjacent to the magic number of 1.96 [the blue and the red bars]... They should be roughly the same height, but they aren't even close. There are a lot of studies that just barely show significant results, and there are hardly any that fall just barely short of significance. There's a pretty obvious conclusion here, and it has nothing to do with publication bias: data is being massaged on a wide scale.
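To make Kevin's point concrete: if nothing were nudging results across the line, two equally wide bins on either side of 1.96 should hold roughly equal numbers of studies, and a simple binomial test tells you how surprising a given imbalance is. A minimal sketch of that comparison (the counts below are made up for illustration, not Gerber and Malhotra's actual numbers):

```python
from scipy.stats import binomtest

# Hypothetical counts of published z-scores in two equally wide bins
# just below and just above the 1.96 cutoff (illustrative numbers only).
just_below = 15   # e.g. z in [1.76, 1.96)
just_above = 60   # e.g. z in [1.96, 2.16)

# Absent massaging, a study landing in this narrow window should fall on
# either side of the cutoff with roughly equal probability (~50/50 split).
result = binomtest(just_above, n=just_below + just_above, p=0.5)
print(f"Share just above the cutoff: {just_above / (just_below + just_above):.2f}")
print(f"Two-sided p-value against a 50/50 split: {result.pvalue:.2g}")
```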
This takes a big chunk out of my argument.
