Research Highlights: Featured Chart

August 19, 2019

Unbiased results

Gian Romagnoli

Doctors, journalists, policymakers, and many others trust the accuracy of what’s published in academic journals. But a wave of research has shown that even the best publications in the social sciences can be misleading.

In the August issue of The American Economic Review, authors Isaiah Andrews and Maximilian Kasy developed a method for identifying and correcting one of the most common types of bias in empirical research: publication bias.

Ideally, journals would publish the results of every scientifically valid experiment. But in reality, researchers, editors, and referees are less likely to publish studies that show no effect—so-called null results. For instance, in experimental economics, results that are statistically significant at the 5 percent level are over thirty times more likely to be published than insignificant results, according to the authors.

The consequence of this publication bias is that estimates in published research can be inflated.
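To see how that happens, here is a minimal simulation, our own illustration rather than anything from the paper. It assumes every study estimates the same true effect of 0.2 standard errors, and that significant results are published thirty times more often than null results, the ratio the authors report for experimental economics:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2   # assumed true effect, in standard-error units
N_STUDIES = 100_000

# Each study's standardized estimate: the true effect plus standard normal noise.
z = rng.normal(TRUE_EFFECT, 1.0, N_STUDIES)

# Publication rule: significant results (|z| > 1.96) are always published,
# insignificant ones only 1/30th of the time.
significant = np.abs(z) > 1.96
published = rng.random(N_STUDIES) < np.where(significant, 1.0, 1 / 30)

print(f"mean of all estimates:       {z.mean():.2f}")
print(f"mean of published estimates: {z[published].mean():.2f}")
```

Even though every individual study in this setup is unbiased, the average published estimate comes out several times larger than the true effect.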

There have been many approaches to fixing the problem through the publication process, such as result-blind reviews. But the authors focus on a statistical technique that can be applied to studies that are already published. 
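The idea behind such a correction can be sketched in a few lines. The sketch below is our own simplified illustration, not the authors' code: if the publication rule is known (here, the same rule as above), the distribution of a published estimate given a true effect is a reweighted normal distribution, and a median-unbiased corrected estimate is the effect size under which the observed value sits exactly at the median of that distribution.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

CRIT = 1.96      # two-sided 5 percent significance cutoff
P_NULL = 1 / 30  # assumed relative publication probability of null results

def published_cdf(x, theta):
    """CDF of a published standardized estimate, given true effect theta.

    The density of a published estimate is the normal density reweighted
    by the publication probability and then renormalized.
    """
    def mass(lo, hi, weight):
        return weight * (norm.cdf(hi - theta) - norm.cdf(lo - theta))

    total = (mass(-np.inf, -CRIT, 1.0)      # significant, negative
             + mass(-CRIT, CRIT, P_NULL)    # insignificant
             + mass(CRIT, np.inf, 1.0))     # significant, positive
    below = (mass(-np.inf, min(x, -CRIT), 1.0)
             + mass(-CRIT, np.clip(x, -CRIT, CRIT), P_NULL)
             + mass(CRIT, max(x, CRIT), 1.0))
    return below / total

def corrected_estimate(x):
    """Median-unbiased theta: solve for the effect that puts x at the median."""
    return brentq(lambda theta: published_cdf(x, theta) - 0.5, x - 10, x + 10)

# A just-significant published result is pulled sharply toward zero.
print(f"observed z = 2.20, corrected estimate = {corrected_estimate(2.20):.2f}")
```

Estimates near the significance threshold shrink the most once selective publication is taken into account, which is the qualitative pattern visible in the figure below.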

Figure 6 from the paper shows their correction method applied to results from laboratory experiments published in The American Economic Review and the Quarterly Journal of Economics between 2011 and 2014.

Figure 6 from Andrews and Kasy (2019)

The x-axis indicates the standardized results from each experiment.

The purple markers show the estimates from the original studies, and a second set of markers shows the authors’ bias-corrected values. The bars are all 95 percent confidence intervals. (A third set of markers shows adjusted confidence intervals that account for the estimation error introduced by the authors’ model.)

The adjusted estimates show what the original estimates would look like if they were free from publication bias. In the chart, the adjusted values are generally lower than the original values, which suggests that null results in this area of research are published less often than significant findings.

The chart shows that the large estimate from Kessler and Roth is still significant after being adjusted, while the smaller estimate from Kuziemko et al. is insignificant both before and after the correction.

However, many results switch from being significant to insignificant. Only two of the eighteen original findings were statistically insignificant. After accounting for publication bias, twelve results are statistically insignificant at the 5 percent level.

The researchers relied on a replication study for this particular application, but their method also works with data from so-called meta-studies, which pool estimates and standard errors from many published papers.