For years, researchers (and we, the “upcoming researchers”) have been complaining about all the things that seem to be wrong with psychology, and with science in general: researchers don’t share their data, they use the wrong statistical analyses, they come up with their hypotheses after they have seen their data, and they massage their data until something “interesting” pops up. Whenever we, the upcoming researchers, discuss these problems, we always end up talking about publication bias: the tendency of journals and researchers to publish studies with significant results, leaving file drawers full of non-significant (but at least as interesting) studies.
Because of publication bias, people read only a small portion of the articles on a specific effect and start to believe the effect is real, even though it might not exist at all. In the late 1990s, for instance, articles were published supporting the hypothesis that reboxetine was effective in treating major depressive disorder. It was not until 2010, when a meta-analysis looked into the possible presence of publication bias, that researchers discovered that the drug was not only ineffective but potentially harmful! What had happened? Only 26% of the patient data had been published. The remaining 74% showed non-significant results and was therefore never published, resulting in a terrible mistake: psychiatrists had been prescribing a potentially harmful pill to patients battling major depression. This example clearly shows that publication bias should not be taken lightly. Yet for years, journals have failed to combat this problem.
But now, finally, things seem to be changing: journals such as Cortex have started working with preregistration, a system in which articles are selected for publication based on the quality of their methods rather than their outcome or “interestingness”. While this is a wonderful development and will certainly help combat publication bias, it is not enough on its own. In many fields publication bias may have been accumulating for years, and preventing it in future articles does not undo the distortion already in the literature. It is therefore essential that researchers check for the possible presence of publication bias whenever they conduct a meta-analysis. My question for the final assignment was: how often do researchers actually do this?
I checked this for 140 randomly drawn meta-analyses (twenty for every two years, from 2000 to 2013). In only 37.14% of these articles did the researchers check for the presence of publication bias. Perhaps even more shocking: of the 88 articles in which no check was conducted, only 6 (6.82%) mentioned why not (e.g., “because we included unpublished studies in our analyses, publication bias cannot be present” or “we wanted to check for publication bias with a funnel plot, but this was not possible due to the small number of studies”).
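As a sanity check, the percentages above can be reproduced with a few lines of Python. The count of 52 “checked” articles is not stated directly in the text but is implied by the figures (140 total minus 88 unchecked):

```python
# Reproduce the percentages reported above.
total = 140                  # randomly drawn meta-analyses (2000-2013)
unchecked = 88               # articles with no publication-bias check
checked = total - unchecked  # implied: 52 articles did check
explained = 6                # unchecked articles that gave a reason

print(f"checked for publication bias: {checked / total:.2%}")      # 37.14%
print(f"explained the missing check:  {explained / unchecked:.2%}")  # 6.82%
```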
Whether or not these reasons are valid, the main issue is that many researchers apparently either do not know that publication bias is a serious problem or simply fail to see it as one. Either way, researchers and upcoming researchers need to be taught, or reminded, why publication bias matters and how to check for it. It would also help if journals required such checks for any meta-analysis considered for publication.
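For readers wondering what such a check looks like in practice, here is a minimal sketch of Egger's regression test, a common formal complement to the funnel plot mentioned above: regress the standardized effect (effect / SE) on precision (1 / SE), and treat an intercept far from zero as a sign of funnel-plot asymmetry. The study data below are entirely hypothetical, and a real analysis should use a dedicated meta-analysis package rather than this hand-rolled regression:

```python
# Egger's regression test, sketched with ordinary least squares.
# An intercept far from zero hints at funnel-plot asymmetry, one
# possible signature of publication bias (small-study effects).

def egger_intercept(effects, ses):
    """OLS intercept of standardized effect (effect/SE) on precision (1/SE)."""
    x = [1 / se for se in ses]
    y = [e / se for e, se in zip(effects, ses)]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x

# Hypothetical meta-analysis: effect sizes and standard errors of ten studies.
effects = [0.42, 0.51, 0.38, 0.60, 0.55, 0.30, 0.48, 0.65, 0.25, 0.44]
ses     = [0.10, 0.15, 0.08, 0.20, 0.18, 0.05, 0.12, 0.25, 0.04, 0.11]

print(f"Egger intercept: {egger_intercept(effects, ses):.3f}")
```

In practice one would also compute a standard error and p-value for the intercept; the point here is only to show that the check itself is a few lines of work, not a major burden on a meta-analyst.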
My question to you is: how do you think we, researchers and upcoming researchers, should combat publication bias? Could there be a science in which publication bias is no longer an issue?