Meta-analysis is a statistical method for combining independent studies to assess whether their results disagree and to look for interesting patterns. In an ideal world, all valid results (i.e., results obtained with sound methods and statistics) on the topic under analysis would be at the analyst's disposal. By combining these results, the nature of a statistically significant finding can be examined from a broader perspective. Unfortunately, it is rarely the case that all results are published, and this is a serious problem.
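To make the combining step concrete, a common approach is fixed-effect inverse-variance pooling: each study's effect size is weighted by the inverse of its sampling variance, so more precise studies count more. This is a minimal sketch, not the author's own procedure, and the effect sizes and variances below are hypothetical:

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study is weighted by 1/variance, so precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies (effect sizes d with sampling variances):
pooled, se = fixed_effect_pool([0.30, 0.45, 0.10], [0.04, 0.09, 0.02])
```

If the published studies are a biased sample, this pooled estimate inherits the bias, which is exactly why the publication problem below matters for meta-analysis.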
In reality, a positive outcome makes a study more likely to be published (Mahoney, 1977; Bakker, van Dijk & Wicherts, 2012). When the scientific community pushes researchers to obtain significant results, motives other than the urge to find the truth may come into play. In the extreme, researchers may react by resorting to anything, including fraud, to obtain significant results. This would leave us with a heavily biased sample of published research consisting of significant results that do not correspond to the real world. One can rightly argue that the majority of researchers do not go to these extremes; however, reactions far milder than outright fraud can also severely distort the sample of published research (Simmons, Nelson & Simonsohn, 2011; John, Loewenstein & Prelec, 2012). When papers reporting true null results are rejected, and researchers are (perhaps unconsciously) encouraged to push results past a pre-specified significance level, we are left with unreliable publications. This brings us back to meta-analysis: meta-analyzing a biased sample of research is problematic. So, how are we to solve this problem? Here I will mention two solutions: (1) a solution from the perspective of conducting meta-analysis and (2) a solution from the perspective of the people involved in the publication process.
First, this problem is not new in psychology (Rosenthal, 1979). Researchers have already developed various ways to improve meta-analysis so that publication bias can be detected, for example by constructing funnel plots or conducting fail-safe N analyses. However, all these techniques only estimate the likelihood of publication bias: they measure its presence indirectly, so we can never get our hands on the actual size of the bias.
Second, several initiatives have been launched to make psychological science more transparent by making all conducted research available to everyone. One initiative that has been around for a while is www.psychfiledrawer.com, where researchers can upload non-significant replications that would otherwise remain unpublished. A more recent initiative is www.openscienceframework.com, a website where researchers can publish nearly everything they do in the course of their research and make it available for everyone to use and check.
Making analyses more sophisticated and psychological science more transparent will hopefully reduce bias to the point that we can (almost) fully rely on published research again.
Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543-554.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524-532.
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161-175.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638-641.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.