History has taught us that peer review was a great idea, and indeed it has been: science has never been bigger than it is today. However, recent reflections on the peer-review system expose some major flaws. Publication bias and confirmation bias are among the biases that influence whether a paper is published. The revolution of the printing press accelerated the growth of science; a new revolution, the internet, is now upon us. I propose that journals adopt a new, completely digital format. In this new system, new findings (papers) are posted on an online discussion board for all to see and comment on, for free. In this way everybody can comment on the process, and the authors can defend themselves and perhaps improve their results and methods, or further examine any possible confounds. Science becomes open, as it should be, and more people can influence the field and learn from the freely available body of knowledge.
Psychology has come a long way since its conception, but it is still plagued by many different factors. Fraud and QRPs (questionable research practices) are big problems for the scientific psychology community. Do we as psychology researchers generally know what constitutes a QRP and how to recognize one? Human factors will always play a role in scientific practice, in my opinion, so addressing problems like QRPs shouldn't focus solely on trying to take the human element out of psychology. Instead, there should be a re-evaluation of science education, with a bigger focus on the actual practice of being a researcher and the dangers you will face in the field. However, in educational programmes like the Research Master Psychology, you already see some of the same problems and incentives emerge, creating a climate in which quantity is valued over quality. These issues are briefly addressed in my paper.
Science is currently in a crisis. Results across the board do not replicate, null results are hardly ever published, and sloppy science aimed at obtaining favourable results is at an all-time high. At the heart of this issue lies publication pressure: the pressure on scientists to publish as many articles as possible. To solve this issue, I propose a performance interview as a more qualitative form of evaluation. Furthermore, an interview could better take into account all the different goals of a scientist: science, education, and societal benefit.
Kurt Lewin, one of the founders of social psychology, once stated that “there is nothing as practical as a good theory”. Yet today, many decades later, psychology appears to have all but forgotten about this advice: there is not much theory, and what little there is, is rarely good.
In fact, psychology is a profoundly experimentalist discipline: even those who do appreciate theory appear to do so mostly because it can inform future experimental research.
In contrast, I believe that abstraction is itself a fundamental goal of science, and as such psychologists should aim to construct theories that explain as much of the world as possible with as little as possible: broad in appeal, parsimonious in execution.
To achieve this, theories need to be both abstract (i.e., removed from the operational level) and formal (i.e., stated in mathematical, logical, or comparable terms). Today, few psychologists have the incentives, willingness, or ability to create such theories.
In fact, current structures incentivise the construction of overly narrow theories, which can then be milked for experimental publications. Terror management theory, for example, describes a tiny sliver of the human experience and only one of many meaning-maintenance mechanisms; but there would have been little incentive for its founders to move on to a more abstract theory.
This leads to an ideological-theoretical fragmentation of psychology, in which each theory has its adherents, mostly clustered around the institutions of its founders, while many theories are outdated or redundant – and nobody is cleaning up.
To counteract this situation, I believe we need theoretical psychology as an independent subdiscipline. Instead of having experimenters dabble as occasional theorists, theory construction should be a specialised skill to be learned early on, then pursued as a career.
Creating a new subdiscipline is not an undertaking any one actor should engage in alone. Rather, I believe several parties can support the development of better theory construction in psychology: universities, by introducing explicit training in theory construction; funding agencies, by supporting research and outreach projects aimed at creating best practices for theory construction as well as work towards 'cleaning out', unifying, and integrating theories; and publishers and editors, by demanding that theoretical publications be formalised and by making space for formal theorising.
I decided to write my letter of concern to Agnetha Fischer. As an example of what I think psychological research can be, I used the Piketty debate: a debate characterised by a highly technical and socio-political discourse, but with transparency of both data and method as a driving force behind it. I describe what I have been sharing with you in the poster presentation on high competition in psychological science as a possible explanation of scientific misconduct. I mentioned the paradox of the general rejection of QRPs alongside their estimated high prevalence. In addition, I used the example of the impact of mentors (both bad and good) to show that self-regulation is apparently too much to expect from researchers themselves, and that clear rules and arbitration are therefore necessary to minimise misconduct. Because I hold the UvA responsible as the organisation that creates the context for desirable and undesirable behaviour of its employees, I recommend the following:
1. Get informed: Communicate with your employees about the problems psychological science is facing and involve them in policy-making aimed at reducing the chance of misconduct like QRPs. Minimise the chance of frustration when introducing new strategies.
2. Become an Open Access University: Every UvA researcher who has published an article should also provide the raw data and data analyses on the institute's website. This should be an open and interactive environment for registered individuals, including those outside the UvA community: the UvA Open Access website.
3. Scientific Arbitration Team (SA-Team): There should be frequent investigations of published articles by UvA employees, examining QRPs and data handling. These can be done at random, just as tax officers conduct their audits. The investigations should be led by the SA-Team, supported by master's students of the methodology and research department. Investigating QRPs should become part of the students' curriculum, because it teaches them to judge research and because it will generate a thinking process that makes them reflect on their own integrity.
I included other responsibilities and prospects for future activities for the SA-team which I am happy to share with you while enjoying a drink on a Thursday afternoon.
Most important for me to communicate was that these recommendations should facilitate a transition in which pre-registration and sharing knowledge become the new norm, in order to make psychological research more reliable and meaningful. Self-regulation of individual researchers and the scientific community through transparency is the key aspect of these "solutions". I further discuss the expected positive consequences (regulation by self and others, enhancement of the UvA's public profile, etc.) and expected negative effects (resistance to change, practical and financial costs) of these recommendations.
My conclusion of it all:
"Although the field of psychology is going through rough times, the UvA could become a forerunner in turning the tide. In order to blossom, psychology and the UvA need rigorous change that goes beyond the comfort zone of the average scientist or organisation. By implementing new organisational strategies, one will enforce a more reliable practice and optimise the conditions for outstanding scientific conduct. I firmly believe that these recommendations will lead to the integrity psychological research needs, and that finding the truth will once again become the core priority of research practice."
Cheers and see you around!
Amid frightening reports of fraud and other scientific misconduct, psychology risks losing face as a science. A main concern is that the hypotheses and explanations provided by psychologists are worthless, having been obtained through questionable if not outright wrong methods. To remedy this, innovative projects such as the Open Science Framework and preregistration are now emerging, aimed at improving the system: at imposing rules and regulations that create a scientific environment in which researchers' practices can be monitored more closely, so that dodgy research behaviours like fraud and QRPs cannot exist, or at least can be detected more easily. One might wonder, though, whether this will be enough. After all, it was not the system but the researcher that chose to employ faulty research methods, and it may be optimistic to think that the current state of affairs can be improved without changing the researcher as well. For one, having the system make it harder for some faulty research practices to occur may just lead to an increase in others that can slip past the system (for example, making up data). With this in mind, this post will focus on improving psychology as a science by educating its researchers. The goal of this education will be to: 1) provide researchers with the knowledge to make responsible, well-informed research decisions, and 2) stimulate attitudes that will intrinsically motivate them to make such decisions. This is expected to reduce the use of questionable research practices and fraudulent behaviour in psychology. This blog post will explain why researchers need to change, what aspects need to change and in what manner, and end with suggestions on how to realise these changes.
Why do researchers need to change? As was said before, changing the system may not be enough. Certain faulty research practices, such as altering or making up data (i.e., fraud), can still occur. Besides, inhibiting some malpractices may simply lead to an increase in others; after all, researchers who were motivated to obtain certain results before the system changed will still be motivated to do so afterwards. Making it harder to do wrong will not always lead to less wrongdoing. As long as external motivators (for example, publication pressure and publication bias) keep pushing researchers to obtain certain results, researchers will strive for those results by any means necessary. Educating researchers may, however, provide them with the intrinsic motivation and the means to fight these extrinsic motivators: to prioritise the quality of research over personal gain.
What needs to change, then, and in what way? As was said before, the goal of this advice plan is to improve researchers' knowledge of research practices and to change their attitude towards research. The plan is to provide researchers with knowledge of both accepted and faulty research practices, in terms of research methods, designs, analyses, and inferences. The focus will be not only on bad research practices and why they should not be used, but also on good research practices: when they can and cannot be used, why, and how to use them. For example, one can address why deleting data points that don't support one's hypothesis is considered a questionable research practice, and what the proper way is to proceed if the data do not support the research hypothesis. This will provide researchers with sufficient knowledge to make well-informed, responsible decisions when involved in research. Furthermore, the plan is to change researchers' attitude towards research; more specifically, to stimulate a critical attitude, a willingness to face one's own shortcomings, and an (active) openness to change. For example, the focus can be on a critical attitude towards research, why this is considered productive, and how to assume this critical mindset. This will arm researchers with an intrinsic motivation to conduct good research, that is, research using only acceptable, healthy research practices.
How will this be accomplished? The changes in both knowledge and attitude will be implemented through education, in the form of lectures with corresponding workshops, and seminars. Each lecture will discuss one or more good research practices, one or more faulty research practices, or a research attitude. These initiatives will be organised by the UvA and are intended for UvA employees working in research or research-related practices (though people from outside the UvA are not excluded from partaking). A certain amount of attendance will be required for doing research-related work at the UvA. Finally, a small entrance fee will be charged to help finance these initiatives.
This is how educating researchers is expected to help reduce faulty research practices, such as questionable research practices and fraudulent behaviour, in psychology. It should be clear that this solution does not imply that the previously discussed changes to the system are bad, or indeed unnecessary, but rather that changing the system will not be very effective unless the researcher is guided through some changes as well. Hopefully, these changes will not only mend the reputation of psychology as a science but also restore its scientific value and integrity.
I have written a letter to Dr. Eric Eich, editor-in-chief of Psychological Science, proposing an improvement to the current peer-review system. First I outlined the problems associated with the current system. Peer review is underappreciated: it is often seen as a burden, has to be done in one's spare time, and is hardly rewarded. Also, the author often does not know who the reviewer is, which makes it easy for the reviewer not to spend much time on the review, or even to be plain mean. The lack of time spent on reviews, in turn, makes it more likely that studies in which the researchers have engaged in questionable research practices slip through.
I have proposed a new peer-review system. In this system, the reviewer does not know who the author is, to prevent certain biases (such as those based on the status of the author, or on whether he or she has published many articles in high-impact journals, et cetera). Reviewers receive rewards in the form of review-credits. The author assigns 1-4 review-credits to the reviewer, based on the quality of the feedback and the usefulness of the review. The editor can then subtract or add one review-credit, based on whether the review contains a clear recommendation to the editor concerning publication. The reviewer thus eventually ends up with 0-5 review-credits.
These credits become publicly available, at first on a website, but could perhaps also become visible in search engines, next to a researcher's number of publications. This forms a scientist's review index, with the total number of credits, the number of articles reviewed, and the average number of credits obtained per article. Universities may eventually consider including reviewing in job descriptions and thus paying for reviews, as good reviewers will have a high status.
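The credit arithmetic and the review index described above can be sketched in a few lines. This is a minimal illustration only: the names (`final_credits`, `ReviewIndex`) and the validation rules are my own, not part of the proposal.

```python
def final_credits(author_score: int, editor_adjustment: int) -> int:
    """The author assigns 1-4 credits; the editor adds or subtracts
    at most 1, so the reviewer ends up with 0-5 credits."""
    if not 1 <= author_score <= 4:
        raise ValueError("author assigns between 1 and 4 credits")
    if editor_adjustment not in (-1, 0, 1):
        raise ValueError("editor may adjust by at most one credit")
    return author_score + editor_adjustment


class ReviewIndex:
    """A scientist's public review index: total credits, number of
    articles reviewed, and average credits per reviewed article."""

    def __init__(self):
        self._credits = []

    def add_review(self, credits: int) -> None:
        self._credits.append(credits)

    @property
    def total(self) -> int:
        return sum(self._credits)

    @property
    def reviews(self) -> int:
        return len(self._credits)

    @property
    def average(self) -> float:
        return self.total / self.reviews if self._credits else 0.0
```

For example, a review scored 4 by the author and bumped up by the editor yields `final_credits(4, 1) == 5`, the maximum.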
The review index will lead reviewers to do a better job on their reviews. This results in a higher quality of published research articles, as questionable articles are less likely to slip through. Authors, in turn, will need to pay more attention to clear writing and the prevention of QRPs. The overall review quality will also be enhanced, as extrinsic motivation is stimulated. This way, the ship of science will be prevented from sinking.
In this essay, I argue that the scientific world would benefit from better research education for both students and researchers. Within the University of Amsterdam, re-introducing the so-called 'Research Practical' is a very practical and easy-to-implement way to acquaint students with several problems in science, such as the lack of replication. In this course, students can also learn how to deal with things like non-replication and finding a null effect.
The university should also take care to educate its researchers. This can be done through courses in methodology and statistics, or through an open forum on which researchers can discuss their methodological and statistical considerations with colleagues. Educating researchers does not only benefit science directly: since mentors have a great influence on their pupils, good researchers will create more good researchers. And if there is one thing that science needs, it is good researchers.
As you may know, psychology has been in a state of crisis in recent years. Social psychology, for example, has been shaken by several cases of scientific fraud, and there have been other allegations of 'bad research'. However, the problem of 'bad research' is a more general one. In addition to fraudulent research, questionable research practices (QRPs) and the selective reporting of positive results have broadly distorted the interpretation of scientific data. In some cases, the gap between what the data are and what the data are interpreted to be is so large that current research cannot be used by the private sector. For example, the pharmaceutical company Bayer reports that it is unable to use 'scientific findings' from universities, stating that sometimes as many as 75% of the results cannot be reproduced. This means that some of the 'knowledge' that universities produce is inaccurate or false: too unreliable to develop new medication from.
However, psychology needs reliable results if we want to apply psychology and gain something from it. Therefore, I think it is necessary and urgent that universities start to implement replication into psychology via education. We are likely to need many researchers who can replicate in the future, because many studies indicate that replication rates in psychology and other sciences are low (see Begley & Ellis, 2012). Psychology needs an overhaul in order to advance as a science, and therefore it needs many researchers who are able to adequately replicate studies. They will not appear out of nowhere, however. If we do not educate people, we risk that psychology gets a bad reputation among scientists, which may hurt its ties to the private sector. The private sector will not invest in a field that cannot produce applicable, reliable science, and this will affect funding for psychological research negatively in the long run.
In order to make the process of reviewing and advancing our science easier, universities should raise a new generation of scientists able to increase the reliability of scientific findings through replication. This can only happen if universities provide the means to educate students who are willing and capable of doing this. Replication is a cornerstone of science, and formal education is needed to do it well.
Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483, 531–533.
In my final essay I present a case against null hypothesis significance testing (NHST) using p-values. I first present several (by far not all) drawbacks and misinterpretations regarding NHST and its results. Then I present an example of a potential problem with using p-values as a decision criterion. My letter ends with a very brief summary of several methods that can be presented alongside, or used as alternatives to, NHST. These methods include confidence intervals, effect sizes, and Bayesian statistics.
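One well-known problem of this kind can be illustrated numerically: with a large enough sample, even a negligible effect becomes "significant", so a p-value alone says little about practical importance. The following stdlib-only sketch (the numbers and the one-sample z-test setup are illustrative, not taken from the essay) computes a two-sided p-value for a fixed, tiny standardized effect at two sample sizes.

```python
import math


def one_sample_z_p(effect_size: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test where the observed
    standardized mean difference (Cohen's d) equals effect_size."""
    z = effect_size * math.sqrt(n)
    # Two-sided tail probability of the standard normal distribution.
    return math.erfc(z / math.sqrt(2))


# The same negligible effect (d = 0.01) is non-significant with n = 100 ...
p_small = one_sample_z_p(0.01, 100)        # ~0.92, far from significant
# ... yet highly "significant" with n = 1,000,000, though d is unchanged.
p_large = one_sample_z_p(0.01, 1_000_000)  # far below 0.05
```

The effect size is identical in both calls; only the sample size changes the verdict, which is why reporting effect sizes and confidence intervals alongside (or instead of) p-values is recommended.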
The goal of this overview is to make people aware of the controversies in this part of scientific practice, and perhaps to inspire a new view on how we should educate new scientists: by offering not only theories based on p-value significance testing, but also a discussion of its flaws and alternatives.