Saving Psychology: Changing Research by Changing the Researcher

Between frightening reports of fraud and other scientific misconduct, psychology risks losing face as a science. A main concern is that the hypotheses and explanations provided by psychologists are worthless, as they were obtained through questionable if not outright wrong methods. To remedy this, innovative projects such as the Open Science Framework and preregistration are now emerging. These aim to improve the system: to impose rules and regulations that create a scientific environment in which researchers’ practices can be monitored more closely, so that dodgy research behaviors like fraud and questionable research practices (QRPs) cannot exist, or at least can be detected more easily. One might wonder, though, whether this will be enough. After all, it was not the system but the researcher that chose to employ faulty research methods, and it may be optimistic to think that the current state of affairs can be improved without changing the researcher as well. For one, having the system make it harder for some faulty research practices to occur may just lead to an increase in others that can slip past the system (for example, fabricating data).

With this in mind, this blog will focus on improving psychology as a science by educating its researchers. The goal of this education will be to: 1) provide researchers with the knowledge to make responsible, well-informed research decisions, and 2) stimulate attitudes that intrinsically motivate them to make such decisions. This is expected to reduce the use of questionable research practices and fraudulent behavior in psychology. This blog will explain why researchers need to change, what needs to change and in what manner, and will end with suggestions on how to realize these changes.

Why do researchers need to change? As was said before, changing the system may not be enough. Certain faulty research practices, such as altering or fabricating data (i.e., fraud), can still occur. Besides, inhibiting some malpractices may simply lead to an increase in others; after all, researchers who were motivated to obtain certain results before the system changed will still be motivated to do so afterwards. Making it harder to do wrong will not always lead to less wrongdoing. As long as external motivators (for example, publication pressure and publication bias) keep pushing researchers to obtain certain results, researchers will strive for those results by any means necessary. Educating researchers, however, may provide them with the intrinsic motivation and the means to resist these extrinsic motivators, and to prioritize the quality of research over personal gain.

What needs to change then, and in what way? As was said before, the goal of this advice plan is to improve researchers’ knowledge of research practices and to change their attitude towards research. The plan is to provide researchers with knowledge of both accepted and faulty research practices, in terms of research methods, designs, analyses, and inferences. The focus will be not only on bad research practices and why they should not be used, but also on good research practices: when they can and cannot be used, why, and how to use them. For example, one can explain why deleting data points that do not support one’s hypothesis is considered a questionable research practice, and what the proper course of action is when the data do not support the research hypothesis. This will give researchers sufficient knowledge to make well-informed, responsible decisions when involved in research. Furthermore, the plan is to change researchers’ attitudes towards research, or more specifically, to stimulate a critical attitude, a willingness to face one’s own shortcomings, and an (active) openness to change. For example, the focus can be on a critical attitude towards research, why it is considered productive, and how to assume this critical mindset. This will arm researchers with an intrinsic motivation to conduct good research, that is, research using only acceptable, sound research practices.
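To make concrete why selectively deleting data points is so harmful, consider the following minimal simulation sketch (not part of the original advice plan; the sample size, the number of dropped observations, and the one-sample t-test are arbitrary, hypothetical choices). It shows that excluding the observations that most contradict a predicted effect inflates the false-positive rate well beyond the nominal 5%, even when no true effect exists.

# A minimal sketch (hypothetical numbers) of why dropping "inconvenient" data
# points is a questionable research practice: it inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n, alpha = 5000, 30, 0.05
honest_hits = qrp_hits = 0

for _ in range(n_sims):
    # Data generated under the null hypothesis: the true mean is zero.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)

    # Honest analysis: one-sample t-test against zero on all data points.
    if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
        honest_hits += 1

    # QRP: drop the three observations that most contradict the "predicted"
    # positive effect, then run the very same test again.
    trimmed = np.sort(sample)[3:]
    if stats.ttest_1samp(trimmed, 0.0).pvalue < alpha:
        qrp_hits += 1

print(f"False-positive rate, all data kept:        {honest_hits / n_sims:.3f}")  # close to 0.05
print(f"False-positive rate, low points discarded: {qrp_hits / n_sims:.3f}")     # well above 0.05

A lecture or workshop could walk through exactly this kind of demonstration to show why exclusion criteria must be decided on, and reported, before looking at the results.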

How will this be accomplished? The changes in both knowledge and attitude will be implemented through education, in the form of lectures with corresponding workshops, and seminars. Each lecture will discuss one or more good research practices, one or more faulty research practices, or a research attitude. These initiatives will be organized by the UvA and are intended for UvA employees working in research or research-related roles (though people from outside the UvA are not excluded from taking part). A minimum level of attendance will be required of those doing research-related work at the UvA. Finally, a small entrance fee will be charged to help finance these initiatives.

This is how educating researchers is expected to help reduce faulty research practices, such as questionable research practices and fraudulent behavior, in psychology. It should be clear that this solution does not claim that the previously discussed changes to the system are bad, or indeed unnecessary, but rather that changing the system will not be very effective unless the researcher is guided through some changes as well. Hopefully, these changes will not only mend the reputation of psychology as a science, but also restore its scientific value and integrity.

 

 

Science under fire: What happened to the trust?

Soft sciences have been under fire recently – psychology in particular. Between shocking cases of fraud and questionable research reports, the discipline risks losing the trust of the scientific community. Common critiques are that data is easily fabricated, that data is rarely as definite as it is presented to be, and that results depend on the chosen method of analysis. In short, research results appear to depend more on the researcher than on the investigated phenomena.

In the case of data fabrication, it’s obvious that the researcher very consciously influences the results. Psychology has seen several examples of this kind of fraud in recent times, like the infamous social psychologist Diederik Stapel. On a positive note, instances like these have drawn a lot of attention to the importance of honesty and transparency in the scientific field. This has inspired several new ideas on how to monitor research, like mandatory preregistration or the Open Science Framework. On a lesser note, these ideas seem aimed mainly at preventing very deliberate misconduct, and might not focus that much on preventing misbehaviors with a little less forethought.

Cases of fraud, though probably more spectacular than other misdemeanors, are fairly rare. Other forms of misconduct, like changing research designs halfway through or leaving out undesired data points, however, are not (Martinson, Anderson, & De Vries, 2005). A lot of this behavior is blamed on the pressure to publish. Because of this, one would logically expect that reducing this pressure would eliminate the problem – let’s ignore for a moment that this is easier said than done. More troubling might be the realization that, apparently, so many psychological researchers give in to this pressure. The progression of science depends on building on previously gained knowledge, and this can’t happen if gaining knowledge has to take a backseat to publishing. This goes for all sciences, and psychology is no exception, so it is very important that all psychologists understand this and consider it their responsibility. Unfortunately, recent developments show that a proportion of psychological researchers are quite willing to sacrifice their integrity when it suits them.

The question that follows is what to do about the matter. As mentioned before, recent events have already inspired new measures. But are these desperate measures? Does psychology really need to be constantly policed? Can researchers not be trusted unless they’re being constantly monitored, like a little kid who can’t keep his hands off the cookie jar unless mommy stays in the room?

My own personal view isn’t this pessimistic. I believe that a lot of researchers simply don’t understand the impact of their own misconduct. Despite having relatively little experience with psychological research, I myself have already encountered the arguments “just this once” and “everybody does it” once or twice. I think some researchers are quick to dismiss the possible consequences of their actions, and I believe this is (partially) because they don’t fully understand the ramifications of what they’re doing. At the risk of sounding naïve, I think simply educating people (about the methods they’re using, about alternative methods, about inferences that can and can’t be made, and of course about the consequences of certain misconduct) might already improve the current situation a lot. More articles, more seminars, more (cross-discipline) communication, I say! Let’s create more awareness. Now, what I’m not saying is that the addition of guidelines and stricter controls wouldn’t prove useful – this may very well scare off intentional evil-doers and remind unintentional baddies to be a little more careful. What I’m saying is that, at the very least, we should give people a chance to better themselves. Reminding people of ethics seems better than enforcing them. Whether this will actually help, I wouldn’t dare promise. Even so, I believe that this approach to the problem could help nurse psychological research back to health, and hopefully even bring back the trust that has been damaged.

 

By: Jacqueline N. Zadelaar

Literature:

Martinson, B. C., Anderson, M. S., & De Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737–738.