The Good, The Bad, And The Science

It takes a lot of knowledge, effort, and diligence to be a good researcher. Every day we make decisions that can affect either our personal life (we may work overtime for extended periods to get an article published, neglecting our friends in the process) or our career (we might cave to a supervisor's not-so-subtle suggestion to massage the data of our last experiment into favorable results), but most of the time they affect both.

When we try to do everything right, we suddenly realize that not every issue is black and white: there are various valid ways to design a study, various unexpectedly invalid ways to operationalize our dependent variable, and many different ways to analyze our data. We find that what one person considers a clear instruction puzzles another, that some necessary steps are overlooked by the majority of researchers who publish articles, and that some design entire studies without noticing that the study never directly tests their hypothesis, which makes conducting it futile. We discover that many researchers cut corners, often without ill intent, but nonetheless to great effect.

Throughout this course we have heard many stories: some we had heard before, some we heard for the first time, and some (perhaps overlapping with both) we will hear time and time again. They ranged from researcher degrees of freedom and the sad truth about p-values to philosophical questions such as "What exactly is the probability of a hypothesis?", and from strictly mathematical truths about which analyses are appropriate for which kinds of data down to outright fraud.

The last lecture was a colorful composition of numerous short talks, which, among other things:

- compared psychiatric (mal-)practice to a displaced exercise in legislation and religion;
- reminded us how important it is to stay organized and to keep data sets neat and tidy;
- strongly suggested we use multi-level analyses in our future analytic endeavors;
- informed us that p-level adjustment methods developed for dependent data should be tested on simulated dependent data (see the sketch below);
- presented a number of options for removing outliers;
- reminded us, once again, how important it is to correct for multiple comparisons;
- criticized the way psychology students are taught statistics and research methods;
- suggested that we may have to doubly correct for multiple comparisons when investigating brain networks;
- inquired whether therapist allegiance effects are real;
- offered a puzzling account of how Simonsohn's (2012) fraud-detection method failed to detect in-vitro fraud;
- and provided us with a brief overview of some good research practices.
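On the point about adjustment methods and dependent data, a small simulation makes the idea concrete. The sketch below is my own illustration, not code from the lecture: it draws correlated z-scores under the global null and counts how often Bonferroni and Benjamini-Hochberg corrections flag at least one false positive. The equicorrelated covariance, the number of tests, and the use of statsmodels' multipletests are all assumptions on my part.

```python
# A minimal sketch, assuming an equicorrelated dependence structure:
# simulate dependent test statistics under the global null and check
# how often each correction method produces at least one false positive.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2012)
n_sims, m, rho, alpha = 2000, 20, 0.6, 0.05

# Equicorrelated covariance: every pair of tests shares correlation rho,
# a simple stand-in for "dependent data" (my choice, not the lecture's setup).
cov = np.full((m, m), rho)
np.fill_diagonal(cov, 1.0)

false_positive_runs = {"bonferroni": 0, "fdr_bh": 0}
for _ in range(n_sims):
    # Dependent z-scores; every null hypothesis is true by construction.
    z = rng.multivariate_normal(np.zeros(m), cov)
    p = 2 * stats.norm.sf(np.abs(z))  # two-sided p-values
    for method in false_positive_runs:
        reject = multipletests(p, alpha=alpha, method=method)[0]
        false_positive_runs[method] += reject.any()

for method, count in false_positive_runs.items():
    print(f"{method}: proportion of runs with >= 1 false positive = {count / n_sims:.3f}")
```

Under this kind of positive dependence both procedures should stay at or below the nominal level, with Bonferroni the more conservative of the two; swapping in a different correlation structure shows exactly why such methods need to be validated on the kind of dependence one actually expects in the data.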

At this point I have little to add, but I will leave you with a subtle quote:

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’ – Isaac Asimov

 

To my fellow “Good Science, Bad Science” students:

Mathias, Frank, Sanne, Sara, Marie, Monique, Rachel, Sam, Anja, Mattis, Vera, Barbara, Daan

 

References

Simonsohn, U. (2012). Just post it: The lesson from two cases of fabricated data detected by statistics alone. Available at SSRN: http://ssrn.com/abstract=2114571 or http://dx.doi.org/10.2139/ssrn.2114571
