The Case for Theoretical Psychology

Kurt Lewin, one of the founders of social psychology, once stated that “there is nothing as practical as a good theory”. Yet today, many decades later, psychology appears to have all but forgotten about this advice: there is not much theory, and what little there is, is rarely good.

In fact, psychology is a profoundly experimentalist discipline: even those who do appreciate theory appear to do so mostly because it can inform future experimental research.

In contrast, I believe that abstraction is itself a fundamental goal of science, and as such psychologists should aim to construct theories that explain as much of the world as possible with as little as possible: broad in appeal, parsimonious in execution.

To achieve this, theories need to be both abstract (i.e., removed from the operational level) and formal (i.e., stated in mathematical, logical, or comparable terms). Today, few psychologists have the incentives, willingness, or ability to create such theories.

In fact, current structures incentivise the construction of overly narrow theories which can then be milked for experimental publications. Terror management theory, for example, describes a tiny sliver of the human experience and only one of many meaning maintenance mechanisms; but there would have been little incentive for its founders to move on to a more abstract theory.

This leads to an ideological-theoretical fragmentation of psychology, in which each theory has its adherents, mostly clustered around the institutions of its founders, but many theories are outdated or redundant – and nobody is cleaning up.

To counteract this situation, I believe we need theoretical psychology as an independent subdiscipline. Instead of having experimenters dabble as occasional theorists, theory construction should be a specialised skill to be learned early on, then pursued as a career.

Creating a new subdiscipline is not an undertaking any one actor can accomplish alone. Rather, I believe that universities, funding agencies, and publishers can all support the development of better theory construction in psychology: universities by introducing explicit training in theory construction; funding agencies by supporting research and outreach projects aimed at creating best practices for theory construction, as well as work towards ‘cleaning out’, unifying, and integrating theories; and publishers and editors by demanding that theoretical publications be formalised and by making space for formal theorising.

The Golden Road of Open Access is the Path of Least Disruption

A few days ago, the Netherlands Organisation for Scientific Research (NWO) – the country’s major funding agency – announced that it will soon start to require funded research to be published in a fully open access journal. At a symposium at the University of Amsterdam to mark the beginning of the open access week, NWO president Jos Engelen today reasserted this aim and set out a bold vision: in ten years, all research in the Netherlands will be published following this ‘Golden Road’ to open access.

It would be easy to mistakenly believe that such a commitment to the ‘Golden Road’ to open access is a revolutionary step. After all, big words about making knowledge freely accessible to the public are at play. But the opposite is true: the seeming gold standard is perhaps the most conservative implementation of open access currently discussed, and misses an opportunity for profound changes to how scientists collaborate and scientific findings are communicated.

The cost of gold

In simple terms, the Golden Road states that scientific output should be published in fully open access journals. At today’s symposium, it was mainly contrasted with the ‘Green Road’, in which output is deposited in open access repositories and may still be published in closed access journals.

Golden Road open access has seen significant success in the Netherlands, growing two percentage points annually over the last few years and garnering the support of state secretary for Education, Culture and Science Sander Dekker. But this success has come at the expense of Green Road repositories, as Wouter Gerritsma of Wageningen University showed using the example of Narcis, the national open access repository. That is not only costly; it is also to the detriment of grey literature such as Ph.D. dissertations. Traditional papers make up only 45% of the content on Narcis; the remaining 55% remains hidden when gold, rather than green, is the publication standard. Indeed, grey literature benefits particularly strongly from open access repositories: seven out of the top ten publications downloaded from UCL’s ‘Discovery’ repository are dissertations, according to library director Peter Ayris.

The Golden Road also often implies an ‘author pays’ funding model, in which papers are freely accessible, but authors pay the journal for publication. These costs can be significant: a paper in one of the PLoS journals can cost between $1,350 and $2,900. In contrast, repositories – which do not provide peer review – are cheap to run and free to use. Gerritsma estimated that ‘fully gold open access’ in the Netherlands, at current output numbers and publication costs (around $1,200 per paper), would cost $27.7 million – not significantly less than the $34 million Dutch universities currently pay for journal subscriptions. Worse still: because universities would have to pay for both open access publishing and closed access subscriptions during an unspecified transitory phase, they would face both bills at the same time.
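A quick back-of-the-envelope calculation of my own (not a figure from the presentation) gives a sense of the scale involved: $27.7 million at roughly $1,200 per paper implies an annual output in the order of 27,700,000 / 1,200 ≈ 23,000 open access papers.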

A missed opportunity for change

One of my favourite quotes about the open access movement states that “if open access does not hurt Elsevier, we are doing it wrong”. The Golden Road seems to be exactly that wrong path: it merely shifts around when the bill is paid, but the sums remain (approximately) the same. This would perhaps seem reasonable if the cost of journal subscriptions reflected the value rendered to the scientific community by publishers, but it clearly does not: scientific publishing is one of the most profitable industries there are, and the cost of subscriptions is widely considered exploitative.

The Golden Road also misses an opportunity to replace the century-old publication model based on journals with procedures and technology for the digital age. At a recent question-and-answer session, Denny Borsboom called for psychology to adopt the publishing model of mathematics and physics, in which results are uploaded to arXiv and receive open peer review there, with subsequent publication in journals being almost secondary. In that, he echoed Brian Nosek, who (together with Yoav Bar-Anan) has laid out a grand agenda for opening up scientific communication, one which gradually decouples steps such as evaluation and publication that now seem inextricably linked. Open access is a first step in this agenda – but if open access publishing remains firmly in the hands of for-profit journals, the following steps may never come.

Addendum: Wouter Gerritsma’s presentation is now available online here.

Discussing good science on Twitter

A while ago I wrote a blog post here on Using Twitter to Explore the Frontiers of Psychological Science, in which I showed how students and researchers can harness social media to discuss good science.

Twitter is a great platform to gather information, discuss ideas, and spread your output. As a discursive medium, it allows anybody quick and barrier-free access to contemporary debates. It is also a powerful tool to spread the word about your own work and the work of others.

To give an example, I have summarised how I have used Twitter to write and publicise two of my posts on this blog. Check it out on Storify!

Han van der Maas and Jelte Wicherts on Scientific Fraud

Jens Förster, who until this year was professor of social psychology at the University of Amsterdam and has since been accused of fraud, recently reasserted his innocence: “I did not falsify data. And the main referees confirm that the results of my studies, although unlikely, are not impossible.”

Indeed, an anonymous report from a colleague in the UvA’s methodology department put a number on the probability of Förster’s data: one in five hundred and two quintillion. It is only the latest in a series of fraud cases in psychology which have shaken the field, but have also helped bring a discussion on good science into the mainstream.

The case of Jens Förster, and the one of Diederik Stapel before him (who admitted to fraud in dozens of cases), have also involved the UvA itself, and have brought about soul-searching and institutional changes. We spoke with Prof. Han van der Maas, head of the methodology department and the graduate school of psychology at the UvA, and Jelte Wicherts, professor of methodology at Tilburg University.

What is fraud?

One of the first major points to come up in such a discussion is the delineation of scientific fraud itself. Classically, it encompasses fabrication, falsification, and plagiarism; but our guests preferred to focus on the former two: plagiarism may be a problem, but it does not appear to harm science itself (mirroring an argument made by Daniele Fanelli).

On the other hand, the line between fraud and “questionable research practices” (QRPs) is far from clear: as researchers become more and more aware that practices such as selective reporting of conditions distort the scientific record, these may move from being merely discouraged into the realm of fraud. Indeed, van der Maas and Wicherts agreed that such QRPs were, by virtue of their sheer frequency, more harmful to science than outright fraud.

How to detect fraud?

In the case of Jens Förster, detecting fraud was, to some degree, trivial: the reported means and standard deviations alone were so unlikely, even under the most generous of assumptions, that a strong case could be made against him. But even then, in the absence of raw data, an early investigation by the UvA shied away from a verdict of fraud, claiming that QRPs could not be excluded as a source of these patterns (a verdict since overturned by the National Board for Scientific Integrity, LOWI).

Other cases are much harder. Prof. van der Maas was part of the Commissie Drenth, which investigated the work of Diederik Stapel at the UvA, where Stapel had spent time as a graduate student and fellow in the ’90s. With the raw data absent, the investigation was limited to hunches and guesses – less than would have been necessary for a verdict had Stapel not admitted to fraud. Even in the Förster case, the LOWI investigation, shy of stating that he had committed fraud, merely concluded that the reported patterns (and submitted raw data) had to have been tampered with.

What to do with fraudsters?

The case of Stapel did not come up only because it hit close to home at the UvA; it is also one of the best-documented instances of scientific misconduct, with a highly public inquiry. In many other cases, however, the details are swept under the rug. Van der Maas cited the case of Marc Hauser, a leading researcher in animal cognition who lost his position over fraud after a three-year enquiry by Harvard University – the results of which have only been publicised cursorily.

But even where efforts are made, they are, in a sense, amateurish: investigations are conducted by peers who are recruited based on their willingness to participate as much as their expertise. Perhaps, then, it would be better if a third party were in charge of investigating fraud. For example, van der Maas argued, the NWO, the major Dutch funding organisation, could have professional fraud investigators on staff who would both conduct random tests and follow up on accusations. For now, though, universities are still in charge: the UvA has just announced that it will investigate Jens Förster’s older papers.

What can universities do?

Perhaps surprisingly, van der Maas and Wicherts both denied that publication pressure was to blame for fraud. They pointed to cases like Cyril Burt, an early educational psychologist who falsified data on the heritability of IQ late into his career, when he had already been knighted – driven by the utter conviction that his conclusions were right. Indeed, from Mendel to Stapel, fraudsters appear to truly believe in their hypotheses – even if they made up the data to prove them. More research on the propensity to commit fraud as an individual difference variable thus seems a more promising avenue to explore – the work of Dan Ariely seems a good start.

However, universities do have the power to make fraud more difficult to commit and easier to detect. Wicherts, who has studied the association between willingness to share data and poor research practices, emphasises the power of open data: when raw data is published, excuses such as Förster’s claim that a hard drive crash eviscerated his data become untenable. The Department of Psychology at the UvA has made a first step in this direction by requiring researchers to submit their data to a university database starting in 2015, regular checks on data availability included. And lest you think that this is just a friendly service to clear the professors’ hard drives – the server is called ‘Big Brother’.

What about education?

Education, too, should play a role in curbing fraud. But while much of the focus often lies on students – indeed, the Q&A with van der Maas and Wicherts and this blog post are part of a course called ‘Good Science, Bad Science’ focused on improving psychological research practices – it is senior researchers who may need to be targeted more. Drawing on reports from Melissa Anderson at the Human Factors in Science conference early last month, our guests pointed out that the University of Minnesota has regular, mandatory courses in research integrity for faculty members.

There is also a role for methodologists, who should work more closely with researchers in substantive fields and communicate good practice. This is something that can start at the undergraduate level, where research methods and statistics are often taught on “perfect cases” – normally distributed data, clear-cut analyses, significant p-values. Instead, education should emphasise decision-making in the analysis process – including the decision to consult an expert.

In conclusion, there was no doubt that fraud is bad – although less malevolent corner-cutting and misinformed analyses may well be worse for the progress of science. Yet, there was also great hope and even confidence that psychology is on its way into a better future. If that is the case, prominent cases of fraud have played their role in fuelling the debate on making ours a better science.

We’ve Got the Tools: Literate programming for open science

This morning, the announcement of the second round of the Many Labs collaboration crashed the website of the Center for Open Science. It is a sign that, despite all woes, at this point in time there exists a sense that something can be done to improve psychological science, and forge a better future for our discipline. There are many fascinating initiatives, but I am perhaps most excited about what features in the Center’s tag line: tools for open science.

Many such tools already exist. Among the most powerful, I believe, is the combination of R, Sweave/knitr, and Git/Github that allows researchers to create accessible and reproducible statistical analyses, share them with others, and collaborate on improvements.

R is a statistical programming language that, as Lukas writes on this blog, allows you to “document literally every step in your preprocessing and statistical analyses”. But merely documenting the code is not enough: it is also important to document the decisions that led to this code, and to explain it to the reader; i.e., to write what is called “literate” (human-readable) code. Such a document looks approximately like this:

The following code implements printing of the phrase “hello world”

<<>>=
print("hello world")
@

Easy, isn’t it? Sweave and knitr are two solutions for writing literate code, both of which are supported by the splendid RStudio IDE. They allow the author to create an R file with code that can be executed, but which can also be compiled into a human-readable document that includes text, code, and R output. Knitr, which builds on Sweave, should be your preferred solution today; it supports the creation of LaTeX, HTML, and Markdown output.
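To give a flavour of the Markdown variant, here is a minimal sketch of the same example as a knitr (R Markdown) chunk; the chunk label ‘hello’ is just an arbitrary name I chose:

The following chunk prints the phrase "hello world".

```{r hello}
print("hello world")
```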

A great example of the usefulness of R and knitr is provided by Tim Churches, who re-analysed a meta-analysis on the effectiveness of bicycle helmets (if you, like me, prefer to cycle without, the results may be disappointing). If you click the link, you will find what looks like a regular website, but is in fact a Markdown file created using knitr and shared with the world on Github.
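Producing such a page yourself takes little more than a single function call. As a sketch from the R console (the file names below are hypothetical):

library(knitr)

# A Sweave-style file: knitting produces a .tex document that can be compiled with LaTeX
knit("analysis.Rnw")

# An R Markdown file: knit2html() produces an HTML page, ready to be shared, e.g. on Github
knit2html("analysis.Rmd")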

That last point is the finishing stone in my argument: once documents have been created following the principles of literate programming, they are ready to be shared, just like a written-up paper would be. Github – which Lukas has introduced on this blog – enables just that. Built on the version management tool Git, it is a platform on which you can archive your code (or indeed any other) files. Better yet: you can ‘fork’ – copy and manipulate – the documents shared by others. If we believe that flawed analyses should be corrected, this is the easiest way to do so.

So, if you have not already learned R (or another statistical programming language such as Python or Matlab), I highly recommend you do. Once you have, nothing is easier than to start writing literate code – as I said, RStudio supports knitr out of the box. The final step is to start sharing your work on Github. Git is supported natively by RStudio, too – here’s a manual.

The Future is Already Here

Attending the seminar on Improving Scientific Practice at the University of Amsterdam last week, I was more than once reminded of the quip by William Gibson (of Neuromancer fame) that “the future is already here – it’s just not very evenly distributed”. It’s a quote that gets thrown around a lot in the hacker circles I used to frequent, but I believe that it’s just as true for science communication.

At one point, Kees Schuyt – a sociologist of law who has been a professor for more than four decades – lamented the lack of post-publication peer review today: in earlier days, a prominent article would have been followed by four, five responses in the journal over the next few issues; but today, he said, he hardly sees this.

Schuyt’s comment is interesting to me not just because it gives an insight into the scientific practices of yore, but also because I see post-publication peer review all around me, every day. It’s not called that, of course, but scientists, students, and journalists discuss and critique the research of others all the time on blogs and Twitter (see my introduction to using Twitter in the pursuit of good science). Such networked communications technologies are almost perfectly suited to support the scientific discourse – and we are far from using their full potential.

The Networked Public Sphere

One of the slogans of new media advocates is “we are the media”. Indeed, the strength of blogs lies in the fact that anybody, anywhere, can run one. Seeing the difficulty Nick Brown (another attendee of last week’s seminar) and his collaborators have had in getting their criticism of Barbara Fredrickson’s “positivity ratio” nonsense math published by the original journal, barrier-free access to publishing is an asset to good science.

Yet, a researcher like Prof. Schuyt, reading the journal online or, even more removed, on paper, would never have known about criticism published on a blog. Here, traditional journals lack an interface with other media. The technology to connect original publications and responses already exists: blogs have the so-called “pingback”, which notifies the owner of a blog post whenever another site links to it. Because pingbacks are posted as comments under the article, they are also visible to other readers: they weave a network between articles and responses.

Comment sections, too, are still uncommon in scientific journals, despite their obvious potential for post-publication peer review. Instead, readers wanting to critique a paper have to turn to services like PubPeer – which now offers a browser extension to display comments on publishers’ websites: a user-based work-around where journals have been too slow to adapt.

PLoS One displays tweets containing the article link next to the web version of the paper.

Linking papers, blog posts, and comments to each other in such a way is important, in particular for the reader who encounters them later – and we hope our publications will remain relevant for years, even decades, to come. But in the present moment, the best way to link them is often Twitter: the service allows for real-time discourse linking content across platforms. In its ephemeral nature, Twitter thus complements the more slow-moving debate based on, first, longer blog posts and, finally, responses in journals and review articles.

Making Discourse Visible

If you believe that post-publication peer review is the future, this future is already here – it’s just not very evenly distributed. The tools I have mentioned – blogs with pingbacks, comments, and Twitter integration – already exist, but as of yet, they are used by a technophile few, hardly integrated with scientific publication outlets, and often invisible to the uninitiated.

Integrating official publications with networked media is a particular opportunity for open access journals. Some of them already do so: PLoS, for example, allows comments and displays relevant tweets. Other publications, in particular closed access ones, lack these features and thereby make it difficult to discuss, review, and correct the scientific record. To integrate these outlets into the discursive network of post-publication peer review will be to evenly distribute the future that’s already here.

Using Twitter to Explore the Frontiers of Psychological Research

Twitter is a great tool to keep up with developments in psychological research good and bad. In this post, I will make a case for using Twitter as a student or researcher. For those yet uninitiated, I will give a simple and easy-to-follow introduction on how the platform works and how to use it. Finally, I will recommend a few advocates of good science to follow on Twitter. All in all, this should enable you to dive right into the debates on post-publication peer review and pre-registered replication reports (and all the other things that make good science).

Why Use Twitter?

Some social media platforms – notably Facebook – have been characterised in the literature as ‘semi-public spheres’. The public sphere, of course, was developed as a concept by social theorist Jürgen Habermas, who evoked the coffeehouses of yore as places in which the public could meet in free and non-violent discourse. In the evening, after work is done (the wonderful German word Feierabend unfortunately does not have an adequate English translation), the citizens would come together and discuss the urgent and not-so-urgent matters concerning their community. Twitter, in a sense, is a hyper-public sphere: it is a place not only to meet with friends and acquaintances, but also one in which it is perfectly acceptable to eavesdrop on and even butt into the conversations of others.

Such low barriers to entering a conversation are particularly great when you are a student. Not only do many conversations normally take place behind doors that are, for you, still closed; you will surely feel uncomfortable butting into a chat between your intellectual heroes should you find them, say, at a conference (and they might not appreciate it either). On Twitter, social conventions are more relaxed, and the whole platform is designed around the idea that conversations take place in public, and it’s o.k. to listen. And there sure is a lot to listen to! Many, many researchers and advocates use Twitter, and have meaningful and interesting conversations with each other. Later in this post, I will show you how to find them.


And then, of course, there is also the old wisdom that on the Internet, nobody knows you’re a dog (or a lowly master’s student, for that matter). Yet, Twitter is also a great place to make a name for oneself. If you have something smart to say, it’ll be the content more than who you are that counts; and through its features the platform allows anybody who follows you to quickly share your work with others. What could be better for an evangelist (and let us be honest, all committed researchers, and advocates of good science all the more, are missionaries of their cause)? So, Twitter allows you to connect easily with people all over the world, to follow the conversations of thought leaders in your field, and to quickly get your own work and ideas to those who care.

How to Use Twitter

In this section, I will give an accessible introduction to the platform. If you already know how Twitter works – or feel comfortable with social media in general – you’ll probably want to skip this section. Also, if you don’t trust me, there’s a great Twitter primer by the Society for Personality and Social Psychology. All others, here we go!

Although you do not need an account yourself to read content on Twitter, getting one is a great place to start. Not only does it allow you to post tweets yourself; it also enables you to select other users to follow and regularly receive their updates, as well as a few other useful things. Signing up is easy! All you need is an email address and a username you want to go by on Twitter (choose something short and not too cryptic). The site also asks for your full name, but any pseudonym will work. A lot of people do not tweet under their real name! Once you’ve created an account, also upload a profile picture and write a short description so people know what you care about.

Updates on Twitter (‘tweets’) can be up to 140 characters long – being eloquent under these restrictions is a whole art unto itself! So, try to be concise (although it’s customary to split longer messages into multiple tweets. Just indicate that there’s something to follow, e.g. by going 1/2, 2/2). There’s no need to worry about the length of links, though! Twitter has a built-in link-shortener, so all links will be of the same length. Ready to send out a ‘hello world’?


Next, you will want to follow some people. Following somebody establishes a one-way link (i.e., they can follow you back, but they do not have to – there is no obligation there): you will see all posts written by that user on your wall (the feed on the home screen). I will recommend a few interesting advocates of good science to follow later, but perhaps you want to go explore on your own first. Twitter offers a host of recommendations, for instance under ‘Who to follow’ and ‘Popular accounts’ on the left-hand side.

Finally, there are three basic, but important features of Twitter. One is the hashtag (although used all over these days, it originated on Twitter): #word. It’s a way of assigning a label to a tweet, and users can search for tweets containing a particular hashtag. A lot of conferences, for instance, have an official hashtag to make it easier to find tweets from participants (go and type ‘#easp2014’ into the search field in the upper right corner – it’s the official hashtag of the conference of the European Association of Social Psychology that was hosted at the UvA over the summer).

Second, there is the retweet option. When you hover over a tweet with your mouse, you will see three options – ‘reply’ (we will get to that), ‘retweet’, and ‘favorite’. When you click on retweet, the post will appear on your wall (and on that of the people who follow you). It will still be attributed to the original author, but also show that you retweeted it. This is the key feature behind Twitter’s power for spreading ideas – including those linked to in a tweet – at rapid speed and with great reach.


The last, and most important, feature is the ‘mention’. Prefixing another Twitter user’s handle by an ‘@’ will make that user aware of your message (you can find messages directed at you under ‘Notifications’ in the upper left corner). When you start a tweet with an @ and a handle, this message will only appear on the wall of the person you are addressing and contacts you have in common. So nobody will be bothered by your conversation with somebody they don’t know! When you click on the ‘reply’ button underneath each tweet that I mentioned earlier, it’ll set up an @-message that is linked to the original tweet. What’s so great about that? People can see what you are replying to! When a conversation goes back and forth dozens of times, that can be very handy. Also, clicking on a tweet will show you the thread of tweets it (may have) replied to, and all replies it has received. Go find some reply-tweets and try it out!

Now you should be ready to delve right into Twitter. Perhaps you’ll still want to be a passive reader for a while to see what people are talking about, but first you’ll have to find some interesting people to follow. Let’s go do that in the next section!

Finding Conversations

Who exactly you’ll want to follow on Twitter will depend on your own interests. After all, it’s not just good science advocates on there, but literally people from all walks of life. Here, I’ll just recommend a few people whose conversations about good science I’ve found particularly exciting to follow.

Chris Chambers / @chrisdc77 – cognitive neuroscientist and, as editor of Cortex, one of the driving forces in establishing pre-registered reports in journals.

Brian Nosek / @BrianNosek – social psychologist and director of the Center for Open Science (@OSFramework). Leader in the replication movement.

Neuroskeptic / @Neuro_Skeptic – anonymous blogger and quite certainly the snarkiest critic of neuroscience on the Internet.

Daniël Lakens / @lakens – experimental psychologist and methodologist at TU Eindhoven; open science advocate.

Erika Salomon / @ecsalomon – social psychology Ph.D. student and blogger, among others for the SPSP blog.

Ben Goldacre / @bengoldacre – Science journalist and author of the bestselling books “Bad Science” and “Bad Pharma”; leader of the AllTrials campaign to require registration and publication of clinical trials.

Ed Yong / @edyong209 – science journalist and blogger for National Geographic. Seemingly never sleeps, and so while not a psychologist, still engaged in many conversations.

Sanjay Srivastava / @hardsci – personality and social psychologist at the University of Oregon and author of the excellent blog The Hardest Science.

Betsy Levy Paluck / @betsylevyp – Princeton professor of psychology and public policy and an outspoken defender of good research practices.

Rolf Zwaan / @RolfZwaan – psychologist at Erasmus University Rotterdam and prolific blogger.

Uri Simonsohn / @uri_sohn – methodologist and leader of the replication movement; recent inventions include the p-curve as a measure of publication bias. Also author, with his colleague Joe Simmons (@jpsimmons), of DataColada.

Heather Coates / @landPangurBan – data librarian at Indiana University and research transparency advocate.

Jelte Wicherts / @JelteWicherts – Han’s former Ph.D. student; now methodologist at Tilburg University. Speaker at the ‘Human Factors’ conference!

Kai Jonas / @KaiJJonas – hipster. Also social psychologist at the UvA and editor-in-chief of Comprehensive Results in Social Psychology, a journal based on pre-registration.


Matt Wall / @m_wall – neuroscientist and occasional author of the rather useful blog Computing for Psychologists.

Simine Vazire / @siminevazire – personality psychologist and regular blogger on good science.

Dorothy Bishop / @deevybee – developmental neuropsychologist and blogger; advocate for replication.

Michael Eisen / @mbeisen – biologist and co-founder of PLoS; open access advocate.

Alex Holcombe / @ceptional – cognitive neuroscientist and advocate of registered replication reports; runs PsychFileDrawer, a platform for sharing replications.

Dale Barr / @dalejbarr – social scientist and methodologist at the University of Glasgow.

… and last but not least:

Lego Academics / @LegoAcademics


I hope I’ve been able to convince you of the value of Twitter for you as an advocate of good science. If you want to follow me, I’m @simoncolumbus. See you there.