What Does the Psychologist Do?

Like most people, we don’t just say what we did; we also add why we did it. For example: “I scream at you because I am very angry at you” (what is anger?) or “Because I’m very intelligent I was able to get a high score on that test” (what is intelligence?). We use unobserved entities to explain our behaviour. In psychology we are likewise concerned with the causes of human behaviour.

Psychologists measure all kinds of things that have to do with behaviour. But what exactly do they measure? To answer this question I will make a distinction between manifest variables and latent variables. A manifest variable is a variable that a psychologist can observe directly, such as the response someone gives on a questionnaire. A latent variable is unobserved, which makes it very hard to measure, or, even worse, to prove that it exists at all. Examples of latent variables are depression, intelligence, motivation, power, and sadness.

Depression, intelligence, motivation, power, and sadness are all examples of what a psychologist tries to measure. You might think that measuring these things is not that hard. Thinking about yourself, you might say that you know very well whether you are depressed, intelligent, or motivated. You might even say, “Come here psychologist, I will tell you how depressed, intelligent and motivated I am”. But the psychologist will answer that such information is of no use because it is subjective. And if a psychologist is subjective, he cannot work as a researcher at the university.

What the psychologist very often does is measure latent variables indirectly. To measure a latent variable indirectly, (s)he needs manifest variables. Why? Because the psychologist thinks that people’s responses on manifest variables are caused by a latent variable. For example: if someone responds to the statement “I don’t sleep very well” on a seven-point Likert scale with a seven, meaning that this person barely sleeps, we believe that this response is caused by depression. If we have a collection of such statements, we believe we can say something “objective” about depression (a subjectively constructed latent variable, fyi). We do this by putting all the data in a computer so that the latent variable can be calculated. By calculating a number for a latent variable, it becomes kind of real.
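
To make this concrete, here is a toy sketch in Python of turning manifest Likert responses into one number for a latent variable. The item texts and responses are invented, and the simple average score is only a stand-in for the far more sophisticated measurement models psychologists actually use:

```python
# Toy illustration: manifest Likert responses combined into one number
# for a latent variable. Items and responses are made up, and the plain
# average is just a stand-in for a real measurement model.

items = {
    "I don't sleep very well": 7,        # 1 = disagree, 7 = agree
    "I feel down most of the day": 6,
    "I have lost interest in things": 5,
}

# The psychologist assumes one latent cause (here: depression) behind
# all three responses, so the responses are summarised in a single score.
depression_score = sum(items.values()) / len(items)
print(depression_score)
```

The number that comes out is then treated as the person’s standing on the latent variable, which is exactly the move the paragraph above describes.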

And how do we do that?

Psychologists have a collection of formal models at their disposal to measure latent variables. A formal model is just a lot of mathematics that, in itself, has nothing to do with psychology or behaviour.
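
To illustrate what such a formal model looks like, here is a sketch of a one-common-factor model, one of the standard formal models for latent variables. All parameter values are invented. Each observed item score is a loading times an unobserved theta, plus noise; the items end up correlated purely because of their common latent cause:

```python
import math
import random

# Sketch of a one-common-factor model (all parameter values invented):
# each observed item score is x_j = loading_j * theta + noise_j,
# where theta is the unobserved latent variable.

random.seed(1)
loadings = [0.8, 0.7, 0.9]
item_scores = [[], [], []]

for _ in range(2000):
    theta = random.gauss(0, 1)          # the latent variable, never observed
    for j, loading in enumerate(loadings):
        noise = random.gauss(0, 0.5)    # item-specific measurement error
        item_scores[j].append(loading * theta + noise)

def pearson(xs, ys):
    """Plain Pearson correlation, written out to keep the sketch self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The items correlate even though no item influences another:
# the correlation is produced entirely by the shared latent theta.
r = pearson(item_scores[0], item_scores[1])
print(round(r, 2))
```

Notice that nothing in the mathematics mentions depression or motivation; the model only says that several observed numbers share one hidden cause.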

But how can you measure motivation with something that has nothing to do with motivation? I don’t measure length with a weighing scale, do I?

What a psychologist does is dress the formal model up with theories: theories about depression, motivation, or intelligence. Then he checks whether the formal model, with all its equations, fits the clothes with which he tried to dress it up. If it does, the psychologist can say that his theory is approximately true. We can use the same clothing analogy to show one of the shortcomings of this approach.

Clothing stores sell their clothes in only a limited number of sizes, and often a lot of different people fit the same t-shirt. Likewise, a good fit between model and theory is weak evidence: the same formal model can wear many different theoretical outfits.

 

Boris Stapel

Recommended reading:

Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203-219.

Borsboom, D. (2008). A tour guide to the latent realm. Measurement, 6, 134-146.

Publication Bias

Meta-analysis in statistics refers to a method of combining independent studies to see whether there is any disagreement among the results and to look for interesting patterns. In an ideal world, all valid results (i.e., results obtained through the use of sound methods and statistics) on the topic under analysis would be at the analyst’s disposal. By combining these results, the nature of a statistically significant result can be investigated from a broader perspective. Unfortunately, it is rarely the case that all results are published. This is a serious problem.

In reality, a positive outcome of a study makes it more likely that you can publish your results (Mahoney, 1977; Bakker, Van Dijk, & Wicherts, 2012). When the scientific community pushes researchers to get significant results, factors other than the urge to find the truth might come into play. Researchers can react to this in extreme ways by engaging in behaviour where anything goes (e.g., fraud) to get significant results. This would leave us with a very biased sample of published research, consisting of significant results that do not correspond with the real world. One can correctly argue that the majority of researchers do not go to these extremes; however, reactions much milder than outright fraud can also have a severe effect on the sample of published research (Simmons, Nelson, & Simonsohn, 2011; John, Loewenstein, & Prelec, 2012). When papers that show true null results are rejected, and researchers are (perhaps unconsciously) encouraged to force results to a pre-specified significance level, we are left with unreliable publications. This brings us back to meta-analysis: meta-analyzing a biased sample of research is problematic. So, how are we to solve this problem? Here I will mention two solutions: (1) a solution from the perspective of conducting meta-analysis, and (2) a solution from the perspective of the people involved in the publication process.

First, this problem is not new in psychology (Rosenthal, 1979). Researchers have already developed various ways to improve meta-analysis so that a publication bias can be detected: funnel plots, fail-safe N analyses, and much more. However, all these solutions only estimate how likely a publication bias is; through such indirect measures we can never get our hands on the actual size of the bias.
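
For instance, Rosenthal’s (1979) fail-safe N estimates how many unpublished null results would be needed to drag a set of significant findings back below significance. A minimal sketch, using invented z-values and the conventional one-tailed alpha of .05, might look like this:

```python
import math

def fail_safe_n(z_values, alpha_z=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging to
    zero effect would push the combined (Stouffer) z below alpha_z."""
    k = len(z_values)
    sum_z = sum(z_values)
    # Stouffer combination: sum_z / sqrt(k + x) = alpha_z, solved for x.
    x = (sum_z ** 2) / (alpha_z ** 2) - k
    return max(0, math.floor(x))

# Invented example: five published studies, each individually significant.
print(fail_safe_n([2.0, 1.8, 2.3, 1.7, 2.1]))
```

A large fail-safe N suggests the combined result is robust to the file drawer; but note that this only gauges how worried we should be, not how many studies the file drawer actually contains.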
Second, several initiatives have been started to make psychological science more transparent by making all conducted research available to everyone. One initiative that has been around for a while is www.psychfiledrawer.com, where people can upload non-significant replications that would otherwise not have been published. A more recent initiative is www.openscienceframework.com, a website where researchers can publish almost everything they do in their research and make it available for everyone to use and check.

Making analyses more sophisticated and psychological science more transparent will hopefully reduce the bias to the point where we can (almost) fully rely on published research again.

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543-554.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524-532.

Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161-175.

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638-641.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.