Is Psychometrics Pathological?

Every few months, the contentious question of whether psychometrics qualifies as a genuine empirical science resurfaces in the specialist literature of the field. In this context, one often reads eloquent philosophical articles on the status of the social sciences compared to the physical sciences. These articles frequently point to inherent differences in the disciplines' objects of study: the complex dynamics within, around and between human beings versus the exact rules that govern the natural world. In the natural sciences, scientists seek to understand how the world, nature and the universe around us work by applying scientific methods; they rely on experimental data that can be expressed in quantitative terms. Psychometrics rests on an analogous conviction that psychological attributes are quantitative: psychometricians assume that they are able to measure concepts like personality or ability.

In my paper, I focused on a recent discussion of this questionable assumption. In 2000, the journal Theory & Psychology published an article titled “Normal Science, Pathological Science and Psychometrics”, written by Joel Michell of the University of Sydney. In 2004, Denny Borsboom and Gideon Mellenbergh of the University of Amsterdam published a comment on Michell’s paper, also in Theory & Psychology. Their discussion centers on the question of whether psychometrics is a pathological science.

In his article, Michell (2000) claims that psychometrics is a pathological form of science. He contrasts two terms: normal science and pathological science. Michell’s definition of normal (or optimal) science is based on the principles of critical inquiry (Michell, 2000, p. 640). Critical inquiry consists of two forms of testing, corresponding to the two types of possible error: logical and empirical. Critical inquiry makes it possible to identify such error and remain aware of its occurrence. Normal science turns into pathological science when one no longer works according to the principle of critical inquiry. According to Michell, psychometrics breaches this principle on two distinct levels: (1) the hypothesis that psychological attributes are quantitative is accepted as true without a serious attempt to test it, and (2) this omission is not discussed and is even disguised.

In their comment on Michell’s article, Borsboom and Mellenbergh (2004) argue that if one declares psychometrics a pathological discipline, then nearly all other scientific disciplines are pathological, too. They base their argument on the Duhem-Quine thesis from the philosophy of science, which holds that hypotheses are never tested in isolation, since they are always part of a larger network of hypotheses. Borsboom and Mellenbergh state that it is therefore impossible to test the hypothesis that psychological attributes are quantitative on its own. In this context, they argue that one should distinguish between classical test theory (CTT; Lord & Novick, 1968) and item response theory (IRT; Hambleton & Swaminathan, 1985). With their comment, Borsboom and Mellenbergh acknowledge the importance of Michell’s critique of psychometrics. They also stress that the failure to test the assumption of quantitative psychological attributes cannot be attributed to the imputed ignorance of psychometricians, but to a restriction that applies to all hypothesis testing: one cannot isolate a hypothesis. Furthermore, they make clear that some of Michell’s claims deserve attention in much psychological research: “…often, item scores are simply summed and declared to be measurement of an attribute, without any attempt being made to justify this conclusion”. As we conclude every week of this course: it is all about                  a w a r e n e s s.

Human Factors in Science: what is left?

When learning about human factors in science, one could start questioning the superiority ascribed to the institution of science. As commented on in the last post, discovered ‘facts’ about reality depend on culture, time, methodology and the overall paradigm within which we think. But the empirical method itself is not flawless either: it is invented and conducted by human beings. And human beings do not function like machines running fixed algorithms; they all have a limited apperception that is prone to skill-, rule- and knowledge-based errors. But what does this mean for the scientific institution and its ideals?

In one of the articles we read (Mahoney, 1979), the author discusses the psychology of the scientist and the ways in which it influences science. After disillusioning his readers about the professed scientific ideals (e.g. objectivity, rationality, open-mindedness, honesty), he makes an interesting turn in his reasoning: historically, irrational associations and wild speculations have led to revolutionary new theories. So optimal functioning of scientists is not realistic. But what is? Is scientific progress only possible, or at least more creative, when we let go of the ideals mentioned above? Or should we be stricter about human pitfalls? Or do you even think the status that the institution of science claims for itself will never be reached?

In this context, I want to recommend an interesting essay to you: “Wissenschaft als Beruf” (“Science as a Vocation”) by Max Weber (1864-1920). I found an open-access English translation. I know everybody is very busy already, but maybe you can find a coffee break to read it. It’s worth it!


Borsboom, D., & Mellenbergh, G. J. (2004). Why psychometrics is not pathological: A comment on Michell. Theory & Psychology, 14(1), 105-120.

Mahoney, M. J. (1979). Psychology of the scientist: An evaluative review. Social Studies of Science, 9(3), 349-375.

Michell, J. (2000). Normal science, pathological science and psychometrics. Theory & Psychology, 10(5), 639-667.