After completing several statistics courses I lived under the illusion that I knew the ins and outs of what I took to be basic statistical analyses. During this course, however, I saw pitfalls in almost all of them and came to realize that applying statistical procedures is not as straightforward as I once thought. One of the most striking examples is the analysis of covariance (ANCOVA), a widely used procedure, seemingly a way to “control” for confounds. I was always impressed by this procedure, until I found out there is a lot more to it than just “controlling” for confounds.

The analysis of covariance (ANCOVA) was developed as an extension of the analysis of variance (ANOVA) to increase statistical power (Porter & Raudenbush, 1987). By including covariates, the variance associated with these covariates is “removed” from the dependent variable (Field, 2009). This way, from the manipulation point of view, the error variance in the dependent variable is reduced and hence statistical power increases (see Figure 1). Given that psychological research is often underpowered (Cohen, 1990), ANCOVA is an important statistical procedure for revealing psychological phenomena and effects.
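This power gain can be made concrete with a small simulation. The sketch below (my own illustration, not from any of the cited papers; all numbers are arbitrary assumptions) generates a randomized two-group experiment where the outcome depends on both the treatment and an independent covariate, then compares the leftover error variance when only group means are removed (the ANOVA view) with the error variance when the covariate is regressed out as well (the ANCOVA view):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n // 2)            # randomized assignment
covariate = rng.normal(size=n)               # independent of group
y = 0.5 * group + 1.0 * covariate + rng.normal(size=n)

# ANOVA-style error variance: residuals after removing group means only
means = np.array([y[group == 0].mean(), y[group == 1].mean()])
resid_anova = y - means[group]

# ANCOVA-style error variance: residuals after regressing y on group AND covariate
X = np.column_stack([np.ones(n), group, covariate])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_ancova = y - X @ beta

print(resid_anova.var(), resid_ancova.var())  # ANCOVA residual variance is smaller
```

Because the covariate explains real outcome variance that has nothing to do with the treatment, removing it shrinks the error term, which is exactly where the extra power comes from.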

This promising application of ANCOVA, however, only holds when there is no systematic relationship between the grouping variable and the covariate; that is, the groups must not differ on the covariate. This is an assumption that many researchers today fail to check. As a result, ANCOVA is widely misunderstood and misused (Miller & Chapman, 2001).

The importance of this assumption is illustrated in Figure 2: when group and covariate are related, removing the variance associated with the covariate alters the grouping variable itself. In other words, the variance of the grouping variable that remains after removing the variance associated with the covariate has poor construct validity, and the results are therefore uninterpretable.
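One way to see this "altering" is through the partialling interpretation of ANCOVA: the adjusted group effect is equivalent to regressing the outcome on the part of the grouping variable left over after the covariate has been regressed out of it. The sketch below (my own illustration with made-up numbers) shows that when group and covariate are related, this leftover variable is no longer the original group:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
group = np.repeat([0.0, 1.0], n // 2)
covariate = rng.normal(loc=group * 1.5, size=n)  # systematically related to group

# Partial the covariate out of the grouping variable (the variable that
# ANCOVA's adjusted group comparison effectively operates on)
X = np.column_stack([np.ones(n), covariate])
b, *_ = np.linalg.lstsq(X, group, rcond=None)
group_resid = group - X @ b

# Correlation between the original group and its residualized version
r = np.corrcoef(group, group_resid)[0, 1]
print(round(r, 2))
```

The correlation is well below 1: the residualized variable is only partially the original grouping variable, which is exactly the construct-validity problem described above. With an unrelated covariate, the correlation would stay near 1 and no such distortion occurs.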

The general point is that the legitimacy of ANCOVA depends on the relationship between the grouping variable and the covariate. ANCOVA is justified only when there is no systematic relationship between these variables.

On the one hand, it is quite straightforward to defend this judgement in a randomized experiment: given random assignment, individual characteristics are equally distributed across the groups, so group means on the covariate should not differ except by chance (see left panel of Figure 3). As a result, including a covariate in a randomized experiment increases statistical power. In this sense, ANCOVA is underutilized. On the other hand, when studying pre-existing groups (i.e., non-random assignment), individual characteristics are not evenly distributed across groups, and hence a relationship between group and covariate can exist. Including a covariate in a non-randomized study may therefore alter the grouping variable and lead to unfounded interpretations and conclusions. In this sense, ANCOVA is overutilized.
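The non-randomized case can be simulated as well. In the sketch below (my own hypothetical numbers, in the spirit of Lord's 1967 example), two pre-existing groups differ on the covariate, and the outcome is driven entirely by the covariate, so there is no treatment effect at all. The raw group difference and the covariate-adjusted group difference then give very different answers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Pre-existing groups: the covariate itself differs between the groups
group = np.repeat([0, 1], n // 2)
covariate = rng.normal(loc=group * 1.5, size=n)
y = 1.0 * covariate + rng.normal(size=n)  # outcome driven by the covariate only

# Raw group difference on the outcome
raw_diff = y[group == 1].mean() - y[group == 0].mean()

# ANCOVA-style adjusted group difference: group coefficient with covariate included
X = np.column_stack([np.ones(n), group, covariate])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted_diff = beta[1]

print(round(raw_diff, 2), round(adjusted_diff, 2))
```

The raw difference is large while the adjusted difference is near zero. Neither number is wrong as arithmetic, but they answer different questions, and the adjusted one no longer refers to the original groups: it refers to hypothetical groups equated on a covariate on which the real groups were never equal.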

It is worrisome that ANCOVA is applied more often in non-randomized experiments than in randomized experiments (Keselman et al., 1998). The idea is that researchers want to “control” for pre-existing differences. This idea is incorrect: there simply is no statistical way to “control” for these pre-existing differences. ANCOVA “removes” the variance that is associated with the covariate, but it does not “control” for the covariate.

We, as researchers, should acknowledge both what ANCOVA cannot do (it cannot “control” for pre-existing differences) and what it can do (it can increase statistical power). This way we should be able to eliminate the unfounded conclusions that result from its misapplication and, most importantly, make more of its real strength: increased statistical power. Used this way, ANCOVA can help reveal real psychological phenomena and effects.

For a good overview of this problem, consult Miller and Chapman (2001). The original paper introducing the problem gives a good example of how the inclusion of a covariate can lead to incorrect conclusions (Lord, 1967).