When testing a hypothesis with ANOVA models, we end up with an F statistic, which is a "signal-to-noise ratio." Most of the noise is caused by individual differences among participants in the study. Logically, if we can separate the variance caused by those random individual differences from the variance caused by our experimental treatment, we have a much more powerful statistical test. To continue the analogy, when we eliminate noise, the signal gets stronger.
Analysis of Covariance (ANCOVA) is an extension of ANOVA that provides a way of statistically controlling the (linear) effect of variables you do not want to examine in a study. In other words, such a variable is thought to confound your results, and you want to rid your analysis of its effects. These extraneous variables are called covariates, or control variables. ANCOVA allows you to remove covariates from the list of possible explanations of the variance in the dependent variable. It does this by using mathematical methods rather than experimental methods (e.g., good experimental design) to control extraneous variables.
ANCOVA is used in experimental studies when researchers want to remove the effects of some antecedent variable, such as preexisting conditions. For example, pretest scores are often used as covariates in pretest-posttest experimental designs (as is the analysis of gain or difference scores). ANCOVA is also used in non-experimental research, such as with surveys and nonrandom samples, or in quasi-experiments when subjects cannot be assigned randomly to control and experimental groups.
The basic ANCOVA model requires that these covariates be measured on at least the interval scale. If the potential nuisance variable is categorical, a multi-way ANOVA can be used instead, treating it as a separate factor. When the assumptions of ANCOVA are met, the method improves the power of the subsequent test by removing systematic variance among subjects, subtracting it from the within-groups error term. Note that the increase in power occurs only if the covariate is actually correlated with the dependent variable. If the correlation is very weak or does not exist at all, then "throwing in" the covariate serves to reduce the power of the analysis.
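The power cost of an uninformative covariate shows up directly in the degrees of freedom: the error sum of squares barely shrinks, but one error degree of freedom is spent estimating the covariate's slope, so the error mean square grows and F falls. A minimal sketch, using made-up sums of squares (all numbers hypothetical):

```python
# Hypothetical summary values: 2 groups, 20 subjects total.
ss_effect = 30.0   # between-groups sum of squares
ss_error = 90.0    # within-groups (error) sum of squares
n, k = 20, 2

# Plain ANOVA: error df = n - k
f_plain = (ss_effect / (k - 1)) / (ss_error / (n - k))

# ANCOVA with a covariate uncorrelated with the DV: the error SS is
# essentially unchanged, but one df is lost to the slope estimate,
# inflating the error mean square and shrinking F.
f_useless_cov = (ss_effect / (k - 1)) / (ss_error / (n - k - 1))

print(f_plain, f_useless_cov)  # the second F is smaller
```

With a genuinely correlated covariate, the drop in the error sum of squares more than repays the lost degree of freedom, which is the trade-off described above.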
Computationally, a regression coefficient is estimated to determine how much variance in the DV is predictable from the covariate; once that amount of variance is known, it can be subtracted. This increases the signal-to-noise (F) ratio by reducing the noise in the denominator. The process starts just as a regular ANOVA would, but a sum of squares adjusted for the covariate is computed for each component.
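The adjustment described above can be sketched in plain Python for a one-way design with a single covariate. The data, group labels, and variable names below are invented for illustration; the computation follows the classical textbook route of adjusting the within-groups and total sums of squares by the regression on the covariate:

```python
# Hypothetical data: covariate x (e.g., a pretest) and DV y per group.
groups = {
    "control":   {"x": [1, 2, 3, 4, 5], "y": [2.1, 4.0, 6.2, 7.9, 10.1]},
    "treatment": {"x": [1, 2, 3, 4, 5], "y": [5.0, 7.1, 8.9, 11.0, 13.1]},
}

def mean(v):
    return sum(v) / len(v)

def ss(v):  # sum of squared deviations from the mean
    m = mean(v)
    return sum((vi - m) ** 2 for vi in v)

def sp(a, b):  # sum of cross-products of deviations
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))

all_x = [xi for g in groups.values() for xi in g["x"]]
all_y = [yi for g in groups.values() for yi in g["y"]]
k, n = len(groups), len(all_y)

# Ordinary one-way ANOVA on y, for comparison.
ss_total = ss(all_y)
ss_within = sum(ss(g["y"]) for g in groups.values())
ss_between = ss_total - ss_within
f_anova = (ss_between / (k - 1)) / (ss_within / (n - k))

# ANCOVA: pooled within-group slope of y on x, then adjusted SS.
sp_within = sum(sp(g["x"], g["y"]) for g in groups.values())
ssx_within = sum(ss(g["x"]) for g in groups.values())
b_within = sp_within / ssx_within                  # regression coefficient

ss_within_adj = ss_within - sp_within ** 2 / ssx_within        # df = n-k-1
ss_total_adj = ss_total - sp(all_x, all_y) ** 2 / ss(all_x)    # df = n-2
ss_between_adj = ss_total_adj - ss_within_adj                  # df = k-1
f_ancova = (ss_between_adj / (k - 1)) / (ss_within_adj / (n - k - 1))

print(f_anova, f_ancova)  # the adjusted F is far larger here
```

Because the covariate in this toy example predicts most of the within-group variance in y, the adjusted error term is tiny and the F ratio grows dramatically, which is exactly the noise-reduction mechanism the paragraph describes.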
Analysis of Covariance (ANCOVA), covariate, statistical control, control variable
Last Modified: 02/14/2019