# Advanced Statistical Analysis:

*A Primer*

*Adam J. McKee, Ph.D.*

*This content is released as a draft version for comment by the scholarly community. Please do not distribute as is.*

## Variance Partitioning

If you asked me for the big idea behind all inferential statistics, my answer would be “to partition variance.” We know that correlation and regression techniques can be used to describe relationships and relatedness. Rarely, however, are those things the real goals of the social scientific endeavor. The goal of most science is to *explain social phenomena*. In this search for explanation, we often attempt to identify variables (IVs) that affect the phenomenon we are trying to explain (the DV), but we also want to understand their relative importance. This quest boils down to two basic and related methods. The first is what we’ll call variance partitioning. The other (which we’ll consider in a later section) is the analysis of effects.

Recall that regression analysis can be used as an analytical tool in experimental, quasi-experimental, and descriptive research. The interpretation of regression results is far easier in experimental research because of the magic of random assignment. Still, regression methods can be used with those other types of research so long as due caution is used. I leave you with Elazar Pedhazur’s (1997) sage advice in this regard: “*Sound thinking within a theoretical frame of reference and a clear understanding of the analytic methods used are probably the best safeguard against drawing unwarranted, illogical, or nonsensical conclusions*” (p. 242).

Recall that R^{2} (the coefficient of determination) can be interpreted as the proportion of variance in Y that is explained by knowing the value of a predictor X, or a set of predictors (X_{1}, X_{2}, X_{3}, and so forth). When we refer to “variance partitioning” we are really just talking about determining a proportion of R^{2} that can be attributed to a *particular* X (or set of X values). In the simplest case where only one X is used to predict Y, this work is done for us. There is only one IV, so we can attribute all of R^{2} to that variable.
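As a concrete illustration of this simplest case, here is a minimal NumPy sketch (the simulated data and the `r_squared` helper are my own, not from the text) showing that with a single predictor, R^{2} from the regression is exactly the squared Pearson correlation of X with Y:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)  # Y depends on X plus random noise

def r_squared(x, y):
    """Proportion of variance in y explained by an OLS model with intercept."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2 = r_squared(x, y)
r = np.corrcoef(x, y)[0, 1]  # Pearson's r between x and y
print(round(r2, 4), round(r**2, 4))  # identical: all of R^2 belongs to the lone X
```

With one IV there is nothing to partition: the model R^{2} and the squared correlation coincide.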

Understanding variance partitioning in regression requires that we master a new vocabulary. You will encounter these terms frequently in the literature, so learning them is a very good idea. The first term we want to consider is the **zero-order correlation**. The zero-order correlation is the correlation of a particular X with Y, with no other X values taken into account. You already know all about these: they are the same thing as Pearson’s r.
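To make the term concrete, here is a brief sketch (simulated data of my own invention) computing the zero-order correlation of each predictor with Y, ignoring the other predictor entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + 0.5 * x2 + rng.normal(size=n)  # x1 built to matter more than x2

# Zero-order correlations: each X with Y, no other X taken into account.
r_x1y = np.corrcoef(x1, y)[0, 1]
r_x2y = np.corrcoef(x2, y)[0, 1]
print(round(r_x1y, 3), round(r_x2y, 3))  # each is just an ordinary Pearson's r
```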

One way to think of how we get R^{2} in multiple regression (two or more IVs) is to add up the zero-order correlations. In other words, we could compute a Pearson’s r for each X, square those, and then add them all up. That method will actually work, provided that none of the X values are correlated with each other. If the predictors share variance with each other as well as with Y, then R^{2} must be reduced because the overlapping covariance can’t be counted twice. If we do allow that same variance to be counted twice (or more), we can potentially end up with a highly inflated R^{2} that vastly overestimates the predictive power of our model. We can also end up with a nonsensical result such as explaining 125% of the variance in Y by knowing X_{1}, X_{2}, and X_{3}. Logic dictates that we can’t explain more than 100% (R^{2} = 1.0) of the variance in Y.
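We can demonstrate both situations with a small simulation (a sketch with made-up data; the helper names are my own). When the predictors are uncorrelated, the sum of squared zero-order correlations matches the model R^{2}; when they overlap heavily, the naive sum double-counts shared variance and can exceed 1.0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def r_squared(cols, y):
    """R^2 for an OLS model with intercept and the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

def sum_sq_zero_order(cols, y):
    """Sum of squared zero-order (Pearson) correlations of each X with Y."""
    return sum(np.corrcoef(x, y)[0, 1] ** 2 for x in cols)

# Case 1: predictors essentially uncorrelated with each other
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y1 = x1 + x2 + rng.normal(size=n)
r2_ortho = r_squared([x1, x2], y1)
sum_ortho = sum_sq_zero_order([x1, x2], y1)

# Case 2: predictors strongly intercorrelated
x3 = x1 + 0.3 * rng.normal(size=n)   # x3 shares most of x1's variance
y2 = x1 + x3 + rng.normal(size=n)
r2_corr = r_squared([x1, x3], y2)
sum_corr = sum_sq_zero_order([x1, x3], y2)

print(round(r2_ortho, 3), round(sum_ortho, 3))  # roughly equal
print(round(r2_corr, 3), round(sum_corr, 3))    # naive sum exceeds 1.0
```

The second case is exactly the “explaining more than 100% of the variance” absurdity described above: the overlapping covariance gets counted twice.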

Note that when we are entering regression models into computer software, *the order of entry is of critical importance*. This is because the first X that we enter into the model gets to “claim” __all__ of the variance it shares with Y. In other words, if we enter X_{1} first (which would be a logical default), we begin computing R^{2} with the zero-order correlation of X_{1} with Y. For subsequent variables, the squared **semipartial correlations** are used. The takeaway from this is that when the IVs are at all intercorrelated, the proportion of variance attributed to each variable depends on its order of entry into the model. It is a mistake, then, to assume that one variable (especially the first) is more strongly associated with Y based on its contribution to R^{2}.
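A short sketch of this order-of-entry effect (simulated data and variable names are my own): each variable’s apparent “share” is computed as the increment to R^{2} when it enters, and that share changes dramatically when the entry order is reversed, even though the total R^{2} does not:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.7 * rng.normal(size=n)   # x2 is correlated with x1
y = x1 + x2 + rng.normal(size=n)

def r_squared(cols, y):
    """R^2 for an OLS model with intercept and the given predictors."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

full = r_squared([x1, x2], y)

# Enter x1 first: x1 claims its full zero-order r^2; x2 gets only the increment.
share_x1_first = r_squared([x1], y)
share_x2_second = full - share_x1_first

# Enter x2 first: the attribution changes, but the total R^2 does not.
share_x2_first = r_squared([x2], y)
share_x1_second = full - share_x2_first

print(round(share_x1_first, 3), round(share_x1_second, 3))  # x1's "share" shifts
print(round(share_x2_first, 3), round(share_x2_second, 3))
```

The increments always sum to the same full-model R^{2}; only the division of credit between the intercorrelated predictors moves.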

This all means that we can be sure of R^{2} because it does not change, regardless of the order of entry of our predictor variables. The relative contribution of each X, however, can change dramatically. Unfortunately, there is no purely mathematical way to sort all this out. Some authors have gone so far as to suggest that the idea of an “independent contribution to variance” is meaningless when the predictors are intercorrelated. This has done little to remove the questionable practice from researchers’ toolboxes, and thus from the professional literature. Always interpret an author’s discussion of the results of a regression partitioning exercise with extreme caution.

### Hierarchical Regression Analysis

One approach to variance partitioning is to use what has become known as **hierarchical regression analysis**, which has also been called *incremental partitioning of variance*. The idea is pretty simple: You dump all the variables in your study (except your variable of interest) into a regression model. You get R^{2} for that model. You then run the model again, adding your variable of interest. If your new variable contributes some new explanatory power, then R^{2} will rise. (Any time two models are compared, you can examine the change in R^{2}. This change is often denoted using the Greek letter delta: ΔR^{2}.) The logic is that such a rise indicates the “independent contribution” of that variable. Pedhazur (1997) warns against such an interpretation (p. 245). He does suggest, however, that such a method is perfectly fine when the researcher wants to examine the effect of one variable while controlling for the effects of the others (we’ll consider the idea of statistical control in a later section). Simply put, *incremental partitioning of variance is not a valid way of determining the relative importance of a variable*.
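The two-step procedure can be sketched as follows (a minimal illustration with simulated data; the `controls`/`focal` names and coefficients are my own invention, not from the text): fit the model with the background variables, fit it again with the variable of interest added, and take the difference in R^{2} as ΔR^{2}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 800
controls = rng.normal(size=(n, 2))                  # variables entered in step 1
focal = 0.5 * controls[:, 0] + rng.normal(size=n)   # variable of interest, correlated with a control
y = controls @ np.array([1.0, 0.5]) + 0.8 * focal + rng.normal(size=n)

def r_squared(X, y):
    """R^2 for an OLS model with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_reduced = r_squared(controls, y)                        # Step 1: controls only
r2_full = r_squared(np.column_stack([controls, focal]), y) # Step 2: add focal variable
delta_r2 = r2_full - r2_reduced                            # ΔR^2 for the focal variable
print(round(delta_r2, 3))
```

Per Pedhazur’s caution above, ΔR^{2} here is best read as the focal variable’s contribution *after controlling for* the other predictors, not as a measure of its relative importance.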

### References and Further Reading

Pedhazur, E. J. (1997). *Multiple regression in behavioral research: Explanation and prediction* (3^{rd} ed.). New York: Harcourt Brace.

File Created: 08/24/2018 Last Modified: 08/24/2018

This work is licensed under an **Open Educational Resource-Quality Master Source (OER-QMS) License**.
