Variance Partitioning

Fundamentals of Social Statistics by Adam J. McKee

If you were to ask me for the big idea behind all inferential statistics, my answer would be “to partition variance.”  We know that correlation and regression techniques can be used to describe relationships and relatedness.  Rarely, however, are those things the real goals of the social scientific endeavor.  The goal of most science is to explain social phenomena.  In this search for an explanation, we often attempt to identify variables (IVs) that affect the phenomenon we are trying to explain (the DV), and we also want to understand their relative importance.  This quest boils down to two basic and related methods.  The first is what we’ll call variance partitioning.  The other (which we’ll consider in a later section) is the analysis of effects.

Recall that regression analysis can be used as an analytical tool in experimental, quasi-experimental, and descriptive research.   The interpretation of regression results is far easier in experimental research because of the magic of random assignment.   Still, regression methods can be used with those other types of research so long as due caution is used.  I leave you with Elazar Pedhazur’s (1997) sage advice in this regard:  “Sound thinking within a theoretical frame of reference, and a clear understanding of the analytic methods used are probably the best safeguard against drawing unwarranted, illogical, or nonsensical conclusions” (p. 242).

Recall that R² (the coefficient of determination) can be interpreted as the proportion of variance in Y that is explained by knowing the value of a predictor X, or a set of predictors (X1, X2, X3, and so forth).  When we refer to “variance partitioning” we are really just talking about determining the proportion of R² that can be attributed to a particular X (or set of X values).  In the simplest case where only one X is used to predict Y, this work is done for us.  There is only one IV, so we can attribute all of R² to that variable.
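
To make this concrete, here is a minimal sketch in Python (using NumPy and simulated data of my own, not anything from the text) showing that with a single predictor, R² is simply the squared zero-order correlation between X and Y.

```python
import numpy as np

# Simulated data: Y depends on X plus random noise.
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]                   # zero-order (Pearson) correlation

# Fit Y = b0 + b1*X by ordinary least squares and compute R-squared directly.
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(round(r ** 2, 4), round(r_squared, 4))  # the two numbers match
```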

Understanding variance partitioning in regression requires that we master some new vocabulary.  You will encounter these terms frequently in the literature, so learning them is a very good idea.  The first term we want to consider is the zero-order correlation.  The zero-order correlation is the correlation of a particular X with Y with no other X values taken into account.  You already know all about these: they are the same thing as Pearson’s r.
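
As a quick illustration (again with made-up data), the zero-order correlations are nothing more than the ordinary Pearson’s r of each predictor with Y, computed one predictor at a time while the others are ignored.

```python
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(size=200)   # x2 deliberately overlaps with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=200)

# Each zero-order correlation ignores the other predictor entirely.
for name, x in (("X1", x1), ("X2", x2)):
    r = np.corrcoef(x, y)[0, 1]
    print(f"zero-order r({name}, Y) = {r:.3f}")
```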

One way to think of how we get R² in multiple regression (two or more IVs) is to add up the squared zero-order correlations.  In other words, we could compute a Pearson’s r for each X, square those values, and then add them all up.  That method will actually work, provided that none of the X values are correlated with one another.  If the predictors do share variance with each other and with Y, then R² must be reduced because the overlapping covariance can’t be counted twice.  If we do allow that same variance to be counted twice (or more), we can end up with a highly inflated R² that vastly overestimates the predictive power of our model.  We can also end up with a nonsensical result, such as explaining 125% of the variance in Y by knowing X1, X2, and X3.  Logic dictates that we can’t explain more than 100% (R² = 1.0) of the variance in Y.
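
A short simulation can show the problem.  In the hypothetical data below, the three predictors are deliberately intercorrelated, so naively summing their squared zero-order correlations “explains” well over 100% of the variance in Y, while the R² from an actual multiple regression stays below 1.0.  (The r_squared helper is just an ordinary least-squares fit of my own, not any particular package’s routine.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.3 * rng.normal(size=n)   # X1, X2, and X3 share variance
x3 = 0.8 * x1 + 0.3 * rng.normal(size=n)
y = x1 + x2 + x3 + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

naive = sum(np.corrcoef(x, y)[0, 1] ** 2 for x in (x1, x2, x3))
actual = r_squared(np.column_stack([x1, x2, x3]), y)

print(f"sum of squared zero-order r's: {naive:.2f}")   # exceeds 1.0 here
print(f"R-squared from the full model: {actual:.2f}")  # never exceeds 1.0
```

The exact numbers will vary with the simulated data, but the pattern holds whenever the predictors overlap: the naive sum double-counts the shared variance.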

Note that when we partition variance by entering predictors into a regression model one at a time, the order of entry is of critical importance.  This is because the first X that we enter into the model gets to “claim” all of the variance it shares with Y.  In other words, if we enter X1 first (which would be a logical default), we begin computing R² with the squared zero-order correlation of X1 with Y.  For subsequent variables, the squared semi-partial correlations are used.
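
The sketch below (simulated data again) mimics this kind of sequential entry: X1 is credited with its full squared zero-order correlation, and X2 is credited only with the increase in R² it adds after X1, which is its squared semi-partial correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # X1 and X2 are intercorrelated
y = x1 + x2 + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared(x1, y)                         # step 1: X1 entered alone
r2_step2 = r_squared(np.column_stack([x1, x2]), y)  # step 2: X2 added after X1

print(f"credited to X1 (squared zero-order r): {r2_step1:.3f}")
print(f"credited to X2 (squared semi-partial): {r2_step2 - r2_step1:.3f}")
```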

The takeaway from this is that when the IVs are at all intercorrelated, the proportion of variance attributed to each variable depends on its order of entry into the model.  It is a mistake, then, to assume that one variable (especially the first) is more strongly associated with Y based on its contribution to R².  We can, however, be sure of the total R², because it does not change regardless of the order of entry of our predictor variables.  The relative contribution of each X can change dramatically.  Unfortunately, there is no mathematical way to sort all this out.  Some authors have gone so far as to dismiss the idea of an “independent contribution to variance” as meaningless when the predictors are intercorrelated.  This has done little to remove the questionable practice from researchers’ toolboxes, and thus from the professional literature.  Always interpret an author’s discussion of the results of a variance partitioning exercise with extreme caution.
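
One last simulated example illustrates both points at once: reversing the order of entry reshuffles how much credit each predictor receives, yet the total R² is identical either way.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)   # intercorrelated predictors
y = x1 + x2 + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

total = r_squared(np.column_stack([x1, x2]), y)   # same regardless of order

# Order A: enter X1 first, then X2.
x1_share_a = r_squared(x1, y)
x2_share_a = total - x1_share_a

# Order B: enter X2 first, then X1.
x2_share_b = r_squared(x2, y)
x1_share_b = total - x2_share_b

print(f"total R-squared (identical in both orders): {total:.3f}")
print(f"order A -> X1: {x1_share_a:.3f}, X2: {x2_share_a:.3f}")
print(f"order B -> X1: {x1_share_b:.3f}, X2: {x2_share_b:.3f}")
```

If you run the sketch, whichever predictor enters first looks more “important,” which is exactly why order-based attributions deserve skepticism.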

