A significant ANOVA result tells us that at least one mean in the analysis differs significantly from at least one other. The problem with this type of analysis is that it does not tell us *which* mean differences are significant. Post hoc tests are a family of tests that allow us to compare every mean with every other mean to determine which ones are significantly different.

You may wonder why we do not just conduct a t-test for each possible mean comparison and see which ones are significant. The problem with this is what researchers call an **inflated Type I error rate**. Recall that a Type I error means rejecting the null hypothesis when it is true, and the probability of this is specified by our alpha level. Therefore, if the researcher sets alpha at .05, then a Type I error will happen about 5% of the time *on each test*. If we conduct 20 such tests, the chance that at least one of them produces a Type I error rises to roughly 64%. If we want our overall chance of a Type I error to remain at 5% for the entire analysis, we must shrink the alpha used for each individual comparison so that the family of comparisons shares the 5% risk, rather than each comparison carrying its own 5% chance of error.
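The inflation described above is easy to quantify: if each of *m* independent tests is run at a given alpha, the chance of at least one Type I error across the whole family is 1 − (1 − alpha)^m. A minimal sketch in Python (the function names here are illustrative, not from any statistics library):

```python
def familywise_error(alpha: float, m: int) -> float:
    """Probability of at least one Type I error across m independent tests."""
    return 1 - (1 - alpha) ** m

def bonferroni_alpha(alpha: float, m: int) -> float:
    """Per-comparison alpha that keeps the familywise rate at or below alpha."""
    return alpha / m

# One test at alpha = .05 carries a 5% Type I error risk ...
print(round(familywise_error(0.05, 1), 4))   # 0.05
# ... but 20 uncorrected tests carry roughly a 64% risk.
print(round(familywise_error(0.05, 20), 4))  # 0.6415
# A Bonferroni-style adjustment tests each comparison at .05 / 20 instead.
print(round(bonferroni_alpha(0.05, 20), 4))  # 0.0025
```

The Bonferroni division shown here is only one (conservative) way to share the risk; dedicated post hoc procedures such as Tukey's HSD make the same kind of adjustment more precisely.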

**Type I Error Rate**

A Type I error refers to a situation where a researcher rejects the null hypothesis when it is true.

The researcher's selected alpha level establishes the level of acceptable risk.

This may seem like a small matter until we consider just how many comparisons there may be. The number of comparisons goes up very quickly as the number of groups increases. For example, when there are only two means, there is only one comparison. When there are three groups, there are three possible comparisons. With four groups, there are six possible comparisons. With five groups, there are ten possible comparisons. To deal with the inflated Type I error rate caused by all of these comparisons, we must use a special significance test that takes the inflation into account. **Post hoc** tests do just that.
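The counts above follow a simple "handshake" formula: k groups produce k(k − 1)/2 unique pairwise comparisons. A short illustrative sketch:

```python
def num_comparisons(k: int) -> int:
    """Number of unique pairwise comparisons among k group means: k(k-1)/2."""
    return k * (k - 1) // 2

# Reproduces the counts from the text: 2 groups -> 1 comparison,
# 3 -> 3, 4 -> 6, 5 -> 10, and the growth keeps accelerating.
for k in range(2, 6):
    print(k, "groups ->", num_comparisons(k), "comparisons")
```

By ten groups there are already 45 comparisons, which is why the error inflation becomes serious so quickly.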

**Post hoc tests** are a family of related statistical procedures used to test the significance of individual mean differences after an ANOVA test result is found to be statistically significant.


Last Modified: 02/12/2019