In statistics, the validity of a test statistic depends on several assumptions being met. Assumptions are conditions that a statistical procedure takes for granted, and we must verify them whenever we use that procedure. Some statistics are robust against violations of certain assumptions, meaning that even when an assumption is violated, we can still have reasonable confidence in the results. As a consumer of research, it is important to understand that if the assumptions of a statistical test are violated, we cannot trust the statistical outcome or any conclusions the author draws from those results.
Assumptions are characteristics of the data that must be present for the results of a statistical test to be accurate.
All samples are randomly selected. Most statistical procedures cannot account for systematic bias; random selection eliminates such bias and improves the validity of inferences drawn from the test results.
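To make the mechanics of random selection concrete, here is a minimal sketch using Python's standard library; the population of 1,000 numbered participants and the sample size of 50 are invented for illustration.

```python
import random

# Hypothetical: a "population" of 1,000 numbered participants.
population = list(range(1000))

random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(population, k=50)  # sampling without replacement

# Every member had an equal chance of selection, so there is no
# systematic bias in who ends up in the sample.
print(len(sample))       # 50 participants selected
print(len(set(sample)))  # 50 distinct participants (no one chosen twice)
```

Because every participant has the same probability of being chosen, characteristics of the population (known or unknown) are spread across the sample by chance rather than by any systematic process.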
All samples are drawn from a normally distributed population. Most researchers do not fret over the normality requirement when comparing group means, because the effect of moderate non-normality on p-values is small. When a distribution of scores is non-normal because of an outlier, however, the problem can be important to consider: extreme scores strongly influence the mean, as well as the variability and the correlation. Recall what we said in previous sections about the effects of extreme scores on the mean. If an extreme score means that you should not use the mean, then a statistical test of mean differences makes no sense either.
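The effect of an outlier on the mean can be seen in a short sketch using Python's standard `statistics` module; the quiz scores below are invented for illustration.

```python
from statistics import mean, median

# Hypothetical quiz scores, then the same scores with one extreme value added.
scores = [78, 82, 85, 88, 90]
scores_with_outlier = scores + [20]

# The single outlier drags the mean down by more than ten points,
# while the median barely moves.
print(mean(scores))                 # 84.6
print(mean(scores_with_outlier))    # about 73.8
print(median(scores))               # 85
print(median(scores_with_outlier))  # 83.5
```

This is why an outlier that makes the mean untrustworthy also makes a test of mean differences untrustworthy: the test is built on the very statistic the outlier has distorted.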
All samples are independent of each other. This assumption means that there is no reason for the scores in Group A to be correlated with the scores in Group B. If you measure the same person more than once on the variable you are studying, that person’s scores will be correlated. Therefore, a pretest-posttest design violates the assumption of independent samples, and we must instead use statistical tests designed for correlated scores, often referred to as repeated measures tests. Random selection and random assignment to groups are usually considered sufficient to meet this assumption. Tests that carry this assumption are sometimes called “independent samples” tests.
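The correlation between repeated measures can be demonstrated with a small sketch; the pretest and posttest scores below are invented, and the Pearson correlation is computed from its definition using only the standard library.

```python
# Hypothetical pretest/posttest scores for the same five students.
# Because each pair of scores comes from one person, the two sets are
# correlated, and an independent-samples test would violate the
# independence assumption.
pretest = [60, 65, 70, 75, 80]
posttest = [68, 72, 78, 82, 90]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(pretest, posttest)
print(round(r, 3))  # close to 1: the two measures are strongly correlated
```

Students who score high at pretest tend to score high at posttest, so the two columns move together; a repeated measures test uses that correlation instead of being invalidated by it.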
All populations have a common variance. This assumption is often referred to as the homogeneity of variance requirement. It only applies to some test statistics; the most common test statistics that carry this assumption are the ANOVA family. Data that meet the requirement have a special name: homoscedastic (pronounced ‘hoe-moe-skee-dast-tic’). Data that violate this assumption (e.g., the two variances are not equal) are referred to as heteroscedastic. If the treatment group and the control group are about the same size (equal Ns), this assumption matters much less; unequal variances combined with widely different sample sizes, however, will distort your results.
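A quick informal check is to compare the sample variances of the two groups. The sketch below uses invented scores and a common rule of thumb (variance ratio greater than about 4 suggests trouble); it is a screening device, not a formal homogeneity test.

```python
from statistics import variance

# Hypothetical scores for a treatment group and a control group (equal Ns).
treatment = [12, 15, 14, 16, 13, 15]   # tightly clustered scores
control = [10, 18, 9, 20, 8, 19]       # widely spread scores

v_t = variance(treatment)  # sample variance of the treatment group
v_c = variance(control)    # sample variance of the control group

# Rule-of-thumb screen: if the larger variance is more than about four
# times the smaller, homogeneity of variance is questionable.
ratio = max(v_t, v_c) / min(v_t, v_c)
print(ratio > 4)  # True: these groups look heteroscedastic
```

Here the equal group sizes offer some protection, as noted above; the same variance ratio with one group several times larger than the other would be far more damaging to the results.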
Hypothesis, Sample, Population, Generalization, Inference, Test Statistic, Research Hypothesis, Null Hypothesis, p-value, Alpha Level, Type I Error, Type II Error, Power, Assumptions, One-tailed Test, Two-tailed Test
Last Modified: 02/07/2019