Decision Errors
After you have computed a test statistic, a decision must be made: you must either reject or fail to reject the null hypothesis. This part is really easy. You compare the computed probability with your preset alpha level (you did set an alpha level before conducting your test, didn't you?). If the probability is less than alpha (say, less than .05), you reject the null hypothesis. If the computed probability is greater than alpha, you fail to reject the null hypothesis. What makes this process a bit complicated is understanding your chances of being wrong, or making what statistics folks call a decision error.
Alpha is just a probability threshold that the researcher sets for rejecting the null hypothesis. The alpha level is set by the researcher at the very beginning of the data analysis process, before any test statistics are computed. Some researchers in the social sciences set the value at .05; others set it at the more demanding .01 level. If the probability of your observed relationship is less than the set alpha level (e.g., .01 or .05), then you will reject the null hypothesis.
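To make the decision rule concrete, here is a minimal sketch in Python. The numbers are made up for illustration (an alpha of .05 and a computed p-value of .03); nothing about this code comes from the text itself.

```python
# Decision rule: compare the computed probability to the preset alpha level.
alpha = 0.05    # threshold chosen before the analysis (illustrative)
p_value = 0.03  # probability computed from the test statistic (illustrative)

if p_value < alpha:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```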
When it comes to making a decision regarding a null hypothesis, there are only four possible outcomes. If the null hypothesis is true (in the population) and you fail to reject it, you have made a correct decision. If the null hypothesis is false and you reject it, you have made a correct decision. The other two outcomes are incorrect decisions; in either case, you have made a decision error. Researchers have named these Type I errors and Type II errors.
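One way to see the four outcomes at a glance is as a simple classification of truth-and-decision pairs. The sketch below only illustrates the logic described above; the function name and arguments are invented for this example.

```python
def classify_decision(null_is_true, rejected_null):
    """Return which of the four possible outcomes a decision represents."""
    if null_is_true and rejected_null:
        return "Type I error (false positive)"
    if not null_is_true and not rejected_null:
        return "Type II error (false negative)"
    return "correct decision"

print(classify_decision(null_is_true=True, rejected_null=True))    # Type I error
print(classify_decision(null_is_true=False, rejected_null=False))  # Type II error
```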
Alpha (α) is the probability threshold set by the researcher for rejecting the null hypothesis.
A Type I Error is also known as a false positive because we reject the null hypothesis when it is actually true. Thus, we have found a “positive” result when we should not have. Alpha (α) is the probability of a Type I Error. When we test at the .05 alpha level, we are accepting a 5% chance of making a Type I Error if the null hypothesis is in fact true. When we test at the more demanding .01 level, we are accepting only a 1% chance of making a Type I Error.
A Type I error occurs when a researcher rejects a null hypothesis when it is true for the population.
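If you want to convince yourself that alpha really is the Type I error rate, a quick simulation helps: draw many samples from a population in which the null hypothesis is true and count how often a test at the .05 level rejects it anyway. The sketch below is illustrative only; it hand-codes a one-sample t test of the null hypothesis that the population mean is 0, using the approximate two-tailed critical value for 29 degrees of freedom.

```python
import math
import random
import statistics

random.seed(1)
n, trials = 30, 10_000
false_positives = 0

for _ in range(trials):
    # The null hypothesis (population mean = 0) is TRUE for these data.
    sample = [random.gauss(0, 1) for _ in range(n)]
    se = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
    t = statistics.mean(sample) / se               # one-sample t statistic
    if abs(t) > 2.045:                             # two-tailed critical value, alpha = .05, df = 29
        false_positives += 1

print(false_positives / trials)  # should land close to .05
```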
A Type II Error, also known as a false negative, occurs when the null hypothesis is, in reality, false (we should reject it) but we fail to do so based on our test results. Beta (β) is the Type II Error rate. This is closely related to the power of a statistical test, which is defined as 1 – β.
A Type II error occurs when a researcher fails to reject the null hypothesis when it should be rejected.
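Beta and power can be estimated the same way: simulate a population in which the null hypothesis is false and count how often the test fails to detect the effect. Everything in this sketch is an illustrative assumption (a true mean of 0.5, a sample size of 30, a .05 alpha level); it is not drawn from the text.

```python
import math
import random
import statistics

random.seed(1)
n, trials = 30, 10_000
true_mean = 0.5   # the null hypothesis (mean = 0) is FALSE in this population
misses = 0        # failures to reject = Type II errors

for _ in range(trials):
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    se = statistics.stdev(sample) / math.sqrt(n)
    t = statistics.mean(sample) / se
    if abs(t) <= 2.045:   # two-tailed critical value, alpha = .05, df = 29
        misses += 1

beta = misses / trials
print("beta (Type II error rate):", beta)
print("power (1 - beta):", 1 - beta)
```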
Key Terms
Hypothesis, Sample, Population, Generalization, Inference, Test Statistic, Research Hypothesis, Null Hypothesis, p-value, Alpha Level, Type I Error, Type II Error, Power, Assumptions, One-tailed Test, Two-tailed Test
Important Symbols
α (alpha), β (beta), p