p-values appear very often in professional journals. They may seem complicated, but the interpretation, at least, is relatively simple. Previously, we discussed the null hypothesis. We said that it is a statement that there is no relationship between your variables of interest—that there is no “effect” in the population (and that any observed relationship was produced by chance alone).
The next step is to ask how likely your observed result would be if there were, in fact, no relationship in the population. If the result you obtained would be unlikely under the null hypothesis, then you say that the results are “statistically significant.” Another way to express this is that if the result has a very low probability of occurring by chance alone, we are willing to reject the null hypothesis and say that the data support the alternate hypothesis. The symbol p stands for the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. Commonly we see a symbolic statement such as “p &lt; .05.” This tells the reader that, if there were truly no relationship in the population, a result this extreme would occur by chance less than 5% of the time. (Note that p is not the probability that the null hypothesis is true, nor the probability that the observed relationship was caused by chance—a common misreading.)
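The logic above can be illustrated with a small simulation. In this hypothetical example (the numbers are invented for illustration, not taken from the text), we observe 61 heads in 100 coin flips and ask whether the coin is fair. The null hypothesis is that P(heads) = 0.5; the estimated p-value is the proportion of simulated fair-coin experiments that produce a result at least as extreme as the one observed.

```python
import random

random.seed(42)  # make the simulation reproducible

# Hypothetical data: 61 heads in 100 flips. Null hypothesis: the coin is fair.
n_flips = 100
observed_heads = 61

# Simulate many experiments under the null hypothesis (a fair coin).
n_sims = 100_000
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # Two-tailed test: count outcomes at least as far from the expected
    # 50 heads as the observed result, in either direction.
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

# The p-value estimate: how often chance alone produces a result this extreme.
p_value = extreme / n_sims
print(f"estimated p-value: {p_value:.3f}")
if p_value < 0.05:
    print("statistically significant at the .05 level -> reject the null hypothesis")
else:
    print("not statistically significant at the .05 level -> fail to reject")
```

Here the simulated p-value comes out below .05, so under the conventional cutoff we would reject the null hypothesis; with a less extreme result (say, 55 heads), the same code would report a much larger p-value and we would fail to reject.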
Note that in the traditional language of hypothesis testing, the lower the probability, the higher the level of statistical significance.
Hypothesis, Sample, Population, Generalization, Inference, Confidence Interval, Test Statistic, Research Hypothesis, Null Hypothesis, p-values, Alpha Level, Type I Error, Type II Error, Power, Assumptions, One-tailed Test, Two-tailed Test
Last Modified: 02/18/2019