One way to examine variability is to consider how far each score deviates from the mean. On an individual basis, this makes intuitive sense. For example, suppose you have recently taken a statistics quiz. You find out that you scored 50% and feel absolutely awful about it. It may help to learn that the class average was 40%. While 50% is not a good score, it is 10 percentage points higher than the average score of your classmates. In this example, the difference between your score and the class average is known as a deviation score (D).
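A deviation score is just a subtraction. Here is a minimal sketch in Python using the quiz numbers from the example above (the variable names are our own):

```python
# Deviation score: how far an individual score falls from the group mean.
your_score = 50   # your quiz score (%)
class_mean = 40   # class average (%)

deviation = your_score - class_mean
print(deviation)  # 10: a positive deviation, 10 points above the mean
```

A positive deviation score means the score falls above the mean; a negative one means it falls below.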
If we compute a deviation score for every member of the class, we can examine how far each person’s score falls from the mean. The problem with examining deviation scores is that there are as many deviation scores as there are raw scores. As the size of the class (or sample) grows, it becomes harder to wrap our minds around what the data are telling us. We can summarize the deviation scores just as we did the raw scores: we can compute an average deviation score. The average of the deviations will provide us with a single number that summarizes the “spreadoutness” of all the scores.
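A short sketch with made-up quiz scores shows the deviation scores for a whole class, and also reveals why a plain average of them will not work: the positive and negative deviations cancel out, so their average is always zero. This is the mathematical wrinkle the next paragraph alludes to.

```python
# Made-up quiz scores for a small, hypothetical class.
scores = [50, 40, 30, 45, 35]

mean = sum(scores) / len(scores)          # 40.0
deviations = [s - mean for s in scores]   # [10.0, 0.0, -10.0, 5.0, -5.0]

# The positive and negative deviations cancel exactly,
# so the plain average of the deviations is always zero.
print(sum(deviations) / len(deviations))  # 0.0
```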
For mathematical reasons, we have to do a few extra steps to get where we want to be with this idea of computing an average deviation score. When we do this, the result is a statistic known as the standard deviation. This little statistic is so important that we will spend more time on it later on. For now, just remember that the standard deviation is simply the average distance of the individual scores from the mean of the group. In the next section, we will describe the standard deviation in detail, as well as a closely related statistic known as the variance.
Let’s review what we’ve said so far: The standard deviation is the square root of the variance, and is more commonly used than the variance since the variance is expressed in squared units. For example, the variance of a series of tuition prices is measured in squared dollars, which is nearly impossible to interpret. The corresponding standard deviation is measured in dollars, which is much easier to intuitively grasp.
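Python’s standard library computes both statistics directly. The sketch below uses invented tuition figures purely for illustration; the key point is that the variance comes out in squared dollars while the standard deviation is its square root, back in dollars:

```python
import statistics

# Hypothetical tuition prices, in dollars.
tuitions = [9000, 11000, 10000, 12000, 8000]

variance = statistics.pvariance(tuitions)  # in squared dollars -- hard to interpret
std_dev = statistics.pstdev(tuitions)      # in dollars: the square root of the variance

print(variance)  # 2000000
print(std_dev)   # about 1414.21
```

Here `pvariance` and `pstdev` treat the list as an entire population; `variance` and `stdev` are the sample versions.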
In statistics, especially when your audience is not composed of social scientists, you should present the most intuitive statistics possible.
Standard deviation and variance are typically superior to some other measures of dispersion, such as the range. The range is the difference between the largest and smallest values in a data set. The range suffers from the disadvantage that it is based on only two scores, so it does not measure the spread among the remaining values.
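The range is the simplest of these measures to compute. A brief sketch with invented data also shows its weakness: two data sets with very different spreads can share the same range, because only the two extreme values matter.

```python
# Hypothetical data set.
data = [3, 7, 2, 9, 4]

# The range uses only the two extreme values.
data_range = max(data) - min(data)
print(data_range)  # 7

# A very differently shaped data set with the same extremes
# has the same range, even though its middle values differ.
other = [2, 9, 9, 9, 9]
print(max(other) - min(other))  # 7
```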
Last Modified: 02/18/2019