# Confidence Intervals

There is an ongoing debate in research circles as to whether tests of statistical significance have any value. Most researchers still think they do, as evidenced by the massive number of hypothesis tests that continue to appear in professional journals. One argument against such tests is that they tend to create a false sense of precision in the mind of the reader, especially if the reader isn’t a researcher. When we compute a mean from sample data, for example, we know that there is some error in the estimate and that it will not perfectly reflect the true mean of the population. Presenting a “point estimate” like that by itself can be misleading. Many researchers therefore advocate reporting confidence intervals to combat the pernicious effects of false precision.

A confidence interval is a *range* of values that is expected to contain the value of a population parameter with a specified level of confidence (such as 90 percent, 95 percent, 99 percent, and so on). We can construct a confidence interval for a population mean by following three basic steps:

- Estimate the value of the population mean by calculating a sample mean.
- Calculate the lower limit of the confidence interval by subtracting a margin of error from the sample mean.
- Calculate the upper limit of the confidence interval by adding the same margin of error to the sample mean.
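The three steps above can be sketched in Python using only the standard library. This is a minimal illustration, assuming a large-enough sample that the normal approximation applies (the data values and the `confidence_interval` helper name are hypothetical, not from the text):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Return (lower, upper) limits of a confidence interval
    for the population mean, using the normal approximation."""
    m = mean(sample)                                 # step 1: sample mean
    se = stdev(sample) / sqrt(len(sample))           # standard error of the mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)   # e.g. ~1.96 for 95%
    margin = z * se                                  # margin of error
    return m - margin, m + margin                    # steps 2 and 3

data = [2, 4, 4, 4, 5, 5, 7, 9]
lo, hi = confidence_interval(data)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

With a sample this small, a t-distribution multiplier would widen the interval slightly and be more accurate; the normal multiplier is used here only to keep the sketch dependency-free.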

The margin of error depends on the size of the sample used to construct the interval, on whether the population standard deviation is known, and on the level of confidence chosen.

When we specify a certain percentage for a confidence interval, we are specifying that we expect the true value of the population mean to land within the confidence interval that many times out of 100. Remember, the mean of the *population* is what we really care about, not just a particular sample. The value of the mean for each sample drawn is an *estimate* of the population mean, and the sample mean will be slightly different each time a new sample is drawn. If we draw 100 random samples from a population and compute a mean and a 95 percent confidence interval for each sample, about 95 of the resulting confidence intervals will contain the true population mean. (You can demonstrate this with computer simulations if you have a lot of time on your hands.)
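The simulation alluded to above can be run in a few lines. This sketch draws repeated samples from a hypothetical normal population (mean 50, standard deviation 10 are invented for illustration), builds a 95 percent interval for each, and counts how often the interval captures the true mean; 1,000 samples are used rather than 100 so the coverage estimate is more stable:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(42)                    # fixed seed so the run is repeatable
Z = NormalDist().inv_cdf(0.975)    # ~1.96 multiplier for a 95% interval
POP_MEAN, POP_SD = 50, 10          # hypothetical population parameters

trials, hits = 1000, 0
for _ in range(trials):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(30)]
    m = mean(sample)
    margin = Z * stdev(sample) / sqrt(len(sample))
    if m - margin <= POP_MEAN <= m + margin:
        hits += 1          # this interval contains the true mean

coverage = hits / trials
print(f"coverage: {coverage:.1%}")
```

The printed coverage lands close to 95 percent, as the text predicts; it runs slightly below because the normal multiplier is a little narrow for samples of 30 (a t multiplier would close the gap).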

Last Modified: 02/18/2019