### Section 6.2: t-Tests

In experimental research, the researcher will often have an experimental group and a control group. The *t*-test allows the researcher to test hypotheses about the difference between the means of these two groups. Ultimately, as with most other statistical significance tests, a *t*-test yields a value of *p*, which indicates the probability of obtaining a difference between the means at least as large as the one observed if random sampling error alone, and not the treatment, were responsible.

If the value of *p* is less than the established alpha level, the researcher rejects the null hypothesis. The lower the probability, the more confidence the researcher can have in rejecting the null. For a *t*-test, the null hypothesis simply states that there is no difference between the two population means.
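As a rough illustration of this decision rule, the sketch below (using invented scores and only the Python standard library) computes a pooled two-sample *t* statistic and compares it against the two-tailed critical value for alpha = .05. The group data and variable names are hypothetical; in practice a library such as SciPy would also return the exact *p* value.

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled (equal-variance) independent two-sample t statistic."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

experimental = [5, 6, 7, 8, 9]   # hypothetical treatment-group scores
control      = [1, 2, 3, 4, 5]   # hypothetical control-group scores

t, df = two_sample_t(experimental, control)
T_CRIT_05 = 2.306  # two-tailed critical value of t for alpha = .05, df = 8
print(f"t({df}) = {t:.2f}; reject H0: {abs(t) > T_CRIT_05}")
# → t(8) = 4.00; reject H0: True
```

Because |*t*| exceeds the critical value, *p* < .05 and the null hypothesis of equal means is rejected for these (invented) data.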

A common way of presenting the results of several *t*-tests is to report them together in a table. Such tables usually contain the frequency, the mean, the standard deviation, and the value of *t* for each comparison. The probability of *t* is usually provided in the form of footnotes. Often, one asterisk denotes that the mean differences are significant at the .05 level, and two asterisks signify that they are significant at the .01 level. This convention, however, is by no means universal, so inspect the footnote carefully to determine the level of significance each time you encounter such a table. A value of *t* presented without a footnote suggests that the difference between the means was not statistically significant.
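The asterisk convention can be sketched as a small helper. This assumes the common (but, as noted above, not universal) * = .05 / ** = .01 scheme, and the `results` values are invented for illustration:

```python
def sig_stars(p):
    """Map a p value to the common footnote convention:
    ** for p < .01, * for p < .05, nothing otherwise."""
    if p < .01:
        return "**"
    if p < .05:
        return "*"
    return ""

# hypothetical results: (comparison label, t value, p value)
results = [("Pretest", 1.42, .171), ("Posttest", 2.89, .009)]
for label, t, p in results:
    print(f"{label:<10} t = {t:.2f}{sig_stars(p)}")
# → Pretest    t = 1.42
# → Posttest   t = 2.89**
```

Here the unstarred pretest *t* signals a nonsignificant difference, while the double-starred posttest *t* is significant at the .01 level.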

Last Modified: 06/03/2021