Many of my students have said that an effect size is a “measure of the effect.” This phrasing signals that they have missed the logic of computing an effect size: an effect size quantifies how large an effect is, not merely whether one exists. Any time we ask whether something we did experimentally or in practice changes a dependent variable, two questions naturally arise. The first is very basic and is answered by statistical hypothesis tests: Does this treatment really work? The second question, which is often glossed over in the professional literature, is equally important: Does this treatment work in a meaningful way? In other words, how well does it work?

To take an example from criminal justice, a small-town police executive is interested in reducing a specific type of crime: the manufacture of methamphetamine. Suppose this policy maker spends $25,000 per year on a new program designed to eliminate clandestine lab operations. At the end of a three-year trial, it is determined that the average number of meth labs is down from 100 to 97. A simple t-test tells the investigator that there is a “statistically significant difference” between the means. In practical terms, this means that the methamphetamine reduction strategy most likely works (the apparent reduction is unlikely to be due to sampling error).
Once this is known, another question arises, and it boils down to one of efficacy: the program works, but does it work well enough to keep funding it year after year? Most small towns would not continue to fund a policing initiative that costs $25,000 per year to eliminate a couple of labs when many, many labs remain. The townsfolk and civic leaders would want a more effective program; that is, one with a larger effect size.
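The most common standardized mean difference statistic is Cohen’s d: the difference between two means divided by their pooled standard deviation. A minimal sketch of the computation is below; the sample sizes and standard deviations are hypothetical, chosen only to illustrate the meth-lab example (the source gives only the means, 100 and 97).

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) for two independent groups."""
    # Pooled standard deviation: weighted average of the two group variances.
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical illustration: baseline mean of 100 labs vs. 97 after the
# program, assuming (for the sketch) sd = 15 and n = 36 in each group.
d = cohens_d(100, 15, 36, 97, 15, 36)
print(round(d, 2))  # 0.2
```

With these assumed numbers, d is about 0.2, conventionally labeled a “small” effect. This is exactly the situation described above: a difference can be statistically significant while the standardized effect remains too small to justify the program’s cost.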
Effect Size, Standardized Mean Difference Statistic
Last Modified: 02/18/2019