For consumers of educational research, effect sizes play a key role in understanding which strategies and interventions are likely to have the biggest impact on the learning process and student achievement. In a recent article in Educational Researcher, Matthew Kraft replicated his earlier analysis with a larger data set of effect sizes to more realistically reflect what constitutes small, medium, and large effects of educational interventions. He argued for the need to reorient how we interpret effect-size benchmarks and, more generally, how we measure success in the education sector. Central to his approach is the recognition that many education interventions fail to produce substantial impacts on student outcomes; rather than being dismissed, these results should be integral to interpreting the policy relevance of effect-size benchmarks and to setting realistic expectations for what counts as meaningful impact.
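As a point of reference for readers less familiar with the metric (this framing is ours, not a detail spelled out in the passage above), the effect sizes at issue here are standardized mean differences: the gap between treatment and control group means expressed in standard-deviation units,

$$ d = \frac{\bar{x}_{T} - \bar{x}_{C}}{s_{\text{pooled}}} $$

so that a value of +0.10 means the average treated student outscored the average control student by a tenth of a standard deviation.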
While research designs and contexts differ in ways that make interpreting effect sizes across studies not entirely straightforward, effect sizes remain useful for synthesizing large amounts of data, discovering patterns, and making broader inferences. This analysis included 973 studies and 3,426 effect sizes and replicated the earlier finding that the effect-size distribution has a 30th percentile of +0.02, a 50th percentile of +0.10, and a 70th percentile of +0.21. Further breakdowns showed that 36% of effect sizes from standardized achievement measures in randomized controlled trials were smaller than +0.05. Anchoring effect-size benchmarks in the reality that a large share of interventions does not significantly increase student achievement contributes to a better understanding of realistic growth.
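To make the benchmarking logic concrete, here is a minimal sketch of how percentile-based benchmarks of this kind can be computed. The array of effect sizes is invented for illustration and is not Kraft’s data; only the cut-point percentiles (30th, 50th, 70th) and the +0.05 threshold mirror the figures reported above.

```python
import numpy as np

# Hypothetical vector of standardized effect sizes, one per study
# estimate (Kraft's actual data set contains 3,426 of them).
effect_sizes = np.array([-0.05, 0.00, 0.02, 0.04, 0.08, 0.10,
                         0.12, 0.15, 0.21, 0.30, 0.45])

# Empirical benchmarks: the 30th/50th/70th percentiles of the
# observed distribution, the same cut points reported in the
# article (+0.02, +0.10, +0.21 in Kraft's sample).
for p in (30, 50, 70):
    print(f"{p}th percentile: {np.percentile(effect_sizes, p):+.2f}")

# Share of effects below +0.05, analogous to the 36% figure for
# standardized achievement measures in randomized trials.
print(f"Share below +0.05: {np.mean(effect_sizes < 0.05):.0%}")
```

The point of the exercise is that "small" and "large" are defined relative to where an estimate falls in the observed distribution, rather than against fixed conventional thresholds.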
Kraft’s article underscores the need for a nuanced interpretation of effect-size benchmarks. These benchmarks are meant to help frame evidence-based policymaking and are intended to be coupled with information about statistical significance, with the understanding that both the size and the precision of effect-size estimates matter. However, a singular focus on the magnitude of effect sizes can lead the education community to overlook less flashy interventions that produce incremental improvements. Overall, understanding effect sizes across the educational research landscape supports a more informed interpretation of the impact of educational interventions, facilitating evidence-based decision-making in educational policy and practice.
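The interplay of size and precision can be illustrated with a short sketch. It uses the standard large-sample approximation for the standard error of a standardized mean difference; the sample sizes and the d_with_ci helper are hypothetical and not drawn from Kraft’s article.

```python
import math

def d_with_ci(d, n_treat, n_ctrl, z=1.96):
    """Return an approximate 95% confidence interval for a
    standardized mean difference, using the common large-sample
    standard-error formula for Cohen's d."""
    se = math.sqrt((n_treat + n_ctrl) / (n_treat * n_ctrl)
                   + d**2 / (2 * (n_treat + n_ctrl)))
    return d - z * se, d + z * se

# The same +0.10 point estimate is far more informative from a
# large trial than from a small one: precision matters.
print(d_with_ci(0.10, 2000, 2000))  # tight interval around +0.10
print(d_with_ci(0.10, 40, 40))      # wide interval spanning zero
```

An effect of +0.10 estimated on 4,000 students excludes zero and pins down the impact fairly tightly, while the identical point estimate from 80 students is compatible with anything from a meaningful loss to a large gain, which is exactly why magnitude alone is an insufficient basis for policy judgments.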