A meta-analysis of the effect of coaching on teachers' practice and students' outcomes caused a bit of a stir recently. Pooling 37 studies, it found that coaching was an effective method of professional development for teachers.
It is worth noting, however, that coaching was most effective in small-scale programmes: in large-scale studies involving more than 100 teachers, it was only half as effective.
Matthew Kraft, associate professor of education and economics at Brown University, aggregated the results. He found that – and this bit is techy – coaching has a pooled effect size of +0.57 standard deviations (SD) on instruction and +0.11 SD on achievement.
To really get to grips with this, you need to understand what an effect size is, and what counts as a big one. It's hard to give an exact figure, but the Education Endowment Foundation's DIY Evaluation Guide offers some guidance on interpreting effect sizes. Its bands correspond to key stage 1 progress, so it isn't a strict comparison, but it will give you some idea of the impact of coaching in the classroom. They are:
- -0.01 to 0.18 is a low effect size, equating to 0-2 months' progress
- 0.19 to 0.44 is a moderate effect size, equating to 3-5 months' progress
- 0.45 to 0.69 is a high effect size, equating to 6-9 months' progress
- 0.70 or more is a very high effect size, equating to 9-12 months' progress.
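For the techy-minded, the banding above can be expressed as a simple lookup. This is just a minimal sketch restating the EEF bands and applying them to the pooled figures reported for coaching; the function name and threshold handling are illustrative, not part of any official EEF tool:

```python
def eef_band(effect_size):
    """Map a standardised effect size (in SD) to the EEF DIY
    Evaluation Guide bands listed above."""
    if effect_size < -0.01:
        return ("below the low band", "no measurable progress")
    if effect_size <= 0.18:
        return ("low", "0-2 months' progress")
    if effect_size <= 0.44:
        return ("moderate", "3-5 months' progress")
    if effect_size <= 0.69:
        return ("high", "6-9 months' progress")
    return ("very high", "9-12 months' progress")

# Kraft's pooled results from the meta-analysis:
print(eef_band(0.57))  # effect on instruction  -> ('high', "6-9 months' progress")
print(eef_band(0.11))  # effect on achievement  -> ('low', "0-2 months' progress")
```

Run on Kraft's figures, the +0.57 SD effect on instruction lands in the high band, while the +0.11 SD effect on achievement lands in the low band – a gap worth bearing in mind as we go on.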
With coaching having such a healthy impact on instruction, we might be tempted to use these findings to justify an increased focus on coaching as a form of teacher professional development. You'd be forgiven for thinking it's a silver bullet for school improvement.
As with all these things, however, it's not quite that simple. Professor Steve Higgins of the University of Durham had a look to see if the paper stood up to scrutiny. Here's what he found:
- It is a fairly robust meta-analysis – the techniques are sound and applied well.
- Overall it looks like Bananarama again – it ain't what you do, it's the way that you do it: "the quality and focus of coaching may be more important than the actual number of contact hours".
- The results show that a change in instruction is associated with a much smaller change in achievement. That looks pretty inefficient – a big change in behaviour for what looks like a small gain in achievement – which suggests some of the changes brought about by coaching may be unnecessary, or even misguided.
- The results are unlikely to be down to chance for reading, but they fall within the margin of error for maths and science.
- When thinking about the overall benefit, the issue is cost-effectiveness. How expensive is coaching, per student, compared with one-to-one tuition? A lot cheaper, I expect, so if you can get a reliable impact on student outcomes it is worth doing (particularly as the effects may carry over to other groups of students if the teacher's effectiveness has improved).
- The search strategy could have built in publication bias. Expert informant recommendations are fine in themselves, but the researchers needed to compensate for their selectivity with a systematic search that could find those studies (and any others meeting the same criteria). Experts are likely to be pro-coaching and to recommend (or remember) successful evidence – a bias that would inflate the results.
- Finally, at least there is evidence of impact on student outcomes – coaching can work.
The impact for school research leads
In some ways, the implications of these findings about coaching depend upon your setting and context. If you are a school research lead within a multi-academy trust (MAT), where interventions are adopted across the whole group, the results of any coaching intervention are likely to be significantly smaller – remember, effects roughly halved in studies involving more than 100 teachers.
Any intervention must also be seen in terms of the "opportunity cost". What value would you have got from that resource if you had done something else? It's not just immediate benefits, but long-term benefits, costs and any negative unintended consequences, such as attention cost.
These findings show that coaching is no more of a magic bullet than the average intervention, indeed, it may have less than average effect (and it may be a lot more expensive).
Finally, take time to take in different opinions on the topic so you can really test your professional thinking. This should be informed by reading research briefs, but also by your own and your colleagues' experience, your interactions with each other and with opinion leaders, researchers and other sources of knowledge. Modern technology and social media allow you to contact experts outside your own setting – most will be grateful that you have shown an interest in their work and, more often than not, hugely generous with their time and expertise.