Friday, 5 May 2017

The school research lead and the evidence for coaching improving teaching and students' academic achievements

Recently a meta-analysis by Kraft, Blazar, and Hogan (2016) affirming the effectiveness of coaching as a developmental tool gained some prominence on Twitter (see @GeoffreyPetty).  By aggregating results across 37 studies, Kraft et al. (2016) found pooled effect sizes of +0.57 standard deviations (SD) on instruction and +0.11 SD on achievement, which they claim affirms the effectiveness of coaching as a development tool.  Nevertheless, they recognised that in large-scale trials involving more than 100 teachers, the effect size was half that found in small-scale studies.
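(For readers less familiar with the terminology: an effect size of this sort is simply a standardised mean difference, i.e. the gap between the coached and non-coached groups expressed in pooled standard deviation units. The sketch below, using entirely made-up numbers, shows the basic calculation; it is a generic Cohen's-d style illustration, not the estimator Kraft et al. actually used.)

```python
# Illustrative only: a generic standardised mean difference (Cohen's d),
# not the exact estimator used by Kraft et al. (2016).
# All the numbers below are made up purely for illustration.
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical instruction-quality ratings for coached and non-coached teachers
coached     = [3.2, 2.8, 3.6, 2.6, 3.4, 3.0]
not_coached = [3.0, 2.6, 3.4, 2.4, 3.2, 2.8]
print(f"effect size ≈ {cohens_d(coached, not_coached):+.2f} SD")  # ≈ +0.53 SD here
```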

Now the temptation is to say that these findings justify an increased focus on coaching as a form of teacher professional development.  Indeed, it might even be claimed that coaching is the 'magic bullet' of school improvement.  However, as with all these things, it’s never quite that simple.  So given that the ‘statistics’ in Kraft et al. (2016) are ‘above my pay-grade’, I took my own advice (see School Research Leads need ‘mindlines’ not guidelines) and contacted @profstig (Professor Steve Higgins of the University of Durham) to see if the paper stood up to informed scrutiny.  So what did Higgins (2017) say?

One, it is a fairly robust meta-analysis. The techniques are sound and applied well.

Two, overall it looks like Bananarama again: “the quality and focus of coaching may be more important than the actual number of contact hours”. The drop in impact from efficacy to effectiveness trials (0.17 to 0.08) as the intervention is scaled up also reinforces this message.

Three, “a one SD (i.e. ES = 1.0) change in instruction is associated with a 0.15 SD change in achievement”. That looks pretty inefficient to me: a big change in behaviour for what looks like a small gain in achievement. Some of the changes brought about by coaching may not be necessary, or may be incorrect!
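A quick back-of-envelope check (assuming a simple linear link between the two estimates, which is my simplification rather than anything in the paper) shows why the pooled achievement effect ends up roughly where it does:

```python
# Back-of-envelope translation (my assumption of a simple linear link,
# not Kraft et al.'s model): how much achievement gain would the
# instruction effect imply if 1 SD of instruction ≈ 0.15 SD of achievement?
instruction_effect = 0.57                # pooled effect on instruction (SD)
achievement_per_sd_instruction = 0.15
implied_achievement_gain = instruction_effect * achievement_per_sd_instruction
print(f"implied achievement effect ≈ {implied_achievement_gain:.2f} SD")  # ≈ 0.09 SD
```

That is in the same ballpark as the +0.11 SD pooled achievement effect: a lot of change in teaching for a modest change in results.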

Four, the findings are not likely to be the result of chance for reading (ES = 0.14, p < .01), but are within the margin of error for maths and science (smaller effect sizes and fewer studies, so it was never going to reach statistical significance).
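To see why ‘smaller ES and fewer studies’ matters, here is a rough sketch that runs the same z-test on two pooled estimates. The reading effect size (0.14) comes from the paper; the standard errors, and the maths figure, are hypothetical numbers of my own, chosen purely to illustrate the contrast:

```python
# Rough illustration (made-up standard errors, not from the paper): the same
# z-test on a larger effect with more studies (reading) and a smaller effect
# with fewer studies (maths/science) gives very different p-values.
from math import erf, sqrt

def two_sided_p(effect_size, standard_error):
    """Two-sided p-value for a z-test of a pooled effect against zero."""
    z = effect_size / standard_error
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"reading: p ≈ {two_sided_p(0.14, 0.05):.3f}")                # clearly significant
print(f"maths (hypothetical): p ≈ {two_sided_p(0.07, 0.08):.3f}")   # within the margin of error
```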

Five, when thinking about the overall benefit, for me, the issue is cost-effectiveness. How expensive is coaching, per student, compared with one-to-one tuition? A lot cheaper, I expect, so if you can get a reliable impact on student outcomes it is worth doing (particularly as the effects may continue for other groups of students if the teacher’s effectiveness has improved, whereas the benefits of one-to-one tuition cease when you stop providing it).
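To make that comparison concrete, here is a very crude ‘cost per SD of gain per pupil’ calculation. Every figure in it (the cost of coaching a teacher, how many pupils that teacher reaches, the cost and effect of one-to-one tuition) is a hypothetical assumption of mine, not a number from the paper or from EEF costings; the point is only the shape of the comparison:

```python
# A crude cost-effectiveness comparison with entirely hypothetical figures
# (costs and the one-to-one effect size are illustrative assumptions only).
def cost_per_effect_sd(cost_per_pupil, effect_size_sd):
    """Spend needed to 'buy' one SD of achievement gain per pupil."""
    return cost_per_pupil / effect_size_sd

# Coaching: the cost is spread over every pupil the coached teacher teaches
coaching_cost_per_pupil = 2000 / 150   # e.g. £2,000 per coached teacher, ~150 pupils
one_to_one_cost_per_pupil = 700        # e.g. a block of one-to-one tuition for one pupil

print(f"coaching:   £{cost_per_effect_sd(coaching_cost_per_pupil, 0.11):,.0f} per SD per pupil")
print(f"one-to-one: £{cost_per_effect_sd(one_to_one_cost_per_pupil, 0.30):,.0f} per SD per pupil")
```

Even on deliberately cautious assumptions, spreading the cost of coaching one teacher across all the pupils they teach tends to look cheap per unit of gain, which is essentially the point being made above.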

Six, I don’t like their search strategy: I think it potentially builds in publication bias. I have no problem with expert informant recommendations, but then they needed to compensate for this selectivity by designing a systematic search which could find these studies (and the others which meet the same criteria). ‘Experts’ are likely to be pro-coaching and to recommend (or remember) successful evidence. The trim and fill analysis suggests publication bias inflates the estimates (i.e. doubles them).
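To illustrate the mechanism (this is a toy simulation of my own, not the trim and fill method itself), imagine 200 small studies of the same intervention, where only the ones with a clearly ‘successful’ result get remembered and recommended:

```python
# Toy simulation of publication/recommendation bias: pooling only the
# 'successful' studies inflates the estimate of the true effect.
import random
import statistics

random.seed(1)
true_effect = 0.10   # the real underlying effect (hypothetical)
study_se = 0.12      # sampling error of each small study (hypothetical)

# Simulate 200 small studies of the same intervention
observed = [random.gauss(true_effect, study_se) for _ in range(200)]

# 'Experts remember the successes': only clearly positive results get recommended
recommended = [es for es in observed if es / study_se > 1.96]

print(f"pooled estimate, all studies:       {statistics.mean(observed):.2f}")
print(f"pooled estimate, 'successful' only: {statistics.mean(recommended):.2f}")
```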

Finally, at least there is evidence of impact on student outcomes, which means coaching can indeed improve them!

Implications

So what are the implications of this research and discussion for you in your role as a school research lead?

To some extent, they depend upon your setting and context.  If you are a school research lead operating across a range of schools within a multi-academy trust (MAT), where interventions are adopted across the whole trust, the results of any coaching intervention are likely to be significantly smaller than when first applied in a pilot school.

Any intervention must be seen in terms of its ‘opportunity cost’, i.e. what would have been the value of the next highest valued alternative use of that resource, in terms of both resources and changes in teacher and pupil learning.   So it is important to think not only about the immediate benefits but about the long-term benefits and costs, and about any negative unintended consequences, such as the attention cost.

Regardless of your setting, it’s important not to be overwhelmed by the apparent complexity of research findings.  In this context, as long as you understand what an effect size is and what counts as a big one, it’s possible to get some purchase on the issues.   So if we take a range of interpretations of the size of an effect size:
  • John Hattie and his estimate of average effect sizes being around +0.4.  
  • The EEF’s DIY Evaluation Guide, written by Rob Coe and Stuart Kime, where on pages 17 and 18 they provide guidance on the interpretation of effect sizes (−0.01 to 0.18 low, 0.19 to 0.44 moderate, 0.45 to 0.69 high, 0.70+ very high), with effect sizes also converted into months of progress.
  • Alternatively, if you are interested in the relationship between effect sizes and GCSE grades, you could turn to Coe (2002), who notes that an improvement of one GCSE grade represents an effect size of about +0.5 to +0.7.
  • Slavin (2016), who suggests that the average effect size for a large study (250 participants) is 0.11.
From this it should be quickly apparent that coaching is no more of a magic bullet than the average intervention; indeed, it may have a smaller-than-average effect on achievement, and it may also be a lot more expensive. The short sketch below sets the pooled coaching effects against the EEF bands quoted above.
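The helper below is nothing more than the DIY Evaluation Guide thresholds quoted in the bullet points above, wrapped in a function; the example labels are mine:

```python
# Convenience wrapper around the EEF DIY Evaluation Guide bands quoted above
# (−0.01 to 0.18 low, 0.19 to 0.44 moderate, 0.45 to 0.69 high, 0.70+ very high).
def eef_band(effect_size):
    if effect_size < 0.19:
        return "low"
    if effect_size < 0.45:
        return "moderate"
    if effect_size < 0.70:
        return "high"
    return "very high"

for label, es in [("coaching (achievement)", 0.11),
                  ("coaching (instruction)", 0.57),
                  ("typical large study (Slavin)", 0.11)]:
    print(f"{label}: {es:+.2f} SD -> {eef_band(es)}")
```

On those bands the instruction effect looks ‘high’, but the achievement effect, the one that ultimately matters for pupils, sits firmly in the ‘low’ band.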

Finally, take time to develop your ‘mindlines’ (Gabbay & Le May, 2004), i.e. collectively reinforced, internalised, tacit guidelines. These are informed by brief reading, but also by your own and your colleagues’ experience, your interactions with each other and with opinion leaders, researchers, and other sources of largely tacit knowledge.   Modern technology and social media allow you to contact experts from outside of your own setting; most will be grateful that you have shown an interest in their work and, more often than not, will be hugely generous with both their time and expertise.

References

Coe, R. (2002). It's the effect size, stupid: What effect size is and why it is important.
Gabbay, J., & Le May, A. (2004). Evidence based guidelines or collectively constructed “mindlines”? Ethnographic study of knowledge management in primary care. BMJ, 329(7473), 1013. doi:10.1136/bmj.329.7473.1013
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement: Routledge.
Higgins, S. (2017, 18 April). [Meta-analysis on coaching]. Personal correspondence.
Kraft, M. A., Blazar, D., & Hogan, D. (2016). The Effect of Teacher Coaching on Instruction and Achievement: A Meta-Analysis of the Causal Evidence.
Slavin, R. (2016). What is a Large Effect Size? Retrieved from http://www.huffingtonpost.com/robert-e-slavin/what-is-a-large-effect-si_b_9426372.html

Comments

  1. Regarding the effect size “of +0.57 standard deviations (SD) on instruction and +0.11 (SD) on achievement”: there is a story of a taxi company installing higher quality brakes on all the cars in its fleet, expecting fewer accidents. But the taxi drivers changed their driving habits and the gain from the better brakes was nullified. Suppose (1) each student sets themselves a goal (the class average goal might be to earn a B), and (2) I teach my subject more clearly. The consequence is that the student can do a little less homework, revision, etc. and still earn that desired B. We teachers can be happy that it is not a zero-sum game.