Thursday 18 May 2017

Lesson Study and the 6 As of effective research use

In recent years Lesson Study (LS) has become an increasingly popular form of professional development. Indeed, there is a large body of evidence of the positive impact of lesson study, and evidence that teachers enjoy and engage with lesson study as an approach to professional learning. As yet, there is less extensive evidence of the professional learning from research lessons making a permanent difference either to daily teaching and learning or to pupil learning outcomes, although there is no evidence of any negative impact. The robustness of this evidence is reduced by the fact that the studies which do demonstrate positive outcomes are frequently (and often very) small-scale case studies, making generalisation more difficult. However, all studies found a positive impact, and the number of small-scale studies finding positive impact suggests a general positive trend across many different settings and contexts.

The research base on LS has recently been added to by Lewis and Perry (2017), who have published a randomised controlled trial of lesson study supported by mathematical resources (on fractions), which led to improvements in both teachers' (+0.19 SD effect size) and students' (+0.49 SD effect size) knowledge of fractions (see this post on how to interpret the size of effect sizes). As such, and in light of these findings, there is a temptation to argue that the results of this trial provide further supporting evidence for those schools wishing to adopt LS as a major component of their programme of professional learning.

However, before any decisions are made by schools as to whether or not to adopt LS, it is necessary to consider whether Lewis and Perry's research meets the conditions for effective research use. To help with this task, I will use the 6 As framework developed by @profstig and outlined in this tweet.

Accessible

Unfortunately Lewis and Perry's work sits behind a 'paywall'. Moreover, the Journal for Research in Mathematics Education is not included in the 2,000 journals you can access via your membership of the Chartered College of Teaching. Even if you can get hold of a copy of the article, the statistics used in the paper are advanced, and would not necessarily be accessible even to colleagues who hold advanced degrees in education. That said, if you have an understanding of effect sizes, then it may be possible to gain a basic understanding of the claims being made, i.e. LS combined with an appropriate resource pack would appear to lead to increases in both teachers' and pupils' knowledge of fractions. On the other hand, Lewis and Perry provide a very clear conceptual map of the inter-relationship between LS, the resources made available and the outcomes for pupils.

Figure 1, taken from Lewis and Perry (2014)

Accurate

Given my relative lack of statistical knowledge, I am not in a position to say whether the findings are accurate or not. However, if you focus on effect size, then it is possible to make a couple of observations. First, as Simpson (2017) notes, effect sizes are vulnerable to researcher manipulation in a number of ways. Again, my statistics are not strong enough to examine whether this is the case in this research, but it does mean we need to make sure we do not take effect sizes at face value. Second, there is significant debate about what is meant by the size of an effect size. Indeed, with an ES of +0.49 SD being claimed, this would suggest we have a result which is larger than the average effect size.
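For readers less familiar with the term, the effect sizes quoted here are standardised mean differences. A minimal sketch of the standard (Cohen's-d-style) definition is given below; the precise estimator Lewis and Perry use may differ in its details.

$$ d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}} $$

In other words, the difference between the group means is expressed in units of the spread of scores, which is what allows a figure such as +0.49 SD to be compared across studies and across different outcome measures.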

Applicable - specific context and level of use 

The research was conducted mainly in elementary schools in the United States, with volunteer staff, the majority of whom had previous experience of LS. As such, it cannot be assumed that the research generalises to a secondary school in England, where there is little or no experience of LS and where it is being adopted as a mandatory school-wide approach to professional learning.

Acceptable to views and beliefs

Given that LS has in recent years become a widely adopted form of professional learning, the research would appear to be outwardly consistent with current views and beliefs regarding teacher professional learning.

Appropriate to context

This will of course depend upon the needs of the school, staff and pupils, and whether there is a need to develop pupils' knowledge of fractions. Within the school, there may be other, more pressing problems of practice. Accordingly, we need to ask: what is the problem that lesson study is the answer to? Fractions are notoriously difficult to teach well, so they are a “good bet” or perhaps an “easy win”. That said, if you choose something you are already reasonably good at, then the gains will be far more marginal. In other words, the focus of improvement efforts should be on areas where there is scope for impact.

Actionable 

There would appear to be two inter-related issues as regards whether the research is actionable or not. First, LS is potentially extremely resource intensive, and there are questions as to whether, in the current climate of austerity, schools have the resources to support the implementation of LS. However, LS can be managed without significant expense if the time and resources spent on less effective forms of PD (e.g. whole-school one-off speakers, teachers being sent on one-day outside programmes, etc.) are redeployed to support lesson study. If that is done, and all PD time is given over to the LS process, there is potential for LS to be a very powerful form of PD. However, for this to work, schools must be using a model of lesson study that incorporates all of the key features as seen in Japan, and not leaving bits out because they don't suit them or because there is no time for them.

Second, even if schools have the resources to support LS, the effectiveness of LS with mathematical resources does not mean that LS will be effective without such resources (Lewis & Perry, 2017). As such, whether this research is actionable depends upon whether appropriate specialist resources are available to support its implementation. Maybe the fractions resource pack could be adapted to reflect any priority theme for the school, but some form of outside expertise/knowledge is a prerequisite to making lesson study successful, and this is one key feature of lesson study that schools in England routinely leave out (see lessonstudy.co.uk for an example of this). Indeed, many schools seem to feel that having a group of teachers talk about their practice is enough, with the danger that this can lead to 'happy talk' if there is not some external expertise/knowledge brought in from outside the school.

To conclude


On balance, Lewis and Perry's work is a welcome contribution to the evidence base on the effectiveness of LS. However, the real value for me of Lewis and Perry's work emerges when it is subjected to scrutiny using @profstig's 6 As model for effective research use. Doing so makes clear some of the challenges of trying to apply research findings to the context of an individual school. Even though some research findings may get favourable publicity and be consistent with current views and practice, this does not mean they should be adopted without a degree of structured challenge.

Note

I would like to thank both @sarahselezynov and @profstig for commenting on an earlier draft of this post.

References

Lewis, C., & Perry, R. (2014). Lesson Study with Mathematical Resources: A Sustainable Model for Locally-Led Teacher Professional Learning. Mathematics Teacher Education and Development, 16(1), n1.

Lewis, C., & Perry, R. (2017). Lesson study to scale up research-based knowledge: A randomized, controlled trial of fractions learning. Journal for Research in Mathematics Education, 48(3), 261-299.

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy.

Wiliam, D. (2016). Leadership for teacher learning. West Palm Beach: Learning Sciences International.


Friday 12 May 2017

The school research lead and what's the difference between research, evidence-based practice and practitioner inquiry

During a recent #UKEDResChat discussion (Thursday 11 May) an emerging issue was the lack of a shared understanding of the terms research, practice-based inquiry and evidence-based practice. To help develop a shared understanding of these terms, I have put together the following table, which attempts to draw out the differences between them. However, I need to stress that this table is still in the early stages of development and refinement and should be seen as nothing more than a contribution to the discussion. Accordingly, the table should not be seen as some form of definitive statement of the differences and similarities between research, practitioner inquiry and evidence-based practice.

So here goes



| | Practice Based Inquiry | Evidence-Based Practice | Research and research methods |
|---|---|---|---|
| Process | Inductive | Abductive | Deductive |
| Evidence | Emphasis on experience, practitioner expertise and local knowledge | Emphasis on making the most of ‘best existing’ evidence | Emphasis on generating data of the highest scientific rigor |
| Focus | Focus on the practicality of solutions | Focus on the plausibility of conclusions | Focus on scientific evidence as the basis of conclusions |
| Validity | Focus on internal validity: what works in the here and now | Focus on internal and external validity: what works in my context, and can research findings be transferred to my context? | Focus on internal validity: did X cause Y in my study? |
| Sources of evidence | Based on inter-connected though potentially incomplete sources of evidence | Based on inter-connected but different sources of evidence | Based on independent samples |
| Outcomes | Practice-based judgement | Evidence-based judgement | Understanding of the unknown |
| Aim | To maintain and improve practice | To improve decision-making | To increase generalizable knowledge |

Any comments would be gratefully received



Friday 5 May 2017

The school research lead and the evidence for coaching improving teaching and students' academic achievements

Recently a meta-analysis by Kraft, Blazar and Hogan (2016) affirming the effectiveness of coaching as a developmental tool gained some prominence on Twitter (see @GeoffreyPetty). By aggregating results across 37 studies, Kraft et al. (2016) found pooled effect sizes of +0.57 standard deviations (SD) on instruction and +0.11 SD on achievement, which they claim affirms the effectiveness of coaching as a developmental tool. Nevertheless, they recognised that in large-scale trials involving more than 100 teachers, the effect size was half that found in small-scale studies.

Now the temptation is to say that these findings justify an increased focus on coaching as a form of teacher professional development. Indeed, it might even be claimed that coaching is the 'magic bullet' of school improvement. However, with all these things, it's never quite that simple. So given that the 'statistics' in Kraft et al. (2016) are 'above my pay grade', I took my own advice (see School Research Leads need 'mindlines' not guidelines) and contacted @profstig (Professor Steve Higgins of the University of Durham) to see if the paper stood up to informed scrutiny. So what did Higgins (2017) say?

One, it is a fairly robust meta-analysis. The techniques are sound and applied well.

Two, overall it looks like Bananarama again: “the quality and focus of coaching may be more important than the actual number of contact hours”. The drop in impact found when moving from efficacy to effectiveness trials (from 0.17 to 0.08) also reinforces this message.

Three, “a one SD (i.e. ES = 1.0) change in instruction is associated with a 0.15 SD change in achievement” – looks pretty inefficient to me – this is a big change in behavior for what looks like a small gain in achievement. Some of the changes brought about by coaching must not be necessary or may be incorrect! (A rough back-of-the-envelope illustration of this point follows after these points.)

Four, the findings are not likely to be the result of chance for reading (0.14, p < .01), but are within the margin of error for maths and science (smaller ES and fewer studies, so it was never going to reach statistical significance).

Five, when thinking about the overall benefit, for me, the issue is cost effectiveness. How expensive is coaching compared with one-to-one tuition per student? A lot cheaper I expect, so if you can get a reliable impact on student outcomes it is worth doing (particularly as the effects may continue for other groups of students if the teacher’s effectiveness has improved, whereas one-to-one benefits cease when you stop doing it).

Six, I don't like their search strategy: I think it potentially builds in publication bias. I have no problem with expert informant recommendations, but they then needed to compensate for this selectivity by designing a systematic search which could find these studies (and the others which meet the same criteria). ‘Experts’ are likely to be pro-coaching and to recommend (or remember) successful evidence. The trim-and-fill analysis suggests publication bias inflates the estimates (i.e. doubles them).

Finally, at least there is evidence of impact on student outcomes, meaning coaching can improve them!
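To put Higgins's third point into rough numbers, here is a purely illustrative back-of-the-envelope calculation using the figures quoted above (my own arithmetic, not a calculation reported by Kraft et al.): if a one SD change in instruction is associated with a 0.15 SD change in achievement, then the pooled instruction effect of +0.57 SD would be expected to translate into roughly

$$ 0.57 \;\text{SD (instruction)} \times 0.15 \;\frac{\text{SD (achievement)}}{\text{SD (instruction)}} \approx 0.09 \;\text{SD (achievement)}, $$

which is broadly in line with the pooled achievement effect of +0.11 SD, and illustrates why a seemingly large change in observed instruction buys a comparatively small gain in what students achieve.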

Implications

So what are the implications of this research and discussion for you in your role as a school research lead?

To some extent, they depend upon your setting and context. If you are a school research lead operating across a range of schools within a multi-academy trust (MAT), where interventions are adopted across the whole MAT, the results of any coaching intervention are likely to be significantly smaller than when it is first applied in a pilot school.

Any intervention must also be seen in terms of its 'opportunity cost', i.e. the value of the next highest-valued alternative use of that resource, in terms of both resources and changes in teacher and pupil learning. So it is important to think not only about the benefits but also about the long-term benefits and costs, and about any negative unintended consequences, such as the cost in attention.

Regardless of your setting, it's important not to be overwhelmed by the apparent complexity of research findings. In this context, as long as you have an understanding of what an effect size is, and of how to judge how big an effect size is, it's possible to get some understanding of the issues. So let us take a range of interpretations of the size of an effect size:
  • John Hattie (2008) and his estimate that average effect sizes are around +0.4.
  • The EEF's DIY Evaluation Guide, written by Rob Coe and Stuart Kime, where on pages 17 and 18 they provide guidance on the interpretation of effect sizes (-0.01 to 0.18 low, 0.19 to 0.44 moderate, 0.45 to 0.69 high, 0.7+ very high), with effect sizes also converted into months of progress.
  • Alternatively, if you are interested in the relationship between effect sizes and GCSE grades, you could turn to Coe (2002), who notes that an improvement of one GCSE grade represents an effect size of about +0.5 to +0.7.
  • Slavin (2016), who suggests the average effect size for a large study (250 participants) is 0.11.
From this it should be quickly apparent that coaching is no more of a magic bullet than the average intervention; indeed, it may have a less-than-average effect, and it may also be a lot more expensive. A rough worked comparison against these benchmarks is sketched below.
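If it helps to make that comparison concrete, here is a minimal, purely illustrative Python sketch. The function names, thresholds and conversions are my own choices for illustration: the band labels are the EEF DIY Evaluation Guide figures quoted above, and the GCSE conversion is Coe's (2002) rule of thumb; nothing here comes from Kraft et al.'s own analysis.

```python
# Illustrative arithmetic only: placing a reported effect size (ES) against the
# benchmarks listed above. Band labels follow the EEF DIY Evaluation Guide;
# the GCSE conversion uses Coe's (2002) rule of thumb that one grade is
# roughly +0.5 to +0.7 SD.

def eef_band(es: float) -> str:
    """Return the EEF DIY Evaluation Guide band for an effect size."""
    if es < 0.19:
        return "low"
    if es < 0.45:
        return "moderate"
    if es < 0.70:
        return "high"
    return "very high"


def gcse_grade_range(es: float) -> tuple[float, float]:
    """Approximate fraction of a GCSE grade implied by an effect size (Coe, 2002)."""
    return es / 0.7, es / 0.5


coaching_es = 0.11  # pooled achievement effect reported by Kraft et al. (2016)
low, high = gcse_grade_range(coaching_es)
print(f"EEF band: {eef_band(coaching_es)}")                # low
print(f"Roughly {low:.2f} to {high:.2f} of a GCSE grade")  # roughly 0.16 to 0.22
```

On these rules of thumb, the pooled +0.11 SD achievement effect sits in the EEF's 'low' band and corresponds to somewhere around a fifth of a GCSE grade, which is the basis for the comparison made above.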

Finally, take time to develop your 'mindlines' (Gabbay & Le May, 2004), i.e. collectively reinforced, internalised, tacit guidelines. These are informed by brief reading, but also by your own and your colleagues' experience, your interactions with each other and with opinion leaders, researchers, and other sources of largely tacit knowledge. Modern technology and social media allow you to contact experts from outside your own setting; most will be grateful that you have shown an interest in their work and, more often than not, will be hugely generous with both their time and expertise.

References

Coe, R. (2002). It's the effect size, stupid: What effect size is and why it is important.
Gabbay, J., & Le May, A. (2004). Evidence based guidelines or collectively constructed “mindlines?” Ethnographic study of knowledge management in primary care. BMJ, 329(7473), 1013. doi:10.1136/bmj.329.7473.1013
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Higgins, S. (2017, 18 April). [Meta-analysis on coaching]. Personal correspondence.
Kraft, M. A., Blazar, D., & Hogan, D. (2016). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence.
Slavin, R. (2016). What is a large effect size? Retrieved from http://www.huffingtonpost.com/robert-e-slavin/what-is-a-large-effect-si_b_9426372.html