Friday, 14 September 2018

School research leads, piling up the evidence and a reflection on researchED 2018

Last weekend saw the annual education evidence fest – aka researchED 2018 – take place in St John's Wood, London. Unfortunately, one of the inevitable disappointments of attending #rED18 is that you are unable to see all the speakers you would like to see.  As such, you often have to make do with other people's summaries of the sessions.  So I was particularly pleased to see Schools Week come up with an article headlined 'ResearchED 2018: Five interesting things we learned'.  I was even more pleased when I saw it contained a short summary of Dr Sam Sims's presentation on the positive impact of instructional coaching, with references to a statistically significant positive effect of instructional coaching being found in 10 out of 15 studies – which was then used to infer that instructional coaching is 'probably the best-evidenced form of CPD currently known to mankind'.

For me, this set off a few alarm bells – not about instructional coaching itself, but about what we can legitimately infer and claim when a variety of studies find positive and statistically significant results from the implementation of an intervention in a range of different circumstances.  So, to help get to grips with this issue, I am going to lean heavily on the work of Cartwright and Hardie (2012) to explore the relationship between multiple positive results for an intervention and whether they provide evidence that the intervention will work in your setting.

Piling on the evidence

Writing about situations where RCTs have been conducted in a variety of circumstances, Cartwright and Hardie argue that these results can help in justifying a claim that an intervention is effective – i.e. it worked somewhere, or in a range of different somewheres.
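As a quick back-of-the-envelope illustration of why that kind of tally is persuasive, consider the '10 out of 15 studies' figure quoted above.  The sketch below is my own and is not taken from Sims's presentation or the Schools Week article; it assumes the 15 studies are independent and that each would have roughly a 5 per cent chance of a false-positive 'significant' result if instructional coaching had no effect anywhere – assumptions real studies will not exactly satisfy.  It is written in Python and uses scipy.

from scipy.stats import binom

# If instructional coaching truly had no effect anywhere, each study would have
# roughly a 5% chance of producing a false-positive 'significant' result.
# How likely would 10 or more significant results out of 15 studies then be?
n_studies = 15          # number of studies cited
n_significant = 10      # studies reporting a significant positive effect
alpha = 0.05            # conventional false-positive rate under 'no effect'

# P(X >= 10) for X ~ Binomial(15, 0.05); sf(k) gives P(X > k)
p_by_chance = binom.sf(n_significant - 1, n_studies, alpha)
print(f"P(at least {n_significant} of {n_studies} significant by chance) = {p_by_chance:.1e}")

The probability is vanishingly small, which is why a run of positive results is good evidence that the intervention worked somewhere.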

However, this does not provide direct evidence that the intervention will work here, in your circumstances or setting. Cartwright and Hardie argue that seeing the same intervention work in a variety of different circumstances provides some evidence that the intervention can play a causal role in bringing about the desired change.  It may also provide evidence that the support factors needed for the intervention to work occur in a number of different settings.  On the other hand, an intervention working in lots of places is only relevant to whether it will work here if certain assumptions are met: first, that the intervention can play the same causal role here as it did there; and second, that the support factors necessary for the intervention to play a positive causal role here are available for at least some individuals post-implementation.

Cartwright and Hardie go on to explain how 'it works there, there and there' is supposed to provide evidence that it will work here:

X plays a causal role in situation 1
X plays a causal role in situation 2
So X plays a causal role everywhere.

For this argument to be robust, it requires research studies/RCTs from a wide range of settings.  It also requires a judgement not only about whether the observations are generalisable across settings, but about the types of settings to which they are likely to generalise.  In addition, it is necessary to take into account the causal connections at work and how they are influenced by local support factors.
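Put schematically, the repaired argument needs extra premises to get from 'there' to 'here'.  The following is my own paraphrase of Cartwright and Hardie's reasoning rather than a quotation (written as a LaTeX snippet; amsmath and amssymb assumed):

% Naive induction (not robust): X worked in settings 1..n, therefore X will work here.
% The version Cartwright and Hardie are prepared to defend needs further premises:
\begin{align*}
&\text{P1: } X \text{ played a positive causal role in settings } 1, \dots, n\\
&\text{P2: } X \text{ can play the same causal role here}\\
&\text{P3: the support factors } X \text{ needs are, or can be put, in place here}\\
&\therefore\ X \text{ can work here}
\end{align*}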

Problems with induction - a sidebar

At this point it is worth remembering that all inductive inferences can be wrong.  Kvernbekk (2016) cites Bertrand Russell, who talks about a chicken:

Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading.  The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

As such, we may be in no better a position than the chicken which unexpectedly has its neck wrung.

Nevertheless, Cartwright and Hardie go on to argue that lots of positive RCT results are a good indicator that the intervention plays the same causal role widely enough for it potentially to 'carry' to your setting.  However, this rests on the RCTs having been carried out in a sufficiently wide range of settings, with some ideally similar to your own, so that you are able to make some generalisations from there to here.  Even then, Cartwright and Hardie note that 'that bet is always a little bit dicey.'

Discussion and implications

First, be wary of headlines or soundbites – it’s rarely that straightforward.

Second, if you can, access the original research which was used for the headline and read it for yourself, as there may be a range of nuances which you need to be aware of.

Third, if you think that instructional coaching might have potential for your school, you need to spend time thinking about the causal roles and support factors necessary to give the intervention a chance of working.  Indeed, you may want to have a look at this post on causal cakes.
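To make the 'causal cake' idea concrete, here is a minimal sketch in Python.  The support factors listed are invented for a hypothetical instructional coaching roll-out; the real list for your school would come from your own local knowledge and from the research itself.

# A toy 'causal cake' for a hypothetical instructional coaching roll-out.
# The intervention only contributes to the outcome if every 'slice' of the
# cake is present in your setting. The factors below are invented examples.
causal_cake = {
    "instructional coaching (the intervention itself)": True,
    "coaches with subject and coaching expertise": True,
    "protected time for coaching conversations": False,   # missing in this example
    "teachers willing to act on feedback": True,
}

missing = [factor for factor, present in causal_cake.items() if not present]
if missing:
    print("Unlikely to work here yet - missing support factors:", "; ".join(missing))
else:
    print("All identified support factors appear to be in place.")

The value of the exercise is not the code but the conversation it forces: which slices of the cake are missing here, and can they be put in place?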

Fourth, as Cartwright and Hardie argue, thinking about causal roles and support factors is hard work and difficult to get right every time.  However, it's not beyond you and your colleagues to have a serious discussion about what needs to be put in place to give an intervention every chance of success.

Fifth, there is a whole separate discussion about whether statistical significance can tell you anything useful at all – if you are not aware of it, it's worth having a look at Gorard et al. (2017).

And finally

In next week’s post I will be using the work of Kvernbekk (2016) to explore in more detail the challenge of making what worked there work here.

References

Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford: Oxford University Press.

Gorard, S., See, B. and Siddiqui, N. (2017). The Trials of Evidence-Based Education. London: Routledge.
Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.





My new book is due out at the end of September 2018

https://uk.sagepub.com/en-gb/eur/evidence-based-school-leadership-and-management/book257046

Wednesday, 5 September 2018

The school research lead and understanding evidence-informed practice

The start of this week will have seen most schools have at least one day of INSET/CPD – call it what you will – to start off the new academic year.  No doubt many colleagues will have played 'bullsh.t bingo', ticking off the number of times terms such as research, evidence, evidence-informed practice, best practice and 'the evidence says' are used by members of the senior leadership team.  On the other hand, a full understanding of these terms can be really useful in helping you not only spot the 'bullsh.t' but also get past it and, in doing so, make things better for your pupils and school.  So this post is going to get to grips with the term 'evidence-based practice' and some of the concepts and ideas which are bundled up and implied in its use.  To help me do this, I'm going to use the work of Kvernbekk (2016), who explores in some depth the role of evidence in evidence-based practice in her book Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions.

Evidence-Based Practice – A summary of Kvernbekk 
  • Evidence-based practice (EBP) can be defined as the production or creation of desirable change and the prevention of undesirable change, somehow guided by evidence of what works.
  • Education is a complex enterprise whose very raison d'être is, at least in part, to produce some form of desirable change.
  • Practitioners have to consider how to create the desired changes, so at least some of the knowledge they need is knowledge of 'what works' in bringing about those changes.
  • Put simply, this involves causal relationships – a variable X is a cause of Y if Y 'listens' to X and determines its values in response to what it hears.
  • However, causation needs to be seen as (a rough formal sketch of the first two readings follows after this list):
    • Probabilistic – if we do X, we increase the chances of getting Y
    • Manipulationist – X can be changed or manipulated to bring about Y
    • Human agency – an event X is a cause of a distinct event Y just in case bringing about the occurrence of X would be an effective means by which a free agent could bring about the occurrence of Y
  • Nevertheless, one cause is seldom sufficient to produce the desired effect; a 'support team' of other factors is needed if a cause is to do its work, and much of this concerns local facts, for example about the school, stakeholders, pupils and staff
  • So making a judgment about whether an intervention could work here, in your setting, requires different sources of evidence – research evidence, practitioner expertise, school data and stakeholders' views – which all contribute to the development of local and non-local knowledge
  • Though in making this judgment, it is essential to be aware that any intervention is inserted into pre-existing conditions – be it in the classroom or the school – which already produce an existing level of 'output' of whatever it is you are trying to change
  • However, if this intervention is going to bring about the desired change, it also requires the system in which it is implemented to be sufficiently stable for predictable reproducibility to be possible
  • If, on the other hand, we have an unstable system, we cannot predict the results of the intervention and planning becomes difficult, if not impossible
  • By stability we mean that something persists over time and can be relied upon. Stability can be created in various ways, but if it is made too tight and too structured the system becomes inflexible and may collapse.
  • As such, flexibility and room for manoeuvre are necessary to keep the system stable around its basic values and principles
  • Without that, the system risks losing its identity as education and instead perhaps becoming a training or testing regime
  • Because education is an open and complex system, randomness is inevitable and may overturn even the best laid scheme.
  • Overall judgment: the core of EBP makes good sense – it's not a magic bullet and will not solve every problem
  • That said, EBP is more complicated than both its advocates and its critics have thought.
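As promised above, here is one common way of writing the first two readings of causation more formally.  The do(·) notation comes from the wider interventionist literature rather than from Kvernbekk, so treat this as my gloss rather than her formulation (LaTeX snippet):

% Probabilistic reading: intervening to bring about X raises the chance of Y
\[ P\big(Y \mid \mathrm{do}(X)\big) > P\big(Y \mid \mathrm{do}(\neg X)\big) \]

% Manipulationist reading: Y 'listens to' X together with its local support factors,
% so changing X is a means of changing Y
\[ Y = f(X, S_1, \dots, S_n) \]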


What does this mean for you as a school research lead?

It would seem to me that Kvernbekk’s analysis of the nature of evidence-based practice may have a number of implications for you in your role as a school research lead.

First, if you are ever asked what the point of evidence-based practice in schools is, the answer is quite straightforward – it's about making things better by using evidence of what has worked.

Second, given the causal nature of evidence-based practice, it makes a lot of sense to spend time understanding and exploring the potential of logic models and the relationship between inputs, outputs and outcomes.
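For a sense of what a logic model might look like in practice, here is a toy sketch in Python.  The four stages are the standard logic-model categories; the entries are invented for a hypothetical coaching programme and would need replacing with your own.

# A toy logic model for a hypothetical instructional coaching programme.
# Stages follow the standard logic-model chain; entries are illustrative only.
logic_model = {
    "inputs":     ["trained coaches", "timetabled coaching slots", "SLT backing"],
    "activities": ["fortnightly 1:1 coaching cycles", "observation with feedback"],
    "outputs":    ["coaching cycles completed", "agreed actions reviewed"],
    "outcomes":   ["improved teaching practice", "improved pupil outcomes"],
}

for stage, items in logic_model.items():
    print(f"{stage:>10}: {', '.join(items)}")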

Third, and this is an issue which I'll discuss in future posts, evidence-based practice is about fidelity to the principles of an intervention, rather than faithfully copying what seems to have worked in another setting.  In other words, if you try to faithfully replicate what someone else has done to make an intervention work, you are likely to fail in implementing it.  The support factors in your school will be different – so how you implement the intervention will also need to be different.

Fourth, it's really important to understand the role of local knowledge in trying to work out whether the support factors needed for an intervention to work are actually available in your setting.  Research evidence is just one source of the knowledge required to do this – you also need to understand your setting.

Fifth, and this is a point which I think Kvernbekk misses, a fundamental aspect of evidence-based practice involves the systematic determination of the merit, worth or value of the outcome of an intervention.  Evaluation involves more than determining whether something worked; it also involves asking whether it was worth it, and that requires an awareness of what is valued.

Sixth, and this links to the previous point, evidence-based practice is inextricably linked to the purposes of education.  If evidence-based practice involves bringing about desirable change – or, in other words, making improvements – this requires us to understand why things need to change and how we will know whether they have got better.

And finally,

If you are at researchED London this Saturday, don't hesitate to come up to me and say hello.

References

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.








My new book, Evidence-Based School Leadership and Management: A practical guide, is due out in September 2018.

https://uk.sagepub.com/en-gb/eur/evidence-based-school-leadership-and-management/book257046