Friday 31 August 2018

The school research lead and causal cakes

At the start of term there is normally an unusual number of birthday cakes in school staffrooms, as colleagues who have had birthdays over the summer break bring cakes into school for a belated birthday celebration.  However, if you are a school research lead or champion, what you really need to share with colleagues is something known as a ‘causal cake’.  The concept of a causal cake is particularly useful because it will help you get a better idea of the knowledge needed to make reliable predictions as to whether interventions that worked ‘somewhere’ will work ‘here’ in your school. So to help you do this, I am going to draw upon the work of Cartwright and Hardie (2012) on causal cakes and support factors.

Causal cakes, reading scores and homework

Cartwright and Hardie use research about an attempt to improve reading scores through the introduction of homework (we are going to assume that, under the right conditions, this intervention will work and improve reading scores).  They use the metaphor of a cake to describe what needs to be in place – the ingredients – if an intervention or policy is to work.  For example, for homework to contribute to higher reading scores you will need:

Student ability
Study space
Student motivations
Homework
Consistent lessons
Supportive family
Work feedback
Other

Put simply, the cake is just a picture of the list of ingredients, i.e. the support factors, required if homework is going to play a part in improving reading scores.
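If it helps to see the logic behind the metaphor written down, here is a minimal sketch in Python (the ingredient labels are my own shorthand rather than anything from Cartwright and Hardie): a cake only ‘bakes’, i.e. contributes to the effect, if every one of its ingredients is present.

    # A causal cake modelled as a set of required ingredients (support factors).
    # Ingredient labels are illustrative shorthand only.
    HOMEWORK_CAKE = {
        "homework", "student ability", "study space", "student motivation",
        "consistent lessons", "supportive family", "work feedback",
    }

    def cake_complete(required: set, present: set) -> bool:
        """A cake only contributes to the effect if every required
        ingredient is present; one missing ingredient spoils it."""
        return required <= present  # i.e. all required factors are present

    # Homework on its own, without the other support factors, does nothing:
    print(cake_complete(HOMEWORK_CAKE, {"homework", "student ability"}))  # False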

However, just as at the start of term there will be more than one birthday cake brought into the staffroom, you need to remember that there may well be more than one causal cake contributing to improved reading scores.

Smaller classes and the causal cake for improved reading scores

Cartwright and Hardie go on to discuss how, alongside the use of homework, a school may decide that another way to improve reading scores is through smaller class sizes. This causal cake might have these ingredients:
Smaller classes
Space in which to have the smaller classes
Sufficient qualified teachers
Other


Special needs teaching and the causal cake for improved reading scores

However, this approach may have a detrimental impact on another causal cake contributing to improved reading scores, one with the following ingredients/support factors:
Special needs teaching
Space
Other

The reduction in class size may lead to a reduction in the high-quality space necessary for special needs support, thereby reducing the positive contribution that special needs support makes to the overall reading score effect.

As Cartwright and Hardie note, there is more than one cake that can contribute to improved reading scores.  Indeed, some cakes might be more effective and less costly than others. On the other hand, a new intervention or approach may add negative contributions or remove factors that would otherwise have made a positive contribution. This is particularly important, as the issue of negative side-effects has often been overlooked in educational research (Zhao, 2018).
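To make that interaction concrete, here is a hedged sketch building on the earlier snippet (the cake contents are, again, my own simplification, not Cartwright and Hardie's): reading scores improve if at least one cake is complete, and a new intervention can complete one cake while knocking an ingredient out of another.

    # Three candidate cakes for improved reading scores (illustrative only).
    CAKES = {
        "homework":        {"homework", "study space", "student motivation"},
        "smaller classes": {"smaller classes", "qualified teachers"},
        "SEN teaching":    {"SEN teaching", "spare space"},
    }

    def complete_cakes(present: set) -> list:
        """Return the names of the cakes whose ingredients are all present."""
        return [name for name, needed in CAKES.items() if needed <= present]

    before = {"homework", "study space", "student motivation",
              "SEN teaching", "spare space", "qualified teachers"}
    print(complete_cakes(before))  # ['homework', 'SEN teaching']

    # Introducing smaller classes uses up the spare space the SEN cake needs:
    after = (before | {"smaller classes"}) - {"spare space"}
    print(complete_cakes(after))   # ['homework', 'smaller classes']

One cake has been gained but another lost, so the net effect on reading scores may be far smaller than the smaller-classes evidence alone would suggest (and this sketch does not even try to model the size of each cake's contribution).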

So what does this mean for you as a school research lead?

When thinking about implementing an intervention or new policy within your school, it would be worthwhile to think about the following questions:
  • What support factors have to be present if the intervention is to produce positive outcomes?
  • Will the intervention play a positive role in supporting other interventions?
  • Will the intervention play a negative role in ‘supporting’ other interventions and contribute to negative outcomes?
  • If the support factors are not all there, what happens? Nothing? Or less? Or something bad? 
  • Does the absence of a support factor turn a good cake into a bad cake? Or vice versa?
  • Are there other ways, other interventions, that produce the same results? What support factors are necessary? (Adapted from Cartwright and Hardie (2012), p. 73)

And finally

Remember, when someone makes a cake it is not just the ingredients that determine how good the cake is; the skill of the baker also plays a major role.  So just getting the ingredients together for an intervention will not in itself be enough; the intervention will need to be implemented with a high level of skill and expertise.

References

Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford. Oxford University Press.
Zhao, Y. (2018). What Works May Hurt—Side Effects in Education. New York. Teachers College Press.

My new book, due out October 2018: Evidence-Based School Leadership and Management: A practical guide https://uk.sagepub.com/en-us/nam/evidence-based-school-leadership-and-management/book257046


Friday 24 August 2018

The school research lead, the three-legged stool and how to avoid falling on your backside

As a school research lead you will no doubt have spent some of your summer reading research articles – be they systematic reviews or randomised controlled trials (RCTs) – and thinking about whether the interventions you have read about will work in your school and setting.  Indeed, you may have been working on a PowerPoint presentation making the case for why some well-researched and evidenced teaching intervention, which has shown positive outcomes in other schools, should be adopted within your school.  So to help you with the task of developing an argument which begins with ‘it worked there’ and concludes ‘it’ll work here’, I am going to lean on the work of Cartwright and Hardie (2012).  Before I do that, however, it is necessary to define a number of terms central to getting the most out of this blogpost, and then to clarify the causal claim being made when we say something ‘works’.

Some useful definitions
  • Causal claim – a claim of the form ‘A was a cause of B’.
  • Causal role – how a change in A leads to a change in B.
  • Support factors – the factors that need to be in place so that a change in A will lead to a change in B.

Clarifying what is meant by saying something works

Cartwright (2013) states that when arguing that ‘something works’, three quite different claims are often conflated:
  • The intervention works somewhere (there); the intervention causes the targeted effect in some individuals in some settings
  • The intervention works: the intervention causes the target effect ‘widely’
  • The intervention will work here: the intervention will cause the target effect in some individuals in this setting. (Adapted from Cartwright (2013), p. 98)
As such, evidence that supports a claim that something works somewhere/there is not evidence for saying that it will work here.  Cartwright and Hardie (2012) claim that what connects ‘it works somewhere’ with ‘it will work here’ is an argument that looks like this:
  • The policy worked there (i.e. it played a positive causal role in the causal principles that held there, and the support factors necessary for it to play this positive role were present for at least some individuals there).
  • The policy can play the same causal role here as there.
  • The support factors necessary for the policy to play a positive causal role here are in place for at least some individuals here post-implementation. (p. 54)
In other words, if you want to show that what worked (A) in another school, or other schools, in bringing about (B) will work for you, you first need to show that the intervention can play the same causal role in your school – in other words, that a change in (A) will trigger the same causal mechanism as it did in the other school or schools.  Second, you are going to have to show that the support factors which were at work there – the factors that created the conditions for changes in (A) to bring about changes in (B) – are in place and available in your setting.

As such, Cartwright and Hardie argue that the effectiveness argument is like a three-legged stool – it does not matter how strong one leg of the stool may be; if either of the other legs fails, the stool will fall over.  Put another way, it does not matter whether an intervention worked somewhere or causes the target effect widely – if either the causal mechanism or the support factors are not in place here, the intervention will not work here.
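As a minimal sketch of that conjunction (my own boolean rendering of the argument, not anything from Cartwright and Hardie):

    def will_work_here(worked_there: bool,
                       same_causal_role_here: bool,
                       support_factors_here: bool) -> bool:
        """Like a three-legged stool: if any one leg fails,
        the whole effectiveness argument falls over."""
        return worked_there and same_causal_role_here and support_factors_here

    # However strong the evidence that it worked there, a missing leg is fatal:
    print(will_work_here(True, True, False))  # False - it will not work here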

[Figure: the three-legged stool – Cartwright and Hardie (2012), p. 55]

What are the implications for you and your role as a school research lead?

First, the role of the school research lead/knowledge broker is not without significant challenges and difficulties, and it also has a ‘dark side’ (Kislov, Wilson, et al., 2016).  So anything that helps you to be clear about the evidence and what it means, and which stops you over-claiming what the research tells you, should be useful.

Second, when colleagues come up to you and say ‘the research says the intervention works – so let’s do it’, it might be worth pausing and replying: ‘That’s great news – but so that I can best help you, can you tell me more about how the intervention worked and the context within which it worked?’

Third, when reading research that provides little or no detail about the causal mechanism or support factors at work, the research could be judged as interesting but not necessarily that useful.  Further reading into what support factors are necessary for the intervention to succeed will be required.

What next?

This is the first of a series of posts covering ground that I hope is relevant to the start of the academic year.  These posts will be on topics such as: causal cakes; causal roles and mechanisms; what we mean by evidence-based practice; and when evidence travels.

References

Cartwright, N. (2013). Knowing What We Are Talking About: Why Evidence Doesn't Always Travel. Evidence & Policy: A Journal of Research, Debate and Practice. 9. 1. 97-112.

Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford. Oxford University Press.

Kislov, R., Wilson, P. and Boaden, R. (2016). The ‘Dark Side’ of Knowledge Brokering. Journal of Health Services Research & Policy. 1355819616653981.

My new book, due out October 2018: Evidence-Based School Leadership and Management: A practical guide

https://us.sagepub.com/en-us/nam/evidence-based-school-leadership-and-management/book257046

Friday 17 August 2018

The school research lead and RCTs - what can a systematic review tell us?

As a school research lead, one of the things you will have to get to grips with is the debate over whether randomised controlled trials (RCTs) can make any meaningful contribution to understanding ‘what works’ in educational settings.  Helpfully, Connolly, Keenan, et al. (2018) have recently published a systematic review on the use of RCTs in education, which seeks to address four key criticisms of RCTs: that it is not possible to undertake RCTs in education; that RCTs are blunt research designs which ignore context and experience; that RCTs tend to generate simplistic universal laws of ‘cause and effect’; and that they are inherently descriptive and contribute little to theory. In the rest of this post I will provide extracts from the systematic review, examine the review’s answers to the questions posed, identify some missed opportunities and, finally, make some comments about RCTs and the work of school research leads.

Connolly, et al. (2018) systematic review - extracts

The systematic review found a total of 1017 unique RCTs that have been completed and reported between 1980 and 2016, with three quarters of these being produced over the last 10 years.

Over half of all RCTs identified were conducted in North America and a little under a third in Europe. 

The RCTs cover a wide range of educational settings and focus on an equally wide range of educational interventions and outcomes. 

Connolly et al. go on to argue that the review: provides clear evidence to counter the claim that it is just not possible to do RCTs in education. As has been demonstrated, there now exist over 1000 RCTs that have been successfully completed and reported across a wide range of educational settings and focusing on an equally wide range of interventions and outcomes. Whilst there is a clear dominance of RCTs from the United States and Canada, there are significant numbers conducted across Europe and many other parts of the world. Many of these have been relatively large-scale trials, with nearly a quarter (248 RCTs in total) involving over one thousand participants. Moreover, a significant majority of the RCTs identified (80.8%) were able to generate evidence of the effects of the educational interventions under investigation.

As noted earlier, these figures are likely to be under-estimates given the limitation of the present systematic review, with its restricted focus on articles and reports published in English. Nevertheless, the evidence is compelling that it is quite possible to undertake RCTs in educational settings. Indeed, across the 1017 RCTs identified through this systematic review, there are almost 1.3 million people that have participated in an RCT within an education setting between 1980 and 2016…

Secondly, there is some evidence to counter the criticism that RCTs ignore context and experience. Whilst they only constitute a minority of the trials identified (37.7%), there were 381 RCTs found that included a process evaluation component…

Thirdly, there is more evidence to suggest that the RCTs produced within the time period have attempted to avoid the generation of universal laws of ‘cause and effect’. Certainly, those RCTs identified that have included at least some subgroup analyses would suggest a more nuanced approach amongst those conducting RCTs, that acknowledges that educational interventions are not likely to have the same effect across all contexts and all groups of students. Moreover, this is clearly evident amongst the majority of RCTs reported (77.9%) that included at least some discussion of and reflections on the limitations of the findings in terms of their generalizability…

… (In) relation to the fourth criticism regarding the atheoretical nature of RCTs, this is also challenged to some extent by the findings presented above. A clear majority of RCTs that were reported included some discussion of the theory underpinning the interventions under investigation (77.3%). Moreover, a majority of RCTs (60.5%) also provided some reflections on the implications of their findings for theory…

Overall, the findings from this systematic review of RCTs undertaken in education 1980 – 2016 are mixed. On the one hand, there is clear evidence that it is possible to conduct RCTs in education, regardless of the nature of the education setting or of the particular type and focus of the intervention under consideration….

On the other hand, it is perhaps not surprising that criticisms of RCTs continue when nearly two thirds of RCTs in this period of time have not included a process evaluation component and where nearly half of them have not looked beyond the overall effects of the intervention in question for the sample as a whole. Similarly, it is difficult to challenge the view that RCTs promote a simplistic and atheoretical approach to educational research when nearly 40% of trials in this analysis have failed to reflect upon the implications of their findings for theory.

Have the criticisms of RCTs been answered?

The main assumption of the systematic review is that it is possible to confirm whether RCTs can be carried out in education by merely counting the number of RCTs – subject to certain criteria – that have been carried out.  However, this does not answer the question of whether it is possible to carry out RCTs in an educational context.  It merely tells you that a number of RCTs have been carried out by researchers, and commissioners of research, who believe it is possible to carry out RCTs within education.

Missed opportunities

One of the first things that struck me about the systematic review was that it appeared to give very little attention, if any, to whether any of the RCTs had systematic flaws in their design.  It may well be that over 1,000 RCTs were conducted between 1980 and 2016, but a recent review by Ginsburg and Smith (2016) of 27 RCTs that met the minimum standards of the US-based What Works Clearinghouse found that 26 of them had serious threats to their usefulness.  If that ratio held more widely, it would suggest that maybe only 40 or so of the RCTs included in Connolly et al.’s review did not have some kind of serious threat to their trustworthiness.  Second, we would then need to look at how many of those 40 or so RCTs included a process evaluation, made a contribution to theory and did not seek to over-generalise.  It is likely to be a very small percentage of the RCTs carried out. In other words, out of 1,000-plus RCTs, how many did not have fundamental flaws in design or some other failing?
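For what it is worth, the ‘40 or so’ figure is a back-of-envelope extrapolation you can check in a couple of lines (my calculation, which heroically assumes Ginsburg and Smith’s ratio holds across Connolly et al.’s whole sample):

    # 1 of the 27 RCTs reviewed by Ginsburg and Smith (2016) was free of
    # serious threats; applying that ratio to Connolly et al.'s 1017 RCTs:
    total_rcts = 1017
    sound_ratio = 1 / 27
    print(round(total_rcts * sound_ratio))  # 38 - i.e. 'only 40 or so'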

What does this all mean for you as a school research lead?

For a start, Connolly et al provide an accessible introduction to a discussion of the issues associated with RCTs.  As such, it’s worth spending time reading the review.

Second, the review highlights some of the issues associated with hierarchies of evidence.  Both systematic reviews and RCTs appear near the top of such hierarchies.  However, what matters more is whether the research design is applicable to the research question at hand (Gorard, See, et al., 2017; Sharples, 2017).

Third, when reading RCTs it is necessary to have some kind of checklist so that you can judge the trustworthiness of what you are reading.  In my forthcoming book, Evidence-Based School Leadership and Management: A practical guide (Jones, 2018), I try to do this by combining the work of Gorard, et al. (2017) and Ginsburg and Smith (2016).  For example, look out for whether the intervention is being evaluated by the researchers who developed it.

Fourth, setting aside the issue of whether it is possible to conduct an RCT within education, and assuming that it is, there is the issue of how the analysis of the results is carried out, and whether p-values and statistical significance have been misused.  This is not the time or place to go into detail about this debate; that said, I would recommend you have a look at Wasserstein and Lazar (2016) – the American Statistical Association’s statement on p-values – and Gorard, et al. (2017), with more adventurous readers having a look at Greenland, Senn, et al. (2016).  If, on the other hand, you fancy something far more accessible, I recommend Richard Selfridge’s new book, Databusting for Schools (Selfridge, 2018).
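If you want a feel for why ‘statistically significant’ is so easily misread, here is a small illustrative simulation (my sketch, not something taken from the references above): even when an intervention has no effect whatsoever, roughly 5% of trials will still produce p < 0.05 by chance alone.

    # Simulate 1,000 two-arm trials in which the intervention truly does
    # nothing; about 5% still come out 'significant' at p < 0.05 by chance.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2018)
    false_positives = 0
    for _ in range(1000):
        control = rng.normal(loc=0.0, scale=1.0, size=50)    # same distribution
        treatment = rng.normal(loc=0.0, scale=1.0, size=50)  # for both arms
        _, p_value = ttest_ind(control, treatment)
        if p_value < 0.05:
            false_positives += 1

    print(false_positives / 1000)  # close to 0.05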

And finally

RCTs within educational research are not going away any time soon, and I hope this post has provided a little more clarity.

References

Connolly, P., Keenan, C. and Urbanska, K. (2018). The Trials of Evidence-Based Practice in Education: A Systematic Review of Randomised Controlled Trials in Education Research 1980–2016. Educational Research.
Ginsburg, A. and Smith, M. (2016). Do Randomized Controlled Trials Meet the “Gold Standard”? Washington, DC. American Enterprise Institute.
Gorard, S., See, B. and Siddiqui, N. (2017). The Trials of Evidence-Based Education. London. Routledge.
Greenland, S., Senn, S., Rothman, K., Carlin, J., Poole, C., Goodman, S. and Altman, D. (2016). Statistical Tests, P Values, Confidence Intervals, and Power: A Guide to Misinterpretations. European Journal of Epidemiology. 31. 4. 337-350.
Jones, G. (2018). Evidence-Based School Leadership and Management: A Practical Guide. London. SAGE Publishing.
Selfridge, R. (2018). Databusting for Schools. London. SAGE Publishing.
Sharples, J. (2017). A Vision for an Evidence-Based Education System – or Some Things I'd Like to See. London. Education Endowment Foundation.
Wasserstein, R. and Lazar, N. (2016). The ASA's Statement on P-Values: Context, Process, and Purpose. The American Statistician. 70. 2. 129-133.