Friday, 8 September 2017

Leading evidence-informed teaching: the rhetoric and the reality

A recent report sponsored by the Department for Education suggests that there is a significant difference between the rhetoric and the reality of evidence-informed teaching within schools, with a number of schools appearing to adopt the rhetoric of evidence-informed teaching while not embedding research and evidence into their day-to-day practice (Coldwell et al., 2017). Of the twenty-three schools involved in the report, only six could be described as having a whole-school approach to research and evidence; in another seven, the head and senior leadership were proactive in building a culture supporting the use of research and evidence; and the remaining ten had an unengaged research evidence culture. This finding is particularly surprising given the attention paid to creating a sample of schools balanced in terms of their engagement with research evidence.

The difference between unengaged and highly engaged research evidence cultures is illustrated in the following table from Coldwell et al. (2017).

[Table: unengaged vs highly engaged research evidence cultures, from Coldwell et al. (2017)]

So before I begin a series of posts examining what can be done to close the rhetoric-reality gap, it would be worth undertaking a brief self-audit of which column best fits your school's use of research evidence. If you locate your school within the school leadership evidence culture or the whole-school evidence culture, try to think of three pieces of supporting evidence you can use to back up that judgement.

Next week we will look at what school leaders can actively do to make sure a rhetoric-reality gap does not emerge.

Reference

Coldwell, M., Greany, T., Higgins, S., Brown, C., Maxwell, B., Stiell, B., Stoll, L., Willis, B., & Burns, H. (2017). Evidence-informed teaching: an evaluation of progress in England. Research report. London: Department for Education.

Friday, 1 September 2017

Judging the trustworthiness of research findings

In last week's post, I drew attention to differences in expert opinion over the usefulness of statistical significance testing, particularly as regards randomised controlled trials (RCTs). This week we will look at what else can go wrong with RCTs and the questions you need to ask when judging the trustworthiness of the associated research findings. But first, let's quickly describe what we mean by an RCT.

What is an RCT?

Connolly et al. (2017) describe an RCT as 'a trial of a particular educational programme or intervention to assess whether it is effective; it is a controlled trial because it compares the progress made by those children taking the programme or intervention with a comparison or control group of children who do not and who continue as normal; and it is randomised because the children have been randomly allocated to the groups being compared' (p. 4).
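To make the randomisation part concrete, here is a minimal sketch (in Python, with made-up pupil identifiers) of how children might be allocated to groups purely by chance. It is illustrative only, not a description of how any particular trial was run.

```python
# A minimal sketch of random allocation in an RCT, using made-up pupil IDs.
import random

pupils = [f"pupil_{i:02d}" for i in range(1, 21)]  # 20 hypothetical pupils

random.shuffle(pupils)                   # chance, not judgement, decides the groups
midpoint = len(pupils) // 2
intervention_group = pupils[:midpoint]   # take the programme being trialled
control_group = pupils[midpoint:]        # continue as normal

print("Intervention group:", intervention_group)
print("Control group:     ", control_group)
```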

What can go wrong with RCTs?

Unfortunately, lots can go wrong with RCTs. Ginsburg and Smith (2016) reviewed 27 RCTs that met the minimum standards of the US-based What Works Clearinghouse and found that 26 of the 27 RCTs had serious threats to their usefulness. These threats are listed below:
  • Developer associated. In 12 of the 27 RCT studies (44 percent), the authors had an association with the curriculum’s developer. 
  • Curriculum intervention not well-implemented. In 23 of 27 studies (85 percent), implementation fidelity was threatened because the RCT occurred in the first year of curriculum implementation. The NRC study warns that it may take up to three years to implement a substantially different curricular change.
  • Unknown comparison curricula. In 15 of 27 studies (56 percent), the comparison curricula are either never identified or outcomes are reported for a combined two or more comparison curricula. Without understanding the comparison’s characteristics, we cannot interpret the intervention’s effectiveness. 
  • Instructional time greater for treatment than for control group. In eight of nine studies for which the total time of the intervention was available, the treatment time differed substantially from that for the comparison group. In these studies, we cannot separate the effects of the intervention curriculum from the effects of the differences in the time spent by the treatment and control groups. 
  • Limited grade coverage. In 19 of 20 studies, a curriculum covering two or more grades does not have a longitudinal cohort and cannot measure cumulative effects across grades. 
  • Assessment favors content of the treatment. In 5 of 27 studies (19 percent), the assessment was designed by the curricula developer and likely is aligned in favor of the treatment.
  • Outdated curricula. In 19 of 27 studies (70 percent), the RCTs were carried out on outdated curricula (Ginsburg & Smith, 2016, p. ii).
So what are you to do?

Gorard, See and Siddiqui (2017), in their recently published book The Trials of Evidence-Based Education, suggest the following:

First, check whether there is a clear presentation of the research findings: are they presented simply and clearly, with all the relevant data provided?

Second, check whether the research uses effect sizes to present the scale of the findings. If significance testing is being used and p-values are being quoted, you may wish to pause for a moment. Remember, though, that effect sizes have their own problems (see http://evidencebasededucationalleadership.blogspot.co.uk/2017/01/the-school-research-lead-and-another.html).
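To show what an effect size actually is, the sketch below calculates Cohen's d, one commonly reported effect size, from two sets of invented post-test scores. The figures are fabricated purely to illustrate the arithmetic, and real reports may use other measures such as Hedges' g.

```python
# Cohen's d from invented post-test scores for an intervention and a control group.
from math import sqrt
from statistics import mean, stdev

intervention = [68, 72, 75, 71, 69, 74, 70, 73]
control      = [65, 67, 70, 66, 68, 64, 69, 66]

n1, n2 = len(intervention), len(control)
s1, s2 = stdev(intervention), stdev(control)

# Pooled standard deviation of the two groups
pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))

cohens_d = (mean(intervention) - mean(control)) / pooled_sd
print(f"Difference in means: {mean(intervention) - mean(control):.2f} marks")
print(f"Effect size (Cohen's d): {cohens_d:.2f}")
```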

Third, check where the research design sits in the hierarchy of designs for causal questions. At the top of the hierarchy are studies where participants are randomly allocated between groups; below that, studies where participants are matched between groups; below that, studies using naturally occurring groups; below that, studies of a single group using before-and-after data; and at the bottom of the hierarchy, case studies.
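Written down as a simple ordered structure (1 = strongest design for causal questions; the labels are my own shorthand, not Gorard et al.'s exact wording):

```python
# An illustrative ranking of research designs for causal questions (1 = strongest).
design_hierarchy = [
    "participants randomly allocated between groups (RCT)",
    "participants matched between groups",
    "naturally occurring comparison groups",
    "single group with before-and-after data",
    "case studies",
]

for rank, design in enumerate(design_hierarchy, start=1):
    print(f"{rank}. {design}")
```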

Fourth, check the scale of the study: for example, are at least 100 pupils involved?

Fifth, look out for missing information: how many subjects or participants dropped out of the study? As a rule of thumb, the higher the percentage of participants completing the research, the more trustworthy the findings. As Gorard et al. note, a study with 200 participants and a 100% completion rate is likely to be more trustworthy than a study with 300 participants and a 67% completion rate.
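To see how that rule of thumb plays out, here is a small sketch using the two hypothetical studies just mentioned. The point is that dropout introduces an unknown bias, because the participants who leave may differ systematically from those who stay.

```python
# Comparing two hypothetical studies on completion, using the figures above.
studies = {
    "Study A": {"recruited": 200, "completion_rate": 1.00},
    "Study B": {"recruited": 300, "completion_rate": 0.67},
}

for name, s in studies.items():
    completers = round(s["recruited"] * s["completion_rate"])
    dropouts = s["recruited"] - completers
    # Study B ends up with slightly more completers, but its 99 unexplained
    # dropouts make its findings harder to trust than Study A's.
    print(f"{name}: {completers} completed, {dropouts} dropped out "
          f"({s['completion_rate']:.0%} completion)")
```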

Sixth, check the data quality. Standardised tests provide higher-quality data than, say, questionnaire data, with impressionistic data providing the weakest evidence for causal questions. Make sure the outcomes being studied were specified in advance, and consider whether there are likely to be errors in the data caused by inaccuracy or missing values.

And finally

It's quite easy to be intimidated by quantitative research studies, but if you keep it simple, the key questions are: are effect sizes being used; are subjects randomly allocated between the control and the intervention groups; is there little or no missing data; are standardised measures of assessment used; and are the evaluators clearly separate from the implementers? If the answer to all of these questions is yes, then you can have a reasonable expectation that the research findings are trustworthy.
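Purely as an aide-memoire, those questions can be sketched as a simple checklist function. The question names and the yes/no scoring below are my own illustration, not a formal instrument from Gorard et al.

```python
# An illustrative aide-memoire for the trustworthiness questions above.
CHECKLIST = [
    "effect_sizes_reported",
    "random_allocation_to_groups",
    "little_or_no_missing_data",
    "standardised_outcome_measures",
    "evaluators_independent_of_implementers",
]

def trustworthiness_check(study):
    """Return the checklist questions a study passes and the ones it fails."""
    passed = [q for q in CHECKLIST if study.get(q, False)]
    failed = [q for q in CHECKLIST if q not in passed]
    return passed, failed

# A hypothetical study description
example_study = {
    "effect_sizes_reported": True,
    "random_allocation_to_groups": True,
    "little_or_no_missing_data": False,
    "standardised_outcome_measures": True,
    "evaluators_independent_of_implementers": True,
}

passed, failed = trustworthiness_check(example_study)
print(f"Passed {len(passed)} of {len(CHECKLIST)} checks; remaining concerns: {failed}")
```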

Further reading

Connolly, P., Biggart, A., Miller, S., O'Hare, L., & Thurston, A. (2017). Using Randomised Controlled Trials in Education. London: SAGE.

Ginsburg, A., & Smith, M. S. (2016). Do Randomized Controlled Trials Meet the “Gold Standard”? American Enterprise Institute. Retrieved 18 March 2016.

Gorard, S., See, B., & Siddiqui, N. (2017). The trials of evidence-based education. London: Routledge.