Friday 29 June 2018

NEW: Time-saving techniques for the evidence-based practitioner

This week's post is slightly different from normal: it is a sketch done by Oliver Caviglioli @olivercaviglioli of a session I did at the Hampshire Collegiate School's Teaching and Learning Conference, held on Friday 29 June.



The full presentation can be found using the link 



Friday 22 June 2018

The school research lead, confirmation bias and unknown unknowns

A few weeks ago I had the privilege of attending Professor Chris Brown's @ChrisBrown1475 inaugural lecture at Portsmouth University.  One of the great things about attending such events is that you get the chance to talk to some very interesting people, for example, Ruth Luzmore, a primary headteacher at an inner-London all-through school.  My conversations with Ruth - which have since continued online - turned to known unknowns and unknown unknowns, and led me to revisit some work I had done on how to discover the things that we don't know we don't know.  For as US Secretary of Defense Donald Rumsfeld famously, or should I say infamously, said:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns - the ones we don't know we don't know.

Feduzi and Runde (2014) argue that we are particularly bad at looking for things beyond what we already know, in that we are prone to confirmation bias, i.e. we look for evidence that will confirm what we already know.  In addition, we tend to be too conservative in our predictions of likely outcomes, which can lead to a clustering of predictions that are over-optimistic.

In order to address these problems associated with cognitive bias, Feduzi and Runde (2014) have put forward a technique which seeks to expand the number of possible scenarios and, at the same time, look for evidence to support those scenarios.  The process is made up of several steps:
  • Think of the three main scenarios you can envisage being the outcome of a decision – things improve, things stay pretty much the same, or things get worse
  • Place those three outcomes on a favourability scale
  • Now try and imagine the worst possible scenario, where things don't just get worse but collapse entirely - in other words, a scenario which is 'completely off the scale'
  • Having imagined this scenario, try and find evidence that might make this worst possible scenario a possibility
  • Then try and imagine a scenario where success is beyond your wildest dreams, and go and search for evidence that would make that scenario a possibility.
Feduzi and Runde argue that by doing this you are likely to discover information that you did not previously know about, and in doing so, you will have uncovered some unknown unknowns.  This uncovering of unknown unknowns is the product of seeking evidence that confirms alternatives, rather than seeking out evidence which rules alternatives out.  Searching for information that confirms alternatives provides a counter-point to the confirmation bias associated with trying to disprove hypotheses.
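To make the exercise a little more concrete, here is a minimal sketch in Python of how the five scenarios and the evidence-gathering step might be recorded. It is my own illustration, with invented scenario labels and example evidence, rather than anything taken from Feduzi and Runde's paper.

```python
# A minimal sketch (my own construction, not Feduzi & Runde's) of the scenario
# exercise as something you could work through with colleagues.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    name: str
    favourability: int                                      # position on a rough favourability scale
    evidence_for: List[str] = field(default_factory=list)   # evidence that would make it possible

scenarios = [
    Scenario("Things get worse", -1),
    Scenario("Things stay much the same", 0),
    Scenario("Things improve", +1),
    Scenario("Off-the-scale failure", -2),   # the imagined worst possible case
    Scenario("Off-the-scale success", +2),   # success beyond your wildest dreams
]

# The crucial step: go looking for evidence that would make the extreme
# scenarios possible, rather than evidence that rules them out.
scenarios[3].evidence_for.append("Two experienced staff have hinted they may leave mid-year")
scenarios[4].evidence_for.append("A neighbouring school saw big gains with a similar approach")

for s in sorted(scenarios, key=lambda s: s.favourability):
    print(f"{s.favourability:+d}  {s.name}: {len(s.evidence_for)} piece(s) of evidence found")
```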

So what are the implications for you, the school research lead?
  • Recognise that there are things that you don't know that you don't know
  • When thinking through scenarios - it's important to engage a range of individuals - so that the constituent components of either total success or failure can be explored.
  • Try and use a range of techniques that can help you mitigate the impact of cognitive biases, be they premortems or decision checklists
And finally, none of the material I've referred to in this post has its origins in education.  Maybe if we spent a little more time looking at other fields - be it knowledge management, improvement science or implementation science - we could get to know a lot more about what others already know.




Thursday 14 June 2018

What to do when faced with a ‘tsunami’ of expert-opinion

As we approach the end of the academic year the educational conference season appears to be in full swing, with attendees and delegates being faced with a 'tsunami' of expert opinion; last weekend, for example, saw ResearchED Rugby.  Indeed, over the next few weeks I will be making my own contribution to that 'tsunami' by speaking at the Festival of Education at Wellington College, ResearchSEND at the University of Wolverhampton and the Hampshire Collegiate School's Teaching and Learning Conference.  This got me thinking about 'expert opinion' and under what circumstances the opinions expressed by so-called expert speakers at conferences should be accepted.  By speaking at a conference I am asking colleagues – if not to accept my so-called 'expert' opinion – to spend some of their precious time thinking about what I have to say.  On the other hand, I will also be listening to speakers at these conferences, so the question I have to ask myself – particularly if I don't know much about the speaker or the speaker's subject – is: under what circumstances should I accept their expert opinion?

Accepting expert opinion

Over recent weeks I have been exploring the role of research evidence in practical reasoning and, in doing so, I have come across the work of Hitchcock (2005), who cites the work of Ennis (1962, pp. 196-197).  Ennis identified seven tests of expert opinion:

1.6.1) The opinion in question must belong to some subject matter in which there is expertise. An opinion can belong to an area of expertise even if the expertise is not based on formal education; there are experts on baseball and on stamps, for example.

1.6.2) The author of the opinion must have the relevant expertise. It is important to be on guard against the fallacy of ‘expert fixation’, accepting someone’s opinion because that person is an expert, when the expertise is irrelevant to the opinion expressed.

1.6.3) The author must use the expertise in arriving at the opinion. The relevant data must have been collected, interpreted, and processed using professional knowledge and skills.

1.6.4) The author must exercise care in applying the expertise and in formulating the expert opinion.

1.6.5) The author ideally should not have a conflict of interest that could influence, consciously or unconsciously, the formulated opinion. For example, the acceptance of gifts from the sales representative of a pharmaceutical company can make a physician’s prescription of that company’s drug more suspect.

1.6.6) The opinion should not conflict with the opinion of other qualified experts. If experts disagree, further probing is required.

1.6.7) The opinion should not conflict with other justified information. If an expert opinion does not fit with what the reasoner otherwise knows, one should scrutinize its credentials carefully and perhaps get a second opinion.
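If it helps, the seven tests can be treated as a simple checklist to run against any conference talk you hear. The sketch below is a minimal illustration in Python, using my own paraphrase of the tests rather than Ennis's or Hitchcock's wording, and an invented example of how it might be used.

```python
# A minimal sketch (my own paraphrase, not Ennis's or Hitchcock's wording) of
# the seven tests as a checklist to run against a conference talk.
ENNIS_TESTS = [
    "The opinion belongs to a subject matter in which there is expertise",
    "The speaker has the relevant expertise (not just expertise of some kind)",
    "The speaker actually used that expertise in arriving at the opinion",
    "Care was exercised in applying the expertise and formulating the opinion",
    "There is no conflict of interest influencing the opinion",
    "The opinion does not conflict with that of other qualified experts",
    "The opinion does not conflict with other justified information",
]

def appraise(answers):
    """answers: one True/False per test, in the order listed above."""
    failed = [test for test, ok in zip(ENNIS_TESTS, answers) if not ok]
    if not failed:
        return "No red flags - provisionally accept, but keep reading around the topic."
    return "Further probing required:\n- " + "\n- ".join(failed)

# Invented example: a speaker with relevant expertise but an undeclared commercial interest
print(appraise([True, True, True, True, False, True, True]))
```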

Accepting expert opinion at conferences

So what are the implications of Ennis's seven tests of expert opinion for both giving and receiving it?  Well, if you are an attendee at a conference listening to so-called experts, it seems to me that you should be asking the following questions:
  • Is the speaker talking about a subject they have 'expertise' in, or are they speaking because they are deemed to be an 'expert of some kind'?  You may also want to make the distinction between experience and expertise, as the two should not be conflated. In other words, just because someone has experience of doing something does not automatically make them an expert in that subject.
  • Does the speaker make it clear that there are limitations to what they are proposing or putting forward?  If they don't, then that's a real 'red flag', as very little in education is without limitations or weaknesses.
  • In all likelihood there will be alternative perspectives on the speaker's topic, so what they say will not be the last word on the matter.  You'll certainly have to do some further reading or investigation before bringing it back to your school as a solid proposal.  Does the presenter make suggestions about where to look?
  • What are the speaker's 'interests' in putting forward this point of view, be they reputational, financial or professional?
  • Just because an expert's view disagrees with your own experience, that does not invalidate your experience – it just means you need to get a second or third opinion.
  • Does the speaker present a clear argument – do they lay out clearly the components of an argument, be it the data, claim, warrant and supporting evidence?
  • Is the speaker more concerned with 'entertaining' you with flashy slides than with helping you think through the relevant issues for yourself?
What do Ennis's seven tests mean for 'experts' making presentations?
  • Presenters need to be humble, acknowledge the limits of their expertise and be wary of projecting 'false certainty' in the strength of their arguments
  • Make sure their slides, or whatever format they have used for their presentation, specifically mention the limitations of their argument
  • Include a list of references backing the counter-arguments mentioned in the previous point
  • Declare any 'conflicts of interest' that they may have which might influence their presentation
  • Think long and hard about getting the balance right between providing an 'education' and being 'entertaining'.
And finally


Attendance at a conference is often a great day out away from the hassle of a normal working day.  Ironically, if a conference does not lead to a substantive additional workload in terms of further reading and inquiry, then attendance will have been an entertaining and pleasant day out – but that's all it will have been.

PS 
If you see and hear me speak at a conference and I don't live up to these principles - please let me know.

Wednesday 6 June 2018

Guest post - Meta-analysis: Magic or Reality, by Professor Adrian Simpson

Recently I had the good fortune to have an article published in the latest edition of the Chartered College of Teaching's journal Impact, in which I briefly discussed the merits and demerits of meta-analyses (Jones, 2018).  In that article I leant heavily on the work of Adrian Simpson (2017), who raises a number of technical arguments against the use of meta-analysis.  However, since then a blog post written by Kay, Higgins and Vaughan (2018) has been published on the Evidence for Learning website, which seeks to address the issues raised in Simpson's original article about the inherent problems associated with meta-analyses.  In this post Adrian Simpson responds to the counter-arguments raised on the Evidence for Learning website.

Magic or reality: your choice, by Professor Adrian Simpson, Durham University

There are many comic collective nouns whose humour contains a grain of truth. My favourites include "a greed of lawyers", "a tun of brewers" and, appropriately here, "a disputation of academics". Disagreement is the lifeblood of academia and an essential component of intellectual advancement, even if that is annoying for those looking to academics for advice. 

Kay, Higgins and Vaughan (2018, hereafter KHV) recently published a blog post attempting to defend using effect size to compare the effectiveness of educational interventions, responding to critiques (Simpson, 2017; Lovell, 2018a). Some of KHV is easily dismissed as factually incorrect: for example, Gene Glass did not create effect size – Jacob Cohen wrote about it in the early 1960s; and the toolkit methodology is not applied consistently – at least one strand [setting and streaming] is based only on results for low attainers while other strands are not similarly restricted (quite apart from most studies in the strand being about within-class grouping!).

However, this response to KHV is not about extending the chain of point and counter-point, but about asking teachers and policy makers to check the arguments for themselves: decisions about using precious educational resources need to lie with you, not with feuding faculty. The faculty need to state their arguments as clearly as possible, but readers need to check them: if I appeal to a simulation to illustrate the impact of range restriction on effect size (which I do in Simpson, 2017), can you repeat it – does it support the argument? If KHV claim the EEF Teaching and Learning Toolkit uses 'padlock ratings' to address the concern about comparing and combining effect sizes from studies with different control treatments, read the padlock rating criteria – do they discuss equal control treatments anywhere? Dig down and choose a few studies that underpin the Toolkit ratings – do the control groups in different studies receive the same treatment?
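If you want to try that kind of check yourself, here is a minimal sketch of a range-restriction simulation in Python – my own illustration with invented numbers, not the simulation reported in Simpson (2017). The intervention adds exactly the same five points to every treated pupil's score in both trials, yet the trial drawn from a narrow attainment band produces a noticeably larger effect size, simply because the spread of scores is smaller.

```python
# Illustrative range-restriction simulation (invented numbers, not Simpson 2017).
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(treated, control):
    """Standardised mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd

TRUE_BENEFIT = 5.0                                 # identical benefit in both trials
population = rng.normal(100, 15, size=200_000)     # prior attainment, full population

def run_trial(sample):
    half = len(sample) // 2
    control = sample[:half] + rng.normal(0, 5, size=half)
    treated = sample[half:] + TRUE_BENEFIT + rng.normal(0, 5, size=half)
    return cohens_d(treated, control)

# Trial 1: pupils drawn from the whole attainment range
full_sample = rng.choice(population, size=2_000, replace=False)
# Trial 2: pupils drawn only from a narrow attainment band (a restricted range)
restricted_pool = population[(population > 95) & (population < 105)]
restricted_sample = rng.choice(restricted_pool, size=2_000, replace=False)

print(f"full range:       d = {run_trial(full_sample):.2f}")
print(f"restricted range: d = {run_trial(restricted_sample):.2f}")
```

Re-running it with a different seed or a different restriction band changes the exact numbers but not the pattern: the same underlying benefit, a larger effect size in the restricted-range trial.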

So, in the remainder of this post, I invite you to test our arguments: are my analogies deceptive or helpful? Re-reading KHV’s post, do their points address the issues or are they spurious?

KHV’s definition of effect size shows it is a composite measure. The effectiveness of the intervention is one component, but so is the effectiveness of the control treatment, the spread of the sample of participants, the choice of measure etc. It is possible to use a composite measure as a proxy for one component factor, but only provided the ‘all other things equal’ assumption holds.

In the podcast I illustrated the ‘all other things equal’ assumption by analogy: when is the weight of a cat a proxy for its age? KHV didn’t like this, so I’ll use another: clearly the thickness of a plank of wood is a component of its cost, but when can the cost of a plank be a proxy for its thickness? I can reasonably conclude that one plank of wood is thicker than another plank on the basis of their relative cost only if all other components impinging on cost are equal (e.g. length, width, type of wood, timberyard’s pricing policy) and I can reasonably conclude that one timberyard on average produces thicker planks than another on the basis of relative average cost only if those other components are distributed equally at both timberyards. Without this strong assumption holding, drawing a conclusion about relative thickness on the basis of relative cost is a misleading category error.

In the same way, we can draw conclusions about relative effectiveness of interventions on the basis of relative effect size only with ‘all other things equal’; and we can compare average effect sizes as a proxy for comparing the average effectiveness of types of interventions only with ‘all other things equal’ in distribution.

So, when you are asked to conclude that one intervention is more effective than another because one study resulted in a larger effect size, check if ‘all other things equal’ holds (equal control treatment, equal spread of sample, equal measure and so on). If not, you should not draw the conclusion.
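To make the 'equal control treatment' point concrete, here is a small worked sketch with invented numbers: the intervention group performs identically in both hypothetical studies, yet the reported effect sizes differ by a factor of two purely because the control treatments differ.

```python
# Invented numbers, purely to illustrate the 'equal control treatment' point:
# the same intervention, measured against two different control treatments,
# produces two different effect sizes.
def cohens_d(mean_treated, mean_control, pooled_sd):
    return (mean_treated - mean_control) / pooled_sd

# Study A: control pupils receive 'business as usual' (mean outcome 50)
# Study B: control pupils receive an already effective alternative (mean outcome 55)
# The intervention group scores 60 in both studies, with the same spread (SD 15).
print(f"Study A: d = {cohens_d(60, 50, 15):.2f}")   # roughly 0.67
print(f"Study B: d = {cohens_d(60, 55, 15):.2f}")   # roughly 0.33
```

Averaging such effect sizes across studies, as a meta-analysis does, inherits exactly this problem unless the control treatments are equally distributed across the studies being combined.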

When the Teaching and Learning Toolkit invites you to draw the conclusion that the average intervention in one area is more effective than the average intervention in another because its average effect size is larger, check if ‘all other things equal’ holds for distributions of controls, samples and measures. If not, you should not draw the conclusion.

Don’t rely on disputatious dons: dig in to the detail of the studies and the meta-analyses. Does ‘feedback’ use proximal measures in the same proportion as ‘behavioural interventions’? Does ‘phonics’ use restricted ranges in the same proportion as ‘digital technologies’? Does ‘metacognition’ use the same measures as ‘parental engagement’? Is it true that the toolkit relies on ‘robust and appropriate comparison groups’, and would that anyway be enough to confirm the ‘all other things equal’ assumption?

KHV describe my work as ‘bad news’ because it destroys the magic of effect size. ‘Bad news’ may be a badge of honour to wear with the same ironic pride as decent journalists wear autocrats’ ‘fake news’ labels. However, I agree it can feel a little cruel to wipe away the enchantment of a magic show; one may think to oneself ‘isn’t it kinder to let them go on believing this is real, just for a little longer?’ However, educational policy making may be one of those times when we have to choose between rhetoric and reason, or between magic and reality. Check the arguments for yourself and make your own choice: are effect sizes a magical beginning of an evidence adventure, or a category error misdirecting teachers’ effort and resources?
  
References

Kay, J., Higgins, S. & Vaughan, T. (2018) The magic of meta-analysis, http://evidenceforlearning.org.au/news/the-magic-of-meta-analysis/ (accessed 28/5/2018)

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 32(4), 450-466.


Lovell, O. (2018a) ERRR #017. Adrian Simpson critiquing the meta-analysis, Education Research Reading Room Podcast, http://www.ollielovell.com/errr/adriansimpson/ (accessed 25/5/2018)

Friday 1 June 2018

Evidence-informed practice and the dentist's waiting room

Sometimes the inspiration for a blog post comes from an unexpected place, in this instance my dentist's waiting room.  Now I happen to be a regular visitor to my dentist because back in 2005 I had a 'myocardial infarction' - better known as a heart attack.  Given that at the time I appeared to be fit and active and had completed many triathlons, my heart attack was 'perplexing' both for me and the medical professionals providing my treatment.  However, to cut a very long story short, a contributory factor to my heart attack appeared to be a bad case of gum disease, which research evidence suggests is related to an increased risk of heart disease (Dhadse, Gattani and Mishra, 2010).  Which is why I was in my dentist's waiting room, about to have my teeth cleaned and gums 'gouged'.

Now you may be asking: what on earth does an 'evidence-based' trip to the dentist have to do with evidence-based or, if you prefer, evidence-informed practice within schools?  Well, it just so happened that while in the dentist's waiting room I was reading Hans Rosling's recently published book Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think, when I came across this paragraph about mistrust, fear and the inability to 'hear data-driven arguments'.

In a devastating example of critical thinking gone bad, highly educated, deeply caring parents avoid the vaccinations that would protect their children from killer diseases.  I love critical thinking and I admire scepticism, but only in a framework that respects evidence.  So if you are sceptical about the measles vaccination, I ask you to do two things.  First, make sure you know what it looks like when a child dies of measles.  Most children who catch measles recover, but there is still no cure and even with the best modern medicine, one or two in every thousand will die from it.  Second, ask yourself, "What kind of evidence would convince me to change my mind about vaccination?"  If the answer is "no evidence could ever change my mind about vaccination," then you are putting yourself outside evidence-based rationality, outside the very critical thinking that first brought you to this point.  In that case, to be consistent in your scepticism about science, next time you have an operation please ask your surgeon not to bother washing her hands.  (p. 117)

So what are the implications of Rosling et al.'s critique of critical thinking gone wrong for your role as a school leader wishing to promote the use of evidence within your school?  At first glance, it seems to me that there are three.

First, ask yourself, about an issue on which you have pretty strong views – be it mixed-ability teaching, grammar schools and the 11-plus, or progressive vs traditional education – the question: "What evidence would it take to change your mind?"  This is important, as a critical element of being a conscientious evidence-informed practitioner is actively seeking alternative perspectives.  And if you are not at least willing to be persuaded by those perspectives, there is little point seeking them out in the first place.

Second, when working with colleagues who may 'reject' evidence-informed practice, ask them the same question: "What evidence would it take to change your mind?"  If they respond "there is no evidence that would get me to change my mind", ask them the following question: "OK, is there a teaching approach you particularly favour, and if so, why?" and then follow up with "Tell me more."

Third, there may be occasions, when working with colleagues who are resistant to evidence-informed practice, that you have to resort to a variant of the 'surgeon with dirty hands' argument, so ask the following: "Would you like your own children, or the children of family members, to be taught by a teacher or teachers who:
  • Do not have a deep knowledge and understanding of the subjects they teach
  • Have little or no understanding about how pupils’ think about the subject they are teaching
  • Are not very good at asking questions
  • Do not review previous learning
  • Fail to provide model answers
  • Do not give adequate time for practice for pupils to embed their skills
  • Introduce topics in a random manner
  • Have poor relationships with their pupils
  • Have low expectations of their pupils
  • Do not value effort and resilience
  • Cannot manage pupil behaviour
  • Do not have clear rules and expectations
  • Make inefficient and ineffective use of time in lessons
  • Are not very clear in what they are trying to achieve with pupils
  • Haven’t really thought about how learning happens and develops or how teaching can contribute to it.
  • Give little or no time to reflecting on their professional practice
  • Provide little or no support for colleagues
  • Are not interested in liaising with pupils’ parents
  • Do not engage in professional development?" (amended from Coe, Aloisi, et al., 2014)

And if they answer "No – we would not want our children or family members taught by such teachers", then you might respond by saying: "You might not believe in evidence-informed practice, though you would appear to agree with the evidence on ineffective teaching."

And finally

Working with colleagues who have different views from you on the role of evidence-informed practice is inevitable.  What matters is not that you have different views but rather how you go about finding the areas you can agree on, which then gives you something to work on in future conversations.

References

Coe, R., Aloisi, C., Higgins, S. and Major, L. E. (2014). What Makes Great Teaching? Review of the Underpinning Research. London: Sutton Trust.
Dhadse, P., Gattani, D. and Mishra, R. (2010). 'The link between periodontal disease and cardiovascular disease: How far we have come in last two decades?' Journal of Indian Society of Periodontology, 14(3), 148-154.

Rosling, H., Rosling, O. and Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think. London: Sceptre.