Sunday, 31 January 2016

The School Research Lead - Making the most of research evidence and school data - an integrated approach

If you are a school research lead then this post is for you.  In particular, if you are interested in using evidence - be it research, school data, stakeholder views or practitioner expertise - within your school, then this post will help you understand the different stages of the evidence-informed inquiry process.  To help with this, I will be drawing upon the work of Chris Brown (UCL Institute of Education), Kim Schildkamp and Mireille D. Hubers (both of the University of Twente) and their presentation COMBINING THE BEST OF TWO WORLDS: INTEGRATING DATA-USE WITH RESEARCH INFORMED TEACHING FOR SCHOOL IMPROVEMENT, given at the International Congress for School Effectiveness and Improvement conference held earlier this year in Glasgow.  The rest of this post will: first, summarise the thesis underpinning the development of Brown et al's model; second, look in more detail at the proposed cycle of evidence-based inquiry; third, identify a number of potential limitations/design flaws within the proposed cycle; and fourth, put forward an alternative model based upon evidence-based practice.

The underpinning thesis 

The main 'thesis' underpinning Brown et al's presentation was a call to integrate data-based decision-making (DBDM) and research-informed teaching practice (RITP) into a comprehensive, professional-learning-based approach designed to enhance teaching quality and, as a result, increase student achievement.  As such, the model attempts to use the strengths of each approach to offset the weaknesses of the other.  Brown et al summarise the offsetting benefits of each approach as follows:
  • RITP is not based on a real need in the field, whereas DBDM starts with the vision and goals of a specific school, focusing on a context-specific problem. 
  • With DBDM, data can inform educators about problems in their school, but not about what causes them, whereas with RITP educators can draw upon a variety of effective approaches to school improvement. 
  • RITP suffers from the 'one size does not fit all' problem, whereas with DBDM schools develop a context-specific solution based on their data. 
  • With DBDM, data can be used to pinpoint possible causes of a problem, but educators may still not know the best available course for school improvement, whereas RITP involves picking a promising solution based on an existing evidence base (slide 23). 
An eight-stage cycle of evidence-based inquiry

Brown et al then go on to integrate DBDM and RITP into the following eight-stage cycle.

[Figure: Brown et al's eight-stage cycle of evidence-based inquiry]

On initial reflection this model provides an incredibly useful way of thinking about the relationship between data and research evidence within the inquiry cycle, with the local data setting the scene for the necessary research evidence.  Indeed, one of the things I have been thinking about lately is where 'local data-collection' fits within the models of evidence-based practice and evidence-based medicine which I have previously described and advocated.  And this model clearly places local data prior to seeking the research evidence.

Potential limitations

However, for me, the model has two limitations which need to be taken into account: first, the determination of causes coming before the collection of data; second, the potential for increasing the risk of confirmation bias.  So let's now examine each of these limitations in more detail.

Causes before data or vice versa

The eight-stage model presented places the stage of determining possible causes of the problem before collecting data about causes, which to me seems to be out of sequence and the reverse of what it should be.  Indeed, there are other models of data-led inquiry where data collection comes before diagnosis - as illustrated in the four-stage model of inquiry popularised by Roger Fisher and William Ury in their classic book Getting to Yes.

[Figure: the four-stage model of inquiry - data, diagnosis, direction setting and action planning]

In this model the data stage focuses on identifying what's wrong and what the current symptoms are, and then moves to identifying possible causes in the diagnosis phase, with the next two phases being direction setting and action planning.  For me - and I know that when you use these cycles you move back and forth between the stages - it seems far more sensible to place data collection before the diagnosis phase.  In addition, and I thank Rob Briner for this observation, the proposed process seems to separate the use of research evidence from the diagnosis phase.  Yet why would you not use research evidence to help you identify possible causes of the symptoms being experienced?

The potential for cognitive bias

By placing both vision and goal setting next to determining causes, and the drawing of conclusions prior to the search for the solution in the research evidence, the model may invite confirmation bias, i.e. the tendency to selectively search for or interpret information in a way that confirms your perceptions or hypotheses.  Determining the vision and goals first may lead to certain problems being highlighted because they are consistent with the already established vision, whereas other problems or data inconsistent with the 'vision' may be ignored.  Furthermore, given that conclusions are drawn before searching for research evidence, this may lead to research evidence being sought which is consistent with the conclusions that have already been drawn.

A possible alternative

Barends, Rousseau and Briner (2014) provide an extremely useful definition of evidence-based practice, which identifies a six-stage process in making decisions which are based upon evidence.  In this model the search for and acquiring of evidence happens at stage two, with research evidence, organisational data and stakeholder views being accessed.  This leads to the subsequent appraisal and aggregation of the evidence resulting in the incorporation of the evidence into the decision-making and action planning process.

[Figure: Barends, Rousseau and Briner's six-stage model of evidence-based practice]

However, this process does not fully incorporate all the elements included in the proposed eight-stage cycle of evidence-informed inquiry, i.e. vision and goal setting.  This 'omission' can be rectified, if so required, by the inclusion of another 'A' - Ambition, aims and goal setting - which relates to the vision, mission and values of the school, and which gives us a seven-stage cycle of evidence-based inquiry. 

A seven-stage cycle of evidence-informed inquiry

[Figure: the seven A's cycle of evidence-informed inquiry]

Some final words

Brown et al make a powerful case for combining DBDM and RITP and have developed a potentially useful initial synthesis of the two processes.  However, the proposed model has some inherent limitations, be it diagnosis taking place before data collection or the increased possibility of confirmation bias.  Finally, an alternative model - the seven A's of evidence-informed inquiry - is put forward as a way to think about the process of linking DBDM and RITP.

Sunday, 24 January 2016

Uplifting Leadership - Transforming schools in tough times

If you are a senior, middle or aspiring leader within a school or college, then this post is for you.  In particular, if you work in a school where you are having to do more with less; where pupil achievement is not all that it could be; or where teacher morale is poor, then this post may provide some guidance on how to transform what you do and at the same time stay resilient in the face of difficult times.  In doing so, I will be using Andy Hargreaves, Alan Boyle and Alma Harris's 2014 book Uplifting Leadership: How organisations, teams, and communities raise performance, which draws upon research in eighteen organisations and systems - in business, sports and public education - and identifies what these organisations did to dramatically improve their performance, often against overwhelming odds.  For Hargreaves et al, in the end it all came down to one word: uplift. 

What do we mean by Uplift?

Hargreaves et al define uplift within organisations as .... the force that raises our performance, our spirits, and our communities to attain higher purposes and reach unexpected levels of achievement (p1)

As such uplift is about emotional and spiritual engagements, social and moral justice and higher levels of performance, both in work and life.  Let's now briefly look at each in turn.

Emotional and Spiritual Uplift .... the beating heart of effective leaders.  It raises people's hopes, stirs up their passions, and stimulates their intellect and imagination.  It inspires them to try harder, transform what they do, reach for higher purposes, and be resilient when opposing forces threaten to defeat them. Uplifting leadership makes spirits soar and pulses quicken in a collective quest to achieve a greater good for everyone because we feel drawn to a higher place as well as to the people around us as we strive to reach it (p3)

Social and Community Uplift .... creating a collective force which is designed to raise everyone's opportunities for achievement and success - with particular reference to those in our communities who are the least advantaged.

Uplifting Performance .... raises performance by creating spiritual, emotional, and moral uplift throughout an organisation and the wider community that it influences (p4)

What does Uplifting Leadership involve?

Uplifting leadership involves six inter-related factors which come together to bring about transformations within organisations and these include:
  1. Dreaming with Determination - this involves identifying and articulating a clear, challenging destination, informed by a moral imperative.  Furthermore, this dream is firmly connected with the organisation's past, building upon the very best of what that organisation has been. 
  2. Creativity and Counter-Flow - this requires creating the new pathways necessary to reach the desired 'dream'.  However, it also goes against the flow - in that it is not about following the predictable, it involves the counterintuitive - things that don't seem to make sense or that others may already have rejected.
  3. Collaboration with Competition - uplifting leadership is at times a counter-intuitive process, and at times this will require working alongside current or future competitors.  Competition and collaboration are not mutually exclusive, and it is possible for both to co-exist within the same context. 
  4. Pushing and Pulling - this necessitates using the power of the group to both push and pull things forward.  Colleagues faced with difficulties are picked up and supported by others, whilst the higher purpose to which team members are committed pushes them on to higher levels of achievement.
  5. Measuring with Meaning - this involves the extensive use of data, which allows leaders to identify the direction in which the organisation is heading and what still needs to be done, yet in a way which is both meaningful to and owned by the people who work in the organisation.
  6. Sustainable Success - this involves working at a pace that is sustainable.  It's not about leading at a pace which people cannot sustain for any substantive period of time.  It's about recognising the ebb and flow of energy within an organisation and making sure that it is managed in such a way as to bring about years and years of continuous improvement and development.
What's the evidence base to support Uplifting Leadership?

The evidence base which informed Uplifting Leadership was drawn from a number of interconnected projects.  First, a project conducted between 2007 and 2010 looked into public and private sector organisations which performed beyond expectations.  Over 200 in-depth interviews were conducted across 18 project sites (five in business, four in sports, and nine in English education).  Within-case and cross-case analysis was used to identify underlying themes.  Within the original 18 cases, fifteen factors were identified which seemed to explain performance beyond expectation.  In order to reach a broader audience, an additional sports team was included in the analysis alongside two educational cases based in Canada and Singapore.  Other work being undertaken by the authors was also drawn upon.  This work included research on successful turnarounds, reform of special education, and a case study of a London borough.  As a result of combining these research efforts, the analytical framework was reduced from fifteen to six factors.  Additional cases from secondary sources were also drawn upon.

What does this mean for me, a school or college leader?

Hargreaves et al identify a whole range of practical actions that school leaders and their colleagues can take to be uplifting leaders: become great storytellers; surprise yourself; benchmark relentlessly; avoid cliques and elites; measure what you value; invest for the long term.  Nevertheless, when all is said and done - and as Hargreaves et al argue - uplifting leadership begins with the self, and they go on to cite Manfred Kets de Vries, who argues 'if we want things to be different, we must start by being different ourselves' (Hargreaves et al, p161).  In other words, if we want to lead others, we'd better start by leading ourselves.

Saturday, 16 January 2016

New - Why certain practices reach the classroom and others don't

If you are interested in how educational research and ideas reach the classroom, then this post is for you.  Based on Jack Schneider's 2014 book From the ivory tower to the school house, this post seeks to explain why some educational research and ideas become common practice in schools, and why others do not.  Schneider identifies four attributes - perceived significance, philosophical compatibility, occupational realism, and transportability - that educational research must have ... if teachers are to notice, accept, use and share (Schneider, p7).  Finally, I will consider the implications of Schneider's analysis for the implementation of evidence-informed practice.

However, before I look into these four attributes in more detail, I will quickly recount Schneider's summary of the arguments as to why much educational scholarship has little impact on the practices of teachers and pupil learning.  First, Schneider argues there is little or no 'practice ready scholarship', with academics writing for other academics and researchers rather than for teachers.  Second, teachers are antagonistic to new ideas: teachers have worked to make the profession comfortable for themselves and are determined to protect the status quo.  However, Schneider argues that the real reason why there is a separation between academic scholarship and teaching practice is the separation of the capacities and influence required to shift research into the classroom.  Teachers may be able to influence what goes on in the classroom, but they normally lack the capacity and capability to engage with academic research.  Policymakers, meanwhile, may be well positioned to connect with the 'educational HE academy', but they are not best positioned to directly influence the classroom.

Nevertheless, Schneider argues that educational research, ideas and scholarship can move from the ivory tower/academy to the classroom if the research/idea possesses a set of specific attributes.  So what are these attributes?
  1. Perceived significance - the research or idea is relevant to an issue that matters to teachers and there would appear to be some evidence to support the idea.  On the one hand, the research is signalling that it matters to the profession and schools, and on the other the research would appear to have an evidential justification and/or is backed by an educational authority.
  2. Philosophical compatibility - is the research compatible with the common values, interests and concerns of teachers and headteachers? 
  3. Occupational realism - is the idea practical, and can it be put into immediate use in the classroom the very next day, or does it require a significant physical investment in the school? 
  4. Transportability - how easily can the idea move from research into practice and from teacher to teacher?  Is it an idea that does not require some form of extensive training, or can it be picked up in a 30-minute CPD session?  Alternatively, can the idea be transmitted by social media - for example, Twitter and the Twitterati?
Schneider argues that the possession of these four attributes is a necessary but not sufficient condition for research or an idea to move from the ivory tower to the classroom, and illustrates this with reference to a number of innovations - the taxonomy for the affective domain, Sternberg's triarchic theory, Wittrock's generative learning model and the behaviour analysis model - which have not made the jump from the academy to the classroom.  Furthermore, just because an educational idea or piece of research has these four attributes does not in itself guarantee that it has merit or worth, examples of this being the popularity of Brain Gym and Learning Styles.

Applying the four attributes to the development of evidence-informed practice within schools.

If we use these four attributes as a checklist to help us identify the chances of 'success' for the evidence-informed practice movement within schools, then it could be argued that evidence-informed practice still has a long way to go before it can be said to have all four attributes.  Indeed, it could be argued that at best evidence-informed practice currently has only one of the attributes necessary for success, i.e. transportability - witness the success of researchED, which has used Twitter as a main form of communication.  On the other hand, evidence-informed practice - which for some means research-informed practice - is seen by many as occupationally unrealistic, due to a lack of access to research journals.  For others, an evidence-informed approach is philosophically incompatible with teacher autonomy and their view of the relationship between evidence and practice.  Finally, given that much research is seen as irrelevant to schools and as not addressing real school-based problems, research is perceived to be insignificant to the immediate needs of pupils, teachers and schools.   

The above analysis would suggest that the success of the evidence-informed practice movement is by no means certain, and if anything the movement is more likely to fail than succeed.  

If you believe in evidence-informed practice in schools - what is to be done?

In some ways the answer is quite simple: supporters of evidence-informed practice within schools need to improve the extent to which evidence-informed practice exhibits these characteristics.  This could be done in the following ways:
  • Philosophical Compatibility - ensure absolute conceptual clarity, so as not to confuse evidence-informed practice with research-based practice.  This can be done by proponents of evidence-informed practice continually articulating the role of practitioner expertise, just as was done in evidence-based medicine, with Sackett et al defining evidence-based medicine as the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
  • Occupational Realism - emphasise that evidence-informed practice can be put into immediate effect within schools by supporting colleagues to develop well-formulated and answerable questions.  It's not just about accessing journals and research; it also involves teachers accessing and using school data, or even just talking to pupils about their perceptions of their learning experience.  Stakeholder views and values are an essential component of evidence-informed practice.  
As for perceived significance - this is where advocates of evidence-informed practice will need to be both patient and open-minded, as the 'jury' is still out as to the impact of evidence-informed practice on pupil outcomes.  In the specific context of schools, although not comprehensively or systematically established, there are numerous reported benefits to practitioners engaging in evidence-informed practice.  However, to help address this lack of academic evidence there are a number of EEF-funded studies - for example, The Rise Project: Evidence-Informed School Improvement; Research into Practice: Evidence Informed CPD; Research Learning Communities; and Evidence for the Frontline - which are considering the impact of evidence-informed practices on teacher development and pupil outcomes.  These projects will begin to report from 2016 onwards.

To conclude:

This is a 'very quick and dirty analysis' of the evidence-informed practice movement.  However, what it does make abundantly clear is that there is still much to be done to ensure evidence informed practice is perceived to be significant, is philosophically compatible with teacher values and beliefs, and occupationally realistic.  Even if this is done, school leaders will need to ensure that teachers have the capacity and capability to implement evidence-informed practice within their schools.

Saturday, 9 January 2016

Why the delusional voodoo of grading lesson observations is not going to stop any time soon

Despite the impassioned words of headteachers such as Tom Sherrington and John Tomsett on the reliability and validity of graded lesson observations, they are still being used in around 50 per cent of schools.  Drawing upon the pioneering work of Pawson and Tilley (1997), I will use two core concepts from realist evaluation - 'programme theory' and 'mechanisms' - to help explain why, despite evidence that lesson observations should only be used for low-stakes development purposes, some headteachers and senior leadership teams still wish to continue with the practice of grading lessons.

What do we mean by a ‘programme theory’?

Any intervention - in this case graded lesson observations - introduced or used by headteachers and senior leaders should be informed by some form of programme theory, i.e. a statement along the lines of 'if we do X then this will deliver change Y'.  In other words, a programme theory is simply an 'if... then...' statement.  So headteachers and senior leadership teams who wish to continue to use graded lesson observations within their school may be being guided by a programme theory which goes along the following lines: 'If we use graded lesson observations this will then deliver an increase in the quality of teaching and learning within the school.'

Given the diverse nature of headteachers and senior leadership teams, it is unlikely that there is only one programme theory being 'used' in schools which are continuing to use graded lesson observations.  Other programme theories which could be in use include:

'If we use graded lesson observations this will lead to greater control over teachers and how well they perform.'

‘If we use graded lesson observations this will then lead to fewer poorly performing teachers within the school.’

"If we use graded lesson observations then we will be able to measure the quality of teaching and learning within the school.'

Of course, this is not a comprehensive list of programme theories and I'm sure you will be able to think of programme theories which include either OfSTED or governing bodies.

In the context of 'realist evaluation', what do we mean by a 'programme mechanism'?

The important thing to remember is that interventions do not in themselves work; they work because people make them work. In other words, an intervention provides an opportunity or constraint X, and this prompts me to do Z.  As Wong et al (2013) explain, interventions work by changing the decision-making of the subjects of an intervention. (Now, I know describing teachers as 'subjects' of an intervention may be troubling for some readers, but stay with me.)  With this in mind, the question becomes: how does the use of graded lesson observations influence the thinking and decision-making of teachers?

What programme mechanisms might be considered to be active when deciding whether to implement or continue to use graded lesson observations?

Drawing upon the work of Pawson and Tilley (1997) on the use of CCTV systems in car parks to reduce car crime (which is scarily useful here), these are some of the mechanisms headteachers and senior leadership teams may be seeking to influence through the use of graded lesson observations.

The 'gotcha mechanism' - graded lesson observations could reduce the number of 'inadequate/requires improvement' lessons, as they make it more likely that persistently 'poor' teachers are identified and that competency and capability procedures are used to ensure the 'poor' teacher leaves the school.

The 'you've been framed mechanism' - graded lesson observations could reduce the number of inadequate lessons by encouraging teachers to raise their game on a day-to-day basis, so as to avoid being graded 'requires improvement/inadequate'.

The 'effective deployment mechanism' - graded lesson observations may facilitate the effective deployment of professional development to support those teachers who have been identified as requiring support in order to improve teaching and learning.

The 'publicity mechanism' - graded lesson observations may indicate to teaching staff that the SLT takes seriously the improvement of teaching and learning. Teachers may then decide to 'up their game' in order to avoid being judged 'requires improvement/inadequate'.

The 'memory jogging mechanism' - notices in the staffroom about graded lesson observation protocols may remind teachers of their 'vulnerability' and prompt them to take greater care that their lessons meet the required standard.

And I'm sure there are others you can think of such as:

The 'get out before they catch me mechanism' - graded lesson observations may encourage some teachers who think they are weak practitioners to leave before they are deemed inadequate.

The 'compliance mechanism' - graded lesson observations may demonstrate who is 'in charge' within the school and encourage teachers to comply with the desired model of teaching, learning and assessment.

What are the implications of the above analysis for the use of graded lesson observations in schools?

First, if we want to understand why headteachers and senior leaders may be extremely reluctant to give up on an established intervention, then we need to take into account both the programme theories they hold and the causal mechanisms they are seeking to influence. If the reliability and validity of graded lesson observations is irrelevant to either the programme theory or the causal mechanism, then the 'evidence' is unlikely to have much impact.

Second, proponents of non-graded observation may need to adopt a slightly different tack when disagreeing with colleagues. In the first instance we need to understand colleagues' 'programme theories' and views of 'causal mechanisms'. Relying on evidence may not be sufficient to bring about changes in those 'programme theories' or 'causal mechanisms'. Daniel Dennett, in his 2013 book Intuition Pumps And Other Tools for Thinking, suggests that at times of disagreement we should use Rapoport's Rules:

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2. You should list any points of agreement (especially if they are not matters of general and widespread agreement).

3. You should mention anything you have learned from your target.

4. Only then are you permitted to say so much as a word of rebuttal or criticism.

And finally

Any intervention - be it graded or ungraded lesson observations - must be seen within the context in which it is applied; in other words, there is no such thing as a context-free intervention. Indeed, context is an essential component of 'realistic evaluation'. So in the case of schools, how an intervention such as graded lesson observations plays out will depend upon: the relationships between staff; the values, beliefs and culture of the school; the impact upon the school of external shocks - be it funding or an Ofsted inspection; and the stance that senior leadership take on the use of external evidence. Remember, interventions - be they graded or ungraded lesson observations - do not in themselves work; they work because teachers and headteachers make them work.