Tuesday 23 December 2014

Maturity and the case of graded lesson observations - can the FE sector handle the truth?

Last Friday the TES published an article on graded lesson observations within the further education sector, in which Lorna Fitzjohn, OfSTED's national director for learning and skills, stated: “The big question that we’ve had from some leaders in the sector is: is the sector mature enough at this point in time not to have us (OfSTED) grading lesson observations?” In responding to this statement it seems sensible to ask the following questions:
  1. What does the research evidence imply about the validity and reliability of graded lesson observations?
  2. What is the best evidence-based advice on the use of graded lesson observations?
  3. What do the answers to the first two questions imply for the maturity of the leadership and management of the further education sector?
What does the research evidence imply about the validity and reliability of lesson observations?
      O’Leary (2014) undertook the largest ever study of lesson observation in the English education system, investigating the impact of lesson observations on lecturers working in the FE sector. Lecturers' views on graded lesson observations were summarised as:
Over four fifths (85.2%) disagreed that graded observations were the most effective method of assessing staff competence and performance. A similarly high level of disagreement was recorded in response to whether they were regarded as a reliable indicator of staff performance. However, the highest level of disagreement (over 88%) of all the questions in this section was the response to whether graded observations were considered the fairest way of assessing the competence and performance of staff. 
     Not only are there 'qualitative' doubts about the reliability and validity of lesson observations, there are also 'quantitative' objections to their use, particularly when untrained observers are involved. Strong et al. (2011) found that the correlation coefficient for untrained observers agreeing on a lesson observation grade was 0.24, which runs counter to sector leaders' advice to both principals and governors to 'trust your instincts.'
     Furthermore, Waldegrave and Simons (2014) cite Coe’s synthesis of a number of research studies, which raises serious questions about the validity and reliability of lesson observation grades. When comparing the value-added progress made by students with a lesson observation grade (validity), Coe states that in the best case there will be only 49% agreement between the two grades, and in the worst case only 37%. As for the reliability of grades, Coe’s synthesis suggests that in the best case there will be 61% agreement between two observers, and in the worst case only 45%.
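What a correlation of 0.24 between untrained observers actually means can be hard to picture. The following rough simulation is my own illustrative sketch, not Strong et al.'s method - the normal model and the grade cut-points are assumptions. It generates pairs of 1-4 grades whose underlying judgements correlate at 0.24, and counts how often the two observers agree exactly.

```python
import random

def to_grade(score):
    """Map a continuous quality score onto Ofsted-style grades 1-4
    (cut-points are assumed for illustration only)."""
    if score < -1:
        return 1
    if score < 0:
        return 2
    if score < 1:
        return 3
    return 4

def simulate_agreement(n_pairs=100_000, rho=0.24, seed=42):
    """Estimate how often two observers award the same grade when their
    underlying judgements correlate at rho (0.24 is the figure reported
    for untrained observers)."""
    rng = random.Random(seed)
    shared = rho ** 0.5        # weight on the lesson's common signal
    noise = (1 - rho) ** 0.5   # weight on each observer's private noise
    agree = 0
    for _ in range(n_pairs):
        lesson = rng.gauss(0, 1)  # the lesson's 'true' quality
        g1 = to_grade(shared * lesson + noise * rng.gauss(0, 1))
        g2 = to_grade(shared * lesson + noise * rng.gauss(0, 1))
        agree += (g1 == g2)
    return agree / n_pairs

print(f"Exact grade agreement at rho=0.24: {simulate_agreement():.0%}")
```

On this toy model, exact agreement comes out well under 50% - a long way short of what you would want before making a high-stakes judgement about an individual lecturer.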
      The above suggests that graded lesson observations provide an extremely shaky foundation on which to make judgements about the quality of teaching and learning within the further education sector. It is now appropriate to turn to the current best evidence-based advice on the use of graded lesson observations.

So what is the best evidence-based advice on the use of lesson observations?
     Earlier this year at the Durham University Leadership Conference, Rob Coe stated that the evidence suggests:
Judgements from lesson observation may be used for low-stakes interpretations (eg to advise on areas for improvement) if at least two observers independently observe a total of at least six lessons, provided those observers have been trained and quality assured by a rigorous process (2-3 days training & exam). High-stakes inference (eg Ofsted grading, competence) should not be based on lesson observation alone, no matter how it is done
      In other words, the current practice of a graded lesson observation system based on one or two observations per year - which is the case in the vast majority of colleges - is not going to provide sufficient evidence for low-stakes improvement, never mind high-stakes lecturer evaluation. That does not mean lesson observations should not take place, but, as Coe suggests, they need to take place alongside reviews of other evidence, such as student feedback, peer review and student achievement.
So what does the above mean for the leadership and management of the further education sector?
     First, there would appear to be a clear research-user gap, with sector leaders giving out advice which is inconsistent with the best available evidence. Second, if leaders and managers within the sector have to rely upon graded lesson observations to demonstrate and track improvement in teaching and learning, then the sector needs to begin to develop a wider and more sophisticated range of processes and measures for judging the quality of teaching and learning. Third, the reliance on graded lesson observation suggests a prevailing leadership and management culture founded on the assumption that improving performance requires the removal of poor performers. For me, Marc Tucker (2013) sums up the way forward:

There is a role for teacher evaluation in a sound teacher quality management system, but it is a modest role. The drivers are clear: create a first rate pool from which to select teachers by making teaching a very attractive professional career choice, provide future teachers the kind and quality of education and training we provide our high status professionals, provide teachers a workplace that looks a lot more like a professional practice than the old-style Ford factory, reward our teachers for engaging in the disciplined improvement of their practice for their entire professional careers, and provide the support and trust they will then deserve every step of the way.

And if we take that as the definition of a mature system, then maybe FE has a long way to go.

Coe, R. (2014). Lesson Observation: It’s harder than you think. TeachFirst TDT Meeting, 13 January 2014.
O’Leary, M. (2014). Lesson observation in England’s Further Education colleges: why isn’t it working and what needs to change? Paper presented at the Research in Post-Compulsory Education Inaugural International Conference: 11th –13th July 2014, Harris Manchester College, Oxford.
Strong, M., Gargani, J. and Hacifazlioglu, O. (2011). Do We Know a Successful Teacher When We See One? Experiments in the Identification of Effective Teachers. Journal of Teacher Education, 62(4), 367–382.
Tucker, M. (2013). True or false: Teacher evaluations improve accountability, raise achievement. Top Performers (Education  Week blog), July 18, 2013 http://www.ncee.org/2013/07/true-or-false-teacher-evaluations-improve-accountability-raise-achievement/
Waldegrave, H., and Simons, J. (2014). Watching the Watchmen: The future of school inspections in England, Policy Exchange, London.


Sunday 14 December 2014

Research Leads Conference and Evidence-Based Practice - How to avoid re-inventing the wheel

On Saturday I had the privilege of attending ResearchED's Research Leads one-day conference. It was an incredible day, full of intellectual challenge mixed with the opportunity to meet some wonderful colleagues. However, it was deeply ironic that, although the event was held in the Franklin Wilkins Building at King's College London - named after two of the discoverers of the 'double helix' - we were not standing on the 'shoulders of giants'. Many of the issues and topics we wrestled with have already, to a very large extent, been engaged with by practitioners in the fields of evidence-based medicine and, more generically, evidence-based practice.
     So how do we go about standing on the shoulders of giants, particularly with reference to the role of a research lead in a school and how research leads could go about performing their task? Both Alex Quigley and Carl Hendrick did an admirable job in trying to map out the territory for the research lead and some of the roles and tasks that need to be performed. For me, a fantastic starting point for building upon this discussion is Barends, Rousseau and Briner's (2014) pamphlet on the basic principles of evidence-based management. Barends et al.* define evidence-based practice as the making of decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources.

From the point of view of a school research lead, this definition has a number of implications.
  1. Being a research lead involves more than just 'academic' research; it involves drawing upon a wider range of evidence, including both individual school data and the views of stakeholders, such as staff, pupils, parents and the wider community.
  2. This approach can be largely independent of work with higher education institutions. It is not necessary to partner with an HEI to be an evidence-based practitioner or evidence-based school. It might help, particularly when supporting staff to develop as evidence-based practitioners, but it is not absolutely necessary.
  3. At the heart of this process is being able to translate school issues or challenges into a well-formulated and answerable problem, and evidence-based medicine has a number of tools which can help with this process.
      So what does this mean for the research lead's role in a school? A highly relevant place to start is to look at those leadership capabilities which appear to have the biggest impact on student - and, in this case, teacher - learning. Given that we are seeking to develop processes which are 'research' led, there is no better place to start than Viviane Robinson's work on Student-Centred Leadership, which is based on a best-evidence synthesis of the impact of leadership on student achievement and which identifies three key leadership capabilities: applying relevant knowledge; solving complex problems; and building relational trust. Below I have begun to tentatively examine what this might mean for research leads, so here goes:
  • Applying relevant knowledge - this is about using knowledge about effective research to help colleagues become better practitioners, for example, by helping them to ask well-formulated and answerable questions.
  • Solving complex problems - research is a tricky and slippery topic, and this is about discerning the specific research challenges faced by a school and then crafting research processes to address those challenges.
  • Building relational trust - an absolute prerequisite for such work. Research leads need to gain the trust of colleagues, otherwise it will be virtually impossible to develop a research agenda in schools. Undertaking this role requires a specific set of skills which will allow both the research lead and the practitioner to engage in effective evidence-based practice.
In order to help with the development of the role of the research-leads the next few posts in this series will draw upon material from both the evidence-based medicine and the broader field of evidence-based practice to look at the following:
  • How to get better at asking well-formulated questions.
  • How to write a short paper on a Critically Appraised Topic.
  • Some of the current challenges faced by evidence-based practitioners, particularly in the field of medicine.
*This definition is partly adapted from the Sicily statement of evidence-based practice: Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A. and Osborne, J. (2005). Sicily statement on evidence-based practice. BMC Medical Education, 5(1).
Barends, E., Rousseau, D. M. and Briner, R. B. (2014). Evidence-Based Management: The Basic Principles. Amsterdam: Centre for Evidence-Based Management.
Robinson, V. (2011). Student-Centered Leadership. San Francisco: Jossey-Bass.

Tuesday 9 December 2014

Dangerous Half Truths and Total Nonsense - Evidence Based Practice and two avoidable mistakes

A few weeks ago I wrote about how difficult it is to learn from others through casual benchmarking, and the implications this has for leadership teams who follow the Skills Commissioner's advice to visit other colleges. In this post I continue to draw upon Jeffrey Pfeffer and Robert I. Sutton's 2006 book Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management and the practices which are confused with evidence-based management. In addition to casual benchmarking, Pfeffer and Sutton identify two other practices - doing what seems to have worked in the past, and following deeply held yet unexamined ideologies - which can lead both to poor decisions and to detrimental outcomes for college stakeholders, i.e. staff, students, parents, employers and the community. These two practices are now explored in more detail.

Doing what seems to have worked in the past
     As colleagues move from one role to another, be it within a school or a college, the temptation is to take that experience and apply it to a new setting. Pfeffer and Sutton argue that problems arise when the new situation is different from the past, and when what we learned in the past may actually have been wrong. There is a huge temptation to import management practices and ideas from one setting to another without sufficient thought about context, either old or new. Pfeffer and Sutton identify three simple questions that can help avoid the negative outcomes of inappropriately repeating past strategies and innovations.
  • Is the practice I am trying to import into a new setting - say lesson planning, peer tutoring, e-learning, pastoral arrangements or information and guidance arrangements - directly linked to previous success? Or was the success now being replicated achieved despite the adopted innovation rather than because of it?
  • Is the new department, school, college or other setting so similar to past situations that what worked in the past will work in the new setting? For example, the right balance between delegation and control may differ between an outstanding school/college and one that is on a journey of improvement.
  • Why was past practice - say, graded lesson observations - effective? What was the mechanism or theory which explains the impact of your previous actions? If you don't know the theory or mechanism that underpinned past success, it will be difficult to know what will work this time. (Adapted from Pfeffer and Sutton, p. 9)
Following deeply held yet unexamined ideologies 
     The third basis for flawed decision-making is deeply held beliefs or biases which may cause school and college leaders to adopt particular practices, even though there may be little or no evidence to support the introduction of the proposed practice or innovation. Pfeffer and Sutton suggest a final three questions to address this issue.
  • Is my preference for a particular leadership style or practice because it fits in with my beliefs about people - for example, do I support the introduction of PRP for teachers because of my belief in the motivational benefits of higher salaries and payment by results?
  • Do I require the same level of evidence or supporting data when the issue at hand is something I am already convinced I know the answer to?
  • Am I allowing my preferences to reduce my willingness to obtain and examine data and evidence which may be relevant to the decision? Do I tend to read the educational bloggers/tweeters who agree with me, or do I seek out views that contrast with my own? Am I willing to look at research which challenges my preconceptions - for example, Burgess et al. (2013) and the small but positive effect of league tables? (Adapted from Pfeffer and Sutton, p. 12)
To conclude:
     If an evidence-based approach is taken seriously, it could potentially change the way in which every school/college leader thinks about educational leadership and management, and subsequently change the way they behave as school/college leaders. As Pfeffer and Sutton state:
     First and foremost, (evidence-based management) is a way of seeing the world and thinking about the craft of management. Evidence-based management proceeds from the premise that using better, deeper logic and employing facts to the extent possible permits leaders to do their jobs better. Evidence-based management is based on the belief that facing the hard facts about what works and what doesn't, understanding the dangerous half-truths that constitute so much conventional wisdom about management, and rejecting the total nonsense that too often passes for sound advice will help organizations perform better. (p. 13)

Burgess, S., Wilson, D. and Worth, J. (2013). A natural experiment in school accountability: the impact of school performance information on pupil progress. Journal of Public Economics, 106, 57-67.
Pfeffer, J. and Sutton, R. (2006). Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management. Harvard Business Review Press.

Monday 1 December 2014

What Works? Evidence for decision-makers - it's not quite that simple.

This week saw the publication of the What Works Network's What Works? Evidence for Decision-Makers report, which in the context of education identified a number of practices, for example peer tutoring and small-group tutoring, which appear to work. On the other hand, the report also identified a number of practices which appear not to work, for example giving pupils financial incentives to pass GCSEs, or pupils repeating a year. However, whenever such reports identifying what supposedly works are published, there can be a tendency to confuse 'evidence' with evidence-based practice. In particular, there is a danger that published research evidence is used to reduce the exercise of professional judgment by practitioners. Indeed, one of the great myths associated with evidence-based practice is that it is the same as research-determined or research-driven practice. As such, it is potentially useful for educationalists interested in evidence-based education to gain a far greater understanding of what is meant by evidence-based practice.
      Barends, Rousseau and Briner (2014), in their recent pamphlet on the basic principles of evidence-based management, define evidence-based practice as the making of decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources by:
  • Asking: translating a practical issue or problem into an answerable question
  • Acquiring: systematically searching for and retrieving the evidence
  • Appraising: critically judging the trustworthiness and relevance of the evidence
  • Aggregating: weighing and pulling together the evidence
  • Applying: incorporating the evidence into the decision-making process
  • Assessing: evaluating the outcome of the decision taken
to increase the likelihood of a favourable outcome (p2)*.
In undertaking this task, information and evidence are sought from four sources:
  1. Scientific evidence - findings from published scientific research.
  2. Organisational evidence - data, facts and figures gathered from the organisation.
  3. Experiential evidence - the professional experience and judgment of practitioners.
  4. Stakeholder evidence - the values and concerns of people who may be affected by the decision.
In other words, evidence based practice involves multiple sources of evidence and the exercise of sound judgement.
     Furthermore, drawing upon Dewey's practical epistemology, Biesta (2007) argues that the role of research is to provide us with insight into what worked in the past, rather than which intervention will work in the future. As such, all that evidence can do is provide us with a framework for more intelligent problem-solving. In other words, evidence cannot give you the answer on how to proceed in any particular situation; rather, it can enhance the processes associated with deliberative problem-solving and decision-making.
     So what are the key messages which emerge from this discussion? For me, at least, there appear to be four key points:

  • Research evidence is not the only fruit - in other words when engaging in evidence based-practice research evidence is just one of multiple sources of evidence.
  • Even where there is good research evidence, that does not replace the role of judgment in making decisions about how to proceed.
  • All that research evidence can do is tell you what worked in the past, it can't tell you what will work in your setting or in the same setting at some time in the future.
  • Research evidence provides a starting point for intelligent problem-solving.
Barends, E., Rousseau, D. M. and Briner, R. B. (2014). Evidence-Based Management: The Basic Principles. Amsterdam: Centre for Evidence-Based Management.
Biesta, G. (2007). Why 'What Works' Won't Work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 21-22.