Tuesday 23 December 2014

Maturity and the case of graded lesson observations - can the FE sector handle the truth?

Last Friday saw the TES publish an article on graded lesson observations within the further education sector, in which Lorna Fitzjohn, OfSTED's national director for learning and skills, stated: “The big question that we’ve had from some leaders in the sector is: is the sector mature enough at this point in time not to have us (OfSTED) grading lesson observations?” In responding to this statement, it seems sensible to ask the following questions:
  1. What does the research evidence imply about the validity and reliability of graded lesson observations?
  2. What is the best evidence-based advice on the use of graded lesson observations?
  3. What do the answers to the first two questions imply for the maturity of the leadership and management of the further education sector?
What does the research evidence imply about the validity and reliability of lesson observations?
      O’Leary (2014) undertook the largest-ever study of lesson observation in the English education system, investigating the impact of lesson observations on lecturers working in the FE sector. Lecturers' views on graded lesson observations were summarised as follows:
Over four fifths (85.2%) disagreed that graded observations were the most effective method of assessing staff competence and performance. A similarly high level of disagreement was recorded in response to whether they were regarded as a reliable indicator of staff performance. However, the highest level of disagreement (over 88%) of all the questions in this section was the response to whether graded observations were considered the fairest way of assessing the competence and performance of staff. 
     Not only are there 'qualitative' doubts about the reliability and validity of lesson observations, there are also 'quantitative' objections to their use, particularly when untrained observers are involved. Strong et al. (2011) found that the correlation coefficient for untrained observers agreeing on a lesson observation grade was just 0.24, which runs counter to sector leaders' advice to both Principals and Governors to 'trust your instincts.'
     Furthermore, Waldegrave and Simons (2014) cite Coe’s synthesis of a number of research studies, which raises serious questions about the validity and reliability of lesson observation grades. When comparing the value-added progress made by students with a lesson observation grade (validity), Coe states that in the best case there will be only 49% agreement between the two and in the worst case only 37%. As for the reliability of grades, Coe’s synthesis suggests that in the best case there will be 61% agreement between two observers and in the worst case only 45%.
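     To make concrete what figures such as '45-61% agreement' mean in practice, here is a minimal sketch - an illustration of the arithmetic only, not a calculation taken from Coe, Strong or O’Leary, and using invented grades - that compares two observers' gradings of the same ten lessons on the familiar four-point scale and reports both the exact-grade agreement and the correlation between them:

import numpy as np

# Hypothetical grades (1 = outstanding ... 4 = inadequate) awarded by two
# observers to the same ten lessons -- invented purely for illustration.
observer_a = np.array([2, 3, 1, 2, 4, 3, 2, 1, 3, 2])
observer_b = np.array([2, 2, 2, 3, 4, 3, 1, 1, 2, 3])

# Exact-grade agreement: the proportion of lessons given the same grade.
agreement = np.mean(observer_a == observer_b)             # 0.40 for this sample

# Pearson correlation between the two sets of grades.
correlation = np.corrcoef(observer_a, observer_b)[0, 1]   # roughly 0.63 here

print(f"Exact agreement: {agreement:.0%}")
print(f"Correlation:     {correlation:.2f}")

Even in this invented example, where the two observers' grades correlate far more strongly than the 0.24 reported by Strong et al., they still award the same grade on only four lessons out of ten - a reminder that headline 'agreement' percentages set a demanding bar for observers to clear.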
      The above would suggest that graded lesson observations provide an extremely shaky foundation on which to make judgements about the quality of teaching and learning within the further education sector. It is now appropriate to turn to the current best evidence-based advice on the use of graded lesson observations.

So what is the best evidence-based advice on the use of lesson observations?
     Earlier this year, at the Durham University Leadership Conference, Rob Coe stated that the evidence suggests:
Judgements from lesson observation may be used for low-stakes interpretations (eg to advise on areas for improvement) if at least two observers independently observe a total of at least six lessons, provided those observers have been trained and quality assured by a rigorous process (2-3 days training & exam). High-stakes inference (eg Ofsted grading, competence) should not be based on lesson observation alone, no matter how it is done
      In other words, the current practice of a graded lesson observation system based on one or two observations per year - the case in the vast majority of colleges - is not going to provide sufficient evidence for low-stakes improvement, never mind high-stakes lecturer evaluation. That does not mean lesson observations should not take place, but, as Coe suggests, they need to sit alongside reviews of other evidence: student feedback, peer review and student achievement.
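      To see why Coe's threshold of at least two observers and at least six lessons matters, the following sketch applies the standard Spearman-Brown prophecy formula - my illustration, not a calculation that appears in Coe's talk - to estimate how the reliability of a pooled judgement grows as independent observations are averaged, taking Strong et al.'s single-observation figure of 0.24 as the starting point and assuming, optimistically, that the observations are independent and equally reliable:

def pooled_reliability(single_obs_reliability: float, n_observations: int) -> float:
    """Spearman-Brown prophecy formula: reliability of the average of
    n independent, equally reliable observations."""
    r = single_obs_reliability
    return n_observations * r / (1 + (n_observations - 1) * r)

# Starting point: the 0.24 reported by Strong et al. for untrained observers.
single = 0.24
for n in (1, 2, 6, 12):  # 12 is roughly two observers x six lessons
    print(f"{n:2d} observation(s): pooled reliability ~ {pooled_reliability(single, n):.2f}")

On these generous assumptions a single observation sits at 0.24, six pooled observations reach about 0.65 and twelve about 0.79 - respectable for low-stakes developmental feedback, but a long way short of the certainty implied by hanging a high-stakes grade on one or two visits.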
  
So what does the above mean for the leadership and management of the further education sector?
     First, there would appear to be a clear research-user gap, with sector leaders giving out advice that is inconsistent with the best available evidence. Second, if leaders and managers within the sector are to demonstrate and track improvement in teaching and learning, the sector needs to develop a wider and more sophisticated range of processes and measures for judging quality than graded lesson observations alone. Third, the reliance on graded lesson observation suggests a prevailing leadership and management culture founded on the assumption that improving performance requires the removal of poor performers. For me, Marc Tucker (2013) sums up the way forward:

There is a role for teacher evaluation in a sound teacher quality management system, but it is a modest role. The drivers are clear: create a first rate pool from which to select teachers by making teaching a very attractive professional career choice, provide future teachers the kind and quality of education and training we provide our high status professionals, provide teachers a workplace that looks a lot more like a professional practice than the old-style Ford factory, reward our teachers for engaging in the disciplined improvement of their practice for their entire professional careers, and provide the support and trust they will then deserve every step of the way.

And if we take that as the definition of a mature system, then maybe FE has a long way to go.

References
Coe, R. (2014). Lesson Observation: It’s harder than you think. TeachFirst TDT Meeting, 13th January 2014.
O’Leary, M. (2014). Lesson observation in England’s Further Education colleges: why isn’t it working and what needs to change? Paper presented at the Research in Post-Compulsory Education Inaugural International Conference, 11th–13th July 2014, Harris Manchester College, Oxford.
Strong, M., Gargani, J. and Hacifazlioğlu, Ö. (2011). Do We Know a Successful Teacher When We See One? Experiments in the Identification of Effective Teachers. Journal of Teacher Education, 62(4), 367–382.
Tucker, M. (2013). True or false: Teacher evaluations improve accountability, raise achievement. Top Performers (Education Week blog), July 18, 2013. http://www.ncee.org/2013/07/true-or-false-teacher-evaluations-improve-accountability-raise-achievement/
Waldegrave, H., and Simons, J. (2014). Watching the Watchmen: The future of school inspections in England, Policy Exchange, London.
