Saturday, 24 February 2018

Teaching Staff Turnover and Employee Engagement

During this week's #UKEDResChat discussion, a number of Tweets raised concerns about levels of teaching staff turnover and about how to create reliable and valid measures of staff satisfaction and engagement.  With that in mind, I thought I'd have a look at Bamford and Worth (2017) and their report for the NFER on why teachers leave the teaching profession.  I will then focus on one recommendation of the report - the need for schools to measure job satisfaction and engagement and intervene - to show how that might be easier said than done.

Why do teachers leave the teaching profession?

Drawing on data collected from 40,000 households as part of the Understanding Society longitudinal study, Bamford and Worth (2017) found the following.

  • More than half of non-retiring teachers who leave remain working in the education sector.
  • Teachers do not leave for higher-paid jobs: overall pay decreases, but hourly wages stay the same.
  • Leavers' working hours decrease and many secondary leavers take up part-time positions.
  • Leavers' job satisfaction and subjective well-being improve after leaving. 

Bamford and Worth then go on to make the following recommendations:

  • School leaders should regularly monitor the job satisfaction and engagement of their staff, and intervene 
  • Government and other secondary-sector stakeholders need to urgently look at ways of accommodating more part-time working in secondary schools 
  • School leaders, Government and Ofsted need to work together to review the impact their actions are having on teacher workload, to identify practical actions that can be taken to reduce this 

An evidence-based approach to monitoring job satisfaction and engagement 

Monitoring job satisfaction and engagement, and subsequently intervening, may seem a very sensible and obvious recommendation.  However, it may be a lot easier said than done.  To help understand why this might be the case, I'm going to look at the work of Briner (2014), who raises some very pertinent questions about employee engagement.  So here goes:

Defining engagement - unfortunately there is no single agreed definition of engagement. 

The consequence of this is, as Briner states: From a practical (and academic) perspective the absence of agreement about what something means - and an absence of concern about that lack of agreement - is not funny or weird or cute or unfortunate or inconvenient. It's a confused, confusing and chaotic mess that is almost bound to lead to messy and undesired outcomes. It means that whenever we talk about or think about or try to measure 'engagement' we are almost certainly saying different things, understanding different things, measuring different things and doing different things but believing quite incorrectly they are all the same. 

Measuring engagement - if there is no agreement about the nature of employee engagement, the chances of developing valid, reliable and meaningful measures are slim. 

Again as Briner states: As a consequence of confused definition and overlap with other existing ideas there is currently little evidence that engagement measures are particularly valid or reliable. There is one crucial form of validity - predictive validity - for which there seems to be almost no evidence at all. This form of validity is essential as it explores whether measures, in this case of engagement, actually predict anything important in the future. At the present time therefore we do not have enough good quality evidence to allow us to draw even tentative conclusions about whether or how engagement can be measured in a valid and reliable way.
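To make Briner's point about predictive validity concrete, here is a minimal, hypothetical sketch in Python.  The engagement scores and performance figures are invented purely for illustration; the point is that testing predictive validity simply means asking whether a measure taken now correlates with an outcome measured later.

```python
# Hypothetical illustration of predictive validity: does an engagement
# score measured at time 1 correlate with a performance outcome measured
# at time 2?  All figures below are invented for illustration only.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented engagement survey scores (time 1) and later performance
# measures (time 2) for eight hypothetical members of staff.
engagement_t1 = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.7, 4.2]
performance_t2 = [55, 62, 50, 58, 70, 49, 60, 64]

r = pearson_r(engagement_t1, performance_t2)
print(f"Predictive validity (Pearson r): {r:.2f}")
```

Briner's point is that, for real engagement measures, there is almost no evidence that this correlation with future outcomes exists - which is precisely what a school would need to know before acting on a survey score.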

Engagement is nothing new or different 

Briner poses two questions about whether engagement is a new or different concept:

Engagement is not a new and different idea: If this is the case then the term and idea should be immediately discontinued because using a new term to describe existing concepts is confusing and unhelpful. 

Engagement is a new and different idea: If this is the case then there is a huge amount of work to be done first to define engagement in a way that shows precisely how it is new and different and second to gather good quality evidence to show that measures of engagement are measuring something new and different. 

There is a lack of good quality evidence about employee engagement

As Briner states:

There is almost no good quality evidence with which to answer the most important questions about engagement:
Fundamental Question 1: 'Do increases in engagement cause increases in performance?' 
Fundamental Question 2: 'Do engagement interventions cause increased levels of engagement and subsequent increases in performance?' 

Over-claiming and misclaiming

Briner argues that these four issues raise serious questions about the usefulness of the idea of employee engagement.  Nevertheless, there is an additional challenge:

That the proponents, supporters and advocates of engagement both over-claim by exaggerating the quantity and quality of evidence and mis-claim by making statements about engagement that, on closer inspection, seem to be about something else.

What are the implications of this discussion for school leaders who wish to monitor job satisfaction and engagement?

  • It will be a waste of time and resources for the school to try and develop its own valid and reliable measures of employee engagement
  • Staff surveys are highly likely to tell you very little; indeed, as Argyris (1994) argues, they may even get in the way of learning what needs to be done.
  • Multiple proxy measures of employee engagement are going to be required to help school leaders make a judgement about employee engagement.

And finally

How school leaders tackle the challenge of employee engagement comes down to a choice about the type of school leader they want to be.  Are they school leaders who carefully examine the evidence on a particular issue, are explicit about what they know or don't know, and then act accordingly?  Or do they want to be school leaders who are not overly bothered about the quality of the evidence, who subsequently misclaim and misrepresent the evidence for their own purposes, and who come up with superficial solutions to complex issues?  The choice is yours! (Amended from Briner)


Argyris, C. (1994). Good Communication That Blocks Learning. Harvard Business Review, 72(4), 77-85.
Bamford, S. and Worth, J. (2017). Teacher Retention and Turnover Research. Research Update 3: Is the Grass Greener Beyond Teaching? Slough: NFER.
Briner, R. (2014). What Is Employee Engagement and Does It Matter? An Evidence-Based Approach. The Future of Engagement Thought Piece Collection. 51.

Sunday, 4 February 2018

The School Research Lead and Teacher Journal Clubs - Summarising the evidence

In this post I look at how a school research lead might wish to summarise the evidence about teacher journal clubs.  In doing so, I will try to ensure we have a format that allows the inclusion of four sources of evidence and also takes into account the context of the individual school.  However, given workload pressures, it is recognised that whatever report or document is produced should be capable of being produced relatively quickly and without being burdensome.  As such, whilst the example uses a Word-based tabular format, the same information could also be presented in a 10-12 slide PowerPoint presentation or through the use of some kind of mind map.

The template

The following example has been produced for a fictional school, which is considering introducing a teacher journal club into its professional learning programme.  As will be seen from the example, the school is relatively new to research and is just beginning to put its 'toe in the water'.

 Teacher Journal Clubs
 Background question

How can teacher journal clubs contribute to teacher professional learning and the use of evidence-based practice?
Teacher journal clubs appear to have the potential to contribute to the increased use of evidence-informed practice.  Initial discussions with stakeholders suggest there is support for piloting a journal club within the school.  Although no one within the school – be they teaching assistants, teachers or senior leaders – has experience of running journal clubs, adequate resources are available on the internet to support their introduction. 
Description of the best available evidence
Although there appear to be no systematic reviews in educational settings on the use of teacher journal clubs, a systematic review in a health setting (Deenadayalan et al., 2008) provides guidance on how to run a successful journal club.  This guidance suggests: regular and anticipated meetings, mandatory attendance, clear long- and short-term purpose, appropriate meeting timing and incentives, a trained journal club leader to choose papers and lead discussion, circulating papers prior to the meeting, using the internet for wider dissemination and data storage, using established critical appraisal processes and summarizing journal club findings (from the abstract).
Recent research in education (Sims et al., 2017), involving two 11-18 mixed secondary schools (Ofsted – outstanding), indicates that journal clubs are a viable, scalable model of teacher-led professional development, capable of creating sustained increases in evidence-informed practice.
School Data
The school is a mixed 11-18 school and is currently rated by Ofsted as good.  The school has an extensive programme of professional learning – though little or none is focused on research use.  The school has recently recruited a number of new staff who are at the beginning of their careers.  However, there are also a number of staff who have been at the school for over twenty years.  Although in recent years the professional learning budget has been squeezed, there is still sufficient time in the programme for half-termly journal clubs.

Stakeholders’ views (pupils, staff, parents, community)
A number of teachers within the school are active on Twitter and are aware that the school currently provides few opportunities for teachers to engage with research evidence.  Successful schools in the locality have introduced journal clubs and it is perceived that this has contributed to those schools’ reputation for innovation.  However, there are other teachers who do not see the value of educational research and are aware of schools which have introduced journal clubs – and then have quietly dropped them after a year.  Nevertheless, there is a general consensus amongst the teaching staff that it may be worth undertaking a small pilot with volunteers.

Practitioner expertise – key leaders
None of the major decision-makers within the school – the HT, two DHTs and the newly appointed School Research Lead (SRL) – has experience of running or participating in a journal club.  However, the SRL has attended a number of researchED events and has seen presentations on how to successfully run a journal club.  The SRL is also aware of resources available on the Internet and produced by teachers – which give clear advice on how to ensure a journal club is successful.  In addition, the SRL is currently studying for a post-graduate degree in education.

Questions for consideration

  • Can we access suitable research journals?
  • How do we recruit volunteers for the pilot?
  • Do teachers have the capacity and capability to understand and apply research findings?
  • Do we have someone with sufficient knowledge and expertise to lead the journal club?
  • Can desired changes in teaching practice be identified?
  • Is sufficient time available for the implementation of the journal club?
  • How will the impact of the journal club be measured?

References and resources

  • (Deenadayalan et al., 2008)
  • (Sims et al., 2017)


Person responsible

  • School research lead

Dissemination

  • To be shared by email and to be discussed at the next staff meeting
  • Prior discussion of the paper at departmental meetings

Update and review

  • When is it likely that new relevant evidence will be available?
  • During 2018, as reports on the efficacy of Research Learning Communities and School Research Leads are published by the EEF
  • At the end of the academic year

Friday, 26 January 2018

The School Research Lead and making the most of journal clubs - recommendations from a systematic review

In this week’s post, I will be taking a further look at the research on journal clubs and, in particular, a systematic review by Deenadayalan, Grimmer-Somers, Prior and Kumar (2008).

The systematic review

Deenadayalan et al. (2008) identified 101 articles, of which 21 comprised the body of evidence, with 12 describing journal club effectiveness within healthcare settings.  Over 80% of the papers noted that journal clubs were effective in improving participants’ knowledge and critical appraisal skills.  Nevertheless, none of the papers reported on how this then manifested itself in changes in practice.


Although the articles reviewed often differed in terms of participants, processes and evaluation, Deenadayalan et al. (2008) argue that there were a range of consistent findings vis-à-vis the effectiveness of journal clubs in developing participants’ knowledge and critical appraisal skills.  As such, Deenadayalan et al. have been able to identify a number of recommendations for the conduct of a journal club which, if adopted, increase the journal club’s chances of success.

Journal club attendance
  • Establish a journal club group of members of the same discipline, or similar interests within a clinical specialty. 
Journal club purpose
  • Have an established and agreed overarching goal for the long-term journal club intervention. The overarching journal club purpose should be reviewed regularly, and agreed by participants
  • Establish the purpose of each journal club meeting, and link this to the paper being read, or the skill acquisition being addressed.
Structure of an effective journal club
  • Regular attendance should be expected and recorded. Attendance may be mandatory, particularly if the journal club has a curriculum-based format
  • Conduct journal clubs at regular predictable intervals (suggest monthly)
  • Conduct the journal club at an appropriate time of the day for all participants
  • Provide incentives to attend such as food (which is shown to increase attendance as well as the conviviality of the occasion).
Leading journal club
  • Journal clubs appear to be more effective if they have a leader. The journal club leader should be responsible for identifying relevant articles for discussion, however the final choice needs to be decided by the journal club members
  • Train the leader/facilitator of the journal club in relevant research design and/or statistical knowledge so as to appropriately direct group discussions and assist the group to work towards its goals
  • The leader can change from meeting to meeting, however he/she needs to have the skills to present the paper under discussion and lead the group adequately. It is a fine balance between choosing a leader of high academic standing, whose expertise may stifle discussion, or choosing a leader from peers, who may not have the requisite understanding of the paper under discussion
  • Provide access to a statistician to assist the leader in preparing for journal club, and to answer questions that may arise from the journal club discussion.
Choosing articles for discussion
  • Choose relevant case-based or clinical articles for discussion. These papers should be of interest to all participants. Articles should be chosen in line with the overarching purpose of the journal club
  • Identify one journal club member (either the designated leader or a member) who has the responsibility for identifying the literature to be discussed for each meeting. This person should also lead the discussion on the article at the journal club. 
Circulating articles for discussion
  • Provide all participants for each journal club (in addition to the leader) with pre-reading at a suitable time period prior to the journal club (may be up to a week prior). Participants should agree to the time frame for pre-reading. In some curriculum-based situations, assessment of whether pre-reading has occurred may be appropriate
  • Use the internet as a means of distributing articles prior to the meeting, maintaining journal club resources and optimizing use of time and resources. 
Efficiently running the journal club
  • Use established critical appraisal approaches and structured worksheets during the journal club session, which leads to healthy and productive discussion
  • Formally conclude each journal club by putting the article in context of clinical practice.

Journal club effectiveness
  • Depending on the journal club purpose, it may be appropriate to evaluate knowledge uptake formally or informally
  • Evaluation should specifically relate to the article(s) for discussion, critical appraisal, understanding of biostatistics reported in the paper and translating evidence into practice. (Deenadayalan et al., 2008, pp. 905-906)

How relevant are these findings to schools?

It seems to me that these findings are broadly applicable to school-based journal clubs, and could be easily adapted to meet the needs of individual schools.  Nevertheless, there are a number of points which are worth further consideration.

First, depending on the nature of the journal article being reviewed, it makes a lot of sense to get an expert in statistics involved.  Anyone who has read Gorard, See and Siddiqui’s (2017) recent book will be aware of some of the challenges in the correct use and interpretation of p values, statistical significance and confidence intervals.  As for effect sizes, Simpson (2017) provides an interesting survey of the problems associated with effect sizes and their subsequent use in ‘meta-analyses’.
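One of Simpson's concerns can be illustrated with a small, hypothetical sketch.  Cohen's d divides a difference in means by the pooled standard deviation, so the same raw difference can yield very different effect sizes depending on how varied the samples happen to be.  All numbers below are invented purely for illustration.

```python
# Hypothetical sketch of Cohen's d, and of the point that an effect size
# depends on the spread of the samples as well as the difference in
# means.  All numbers are invented for illustration only.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d using the pooled (sample) standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# The same 5-point difference in mean scores in two hypothetical trials,
# one with widely spread scores and one with tightly clustered scores.
broad_treat, broad_ctrl = [40, 50, 60, 70, 80], [35, 45, 55, 65, 75]
narrow_treat, narrow_ctrl = [58, 59, 60, 61, 62], [53, 54, 55, 56, 57]

print(cohens_d(broad_treat, broad_ctrl))    # modest d
print(cohens_d(narrow_treat, narrow_ctrl))  # much larger d
```

The identical raw gain produces a much larger d in the tightly clustered sample, which is one reason why effect sizes drawn from differently designed studies are hard to compare or combine in meta-analyses.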

Second, whoever leads the journal club will need to be viewed as credible by colleagues, and not just in being able to find, select, understand and apply research.  The journal club leader will also need ‘high-level’ interpersonal skills, so that they can skilfully navigate discussions where colleagues disagree, or where colleagues have had deeply held values and beliefs challenged by the literature.  Indeed, it could be argued that unless the research is ‘challenging’, if not controversial, then it is unlikely to provoke deep reflection on practice.

Finally, given the workload pressures on teachers, and the relatively scant, if not non-existent, evidence of journal clubs having an impact on day-to-day decision-making, very real consideration needs to be given to the reasons why a journal club is being established.  As a mechanism to get ‘research’ into the classroom, on its own it is unlikely to have any impact whatsoever on teachers’ teaching and pupils’ learning.  If, on the other hand, it is seen as an integral part of a wider process of building social capital (Hargreaves & Fullan, 2012) and developing a collaborative culture amongst teachers and other colleagues, it will be of some value.

Next week I’ll be looking at some school-based research by Sims, Moss and Marshall (2017) into whether journal clubs work and whether they can increase evidence-based practice.


Deenadayalan, Y., Grimmer-Somers, K., Prior, M., & Kumar, S. (2008). How to run an effective journal club: a systematic review. Journal of Evaluation in Clinical Practice, 14(5), 898-911. doi:10.1111/j.1365-2753.2008.01050.x
Gorard, S., See, B., & Siddiqui, N. (2017). The trials of evidence-based education. London: Routledge.
Hargreaves, A., & Fullan, M. (2012). Professional capital: Transforming teaching in every school. New York: Teachers College Press.
Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy.
Sims, S., Moss, G., & Marshall, E. (2017). Teacher journal clubs: How do they work and can they increase evidence-based practice? Impact, 1(1).