Friday 26 January 2018

The School Research Lead and making the most of journal clubs - recommendations from a systematic review

In this week’s post, I will be taking a further look at the research on journal clubs, and in particular at a systematic review by Deenadayalan, Grimmer-Somers, Prior, and Kumar (2008).

The systematic review

Deenadayalan et al. (2008) identified 101 articles, of which 21 comprised the body of evidence, with 12 describing journal club effectiveness within healthcare settings.  Over 80% of the papers noted that journal clubs were effective in improving participants’ knowledge and critical appraisal skills.  Nevertheless, none of the papers reported on how this then manifested itself in changes in practice.

Findings

Although the articles reviewed often differed in terms of participants, processes and evaluation, Deenadayalan et al. (2008) argue that there was a range of consistent findings regarding the effectiveness of journal clubs in developing participants’ knowledge and critical appraisal skills.  As such, Deenadayalan et al. have been able to identify a number of recommendations for the conduct of a journal club which, if adopted, increase the journal club’s chances of success (a simple way of capturing these recommendations as a meeting plan is sketched after the list).

Journal club attendance
  • Establish a journal club group of members of the same discipline, or with similar interests within a clinical specialty. 
Journal club purpose
  • Have an established and agreed overarching goal for the long-term journal club intervention. The overarching journal club purpose should be reviewed regularly, and agreed by participants
  • Establish the purpose of each journal club meeting, and link this to the paper being read, or the skill acquisition being addressed.
Structure of an effective journal club
  • Regular attendance should be expected and recorded. Attendance may be mandatory, particularly if the journal club has a curriculum-based format
  • Conduct journal clubs at regular predictable intervals (suggest monthly)
  • Conduct journal clubs at appropriate times of the day for all participants
  • Provide incentives to attend, such as food (which has been shown to increase attendance as well as the conviviality of the occasion).
Leading journal club
  • Journal clubs appear to be more effective if they have a leader. The journal club leader should be responsible for identifying relevant articles for discussion; however, the final choice needs to be decided by the journal club members
  • Train the leader/facilitator of the journal club in relevant research design and/or statistical knowledge so as to appropriately direct group discussions and assist the group to work towards its goals
  • The leader can change from meeting to meeting; however, he/she needs to have the skills to present the paper under discussion and lead the group adequately. There is a fine balance between choosing a leader of high academic standing, whose expertise may stifle discussion, and choosing a leader from among peers, who may not have the requisite understanding of the paper under discussion
  • Provide access to a statistician to assist the leader in preparing for journal club, and to answer questions that may arise from the journal club discussion.
Choosing articles for discussion
  • Choose relevant case-based or clinical articles for discussion. These papers should be of interest to all participants. Articles should be chosen in line with the overarching purpose of the journal club
  • Identify one journal club member (either the designated leader or a member) who has the responsibility for identifying the literature to be discussed for each meeting. This person should also lead the discussion on the article at the journal club. 
Circulating articles for discussion
  • Provide all participants for each journal club (in addition to the leader) with pre-reading at a suitable time period prior to the journal club (may be up to a week prior). Participants should agree to the time frame for pre-reading. In some curriculum-based situations, assessment of whether pre-reading has occurred may be appropriate
  • Use the internet as a means of distributing articles prior to the meeting, maintaining journal club resources and optimizing use of time and resources. 
Efficiently running the journal club
  • Use established critical appraisal approaches and structured worksheets during the journal club session, which leads to healthy and productive discussion
  • Formally conclude each journal club by putting the article in context of clinical practice.

Journal club effectiveness
  • Depending on the journal club purpose, it may be appropriate to evaluate knowledge uptake formally or informally
  • Evaluation should specifically relate to the article(s) for discussion, critical appraisal, understanding of the biostatistics reported in the paper and translating evidence into practice. (Deenadayalan et al., 2008, pp. 905-906)
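
For research leads who like a concrete aide-mémoire, here is one way these recommendations might be captured in practice. This is a minimal, purely illustrative Python sketch: every field name and value is my own invention, not something prescribed by Deenadayalan et al. (2008).

    # Illustrative only: fields and values are hypothetical, not from the review.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class JournalClubMeeting:
        date: str                        # held at regular, predictable intervals (e.g. monthly)
        purpose: str                     # linked to the club's agreed overarching goal
        article: str                     # chosen in line with that purpose
        leader: str                      # presents the paper and directs the discussion
        pre_reading_sent: str            # circulated up to a week beforehand
        appraisal_worksheet: str         # structured critical-appraisal approach to be used
        attendees: List[str] = field(default_factory=list)  # attendance expected and recorded
        practice_implications: str = ""  # formal conclusion: the article in the context of practice

    meeting = JournalClubMeeting(
        date="2018-02-23",
        purpose="Develop critical appraisal of effect sizes",
        article="Simpson (2017), Journal of Education Policy",
        leader="School research lead",
        pre_reading_sent="2018-02-16",
        appraisal_worksheet="Structured appraisal checklist",
    )
    meeting.attendees.append("Head of English")

The code itself matters far less than the discipline it encodes: each of the recommendations above maps onto something concrete that can be planned, recorded and reviewed for every meeting.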

How relevant are these findings to schools?

It seems to me that these findings are broadly applicable to school-based journal clubs and could easily be adapted to meet the needs of individual schools.  Nevertheless, there are a number of points worth further consideration.

First, depending on the nature of the article being reviewed, it makes a lot of sense to get an expert in statistics involved.  Anyone who has read Gorard, See and Siddiqui’s (2017) recent book will be aware of some of the challenges in the correct use and interpretation of p values, statistical significance and confidence intervals.  As for effect sizes, Simpson (2017) provides an interesting survey of the problems associated with effect sizes and their subsequent use in ‘meta-analyses’.

Second, whoever leads the journal club will need to be viewed as credible by colleagues, and not just in their ability to find, select, understand and apply research.  The journal club’s leader also needs to have ‘high-level’ interpersonal skills, so that they can skilfully navigate discussions where colleagues disagree, or where colleagues have had deeply held values and beliefs challenged by the literature.  Indeed, it could be argued that unless the research is ‘challenging’, if not controversial, it is unlikely to provoke deep reflection on practice.

Finally, given the workload pressures on teachers, and the scant, if not non-existent, evidence of journal clubs having an impact upon day-to-day decision-making, very real consideration needs to be given to the reasons why a journal club is being established.  As a mechanism to get ‘research’ into the classroom, it is unlikely to have any impact whatsoever on teachers’ teaching and pupils’ learning.  If, on the other hand, it is seen as an integral part of a wider process of building social capital (Hargreaves & Fullan, 2012) and developing a collaborative culture amongst teachers and other colleagues, it will be of some value.


Next week I’ll be looking at some school-based research by Sims, Moss, and Marshall (2017) into how journal clubs work and whether they can increase evidence-based practice.

References

Deenadayalan, Y., Grimmer-Somers, K., Prior, M., & Kumar, S. (2008). How to run an effective journal club: a systematic review. Journal of Evaluation in Clinical Practice, 14(5), 898-911. doi:10.1111/j.1365-2753.2008.01050.x
Gorard, S., See, B., & Siddiqui, N. (2017). The trials of evidence-based education. London: Routledge.
Hargreaves, A., & Fullan, M. (2012). Professional capital: Transforming teaching in every school. New York: Teachers College Press.
Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy.
Sims, S., Moss, G., & Marshall, E. (2017). Teacher journal clubs: How do they work and can they increase evidence-based practice? Impact, 1(1).


Friday 19 January 2018

The School Research Lead and Journal Clubs - Do we need a logic model?

There is currently much interest in the use of journal clubs within schools.  For example, both the Blackpool (St Mary’s Catholic Academy) and Durrington Research Schools are currently promoting the use of journal clubs in their schools, and the Chartered College of Teaching operates a ‘virtual journal club’ through its monthly book club.  That said, this is nothing new: Beth Giddins has been promoting the use of journal clubs since 2015 (https://www.edujournalclub.com/journal-clubs/).  Nevertheless, there is very little research available on the effective use of journal clubs within schools, with Sims, Moss, and Marshall (2017) being a notable exception.

With that in mind, this post will be the first in a series of blogposts looking at the use of journal clubs.  In doing so, I will be drawing upon a number of systematic reviews from medicine and health settings which examine how effective journal clubs are in supporting both continuous professional development and evidence-based decision-making (Harris et al., 2011; Deenadayalan, Grimmer-Somers, Prior, & Kumar, 2008).  This first post in the series briefly examines a possible logic model/framework for use with teacher journal clubs.

Logic models

Put simply, a logic model graphically illustrates the components of an intervention in terms of inputs, outputs and outcomes.  Figure 1 illustrates how the various elements of a very simple logic model come together.  The inputs represent the resources that are put into the programme or intervention: money, time and skills.  The outputs represent what is done: the activities associated with the programme and who the programme reaches.  Finally, the outcomes are the changes and benefits (and possibly costs) which accrue in the short, medium and long term: for example, changes in teacher knowledge and skills, application in the classroom and improvements in pupil learning.

For a detailed explanation of logic models, have a look at Better Evaluation.

Teacher Journal Clubs - A logic model

Adapting the work of Harris et al. (2011), it is possible to come up with a detailed logic model for how a teacher journal club might work; a simplified, illustrative sketch follows.
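
By way of illustration only, the sketch below shows, in Python, what such a logic model might look like. The entries are my own invented examples of the inputs, outputs and outcomes described above; they are not taken from Harris et al. (2011).

    # An invented, simplified journal club logic model, illustrating the
    # inputs -> outputs -> outcomes structure; not the Harris et al. (2011) model.
    journal_club_logic_model = {
        "inputs": ["staff time", "meeting space", "access to journals", "facilitation skills"],
        "outputs": {
            "activities": ["monthly meetings", "pre-reading circulated", "structured appraisal"],
            "reach": ["classroom teachers", "senior leaders"],
        },
        "outcomes": {
            "short_term": ["improved knowledge and critical appraisal skills"],
            "medium_term": ["research-informed changes to classroom practice"],
            "long_term": ["improvements in pupil learning"],
        },
    }

    # A quick sanity check that no stage of the chain has been left empty.
    for stage in ("inputs", "outputs", "outcomes"):
        assert journal_club_logic_model[stage], f"logic model is missing its {stage}"

Writing the model down in this structured form makes it easier to check that every intended outcome can be traced back to a planned activity and a resource that supports it.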

The benefits of using a logic model

The benefits of developing a logic model are described in more detail at Community Toolbox, but for the purposes of our discussion they can be summarised as follows.
  • Logic models integrate planning, implementation, and evaluation. In other words, developing a logic model will give you a greater understanding of what needs to be done to make the innovation work, and at the same time give you a framework for evaluation.
  • Logic models help you make good matches between activities and effects.  Developing a logic model for a journal club can help you spot intended outcomes that have no supporting activities and resources, and then make suitable adjustments.
  • Logic models can help in the collaborative planning process.  The development of a logic model is an iterative process, and working together on it can help build a shared understanding of what needs to be done to make an intervention work.  It is also helpful when you are looking to disseminate an intervention within or between schools.
  • Logic models can help keep a focus on accountability and outcomes.  In schools, where resources are increasingly scarce, a logic model can keep a focus on the outcomes of an intervention and on whether the planned-for outcomes are actually happening.  Hopefully, this will allow further resources to be allocated when the journal club proves a success.

However, as noted at Community Toolbox, logic models can have a number of weaknesses.
  • A logic model needs to be 'logical'.  If it is not, this will no doubt cause problems for colleagues seeking to implement the innovation.
  • A logic model cannot capture all the variables and elements at work when trying to make an intervention work, so the logic model may move from being 'simple' to being 'simplistic'.
  • A logic model can be both difficult and time consuming to create, so there needs to be a clear trade-off between the time and effort put into creating the logic model and the subsequent benefits.
And finally

Next week I'll look at some of the evidence about what needs to be done to make sure your journal club is a success.



Saturday 6 January 2018

Measuring the impact of Lesson Study in Schools

The end of 2017 saw a considerable amount of discussion about the impact of lesson study, prompted in large part by the EEF evaluation report on lesson study, which suggested it had no impact on pupil learning outcomes. This raises all sorts of questions about how to go about measuring the impact of lesson study. So, to help with this task, Sarah Seleznyov of the UCL IOE has written a guest post on how schools can go about measuring the impact of lesson study.

Guest post by Sarah Seleznyov - Measuring the impact of lesson study in schools

'If we are going to measure the impact of lesson study, we need first to be sure of what it is and what its expected outcomes are. Based on this understanding, we then need to decide what it is important to evaluate and measure.

This is trickier to do than it might seem, since there is very little literature from Japanese authors on lesson study accessible to English-language speakers. Based on an extensive analysis of the literature, and ongoing dialogue with Tokyo Gakugei University professors, listed below are what I believe to be the critical components of the lesson study process:

1. Identify focus

Teachers compare long-term goals for student learning and development with students’ current learning characteristics in order to identify a school-wide research theme.

2. Planning

Teachers work in collaborative groups to carry out kyozai kenkyu (study of material relevant to the research theme). This study leads to the production of a collaboratively written plan for a research lesson. This detailed plan attempts to anticipate pupil responses, misconceptions and successes for the lesson.

3. Research lesson

The research lesson is taught by one teacher, who is a member of the collaborative planning group. Other members of the group act as silent observers, collecting evidence of pupil learning.

4. Post-lesson discussion

The collaborative group meet to discuss the evidence gathered. Their learning in relation to the research theme is identified and recorded. It is intended that this learning informs subsequent cycles of research.

5. Repeated cycles of research

Subsequent research lessons are planned and taught that draw on the findings from the post-lesson discussions.

6. Mobilising knowledge

Opportunities should be created for teachers working in one lesson study group to access and use the knowledge from other groups, either through observing other groups’ research lessons or through the publication of group findings.

7. Outside expertise

Where possible, there should be input from a koshi or ‘outside expert’ involved in the planning process and/or the research lesson. (Seleznyov, S. (forthcoming))

And what are the expected outcomes of teacher participation in lesson study? Lesson study aims to enable teachers to make positive changes to their day-to-day classroom practices so that improvements to pupil achievement are sustained beyond the lesson study process. It is not intended as a quick-fix problem solving approach to teaching and learning, nor a short term professional learning project.

In England, there is overwhelming pressure on schools and school leaders to provide evidence of the impact of any intervention that is intended to improve outcomes for pupils. This evidence of impact is expected to be within what is in research terms a very short time frame: a year, or two years at the most, and must be evidenced in terms of pupil outcomes. Lesson study experts, however, describe lesson study as a focus on the development of expertise over decades, not months.

In line with this focus on the long term development of teacher expertise, we advise teachers not to expect to see an impact on pupil learning after one, two or even three research lesson cycles. This would align with other literature on the development of teacher expertise, eg Hattie, who advises caution in judging ‘expert teachers’ using simplistic assessment measures such as tests, which can only measure improvements in shallow learning.

However, we can anticipate that qualitative changes to some of the ‘soft’ aspects of pupil learning (for example, engagement, resilience, perseverance) will lead to quantitative changes in attainment and progress in the long term (Gutman & Schoon, 2013). Teachers engaged in lesson study can look for key aspects of the pupils’ learning that are likely to lead to the biggest changes in their progress over time and can see the research lessons as a vehicle to gain insight into these aspects.

When evaluating lesson study, school leaders should consider gathering evidence against the full logical chain that might enable changes to pupil outcomes. Our logical chain (Godfrey et al., forthcoming) is based on the work of Guskey, who describes five different levels at which the effectiveness of professional development can be measured:

1. Teachers’ reactions

Teachers’ attitudes to and enjoyment of professional learning might improve. Did teachers like lesson study? How was it different to / better than the other professional development they experience?

2. Teachers’ professional learning

Teachers might report improved subject knowledge, pedagogical content knowledge and self-confidence. What did teachers learn? How did lesson study enable them to learn this?

3. The organisation’s professional development model

The structure, time, resourcing of the school’s professional learning programme might alter in order to accommodate lesson study. Cultural attitudes towards professional learning might shift in relation to: the role of peer-to-peer learning, teacher ownership of learning, lesson observation as learning not performance. Did we as leaders do enough to enable the lesson study to be of as high a quality as possible? Has teachers’ participation in lesson study made them think differently about eg working with other colleagues, lesson observations, etc?

4. Teacher use of new knowledge and skills

Teachers’ newly acquired confidence, subject knowledge and pedagogical content knowledge should lead to changes in practice. Have teachers made any changes to their day-to-day classroom practice based on what they learnt through lesson study? How substantial are these changes to practice?

5. Pupil learning outcomes

Changes in teachers’ practice should lead to improved attitudes to learning and ultimately progress for pupils, in terms of evidence from written work and assessment data. Have teachers noticed any changes to pupil learning, based on the changes they have made to their practice? What is their evidence for claiming this?

School leaders considering how best to evaluate the impact of their school’s lesson study projects should consider which of the above five levels are of most relevance to their own context and seek evidence against those priorities, rather than focusing solely on level five. And think also about how the school’s professional development model supports the effective implementation of lesson study:

• To what extent has your school invested in a long term approach to professional learning and does your proposed cost-impact evaluation for lesson study take this into account?

• How can your impact evaluation support teachers to focus less on the short term quantitative analysis of pupil assessment data and more on the longer term qualitative analysis of pupil learning?

• Can a process be put in place to gather long term evidence of impact on pupil learning?

For more information on our lesson study programmes, to join the UCL Institute of Education Lesson Study Network or to purchase the UCL Lesson Study Handbook, contact: s.seleznyov@ucl.ac.uk'

References:


Godfrey, D., Seleznyov, S., Wollaston, N., and Barrera-Pedamonte, F. (forthcoming). Target oriented lesson study (TOLS): Combining lesson study with an integrated impact evaluation model.

Seleznyov, S. (forthcoming). Lesson study: an exploration of its translation beyond Japan