The underpinning thesis
The main 'thesis' underpinning Brown et al's presentation was a call to integrate data-based decision-making (DBDM) and research-informed teaching practice (RITP) into a comprehensive, professional-learning-based approach designed to enhance teaching quality and, as a result, increase student achievement. As such, the model attempts to use the strengths of each approach to offset the weaknesses of the other. Brown et al summarise the offsetting benefits of each approach as follows:
- A weakness of RITP is that it is not based on a real need in the field; DBDM, by contrast, starts with the vision and goals of a specific school, focusing on a context-specific problem.
- A weakness of DBDM is that data can inform educators about problems in their school, but not about what causes those problems; with RITP, educators can draw upon a variety of effective approaches to school improvement.
- A weakness of RITP is that 'one size does not fit all'; with DBDM, schools develop a context-specific solution based on their own data.
- A weakness of DBDM is that data can be used to pinpoint possible causes of a problem, yet educators may still not know the best available course for school improvement; RITP lets them pick a promising solution based on an existing evidence base (slide 23).
An eight-stage cycle of evidence-based inquiry
Brown et al then go on to integrate DBDM and RITP into the following eight-stage cycle.
On initial reflection, this model provides an incredibly useful way of thinking about the relationship between data and research evidence within the inquiry cycle, with the local data setting the scene for the necessary research evidence. Indeed, one of the things I have been thinking about lately is where 'local data-collection' fits within the models of evidence-based practice and evidence-based medicine which I have previously described and advocated, and this model clearly places local data prior to seeking the research evidence.
Potential limitations
However, for me, the model has two limitations that need to be taken into account: first, the determination of causes comes before the collection of data; second, there is the potential for an increased risk of confirmation bias. So let's now examine each of these limitations in more detail.
Causes before data or vice versa
The eight-stage model presented places the stage of determining possible causes of the problem before the collection of data about those causes, which to me seems out of sequence: the two stages should be the other way round. Indeed, there are other models of data-led inquiry where data collection comes before diagnosis, as illustrated in the four-stage model of inquiry popularised by Roger Fisher and William Ury in their classic book Getting to Yes.
In this model the data stage focuses on identifying what is wrong and what the current symptoms are, and then moves to identifying possible causes in the diagnosis phase, with the next two phases being direction setting and action planning. For me - and I know that when you use these cycles you move back and forth between the stages - it seems far more sensible to place data collection before the diagnosis phase. In addition - and I thank Rob Briner for this observation - the proposed process seems to separate the use of research evidence from the diagnosis phase. Yet why would you not use research evidence to help you identify possible causes of the symptoms being experienced?
The potential for cognitive bias
By placing vision and goal setting next to the determination of causes, and by drawing conclusions prior to searching the research evidence for solutions, the model may give rise to confirmation bias, i.e. the tendency to selectively search for, or interpret, information in a way that confirms your perceptions or hypotheses. Determining the vision and goals may lead to certain problems being highlighted because they are consistent with the already established vision, whereas other problems, or data inconsistent with the 'vision', may be ignored. Furthermore, given that conclusions are drawn before the search for research evidence, research evidence may be sought which is consistent with the conclusions that have already been drawn.
A possible alternative
Barends, Rousseau and Briner (2014) provide an extremely useful definition of evidence-based practice, which identifies a six-stage process for making decisions based upon evidence: Asking, Acquiring, Appraising, Aggregating, Applying and Assessing. In this model the search for and acquisition of evidence happens at stage two, with research evidence, organisational data and stakeholder views all being accessed. This leads to the subsequent appraisal and aggregation of the evidence, which is then incorporated into the decision-making and action-planning process.
However, this process does not fully incorporate all of the elements included in the proposed eight-stage cycle of evidence-informed inquiry, i.e. vision and goal setting. This 'omission' can be rectified, if so required, by the inclusion of another 'A' representing Ambition - aims and goal setting, relating to the vision, mission and values of the school - which gives us a seven-stage cycle of evidence-based inquiry.
A seven-stage cycle of evidence-informed inquiry
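To make the running order concrete, the cycle can be set out as a simple ordered list - a sketch only: stages two to seven carry Barends, Rousseau and Briner's six A's, and the one-line glosses are my shorthand rather than quotations from either source.

```python
# A sketch of the proposed seven-stage ('7 A's') cycle of evidence-informed
# inquiry. Stage labels 2-7 follow Barends, Rousseau and Briner (2014); the
# glosses are shorthand, not quotations.
SEVEN_AS_CYCLE = [
    ("Ambition",    "aims and goal setting, tied to the school's vision, mission and values"),
    ("Asking",      "translate the problem into an answerable question"),
    ("Acquiring",   "gather research evidence, organisational data and stakeholder views"),
    ("Appraising",  "judge the trustworthiness and relevance of the evidence"),
    ("Aggregating", "weigh and pull the different sources of evidence together"),
    ("Applying",    "incorporate the evidence into decision-making and action planning"),
    ("Assessing",   "evaluate the outcome of the decision and feed it into the next cycle"),
]

for number, (stage, gloss) in enumerate(SEVEN_AS_CYCLE, start=1):
    print(f"{number}. {stage}: {gloss}")
```

Note that the acquisition and appraisal of evidence - local data included - sit before any conclusions are drawn, which addresses both of the limitations identified above.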
Some final words
Brown et al make a powerful case for combining DBDM and RITP and have developed a potentially useful initial synthesis of the two processes. However, the proposed model has some inherent limitations, namely diagnosis taking place before data collection and an increased possibility of confirmation bias. Finally, an alternative model - the 7 A's of evidence-informed inquiry - is put forward as a way to think about the process of linking DBDM and RITP.
Comments

Dear Dr. Jones,
Thank you for writing this blog post about the importance of using evidence, and for your statement that we make a powerful case in combining data use and research use into one comprehensive evidence-use framework. You have also described two limitations of our framework, and I would like to respond to those.
The first is that the causes appear to be determined before data are collected. However, this is not what we propose. We believe that, first, data need to be collected in order to determine whether the experienced problem (e.g., poor student achievement) is in fact a problem. If the problem is indeed present, educators need to formulate several hypotheses about possible causes before data are collected. After that, those hypotheses can be compared with the scientific literature.

For example, when teachers hypothesize that classroom size caused students' poor results, the scientific literature can point out that this is not likely to be the case (e.g., a meta-analysis by Hattie (2009) showed that classroom size has an effect size of 0.21, which is considered to be in the range between small and medium). Moreover, when teachers hypothesize that their lack of small-group learning caused students' poor achievement, that same meta-analysis can point out that this is more likely to be the case (small-group learning has an effect size of 0.49, which is considered to be in the range between medium and large). Thus, in selecting a hypothesis, the scientific literature can guide the way by providing information about which causes are (or are not) likely to be found. This is not to say that it replaces educators' professional judgment, but it can help them to focus their efforts effectively. Once educators have collaboratively agreed upon the hypothesis they are going to study, they collect their data. Taken together, you are absolutely right in stating 'why would you not use research evidence to help you identify possible causes of the symptoms being experienced': that is exactly what we propose.
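For readers less familiar with effect sizes: the figures quoted above are standardized mean differences - presumably on the Cohen's d scale, which is what Hattie's syntheses report. As a rough sketch, d expresses the gap between two group means relative to their pooled spread:

$$
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

On Cohen's oft-cited benchmarks, values around 0.2, 0.5 and 0.8 are read as small, medium and large respectively, although authors differ on exactly where those boundaries fall.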
This also means that the second limitation you mention, the possible risk of cognitive bias, is diminished. A hypothesis needs to be formulated in a specific manner. When descriptive statistics are used, concepts like 'large difference' need to be defined before the data are collected. When statistical tests are used, common statistical rules (e.g., α = .05) apply. Furthermore, in our framework, vision and goal setting refer to the educational problem: what is the extent of the problem experienced by the educators, and when will they be satisfied with a solution? Do students' achievements need to increase by an average of 1 point within one year, or by 2 points? Such goals are used as a benchmark later on, to evaluate whether the designed and implemented actions for improvement led to the desired results. Thus, collecting data about your goal (e.g., increased student achievement) involves different data than collecting data about the causes of your problem (e.g., a lack of small-group learning). This decreases the chance that setting a certain goal will influence the causes that are studied.
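As a minimal sketch of what such a pre-specified decision rule looks like in practice (the scores, threshold and group labels below are invented for illustration; they are not taken from any study):

```python
from scipy import stats

# Decision rule fixed BEFORE any data are collected:
ALPHA = 0.05          # conventional significance level
MIN_DIFFERENCE = 1.0  # smallest average gain (in points) we would call meaningful

# Hypothetical data gathered afterwards: scores from classes taught with and
# without small-group learning (invented numbers, for illustration only).
with_small_groups = [7.5, 8.0, 6.8, 7.9, 8.3, 7.1, 7.6, 8.1]
without_small_groups = [6.2, 6.9, 5.8, 7.0, 6.4, 6.1, 6.7, 6.5]

t_stat, p_value = stats.ttest_ind(with_small_groups, without_small_groups)
mean_diff = (sum(with_small_groups) / len(with_small_groups)
             - sum(without_small_groups) / len(without_small_groups))

# Apply the rule exactly as pre-specified: both the significance test and the
# practical-relevance threshold were fixed in advance, so the data cannot be
# reinterpreted afterwards to fit a preferred conclusion.
if p_value < ALPHA and mean_diff >= MIN_DIFFERENCE:
    print(f"Mean difference {mean_diff:.2f} points (p = {p_value:.3f}): "
          "consistent with the small-group-learning hypothesis.")
else:
    print(f"Mean difference {mean_diff:.2f} points (p = {p_value:.3f}): "
          "pre-specified criteria not met.")
```

Because both the significance level and the practical threshold are committed to up front, a disappointing result cannot quietly be relabelled a success afterwards - which is exactly the protection against confirmation bias described above.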
I hope this clarifies our evidence-use framework. Let us continue this discussion - maybe other researchers or educators would like to respond to the ways in which evidence can be used to improve education?
Kind regards
Mireille Hubers
To access my publications, please go to: www.researcherid.com/rid/G-8636-2015
Follow my research activities via: https://twitter.com/mdhubers
Good points, Mireille. To summarize, the key starting points for evidence-informed education are: defining a problem in your school (current situation) and a goal (desired situation). Use data to verify the problem and its extent. Come up with possible causes of the problem, based on the experience and knowledge of school staff combined with the research literature. Collect local data to investigate these possible causes. For the other steps of the proposed cycle, or for the paper, contact us at: k.schildkamp@utwente.nl