Sunday, 29 November 2015

Is it any good? Merit is not the only fruit

If you are a school research lead, practitioner inquirer or someone interested in what works in schools, this post is for you.  In this post, I will use Stufflebeam and Coryn's (2014) extended definition of evaluation to help examine some of the values associated with in-school evaluation.  I will then go on to consider some of the tasks associated with evaluations and the operational implications for the school research lead.
So what do we mean by an extended definition of evaluation?

A useful place to start is Stufflebeam and Coryn's  (2014) extended definition of evaluation, which they describe as:

… the systematic process of delineating, obtaining, reporting, and applying descriptive and judgmental information about some subject’s merit, worth, probity, feasibility, safety, significance, and/or equity. (p. 14)

Stufflebeam and Coryn acknowledge that they might have included other values; when conducting an evaluation, discussions will therefore need to take place as to whether other values, relevant to the context, should also be included.  That said, it is likely that most evaluations will include some, if not all, of the seven listed values.

So what are the extended values?

In a recent post, I discussed the distinction between 'merit' and 'worth', so I will only briefly revisit them now.
  • Merit - the intrinsic quality or excellence of the service/programme/project/innovation without reference to costs.
  • Worth - the quality of the item, taking into account both context and costs.
More detailed descriptions of the other five extended values are detailed below.
  • Probity - has the activity/programme under review been conducted with due regard to ethical considerations such as honesty, integrity and ethical behaviour?  As such, evaluators should check a programme's uncompromising adherence to the highest moral standards and err on the side of too much consideration of probity.
  • Feasibility – does the service/programme consume more resources than are available, or are there political considerations which make the activity undeliverable?  As such, an evaluator's decision may justify the non-continuation of a programme.
  • Safety – are those engaging with the service/programme vulnerable to harm, for example physical or psychological?  This value is applicable to evaluations in all fields.
  • Significance – what is the potential of the service/programme and its importance within a given context?  Sometimes programmes are only relevant in the short-term, or only have local interest.  On the other hand, some programmes have a relevance far beyond the evaluation setting.  As such, a key question for the evaluator is whether the project/service is scalable and whether it will work in other, different settings.
  • Equity – this can include: provision to all; access for all; equal participation; impact on different groups.  Kellaghan, cited by Stufflebeam and Coryn, argues that there are seven indicators of the existence of equity:
    1. A society's public educational services will be provided for all people
    2. People from all segments of the society will have equal access to the services
    3. There will be close to equal participation by all groups in the use of the services
    4. Levels of attainment - for example, years in the education system - will be substantially the same for different groups
    5. Levels of proficiency in achieving all of the education system's objectives will be equivalent for different groups
    6. Levels of aspiration for life pursuits will be similar across societal groups
    7. The education system will make similar impacts on improving life accomplishments of all segments of the population (especially ethnic, gender, socio-economic groups) that the education serves.  (Stufflebeam and Coryn, p. 14)
So how do we operationalise evaluation?

Stufflebeam and Coryn characterise the work of evaluators under four main headings.
  • Delineating – determining key questions, audiences, values, criteria, information sources and, where appropriate, budget
  • Obtaining – obtaining, aggregating and analysing relevant information
  • Reporting – providing the sponsor, other audiences and stakeholders with feedback about the outcomes of the evaluation
  • Applying – assisting the evaluation sponsor to apply the findings of the evaluation
The final feature of Stufflebeam and Coryn’s definition of evaluation to be considered concerns the nature of the information included in the evaluation.
  • Descriptive information – this should provide a range of factual statements that ‘objectively’ describe the programme/service.  This could include: to whom the service is offered; when it was offered; how many people engaged with the provision; how it was resourced – human, physical and financial; and the cost of the provision
  • Judgmental information – this includes gathering the views of those involved in the service/innovation/provision on the ‘quality’ of the service.  This should involve judging the provision against a set of values and criteria
So what are the implications for the school research lead?

For me there are several implications for the school research lead and senior leaders within a school.
  • If the role of the school research lead involves helping colleagues try to work out what works, this should be seen as a necessary but not sufficient condition for evaluations.
  • The evaluative questions need to include: what works, for whom, to what extent, and in what context.
  • Evaluation will require the application of values to the work of colleagues, which will require the evaluator to be particularly skilled in managing the internal politics within a school.
  • If a school is conducting a range of practitioner inquiries, it's important to try and ensure that all types of pupils are 'covered' by the inquiries.
  • Be clear whether the evaluation is about bringing about improvement or providing an overall judgement of the programme/evaluand.
  • Informal evaluations may be useful in generating discussion, but are unlikely to provide sufficiently rigorous evidence to justify scaling up informally evaluated programmes within a school.  However, they may be useful within a formative evaluative context.
  • Formal and detailed evaluations are necessary where the outcome of the evaluation is likely to involve a critical operational decision within the school.
Some final words

Although this post has focussed primarily on school evaluation and has put the 'school research lead' in the foreground of the evaluative process, being a skilled evaluator is a responsibility for every teacher within a school.  Indeed, applying the values of the extended definition to current practices may provide a very stimulating 'provocation' which results in changes in practice.

References


Stufflebeam, D. L. & Coryn, C. (2014). Evaluation Theory, Models & Applications (second edition). San Francisco: Jossey-Bass.
