
July 26, 2018

Education Evaluation: Reflections from the Field

By Henre Benson, Chief Operations Officer, CASME (Twitter: @henreb)

The monitoring and evaluation experience of implementers of education interventions can be traumatic. NGOs, already stretched to capacity, face the prospect of having their internal processes brought under scrutiny, their deliverables and dosages counted, and their outcomes tested. The reality is that in many cases the impact is unclear and attribution is nearly always uncertain. Nobody sets out to fail. Most interventions are based on a theory of change, model or idea (whether rigorously tested or simply drawn from years of experience) and a belief that what is intended will work. In many cases that belief is well founded, as these initiatives are changing lives. There are countless personal stories of learners, schools and teachers presented with new opportunities, brighter futures and hope as a result of training, support or an essential resource.

However, when viewed through the lenses of quantitative evaluation frameworks or controlled comparison groups, the impacts are not always as expected. Naturally, the evaluation results can and will be defended. Within highly complex systems, education being one, mitigating factors will be cited: contexts beyond project control, systemic factors, or challenges simply too overwhelming to address within the confines of time-bound, resource-limited education interventions.

Given the extent of time, money and resources invested in education interventions, it is understandable that the sector's efforts to monitor and measure impact are now entrenched. The work of evaluation is to illuminate successes and failures, but it should also be transparent about its own limitations, presenting these as a preface rather than a footnote.

So whilst most right-thinking, learning organisations will respect and appreciate the value of evaluations, it must also be said that evaluations are not always well planned, executed or communicated.

A few represent the worst in class: for example, evaluation that is exclusively summative, usually designed post hoc and drawing illegitimate conclusions from superficial data. These tend to be compliance-driven ("oops, did someone say we need to evaluate this?") and are executed simply to meet that request. Similarly, when an evaluation is commissioned after the start of an intervention, it typically prompts a scramble for baseline data and a cobbling together of frameworks and indicators.

I think what evaluators do need to recognise is that they are not necessarily experts in the fields they are evaluating (though sometimes they are). In some cases they are not even experts in social research, just someone with a laptop, a keen ability to draw graphs and a talent for writing compelling reports. Fortunately, with capacity-building and professionalisation commitments, coupled with recognition of the important role evaluation plays, these types of evaluators are a diminishing breed. Where there is honest and transparent engagement, evaluators will facilitate the development of a framework alongside the experts rather than impose unrealistic measures to simplify their own work. Education development is complex and happens within a system of super-complexity. The work of teachers and teacher-educators is not simple, and nor should be the work of those evaluating it.

In my experience, well-planned, well-executed and well-communicated evaluation is an opportunity for all stakeholders to reflect and learn, if that is how your organisation chooses to respond. Achieving this requires management of knowledge, relationships and process, not just process. The most worthwhile evaluations are developmental, aiming to improve impact and outcomes as implementation takes place. This means adopting an agile approach: a fail-fast-forward mindset. It also means that evaluation reports should be shared widely and timeously, and that opportunities to engage with them are created. That engagement requires leadership and, where necessary, the courage to change the things we can. The worst evaluations are those that are ignored, both by implementers and by donors.
