We use ex-ante studies, process evaluations, economic evaluations, outcome evaluations and impact evaluations to answer a wide array of questions. End-of-programme evaluations frequently aim to answer questions such as:
- “Was the project implemented as planned?”
- “What results were achieved by the intervention?”
- “What is the likelihood that intervention effects will be sustained?”
They may also ask:
- “Why did the intervention achieve good results in context A, and poor results in context B?”
To answer these questions, different evaluation approaches, designs and methods may be appropriate. It is not reasonable to expect that one type of evaluation design can adequately address all evaluation questions in all contexts.
A randomised controlled trial (RCT) or another quantitatively oriented design, such as a regression discontinuity design or propensity score matching, might be the best method to answer: “What magnitude of change was achieved by the intervention that would not have been achieved otherwise?” in a specific case. Such a design will likely need to be supplemented with qualitative methods, such as descriptive case studies and ethnographic observation, to answer the important “why” questions.
The NONIE guidance on Impact Evaluation recognises that an RCT is not always required to answer impact questions. Alternative impact evaluation designs could be used when it is not necessary to quantify the effects of an intervention, but the effects still need to be attributed to the intervention. The General Elimination Method and Causal Contribution Analysis are two such designs that rely on mixed-method data collection. Theory-based evaluation is also a widely used alternative.
The ILO pokes fun at the idea that the RCT is some kind of “gold standard” and concludes: “The only standard that does exist is one of methodological appropriateness.”
In selecting an evaluation approach, we need to strive for rigour. Michael Quinn Patton, the author of classic evaluation texts such as Developmental Evaluation, says: “Rigor does not reside in methods, but in rigorous thinking.” If this is true, evaluators and those who commission evaluations are invited to critically engage with more than just the evaluation design. We are invited to embrace evaluative thinking. Only then will we learn from our efforts to solve education problems.
An excellent text on quantitative evaluation designs such as RCTs is Shadish, Cook and Campbell’s Experimental and Quasi-Experimental Designs for Generalized Causal Inference. It is not an introductory text, however.
Between 2001 and 2008, a debate about impact evaluation raged in the international evaluation community. It culminated in a consensus statement, published in 2008 as the NONIE Guidance on Impact Evaluation.
John Mayne’s Contribution Analysis approach aims to make credible causal claims about the contribution an intervention is making to observed results, using rigorous alternatives to RCTs. Michael Scriven explains the premises of the General Elimination Method, which aims to identify the modus operandi underlying programme strategies. Theory-based evaluation, as explained by Carol Weiss, has a long history and examines the mechanisms that mediate between intervention processes and outcomes.
In recognition that our complex problems are not easily fixed, and our evaluations should therefore be responsive to this complexity, evaluation approaches such as Michael Patton’s Developmental Evaluation and Pawson and Tilley’s Realist Evaluation have gained traction.
It is increasingly recognised that evaluative thinking is necessary if we want to truly learn. A 2018 issue of the journal New Directions for Evaluation, edited by Vo and Archibald, interrogates this.