I have blogged here highlighting the obsession in international development circles with new ideas and innovations, and the neglect of regular development interventions, and examining the reasons for it; here questioning the belief that there are new ideas and innovations waiting to make a transformative impact; and here arguing that policies matter very little in most of development, and that it's mostly about implementation.
I have also blogged earlier that evaluation in the context of international development refers mainly to new programs, focuses on headline impact, and is post facto. This is at variance with the real-world value of evaluations. A presentation on this topic is here.
This post will question the conventional wisdom on impact evaluations and instead argue that evaluations should focus on the use of administrative data (and surveys) coupled with qualitative information to help improve the effectiveness of implementation.
International development actors are focused on new ideas (and not new programs; the two are different, and I'll blog about this next). This partially explains the focus on headline evaluations. But as I blogged here, there are too few altogether new ideas and interventions. Instead, there are badly implemented programs built from well-known sets of program features.
This makes evaluations aimed at improving the implementation of ongoing programs and interventions very relevant. Rather than assessing headline outcomes or impact, the requirement is to understand whether the program is being implemented with fidelity and whether the premised theory of change is holding up. Ideally, instead of providing static post-mortems, evaluations should provide actionable concurrent decision support.
Consider a few examples of development interventions: a cash transfer program, a health insurance program, an agriculture extension program, a skill development program, a school ICT program (smart classrooms and tablets), a price stabilisation scheme for certain agricultural crops, a mobile health clinic program, a maternal and child health intervention, an agricultural free-power metering initiative, a community mobilisation campaign, and so on.
In all these cases, certain issues are of first-order interest to policy makers and program implementers. Consider the following questions:
1. Are all the important implementation elements being captured and periodically reviewed? How can monitoring and review be improved?
2. Is the program being implemented with fidelity as intended? If not, where are things not going right and what can be done to address them?
3. Is the theory of change holding? What are the signatures that point to its failing or succeeding?
4. What are the proximate determinants (or proxies) of impact? Is the program generating the expected impact? If not, what's going wrong and what can be done to address it?
5. In light of all the above, what can be done to improve outcomes? Are there program design changes, complementary interventions, or technology applications that could help?
The mainstream quantitative and experimental evaluation strategies and techniques are not suited to investigating these questions, which turn on the details of a program's operational dynamics. Moreover, unlike in developed countries, program evaluations in developing countries face some important additional challenges.
For a start, given weak state capability, implementation fidelity is generally poor. Any evaluation therefore ends up assessing a poorly implemented version of the program; it becomes an evaluation of the implementation more than of the program itself.
From the perspective of evaluation, a major deficiency of government programs in developing countries is the lack of a clearly delineated theory of change and of objectives articulated in terms of desired outcome and output parameters. Both are essential for examining intermediate and proximate indicators, and thereby for assessing the broad direction and extent of the change the intervention is producing.
In development interventions, there are most often so many confounding factors that establishing a causal relationship is almost impossible. Finally, several contextual factors (social, cultural, and local political) act as binding constraints that weaken implementation fidelity and the realisation of outcomes. All these problems are more pronounced in developing-country contexts.
In these circumstances, meaningful program evaluations require a combination of quantitative and qualitative assessments. The former would include analysis of administrative data and carefully designed surveys, layered on top of observational studies of the program's implementation.
Further, these evaluations should prioritise the assessment of execution fidelity. This would entail looking at inputs, processes, and intermediate outputs, and examining and answering the questions posed above. The answers should provide decision support to tweak and improve implementation.
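To make this concrete, here is a minimal sketch (in Python, using pandas) of what concurrent, fidelity-focused decision support built on administrative data might look like, taking the maternal and child health example from above. The dataset, column names, indicators, and threshold are all hypothetical assumptions for illustration, not any real program's schema; the point is simply that routine administrative records can be turned into periodic flags for review, rather than a one-time post-facto verdict.

```python
import pandas as pd

# Hypothetical administrative data for a maternal and child health
# intervention: one row per district per month. All column names and
# values below are illustrative assumptions, not a real schema.
records = pd.DataFrame({
    "district":           ["A", "A", "B", "B"],
    "month":              ["2023-01", "2023-02", "2023-01", "2023-02"],
    "anc_visits_done":    [820, 790, 310, 260],    # antenatal check-ups completed
    "anc_visits_planned": [1000, 1000, 500, 500],  # check-ups due per the register
    "worker_days_filled": [22, 21, 11, 9],         # health-worker attendance
    "worker_days_total":  [24, 24, 24, 24],
})

# Process (fidelity) indicators: are the inputs and activities happening
# as designed? These map to questions 1 and 2 in the list above.
records["anc_coverage"] = records["anc_visits_done"] / records["anc_visits_planned"]
records["staffing_rate"] = records["worker_days_filled"] / records["worker_days_total"]

# Flag district-months falling below an (assumed) review threshold, so that
# monthly reviews focus on where implementation is drifting, rather than
# waiting for a post-facto headline impact estimate.
FIDELITY_THRESHOLD = 0.7
flags = records[(records["anc_coverage"] < FIDELITY_THRESHOLD) |
                (records["staffing_rate"] < FIDELITY_THRESHOLD)]

print(flags[["district", "month", "anc_coverage", "staffing_rate"]])
```

In practice, such flags would be only the quantitative layer: they tell reviewers where fidelity is slipping, while field observations and other qualitative information are needed to diagnose why, and what to do about it.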
There's also a genuine demand-side failure that has contributed to the absence of such evaluations. Politicians assume that their programs must and will succeed. Bureaucrats, for a variety of reasons (cognitive biases, over-confidence, poor state capability, the absence of a supply side, procurement challenges, uncertainty and increased work, limited posting durations, etc.), do not demand evaluations of the kind that would generate actionable implementation decision support. Instead, the impact evaluations commissioned by governments are almost always done with the unstated intention of validation and publicity.
In conclusion, headline impact evaluations of ongoing programs are likely to be neither accurate nor relevant, and are unlikely to resonate with policy makers. The presence of numerous confounding factors means there are likely to be several omitted-variable biases, weakening the findings of any impact evaluation. These also provide convenient excuses for policy makers and politicians to reject such findings. In any case, junking an ongoing program, or making significant changes to it, runs into political economy and other problems.