
Friday, September 17, 2021

An alternative perspective on evidence and impact analysis

This post presents an alternative perspective on impact analysis (and the use of evidence), one that questions some of the settled assumptions in assessing the impact of development interventions.

I believe that the prevailing perspective and dominant methodologies used to generate evidence for assessing the impact of development interventions have three problems. They implicitly assume that a good idea or program design translates into outcomes, that outcomes become apparent soon, and that there is some definitive marker of success for any intervention.

1. The translation of an idea into action involves three parts - the design of the intervention, its implementation, and the availability of essential complementary conditions. If the intervention is well designed, implemented as intended, and the complementary requirements are met, then outcomes follow. Unfortunately, both design and implementation depend on the context, and the complementary requirements are beyond our control.

2. The realisation of impact in terms of outcomes generally takes a long time, with outcomes often surfacing well after the intervention is completed. This is true of most commonly observed interventions in development.

3. There is no binary outcome of success or failure with an intervention; there are only degrees of success or failure. There are no absolute failures or successes. And assessing degrees of success or failure is hard, sometimes impossible. This assessment is made harder still by the three constraints mentioned above and the time required for outcomes to surface.

Instead, the impact assessment enquiry has to be more nuanced. What's the degree of impact generated by the intervention, given the context and subject to its design constraints and implementation deficiencies? What's an acceptable degree of impact, given these limitations? What improvements can be made in the design and implementation of the intervention?

This shifts the primary objective of evidence generation for impact analysis away from headline outcomes assessment and towards relaxing design constraints, bridging implementation deficiencies, and addressing complementary requirements - and also towards identifying good proxy indicators for the likelihood of outcomes being realised. The enquiry becomes one of using qualitative and quantitative techniques to generate evidence and insights that can help with these three requirements.

This perspective is also reinforced by the reality that the landscape of development consists overwhelmingly of large-scale ongoing programs (running schools and hospitals, immunisation programs, insurance, scholarships, nutrition delivery, toilet construction, housing, agriculture extension services etc.), which are either essential public mandates or cannot be discontinued for political economy and other reasons. The space available for altogether new interventions is very limited and of marginal relevance. The word 'innovation' is a much-hyped superfluity and a distraction in development - improvisation is a better alternative.

Instead, the space available for improvisation with design tweaks, implementation improvements, and complementary factor enhancements is large and less politically contested. Exploring improvements in these areas requires a combination of qualitative and quantitative methods.

Once we accept this perspective, several aspects of the dominant narrative on impact analysis (and evidence-based policy making) in international development will have to change.
