Substack

Monday, February 20, 2023

Examining the theory of change of evidence in development

I'll do three posts this week on development. The first will focus on the value of evidence in development. The second will examine the value of innovation. The final post will draw from the first two and posit an agenda for philanthropists to engage with governments, in the specific context of India. 

This post will examine the premise behind the conventional wisdom on the value of evidence in development. As I have blogged several times (here, here, here, here, here, here, here, here, here, here and here), this premise arises from simplistic mental models about how things happen in the real world. 

In the theoretical world of development, the theory of change on evidence in development can be summarised in terms of two possible pathways. One, if we are able to establish evidence that an ongoing program is not having impact, then it can be discontinued. Two, if we can demonstrate impact in a pilot, then it can be scaled up. 

However, in the real world of development, neither of these holds. 

1. Contrary to logic, it's very difficult, if not impossible, for a full government program, or even a critical feature of a program, to be canned just because some evidence has emerged about its ineffectiveness. There are at least three reasons. One, the emergent evidence will always be rationalised away as not credible, or attributed to unique contextual factors, or to some confounding factor. Two, the absence of impact will be rationalised away as a temporary phenomenon (say, once teachers and students learn how to use the technology, Edtech will start showing results). Three, there are only so many ways in which you can do the fundamental things in education, health, skilling, agriculture etc, which are captured in these programs, and therefore canning them does not arise as a possibility.  

2. Again contrary to logic, there are very few altogether new scalable ideas (apart from specific products/drugs or IT solutions) in major development sectors (health, education, nutrition, livelihoods, skills, agriculture etc) which can be converted into a program. In some variant or the other, all of these have been tried out somewhere, sometime. Or they are already part of an ongoing program somewhere. Or their impacts are too marginal to merit the considerable bandwidth of an acutely capacity-constrained public system. Deworming pills, nifty nudges, and information disclosures are examples. And in any case, successful pilots mean little in the complex world of public policy making. 

Actually, I'll go further. Even if there are innovations, they get adopted not because of evidence but because they have become part of a powerful narrative, to which evidence is only one of the contributors. As I blogged earlier, paraphrasing Lant Pritchett, development is a faith-based activity, where the value of incremental evidence is marginal and primarily in hastening the formation of a narrative strong enough to tip the balance in favour of the change.  

In the circumstances, what's the value of evidence?

I can see one important use: to ensure execution fidelity, i.e. that the program gets implemented as per its prescribed design and guidelines. Here the administrative data generated by the processes and transactions involved in government programs assume significance. Data analytics can be used to generate actionable and relevant insights about deviations, abuse/manipulation, ineffectiveness, inefficiencies etc. Unfortunately, in a world enamoured of innovations, too few people want to engage in this unsexy, painstaking, and less-innovative endeavour. 
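To make the point concrete, here is a toy sketch of what such analytics on program transaction data might look like. Everything in it is invented for illustration: the field names, the payment cap, and the records themselves are hypothetical, and real administrative datasets would need far more careful handling.

```python
# Toy sketch: scanning hypothetical welfare-program payment records
# for simple execution-fidelity red flags. All field names, thresholds,
# and records below are invented for illustration.
from collections import Counter

def flag_anomalies(records, max_amount=5000):
    """Return red flags: beneficiary IDs paid more than once (possible
    double payment) and payments above the scheme's sanctioned cap."""
    flags = []
    # Count how many payments each beneficiary received
    counts = Counter(r["beneficiary_id"] for r in records)
    for r in records:
        if counts[r["beneficiary_id"]] > 1:
            flags.append(("duplicate_beneficiary", r["beneficiary_id"]))
        if r["amount"] > max_amount:
            flags.append(("amount_above_cap", r["beneficiary_id"]))
    return flags

records = [
    {"beneficiary_id": "B001", "amount": 4800},
    {"beneficiary_id": "B002", "amount": 5200},  # above the cap
    {"beneficiary_id": "B001", "amount": 4800},  # paid twice
]
print(flag_anomalies(records))
```

Even checks this crude, run routinely over the full transaction stream rather than a sample, are closer to the "execution fidelity" use of evidence than most impact evaluations are.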

And even when they engage, they prefer to focus on distracting issues like AI/ML or technology solutions and management-speak whose actual value in addressing the development issue at hand is deeply questionable.  

On a related note, in some cases evidence can also be used to iterate on and improve the design of a program. But this requires such high capacity that its application will be rare in weak state capacity environments. Nonetheless, for policy makers and academic researchers engaged with bringing evidence to bear on public policy, the use of administrative data should be the gold standard in the application of evidence to policy making. 
