
Saturday, December 22, 2012

Evidence-based policy making – Missing the woods for the trees?


There is no denying that evidence should inform public policy design. However, what constitutes evidence, and how it should inform the policy design process, remain matters of debate. There may also be a need to revisit the interpretation of the "external validity" of research findings beyond their current locational - geographical, social, and cultural - context.

Conventional wisdom on evidence-based policy making predominantly views "evidence" as emerging from a process of scientific research. Even within research, in recent years there has been a trend to seek evidence from field experiments, preferably randomized controlled trials. This approach provides limited space in policy design for "priors", especially those drawn from sources with "less than objective" underpinnings. Indeed, in an ideal world the entire policy edifice would be constructed on the objectivity of experimentally established scientific wisdom.

External validity of experimental research is generally viewed in terms of its replicability in “other environments”. Critics of experimental research view this as its Achilles heel. But even before we come to other environments, there are concerns about its replicability within the same environment, when scaled up.

Can we afford the "luxury" of an ideal policy design, informed by rigorous experimental evidence? Is there no sufficiently rigorous and objective process of consolidating "priors" in development research? More importantly, are "research findings" any more unbiased and objective than other sources of similar knowledge? Are the findings of a field experiment readily amenable to being scaled up, even in the same environment, without considerable dilution of their final effect?

Specifically, I have two important points for consideration. One, the process of evidence discovery driven by experimental research overlooks the often equally rigorous evidential value of institutional knowledge latent in communities and public systems. Two, such research findings, especially experimental results, present sanitized outcomes whose replication when scaled up under less sanitized, real-world conditions is questionable. Let me elaborate on both in some more detail.

The vast body of mightily impressive experimental research from the past two decades reveals several important insights about the development process, validated with statistical rigour. But there is little by way of knowledge or insights that were not already available as institutional wisdom. Therefore, and especially given the huge amount of money and effort that has gone into obtaining this knowledge through experimental research, it is appropriate to ask whether the same could have been learnt more cost-effectively from latent institutional wisdom.

I see several benefits from a process of discovering institutional wisdom. First, it is likely to shorten the knowledge discovery cycle, besides lowering its cost. Second, unlike the fragmented knowledge that emerges from inherently restrictive, mostly single-issue experimental research, a knowledge discovery process is more likely to reveal a comprehensive and organic understanding of the underlying problem. Third, it is also likely to reconcile the dissonance between the logical and theoretical consistency and the implementational difficulties which characterize a large body of experimental research.

Fourth, the discovery of latent institutional knowledge through a consultative process would, by keeping the stakeholders involved, increase the likelihood of its being willingly embraced by them. This stands in contrast to the obvious disconnect between experimental research and its audience within public systems, which prevents its ready adoption. Fifth, this strategy is likely to create a body of replicable heuristics which can be used to reveal latent knowledge in different socio-economic settings. This becomes especially important given the highly context-specific nature of policy and implementation framework design, the very feature that weakens the external validity of any experimental finding.

Sixth, the internal dynamics generated by such a process are themselves likely to lay the "environmental" foundation for a successful scale-up. A social consultative process of unraveling latent knowledge typically generates unintended positive spill-overs that can both weaken the factors opposing change and strengthen administrative capability. Finally, knowledge discovery based on experimental research, which limits "priors", opens up too many policy threads and becomes an exercise in the search for the ideal policy design. In contrast, a knowledge discovery process helps finalize a robust enough second-best policy framework design, leaving the still uncertain threads open to be identified or validated by experimental research.

The absence of any discussion about such a latent knowledge discovery process is surprising, since deep-dive problem-solving through interviews, focus group discussions, observations, surveys, and so on is the staple of the modern consulting industry. If such evidence is rigorous enough for businesses before they undertake make-or-break investment decisions involving billions of dollars, why would it be any different for governments? And businesses use such evidence to make decisions involving human preferences and to align human incentives with their commercial interests.

How would the process of arriving at a strategy to sell cheap shampoo sachets to poor people be dramatically different from one for selling mosquito nets or chlorine tablets to the same category of people? Is the process of devising a strategy by private financial institutions to attract deposits from low-income people any different from one aimed at increasing savings among the same people? For that matter, is there any radical difference between a commercial strategy to induce positive responses from the market at the "bottom of the pyramid" and one that seeks to get the same target group to respond to similar incentives for their own welfare?

This brings me to my second concern, about the extant methodology of experimental research. My concern arises from three directions. One, there is a big difference between implementing a program on a pilot basis for an experimental study and implementing the same at scale. In the former, the concentrated effort and scrutiny of the research team, the unwittingly greater oversight by the official bureaucracy, and the assured expectation among its audience that it would be only a temporary diversion all contribute to increasing the effectiveness of implementation. Two, is the administrative system capable of implementing the program so designed at scale? Finally, there is the strong possibility that we will end up implementing a program or intervention that is qualitatively different from that conceived experimentally.

It is one thing to find considerable increases in teacher attendance from the use of time-stamped photographs, or a rise in safe water consumption from the use of water chlorination ampules, when both are implemented over a short time horizon, at microscopic scale, and under the careful guidance and monitoring of smart, dispassionate, and committed research assistants. I am inclined to believe that it may be an altogether different experience when the same is scaled up across an entire region or country, over long periods of time, and with the "business as usual" minimal administrative guidance and monitoring. And all this leaves aside the unanticipated secondary effects. In fact, far from implementing an intervention tailored on the basis of rigorous scientific evidence, we may actually end up implementing a mutilated version which bears little resemblance to the original plan when rolled out at the last mile.

I believe that evidence-based research has a critical role to play in development policy design. But it should complement a process of discovering latent institutional knowledge, through something resembling a scientific problem-solving approach. Experimental research should be used to tie up the loose ends that arise from the former exercise. A marriage of the two should be the way forward in evidence-based policy design.

1 comment:

KP said...

Dear Gulzar,

Enjoyed this and the focus on institutional knowledge / scale. The managerialism inherent in social initiatives adopted purely this way is damaging. I think you are pointing to the fallacy of composition, and I suspect that in policy responses we must be open to the fallacy of the excluded middle - where the narrowness of the solution could make it inflexible.

I recall an earlier post where teacher efficacy was a simple derivative of the bell curve and averages - an outcome of what I call the "Jack Welch school of performance management".

(There is a good book, Public Policy and Citizenship by Arvind Sivaramakrishnan, SAGE Publishing, 2012 - good on critiquing managerialism and liberalism per se - a little weak on the public policy institutional aspect.)

Where historical evidence can and should be used to change things on the basis of knowledge, the institutional inertia against making changes is damaging. Take the case of the slow disposal of cases - particularly in the case of crimes against women.

The under-reporting of crimes, particularly against women - due to a general reluctance to deal with the police - and, in cases where FIRs are filed, the sheer inability to deliver justice speedily, point to the ineptness of the system on a colossal scale. The results are for all to see - and are a damaging indictment of our inability to process any learning.

That apart, what I notice among what can be called pop-economics based on data is a tendency to flash some counterintuitive logic - which is appealing for book sales - but seems to be devoid of "experience" as felt. This can produce aberrations like data to support a theory that the best way to reduce gun-related deaths is more guns!

Back to our own crises - I don't think we suffer from a lack of either data or experience to draw up a solution - but a political culture of responding to everything piecemeal will only continue to feed more disasters.

For instance, making Delhi safe for women is not only about buses - as though there were any causal relationship - and this case only serves to highlight the general apathy towards all other victims - and the sheer non-responsiveness in terms of prevention - till it all coalesced into another horrible incident.

Data is the least of our problems in solution design - and solution design may also be a problem - resulting more from apathy than from any inability to analyse the minutiae.

regards, KP.