Thursday, March 2, 2017

Academic research and development

Consider the big development challenges of our times. What can be done to improve learning outcomes or equip youth with skills? How do we address the pervasive weaknesses of health care systems at all levels? What initiatives can enhance state capacity? How do we address two of the most debilitating sources of corruption and erosion of state capacity in all developing countries, public procurements and personnel deployments? How do we improve urban governance? What can other developing countries learn from China's success? 

Unfortunately, none of these questions is amenable to neat field experiments and empirical analyses focused on big micro-data. But the focus of academic research and exploration in these areas has largely been confined to these approaches. In fact, most often, the enquiry does not even start with these problems, and instead focuses on marginal dimensions and considerations. 

The occasional forays are confined to documenting best-practice models. The large multilateral organisations, given the nature of their processes and incentives, are the only ones interested.

But unfortunately, a long history of bad experiences with the transplantation of best-practice models has stigmatised even the mention of best practices. I would argue that an analysis of a best practice (or positive deviance) and its iteratively adapted implementation is far more valuable than the findings from the reams of research generated every year as field experiments.

The demands of academic publication mean that ethnographic studies examining systems and processes have become marginalised. Further, these approaches look unsexy without any math. The result has been a crowding out of academic effort from such invaluable qualitative studies. This, I believe, has been a big (and most unfortunate) casualty of the obsession with evidence. 

It also draws attention to the excessive "economisation" of development, at the cost of insights from other disciplines. I have not come across any compelling argument that the theories and methodologies of economics should supersede those of sociology or political science, or claim more wisdom in addressing any of the aforementioned questions. It is of course a different matter that the purveyors of these other disciplines have largely stayed away from engaging with such real-world challenges.

3 comments:

Karthik Dinne said...

I largely disagree with the argument about the non-value of RCTs. I wanted to write a blogpost in response to your earlier post "Evidencariat". I am combining both here.

One shouldn't apply unreasonable metrics to judge the contribution of RCTs. They are just filling a void for a particular type of data. They aren't supposed to tell us everything; that's an unreasonable expectation.

One great value-add of RCTs (those that test hypotheses about binding constraints in the system) is that they have peeled off layers of cognitive traps, need-for-closure biases, and never-ending philosophical and trade-off debates, especially in education.

For instance, there is a perception among a large number of educationists even today that the poor education quality in government schools is because of the hiring of guest teachers.

The process of dislodging such beliefs doesn't yield to deductive reasoning. It needs evidence to prove that para teachers aren't the binding constraint.

The problem with such deep-rooted beliefs (about para teachers) is that they inhibit further analysis of the problem. Without good evidence to peel away such misconceptions, much of the discourse on reform revolves around guest teachers.

In a context with limited bandwidth for discourse on education, such ideas occupy a large space in policy discussions, inhibiting root-cause analysis.

This also brings us to your point on the MVP of policy design. Designing an MVP assumes the presence of data on some fundamental aspects, and a decently clear picture of the existing system and its binding constraints. That isn't always the case, especially before RCTs. Before the advent of RCTs, there was a large set of competing arguments blurring our vision, the guest teacher example among many others. In the absence of a clear picture of the system and its binding constraints, it was easy to fall into the trap of false binding constraints and waste precious time, energy, and resources.

The whole argument that weak state capacity is the binding constraint is made possible, and stands on a strong foundation, because RCTs in education have peeled off many of our cognitive traps and perceptions by proving that many of the supposed binding constraints are NOT the true constraints.

In short, RCTs have told us what not to do by providing data on various competing hypotheses regarding binding constraints.

I also disagree regarding the ethnographic part in the context of education. Before RCTs, the large literature on education was ethnographic and philosophical, for example, Vimala Ramachandran, Narayan and Mooij, etc. While these are important, they do not resolve many of the competing arguments on the binding constraints and thus do not provide a good starting point for designing an MVP.

Of course, now that we have a sufficient understanding of the nature of constraints, such studies are essential for 'iterative adaptation' and for tweaking the appropriate monitoring norms within the education bureaucracy.

Gulzar Natarajan said...

Thanks for the long response Karthik. A few points:

1. I have gone through both posts again. I have never said we should not use RCTs, nor denied their importance. They are undoubtedly useful. It is just that their role has become excessively skewed, crowding out all others. In fact, an RCT provides a lazy path to publication.

My argument is that we should first do deep-dive problem solving, largely of the kind that consultants do, then identify the uncertain elements, and apply more rigorous evidence techniques, if and where necessary.

2. The point I was making was not about RCTs, but about the excessive reliance on "evidence" generated from the likes of field experiments. So, for example, researchers demand evidence to prove that internal migration to urban areas causes negative externalities. Further, the regular forms of evidence, like pre-post comparisons or other data-analytics-based evidence, have come to be considered inferior. We use RCTs to figure out demand curves when simple willingness-to-pay (WTP) surveys can do the same.

3. On ethnographic studies too, my point is that the pendulum has swung to the other extreme and they have become fully marginalised. I am not in favour of one or the other, but want both to be part of the analytical toolkit. For example, retired teachers and government doctors are viewed with far greater esteem in Kerala than in AP or the North. This has relevance for using retired teachers as coaches for schools, or for the reform of tertiary care hospitals. On issues like primary health care reform, the reform prescription would vary widely between North and South India, and this can emerge only from institutional studies. The work of Robert Wade on irrigation canals and corruption gives more insight into the dynamics of corruption and how to control it than several RCTs put together. In my present role I see even more value in non-economic analysis for micro-level interventions...

4. On your example of para teachers, I think these are all newspaper and popular narratives. While these may be used by unions as rhetorical instruments and for external consumption, even as early as the mid-2000s I never came across this being raised as a serious problem by any serious education leader in government in closed-door meetings. Popular explanations like poverty, contract teachers, lack of student motivation, caste, etc are rarely taken seriously as explanations. The serious explanations are mostly on the mark: large classrooms, the difficulty of engaging with each child, no time for teachers to prepare, administrative responsibilities, absenteeism among students, etc.

5. Further, you over-estimate the importance of evidence in policy making. I have yet to see a policy design or implementation decision (howsoever small) in government at the district, state, or central level being taken because of some evidence in a paper. Post-facto, someone can claim that their paper did it, like Esther does in the Ely lecture, but that is a classic example of reverse causality. In the case of payment releases based on specific work estimates, the approach was tried out in states like AP at least a decade back, and was taking its usual time to catch on. Then they tried it out in Bihar and the RCT was done. By this time, the AP experiment was already on its way to adoption in Delhi...

Karthik Dinne said...

Thanks Gulzar. I was thinking of writing a blog post but didn't have time, hence I ended up posting a comment, which became long anyway!

I agree with you regarding the excessive demands of rigour, but I think I should explain my argument clearly. At the risk of another lengthy comment, here is what I intended to say.

Apparently, my comment exceeded the word limit of this comment box, so I have posted my response on my blog. Hope you don't mind the length.

http://www.iterativeadaptation.in/2017/03/on-utility-of-rcts-academic-research.html