This paper is an excellent example of misplaced research priorities around RCTs.
In brief: telephone-call-based feedback was elicited on the quality of implementation of the Telangana government's Rythu Bandhu scheme, under which cash transfers were made by cheque to eligible farmers - did farmers get the cheque, did they get it in time, did they encash it, and so on. An RCT evaluation of the telephone calls revealed that 83% of farmers received and encashed their cheques, that farmers in areas with such monitoring were 1.5% more likely to receive and encash their cheques, and that among farmers in the bottom quartile of land holdings the figure was 3.3% higher. The call centres delivered an additional Rs 7 Cr to farmers at a cost of Rs 25 lakh.
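Taking the reported figures at face value (and see the caveat later about their reliability), the headline numbers imply roughly a 28x benefit-cost ratio. A crude back-of-the-envelope check:

```python
# Back-of-the-envelope value-for-money check on the quoted figures.
# 1 crore = 10^7 rupees; 1 lakh = 10^5 rupees.
additional_transfers = 7 * 10**7   # Rs 7 Cr reportedly unlocked for farmers
call_centre_cost = 25 * 10**5      # Rs 25 lakh reported call-centre cost

benefit_cost_ratio = additional_transfers / call_centre_cost
print(f"Benefit-cost ratio: {benefit_cost_ratio:.0f}x")  # 28x
```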
Did this require an RCT, or any research at all? It has long been conventional wisdom in bureaucracies. Several public agencies across states have had citizen-feedback-eliciting mechanisms in place for years. The neighbouring state of Andhra Pradesh has even made this central to performance assessments, evaluating everything the government does through telephone feedback via its Real Time Governance System (RTGS). The power discoms in Andhra Pradesh have had such telephone call centres to elicit feedback for more than a decade.
I can’t imagine even one reasonable bureaucrat who would dispute the hypothesis of the paper - that citizen feedback, especially through random-sample telephone calls, is a useful way to assess implementation quality.
The real challenge is not getting feedback. It is not even getting granular, actionable feedback in real time. It is the ability of the system to act on any feedback in a meaningful manner. That is the real binding constraint, and it depends critically on the state's capacity to engage actively with a basic governance task - monitoring and acting effectively on information. With interventions like this, that point most often gets glossed over.
Further, establishing the call centre exhausts the bandwidth and cognitive energies of officials, leaving them limited mental space to engage with the far more important task of institutionalising follow-up actions. The officers responsible have also likely been transferred by then. Unfortunately, while it is easy to plan these things comprehensively, the real work happens when the rubber hits the road, and plans have limited value in actually getting the follow-up institutionalised.
Furthermore, we should not discount the at-scale scenario in which such monitoring, absent follow-up requirements, most likely becomes one more routinised addition to the monitoring paraphernalia, without any incremental benefit.
In fact, one can easily imagine the best-case narrative of change from the RCT finding. IVRS-based feedback systems can improve implementation efficiency, so let's establish telephone call centres in every district and state. And in five years, we would have another development-innovations graveyard littered with dysfunctional call centres and vast sums of money down the drain.
Instead, this was an opportunity both to spotlight state capacity and to figure out how the state's capacity to monitor implementation could be improved.
What would a more effective approach to monitoring development programmes look like? Consider some (there could be more) of the variables associated with such reviews - who is reviewing and who is being reviewed, the frequency of review, and the specific parameters reviewed. A District Collector (or a Block Development Officer) could, once a week (or once a fortnight), review the BDO (or Village Revenue Officer) on certain process indicators (approvals processing) or output indicators (receipt or encashment).
How about optimising this? Say all blocks are divided into two treatment arms, each with a different method of reviewing the BDOs or VROs (with a none-too-onerous deep dive you could figure out these options), set against a business-as-usual monitoring control group.
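For concreteness, a minimal sketch of how such a block-level randomisation might be set up - the block identifiers and arm labels here are my own illustrative placeholders, not drawn from the paper:

```python
import random
from collections import Counter

# Illustrative block-level randomisation: two review-method treatment
# arms plus a business-as-usual control. Block names and arm labels
# are hypothetical placeholders.
blocks = [f"block_{i:02d}" for i in range(30)]
arms = ["weekly_review", "fortnightly_review", "control"]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = rng.sample(blocks, len(blocks))

# Deal the shuffled blocks into arms round-robin, giving equal shares.
assignment = {block: arms[i % len(arms)] for i, block in enumerate(shuffled)}

print(Counter(assignment.values()))  # 10 blocks per arm
```

Outcomes (say, cheque receipt or encashment rates for the poorest quartile) would then be compared across the three groups.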
This would immediately spotlight the importance of the quality of monitoring and the role of state capacity improvements in generating value for money - and in a more objective manner than researchers have ever managed. For example, if it shows that BDOs who review VROs once a week, as against once a fortnight, are associated with x% higher receipts for the poorest quartile, that is a high-value insight for someone like a Secretary of Rural Development or Agriculture. In fact, it would have delivered more returns at virtually no cost.
Further, the gains go beyond improving the efficiency of this specific intervention; they would likely apply to most interventions. It would have genuinely spotlighted state capacity, specifically how better monitoring increases the efficiency of public service delivery. This may appear self-evident, but in a world where everyone is searching for innovations and different ways of doing things, the obvious often ends up not being so obvious!
If this insight were then written up in an op-ed, one could be reasonably confident that some Secretary of Rural Development would tweet it, and that, following the tweet, several District Collectors would rush to embrace the monitoring approach. The researchers would have, through their research, triggered the dynamics to improve state capacity.
On a side note, the specific numbers in the value-for-money analysis of establishing and maintaining a call centre are questionable, even disingenuous. But that is not relevant to this post.
Actually, with the call centre in place, what would be high value is high-quality data analytics that elicits actionable insights granular enough to keep field functionaries (the BDOs, VROs, etc.) on their toes.
To summarise, two points. One, I don’t think it needed a high-quality research effort to establish this. Two, this was instead an opportunity to spotlight the importance of state capacity and offer actionable insights to improve it. So, a double loss.