I have blogged earlier arguing that the case for evidence-based policy making was, like every other all-encompassing idea, slowly becoming an ideology. Its proponents, the evidencariat, disown all priors and prefer to leave policy design and program implementation questions on important development issues completely open-ended.
Accordingly, the argument goes, we have no idea of the education or healthcare production function, and should therefore leave the design elements of any intervention to achieve learning or treatment outcomes to the discretion of the implementers, to be figured out from context-specific considerations by drawing on rigorously generated evidence.
The evidencariat come from different directions. On one side are those advocating field experiments, the main object of the aforementioned blogpost, who seek to establish priors by trying to generate rigorous enough evidence. Then there are those who claim that unified and universally applicable solutions are not possible, and who therefore advocate a decentralised, iterative process of discovering solutions. Others go a step further than the second approach and argue in favour of using finance to align incentives and target outcomes. Different variants inhabit the space between.
This evidence-dogma has taken a strong hold in influential global development circles. There is something logically very neat and attractive about such solutions, which appear to overcome the complex challenge of making development happen. Its impact will be felt to an increasing degree in aid and other forms of external spending, public and private, in the years ahead. But like all such cycles of development fads, and there have been several over the years, this too shall pass, as the earlier ones did.
The incentives, too, are aligned favourably. The donors get the comfort of trying out something different, and that too through an approach which revolves around the use of evidence and the empowerment of recipient governments. The latter will nonchalantly play along, in the firm conviction that, given the multiple dimensions of what constitutes "evidence", they can game the process any which way. This is a conversation taking place in completely different worlds, with the two sides talking past each other yet remaining comfortable with each other's positions.
No government in any developing country has the time, resources, or capacity to do development as suggested by any of these schools of evidencariat. Unlike academia and the intelligentsia, governments inhabit the real world of short election cycles and bureaucratic tenures, impatient citizens and opinion makers, entrenched legacies, multiple unavoidable demands competing for scarce resources, and shockingly weak state capacity.
In the circumstances, a Bayesian approach is the only way forward: conditional on the aforementioned constraints, what is the best approach to improving the effectiveness of development spending?
A prudent compromise may be required. Evidence-based research should be a complement to a process of discovering latent institutional knowledge (or priors) through the standard toolkits of deep-dive problem solving. If we are able to do the latter with a reasonable degree of rigour, that alone would be sufficient to produce a robust enough minimum viable product of policy design and implementation plan. The most cost-effective and context-specific approach to evidence generation should then be used to figure out the remaining one or two uncertain elements in the design, if any. There is no need for a high standard of evidence when less would suffice. And any implementation should have a very rigorous monitoring system that feeds back credible information, with an implementation design that allows for refinements and course corrections as required. A more detailed illustration of this is available here.
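To make the Bayesian framing concrete, here is a minimal sketch of what "start from priors, then update with evidence" could look like. The normal-normal update and all the numbers are purely illustrative assumptions of mine, not drawn from any of the studies discussed here.

```python
# A minimal sketch of the "priors first, evidence as update" logic, using a
# normal-normal Bayesian update. All numbers here are hypothetical.

def update_normal(prior_mean, prior_sd, obs_effect, obs_se):
    """Combine a prior belief about an intervention's effect (in SD units)
    with a noisy evaluation estimate; return the posterior mean and sd."""
    prior_prec = 1.0 / prior_sd ** 2   # precision = 1 / variance
    obs_prec = 1.0 / obs_se ** 2
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_effect)
    return post_mean, post_var ** 0.5

# Hypothetical: institutional experience suggests a design tweak is worth
# about 0.10 SD, held with moderate confidence (sd = 0.05). A small, cheap
# evaluation then estimates 0.18 SD, but with a wide standard error of 0.10.
post_mean, post_sd = update_normal(0.10, 0.05, 0.18, 0.10)
print(f"posterior effect: {post_mean:.3f} SD (sd {post_sd:.3f})")
# posterior effect: 0.116 SD (sd 0.045)
```

The point of the sketch is simply that a reasonably held prior does most of the work; a modest, cheap evaluation shifts it at the margin rather than replacing it.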
And on the claimed absence of a production function, this from South Africa is only the latest piece of evidence to the contrary:
Scripted lesson plans have great potential to improve teaching practice in resource- and capacity-constrained settings, but there are risks that they undermine teachers’ autonomy to cater teaching to the level of the child, especially if lesson plans require adherence to an overly-ambitious curriculum. Both programs provide teachers with scripted lesson plans and supporting reading materials, such as graded readers and flash cards, but they differ in the mode of implementation. In some schools (Training) teachers receive two two-day training sessions over the course of the year. In other schools (Coaching), teachers also receive monthly visits from specialized reading coaches. We find that after only 9 months of implementation both the Training and Coaching interventions had a positive impact on reading proficiency, by 0.13 and 0.14 standard deviations respectively. Teachers are also more likely to provide individualized assessment and assign pupils to reading groups within the classroom based on ability. Furthermore, there is substantial pupil-level heterogeneity, mediated by class size: Pupils who performed badly at baseline do not benefit from the program, but this trend is reversed in larger classes...
The coaching treatment is twice as expensive... We can therefore conclude that scripted lesson plans can be more cost-effectively implemented through a traditional model of training, and need not be combined with ongoing monitoring and feedback from reading coaches.
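A quick back-of-the-envelope reading of those numbers makes the cost-effectiveness point explicit. The effect sizes below come from the quoted study; the costs are simply normalised units (training at one, coaching at two, per "twice as expensive"), my own simplification rather than the study's actual budget figures.

```python
# Rough reading of the cost-effectiveness claim above. The effect sizes
# (0.13 and 0.14 SD) are from the quoted study; the costs are normalised
# units (training = 1, coaching = 2, per "twice as expensive"), not the
# study's actual budget figures.

arms = {
    "Training": {"effect_sd": 0.13, "relative_cost": 1.0},
    "Coaching": {"effect_sd": 0.14, "relative_cost": 2.0},
}

for name, arm in arms.items():
    sd_per_cost_unit = arm["effect_sd"] / arm["relative_cost"]
    print(f"{name}: {sd_per_cost_unit:.3f} SD per unit of cost")

# Training: 0.130 SD per unit of cost
# Coaching: 0.070 SD per unit of cost
# The extra 0.01 SD from coaching comes at double the cost, which is why the
# authors conclude that training alone is the more cost-effective model.
```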
However, scaling such approaches will be difficult, given the limited success, for example, in training public school teachers to use the Pratham model of remedial instruction. This is more a reflection of weak state capacity than of the application of a wrong education production function. And the approaches advocated by the evidencariat demand far more intensive engagement of state capacity.
In the circumstances, for school systems trapped in very low equilibria and struggling to move from bad to satisfactory, scripting may not be a bad idea. After all, the much-validated Pratham model is a very minutely scripted model of instruction. However, for a system striving to move from satisfactory to good, much less from good to great, scripting is unlikely to work.
In the real world, faced with the challenge of working in a resource-constrained environment and the objective of achieving scale in a reasonable period of time, we cannot afford not to have priors. We have to accommodate decidedly second-best approaches.