Substack

Tuesday, April 1, 2025

Assessing the health of school education systems in India

I have blogged on the issue of improving student learning outcomes numerous times. This post discusses a simple proposal for the periodic measurement of learning outcomes and its use as high-level decision support.

Poor student learning outcomes should count among India's biggest governance failures. An important contributor is the near-complete absence of any institutional mechanism at higher levels to assess the quality of school education being delivered and the progress made towards basic literacy and numeracy or grade-appropriate learning outcomes. This diffuses accountability across all levels of the system. 

Any meaningful effort to improve school education must therefore start with a mechanism to continuously and credibly measure and track aggregate learning outcome trends over time. This would serve as a health check on the direction and magnitude of progress on student learning outcomes, help compare across administrative units and geographies, and strengthen accountability. 

A census in the form of standardised tests administered to every student would be ideal but is impractical across grades and states. It is for this reason that standardised tests globally are conducted at only a few checkpoints. 

Another option would be to undertake a rigorously sampled survey across grades and states. Surveys would be administered periodically to a cluster random sample of schools in each district. The baseline survey would be followed by an end-line survey recurring each year. The time series thus generated would help assess subject-wise performance across districts and states. 
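The cluster-sampling step described above can be sketched in a few lines. This is a minimal illustration, not a survey design: the district and school names are made up, and a real design would also stratify the frame and apply sampling weights.

```python
import random

# Hypothetical sampling frame: schools listed per district (names illustrative).
schools_by_district = {
    "District A": [f"A-school-{i}" for i in range(1, 201)],
    "District B": [f"B-school-{i}" for i in range(1, 121)],
}

def draw_cluster_sample(schools_by_district, clusters_per_district, seed=None):
    """Draw a simple cluster random sample of schools per district.

    Each school is treated as one cluster: every student in a sampled
    school would sit the survey. A fixed seed makes the draw reproducible
    so that the sample can be audited.
    """
    rng = random.Random(seed)
    return {
        district: rng.sample(schools, min(clusters_per_district, len(schools)))
        for district, schools in schools_by_district.items()
    }

sample = draw_cluster_sample(schools_by_district, clusters_per_district=10, seed=42)
print({district: len(schools) for district, schools in sample.items()})
# 10 schools drawn in each district
```

Repeating the same draw procedure (with fresh seeds) at the baseline and each end-line round is what generates the comparable time series the proposal relies on.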

The National Assessment Survey (NAS) could be revised to serve this purpose. Alternatively, and perhaps more appropriately, the surveys could be administered by independent private assessment firms. The Government of India (GoI) could empanel a few testing firms for 3-5 years and allocate them across states. While the state governments would manage the contracts, GoI would make payments to the agencies. This would prevent conflicts of interest, align incentives, and maintain the rigour of the testing processes and the credibility of the outcomes. 

The firms would undertake evaluations based on a uniform and consistently applied set of principles issued by the Ministry of Education (MoE), GoI. The instruments, while different, must be standardised to test the same competencies. All data could be stored in a portal that allows easy longitudinal comparison of performance across administrative units and subjects. This data could also be analysed by independent researchers, providing more actionable insights for policymakers and implementers than those generated by the countless randomised control trials done on this topic over the years. 
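As an illustration of the kind of longitudinal comparison such a portal could support, here is a minimal sketch that computes the change in mean scores between survey rounds. The districts, subjects, and scores are entirely hypothetical.

```python
# Hypothetical survey records: (year, district, subject, mean_score).
records = [
    (2024, "District A", "Maths", 41.2),
    (2025, "District A", "Maths", 44.8),
    (2024, "District B", "Maths", 39.5),
    (2025, "District B", "Maths", 38.9),
]

def score_change(records):
    """Return {(district, subject): change in mean score, first to last round}."""
    series = {}
    for year, district, subject, score in records:
        series.setdefault((district, subject), []).append((year, score))
    changes = {}
    for key, points in series.items():
        points.sort()  # order rounds chronologically
        changes[key] = round(points[-1][1] - points[0][1], 2)
    return changes

print(score_change(records))
# {('District A', 'Maths'): 3.6, ('District B', 'Maths'): -0.6}
```

Even a view this simple makes the two insights the post emphasises visible at a glance: the trend within each unit, and the comparison across neighbouring or similarly placed units.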

Any state government interested in expanding the scope of the survey would have the flexibility to do so. Some might want to compare across blocks or clusters or add a half-yearly midline. The expanded scope could be incorporated into the contract with the testing agency. 

The testing processes could be improved at the margins based on the emerging feedback from the first couple of tests, and the contracts could be revised accordingly. With time, over say, 2-3 years, the testing processes would have stabilised and the outcomes would provide a reliable assessment of the direction and magnitude of progress made by districts and states on different subjects. The important insights would be the trends in progress across the parameters and comparison across neighbouring or similarly placed geographies and administrative units. 

There are several areas where such information would provide valuable decision support for policymaking and programme implementation. Some examples: 

Education Departments at all levels of government could use the survey trends to assess the direction and magnitude of progress on outcomes. Most importantly, it would allow for recognising failures and acting to address them. Those lagging would become aware of it and be forced to introspect on the reasons. Such introspection is critical to any meaningful effort to improve the quality of education. 

A bane of the education ecosystem is the absence of any credible system to reliably monitor outcomes. In its absence, questionable and bad ideas endure. The massive expenditures incurred by state and central governments on EdTech, despite very little evidence of its efficacy in improving learning outcomes, is a case in point. Without a credible outcomes measurement system, more of the same without course correction is the most likely result. A survey-based longitudinal learning outcomes tracking system would audit these interventions, help recognise failings, and allow officials to make the case for pulling back on some of them. 

The MoE could consider incentivising states and districts by allocating a part of the Samagra Shiksha Abhiyan budget against the realisation of learning outcomes as measured by the aforementioned survey. This would make outcomes-based financing of school education a reality. To make it sufficiently attractive, the allocations under this component could be made significantly large. This financing strategy could be complemented by competitions among districts and states on learning outcomes realisation.

This monitoring system should be supplemented with measures outlined here and elsewhere on interventions proximate to improving the quality of classroom instruction (as against general inputs in the education objective function).