Monday, September 19, 2022

A Note for funders of development impact evaluations

This post provides a list of suggestions and a checklist for funders of international development impact evaluations.

First, the suggestions:

1. An important parameter for categorising impact evaluations should be whether or not they emerged from the primary policy maker’s side. Funders should make this an important point of diligence when funding impact evaluations. They should incentivise researchers to engage closely with policy makers, understand their real needs (as against hypothesising their perceived needs), and work together to design and present the evaluation proposal. Does the evaluation arise as a demand from a policy maker, with outputs that will feed into a program design or redesign? Or is it primarily a proposal from a researcher, which merely happens to have obtained the policy maker’s consent?

2. In the normal course, the highest priority impact evaluations should be those arising directly from policy makers, on issues agitating them, and which can be completed within their expected time frames. In this context, the highest value impact evaluations are of the quick but rigorous A/B testing kind, which help change the design of an intervention to improve its operational efficacy or implementation fidelity.

3. Also in the normal course, concurrent evaluations which feed into program design or implementation should be preferred over post-facto evaluations of headline efficacy. For example, an evaluation question which emerges from a deep-dive into an ongoing implementation and concerns a proximate driver of implementation effectiveness (say, the periodicity or manner of a cash transfer; converging an agriculture intervention with another ongoing program; or a procedural change or small add-on to an ongoing program) should be considered high value and prioritised. Such evaluation proposals also reflect active engagement by researchers with policy makers to surface an important factor that affects program effectiveness.

4. Any impact evaluation of an intervention/idea in a context should necessarily be preceded by a deep-dive that includes an examination of the history of the same or similar interventions/ideas in that particular local context (and not merely theory and evidence from other contexts). What are the examples of such interventions in that context over the last three decades? What have been their outcomes? Do the government and other local stakeholders know about them? How have they been received by the system?

5. Impact evaluations should always be accompanied by qualitative research (key informant interviews, focus group discussions, etc.), which helps interpret the quantitative findings relating to design and implementation issues.

6. Encourage (or in certain cases mandate) enlisting a local researcher as Principal Investigator in impact evaluations. Despite all its possible flaws and distortions, this is perhaps the only way local researchers can become meaningfully involved in impact evaluations in their own countries and develop evaluation expertise.

7. On a similar note, philanthropic donors like BMGF should prioritise local researchers, especially those working in local institutions, in the impact evaluation projects they fund.

8. Large-scale and long-duration research projects (e.g., RISE, Young Lives) should necessarily enlist a local institution as the anchor/host institution in the country. Encouraging local researchers and building institutional capacity in developing countries should be an explicit primary objective of such funding.

9. Large funders should also incorporate features and parameters that capture capacity building and knowledge transfer when they approve impact evaluation proposals. For example, funders should insist that outsourced services like surveys be awarded through open competitive bids, preferably to local providers, instead of being given on nomination to captive foreign partner institutions. Or they should require a clear plan (with accountability) that explains how the government partner is actively engaged in the evaluation design and conduct, and how the evaluation findings will be used.

10. Encourage and incentivise studies to explore and reference impact evaluations, and studies of impact evaluations, done by researchers, government agencies, and non-government institutions in developing countries (and not just those by foreign think-tanks and foreign researchers). For example, the Development Monitoring and Evaluation Office (DMEO) in India has rich expertise on the challenges and issues with impact evaluations, and has very useful things to say about which types of evaluations could be used where and when.

11. Multilateral and bilateral funders, who have the credibility of government backing, should make available model procurement documents and contracts for hiring evaluation agencies, as well as templates of evaluation designs, survey instruments, etc., which can be drawn on by government evaluation agencies and local evaluation providers. There should be something similar to the World Bank’s PPIAF for impact evaluations. This will help at least those interested government leaders who are committed to undertaking impact evaluations, thereby also creating a local supply-side for it. It’s important that this be housed in a bilateral or multilateral institution (and not in a philanthropic entity or think tank) so that government leaders have the freedom to draw on and use it.

A checklist for diligence of proposals considered by funders of impact evaluations

1. Is the primary demand arising from policy makers, with a commitment to incorporate the results? What are its signatures and how credible are they?

2. Has there been extensive stakeholder engagement by the impact evaluators?

3. Is cost-effectiveness of the intervention a consideration in the evaluation?

4. Is bureaucratic feasibility (or state capacity) a factor in the evaluation?

5. Is the evaluation to improve design/implementation or to assess headline efficacy?

6. Is it a concurrent or post-facto evaluation?

7. Does the evaluation team have a local PI based in the subject country?

8. Does the evaluation proposal have meaningful partnerships with local institutions?

9. Does the evaluation involve some signatures of capacity building or some form of knowledge transfer?

10. Does the evaluation involve procurement of services from local providers?

11. Does the report reference and document the work done by local researchers, government agencies, and non-government institutions?
