Tuesday, October 8, 2013

My problem with randomistas

Prof Ricardo Hausmann made a very interesting point in class today about the use of randomized controlled trials (RCTs) in social policy and medicine. He draws a distinction between exploring general questions of economic growth or medical research, and examining growth in a particular country or the treatment of a particular patient. He argues that while RCTs may be a useful instrument in the former (clinical trials of drugs, or isolating the effect of a particular social policy intervention), they are likely to be less useful in the latter. I agree, and I particularly like the analogy with medicine.

I feel that just as clinical judgement should form the starting point for any treatment of a medical condition, a diagnostic approach should do the same for social policy design and implementation. This diagnostic approach can be a consultant's deep-dive problem-solving exercise, Hausmann and co-authors' growth diagnostics, or any other similar process that essentially starts with the problem. Further, such diagnosis rests on a very strong foundation of priors, whose strength and reliability depend on judgement skills honed by rich diagnostic experience.

However, even after forming a clinical judgement, the doctor may still resort to diagnostic tests to validate it, especially when uncertain about particular symptoms. By the same analogy, a field test, including an RCT, can be effective in resolving the uncertainties that remain after the initial diagnosis. I have written about this earlier here.

But it needs to be borne in mind that field tests and the like come after the priors have been established through a non-experimental process of problem diagnosis. In other words, tests like RCTs are secondary to the primary process of problem diagnosis. Unfortunately, I believe the randomistas refuse to allow for many priors and see their process as primary to everything else in their quest for the ideal policy design.

In this context, the current state of medical treatment in the US, where clinical judgement has taken a backseat to diagnostic tests, is instructive. As a result of the threat of litigation and the moral hazard problems unleashed by a lightly regulated insurance-based model, "defensive medicine" has gripped US health care practitioners. This has resulted in the emergence of more time-consuming and expensive treatment protocols. The use of RCTs in the design and implementation of social policy for a particular context is remarkably similar: a resort to "defensive" policy making. Its dominance is just a reflection of the weakness of problem diagnosis. And, as with medical treatment, it is both time-consuming and expensive.

We need to be cautious about stretching this analogy with medicine. First, the number of combinations (of solutions) that populate a design space is much smaller in medical research than in social policy. After all, the number of ways in which a molecule can combine with another or affect an organ (or the body) is likely to be just a handful. Second, unlike social policy, external validity is a much smaller problem in medical research. Once a drug has been tested on a few broad genetic or physiological types, its external validity as well as functional effectiveness can be reliably established. Unlike medicine, social policy is not science!
