Wednesday, June 5, 2019

The ahistoricism and dataisation of economics!

I had not been following this story. Raj Chetty has apparently introduced a theory-free, data-driven introductory economics course, Ec 1152, at Harvard. This Vox article is largely admiring,
There’s little discussion of supply and demand curves, of producer or consumer surplus, or other elementary concepts introduced in classes like Ec 10. There is no textbook, only a set of empirical papers. The material is relatively cutting-edge. Of the 12 papers students are required to read, 11 were released in 2010 or after. Half of the assigned papers were released in 2017 or 2018. Chetty co-authored a third of them... Economics 1152 is fundamentally about tools — learning to use them, learning when to use which, and learning what they can and cannot do for you. And it trains students to use those tools to study inequality, specifically: in their own neighborhoods, in housing, in education, and more. Rather than having weekly problem sets (the standard pedagogy in most introductory econ classes), Ec 1152 asks students to complete four major projects in which students directly analyze data... Chetty is aiming to make the course a model for other schools.
I am no economist who can take on a John Bates Clark Medal winner. But as a lay person, not to mention as a policy maker, I find this descent and degeneration appalling.

The Chetty course is the latest, and perhaps the most extreme and disturbing, instance of the recent trend in development economics that debunks theoretical models and sings the virtues of empiricism through evidence-based policy making. As it is, the original theoretical focus itself was undesirable in its marginalisation of priors. With this latest development, it is as though positivism has come full circle in the social sciences, and this time for the worse.

This view goes something like this - there are no priors (in fact, you discredit experience as being biased - after all, you guys have been doing development for decades and we still have poverty and misery in abundance) >> therefore conventions, latent wisdom, and experience count for little >> therefore there are no theories >> so we need evidence on everything >> what better way to create evidence than to look for data >> so let's do experiments (RCTs) or mine administrative data to understand reality and design evidence-based policies.

Notice how neatly this approach fits the perspective of someone who is an outsider, knows little about "that" real world they seek to understand, and is equipped with toolkits which they believe can help explain phenomena in "that" real world.

Fundamentally, this is where the likes of Chetty go wrong,
If Chetty is an advocate for anything, it’s for the notion that economics is an empirical discipline, a science just as much as, say, medicine is.
I am comforted by the fact that Lars Peter Hansen, no less, has taken on evidence-based policy making.
I was reminded of the commonly used slogan “evidence-based policy.” Except for pure marketing purposes, I find this terminology to be a misnomer, a misleading portrayal of academic discourse and the advancement of understanding. While we want to embrace evidence, the evidence seldom speaks for itself; typically, it requires a modeling or conceptual framework for interpretation. Put another way, economists—and everyone else—need two things to draw a conclusion: data, and some way of making sense of the data.
On the limitations of data and evidence, he writes,
A modeling challenge that I and others have confronted is how to incorporate, meaningfully acknowledge, and capture the limits to our understanding—and what implications these limits have for markets and economic outcomes. While experimental evidence of various guises is available, unlike many of our colleagues in the physical and biological sciences, macroeconomists are limited in terms of the types of experiments we can run. Other sources of evidence can be helpful, including those captured in aggregate time series and in microeconomic cross sections. But for important policy-relevant questions, to use this evidence in meaningful ways requires conceptual frameworks or models...
Understanding that the evidence itself does not contain all the answers is crucial to an informed society. We’re living in a world that along some dimensions feels very data rich. We’re able to collect a lot of data, we have powerful hardware to store and process them, and we have machine-learning techniques to find patterns in them. But many of the important questions we face are about fundamentally dynamic problems. They’re in areas in which, along some dimensions, our knowledge is sparse. How do we best handle financial-market oversight in order to limit the possibilities of a big financial crisis? What economic policies should be in place to confront climate change?... Many of the people who influence, or want to influence, public policy are reluctant to acknowledge that we’re often working with incomplete information. Ambiguity, they believe, is hard to sell to the public or to politicians claiming to represent the public: politicians and policy makers often seek confidence in policy outcomes, even when this confidence is not justified. As a result, there will always be people willing to step to the forefront to give confident answers... Designing activist policy prescriptions on the basis of a false pretense of knowledge can indeed be harmful.
Of course, modeling runs into its own set of limitations and failings, like confusing a model for the model, much less imagining that there can ever be the model.

Update 1 (21.05.2020)

James Heckman takes issue with Raj Chetty and Co's statistical economics. Their work has spawned a "Zip code is destiny" narrative, with the researchers themselves having launched an Opportunity Atlas that ranks neighbourhoods according to their chances of propelling poor children upwards. Heckman objects,
A working paper by Magne Mogstad, another economist at the University of Chicago, and his colleagues argues that the “noise”, or random fluctuation, in Mr Chetty’s data means “it is not possible to draw firm conclusions about which counties in the United States have high or low values” of upward mobility from the poorest 25% of households. Mr Heckman acknowledges that there are clear differences in mobility according to neighbourhood. But the ultimate drivers could lie in family structure, parenting habits, exposure to crime or the quality of schooling. All these are difficult to derive from American tax-return data. Pundits take the research on “neighbourhood effects” as evidence that “zipcode is destiny”. Mr Heckman bristles at that. It overlooks the fact that Asians and black women do fairly well in mobility relative to whites. “It diverts attention away from other plausible explanations for why African-Americans are not doing well. Put discrimination on the table…but family structure is the one thing that is just off the table in American society,” he says.

1 comment:

Unknown said...

I like this post. I wrote to my colleagues in IFMR GSB the following:

"The idea of teaching Big data together with relevant practical issues is, indeed, the right thing to do, in my personal opinion. A strictly personal view: ultimately, economics is an empirical science and economic policymaking has to be more data and evidence based than ideologically driven. In that sense, it is a welcome step.

That said, I did want to add that in economics, theory is needed to set up the point or points of departure for students. Students need a peg/anchor for the empirical findings: do they constitute a departure from theory, and if so, why; if not, why not? Only then can they design the right policies or critique government policies or even corporate decisions.

The piece by Lars Peter Hansen is highly lucid even if it leaves the reader with a sense of incompleteness in the end."

So, one cannot throw the baby out with the bathwater. Theories are very useful anchors. You are right with your title of the blog post.

BTW, is there a typo in the last line of the post?