"Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms. The only descriptions that fully meet the requirements of consistency and rigour are completely artificial worlds... deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism.
But this is an artificial, exaggerated distinction. Scientific progress – not just in applied subjects such as engineering and medicine but also in more theoretical subjects including physics – is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works. Not within the economics profession.
There, deductive reasoning based on logical inference from a specific set of a priori deductions is 'exactly the right way to do things'. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics.
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it."
The central challenge, as the mainstream of the profession sees it, is to develop a model of the economy (preferably one that can be simulated on a computer) that is able not only to explain why events happen as they do but also to make reasonably accurate predictions of them. Kay explores the various attempts to correct the obvious flaws in the standard DSGE models that emerged from the soul-searching which followed the sub-prime crisis and its aftermath.
In response to the criticism of the Lucasian DSGE model, its Chicago supporters have sought to make it even more complex in order to make it more realistic. They have introduced more parameters to represent the complexities that abound in the real world, taking into account market frictions and transaction costs. Another response has come from those like Joe Stiglitz who, while retaining many of Lucas's assumptions, give greater weight to information imperfections (for example, the Ricardian-equivalence assumption that households have information about future budgetary problems is now questioned).
Some others, from the complexity economics school, have put forward agent-based modelling solutions, built on specific behavioural and other heuristics generally observed in the real world. All these solutions satisfy the "requirement" of being mathematical and amenable to computer simulation. However, questions about their real-world effectiveness remain.
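To make the agent-based idea concrete, here is a minimal sketch of what such a model looks like in practice. Everything in it is hypothetical and illustrative, not drawn from any published model: two heuristic agent types trade a single asset, fundamentalists expecting the price to revert toward a fixed value and chartists extrapolating the most recent price change, with their aggregate demand moving the price each period.

```python
import random

def simulate(periods=200, fundamental=100.0, seed=42):
    """Toy agent-based market with two behavioural heuristics.

    All coefficients below are arbitrary illustrative choices,
    not calibrated to any real economy.
    """
    rng = random.Random(seed)
    prices = [fundamental, fundamental]  # two points needed to form a trend
    for _ in range(periods):
        p = prices[-1]
        # Fundamentalists: buy below the fundamental value, sell above it.
        fund_demand = 0.05 * (fundamental - p)
        # Chartists (trend-followers): chase the last price change.
        chart_demand = 0.04 * (prices[-1] - prices[-2])
        # Idiosyncratic order flow, modelled as Gaussian noise.
        noise = rng.gauss(0, 0.5)
        prices.append(p + fund_demand + chart_demand + noise)
    return prices

prices = simulate()
```

Even this crude sketch shows the methodological point: the dynamics emerge from interacting rules of thumb rather than from optimisation under rational expectations, which is precisely why such models satisfy the "mathematical and simulatable" requirement while resting on very different foundations.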
Without offering any specific model, John Kay argues in favour of a less mathematics-based approach. He writes,
"Another line of attack would discard altogether the idea that the economic world can be described by any universal model in which all key relationships are predetermined. Economic behaviour is influenced by technologies and cultures, which evolve in ways that are certainly not random but that cannot be fully, or perhaps at all, described by the kinds of variables and equations with which economists are familiar. The future is radically uncertain and models, when employed, must be context specific."
The crux of the debate is that all conventional approaches to explaining and forecasting macroeconomic phenomena assume that the explanation must be contained in a logically consistent and theoretically sound model. They assume that it is possible to collapse all the different scenarios (and there is a maddening array of them) into this one comprehensive model.
The supporters of the Lucasian school therefore try to formulate a single model that can satisfactorily explain all the different types of economic recession. Accordingly, they seek to use the same model, with its standard set of assumptions, to explain aggregate demand slumps caused by factors as varied as the routine (say, monetary-policy-induced) and those spawned by banking crises and the resultant balance-sheet damage.
The result is a failure to satisfactorily explain the present balance-sheet recession, especially in conditions of persistently high unemployment and zero nominal interest rates. In response, the freshwater economists have either adopted a position of ostrich-like denial or have tried to tinker with the existing models, introducing newer parameters and assumptions to explain market frictions, and in the process drawing them further away from reality.
What if there is no such magic model to be formulated? Is it possible to forecast macroeconomic outcomes with any great degree of accuracy, beyond estimating broad trends? What if the relevance of each assumption varies so widely across contexts, being as irrelevant at certain times as it is crucial at others, that the model itself should assume a completely different character in each? More importantly, is there really a need for a universal, one-size-fits-all model?