The National Housing Bank's (NHB) Residex has tracked residential property prices in 15 major cities since July 2007, with 2007 as the base year. Here is the Residex comparison for the six metropolitan cities.
Calcutta, Chennai and Mumbai have recovered nicely from the real-estate recession, with Chennai showing steady growth in prices. Hyderabad and Bangalore are clearly yet to recover from the shocks.
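Reading the index is simple arithmetic: with the base pegged at 100 in 2007, a city's Residex value is just the ratio of current prices to base-year prices. A quick Python sketch of the conversion (the city readings below are hypothetical placeholders, not actual Residex data):

# Convert NHB Residex readings (base 2007 = 100) into implied price changes.
# The readings below are hypothetical placeholders, not actual Residex data.
residex = {"Chennai": 143, "Calcutta": 162, "Mumbai": 126,
           "Delhi": 110, "Bangalore": 94, "Hyderabad": 91}

for city, reading in residex.items():
    change = (reading - 100) / 100    # price change since the 2007 base
    print(f"{city}: {change:+.0%} since 2007")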
(HT: Mostly Economics)
Wednesday, December 29, 2010
America's unemployment challenge
Excellent graphic from David Leonhardt summarizes the job market challenge for the US economy.
The economy needs to add at least 125,000 jobs per month merely to keep up with workforce population growth. Presently, government jobs are falling and private-sector job growth is stagnant. The economy would need to add 9.4 million jobs to reduce the unemployment rate to the "normal" 6%. If the economy adds 200,000 jobs a month (itself higher than the current additions), the 'normal' will be achieved well after 2020. At 250,000 job additions a month, the 6% rate will be reached only after 2016, while at 300,000 additions it will take till mid-2014 to reach the 'normal'.
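The arithmetic behind those dates is easy to reproduce. Here is a rough Python sketch using the post's round numbers (a 9.4 million job gap and 125,000 new workforce entrants a month); the graphic's exact crossover dates will differ slightly because they embed finer assumptions about labor-force growth:

# Rough arithmetic behind the time needed to return to 6% unemployment.
# Assumptions from the post: a 9.4m job gap, and 125k jobs/month needed
# just to absorb new entrants to the workforce.
GAP = 9_400_000
NEW_ENTRANTS = 125_000

for monthly_additions in (200_000, 250_000, 300_000):
    net_gain = monthly_additions - NEW_ENTRANTS   # jobs that actually shrink the gap
    years = GAP / net_gain / 12
    print(f"{monthly_additions:,} jobs/month -> ~{years:.1f} years to 'normal'")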
Update 1 (7/1/2011)
The December 2010 jobs report offers little cause for cheer. The economy added just 103,000 jobs in December and the unemployment rate stood at 9.4%. The so-called real unemployment rate, which includes workers who are discouraged or have given up looking for work, stands at 16.7%.
The percentage of the unemployed who have been without work 27 weeks or longer edged up last month to 44.3%, virtually unchanged from a year ago. See this and this. See these excellent graphics from Chad Stone.
Update 2 (15/1/2011)
Floyd Norris has this graphic, which shows that the recession has induced more people above the age of 55 to work, even as it has reduced the percentage of those below 55 working. Thirty years ago, one in seven jobs in the United States was held by a person who was 55 or older. Today the proportion is one in five.
Changing demographics can explain part of the change. As baby boomers age, the number of people over 55 has increased. But another reason for the change seems to be that fewer older workers who have jobs are willing to retire while fewer younger people are even looking for work. The fact that a lower proportion of people under 25 are in the labor force — either working or looking for work — could reflect a decision by more of them to further their education.
Update 3 (5/2/2011)
The January jobs report shows that the US economy added 36,000 jobs on net in January, and its unemployment rate fell to 9%, possibly because many people stopped looking for jobs and dropped out of the labor force. The private sector added 50,000 jobs, while government shed 14,000 jobs.
A broader measure of unemployment — which includes those whose hours have been cut, those who are working part time because they could not find full-time jobs, and those so discouraged that they have given up on the search — was 16.1%, down from 16.7% in December. That left 13.9 million people still out of work. The unemployment rate among people with less than a high school diploma was 14.2%, while the rate among those with a bachelor’s degree or higher was 4.2%. The number of people who had been out of work for six months or more eased to 6.2 million from 6.4 million.
See this, this, this, and this.
Update 4 (26/2/2011)
The big worry for the US lies in the details of the unemployment numbers - the massive spurt in the long-term unemployed, and the disproportionately high share of those with little education among the unemployed.
Update 5 (7/3/2011)
Mark Thoma has two posts on how long it will take for unemployment to return to normalcy under two different assumptions. Both make for bleak reading.
In the meantime, the February jobs report had something to cheer, with total jobs added at 192,000 (up from a gain of just 63,000 in January), primarily through the private sector, and the unemployment rate falling to 8.9%. The industries with the biggest gains included construction (again, partly because of weather effects), manufacturing, professional and business services, and health care and social assistance. The losers were state and local governments, which have been shedding more and more workers as their budget troubles worsen.
Since the downturn began in December 2007, the economy has shed, on net, about 5.4 percent of its nonfarm payroll jobs. And that doesn’t even account for the fact that the working-age population has continued to grow, meaning that if the economy were healthy we should have more jobs today than we had before the recession. However, the average duration of unemployment climbed to 37.1 weeks.
Tuesday, December 28, 2010
Private sector share in generation capacity addition set to increase
I had blogged earlier about the dismal share of the private sector in power generation during the Eleventh Five Year Plan period (2007-12). In fact, at less than 20% of the total generation capacity addition, private capacity addition will be less than that of NTPC alone.
However, Businessline points to preliminary estimates for the Twelfth Five Year Plan (2012-17), which project that the private sector will account for 62% of the 75,000 MW of capacity addition in conventional energy sources (or 46,500 MW). The total capacity addition requirement for the Plan is targeted at 88,000 MW, which includes 13,000 MW from renewable energy sources. State and Central utilities are expected to contribute the remaining 20% and 18% respectively.
The transition to a tariff-based bidding regime from the coming January will give private developers an edge in bidding for future capacity addition projects. In any case, with mounting experience, the private sector has been improving its record in commissioning new projects. It accounted for 45%, or 4,313 MW, of the capacity commissioned in 2009-10.
Given the massive and growing demand, manifested in widespread suppressed demand and the resultant load-shedding, no amount of capacity addition will satisfy demand for the foreseeable future. The projected capacity addition will barely maintain the current power management scenario. China currently adds nearly 100,000 MW a year, whereas India has projected adding less than 200,000 MW over the ten-year period of the Twelfth and Thirteenth Five Year Plans. In that time, at the current rate, China would have added five times as much capacity!
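The gap is stark even on the back of an envelope, using the round numbers above:

# Back-of-envelope: China vs India capacity addition, using the round
# numbers cited in the post.
china_annual_mw = 100_000        # China's current annual addition
india_ten_year_mw = 200_000      # India's projected addition over two Plans

china_ten_year_mw = 10 * china_annual_mw
print(f"China over ten years : {china_ten_year_mw:,} MW")
print(f"India over ten years : {india_ten_year_mw:,} MW")
print(f"Ratio                : {china_ten_year_mw / india_ten_year_mw:.0f}x")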
Therefore, while the trend is commendable, it is important that the private sector come up with even more ambitious capacity addition programs. This assumes greater significance in view of the anticipated negative impact on government sector projects from the introduction of a tariff-based bidding regime.
Monday, December 27, 2010
Complexity economics and the global financial system
Andrew Haldane of the Bank of England has an excellent speech highlighting the complex and adaptive nature of global financial markets. He compares the reaction of the global financial system to market events to the "flap of a butterfly's wing in New York or Guangdong" that "generates a hurricane for the world economy". He writes,
"Complex because these networks are a cat’s-cradle of interconnections, financial and non-financial. Adaptive because behaviour in these networks are driven by interactions between optimising, but confused, agents. Seizures in the electricity grid, degradation of ecosystems, the spread of epidemics and the disintegration of the financial system – each is essentially a different branch of the same network family tree."
Conventional wisdom on complex systems like eco-systems and financial markets was that they were self-regulating and self-repairing. It was thought that complex systems tended to exhibit greater stability, and that complexity strengthened the self-regulatory forces in systems, so improving robustness. However, the events of the past 18 months have revealed a financial system which has shown itself to be neither self-regulating nor self-repairing. He uses four mechanisms to explain complex adaptive systems:
1. Connectivity and stability - robust-yet-fragile character
Interconnected networks exhibit a knife-edge, or tipping point, property. Within a certain range, connections serve as a shock-absorber. The system acts as a mutual insurance device with disturbances dispersed and dissipated. Connectivity engenders robustness. Risk-sharing – diversification – prevails. But beyond a certain range, the system can flip the wrong side of the knife-edge. Interconnections serve as shock-amplifiers, not dampeners, as losses cascade. The system acts not as a mutual insurance device but as a mutual incendiary device. Risk-spreading – fragility - prevails. The extent of the systemic dislocation is often disproportionate to the size of the initial shock.
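A toy simulation makes the knife-edge concrete. The sketch below is a stylized interbank cascade of my own construction, not Haldane's model: each bank spreads its lending across k random counterparties and fails once write-offs on failed borrowers exhaust its capital buffer. A marginally bigger shock flips the system from full absorption to near-total collapse, while denser connectivity dilutes each exposure below the buffer:

import random

def contagion(shock, n=200, k=4, capital=1.0, assets=6.0, seed=7):
    """Stylized interbank cascade: each bank lends `assets`, spread equally
    across k random counterparties, and fails once write-offs on failed
    borrowers exceed its `capital` buffer. All parameters are illustrative."""
    rng = random.Random(seed)
    borrowers = {i: rng.sample([j for j in range(n) if j != i], k)
                 for i in range(n)}
    lenders = {i: [] for i in range(n)}      # who takes a loss if bank i fails
    for i, bs in borrowers.items():
        for j in bs:
            lenders[j].append(i)
    loss = [0.0] * n
    loss[0] = shock                          # the initial shock hits bank 0
    failed, frontier = set(), [0]
    while frontier:
        nxt = []
        for j in frontier:
            if j in failed or loss[j] <= capital:
                continue
            failed.add(j)                    # bank j is wiped out...
            for i in lenders[j]:             # ...so its creditors write off
                loss[i] += assets / k
                nxt.append(i)
        frontier = nxt
    return len(failed)

print(contagion(shock=0.9, k=4))   # shock absorbed: 0 failures
print(contagion(shock=1.1, k=4))   # typically a system-wide cascade
print(contagion(shock=1.1, k=8))   # same shock, denser network: dispersed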
Another feature of connected networks is their 'long-tailed distribution' - the histogram formed by the number of links to each node. Unlike the randomly configured network with its symmetric and bell-shaped distribution, many real-world networks do have a thin middle and long, fat tails. There is a larger than expected number of nodes with both a smaller and a larger number of links than average.
Long-tailed distributions have been shown to be more robust to random disturbances, but more susceptible to targeted attacks. Therefore, long periods of apparent robustness, where peripheral nodes are subject to random shocks, offer little comfort or assurance of network health. It is only when a hub – a large or connected financial institution – is subject to stress that network dynamics will be properly unearthed.
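That asymmetry is easy to reproduce in a toy experiment (my illustration, not Haldane's calculation, and it assumes the networkx library is available): build a long-tailed, scale-free network, then compare the surviving giant component after removing fifty random nodes versus the fifty biggest hubs:

import random
import networkx as nx

def giant_component(g):
    # Size of the largest connected cluster that survives.
    return max(len(c) for c in nx.connected_components(g))

# A scale-free network: most nodes have few links, a few hubs have many.
g = nx.barabasi_albert_graph(1000, 2, seed=42)

random.seed(0)
g_random = g.copy()                           # random failures
g_random.remove_nodes_from(random.sample(list(g_random.nodes), 50))

g_target = g.copy()                           # targeted attack on the hubs
hubs = sorted(g_target.degree, key=lambda nd: nd[1], reverse=True)[:50]
g_target.remove_nodes_from([node for node, _ in hubs])

print("random failures :", giant_component(g_random), "of 950 nodes still linked")
print("hub removal     :", giant_component(g_target), "of 950 nodes still linked")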
Another feature of connected networks is their 'small world' property. In his famous chain letter experiment, Stanley Milgram showed that the average path length (number of links) between any two individuals was around six – hence 'six degrees of separation'. He found that certain key nodes can introduce short-cuts connecting otherwise detached local communities. This property will tend to increase the likelihood of local disturbances having global effects – so-called 'long hops'. A local problem quickly turns into a global one.
Haldane examines the global financial system and finds several interesting changes over the past two decades. First, the scale and interconnectivity of the international financial network has increased significantly - nodes have ballooned, increasing roughly 14-fold, and links have become both fatter and more frequent, increasing roughly 6-fold. Second, the international financial network exhibits a long-tail. Measures of skew and kurtosis suggest significant asymmetry in the network’s degree distribution. Third, the average path length of the international financial network has also shrunk - between the largest nation states, there are fewer than 1.4 degrees of separation.
2. Feedback and stability
The sub-prime crisis generated panic hoarding of liabilities (counterparty risk meant that banks hoarded liquidity rather than on-lend it) and distress sales of assets (to meet margin calls or reduce exposures). Individually-rational actions generated a collectively worse funding position for all. These rational responses by banks to fear of infection added to the fragility of an already robust-yet-fragile financial network.
3. Uncertainty and Stability
Through widespread counterparty uncertainty, networks have important consequences for the dynamics and pricing in financial markets. Given the multiple levels of splicing and dicing of derivative instruments, it was impossible even to trace counterparties, let alone accurately price those risks. Links in the chain are unknown, and determining one's true risk position is thereby problematic. The network chain was so complex that spotting the weakest link became impossible.
4. Innovation and stability
Another dimension of network stability was the role of complex financial instruments. Financial engineering unleashed into the markets an alphabet soup of instruments whose real risks were often impossible to assess with any reasonable degree of certainty.
He draws insights from network theory in areas like ecology, epidemiology, biology and engineering to explain the emergence over the past decade of a financial network characterized by complexity and homogeneity (pro-cyclical exposures to similar types of instruments and areas). The trend towards slicing and dicing risks and diversifying them through securitization and derivative instruments dramatically increased the system's inter-connectedness and complexity. He says,
"Follow-the-leader became blind-man’s buff. In short, diversification strategies by individual firms generated heightened uncertainty across the system as a whole... a strategy of changing the way they had looked in the past led to many firms looking the same as each other in the present. Banks’ balance sheets, like Tolstoy’s happy families, grew all alike. So too did their risk management strategies. Financial firms looked alike and responded alike. In short, diversification strategies by individual firms generated a lack of diversity across the system as a whole. So what emerged during this century was a financial system exhibiting both greater complexity and less diversity. Up until 2007... complexity plus homogeneity equalled stability."
The impact of these trends was that the financial network,
"... was at the same time both robust and fragile – a property exhibited by other complex adaptive networks, such as tropical rain forests; whose feedback effects under stress (hoarding of liabilities and fire-sales of assets) added to these fragilities – as has been found to be the case in the spread of certain diseases; whose dimensionality and hence complexity amplified materially Knightian uncertainties in the pricing of assets – causing seizures in certain financial markets; where financial innovation, in the form of structured products, increased further network dimensionality, complexity and uncertainty; and whose diversity was gradually eroded by institutions’ business and risk management strategies, making the whole system less resistant to disturbance – mirroring the fortunes of marine eco-systems whose diversity has been steadily eroded and whose susceptibility to collapse has thereby increased."
He then draws on the experience of other network disciplines and provides some tentative policy prescriptions to manage the financial network and avert systemic dislocations. He discusses three areas,
"1. Data and Communications: to allow a better understanding of network dynamics following a shock and thereby inform public communications. For example, learning from epidemiological experience in dealing with SARs, or from macroeconomic experience after the Great Depression, putting in place a system to map the global financial network and communicate to the public about its dynamics... Part of the answer lies in improved data, part in improved analysis of that data, and part in improved communication of the results;
2. Regulation: to ensure appropriate control of the damaging network consequences of the failure of large, interconnected institutions. For example learning from experience in epidemiology by seeking actively to vaccinate the 'super-spreaders' to avert financial contagion; and
3. Restructuring: to ensure the financial network is structured so as to reduce the chances of future systemic collapse. For example, learning from experience with engineering networks through more widespread implementation of central counterparties and intra-system netting arrangements, which reduce the financial network’s dimensionality and complexity."
Sunday, December 26, 2010
The global energy deficit
Large parts of the world remain without access to electricity. Over a quarter of the global population is deprived of even the first-generation uses of electricity - lighting their homes and charging mobile phones.
The United Nations estimates that 1.5 billion people across the globe still live without electricity, including 85% of Kenyans, and that three billion still cook and heat with primitive fuels like wood or charcoal.
Given the massive investments required for conventional electricity generation, transmission and distribution, off-grid electricity using cheap solar panels and high-efficiency LED lights is the most realistic option for many areas.
However, it suffers from a massive administration challenge. The dispersed nature and small size of such electricity generation makes it impractical for government agencies to administer. As the Times writes, "A $300 million solar project is much easier to finance and monitor than 10 million home-scale solar systems in mud huts spread across a continent."
Such systems have to be run either by the local community or by private operators. However, a reliable and sustainable business model for such investments remains elusive. Investors naturally see the poor rural consumer base as too risky to yield reasonable returns.
Friday, December 24, 2010
Competitive bidding in power purchases
In an attempt to ensure a transparent price discovery mechanism in all power purchases by distribution utilities and to lower retail tariffs, the Government of India has mandated tariff-based competitive bidding for all purchases from January 5, 2011. This marks the end of the current practice wherein power purchase agreements were executed through MoUs based on regulator-determined "cost-plus" tariffs.
This follows the National Tariff Policy notification (Para 5.1) that all power procurement should be only through competitive bidding from January 2011. A comparison of the tariffs obtained through the competitive route with those arrived at under the cost-plus tariff structure reveals that those obtained through bidding were lower.
The state-owned producers like NTPC and the various state-government generators had expectedly opposed this. It is a testament to the competitiveness (or lack of it) of the state-owned generators that despite being the country's largest power producer, NTPC has failed to come even close to winning any of the UMPPs bid out so far.
Here is a comparison of the tariffs under different scenarios (via India Power Reforms and this). The Case I (no location specification and linkages) and Case II (government helps in land acquisition, coal linkages, clearances and water supplies) bid rates for various projects are as follows.
The comparison with some of the rates arrived at through the cost-plus MoU route makes the case in favor of competitive bidding a no-brainer.
Power purchase agreements on the cost-plus MoU route are a classic case of inefficient utilization of resources - an example of subsidizing inefficient state utilities at the cost of consumers. It may have been a necessity in the half-century or so after Independence, when the private sector did not have the breadth and depth to meet the requirements. Now, with the private sector stronger and with the resources to provide a major share of the country's power requirement, it is time to bury such inefficient policies.
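The incentive difference between the two regimes is easy to sketch. Under cost-plus regulation, the utility recovers its actual costs plus an assured return, so overruns pass straight through to consumers; under a bid-out tariff, the price is locked in and overruns eat into the developer's margin. A stylized illustration (all figures are hypothetical, not from any actual PPA; the 15.5% assured return is only an approximation of the regulated return on equity):

# Stylized contrast between cost-plus and bid-out tariffs.
# All figures (Rs/kWh) are hypothetical, purely for illustration.

def cost_plus_tariff(cost, equity_base=1.0, assured_roe=0.155):
    # Regulator passes the actual cost through and layers an assured return.
    return cost + equity_base * assured_roe

BID_TARIFF = 2.30   # price locked in through competitive bidding

for cost in (2.00, 2.40):          # without and with a 20% cost overrun
    print(f"cost {cost:.2f} -> cost-plus tariff {cost_plus_tariff(cost):.2f}, "
          f"bid-route margin {BID_TARIFF - cost:+.2f}")

Under cost-plus, the overrun simply raises the tariff; under the bid route, the same overrun turns the developer's margin negative, which is where the cost discipline comes from.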
However, on a cautionary note, it is important that such contracts do not end up replacing public sector inefficiency with private sector crony capitalism. This assumes importance in view of the widespread practice (and growing moral hazard) of contract re-negotiations, citing loose or ambiguous contractual provisions, after the bids are finalized. Recent efforts to tighten bidding norms in UMPPs are a step in the right direction.
The one thing I am curious about is whether any state-owned utility has succeeded with a tariff-based bid anywhere in the country!
Thursday, December 23, 2010
Johnny Depp teaches "framing effect"!
This dialogue from Johnny Depp's new movie "The Tourist" is at the heart of what behavioural psychologists call framing.
Inspector: Now, you wish to report a murder?
Depp: No, some people tried to kill me.
Inspector: I was told you were reporting a murder.
Depp: Attempted murder.
Inspector: Ah, that is not so serious.
Depp: No, not when you downgrade it from murder. When you upgrade it from room service, it’s quite serious.
(HT: Tax Policy Center discussing whether President Obama's New Tax law is about cutting taxes or sparing people a tax hike)
Wednesday, December 22, 2010
Addressing India's employment challenges
Here is my Mint op-ed on addressing arguably India's biggest economic challenge - providing employment to the millions joining the workforce.
Tuesday, December 21, 2010
On knowledge capital depreciation
Accounting principles have defined depreciation schedules for different types of physical capital. However, depreciation of knowledge capital, which forms an increasingly large share of wealth, is more controversial.
In an interesting post, Michael Mandel attributes America's declining economic prosperity and current travails to its misplaced economic priorities. He feels that instead of building on the country's leadership in knowledge-based industries and knowledge capital creation, successive American governments over the past two decades supported policies that favored debt-led consumption. And he points to the accelerating depreciation of the country's knowledge capital,
"The value of knowledge capital depends, in part, on how rare it is. The more companies or countries that possess the same knowledge (say, about how to make a commercial airliner), the less valuable that knowledge is. This is just Economics 101, applied to intangibles.
Over the past 10-15 years, the strengthening of information flows into developing countries meant that knowledge capital was being distributed much more quickly around the world. As a result, the normal process of knowledge capital depreciation greatly accelerated in the US and Europe – beneath the radar screen, because no statistical agency constructs a set of knowledge capital accounts."
Tyler Cowen, while agreeing with Mandel's conclusion, differs with him on the argument that globalization is responsible for the increased depreciation rate of knowledge capital. Instead, he points to the inherent nature of knowledge capital and its general propensity to diminish faster in value (to its users) compared to physical capital. He takes the example of an imaginary economy with two sectors, music and bathtubs, and writes,
"My bathtub is over thirty years old, yet for me it works fine and I have no desire to buy a new one. When it comes to music, most people want to listen to what is new and hot, not Bach's B Minor Mass... The more that your economy "looks like" the music sector, the more rapid the rate of depreciation for production capital and knowledge capital. This means we may be overestimating our national wealth."
There may be some nuances to the knowledge capital depreciation story. There are two opposing forces at work here. One is the classic demand-supply issue - as the supply (or access) increases, the value decreases. The other is the contribution of network effects on the value of the service or good - as more people have the good/service, its value increases.
The rate of appreciation or depreciation depends on the relative amplitude of these two effects. If the former is larger and disruptive technologies emerge quickly (as in music), the depreciation rate will be high. On the contrary, if the entry barriers are large (as in the pharma sector, as the Gates Foundation is finding out), the rate of depreciation will be smaller; the net change in value may even be positive.
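A toy value equation makes the two forces concrete: let diffusion erode value at rate d while network effects add value at rate g, so the asset's value evolves as V(t) = V0·e^((g−d)·t), and the sign of g−d decides between depreciation and appreciation. A sketch with purely hypothetical rates:

import math

def knowledge_value(v0, network_gain, diffusion_loss, years):
    """Toy model: knowledge capital compounds at the net of network
    effects (+) and diffusion/commoditization (-). Rates are hypothetical."""
    return v0 * math.exp((network_gain - diffusion_loss) * years)

# A 'music-like' sector: fast diffusion swamps weak network effects.
print(round(knowledge_value(100, 0.02, 0.15, years=5)))   # ~52: steep depreciation
# A 'pharma-like' sector: entry barriers slow diffusion; value can appreciate.
print(round(knowledge_value(100, 0.04, 0.01, years=5)))   # ~116: mild appreciation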
Monday, December 20, 2010
Chinese industrial policy and protectionism in the US
For decades, politicians and businesses in India used the excuse of protecting domestic companies to keep out foreign manufacturers in several areas. They argued that government support - directly with concessions and other subsidies and indirectly through trade barriers - was necessary to keep the fledgling domestic industries from being swept away by multi-national companies. Critics though contended that this was not in the country's benefit since it prevented the inflow of advanced technologies and strangulated market competition.
Over the past two decades, India has moved away from such industrial policy and opened its economy to private investment. The same has been the trend in many parts of the developing world. But, in an inversion of this trend, industrial policy appears to be making a comeback in the most surprising of places - the self-professed flagbearer of free-market capitalism, the United States! And driving this has been the remorseless march of China, especially in infrastructure sector equipment.
I have blogged earlier about the ambitious forays of Chinese companies into the global renewables, light-rail, power equipment and telecommunications markets. Having established control over their domestic market for wind power generation equipment, the Chinese are now out to capture the global market. Chinese state-owned wind turbine makers like Dongfang, Goldwind, and Sinovel are now scouting large external markets like the US. Their undoubted cost advantage over domestic US makers, with more or less the same technologies, is making US manufacturers apprehensive. Wind turbines made by Chinese manufacturers sell for an average of $600,000 a megawatt, compared with $800,000 or more for Western models made from Chinese parts, and even higher prices for European and American machines.
This has resulted in a rising tide of protectionist voices. They point to the unfair competitive advantages enjoyed by state-owned Chinese companies through the massive direct and indirect subsidies they receive from Beijing. To pre-empt moves to shut them out of the large US market, the Chinese makers have been exploring options to set up manufacturing facilities and thereby create jobs in the US.
The Chinese government has followed classic industrial policy in these core infrastructure equipment sectors for the past two decades. Initially, on the one hand, the central government offered large subsidies and encouraged massive investments (by assuring a market for the equipment produced by these firms). On the other, it encouraged foreign investments with strings attached - technology transfers, local content rules, and domestic manufacturing facilities - that catalyzed domestic manufacturing growth.
Domestic manufacturers leveraged the huge domestic volumes and built-up expertise to move up the technology value-chain. As they developed competence, the government encouraged them to bid for foreign projects. The implicit subsidies enjoyed by these firms - cheap inputs and labor, tax concessions, cheap land and capital, preferential contracts from state-owned firms etc - coupled with their now undoubted competence, give the Chinese firms huge pricing power. Over time, the Chinese manufacturers displaced their foreign competitors not only in the domestic market but globally too. For example, Chinese companies now control almost half of the $45 billion global market for wind turbines.
The Times report points to a classic example of industrial policy in the wind power sector - Notice 1204, put out by China's top economic policy agency, the National Development and Reform Commission, on July 4, 2005, declared that wind farms had to buy equipment in which at least 70% of the value was domestically manufactured. This is in clear violation of WTO rules prohibiting such local content regulations. However, for fear of losing a piece of China's booming wind farm business, foreign manufacturers have preferred to indigenise production instead of complaining to trade officials in their home countries.
Update 1 (15/1/2011)
Excellent account of why Evergreen Solar, the third-largest maker of solar panels in the United States, is closing down its US factories and shifting production to China. It says that the support offered in China cannot be matched by US governments - cheap factory labor (monthly wages average less than $300, against more than $5,400 a month for Massachusetts factory workers) and cheap capital (at less than 5%, covering more than two-thirds of the investment, against double-digit rates in the US). China's real advantage lies in the ability of solar panel companies to form partnerships with local governments and then obtain loans at very low interest rates from state-owned banks.
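Rough numbers show why the financing advantage can dominate the wage gap for a capital-intensive plant. A back-of-envelope sketch using the figures cited above (the plant's capital cost and headcount are hypothetical assumptions):

# Back-of-envelope: annual cost gap for a solar factory, China vs the US.
# Wage and interest figures are from the account above; the plant's
# capital cost and headcount are hypothetical assumptions.
capex = 450e6      # factory investment, $ (assumed)
workers = 500      # factory headcount (assumed)

us_interest = 0.12 * capex                # double-digit US borrowing rate
cn_interest = 0.05 * (2 / 3) * capex      # <5% on the 2/3 state banks cover
us_wages = workers * 5_400 * 12           # Massachusetts factory wages
cn_wages = workers * 300 * 12             # Chinese factory wages

print(f"financing gap: ${us_interest - cn_interest:,.0f} a year")
print(f"wage gap     : ${us_wages - cn_wages:,.0f} a year")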
Sunday, December 19, 2010
Global Corruption Barometer 2010
Transparency International has published its Global Corruption Barometer 2010. The 2010 Barometer used public opinion surveys to capture the experiences and views of more than 91,500 people in 86 countries and territories. It reports that one in four people paid a bribe in the last year, and that six out of ten people around the world say that corruption has increased over the last three years.
The section on petty bribery in particular makes for fascinating reading. Nearly 77,000 people were asked to rate their experiences over the past 12 months with nine of the commonest public service delivery departments. Interestingly, India came in the top tier of countries whose respondents reported having paid a bribe to access the nine public services over the past 12 months - 54% of Indians reported having paid bribes.
The police emerged as the most corrupt of the departments, with nearly three in ten people who had contact with the police reporting that they paid bribes. In Asia Pacific and Latin America, however, people reported the judiciary as the most corrupt institution.
Capturing the regressivity of petty bribes, the survey finds that poor people are more frequently penalized with bribes. They pay bribes more frequently than the rich in eight out of the nine services.
Nearly half the people report that their last bribe was to "avoid a problem with the authorities", nearly a quarter to "speed things up". Interestingly, in Asia Pacific, 35% of people last paid bribes to "access a service they are entitled to" and 28% to "speed things up".
Saturday, December 18, 2010
Restoring competitiveness in PIIGS
An article in the Economist points to the erosion in competitiveness suffered by the peripheral economies over the past decade. It writes,
"In the decade and a half before the crisis, countries such as Greece, Ireland, Portugal and Spain lost a lot of competitiveness. Low interest rates led to a surge in domestic demand. That, coupled with rigid labour markets in some places, led to sharp rises in nominal wages. At the same time productivity growth was not vigorous enough to compensate. By contrast, for a decade after its reunification boom turned sour in the mid-1990s, Germany took bitter medicine, holding wages down and boosting productivity. The result was a steady erosion of the peripheral countries’ competitiveness, especially relative to Germany."
This again draws attention to the difficulty of maintaining competitiveness in a single-currency union. External competitiveness depends on prices, which in turn are a function of the exchange rate and the cost of production. The former, the commonest instrument for managing export competitiveness, is unavailable to eurozone members. This leaves them with adjusting the cost of production by managing the prices of goods, services and wages - internal devaluation. This can be done either through negative or slower price and wage growth (which carries the dangers of deflation) or through higher productivity growth.
Increasing productivity, especially to the extent required to make any meaningful dent on real prices, may be difficult to achieve. That leaves deflation as the only available option. However, as the Economist article highlights, the experience with deflation in other countries has not been very satisfactory. Deflation also carries the risk of increasing the real value of the already unsustainable debt burdens of the peripheral economies.
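The grind this implies is clear from simple arithmetic: the years needed to close a competitiveness gap scale with the inflation differential a country can sustain against Germany. A sketch with illustrative numbers (the 20% gap is a stand-in, not a measured figure):

import math

def years_to_close(gap, own_inflation, german_inflation):
    """Years of internal devaluation needed to close a relative price or
    unit labour cost gap, given a sustained inflation differential."""
    differential = german_inflation - own_inflation
    if differential <= 0:
        return float("inf")     # inflating faster than Germany: gap widens
    return math.log(1 + gap) / math.log(1 + differential)

# Illustrative: a 20% competitiveness gap against Germany.
print(years_to_close(0.20, own_inflation=0.00, german_inflation=0.02))    # ~9 years
print(years_to_close(0.20, own_inflation=-0.01, german_inflation=0.02))   # ~6 years
print(years_to_close(0.20, own_inflation=0.03, german_inflation=0.02))    # inf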
Further, far from any deflation, the peripheral economies today face a greater threat from inflation. Their higher-than-eurozone-average inflation rates mean that their competitiveness relative to the others, especially Germany, is declining further. All this means that the options available to the troubled economies are limited, and exit looks increasingly inevitable.
Further, as the graphic also indicates, external competitiveness is critical for the peripheral economies to not only match Germany, but also compete with China in many export markets.
In this context, Dani Rodrik, one of the single currency's supporters, has opined that "an amicable divorce is a better option than years of economic decline and political acrimony" and suggests that members "can rejoin, and do so credibly, when the fiscal, regulatory, and political prerequisites are in place".
Wednesday, December 15, 2010
MDGs Vs DIGs
I had blogged some time back about the fact that our development discourse favors the wealth-redistribution route to poverty eradication over the wealth-creation path. In this context, the distinction drawn by Erik Reinert between "development economics (i.e. radically changing the productive structures of poor countries) and palliative economics (i.e. easing the pains of economic misery)" assumes relevance.
The most high-profile symbol of wealth-redistribution or palliative economics is the UN's Millennium Development Goals (MDGs). The MDGs focus on the provisioning of eight social sector services and rely on foreign aid to contribute a substantial share of their financing needs. Harvard Professor Stephen Peterson has a very relevant article that throws in a word of caution on the obsessive pursuit of the MDGs. His thesis reads,
"The Millennium Development Goals (MDGs) are not the best bet for the bottom billion: they have never been adequately funded, are unlikely to be adequately funded, are fiscally unsustainable, and not the best investment for poor countries in terms of level and certainty of return. The global economic crisis requires a rethink of development, a return to fundamentals, a return to growth and a return to fiscal probity."
He illustrates this with the example of education Vs roads,
"The MDGs are not the best investment decision in terms of pro-poor growth multipliers. Investment in education, for example does not have a clear impact on growth whereas, there is considerable evidence that tertiary roads have significant growth multipliers and pro-poor outcomes."
His alternative proposal is
"The MDGs should be replaced with the following strategy: DIGs (Decadal Infrastructure Goals). DIGs has four components:
• DTGs: decade tax goals
• DAGs: decade agriculture goals
• DRGs: decade road goals
• DPGs: decade power goals
... The DIGs reduce the risk of development as we know how to design, implement, and finance them and their value and impact are certain."
He also advocates that foreign aid should be utilized for meeting DIGs and not MDGs,
"Social services are long term liabilities (principally salaries) and should be funded by domestic revenue not volatile foreign aid. A 'better bet' for using foreign aid in Africa is to have it focus on the DIGs (revenue, roads, power) which have proven growth multipliers that can in turn expand domestic revenue for social services. If African societies want social services, then they must rely on their own pockets, not those of foreigners — taxes are the price of living in a civilized society."
His opposition to aid-financed social sector investments is two-fold - their benefits are questionable (mostly diffuse and long-drawn-out) and they create assets whose maintenance requires massive recurring expenditure (salaries, O&M costs, consumables etc), which is left to the host governments (who are most often unable to bear the burden). Added to this is the reality of developed economies facing a decade or more of belt-tightening, during which the already minuscule aid flows are likely to decline further. And, in any case, the MDG-fulfilling aid requirement was too large for current aid trends to make any meaningful dent.
I am inclined to agree with the underlying premise of all the aforementioned, though not the sweeping tenor of the generalization. While I agree that the focus should shift to DIGs, it should not be at the expense of MDGs. In many respects, they are inter-related. A healthy and well-educated population is a prerequisite for any wealth-creation.
In particular, the importance of roads and electricity cannot be over-emphasized. They are the fundamental building blocks for success with any development or governance intervention and literally the oxygen of economic growth. A "Big Push" in either has the potential to be the closest thing to a silver-bullet intervention in poverty eradication.
Prof Peterson is also right to highlight the importance of revenue mobilization and the need to revamp public finance systems in developing countries. Foreign aid can at best be a small complement; the bulk of the massive resources - for achieving both social sector and infrastructure goals - has to come from domestic tax and non-tax revenues. And there are numerous opportunities for quick wins by improving the revenue mobilization machinery with easy and commonplace initiatives.
Tuesday, December 14, 2010
Nudging on diet control!
Here is the nudge way to exercise self-control over eating habits. Researchers Carey K. Morewedge, Young Eun Huh and Joachim Vosgerau of Carnegie Mellon University have found that repeatedly imagining that you are eating (by, say, staring at pictorial representations of your favorite chocolate for some time, or just mentally picturing eating more and more of the chocolate) will reduce the amount of the food you actually eat. They write,
"The consumption of a food typically leads to a decrease in its subsequent intake through habituation — a decrease in one’s responsiveness to the food and motivation to obtain it. We demonstrated that habituation to a food item can occur even when its consumption is merely imagined. Five experiments showed that people who repeatedly imagined eating a food (such as cheese) many times subsequently consumed less of the imagined food than did people who repeatedly imagined eating that food fewer times, imagined eating a different food (such as candy), or did not imagine eating a food. They did so because they desired to eat it less, not because they considered it less palatable. These results suggest that mental representation alone can engender habituation to a stimulus."
Update 1 (17/7/2011)
Paul Rozin, Sydney Scott, Megan Dingley, Joanna K. Urbanek, Hong Jiang, and Mark Kaltenbach argue that subtle changes in the way different foods are made accessible in a pay-by-weight-of-food salad bar in a cafeteria serving adults for the lunch period can play an important role in reducing obesity. They found that "making a food slightly more difficult to reach (by varying its proximity by about 10 inches) or changing the serving utensil (spoon or tongs) modestly but reliably reduces intake, in the range of 8–16%". They conclude that "making calorie-dense foods less accessible and low-calorie foods more accessible over an extended period of time would result in significant weight loss".
Monday, December 13, 2010
John Taylor is wrong on the impact of federal transfers
John Taylor has been a long time critic of the fiscal stimulus measures in the US. In a recent post, he puts forward more evidence to support his claim. He points to his research findings on the impact of temporary transfers to state and local governments in the 2009 stimulus,
"state and local governments did not increase their purchases of goods and services — including infrastructure — even though they received large grants in aid from the federal government. Instead they used the grants largely to reduce the amount of their borrowing as the following graph dramatically shows. As American Recovery and Reconstruction Act (ARRA) grants from the federal government rose, the amount of net borrowing by state and local governments declined."
And concludes,
"the 2009 stimulus package did little to stimulate the economy, despite its large size... the temporary increases in transfers... in the 2008 and 2009 stimulus packages did not work to stimulate the economy."
Now, by any objective reading, it is plainly incorrect to arrive at this conclusion from the aforementioned research observation. Let me illustrate with the example of Urbania Municipality. In normal times, Urbania's annual expenditure is $100 m, of which $50 m comes from property and other tax and non-tax revenues, $10 m from federal grants, and $40 m from different types of debt. The expenditure break-up is $15 m interest repayments, $25 m operation and maintenance, $30 m salaries, and $30 m capital expenditure.
Now recession strikes, coinciding with a financial market crisis. Internal revenues fall from $50 m to $30 m. Frozen credit markets, coupled with a ratings downgrade, dramatically increase Urbania's cost of accessing debt. Its net annual borrowing halves to $20 m. Assuming a $100 m annual expenditure and the annual $10 m federal grant, Urbania is left with a $40 m deficit. The only available source of funds is the federal government.
Since interest repayments and maintenance expenditures, amounting to $40 m, are unavoidable, Urbania is left, without additional external support, with just $20 m to pay for both salaries and new roads and waterlines. Retrenchment of municipal labor and deferred investments are the inevitable result. Replacement of the leaking sewerage line and water connections to the newly constructed housing colony will have to wait for another year.
It is in this context that the federal government decides to step in with an additional $30 m grant to local governments like Urbania. Though this incremental $30 m will still leave Urbania with a budget deficit of $10 m, it is now considerably better off than without it. It will not be able to increase its purchases of goods and services beyond the targeted $100 m. But Urbania will now be able to at least retain all its workers and even spend $20 m on important infrastructure investments. And the contractionary impact of the recession will be considerably mitigated.
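The budget arithmetic of the example can be laid out explicitly. The sketch below simply restates the Urbania numbers from the preceding paragraphs:

```python
# The Urbania Municipality example from the preceding paragraphs,
# restated as simple budget arithmetic (all figures in $ millions).

EXPENDITURE = 100  # interest 15 + O&M 25 + salaries 30 + capex 30
UNAVOIDABLE = 15 + 25  # interest repayments and maintenance

def discretionary_funds(own_revenue, borrowing, federal_grant):
    """Funds left for salaries and capital works after unavoidable costs,
    plus the gap against the targeted expenditure."""
    total = own_revenue + borrowing + federal_grant
    return total - UNAVOIDABLE, EXPENDITURE - total  # (discretionary, deficit)

# Normal times: 50 own + 40 debt + 10 grant = 100, fully funded.
normal, _ = discretionary_funds(50, 40, 10)
print(f"Normal times: {normal} m for salaries+capex, budget balanced")

# Recession: revenues fall to 30, borrowing halves to 20.
recession, deficit = discretionary_funds(30, 20, 10)
print(f"Recession: {recession} m for salaries+capex, deficit {deficit} m")
# -> only 20 m against 60 m of salaries (30) and capex (30): layoffs loom.

# With the additional 30 m federal transfer:
stimulus, deficit = discretionary_funds(30, 20, 10 + 30)
print(f"With transfer: {stimulus} m for salaries+capex, deficit {deficit} m")
# -> 50 m covers the 30 m salary bill and leaves 20 m for infrastructure.
```

The transfer shows up in the accounts as reduced borrowing, not increased purchases, which is exactly the pattern Prof Taylor's graph captures.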
Now substitute Urbania with American states and the flaw in Prof Taylor's sweeping conclusion about the failure of federal transfers in the stimulus package becomes evident. The counterfactual is difficult to construct and the benefits of the intervention all too easy to overlook. But it cannot be denied that in the absence of federal transfers, there would have been mass lay-offs, cuts in services, and large-scale postponement of important civic investments. And the effect of all this would be a self-fulfilling downward spiral of declining aggregate demand and decreasing tax revenues.
Is it an ideological blindness that causes eminent economists like Greg Mankiw and now John Taylor to come up with such obtuse arguments? Or is it yet another manifestation of the dark age of macroeconomics?
Sunday, December 12, 2010
The case for temporary cross-country migration
I had made the case for internal labor mobility, especially from rural to urban areas, in an article here. In an earlier post, I had pointed to a study by Michael Clemens, Claudio Montenegro and Lant Pritchett showing that cross-border migration to the US is the most effective anti-poverty strategy for the people of developing economies. So much so that Lant Pritchett has estimated annual gains of about $300 billion - three times the benefit of removing the remaining barriers to trade - and therefore recommended the creation of 3 million jobs for guest workers in the US.
Now Amol Agarwal points to a World Bank evaluation study by David McKenzie and John Gibson of New Zealand's Recognised Seasonal Employer (RSE) program. It was launched in 2007 with the explicit goal of promoting development in the Pacific Islands alongside benefiting employers at home. The multi-year evaluation reveals that the program has largely achieved its goals. The authors write,
"Participating in the RSE has raised incomes in both Tonga and Vanuatu, allowed households to accumulate more assets, increased subjective standards of living, and, additionally in Tonga improved child school attendance for older children. Communities also seem to have received modest benefits in terms of monetary contributions from workers, with community leaders overwhelmingly viewing the policy as having an overall positive impact. These results make this seasonal migration program one of the most effective development interventions for which rigorous evaluations are available."
At a theoretical level, economists have long suggested that guest worker programs and temporary migration are among the most cost-effective poverty eradication interventions. The complementarities between the aging populations of developed economies and the youthful ones of emerging economies provide mutually beneficial opportunities. But there lie several formidable obstacles to achieving free labor mobility, especially for those categories of labor which are most beneficial to the supplying countries.
For a start, unlike capital, labor mobility evokes immediate and intense socio-political (xenophobia) and economic ("stealing our jobs") reactions. These are likely to be amplified as long as economic conditions in developed economies remain weak. The difficulty of administering guest worker or temporary labor programs is another major stumbling block to their more widespread adoption. How can over-stay be eliminated?
The RSE program has been successful in avoiding the problem of over-staying. But managing such programs between New Zealand and the small Pacific Islands is nothing compared to the infinitely more complex task of doing the same between, say, Nigeria and the USA. Despite the severe punishments meted out to illegal migrants and those over-staying, tens of thousands of migrants from the sub-continent and East Asia continue to over-stay in the Gulf countries. A long period of close economic and political integration between economies is the only way to sustainably break down the institutional restrictions on labor mobility.
The authors acknowledge that the gains from such seasonal migration are much smaller than those from permanent international migration. They therefore point to the big question of whether seasonal migration can eventually open up avenues for permanent migration.
Friday, December 10, 2010
Prof Mankiw's "convenient agnosticism"!
Greg Mankiw is one of my favorite economists. His hugely popular textbook is arguably the second-best window to learning the principles of Economics. His superstar blog has surely played a major role in enriching the debate on many political economy issues.
But his recent post explaining his ambivalence and agnosticism on extending unemployment insurance dismays me. In fact, it comes across as specious and positively disingenuous, bordering on misleading and misdirecting the debate. Here is why.
1. For a start, there is now enough literature on the pros (UI reduces household income uncertainty and props up aggregate demand) and cons (budgetary costs and weakened job-search incentives) raised by Prof Mankiw.
There is now ample evidence that extending the duration of UI does little to lower the marginal incentive to search for re-employment. Alan Krueger and Andreas Mueller have found that the time devoted to job search is fairly constant regardless of unemployment duration for those who are ineligible for UI.
Further, a recent study by the San Francisco Fed found a negligible impact of UI extension on the unemployment rate - about 0.4 percentage points of the nearly 6 percentage point increase in the national unemployment rate since 2008.
2. The case for or against any policy instrument varies depending on the broader macroeconomic environment. There cannot be a single applicable-for-all-conditions argument in favor of or against a policy.
Consider this. The US economy is still recovering very slowly from a deep recession. Unemployment rates are on the rise and private sector job creation remains anemic. Inflation is low, aggregate demand is weak. Household balance sheets remain battered and will require more repairs. Monetary accommodation may have limited further traction. There is limited fiscal space available.
In the circumstances, doing nothing for lack of compelling quantitative evidence (of the success of the proposed intervention) and thereby letting the economy continue on its present course carries several dangers. The dismal macroeconomic conditions and prospects mean that the economy risks falling back into a deep recession, from which recovery will be even more painful and long-drawn-out. In fact, without recovery taking hold (and there is no evidence of this happening), the shares of debts and deficits will continue to grow.
The limited fiscal space available means that any stimulus spending should be directed at areas delivering the greatest bang for the buck. Money should be delivered to the hands of people who are likely to spend it immediately. A number of studies clearly indicate that among the various stimulus options, UI is among those with the largest multiplier.
3. The argument against any action on the grounds that the evidence is not conclusive enough is all the more misleading since the inherent nature of an economy precludes conclusive evidence on any reasonably complex policy intervention. Do we invoke this excuse to let a system take its own course, even when it is hurtling into an abyss? Or do we weigh the relative probabilities of the possible policy options and respond with the most cost-effective and most-likely-to-succeed option?
No one disputes that budget deficits and public debts should be lowered. The relevant questions are: is this the time to do so? What are the costs of contraction at this point in time? What is the cost of doing nothing and letting UI expire? Is there enough evidence in favor of the stimulative impact of UI?
In other words, the choice is between risking a deflationary recession, with rising shares of debts and deficits and untold pain for the vast majority of people, and stimulating a recovery by running up short-term deficits. Reasonable people would choose the latter as the lesser evil.
4.
"So when I hear economists advocate the extension of UI to 99 weeks, I am tempted to ask, would you also favor a further extension to 199 weeks, or 299 weeks, or 1099 weeks? If 99 weeks is better than 26 weeks, but 199 is too much, how do you know?"
This too has echoes of the first point. Prof Mankiw surely knows that there cannot be a not-too-high, not-too-low, just-right magic figure for the duration of UI applicable to all situations. In fact, the issue to be discussed here is not whether the UI duration is too high or too low, but the benefits of its extension relative to its costs. At issue is not what constitutes the optimal duration of UI, but whether fiscal stimulus is required and what the most effective form of stimulus is. If we assume that the moral hazard of UI is negligible and that we need fiscal stimulus, then the case in favor of extending UI is pretty strong.
5.
"It is also conceivable that the amount of UI offered in normal times is higher than optimal and that a further extension would move us farther from what is desirable."
On this, there is already enough evidence that the US errs on the side of too little UI generosity, just as Europe errs on the side of too much. So the argument about the possibility of deviating farther from the desirable may be superfluous.
6. Prof Mankiw's argument is based on the lack of "compelling quantitative" evidence on the optimum period of UI. He gives the impression that he is unable (or unwilling) to take a decision on UI till he has "compelling quantitative analysis" about "how generous the optimal system would be". Unfortunately, by setting his conditions in so comprehensive a manner, he has virtually ensured that he will never need to take a decision to support the extension of UI. In any case, how much evidence is "compelling" enough?
He knows all too well that it is impossible to conclusively quantify the optimal duration of UI - not now, nor ever in the future. In the circumstances, this logic is a convenient excuse for doing nothing. That naturally translates into letting the UI benefits expire. In other words, you vote against extension without appearing to directly oppose it!
And such reasoning has now become the convenient excuse for conservative economists to recuse themselves from supporting government intervention in many areas. The pathological opposition to tax increases works on similar lines. Once again, it is impossible to conclusively prove whether the system falls on the upward (left) or downward (right) side of the Laffer curve.
Similarly, conservatives point to the persistent high unemployment rates and weak economic growth and argue that the fiscal stimulus measures failed. The difficulty of estimating the counterfactual condition (without stimulus) means that an argument set on such terms of reference cannot be easily refuted.
Taken to its extreme, such reasoning can be used to justify almost any position. In social sciences like economics, it is impossible to conclusively and quantitatively prove that a proposed intervention or policy is required. Decisions on whether fiscal stimulus is necessary, and what types of spending are required, will never be settled in a universally compelling manner.
Thursday, December 9, 2010
The tax-cut stimulus
President Obama's slide continues. In its desperation to secure Congressional approval for an extension of unemployment insurance (UI) (which was expiring on December 31, 2010), the Obama administration agreed to a two-year, $900 bn tax-cuts-only stimulus plan. The stated objective is to mitigate the suffering of the long-term unemployed and boost aggregate demand. However, the odds are that it may well return to haunt him when he seeks re-election in 2012.
The major component is the extension of the Bush-era cut in marginal tax rates from 36% and 39.6% to 33% and 35% respectively for two more years. After strongly opposing its extension for those with incomes above $250,000 (which amounted to $60 bn in tax revenues for a year), the President finally abandoned this in return for Republican support for UI extension. The UI extension allows jobless workers in states with high levels of unemployment to collect insurance for up to 99 weeks (as against the regular 26-week period). See reactions to the tax cuts here. Mark Thoma says that it is neither targeted nor timely nor temporary.
The other components include: protection of estates up to $5,000,000 from the estate tax for the next two years (costing $10 bn in revenues); extension of all the refundable tax credits made in ARRA for two more years ($40 bn); extension of UI for 13 months ($56 bn); a 2 percentage point cut in the 6.2% Social Security payroll tax paid by employees (not employers) for one year ($120 bn); and 100% expensing of business investments over the next two years (so as to shift investment forward to 2011 and 2012). The top rate of 15% on capital gains and dividends would remain in place for two years.
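Adding up the components for which costs are cited gives a sense of the package's composition. This is a back-of-the-envelope tally from the figures above; the residual attributable to the rate extensions, business expensing and capital gains/dividend rates is inferred, not an official estimate:

```python
# Back-of-the-envelope tally of the stimulus package components,
# using only the cost figures cited above (in $ billions).
# The residual is inferred from the headline total, not an official estimate.

TOTAL_PACKAGE = 900  # two-year headline figure

itemized = {
    "Estate tax protection (2 yrs)": 10,
    "ARRA refundable tax credits (2 yrs)": 40,
    "UI extension (13 months)": 56,
    "Employee payroll tax cut (1 yr)": 120,
}

itemized_total = sum(itemized.values())
residual = TOTAL_PACKAGE - itemized_total

for item, cost in itemized.items():
    print(f"{item:<40} ${cost} bn")
print(f"{'Residual (rate extensions, expensing etc)':<40} ~${residual} bn")
# The itemized components sum to $226 bn, implying that roughly
# $674 bn of the package goes to extending existing tax rates.
```

On this arithmetic, the bulk of the package merely continues existing rates rather than funding new, high-multiplier spending, which is the thrust of Ezra Klein's assessment below.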
Ezra Klein puts the stimulus measure in perspective,
"There's some new stimulus in the form of the payroll-tax cut and the expensing proposals. The older stimulus programs that are getting extended -- notably the unemployment insurance and the tax credits -- probably would've expired outside of this deal. The tax cuts for income over $250,000 are a bad way to spend $100 billion or so, and the estate tax deal is really noxious... Most of the money just keeps programs that are currently in effect from expiring, so in some ways, it would be more accurate to say that this money is anti-contractionary rather than stimulative."
In view of the unsustainable fiscal deficit and public debt, it was important that any fiscal stimulus focus on areas that provide the biggest bang for the buck. But by relying exclusively on tax cuts - all of which, except the UI extension, will have a multiplier of less than one - the stimulus measure will have limited impact on aggregate demand. In fact, the CBO has estimated that its impact on the unemployment rate will be marginal. Economix points to a study by the Center for American Progress which estimates that the stimulus will save or create about 2.2 million jobs over the two years. This overlooks the fact that a large proportion of these tax cuts will end up being saved rather than spent, all the more so in the present anemic economic environment. See also Mark Zandi's assessment here.
There are two dangers for the White House with this round of stimulus, similar to what happened with the ARRA. It is now widely acknowledged that the White House over-promised with the ARRA, despite clear evidence that it was not large enough to meet the requirements of the time. Subsequently, as the economy stagnated and the unemployment rate rose beyond the initial estimates (as was inevitable), opponents jumped on this as evidence of the stimulus having failed. This in turn made further stimulus measures anathema.
This time too, the same story is being repeated. President Obama has already promised that "It will spur our private sector to create millions of new jobs". Since the "millions" of jobs will, in all likelihood, never arrive, this round of stimulus too will remain vulnerable to being criticized later for not delivering on its promises.
Second, the compromise only kicks the political economy can down the road. The two-year extension of the tax cuts means that they will come up for renewal once again in 2012, in the midst of the presidential elections. In light of the present conditions and medium-term prospects, the economy is likely to be no better then. The debate is likely to follow much the same script as now. The two-year extension may therefore end up playing into the hands of the Republicans. In fact, after their recent Congressional election success, the GOP would have the opportunity to use the same agenda to win two elections.
This debate surrounding the tax cuts extension draws attention to one of the most challenging public policy issues - tax cuts are rarely ever rolled back. Temporary tax cuts, especially of direct taxes, have a habit of becoming permanent. Not only are the Bush tax cuts likely to become permanent, the newer tax cuts too will become difficult to roll back after the two-year period. As Mark Thoma says, this assumes greater significance with the payroll tax, which is critical to sustaining the already bleeding Social Security. He suggests that instead of a payroll tax cut, the administration could have proposed a 2% rebate on payroll taxes, to be paid out of general revenues.
Update 1 (26/6/2011)
Robert H Frank points to the high unemployment rate (14 million unemployed, 9 million part-timers looking for full-time jobs, 28 million in jobs they would have quit under normal conditions, and 2.2 million who have dropped out of the labor force) and advocates a temporary payroll tax cut for both employees and employers. The payroll tax was originally meant to pay for Social Security, and in recent years, employees and employers have each contributed 6.2 percent of total salary - with no additional levies on salaries beyond $106,800. Last December, Congress approved cutting the employee's contribution to the payroll tax to 4.2 percent of salary for the 2011 calendar year.
He argues that a tax holiday for employees will boost disposable incomes and spending (estimated to lower the unemployment rate by one percentage point by end-2012), and that a holiday for employers on new hiring could generate at least 5 million new hires.
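The household-level arithmetic of the cut is simple. Here is a minimal sketch using the parameters cited above (the 6.2% to 4.2% employee rate cut and the $106,800 cap); the salary levels are illustrative, and this is not official tax logic:

```python
# Back-of-the-envelope payroll tax saving from the December 2010 cut,
# using the parameters cited above: employee rate cut from 6.2% to 4.2%
# on salary up to the $106,800 cap. Salary levels are illustrative only.

CAP = 106_800
OLD_RATE, NEW_RATE = 0.062, 0.042

def payroll_tax_saving(salary):
    """Extra annual take-home pay from the 2-point rate cut."""
    taxable = min(salary, CAP)  # no levy beyond the cap
    return taxable * (OLD_RATE - NEW_RATE)

for salary in (30_000, 50_000, 106_800, 150_000):
    saving = payroll_tax_saving(salary)
    print(f"Salary ${salary:>7,}: extra take-home ${saving:>6,.0f}/yr")
# A $50,000 earner keeps an extra $1,000 a year; the saving tops out
# at $2,136 once salary reaches the cap.
```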
Tuesday, December 7, 2010
Limits of technology - the case studies with education
In the hype surrounding rapidly proliferating information and communications technologies, it is easy to get carried away by technology-based silver-bullet solutions to development and governance issues. In this context, Kentaro Toyama introduces a dose of much-needed realism by cautioning against the excessive optimism that these technologies can work like a magic wand to eliminate poverty.
He writes that technologies are merely instruments to be deployed by the human beings involved in implementing development programs. He describes them as a "magnifier of human intent and capacity", and writes (and I quote extensively),
"But as we conducted research projects in multiple domains (education, microfinance, agriculture, health care) and with various technologies (PCs, mobile phones, custom-designed electronics), a pattern, having little to do with the technologies themselves, emerged. In every one of our projects, a technology’s effects were wholly dependent on the intention and capacity of the people handling it. The success of PC projects in schools hinged on supportive administrators and dedicated teachers. Microcredit processes with mobile phones worked because of effective microfinance organizations. Teaching farming practices through video required capable agriculture-extension officers and devoted nonprofit staff...
technology—no matter how well designed—is only a magnifier of human intent and capacity. It is not a substitute. If you have a foundation of competent, well-intentioned people, then the appropriate technology can amplify their capacity and lead to amazing achievements. But, in circumstances with negative human intent, as in the case of corrupt government bureaucrats, or minimal capacity, as in the case of people who have been denied a basic education, no amount of technology will turn things around.
Technology is a magnifier in that its impact is multiplicative, not additive, with regard to social change. In the developed world, there is a tendency to see the Internet and other technologies as necessarily additive, inherent contributors of positive value. But their beneficial contributions are contingent on an absorptive capacity among users that is often missing in the developing world. Technology has positive effects only to the extent that people are willing and able to use it positively. The challenge of international development is that, whatever the potential of poor communities, well-intentioned capability is in scarce supply and technology cannot make up for its deficiency."
Ironically, technology as a magnifier of human capability also produces exactly the opposite outcome when capability and intent are absent or very weak. Echoing the digital-divide voices, he writes,
"The greater one’s capacity, the more technology delivers; the lesser one’s capacity, the less value technology has. In effect, technology helps the rich get richer while doing little for the incomes of the poor, thus widening the gaps between haves and have-nots."
There are two recent stories about the use of technology in education that are classic examples of overlooking the wise words of Prof Toyama. First, the NYT reported on the use of cameras to videotape classroom instruction and use the footage both to remotely assess teachers and to help them improve. Supported by the Bill and Melinda Gates Foundation, a number of American school districts are videotaping classes and having independent assessors score teacher performance remotely. The teachers will also be provided feedback and training to improve the quality of instruction.
In a Mint op-ed, Harvard Professor Tarun Khanna pointed to the success of the South Korean online learning website Megastudy in improving education quality. Megastudy's core idea is that good teachers are videotaped, and others can then pay to subscribe to their lectures via online access to the videos. Prof Khanna reasons that the incentive structure in Megastudy's model (teachers get a share of the revenues) encourages the good teachers, while "the underperforming ones understand how they must improve". Further, it also enables access for all children to the best available teachers.
Both are typical examples of technology-driven euphoria glossing over the real-world challenges of replicating such models, especially if the objective is to use these technologies to improve the quality of instruction and increase student access to the most powerful learning resources in developing-country environments.
The videotape model of assessing teachers is simply too fanciful to even cross the first stage of a laboratory trial. While theoretically attractive, it becomes simply impractical when viewed through the lens of implementation at scale. The exorbitant installation and maintenance cost by itself should be enough to junk such ideas (at $1.5 million for a district with 140 schools, it would cost nearly $12 bn to install in 1.1 million Indian schools).
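The extrapolation in the parenthesis works out as follows (a rough linear scaling that takes the cited district figure at face value):

```python
# Rough scaling of the classroom-camera cost cited above:
# $1.5 million for a 140-school district, extrapolated linearly
# to the roughly 1.1 million schools in India.

district_cost = 1.5e6      # dollars, cited district figure
district_schools = 140
indian_schools = 1.1e6

cost_per_school = district_cost / district_schools   # ~ $10,700
total_cost = cost_per_school * indian_schools
print(f"Per school: ${cost_per_school:,.0f}")
print(f"All-India installation: ${total_cost / 1e9:.1f} billion")
# ~ $11.8 bn for installation alone, before any maintenance,
# bandwidth, or assessor costs are counted.
```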
Even overlooking the cost and the very relevant questions about whether a videotape can capture all the different dimensions of classroom instruction, there remain issues of implementation. Who will watch the videotapes? How do we ensure the quality of their assessments? How can the subjective opinions of thousands of assessors be standardized? How do we administer the teacher feedback mechanism? We could easily end up substituting an ineffectual assessment system run by principals or supervisors with an equally inefficient system involving third-party assessors sitting at remote locations!
And then there are the real-world issues. Any such technology and approach to assessment is vulnerable to being subverted. Cameras will develop problems, and some will be made to develop problems and fail to record! And even when they do record, some of them will certainly end up recording something else (or recording without sound)! The possibilities are too numerous to be addressed to any level of satisfaction.
The Megastudy model, too, offers little towards improving learning outcomes and teacher performance in any meaningful way. Given the massive scale involved (1.1 million schools and nearly 200 million students in India), such websites will always have marginal reach. The small sliver of students who can afford them will benefit and widen their learning gap with the rest. The good teachers (and here too, chances are that very few will come from government schools) would of course increase their bank balances.
In fact, online learning websites, admittedly not exactly based on the Megastudy model, are already widely available in India. The sheer size of the education market in countries like India means that firms like Educomp (which offer such services) can expand rapidly for a long time to come by concentrating on just the top 5-10% of the market. It would be decades before such models start penetrating the mass market of government schools. In any case, why pay for Megastudy material when the Khan Academy is free?
The businessman in me (and I am not in that profession!) would be excited by the commercial possibilities of videotaping and online distribution of classroom content. The public official in me (and I am one!), however, sees little possibility of it scaling.
Monday, December 6, 2010
The effects of long-term unemployment
Does the duration of unemployment adversely affect the job-market prospects of those who exit the workforce (voluntarily or by lay-off)? A recent post in the Economix examined the evidence and answered conclusively in the affirmative. This assumes greater significance given the indications in the November US jobs report that unemployment rates may remain high for a long period.
There is strong evidence from history and from the Great Recession that "the longer people stay out of work, the more trouble they have finding new work". After examining the job-finding data (for unemployed people) from the Bureau of Labor Statistics from January 1976 to October 2007, University of Chicago’s Robert Shimer "found that 51 percent of workers who had been unemployed for one week obtained work in the following month, but the share declined sharply after that".
Statistics from the recent recession point to much the same trend.
In other words, even cyclical unemployment has the potential to become structural (through skills mismatch, etc.). This can be attributed broadly to two reasons. One, the "better workers are more likely to get hired faster, leaving a pool of less qualified workers as the ones who disproportionately make it to long-term unemployment in the first place". Second, "the experience of unemployment itself also seems to damage workers’ prospects". Disentangling their relative contributions is not easy.
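The first (selection) channel is easy to illustrate: even if no individual worker's prospects ever deteriorate, a mix of more- and less-employable workers will mechanically show a falling aggregate job-finding rate by duration, because the more employable exit first. A toy simulation with made-up monthly exit probabilities (purely illustrative numbers, not Shimer's BLS estimates) makes the point:

```python
import random

random.seed(42)

# Hypothetical monthly job-finding probabilities for two fixed worker types.
# Illustrative only - not estimates from the data discussed above.
P_FIND = {"high": 0.5, "low": 0.1}

# Start with an equal mix of both types in the unemployment pool.
unemployed = ["high"] * 5000 + ["low"] * 5000

for month in range(1, 7):
    still_out, found = [], 0
    for worker_type in unemployed:
        if random.random() < P_FIND[worker_type]:
            found += 1                     # this worker exits unemployment
        else:
            still_out.append(worker_type)  # stays in the pool next month
    print(f"Month {month}: aggregate job-finding rate "
          f"{found / len(unemployed):.0%}")
    unemployed = still_out
```

No individual's odds change here, yet the measured exit rate falls from about 30% towards 10% as the pool becomes dominated by "low" types - which is exactly why separating selection from genuine scarring is so hard in the data.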
The long-term unemployed face formidable obstacles in the job market. Employers (who are in any case more discriminating in their choices during recessions) will read long-term unemployment as a "signal that something is defective" and treat such workers as job-market "lemons". There is also the likelihood that the long spell out of work has eroded their work habits and even their skills (especially if they were in fast-changing industries). It can also take a psychological toll - lower self-respect, depression, etc. All of this in turn reduces the person's productivity.
The policy responses to address this are mainly three-fold. One, direct employment generation by way of government spending programs. Two, incentives to promote job-creating investments by the private sector. Three, re-training and skill-upgradation programs for the long-term unemployed. The relative impact of each is debatable, though the Europeans, Germans in particular, had considerable success with their short-time work ("Kurzarbeit") scheme that kept people from being laid off.
In any case, the costs of not doing anything in the face of persistent high unemployment rates may have deep and bitter long-term labor market consequences for the US economy.
Update 1 (8/5/2011)
The Economix reports that though older workers are much less likely to be unemployed than their younger counterparts (the unemployment rate for people over age 65 is about 6.5%), once out of work they are more likely to stay out. If older workers do lose their jobs, their chances of finding another are extraordinarily low. Here’s a look at the average duration of unemployment (on a 12-month moving average), broken down by the age of the unemployed.
Two facts partially explain this trend: younger workers are more likely to go back to college and re-skill after being laid off, and older workers are more likely to have been laid off from industries in structural decline, like manufacturing and newspapers.
Update 2 (20/2/2012)
Chris Dillow has a nice post on the long-term impact of youth unemployment. He points to a study that "found that men who had been unemployed for more than six months before the age of 23 earned an average of 7% less than others even at the age of 42; this controls for educational qualifications". Early unemployment thus raises the probability of being unemployed in later years and carries a lasting wage penalty.
Another study, by Ulrike Malmendier and Stefan Nagel, finds that "economic events experienced over the course of one’s life have a more significant impact on individuals’ risk taking than historical facts learned from summary information in books and other sources." So, for example, people who saw bad equity returns in their youth own fewer equities than others even decades later.