Friday, April 30, 2010

Article on RTE

Here is my article in the May issue of Pragati on the possible consequences of two important proposals in the Right to Education Act - reserving 25% seats in private schools for children from under-privileged backgrounds and the possibility of large recruitments in government schools to meet the 30:1 student-teacher ratio.

The inflation index debate - core and headline inflation

Less than a year ago, we were talking in alarming tones about a deflationary spiral in India; today, in a dramatic reversal, opinion makers are fretting about inflation getting out of control. While it is undeniable that dark clouds of inflation loom on the horizon, the debate would gain considerably in clarity from a reformulation of its terms. In this context, Mostly Economics has an excellent explanation of why policy makers should also follow the core inflation rate.

One of the biggest challenges in clarifying the debate on price changes remains the construction of an inflation index that is the least distortionary measure for formulating monetary policy.

In an excellent post, Mark Thoma draws the distinction between today's inflation rate (or headline rate) and the long-run trend rate of inflation (or core inflation). Though the former tells people the rate at which prices are rising now, it plays a limited role in forming inflation expectations. The latter, in contrast, carries considerable significance for businesses (in their investment decisions), households (in their spending decisions), and policymakers (in monetary policy decisions), in so far as it informs them of the economic prospects for the future based on current trends. Central bankers in particular find the core inflation rate a more accurate predictor of the future inflation rate.

Mark Thoma points to both empirical and theoretical ways to define such an index. The empirical definition favors a measure that best predicts future inflation (and therefore future interest rates) and best represents the inflation rate faced by a typical consumer. The theoretical definition favors a measure that can be deployed in the dominant paradigm of sticky-price macroeconomic models. On both counts, the core inflation rate, which excludes food and energy prices, comes out as the preferred measure over the all-encompassing headline rate.

Empirical analysis of inflation data across countries points to the fact that prices excluding food and energy (or core prices) are the most accurate measure of the long-run trend rate of inflation. In other words, core inflation has been found to be a more accurate forecast of future inflation (pdf here) than the headline inflation rate. Further, excluding food and energy prices strips out the transitory components and provides a better estimate of the long-run inflation trend.

At a theoretical level, a core inflation rate target, being relatively sticky (compared to food and energy prices), also best stabilizes the volatility in output, consumption and employment. As Mark Thoma writes,

"In theoretical models used to study monetary policy, the procedure for setting the policy rule is to find the monetary policy rule that maximizes household welfare (by minimizing variation in variables such as output, consumption, and employment). The rule will vary by model, but it usually involves a measure of output and a measure of prices, i.e. generally a Taylor rule type framework comes out of this process (a rule that links the federal funds rate to measures of output and prices).

However, in the Taylor rule, the best measure of prices to target is usually something that looks like a core measure of inflation. Essentially, when prices are sticky, which is the most common assumption in modern theoretical models, it’s best to target an index that gives most of the weight to the stickiest prices. That is, volatile prices such as food and energy are essentially tossed out of the index."
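The Taylor-rule framework Thoma refers to can be made concrete with a small numerical sketch. The coefficients below are the classic Taylor (1993) values (0.5 on each gap), with an assumed 2% equilibrium real rate and 2% inflation target; feeding the rule a core rather than headline inflation reading is exactly the modeling choice described above:

```python
def taylor_rule(core_inflation, output_gap,
                r_star=2.0, pi_target=2.0):
    """Classic Taylor (1993) rule: the nominal policy rate as a
    function of inflation (here, the core rate) and the output gap.
    All arguments and the result are in percentage points."""
    return (r_star + core_inflation
            + 0.5 * (core_inflation - pi_target)
            + 0.5 * output_gap)

# Core inflation at target, economy at potential -> neutral rate
print(taylor_rule(core_inflation=2.0, output_gap=0.0))  # 4.0
# Core inflation 1 point above target -> rate rises more than 1:1
print(taylor_rule(core_inflation=3.0, output_gap=0.0))  # 5.5
```

Because the total response to inflation exceeds one (the "Taylor principle"), a one-point rise in core inflation raises the prescribed nominal rate by one and a half points, tightening policy in real terms.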


And to the extent that all policy making is based on some model, he offers further justification for using a core inflation rate:

"In models with price and wage sluggishness (and all models make this assumption), it is the failure of prices to move quickly to clear markets that causes output to deviate from target. Thus, monetary policy makers need not be concerned with highly flexible prices, it is the sluggish prices that are the problem. The solution is to keep the problem (sluggish) prices as predictable as possible so that even if prices are set far in advance, they will remain optimal.

To do this, the sluggish prices must be stabilized - the flexible prices can take care of themselves. For policymakers, this implies that highly flexible prices such as food and energy can be removed from the index to isolate and highlight the problematic sluggish prices. The goal is not to find the index that best represents the cost of living, rather, the goal is to learn about current and expected future values of the index most useful for stabilization."


Apart from core inflation, which strips the most volatile categories - food and energy - out of the index permanently, alternative methods like the trimmed-mean measure used by the Federal Reserve Bank of Dallas in the US strip only the most volatile items out of the index each month.
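The difference between the two exclusion methods can be sketched in a few lines. All category names, weights, and price changes below are invented for the illustration; they are not actual WPI or CPI figures:

```python
# Illustrative monthly price changes (%) by category with index
# weights -- made-up numbers, not actual WPI/CPI weights.
categories = {             # name: (weight, monthly % change)
    "food":     (0.25, 0.5),
    "energy":   (0.10, -1.8),
    "housing":  (0.30, 3.0),
    "apparel":  (0.10, 0.2),
    "services": (0.25, 0.4),
}

def headline(cats):
    # Weighted average over every category.
    return sum(w * chg for w, chg in cats.values())

def core(cats, excluded=("food", "energy")):
    # Permanently drop a fixed set of categories, then reweight.
    kept = {k: v for k, v in cats.items() if k not in excluded}
    total_w = sum(w for w, _ in kept.values())
    return sum(w * chg for w, chg in kept.values()) / total_w

def trimmed_mean(cats, trim=1):
    # Drop the `trim` highest and lowest changes *this month*,
    # then reweight -- the excluded set varies month to month.
    ordered = sorted(cats.values(), key=lambda wc: wc[1])
    kept = ordered[trim:len(ordered) - trim]
    total_w = sum(w for w, _ in kept)
    return sum(w * chg for w, chg in kept) / total_w

print(headline(categories))      # includes all five categories
print(core(categories))          # keeps this month's housing spike
print(trimmed_mean(categories))  # drops energy and housing this month
```

In this example core inflation retains the month's housing spike, because it only ever excludes food and energy; the trimmed mean instead drops whichever categories were most extreme that month.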

Unlike India, most other countries use the CPI-based core inflation rate, which is calculated once a month, to guide policymaking for both their central banks and governments. India, in contrast, relies on a weekly, WPI-based inflation rate.

However, the headline rate's utility is limited to the present and contains little information about the future. Its value lies in activating the automatic social safety nets and other support measures that cushion those most vulnerable to food and energy price increases. It can also be argued that, in any case, if headline inflation is rising on a sustained basis, the increase gets automatically transmitted into the core inflation figures.

In India, the year-on-year, point-to-point WPI-based inflation figures are released every week as a measure of broad headline inflation at any particular time. The index assigns a weight of 22.02% to primary articles (foodgrains etc) and 14.23% to fuel, power, light and lubricants. The volatility of this inflation figure is evident from the fact that it rose from 1.5% in October 2009 to 9.9% in March 2010, on the back of steep increases in food prices.

Such sharp variations in the WPI figures arise from the fact that the index measures the price change between just two points - now and the same time last year - and in the process overlooks everything that happened in between. It is, in effect, a simple point-to-point comparison of prices a year apart.

Further, such a two-point comparison is also skewed by the base effect. This time last year, much of the debate was about avoiding deflation (inflation slipped into negative territory in June) and even a liquidity trap. Prices were therefore depressed, and it was only natural that they return to their trend rates. That return shows up as a more pronounced change in prices over the year, and so manifests as a higher inflation rate. All this makes a strong case for abandoning the current point-to-point calculations and embracing an average measure that more accurately reflects the progression of prices throughout the year.
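A stylized example makes the base effect concrete. The index numbers below are invented: last year's index dips mid-year (the deflation scare) while this year's simply grows steadily, yet the June-on-June comparison reports a much higher inflation rate than a twelve-month average of the index does:

```python
# Stylized monthly WPI index levels (illustrative numbers only).
# Last year the index dipped mid-year (the deflation scare);
# this year it simply grows steadily along trend.
last_year = [100, 100, 99, 98, 97, 97, 98, 99, 100, 101, 102, 103]
this_year = [104, 104.5, 105, 105.5, 106, 106.5,
             107, 107.5, 108, 108.5, 109, 109.5]

def point_to_point(now, then):
    """Year-on-year inflation between two single months."""
    return 100 * (now / then - 1)

def average_inflation(cur, prev):
    """Change in the average index level over the whole year."""
    avg = lambda xs: sum(xs) / len(xs)
    return 100 * (avg(cur) / avg(prev) - 1)

# June-on-June comparison hits the depressed base (index 97):
print(round(point_to_point(this_year[5], last_year[5]), 1))   # → 9.8
# Averaging over all twelve months damps the base effect:
print(round(average_inflation(this_year, last_year), 1))      # → 7.3
```

The two-point measure reports nearly 10% inflation purely because last June's base was depressed, while the averaged measure spreads the comparison over the whole year.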

Since India does not have a reliable CPI-based non-food-and-energy inflation index, the closest measure of core inflation comes from the changes in non-food manufacturing prices. Removing primary articles, fuel products, and manufactured food products, the weight of non-food manufactures in the WPI-based headline inflation index is about 52%.
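That residual weight can be checked arithmetically from the weights cited above. The post does not give the weight of manufactured food products; the 11.54% used below is an assumed figure for the sketch:

```python
# WPI weights, %. Primary articles and fuel are from the post;
# the manufactured-food weight is an assumption for the sketch.
primary_articles  = 22.02
fuel_power_light  = 14.23
manufactured_food = 11.54   # assumed figure

non_food_manufactures = (100 - primary_articles
                         - fuel_power_light - manufactured_food)
print(round(non_food_manufactures, 2))  # → 52.21, i.e. about 52%
```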



Since November 2009, when it stood at negative 0.4%, India's core inflation rate has risen sharply, to 4.2% in February 2010 and 4.7% in March.

In view of its volatility and supply-side origins, there is very little that policy makers can do to address headline inflation trends. In contrast, the core inflation will give RBI and policy makers in New Delhi the most reliable information about the future trajectory of prices.

The problem with core inflation is that during protracted increases in prices it becomes a misleading indicator that lags headline inflation, and a monetary policy that follows it risks being "behind the curve". Further, steep and sudden variations in the other components, most notably house rents, can skew even the core inflation rate.

Update 1 (23/5/2010)
Paul Krugman sees core inflation as a measure of "inflation inertia".

Wednesday, April 28, 2010

The Tianjin Eco-city and prospects for satellite townships

The Times reports on a collaborative effort by China and Singapore to jointly develop the greenfield Tianjin Eco-city as a socially harmonious, environmentally friendly and resource-conserving city in China. The Eco-city, located 40 km from the Tianjin city centre and 150 km from Beijing, is to be developed on 30 sq km of non-arable, salt-pan land at an estimated cost of $22 bn and will be home to about 350,000 people.

The project, started in 2008 and expected to be completed in 10-15 years, could, whether it succeeds or fails, provide valuable lessons about the replicability and scalability of such experiments across the world. Given the importance of cities as the economic growth engines of economies, the Eco-city will invariably be the reference point for similar projects to develop satellite townships across many countries, including India.

The City will have extensive green spaces and public recreational facilities; draw a significant part of its water supply from non-traditional sources such as desalinated water; have integrated waste management practices with emphasis on the reduction, reuse and recycling of waste; have good environment friendly public transport; have subsidized public housing to accommodate the lower and lower-middle income strata of society; and will be barrier-free to cater to the needs of the elderly and the mobility-impaired.

The City is being developed by the Sino-Singapore Tianjin Eco-City Investment and Development, a 50-50 joint venture company between a Singapore consortium led by the Keppel Group and a Chinese consortium led by Tianjin TEDA Investment Holdings, a state-owned enterprise based in Tianjin.

Without getting into the numerous problems associated with such projects, or other judgments about them, here are a few observations:

1. Such greenfield projects involve massive investments, and most often they will bleed their promoters for many years. In the circumstances, considerable government investment, especially in infrastructure, is all but inevitable for such projects to take off. In fact, up-front government investment will have to become the pivot around which private investment slowly coalesces.

2. It may be easier to develop the city around some specific anchor economic activities, especially knowledge-based sectors like say IT, biotechnology, or finance. These activities, which benefit from network effects, will also promote agglomeration economies and residential congregation in the areas surrounding the work-place. Such cities are likely to be post-modern cities.

3. In so far as anchor activities have the potential to catalyse the development of cities, the city can more easily develop around an already established commercial/industrial area.

4. All such cities should have excellent connectivity, both transport (air and land) and communications. Investments required to develop such infrastructure will inevitably have to come from governments.

5. Till the city attains a critical mass of economic activity, it may be necessary to offer generous subsidies, often verging on handouts (land at concessional rates, tax concessions, preferential treatment for utility service supply and its tariffs etc). Therefore the initial investors are likely to be beneficiaries of the "first-mover-advantage".

6. The infrastructure required for the development of such cities will have to go beyond roads, water, sewerage and public transport to include hospitals, schools, recreation facilities, various service providers, housing stock catering to people with different economic backgrounds etc. Attracting these services too will require generous support from the government.

7. Given the post-modern nature of such cities, an important contributor to their success in attracting the required mass of investments and economic activity will lie in the effectiveness of marketing and branding. Such cities will have to appeal to the lifestyle senses of potential migrants, both businesses and individuals.

8. All these will require a broad enabling policy framework, whose benign presence will facilitate the aforementioned activities. Agencies like the general purpose Singapore Cooperation Enterprise or the more sector-specific Singapore Urban Redevelopment Authority International can play an important role in the development of the Master Plans and the broad policy regimes.

In the final analysis, such greenfield cities can develop only if they offer greater attractions for businesses (up and downstream linkages, lower cost of operations, superior quality of infrastructure etc) and employees (better standard of living, advantage of network effects, closeness to work-place etc) than those offered by the neighbouring existing city.

Update 1 (25/8/2010)

The Egyptian government is promoting two megacities - 6 October City, due west of Cairo, and New Cairo, due east - each about 20 miles from Cairo, in an ambitious effort to decongest the capital and meet its expanding needs. By 2020, planners expect the new satellite cities to house at least a quarter of Cairo’s 20 million residents and many of the government agencies that now have headquarters in the city.

The Times writes, "Only a country with a seemingly endless supply of open desert land — and an authoritarian government free to ignore public opinion — could contemplate such a gargantuan undertaking. The government already has moved a few thousand of the city’s poorest residents against their will from illegal slums in central Cairo to housing projects on the periphery."

Tuesday, April 27, 2010

Financial incentives-based educational reforms

In recent years, a growing body of research has explored incentive-based education reform and its impact on student motivation. However, there is considerable debate about whether incentives for inputs or for outputs are more effective in increasing student achievement.

A recent NBER working paper by Roland Fryer describes school-based randomized trials in 261 urban public schools in Chicago, Dallas, New York City, and Washington DC that distributed financial incentives worth $6.3 mn to roughly 38,000 students, to test the impact of financial incentives for various inputs and outputs on student achievement. It finds that

"In stark contrast to simple economic models, our results suggest that student incentives increase achievement when the rewards are given for inputs to the educational production function, but incentives tied to output are not effective. Relative to popular education reforms of the past few decades, student incentives based on inputs produce similar gains in achievement at lower costs. Qualitative data suggest that incentives for inputs may be more effective because students do not know the educational production function, and thus have little clue how to turn their excitement about rewards into achievement."


The experiments varied from city to city on several dimensions - what was rewarded, how often students were given incentives, the grade levels that participated, and the magnitude of the rewards. The key feature of each experiment was monetary payment to students (directly deposited into bank accounts opened for each student, or paid by check to the student) for performance in school according to a simple incentive scheme. In all cities except Dallas, where students were paid three times a year, payments were disbursed to students within days of verifying their achievement. Students were paid for inputs like reading a book (or lesson), attendance, good behavior, wearing their uniforms, and turning in their homework.

Prof Fryer finds that providing incentives for achievement test scores has no effect on any form of achievement measured; there is scant evidence that total effort increased in response to the programs, though there may be substitution between tasks; there is also no evidence that incentives decrease intrinsic motivation; and the input experiments seem positively associated with motivation while the output experiments seem negatively associated. However, the point estimates are too small, and the standard errors too large, to conclude that financial incentives are a game-changer in educational reform. He concludes

"Relative to achievement-increasing education reform in the past few decades – Head Start, lowering class size, bonuses for effective teachers to teach in high need schools – student incentives for certain inputs provide similar results at lower cost. Yet, incentives alone, like these other reforms, are not powerful enough to close the achievement gap."


About the theory that students do not understand the educational production function and, thus, lack the know-how to translate their excitement about the incentive structure into measurable output, he writes,

"Students who were paid to read books, attend class, or behave well did not need to know how the vector of potential inputs relates to output, they simply needed to know how to read, make it to class, or sit still long enough to collect their short-term incentive...

There are three pieces of evidence that support this theory. First, evidence from our qualitative team found consistent narratives suggesting that the typical student was elated by the incentive but did not know how to turn that excitement into achievement. Second, focus groups in Chicago confirmed this result; students had only a vague idea how to increase their grades. Third, there is evidence to suggest that some students – especially those who are in the bottom of the performance distribution – do not understand the production function well enough to properly assess their own performance, let alone know how to improve it.

Three other theories are also consistent with the experimental data. It is plausible that students know the production function, but that they lack self-control or have other behavioral tendencies that prevent them from planning ahead and taking the intermediate steps necessary to increase the likelihood of a high test score in the future. A second competing theory is that the educational production function is very noisy and students are sufficiently risk averse to make the investment not worthwhile. A final theory that fits our set of facts is one in which complementary inputs (effective parents, e.g.) are responsible for the differences across experiments."

Sunday, April 25, 2010

Status-based taxation in local government taxes

Mark Thoma points to an excellent interview of Raj Chetty by Romesh Vaitilingam, where the economist talks about the optimal design of tax policies and public expenditure programmes.

He argues in favor of maintaining and even extending the duration of unemployment insurance benefits in the US. His studies on unemployment benefits indicate that the moral hazard effect (whereby generous European-style indefinite unemployment benefit schemes disincentivize people from looking for jobs) is offset by the "liquidity effect" (poor people have little savings, and being liquidity constrained they are forced to accept any job that comes their way, whereas with some limited benefits they can take the time to search for jobs more efficiently). Ivan Werning finds much the same evidence and does not favor stringent limits on the duration of benefits, on the grounds that those with the longest unemployment spells are those with the largest losses from foregone earnings.

On taxation policy he finds fairly strong and conclusive evidence that the salience of a tax increases demand elasticity. This also means that, contrary to the claim that prices and tax incidence are indifferent to whether the tax is imposed on the seller or the buyer, there is strong evidence that imposing the tax on buyers may minimize the impact on demand. In other words, taxes on consumers mostly get passed through without making much of a dent in demand.

He also makes an interesting proposal about status-based taxation: leveraging the premium attached to individual status in traditional societies to maximize tax collections. Prof Chetty therefore proposes offering the attraction of naming public assets as an incentive for tax compliance, especially with local government taxes, in developing countries like India. However, I am not convinced, for the following reasons

1. Local property tax rates are very small (a maximum of 1% of capital value or 20% of annual rental value of the house, both of which are in turn heavily under-valued). In fact, property taxes are collectively estimated at just 0.2% of GDP. This means there are unlikely to be any large enough tax assessees even in the bigger Panchayats. Even among the handful of large houses/shops, the tax revenues are unlikely to be large enough to cover any substantial asset, or even a part of one (say a room in a school building), even counting their assessments over a number of years.

In the circumstances, if the panchayat allocates naming rights for an asset merely to cover a share of its construction cost, it would be tantamount to selling off the village's few critical community assets very cheap - like divesting public enterprises below their valuations in a depressed equity market, or selling at the lower end of the value chain. The local government risks the considerable "seller’s curse" of having permanently sold a part of its own identity for far too low a price in an under-developed market.

2. There are not many community assets (existing or proposed) in any village that can be so named. The school building, PHC building, and community hall are the most attractive assets for naming. Street roads too can be taken up, especially those where the assessee resides. However, the cost of paving a street with cement concrete (about Rs 20 lakhs for a 100 m street) is too large to be covered with property tax. Even a small single-room school building will cost Rs 3-4 lakhs, while the largest property tax assessment in a village will not be more than a few thousand rupees, rarely more than a lakh.

3. Typically, even small efforts can improve the tax collection efficiency of local bodies dramatically. Presently they have neither the capacity (most gram panchayats have just one bill collector to assess, maintain records, collect, and follow up with violators) nor the incentives (property tax collections, at least in the case of small Panchayats, are a very small share of their inflows, and salaries and major works are financed with government grants) to collect or enforce the existing tax rules. So simple efforts at tax collection and capacity building, through personnel and maintenance of records, will yield substantial results. The low-hanging fruits are too many to be plucked.

An excellent comparison can be made with the dramatic improvements in the tax governance of Indian cities over the last two decades, as collection and enforcement machinery have been strengthened and records updated. Further, unlike the Panchayats, cities have to finance salaries, repairs and maintenance, and most of their infrastructure spending from internal funds.

4. There is also the problem of recidivism as the assessees can slip back into their old habits and evade/avoid payment of taxes once the asset is named.

5. In any case, the competing needs of panchayats are numerous and their resources too scarce for the program to be meaningfully implemented. The status-tax proposal can take off only if the asset the assessee wants to name is among those covered by the Panchayat’s priorities. This coincidence of interests is likely to be very rare. The typical Panchayat’s priorities among capital assets (which are in turn decided by the amounts available) are most likely to be street roads and drains; schools and hospitals are generally covered by grants under various other schemes.

A more efficient way to capture the willingness to pay among the large assessees would be a formal policy that solicits donations from people in return for naming these assets. This would enable more efficient price discovery, especially for the more prized assets (schools, community halls, bus shelters etc), by formally allocating the naming rights to the highest bidder. A floor price, amounting to the actual cost of construction of the building (or say 75% of it), can be fixed in the rules of the bidding process. This would be a tax-plus scenario! I am inclined to believe that this policy would have a number of takers, especially in small villages, where the premium attached to such symbols is substantial.
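The allocation rule sketched above - award the naming rights to the highest bidder, subject to a floor of, say, 75% of construction cost - is simple to state precisely. The bidder names and amounts below are hypothetical:

```python
def allocate_naming_rights(asset_cost, bids, floor_fraction=0.75):
    """Award naming rights to the highest bidder at or above the
    floor price; return None if no bid clears the floor.
    `bids` maps bidder name -> bid amount (same units as cost)."""
    floor = floor_fraction * asset_cost
    eligible = {b: amt for b, amt in bids.items() if amt >= floor}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

# Hypothetical example: a Rs 4 lakh school building, floor Rs 3 lakh.
bids = {"assessee_A": 2.5, "assessee_B": 3.2, "assessee_C": 3.6}  # Rs lakh
print(allocate_naming_rights(4.0, bids))  # → assessee_C
```

The floor protects the panchayat against the "seller's curse" of giving away the asset too cheap, while the auction lets the village's status premium set the final price.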

In the final analysis, the problem with local government (both panchayat and municipal) taxation in India is that, apart from the meagre property taxes and the assigned revenues from stamp duty and entertainment taxes, local bodies get no share of the major tax sources – commercial taxes, excise taxes, income tax and corporate taxes. Further, local government tax rates are so small that even if the taxes were fully collected, they would hardly be enough to make a serious dent in their huge needs.

Update 1 (22/6/2010)
Nice post by Nancy Folbre on the debate surrounding incentives and unemployment benefits.

Saturday, April 24, 2010

Nudging on electricity use optimization gone wrong

Sometime back, I had blogged, here and here, about a unique experiment by the Sacramento electricity utility that sought to "nudge" its consumers into reducing their electricity consumption by sending them a Home Energy Report (HER) carrying information about the consumption patterns of their neighbours. It was found that households which consumed more than their neighbours' average reduced their electricity use (albeit by a small percentage) in response to the HER sent alongside the monthly bill.

But more troublingly, Ray Fisman points to a recent study by Matthew Kahn and Dora Costa, who find that the effectiveness of such nudges also depends on the political leanings of their targets. While the program succeeded in encouraging high-consuming (higher than their neighbours' average) Democrats and environmentalists to lower their consumption, it actually led to increased consumption among the relatively low-consuming Republicans.

Kahn and Costa write, "Our regression estimates predict that a Democratic household that pays for electricity from renewable sources, that donates to environmental groups, and that lives in a liberal neighborhood reduces its consumption by 3 percent in response to this nudge. A Republican household that does not pay for electricity from renewable sources and that does not donate to environmental groups increases its consumption by 1 percent."

Conservatives have always opposed such energy conservation measures and their dissemination through paternalistic methods. The information that they were using less electricity than their neighbours may actually have encouraged the Republicans to become less vigilant about their consumption. Costa and Kahn suggest that ardently right-wing electricity customers might respond to paternalistic nudges by burning more energy, just to thumb their noses at Big Brother. They therefore suggest that nudges be tailored to the specific ideological and behavioral characteristics of the target audience.

See this website of Opower, the firm that assisted in the Sacramento experiment.

Friday, April 23, 2010

The difficulty of regulating the shadow-banking system

Regulation of the shadow banking system, with its non-exchange-traded (over-the-counter, private contracts) and customized derivative instruments, is surely at the top of the priority list of all financial market reform proposals. But as Alan Blinder writes in a WSJ op-ed, the preferred approach of standardizing derivatives and moving their trading onto organized exchanges will run up against formidable obstacles.

"The primary beneficiaries of customization are the ... five big Wall Street firms... If they can stave off standardization and exchange trading, comparison shopping will remain very difficult and profit margins will remain sky high. But if reform makes standardized, exchange-traded products dominant, competition will squeeze profit margins to the bone."


In addition to providing information about prices and volumes, exchange trading would subject derivatives to a full range of regulations, including disclosure and reporting requirements and stricter antifraud rules. And as expected, a Times op-ed writes, the proposals now under discussion in the US Senate do not appear very promising

"The bill would allow too many trades to be done off the exchanges. Regulators would be able to police them, but there would be no ongoing investor oversight. There are carve-outs for certain corporate users of derivatives and for contracts tailored to unique purposes. The bill also would allow the Treasury secretary to exempt an entire type of derivative known as foreign exchange swaps. Corporate pension funds that invest in derivatives would be subjected to less scrutiny than is required of many other investors. The financing arms of major manufacturers would also escape full scrutiny. All of that is going in the wrong direction."


The Times op-ed advocates going beyond mere regulation of derivatives and proposes an outright ban on abusive derivatives like those sold by Goldman

"The Goldman deal was nothing more than a bet on the mortgage market, in which one side was destined to win and the other to lose, without 'investing' anything in the real economy. The CDO did not hold actual mortgage-related bonds, but rather allowed the participants to stake a position on whether bonds owned by others would perform well, or tank. And that helped to further inflate the housing bubble. That is not investing. It is gambling, and it is abusive. It has no place in banks that can bring down the system if they fail... Congress should ban both gaming and abusive derivatives. That would help clarify the difference between pure speculation and true hedging. It would start to restore what has been lost in the crisis: public confidence in the integrity of financial markets."


In many respects, the final outcome on the regulation of the shadow banking system will be the acid test for the extent of influence wielded by the Wall Street firms on the Washington establishment. No other reform proposal impinges so directly and immediately on the profitability of Wall Street. Unless something dramatic happens, I am inclined to believe that any reform to bring derivatives under the umbrella of the regulatory agencies will be considerably diluted by what it excludes from its ambit.

The Goldman scandal, which involved precisely such customized derivative products (in this case, synthetic CDOs with underlying CDSs), and the public outrage it has generated offer an excellent opportunity to bell the cat on reforming the shadow-banking system.

Update 1 (13/12/2010)

Excellent NYT article on how the big banks control the derivatives trading processes, preventing the development of independent clearing houses and electronic trading platforms.

Thursday, April 22, 2010

Counter-cyclical free-market solution to cyclical unemployment?

Rising unemployment rates have been the defining characteristic of the Great Recession. However, as Jack Ewing writes, Germany has been relatively successful in keeping its labor market stable during this period.

One reason for its success was the so-called short-work scheme, "Kurzarbeit", which allows companies to cut workers' hours while the government makes up some of the lost wages. During the first quarter of 2010, 22% of all firms and 39% of manufacturers were estimated to have used Kurzarbeit.
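A stylized sketch of the wage arithmetic under such a scheme (the 60% replacement rate here is an assumption for illustration, not necessarily the statutory Kurzarbeit rate):

```python
def short_work_pay(normal_hours, worked_hours, hourly_wage, replacement_rate=0.60):
    """Worker's take-home under a short-work scheme: the employer pays for
    hours actually worked, and the government replaces a fraction of the
    lost wages. The replacement_rate is an illustrative assumption."""
    earned = worked_hours * hourly_wage
    lost = (normal_hours - worked_hours) * hourly_wage
    subsidy = replacement_rate * lost
    return earned + subsidy

# A worker cut from 40 to 24 hours a week at EUR 20/hour keeps
# 480 in earnings plus 192 in subsidy, instead of falling to 480:
pay = short_work_pay(40, 24, 20.0)
```

The point of the design is visible in the arithmetic: the firm's wage bill falls with hours, while the worker's income falls by much less, so firms hoard labor rather than fire.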

The most interesting instrument was the use of counter-cyclical work-time accounts, which permit workers to bank overtime hours during busy periods and then take paid time off when business is slow. Such provisions, enshrined in work contracts, give both employers and workers the flexibility to manage working hours over the business cycle. Are work-time accounts a free-market answer to the problem of cyclical unemployment?
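The work-time account mechanism can be sketched as a simple running balance (the numbers are illustrative):

```python
class WorkTimeAccount:
    """Counter-cyclical work-time account (illustrative sketch): overtime
    hours are banked in booms and drawn down as paid time off in slumps."""
    def __init__(self):
        self.balance = 0.0  # banked hours

    def bank_overtime(self, hours):
        """Busy period: extra hours go into the account instead of overtime pay."""
        self.balance += hours

    def draw_down(self, hours):
        """Slow period: idle hours are covered from the banked balance,
        avoiding layoffs or pay cuts. Returns the hours actually covered."""
        drawn = min(hours, self.balance)
        self.balance -= drawn
        return drawn

acct = WorkTimeAccount()
acct.bank_overtime(120)        # a busy year banks 120 extra hours
covered = acct.draw_down(80)   # a downturn draws 80 paid idle hours
```

The firm smooths its labor input over the cycle at no fiscal cost, which is what makes the instrument look like a market-based automatic stabilizer.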

Wednesday, April 21, 2010

More on macro-prudential regulation proposals

As Paul Krugman wrote recently, deposit insurance (by preventing bank runs) and bank capital requirements (by reducing the incentive for banks to take advantage of guarantees to gamble with other peoples’ money) had been the twin pillars of banking regulation for many decades now. However, the rapid emergence of non-depository banking and non-exchange traded (or over the counter) financial instruments (like derivatives), and their pervasive ("too inter-connected to fail") and dominating ("too-big-to-fail") role in the financial markets have rendered the conventional regulatory tools blunt.

It is also now well-acknowledged that the under-pricing of risk and the application of uniform and time-invariant risk ratios/weights played an important role in sustaining resource mis-allocation and amplifying the contributory factors to the asset bubbles. The major pro-cyclical risks arise from inflating asset values, which incentivize and enable financial institutions (and households) to expand both their lending and borrowing. The systemic impact of such collective bouts of "irrational exuberance" is much more than the sum of their individual impacts.

Financial market regulation proposals in the aftermath of the sub-prime crisis continue to remain focused on micro-prudential (firm specific) regulation, albeit with increased attention to the counter-cyclical dimension. Primarily this means tightening the monitoring and enforcement, and increasing the conventional capital requirements (capital adequacy ratio) and liquidity and reserve ratios. There have also been proposals to use more innovative debt instruments like contingent convertible bonds (CoCos) that automatically convert into equity when risks mount; and attempts to re-define the meaning of capital (whether it includes only common stock or covers others like preferred stock and deferred tax assets). But the macro-prudential dimension of regulation is yet to evolve into concrete proposals.

As Claudio Borio explains, macro-prudential (or systemic) regulatory frameworks have two dimensions - managing the cross-sectional risk distribution across the financial system at any given time and addressing the evolution of the aggregate risk over time. The challenge in the first dimension is to deal with common (co-related) exposures (to similar asset classes or linkages among them) across financial institutions that cause institutions to fail together.

The key issue with the second dimension is to deal with pro-cyclicality or "how system-wide risk can be amplified by interactions within the financial system as well as between the financial system and the real economy". As Borio writes, "During expansions, declining risk perceptions, rising risk tolerance, weakening financing constraints, rising leverage, higher market liquidity, booming asset prices, and growing expenditures mutually reinforce each other, potentially leading to the overextension of balance sheets. The reverse process operates more rapidly, as financial strains emerge, amplifying financial distress."

As Avinash Persaud writes, the need for macro-prudential regulation arises from the fact that "financial firms acting in an individually prudent manner may collectively create systemic problems". Further, in the busts following booms, "all financial firms responding to common, prudential, market-based risk controls would lead them all to want to sell the same assets at the same time, creating a liquidity black hole". In other words, effective micro-prudential regulation (of individual institutions) does not obviate the need to monitor and regulate emergent system-wide risk factors.

Persaud and a few others have advocated the automatic (or indexed) raising of capital adequacy requirements (for individual firms) when aggregate borrowing (or some other index of systemic riskiness) in an economy or a sector is above average so as to "put sand in the systemically dangerous spiral of rising asset prices" and thereby reduce the amplitude of the bubbles. This can be extended to include a counter-cyclical dimension to all capital and other reserve ratios, by indexing them to certain systemic risk parameters that more or less accurately reflect the emergent risk scenarios. Such automatic stabilizing instruments have been called asset based reserve requirements (ABRR).

Apart from counter-cyclically indexed ratios, Claudio Borio also advocates capital ratios, insurance premia, etc., set according to estimates of each institution's contribution to system-wide risks. This would imply tighter standards for institutions whose contribution is larger, contrasting sharply with the micro-prudential approach, which would apply common standards to all regulated institutions.

Another macro-prudential tool would be systemic rules that incentivize firms that can absorb short-term liquidity risks (because they have long-term funding/debts or larger capital cushions) to stay out of the selling frenzy in a downward spiral, instead of common prudential rules for all that only amplify the spiral.

In a Vox article, Enrico Perotti draws the distinction between aggregate risk creation (which affects financial stability), which is highly co-related with the business cycle, and systemic risk creation (which affects the larger macroeconomic stability) in credit booms. He writes that while asset risk is the natural remit of micro-financial regulators (including central banks, by way of liquidity support during a deleveraging spiral), the task of managing the systemic risk arising from panic withdrawals of short-term funding (and the resultant system-wide propagation of risks) should be assigned to "macro-prudential councils" (which would also include central bank representation). This is similar in concept to the super-regulators, like the one proposed in the last Union Budget in India, who would be tasked with monitoring systemic risks.

He argues that aggregate asset risk factors, such as co-related holdings of long-term assets, can be targeted with countercyclical capital requirements and regulation (such as prudential limits, rules on disclosure, and clearing arrangements). Similarly, systemic levies, imposed through the aforementioned macro-prudential councils, offer a policy that can tighten financial discipline without the need for a large increase in interest rates across the whole economy.

He therefore proposes a liquidity-risk levy on intermediaries relying on fragile uninsured short-term funding (to the extent of such funding) for the negative externalities they create for others (when they make fire sales to repay rapid withdrawals of funding) by such borrowings. This would also act as a "charge (on) intermediaries ex ante for the de facto insurance of uninsured liabilities, though without creating an explicit insurance promise".

Such levies would be aimed at future incentives, discouraging rapid asset growth funded by investors bearing no risk (like carry-trade strategies to invest in securities), and would also lengthen maturity transformation away from the current absurd over-reliance on overnight repo markets, thus increasing financial resilience to shocks. Further, he also feels that such "liquidity-risk charges should be scaled by bank size – to tackle the too-big-to-fail problem – and by interconnectedness – to control intermediaries which cannot be easily disentangled from others".
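As a sketch of how such a levy might be computed, with the rate and scaling factors hypothetical and chosen only to illustrate the structure Perotti describes:

```python
def liquidity_risk_levy(uninsured_short_term_funding, levy_rate,
                        size_factor=1.0, interconnectedness_factor=1.0):
    """Perotti-style levy sketch: an ex-ante charge on fragile uninsured
    short-term funding, scaled up for bank size (too-big-to-fail) and for
    interconnectedness. All rates and factors are illustrative assumptions."""
    return (levy_rate * uninsured_short_term_funding
            * size_factor * interconnectedness_factor)

# A large, highly interconnected bank running $50bn of overnight repo
# funding pays a 0.2% base charge, scaled 1.5x for size and 1.2x for
# interconnectedness:
levy = liquidity_risk_levy(50e9, 0.002,
                           size_factor=1.5, interconnectedness_factor=1.2)
```

Because the charge falls only on the uninsured short-term leg of the balance sheet, shifting to longer-maturity or insured funding reduces it to zero, which is exactly the incentive the proposal is after.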

Update 1 (18/7/2010)

The Basel III Committee has this draft working paper on counter-cyclical capital buffer. When credit is expanding faster than GDP, bank regulators slowly increase their capital requirements, signaling those requirements clearly one year in advance. The higher capital requirements serve three main purposes: they help to slow down credit bubbles, they make an economy’s banks stronger, and they offer a way out of the paradox of capital. See also this and this.
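The mechanics of such a buffer can be sketched as follows; the 2 and 10 percentage-point thresholds for the credit-to-GDP gap and the 0-2.5% buffer range follow my reading of the Basel Committee's guidance, so treat the calibration as indicative rather than definitive:

```python
def countercyclical_buffer(credit_to_gdp_gap, low=2.0, high=10.0,
                           max_buffer=0.025):
    """Map the credit-to-GDP gap (percentage points above its long-run
    trend) to an add-on capital buffer: zero below `low`, rising linearly
    to `max_buffer` at `high` and capped there. Calibration is indicative."""
    if credit_to_gdp_gap <= low:
        return 0.0
    if credit_to_gdp_gap >= high:
        return max_buffer
    return max_buffer * (credit_to_gdp_gap - low) / (high - low)

# A mid-cycle gap of 6 points implies a 1.25% add-on buffer:
mid_cycle = countercyclical_buffer(6.0)
```

The one-year advance notice mentioned above matters because banks need time to retain earnings or raise equity; the formula itself is deliberately mechanical so that the buffer cannot be lobbied away at the top of the cycle.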

Tuesday, April 20, 2010

The interaction between interest and exchange rates and inflation

Putting to rest all speculation about the extent of monetary tightening, the RBI's annual monetary policy for 2010-11 has raised the repo rate (to 5.25%), the reverse repo rate (to 3.75%) and the CRR (to 6%) by 25 basis points each. See this superb analysis of the monetary policy by Amol Agarwal and the graphic below from today's Financial Express.



This hike in repo and reverse repo rates is a clear statement of intent that the RBI is concerned with inflation and is committed to keeping inflationary expectations under control. However, the size of the hike (as against, say, a 50 basis points increase) indicates that the RBI does not yet feel the need to embark on aggressive monetary tightening, and that it remains sensitive to ensuring that monetary contraction does not stifle economic growth prospects.

The extraordinary monetary expansion to weather the contagion effects of the sub-prime crisis is now being reversed by increasing the CRR, so as to mop up the excess liquidity sloshing around. The persistence of large inflows into the Liquidity Adjustment Facility (LAF), despite the increases in the CRR, was a clear pointer to further increases in CRR. In fact, there may have been a case for raising the CRR by 50 basis points.

In this context, it is important to consider that any monetary policy approach will impact not only inflation but also the rupee exchange rate, and therefore any interest rate decision has multi-dimensional implications.

1. The WPI-based inflation figures for the year ending March 2010 show that prices rose 9.9%, high but below market expectations. Interestingly, while food inflation appears to be on the decline, manufacturing prices look set to rise on the back of a strongly rebounding economy.

The increase in non-food or core inflation has been cited as a cause for concern and invoked to justify the calls for raising interest rates. Other indicators of a fast-growing economy include rapid increases in various credit ratios, wage growth, corporate earnings, and investment demand.



However, as the graphic from Mint indicates, the worst of the food inflation may be over, and it appears to be on the way down. The good Rabi harvest and the arrival of its produce in the market will exert downward pressure on food prices. The WPI inflation index too appears to have stabilized over the past few months. In any case, increases in manufacturing inflation due to rising global oil and commodity prices or other supply-side factors cannot be addressed with monetary tightening.

Further, as the Planning Commission member Saumitra Chaudhuri said, some of the increases in prices are due to the low base effect of last year when economic growth was considerably repressed. The Planning Commission too does not believe that the inflation for 2010-11 will exceed 4-5%, especially in view of the weak economic conditions in much of developed world for the foreseeable future. Therefore, despite the leading indicators pointing to a recovery, the case for aggressive monetary tightening is not very clear.

In this context, is the 4-5% inflation being targeted for countries like India a little too contractionary? This assumes significance in light of the recent suggestion by the IMF, in a radical break from its traditional position, that developed economies raise their normal inflation targets from 2% to 4%, so as to have adequate room to manoeuvre with monetary expansion even if interest rates touch the zero bound.

Assuming that developed economies have annual growth rates in the range of 3-5%, compared to India's 8-10%, is it not natural that the inflation targets for the developing economies be higher? How much higher will be a matter of some debate. Should we target an inflation rate of 6-7%, especially given the ambitious double digit growth rates being aimed at?

Interestingly, all the developing economies that enjoyed high growth rates for a sustained period did have similar inflation rates. In fact, such rates were considered acceptable in India itself all through till the nineties. The danger is the possibility of inflationary expectations getting out of control.

2. Raising interest rates now will also lead to an increase in the interest rate differentials between India and the US, where rates are likely to remain at present level for the foreseeable future. This in turn will only increase the incentives for dollar carry trades, resulting in increased capital inflows into India and putting upward pressure on the rupee. An appreciating rupee will continue to erode the export competitiveness and increase calls for forex market intervention by the RBI.
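The carry-trade arithmetic driving these flows can be sketched roughly as follows (the rates used are illustrative, not actual market quotes):

```python
def carry_trade_return(target_rate, funding_rate, expected_depreciation):
    """Approximate annual carry-trade return: borrow at the funding
    (dollar) rate, invest at the target (rupee) rate, and lose whatever
    the target currency depreciates over the holding period. Ignores
    transaction costs and risk; all inputs are illustrative."""
    return target_rate - funding_rate - expected_depreciation

# Borrowing dollars near zero to buy rupee assets yielding ~6%, with the
# rupee expected to appreciate 2% (i.e. negative depreciation), which
# adds to rather than subtracts from the carry:
carry = carry_trade_return(0.06, 0.0025, -0.02)
```

Note the feedback loop the post describes: the inflows themselves push the rupee up, which raises `-expected_depreciation` and makes the trade still more attractive, until the position becomes crowded enough to reverse violently.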

In a recent Business Standard op-ed, Shankar Acharya expressed concern at the fact that the current account deficit has been in excess of 3% for more than two quarters and this has coincided with exchange rate appreciation and rising inflation.



As he wrote, when a country runs a moderately high current account deficit and relatively rapid inflation is weakening its competitiveness, a depreciation of its currency would normally be expected, whereas the opposite seems to be happening with the rupee. The large inflows of foreign capital, coupled with the RBI's reluctance to intervene in the forex market (or its inability to sterilize foreign currency purchases, given its preoccupation with managing the government's record borrowing programme in 2009-10), have contributed to this contrarian trend.

He also points to the adverse impact on our exports of Chinese currency manipulation, which pegs the renminbi to a declining dollar. In the circumstances, in order to prevent the rupee from appreciating and amplifying the external account imbalance, he advocates that the RBI undertake foreign currency purchases (with the required sterilization, either through open market sales of government bonds, a resurrection of the market stabilisation scheme, or an increase in the CRR) and/or impose some form of capital controls (reduction of ECB borrowing limits, Tobin taxes, tighter P-Note regulations etc), as advocated again by the IMF recently, to limit the surge in inflows.

However, there are problems with implementing Prof Acharya's prescriptions. First, the appreciating rupee has helped keep imports cheap and therefore inflation (especially manufacturing inflation, by way of capital equipment imports and oil prices) under control; a depreciation would surely add to the inflationary pressures. Second, it is debatable whether the RBI can afford to intervene aggressively enough (especially when it also has to meet the government's massive borrowing obligations) to keep the rupee from appreciating, since this would lead to a build-up of reserves (which would then stay invested in low-yielding dollar assets) and an increased cost of servicing the sterilization efforts.

In this context, as mentioned earlier, raising interest rates will only widen the rate differentials, encourage carry-trade opportunities and capital inflows, and thereby appreciate the rupee further. Higher rates also mean that the cost of sterilization interventions increases, further constraining the RBI in its market interventions.

The difficulty of managing such situations also highlights the need to deploy a careful mix of all available options, without relying exclusively on the conventional interest rate channel. I have already blogged about how the RBI used multiple instruments to manage the macroeconomic environment effectively and largely avoided the adverse impact of the sub-prime crisis.

See also this excellent op-ed in the Businessline by Kanagasabapathy, who suggests that apart from working on aligning its repo, reverse repo and bank rates, the RBI should also move from being a passive absorber of funds (through its LAF window) to more active liquidity management (by buying g-secs and using repo operations to keep call money rates in the repo-reverse repo band). This will ensure that the markets respond immediately to policy interest rate changes (by integrating the rate and quantity channels), so that the 'rate channel' transmission of monetary policy becomes more effective. As he writes, the rate and quantum channels can be integrated and monetary transmission made effective only if policy rate changes are complemented and supported by active liquidity management operations.

Sunday, April 18, 2010

Debit cards, mobile banking and the dynamic inconsistency problem

As part of its total financial inclusion (TFI) and other programs to expand the reach of formal banking to cover the poor, central and state governments in India have been encouraging the issuance of debit cards along with bank accounts. It is also proposed to deliver many direct cash transfers like pensions and scholarships to beneficiaries through debit cards.

Taking access to banking one step forward, it has also been proposed to deliver cash through mobile phones in the various mobile banking models under discussion. Accordingly, people could use their cell phones as personal mobile ATMs, from which they can easily transfer cash for various purchases.

While it is true that debit cards would enhance the ease of access to banking services, especially in urban areas, there is growing evidence from the research of behavioural economists drawing attention to the cognitive biases thrown up by such easy access.

Behavioural economists have found that people, especially poor people, exhibit dynamically inconsistent preferences, wherein they attach a higher value to the present than any future time, "a preference for one that arrives sooner rather than later". Such hyperbolic discounting models of human behaviour show very high discount rates for the immediate against a very low one for distant time horizons.
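The quasi-hyperbolic (beta-delta) formulation economists commonly use to model such preferences can be sketched as follows; the parameter values here are purely illustrative:

```python
def discounted_value(amount, delay_days, beta=0.7, delta_daily=0.9998):
    """Quasi-hyperbolic (beta-delta) present value: anything delayed at
    all is scaled down by beta (present bias) on top of the ordinary
    exponential discount. Parameter values are illustrative."""
    if delay_days == 0:
        return amount
    return beta * (delta_daily ** delay_days) * amount

# Today, Rs 100 now beats Rs 120 tomorrow, because the one-day delay
# triggers the full present-bias penalty...
now_wins = discounted_value(100, 0) > discounted_value(120, 1)

# ...but when the same choice is pushed a year out, both options carry
# the beta penalty, and the person prefers waiting the extra day:
later_wins = discounted_value(120, 366) > discounted_value(100, 365)
```

This reversal, preferring the smaller-sooner reward only when it is immediate, is exactly the dynamic inconsistency the post goes on to discuss: the plan made for the future is abandoned when the future arrives.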

The commonest manifestation of such time-inconsistent preferences is the self-control problem. This relates to a variety of topics including procrastination, addiction, efforts at weight loss, sending children to school, deferring consumption, and saving for retirement. In this framework, extensive research done over the past decade on savings among the poor points to self-control problems that plague people's savings and consumption decisions. People succumb to spending on immediate needs and demands rather than saving, even when they had originally planned to save for a future need (like a child's education or building a house).

Accordingly, behavioural economists have advocated the use of commitment savings products and default savings options that lock in savings and increase the cost of withdrawal so as to overcome the inconsistency in inter-temporal preferences. By the same argument, facilitating easy access to bank accounts works towards amplifying the preference for the immediate over the future.

In the circumstances, access to banking services through a debit card (or mobile phone) enables people to withdraw their savings easily from the nearest ATM center, without having to physically visit and endure the formalities of withdrawing the money from a regular branch. In a scenario where customers have a simple no-frills account and easy access to ATM centers, the debit cards (and mobile phone banking) may end up exacerbating the self-control problem and encourage people to withdraw available money for immediate consumption than future needs. Ironically enough, the relative illiquidity of a savings bank account may be preferable to the liquidity of a debit card (or mobile phone account)!

In other words, technologies like debit cards and mobile phones come up against a trade-off. On the one hand, they enable easier access to formal credit mechanisms. On the other, they run the risk of feeding into people's self-control problems, with the potential to lower their appetite for saving while exacerbating their consumption urges. Therefore, while embracing such interventions that facilitate easier access to formal credit mechanisms, it is important to strike the appropriate balance.

The skeletons from the Goldman cupboard

It was slightly long in coming, but it had to happen. The Goldman Sachs enigma (the "perpetual money making machine") has finally been uncovered, as the Securities and Exchange Commission (SEC) in the US filed a civil law suit accusing it of defrauding customers who had purchased risky mortgage-related debt that was secretly devised to fail.

There were whispers all along that Goldman had been bundling and selling mortgages and other debts to unsuspecting investors while simultaneously betting against the same debts. Interestingly, despite the widespread knowledge of such practices, this is the first instance of action by the SEC against a Wall Street deal that helped investors capitalize on the falling sub-prime mortgage market.

The instrument in the SEC case, called Abacus 2007-AC1, was one of 25 deals that Goldman created so that it and select clients could bet against the housing market. The buyers of Abacus 2007-AC1 bonds - synthetic CDOs, created by packaging bundles of CDS (that insure against default of underlying mortgage bonds) - would keep receiving regular payments so long as the underlying securities stayed healthy, but would have to pay up the full insured amounts (or their full investments) if the securities failed.
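The investor's side of such a deal can be sketched as a simple cash-flow schedule; the notional, premium rate and default timing below are hypothetical, chosen only to show the asymmetry:

```python
def cdo_investor_cashflows(notional, annual_premium_rate,
                           default_year, horizon_years):
    """Year-by-year cash flows to a synthetic CDO investor (effectively
    the seller of protection): collect premiums while the reference
    bonds stay healthy, pay out the full notional if and when they fail.
    All inputs are illustrative."""
    flows = []
    for year in range(1, horizon_years + 1):
        if default_year is not None and year == default_year:
            flows.append(-notional)  # insurance pays out; stake is wiped out
            break
        flows.append(annual_premium_rate * notional)
    return flows

# A $10m stake earning 2% a year, with the underlying bonds failing in
# year 3: two years of premiums in, then the full notional out.
flows = cdo_investor_cashflows(10e6, 0.02, 3, 5)
```

The asymmetry is the whole story of the trade: the long side collects small steady premiums against a rare total loss, while the short side (here, the Paulson fund) pays small steady premiums for a rare total gain.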



In the instant case, Goldman created Abacus 2007-AC1 in February 2007, ostensibly through an independent deal manager ACA Management (who identified and packaged the mortgage bonds) for investors, but actually at the request of John A. Paulson, a prominent hedge fund manager who earned an estimated $3.7 billion in 2007 by correctly wagering that the housing bubble would burst. Though investors were told that ACA was selecting the bonds impartially, Goldman let the Paulson fund select mortgage bonds that they believed were most likely to lose value.

Unknown to both ACA and its investors (foreign banks, pension funds, insurance companies and other hedge funds), at the same time the Paulson fund and Goldman's proprietary trading desks were betting against the same bonds (so as to cause the bonds to default and the CDS's to kick in and make good the insurance contracts).

In fact, Goldman had let the Paulson fund pick and choose the underlying bonds, ones it knew were likely to default. As the Abacus deals plunged in value, Goldman and certain hedge funds made money on their negative bets, while the Goldman clients who bought the $10.9 billion in investments lost billions of dollars.

Such deals were triple-wins for Goldman Sachs

1. It would buy the CDS's on sub-prime mortgage bonds at cheap prices (in view of their high risk), repackage them, get them rated AAA and sell them on better terms (lower premium payouts to the holders of the CDS) to investors. Through this, it was able to arbitrage the market for such instruments.

2. Its proprietary trading desk would bet on these bonds defaulting (by either selecting the bonds it knew were most likely to default, or by taking short positions on them and driving down prices, forcing events like ratings downgrades or margin calls), while making the premium payments to the investors in the CDOs. Once the underlying bonds finally defaulted, the CDO investors lost all their investments, which were paid out by way of insurance redemption to Goldman and its select clients.

3. Finally, Goldman made large money from fees for arranging all these deals - both with investors and the clients.

Steve Waldman has this superb summary of the scandal

"Investors in Goldman’s deal reasonably thought that they were buying a portfolio that had been carefully selected by a reputable manager whose sole interest lay in optimizing the performance of the CDO. They no more thought they were trading 'against' short investors than investors in IBM or Treasury bonds do. In violation of these reasonable expectations, Goldman arranged that a party whose interests were diametrically opposed to those of investors would have significant influence over the selection of the portfolio. Goldman misrepresented that party’s role to the manager and failed to disclose the conflict of interest to investors. That’s inexcusable. Was it illegal? I don’t know, and I don’t care... But the firm’s behavior was certainly unethical. If Goldman cannot acknowledge that, I can’t see how investors going forward could place any sort of trust in the firm. Whatever does or does not happen in Washington D.C., Goldman Sachs needs to reform or die."


Though Goldman has denied all these allegations and has vowed to defend itself vigorously, Wall Street hammered Goldman shares down by more than 13%. This incident may have irreparably damaged Goldman's "reputation and its ability to keep its hands on so many sides of a trade — a practice that is immensely profitable for the firm". It could also trigger a cascade of law suits against Goldman by its investors, especially major banks like ABN and IKB, which lost their investments in these Abacus instruments. Interestingly, neither John Paulson nor his fund is part of the civil suit filed by the SEC.

See also this Times debate and Joe Nocera on the scam. Paul Krugman points to George Akerlof and Paul Romer who had highlighted the possibility of financial institutions taking excessive risks at the expense of the society and tax payers if they sensed profit opportunities and if the incentives were also so aligned.

Though the case against Goldman appears very clear, it may be the loudest testament to the virtual paralysis of financial market regulation in the US that there exists the real danger that Goldman may escape relatively unscathed. As Yves Smith writes, "This case should be a slam-dunk, but years of deregulation have narrowed the ground for lawsuits." It is an indictment of the whole system that instead of expressing such doubts we should have been asking, as Prof William Black writes, "Why have there been no criminal charges?"

The one silver lining in the cloud is the fact that, as Mark Thoma writes, it provides a great opportunity to silence the conservatives and Republicans and push through the financial market regulatory reform proposals currently being debated in the US Congress. In fact, the supporters of reforms should ride the momentum generated by the populist backlash against Goldman and drive home strong regulatory reforms in the Bill. It is important that they act immediately in view of the limited public memory horizons.

Update 1 (20/4/2010)
Goldman's earnings rose 91 percent in the first quarter of 2010, to $3.46 billion or $5.59 a share, up from $1.81 billion or $3.39 a share in the same period last year. Revenues increased 36 percent to $12.78 billion, up from $9.42 billion in the quarter a year ago.

This Times article explains why the case against Goldman, while clear cut for the layman, may not be as easy to prove legally.

Update 2 (24/4/2010)
More evidence, from emails, that Goldman made massive money betting against mortgages is available here. See also this NYT story that examined emails traded by Goldman Sachs executives saying that they would make 'some serious money' betting against the housing markets. See also this from Goldman hearings.

Update 3 (4/5/2010)
James Kwak on synthetic CDOs like Abacus.

Update 4 (21/7/2010)
Goldman Sachs agrees to pay $550 million to settle federal claims that it misled investors in the Abacus subprime mortgage product as the housing market began to collapse.



Update 5 (23/6/2011)

Slate has an article that examines the claim that Goldman Sachs misled its clients by continuing to promote and sell them securities backed by sub-prime mortgages even as its trading desk was betting on the sub-prime mortgages going bad and declining. It writes,

"Starting in late 2006, Goldman Sachs made trades that would pay off if the housing market tanked. Was this a massive bet that the housing market was going to crash, as Goldman's critics maintain? Or was it merely a hedge, an attempt by the firm to reduce its risk, as Goldman claims?"


Goldman obviously claims the latter.

Update 6 (15/3/2012)

Greg Smith's sensational resignation letter accuses Goldman of a culture that puts making money for the firm, even at the cost of the client, above everything else. He was a Goldman Sachs executive director and head of the firm's United States equity derivatives business in Europe, the Middle East and Africa.

Thursday, April 15, 2010

New York taxi medallions as an investment?

The New York City taxi medallion (fastened to the hoods of all taxis), or permit to run taxis on the city roads, is a classic example of a cartel. However, unlike regular cartels, this one is formally administered by a government agency, the New York City Taxi and Limousine Commission (TLC), which is responsible for licensing and regulating the city's medallions. And amidst the wreckage of the sub-prime meltdown and the Great Recession, medallion prices have remained amongst the steadiest of asset classes.





The price of an individual medallion (see the price data here!) has doubled since 2004, reaching a record high of $588,000 in February, while that of a corporate-owned medallion reached another record high of $779,000 in January. In fact, ownership of this asset has yielded above-market returns of 20.75% per year for corporate medallions and 15.32% for individual medallions.
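As a rough sanity check on those numbers (assuming "doubled since 2004" implies roughly a six-year holding period; the calculation covers price appreciation only and ignores lease income, which is why the quoted total returns are higher):

```python
def cagr(start_price, end_price, years):
    """Compound annual growth rate of the medallion price alone."""
    return (end_price / start_price) ** (1.0 / years) - 1.0

# Roughly $294k in early 2004 doubling to $588k by February 2010
# (~6 years) implies about 12.2% a year from price appreciation:
price_appreciation = cagr(294_000, 588_000, 6)
```

The gap between this ~12% price return and the quoted ~15-21% total returns would be the income yield from leasing the medallion out, consistent with the up-to-$800-a-week lease rates mentioned below.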



In 1937, the number of taxicab medallions was limited to those that existed at that time and by late 1940s this number had settled at 11,787. However, since 2003, State and local legislation have allowed the Taxi and Limousine Commission (TLC) to sell new medallions, bringing the current total to 13,257 yellow medallion taxicabs operating in New York City. In a ratio set by law, 40% of the total medallions are designated for individual, as opposed to corporate, ownership. Individual owners are required to drive at least part time, while corporate owners have the option of hiring an agent to lease the medallion full time for up to $800 a week.

If there are any derivative instruments based on the medallions, I would like to put my money there!!

Update 1 (24.1.2011)

Felix Salmon on the rise and rise of taxi medallions,

"When the stock market peaked in October 2007, medallions were trading at $425,000 apiece. (All data from this page.) By the time the market had plunged by more than half in February 2009, medallions had risen in value to $552,000. And they’ve only gone up in value since: in December 2010, the average medallion changed hands for $624,000; last Wednesday, a new all-time record was set for a corporate medallion which sold for $880,000."

Wednesday, April 14, 2010

Municipal waste - landfills Vs incinerators?

Standard solid waste management techniques to dispose of municipal garbage include landfill dumping, composting of bio-degradable materials, biomethanation of biological materials, and incineration. All of these disposal techniques involve significant amounts of pollution and are therefore carried out away from habitations. Strong public opposition to the installation of such plants near residential areas is therefore natural. For this reason, despite the theoretical possibility of municipal garbage emerging as a clean alternative fuel (waste-to-energy), it has remained under-exploited in most parts of the world.

In this context, the Times reports on municipal and industrial garbage incinerating plants abutting housing colonies in Europe, which are so clean that many times more dioxin is now released from home fireplaces and backyard barbecues than from such incinerators. Unlike the smoke-belching models of the past, today's waste-to-energy plants have arrays of newly developed filters and scrubbers to capture the offending chemicals — hydrochloric acid, sulfur dioxide, nitrogen oxides, dioxins, furans and heavy metals — as well as small particulates.

Across Europe, there are about 400 plants, with Denmark, Germany and the Netherlands leading the pack in expanding them and building new ones. Denmark alone now has 29 such plants, serving 98 municipalities in a country of 5.5 million people, and 10 more are planned or under construction. These plants are placed in the communities they serve, no matter how affluent, so that the heat of burning garbage can be efficiently piped into homes. Further, to increase public acceptance, some of the newest plants are encased in elaborate outer shells that resemble sculptures.



Waste-to-energy plants score over landfills in many ways. Primarily, the methane gas emitted by landfills is about 20 times more potent a greenhouse gas than carbon dioxide, the gas released by burning garbage. A 2009 study by the US EPA found that even state-of-the-art landfills, which collect the methane that emanates from rotting garbage to make electricity, churn out roughly twice as much climate-warming gas as waste-to-energy plants do for the units of power they produce. It also found that although new landfills are lined to prevent leaks of toxic substances and often capture methane, the process is highly inefficient. Secondly, the same EPA study found that waste-to-energy plants produce nine times as much energy as landfills for the same amount of waste disposed of.
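The first comparison can be made concrete with a stylised calculation. This is only an illustrative sketch using the ~20x methane potency figure quoted above and hypothetical round-number emission splits, not the EPA study's actual model:

```python
# Stylised comparison of landfill vs waste-to-energy climate impact, using the
# ~20x methane potency quoted in the post. The emission splits below are
# hypothetical round numbers chosen for illustration, not the EPA study's data.

GWP_METHANE = 20.0  # CO2-equivalent potency of methane, as quoted in the post

def co2_equivalent(co2_tonnes: float, methane_tonnes: float) -> float:
    """Total climate impact in tonnes of CO2-equivalent."""
    return co2_tonnes + GWP_METHANE * methane_tonnes

# Hypothetical: burning a batch of garbage emits 1.0 t of CO2 and no methane;
# landfilling the same batch emits less CO2 directly but lets some carbon
# escape as methane, whose potency dominates the total.
wte = co2_equivalent(co2_tonnes=1.0, methane_tonnes=0.0)
landfill = co2_equivalent(co2_tonnes=0.6, methane_tonnes=0.07)
print(f"Landfill footprint is {landfill / wte:.1f}x the waste-to-energy plant's")
```

Even a small slip of carbon into methane form swamps the direct CO2 saving, which is why capturing landfill methane only narrows, but does not close, the gap.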

The graphic below compares the relative effectiveness of landfills, which are widely used in the US (and across developing countries like India), and incinerators.



The preference for landfills in the US comes from the presence of abundant land, which makes it easier and cheaper to dispose of garbage in this manner. For exactly the opposite reason, landfills are unattractive in Europe, forcing cities there into costlier but more environment-friendly alternatives like incinerators. In fact, the European Union severely restricts the creation of new landfill sites, thereby promoting waste-to-energy plants. Cities in countries like India, which face similar land scarcity, may also have to adopt such techniques to dispose of their garbage.

Advocates of recycling strongly oppose waste-to-energy plants on the grounds that they disincentivize recycling (though the example of Germany, the world leader in recycling, with its large number of incinerators for non-recyclable waste, points otherwise). They advocate measures that move towards "zero-waste" instead of spending energy to burn the garbage after it is generated.

However, such best-practice zero-waste strategies are invariably the enemy of the practical and the possible. Instead of rejecting all second-best alternatives on the grounds that they are not as efficient as the "best practice", a more realistic option would be to adopt a mix of all these methods to effectively dispose of municipal solid waste.

Though there are a few experiments with waste-to-energy (with municipal garbage) in India, I am not aware of any that have actually yielded the desired results (over a long enough period) to serve as examples worthy of emulation. In fact, the present policies may actually be supporting fly-by-night developers, out to pocket the generous upfront subsidies offered by the Ministry of Non-conventional Energy Sources (MNES) and the massive land extents often allotted to such plants on the city limits.

The prevailing conventional wisdom in India is that, far from being a financial burden, solid waste disposal can make money for municipal bodies, especially through waste-to-energy plants (the possibility of accessing carbon credits under the Kyoto Protocol only adds to this impression). Accordingly, cash-strapped municipal bodies are loath to spend much on effective garbage collection, transportation and disposal, and look to private operators for quick-fix solutions to their solid waste disposal.

State and central government policies are structured with this in mind, without consideration for the fact that effective solid waste disposal is very expensive. For example, New York City paid $307 million last year to export more than four million tons of waste, mostly to landfills in distant states. In 2009, a small portion of New York's trash was processed at two 1990-vintage waste-to-energy plants in Newark and Hempstead, NY, owned by a private company, Covanta, and the City paid $65 a ton for the service. And incidentally, this is the cheapest available way for New York City to get rid of its trash.

Waste-to-energy plants do involve large upfront expenditures that cannot be made without access to an assured supply of municipal garbage and regulatory certainty. Instead of upfront subsidies, a more effective way to subsidize such plants may be through feed-in tariffs (or generous tariff incentives) similar to those proposed under the National Solar Power Policy. This removes the risk of developers pocketing the MNES incentive and fleeing after some time, and incentivizes developers to generate as much electricity as possible so as to maximize their returns. Municipal laws should also be revised to provide greater clarity to developers and ULBs (for example, on the issue of assuring the supply of solid waste to such plants) on the terms for sanctioning and setting up such waste-to-energy plants.
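The incentive difference between the two subsidy structures can be sketched with hypothetical numbers. This is only an illustration of the payment structures, with made-up subsidy, tariff and output figures, not actual MNES or solar-policy terms:

```python
# Hypothetical illustration of the two payment structures: an upfront capital
# subsidy pays out whether or not the plant ever runs, while a feed-in tariff
# pays only per unit of electricity actually delivered. All figures (rupees,
# kWh) are made up for illustration.

def upfront_subsidy_revenue(subsidy: float, kwh_generated: float,
                            market_tariff: float) -> float:
    # The subsidy is banked regardless of generation.
    return subsidy + kwh_generated * market_tariff

def feed_in_tariff_revenue(kwh_generated: float, feed_in_tariff: float) -> float:
    # Nothing is earned unless electricity is produced and sold.
    return kwh_generated * feed_in_tariff

# A fly-by-night developer who never runs the plant:
print(upfront_subsidy_revenue(subsidy=10_000_000, kwh_generated=0,
                              market_tariff=3.0))  # pockets the full subsidy
print(feed_in_tariff_revenue(kwh_generated=0, feed_in_tariff=5.5))  # earns nothing
```

Under the feed-in structure every rupee of support is conditional on generation, which is precisely the incentive alignment argued for above.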

See also this debate on MSW disposal in the US.

Update 1 (28/4/2010)
Norman Steisel makes the case for converting New York's garbage into energy. Rose George advocates generating electricity from sewage, by anaerobic digestion of the sludge and using the methane so produced to run turbines.

Global public debt in perspective

Via Economix, a fascinating snapshot of the global public debt (map size indicating total external debt). Europe, as the recent crisis shows, is all in the red! Reassuringly, despite its high ratio of total public debt to GDP, India's share of external debt is very small (at least far smaller than what it can afford to sustain).

Monday, April 12, 2010

Greater evil - inflation or unemployment?

The rising US public debt stock and the resultant apprehensions of an inflationary spiral have increased the opposition to any further stimulus spending despite the dismal unemployment scenario. Opponents point to the examples of Greece and a few other European countries to warn about the unsustainability of any more debt-financed fiscal spending.

However, as many others like Mark Thoma and Paul Krugman have pointed out, the unemployment situation is so dismal (the unemployment rate has held steady at 9.7% for the last three months) despite the March gains, that even a return to some semblance of normalcy would require spectacular economic growth for a few years. Even with recent robust economic growth rates, "what is happening is that the pace of job creation is matching the number of people returning to the labor force, but once the change in labor force is accounted for there is no change in the rate of unemployment".




In the aftermath of the 162,000 jobs addition in March, Dean Baker had this to say, "The average in the private sector over the last two months was just 66,000 jobs. With the public sector shedding jobs, we will need more rapid job growth just to stay even with the growth in the labor force." The job market signals point to a "very long, gradual, drawn out recovery process", which if not hastened can result in considerable suffering and misery.

In the context of the debate about the choice between inflation and unemployment, Stephen Gandel writes

"Neither is great. And the combination is, of course, horrible. But the problem with the folks arguing against an expanded jobs bill is that they don't recognize that there is at least a chance that doing nothing is worse. Inflation at least has some positive effects. House prices rise again. Debt becomes more manageable. Wages increase. Yes, our buying power erodes. And higher interest rates can slow growth. But if anyone can tell me an even small upside to high unemployment, I would be interested."


And as Don Peck recently wrote in the Atlantic, the impact of the unemployment legacy of the Great Recession may be far reaching,

"The Great Recession may be over, but this era of high joblessness is probably just beginning. Before it ends, it will likely change the life course and character of a generation of young adults. It will leave an indelible imprint on many blue-collar men. It could cripple marriage as an institution in many communities. It may already be plunging many inner cities into a despair not seen for decades. Ultimately, it is likely to warp our politics, our culture, and the character of our society for years to come."


In this context, Mark Thoma points to a graphic that forecasts the unemployment rates with and without additional government fiscal support. He argues that "instead of getting back to full employment by, say, 2013, we could get there sooner if we act now".

Sunday, April 11, 2010

The emerging market in natural gas

The massive investments in the exploration of natural gas over the past decade have become a potentially game-changing development in the global energy market. Apart from making gas a cheap and environment-friendly competitor to other carbon fuels (gas-fired power stations emit about half as much carbon as the cleanest coal plants), these developments also threaten to turn century-old, petroleum-driven global geo-politics on its head.

Technological breakthroughs in recent years, like hydraulic fracturing ("fracing", pronounced "fracking", involves blasting a cocktail of water, sand, chemicals and other materials into underground rock formations to shatter them into thousands of pieces, creating cracks that allow the gas to seep to the well for extraction) and horizontal drilling (which allows the drill bit to penetrate the earth vertically before moving sideways for hundreds or thousands of metres), have allowed exploration of hitherto unexploitable sources like gas trapped underneath hardy shale-rock formations, coal-bed methane and "tight gas". However, with expanding exploration have come serious environmental concerns that processes like hydraulic fracturing release toxic chemicals that pollute the ground water.



These unconventional sources today promise supply of massive quantities of natural gas for the foreseeable future. The International Energy Agency (IEA) estimates the global total to be 921 trillion cubic metres, more than five times proven conventional reserves. Further, since shale is almost ubiquitous, the IEA says China and India could have "large" reserves, far greater than the conventional resource. Exploration is also under way in Austria, Germany, Hungary, Poland and other European countries, with the big oil majors joining the fray.

Adding to the discoveries has been the excess availability of Liquefied Natural Gas (LNG) transport logistics - importers with the infrastructure to receive and regasify LNG can now easily tap the global market for spot cargoes. The recession has dampened demand from Japan and South Korea, the leading LNG buyers, and Qatar, the world's LNG powerhouse, has spent the past decade ramping up supplies aimed at the American market.

Further, LNG will grow increasingly abundant as new projects due to come on stream this year add another 80m tonnes to annual supply, almost 50% more than in 2008. While Qatar itself has 23.4 million tonnes per annum (mtpa) under construction that will take its total to 77 mtpa, Australia has 20 mtpa under construction and another 59-110 mtpa planned that could make it the market behemoth. Nigeria and Iran, too, have huge capacities of 40-50 and 72-78 mtpa respectively on the cards.

According to the International Energy Agency, overcapacity in gas pipelines and liquefied natural gas terminals will increase to about 250 billion cubic meters by 2015 as demand remains subdued. That is four times the level of spare capacity in 2007.



This proliferation of discoveries and the prospect of an explosion in supplies over the decade have surely changed the trade economics of natural gas and the bargaining power of big exporters like Qatar, Russia and Australia. Strong prospects of domestic discoveries in potential gas-importing countries, and too much capacity chasing demand, mean that many exploration projects (for example, Gazprom postponed its Shtokman gasfield in the Barents Sea), pipelines (Nord Stream, Gazprom's flagship project to export gas directly to Germany through the Baltic Sea) and LNG projects (Australia's LNG terminals to export gas to a now relatively less interested China) which looked attractive till a couple of years back now look bleak. The Energy Information Administration, the statistical arm of America's Department of Energy, predicts decades of relatively weak gas prices.

The vast capacity addition of recent years, huge discoveries of natural gas trapped in hardy shale-rock formations in the US (in Texas and near the Canadian border) in 2007 and 2008 (equivalent to 30 years of domestic gas production in the last five years), and a recession-induced dip in demand in the large traditional developed-economy markets have driven North American natural gas prices to around $5 per million British thermal units (mmBtu) from their high of $13 per mmBtu in mid-2008.

A recent McKinsey report forecasts that natural gas demand in India will almost double from the current 166 mmscmd (million metric standard cubic metres daily) to 310-320 mmscmd by 2015 if the price can be contained at $10-11/mmBtu (at customer gate), making the Indian market as large as that of Japan if not slightly larger by 2015, and second only to China which is expected to see demand of 452 mmscmd by 2015. Petronet LNG pegs domestic demand at 271 mmscmd by 2017 and expects it to touch 322 mmscmd only by 2022.
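The McKinsey forecast implies a striking pace of demand growth. A quick back-of-envelope check, assuming the 2010 base of 166 mmscmd, the midpoint of the 310-320 mmscmd range, and a five-year horizon:

```python
# Back-of-envelope check of the McKinsey forecast quoted above: the implied
# annual growth rate if demand rises from 166 mmscmd (the current figure in
# the post) to roughly 310-320 mmscmd by 2015. The five-year horizon and the
# use of the range midpoint are assumptions.

def implied_cagr(current: float, future: float, years: int) -> float:
    """Compound annual growth rate implied by a point forecast."""
    return (future / current) ** (1 / years) - 1

growth = implied_cagr(166, 315, 5)  # midpoint of the 310-320 mmscmd range
print(f"Implied demand growth: {growth:.1%} per year")
```

A sustained growth rate of this order is what would make India one of the largest gas markets in the world, and hence a buyer worth courting.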





This emerging global context therefore offers ample opportunities for India to leverage its massive market potential and strike beneficial long-term deals. The clear intent shown by the government to dismantle the administered price mechanism (APM) for natural gas will only increase the attractiveness of the Indian market for suppliers. A recent Business Standard article pointed to estimates that India can consume quite a lot of LNG at $6/mmBtu but much less at rates in the $8-10/mmBtu range. Incidentally, the government has fixed the price of gas from Reliance's KG-D6 field at $4.20 per mmBtu.

India's massive power generation capacity addition program and the persisting 12-15% peak power deficit can be considerably addressed by striking long-term supply deals on attractive terms that leverage the current market trends. In view of the imminent decoupling of gas and oil prices in the years ahead, care should be taken to ensure that such deals do not link gas supply prices to global oil prices. Apart from this, India will also need to develop LNG terminals to facilitate imports of these supplies.

Update 1 (28/2/2011)

An excellent NYT article chronicles the environmental dangers posed by natural gas extraction, especially the relatively new drilling method — known as high-volume horizontal hydraulic fracturing, or hydrofracking. It involves injecting huge amounts of water, mixed with sand and chemicals, at high pressures to break up rock formations and release the gas.

With hydrofracking, a well can produce over a million gallons of wastewater that is often laced with highly corrosive salts, carcinogens like benzene and radioactive elements like radium, all of which can occur naturally thousands of feet underground. Other carcinogenic materials can be added to the wastewater by the chemicals used in the hydrofracking itself. The wastewater, which is sometimes hauled to sewage plants not designed to treat it and then discharged into rivers that supply drinking water, contains radioactivity at levels higher than previously known, and far higher than the level that federal regulators say is safe for these treatment plants to handle.

Update 2 (12/4/2011)

NYT points to two new studies that question the received wisdom that natural gas can considerably offset global warming. They argue that it is "likely to do more to heat up the planet than mining and burning coal". Unconventional shale gas production, which accounts for nearly a quarter of total production in the United States and is expected to reach 45 percent by 2035, is thought to emit roughly half the amount of carbon dioxide as coal and about 30 percent that of oil.

However, as the article says, this may conceal larger emissions over the fuel's entire production life cycle — that is, from the moment a well is plumbed to the point at which the gas is used. In fact, the studies find that when everything is factored together, the greenhouse gas footprint of shale gas can be as much as 20 percent greater than, and perhaps twice as high as, coal per unit of energy. The report writes,

"The problem, the studies suggest, is that planet-warming methane, the chief component of natural gas, is escaping into the atmosphere in far larger quantities than previously thought, with as much as 7.9 percent of it puffing out from shale gas wells, intentionally vented or flared, or seeping from loose pipe fittings along gas distribution lines. This offsets natural gas’s most important advantage as an energy source: it burns cleaner than other fossil fuels and releases lower carbon dioxide emissions... Methane leaks have long been a concern because while methane dissipates in the atmosphere more quickly than carbon dioxide, it is far more efficient at trapping heat."
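The leakage arithmetic can be illustrated with a stylised calculation. This is only a sketch with assumed round-number constants (the combustion CO2 of methane, a 2x coal-to-gas combustion ratio, and GWP weights of roughly 72 over 20 years and 25 over 100 years), not the studies' actual life-cycle model:

```python
# Stylised sketch of the leakage arithmetic behind the studies quoted above.
# Assumed illustrative constants (not from the post): burning 1 kg of methane
# releases ~2.75 kg of CO2, coal emits roughly twice the CO2 of gas per unit
# of delivered energy, and leaked methane is weighted by a global warming
# potential (GWP) of ~72 over 20 years or ~25 over 100 years.

CO2_PER_KG_CH4_BURNED = 2.75  # kg of CO2 from combusting 1 kg of methane

def gas_vs_coal(leak_rate: float, gwp: float) -> float:
    """CO2-equivalent footprint of gas relative to coal (coal = 1.0),
    per unit of energy actually delivered by the gas that is burned."""
    burned = 1.0 - leak_rate                      # fraction of gas combusted
    gas_co2e = leak_rate * gwp + burned * CO2_PER_KG_CH4_BURNED
    coal_co2e = 2.0 * burned * CO2_PER_KG_CH4_BURNED  # same delivered energy
    return gas_co2e / coal_co2e

print(f"7.9% leakage, 20-yr GWP:  {gas_vs_coal(0.079, 72):.2f}x coal")
print(f"7.9% leakage, 100-yr GWP: {gas_vs_coal(0.079, 25):.2f}x coal")
```

On these assumed numbers, gas keeps its advantage on the century timescale but loses it badly over two decades, which is the crux of the disagreement between the studies and the industry.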


Natural gas is already the principal source of heat in half of American households.

See also this report which tries to answer some of the criticism of the pollution effects of unconventional natural gas.