


MAKROEKONOMIJA 1 SUPPLEMENTARY MATERIAL, ANDREJ SRAKAR

List of articles, in order:

1) King & Mazzota: Ecosystem Valuation
2) Davide Furceri, Prakash Loungani: Who let the Gini out? Searching for sources of inequality
3) Giovanni D'Alessio, Romina Gambacorta, Giuseppe Ilardi: Are Germans poorer than other Europeans? The principal Eurozone differences in wealth and income
4) Michael Bordo, Christopher M. Meissner: Does inequality lead to a financial crisis?
5) Skidelsky: The Influence of the Great Depression on Keynes's General Theory
6) Uneasy Money Blog: Who's Afraid of Say's Law?
7) R. Dorfman: Thomas Robert Malthus and David Ricardo
8) Johan Norberg: The Questionable Search for a Happiness Index
9) Eugenio Proto, Aldo Rustichini: GDP and life satisfaction: New evidence
10) Christopher D. Carroll: Consumption and Saving: Theory and Evidence
11) Katie Farrant, Mika Inkinen, Magda Rutkowska, Konstantinos Theodoridis: What can company data tell us about financing and investment decisions?
12) Daniel Carroll: Private Fixed Investment's Recovery: Not So Bad After All
13) Volker Wieland: Fiscal Stimulus and the Promise of Future Spending Cuts
14) Dean Baker: Fiscal Policy, the Long-Term Budget, and Inequality
15) Menzie Chinn: Fiscal Multipliers
16) Robert J. Barro: Government Spending Is No Free Lunch
17) Ayako Saiki, Sunghyun Henry Kim: How the euro synchronised EZ cycles
18) Biagio Bossone: On Economic Rationality, Bubbles, and Macroprudence
19) Charles I. Plosser: Monetary Policy and a Brightening Economy
20) T. Duy: Yellen's Debut as Chair
21) Jens Christensen: When Will the Fed End Its Zero Rate Policy?
22) Jean Ross: What makes a tax system fair?
23) Risaburo Nezu: Abenomics and Japan's Growth Prospects
24) Regine Adele Ngono Fouda: Protectionism and Free Trade: A Country's Glory or Doom?
25) Michael Boehm: Job polarisation and the decline of middle-class workers' wages
26) Paul Krugman: How complicated does the model have to be?
27) Simon Wren-Lewis: The return of schools of thought in macroeconomics
28) Jonathan Portes: Fiscal policy: What does Keynesian mean?
29) M. Henry Linder, Richard Peach, and Robert Rich: The Long and Short of It: The Impact of Unemployment Duration on Compensation Growth
30) Jérémie Cohen-Setton: Blogs review: The employment to population ratio - does the drop in unemployment represent a genuine improvement in the US labor market?
31) Ben S. Bernanke: Deflation: Making Sure "It" Doesn't Happen Here
32) Mickey Levy: Clarifying the debate about deflation concerns
33) Michele Battisti, Gianfranco di Vaio, Joseph Zeira: A new look at global growth convergence and divergence
34) Prakash Loungani: Are Jobs and Growth Still Linked?
35) Brad DeLong: The Real Challenges to Growth
36) Simon Wren-Lewis: Monetary versus Fiscal: an odd debate
37) Daron Acemoglu and James Robinson: Democracy vs. Inequality


Environmental protection

From: King & Mazzota, Ecosystem Valuation, http://www.ecosystemvaluation.org/index.html

Basic Concepts of Economic Value


This section explains the basic economic theory and concepts of economic valuation. Economic value is one of many possible ways to define and measure value. Although other types of value are often important, economic values are useful to consider when making economic choices: choices that involve tradeoffs in allocating resources.

Measures of economic value are based on what people want: their preferences. Economists generally assume that individuals, not the government, are the best judges of what they want. Thus, the theory of economic valuation is based on individual preferences and choices. People express their preferences through the choices and tradeoffs that they make, given certain constraints, such as those on income or available time.

The economic value of a particular item, or good, for example a loaf of bread, is measured by the maximum amount of other things that a person is willing to give up to have that loaf of bread. If we simplify our example economy so that the person only has two goods to choose from, bread and pasta, the value of a loaf of bread would be measured by the most pasta that the person is willing to give up to have one more loaf of bread. Thus, economic value is measured by the most someone is willing to give up in other goods and services in order to obtain a good, service, or state of the world.

In a market economy, dollars (or some other currency) are a universally accepted measure of economic value, because the number of dollars that a person is willing to pay for something tells how much of all other goods and services they are willing to give up to get that item. This is often referred to as willingness to pay.

In general, when the price of a good increases, people will purchase less of that good. This is referred to as the law of demand: people demand less of something when it is more expensive (assuming prices of other goods and people's incomes have not changed).
By relating the quantity demanded and the price of a good, we can estimate the demand function for that good. From this, we can draw the demand curve, the graphical representation of the demand function. It is often incorrectly assumed that a good's market price measures its economic value. However, the market price only tells us the minimum amount that people who buy the good are willing to pay for it. When people purchase a marketed good, they compare the amount they would be willing to pay for that good with its market price.

They will only purchase the good if their willingness to pay is equal to or greater than the price. Many people are actually willing to pay more than the market price for a good, and thus their values exceed the market price. In order to make resource allocation decisions based on economic values, what we really want to measure is the net economic benefit from a good or service. For individuals, this is measured by the amount that people are willing to pay, beyond what they actually pay. Thus, two goods that sell for the same price may have different net benefits. For example, I may have a choice between wheat and multi-grain bread, which both sell for $2.00 per loaf. Because I prefer multi-grain, I am willing to pay up to $3.00 for a loaf. However, I would only pay $2.50 at the most for the wheat bread. Therefore, the net economic benefit I receive from the multi-grain bread is $1.00, and from the wheat bread only $0.50. The economic benefit to individuals is often measured by consumer surplus. This is graphically represented by the area under the demand curve for a good, above its price.
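The bread example above can be sketched in a few lines of Python; the prices and willingness-to-pay figures simply reproduce the example and are illustrative, not data:

```python
# Net economic benefit = willingness to pay minus the price actually paid.
# The values below reproduce the bread example from the text.

def net_benefit(willingness_to_pay: float, price: float) -> float:
    """A consumer's net benefit (surplus) from buying one unit of a good."""
    return willingness_to_pay - price

price = 2.00  # both loaves sell for $2.00
multi_grain = net_benefit(willingness_to_pay=3.00, price=price)  # 1.00
wheat = net_benefit(willingness_to_pay=2.50, price=price)        # 0.50
```

Summing such individual net benefits across all buyers corresponds to the area under the demand curve above the price, i.e. the consumer surplus.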

The economic benefit to individuals, or consumer surplus, received from a good will change if its price or quality changes. For example, if the price of a good increases, but people's willingness to pay remains the same, the benefit received (maximum willingness to pay minus price) will be less than before. If the quality of a good increases, but price remains the same, people's willingness to pay may increase, and thus the benefit received will also increase. Economic values are also affected by changes in the price or quality of substitute goods or complementary goods. If the price of a substitute good changes, the economic value of the good in question will change in the same direction. For example, wheat bread is a close substitute for multi-grain bread. So, if the price of multi-grain bread goes up, while the price of wheat bread remains the same, some people will switch, or substitute, from multi-grain to wheat bread. Therefore, more wheat bread is demanded and its demand function shifts upward, making the area under it, the consumer surplus, greater.

Similarly, if the price of a complementary good, one that is purchased in conjunction with the good in question, changes, the economic benefit from the good will change in the opposite direction. For example, if the price of butter increases, people may buy less of both bread and butter. If less bread is demanded, then the demand function shifts downward, and the area under it, the consumer surplus, decreases.

Producers of goods also receive economic benefits, based on the profits they make when selling the good. Economic benefits to producers are measured by producer surplus, the area above the supply curve and below the market price. The supply function tells how many units of a good producers are willing to produce and sell at a given price. The supply curve is the graphical representation of the supply function. Because producers would like to sell more at higher prices, the supply curve slopes upward. If producers receive a higher price than the minimum price they would sell their output for, they receive a benefit from the sale: the producer surplus. Thus, benefits to producers are similar to benefits to consumers, because they measure the gains to the producer from receiving a price higher than the price they would have been willing to sell the good for.


When measuring economic benefits of a policy or initiative that affects an ecosystem, economists measure the total net economic benefit. This is the sum of consumer surplus plus producer surplus, less any costs associated with the policy or initiative.
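As a minimal sketch of this calculation, the total net economic benefit can be computed for assumed linear demand and supply curves. The curve coefficients and the policy cost below are made-up illustrations, not estimates:

```python
# Consumer surplus (CS): triangle under the demand curve, above the price.
# Producer surplus (PS): triangle above the supply curve, below the price.
# Total net benefit of a policy = CS + PS - policy costs.

def surpluses(a: float, b: float, c: float, d: float):
    """Linear demand P = a - b*Q and supply P = c + d*Q.
    Returns (equilibrium quantity, price, consumer surplus, producer surplus)."""
    q = (a - c) / (b + d)      # quantity where demand and supply prices meet
    p = a - b * q              # market-clearing price
    cs = 0.5 * (a - p) * q     # area of the consumer-surplus triangle
    ps = 0.5 * (p - c) * q     # area of the producer-surplus triangle
    return q, p, cs, ps

q, p, cs, ps = surpluses(a=10.0, b=1.0, c=2.0, d=1.0)  # q=4, p=6, cs=8, ps=8
policy_cost = 3.0                                      # assumed policy cost
total_net_benefit = cs + ps - policy_cost              # 13.0
```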

Valuation of Ecosystem Services


This section defines and explains some important concepts related to how economists approach ecosystem valuation.

Ecosystem valuation can be a difficult and controversial task, and economists have often been criticized for trying to put a price tag on nature. However, agencies in charge of protecting and managing natural resources must often make difficult spending decisions that involve tradeoffs in allocating resources. These types of decisions are economic decisions, and thus are based, either explicitly or implicitly, on society's values. Therefore, economic valuation can be useful, by providing a way to justify and set priorities for programs, policies, or actions that protect or restore ecosystems and their services (see The Big Picture for more information). In order to understand how economists approach ecosystem valuation, it is useful to review some important definitions and concepts.

Ecosystem Functions and Services

Ecosystem functions are the physical, chemical, and biological processes or attributes that contribute to the self-maintenance of an ecosystem; in other words, what the ecosystem does. Some examples of ecosystem functions are provision of wildlife habitat, carbon cycling, or the trapping of nutrients. Thus, ecosystems, such as wetlands, forests, or estuaries, can be characterized by the processes, or functions, that occur within them. Ecosystem services are the beneficial outcomes, for the natural environment or for people, that result from ecosystem functions. Some examples of ecosystem services are support of the food chain, harvesting of animals or plants, and the provision of clean water or scenic views. In order for an ecosystem to provide services to humans, some interaction with, or at least some appreciation by, humans is required. Thus, functions of ecosystems are value-neutral, while their services have value to society.

Some Factors that Complicate Ecosystem Management Decisions

Decisions about ecosystem management are complicated by the fact that various types of market failure are associated with natural resources and the environment. Market failures occur when markets do not reflect the full social costs or benefits of a good. For example, the price of gasoline does not fully reflect the costs, in terms of pollution, that are imposed on society by burning gasoline. Market failures related to ecosystems include the facts that: (i) many ecosystems provide services that are public goods; (ii) many ecosystem services are affected by externalities; and (iii) property rights related to ecosystems and their services are often not clearly defined.

Ecosystem services are often public goods, which means that they may be enjoyed by any number of people without affecting other people's enjoyment. For example, an aesthetic view is a pure public good. No matter how many people enjoy the view, others can also enjoy it. Other services may be quasi-public goods, where at a certain level of use, others' enjoyment may be diminished. For example, a public recreation area may be open to everyone. However, crowding can decrease people's enjoyment of the area. The problem with public goods is that, although people value them, no one person has an incentive to pay to maintain the good. Thus, collective action is required in order to produce the most beneficial quantity.

Ecosystem services may be affected by externalities, or uncompensated side effects of human actions. For example, if a stream is polluted by runoff from agricultural land, the people downstream experience a negative externality. The problem with negative externalities is that the people (or ecosystems) they are imposed upon are generally not compensated for the damages they suffer. Finally, if property rights for natural resources are not clearly defined, they may be overused, because there is no incentive to conserve them. For example, unregulated fisheries are an open-access resource: anyone who wants to harvest fish can do so. Because no one person or group owns the resource, open access can lead to severe over-harvesting and potentially severe declines in fish abundance over time.

Ecosystem valuation can help resource managers deal with the effects of market failures, by measuring their costs to society, in terms of lost economic benefits. The costs to society can then be imposed, in various ways, on those who are responsible, or can be used to determine the value of actions to reduce or eliminate environmental impacts. For example, in the case of the crowded public recreation area, benefits to the public could be increased by reducing the crowding. This might be done by expanding the area or by limiting the number of visitors. The costs of implementing different options can be compared to the increased economic benefits of reduced crowding. In the case of a stream polluted by agricultural runoff, the benefits from eliminating the pollution can be compared to the costs of actions to reduce the runoff, or can be used to determine the appropriate fines or taxes to be levied on those who are responsible. In the case of open-access fisheries, the benefits from reducing overfishing can be compared to regulatory costs or costs to the commercial fishing industry if access is restricted.

Ecosystem Values

Ecosystem values are measures of how important ecosystem services are to people: what they are worth. Economists measure the value of ecosystem services to people by estimating the amount people are willing to pay to preserve or enhance the services (see Basic Concepts of Economic Value for more detailed information). However, this is not always straightforward, for a variety of reasons. Most importantly, while some services of ecosystems, like fish or lumber, are bought and sold in markets, many ecosystem services, like a day of wildlife viewing or a view of the ocean, are not traded in markets. Thus, people do not pay directly for many ecosystem services. Additionally, because people are not familiar with purchasing such goods, their willingness to pay may not be clearly defined. However, this does not mean that ecosystems or their services have no value, or cannot be valued in dollar terms. It is not necessary for ecosystem services to be bought and sold in markets in order to measure their value in dollars. What is required is a measure of how much purchasing power (dollars) people are willing to give up to get the service of the ecosystem, or how much people would need to be paid in order to give it up, if they were asked to make a choice similar to one they would make in a market. (Overview of Methods to Estimate Dollar Values gives an overview of, and Dollar-Based Ecosystem Valuation Methods describes in more detail, the methods that economists use to estimate dollar values for ecosystems and their services.)

Types of Values

Economists classify ecosystem values into several types. The two main categories are use values and non-use, or passive use, values. Whereas use values are based on actual use of the environment, non-use values are values that are not associated with actual use, or even an option to use, an ecosystem or its services. Thus, use value is defined as the value derived from the actual use of a good or service, such as hunting, fishing, birdwatching, or hiking. Use values may also include indirect uses. For example, an Alaskan wilderness area provides direct use values to the people who visit the area. Other people might enjoy watching a television show about the area and its wildlife, thus receiving indirect use values. People may also receive indirect use values from an input that helps to produce something else that people use directly. For example, the lower organisms on the aquatic food chain provide indirect use values to recreational anglers who catch the fish that eat them.

Option value is the value that people place on having the option to enjoy something in the future, although they may not currently use it. Thus, it is a type of use value. For example, a person may hope to visit the Alaskan wilderness area sometime in the future, and thus would be willing to pay something to preserve the area in order to maintain that option. Similarly, bequest value is the value that people place on knowing that future generations will have the option to enjoy something. Thus, bequest value is measured by people's willingness to pay to preserve the natural environment for future generations. For example, a person may be willing to pay to protect the Alaskan wilderness area so that future generations will have the opportunity to enjoy it.

Non-use values, also referred to as passive use values, are values that are not associated with actual use, or even the option to use, a good or service.
Existence value is the non-use value that people place on simply knowing that something exists, even if they will never see it or use it. For example, a person might be willing to pay to protect the Alaskan wilderness area, even though he or she never expects or even wants to go there, but simply because he or she values the fact that it exists. It is clear that a single person may benefit in more than one way from the same ecosystem. Thus, total economic value is the sum of all the relevant use and non-use values for a good or service.

Choosing between efficiency and equity

Who let the Gini out? Searching for sources of inequality


Davide Furceri, Prakash Loungani, 13 February 2014

Income inequality has been growing in many economies over the past two decades, and it is currently historically high. This column adds two new contributors to the popular explanations of increased inequality. Fiscal consolidations, especially those following the recent crisis, can increase inequality, mostly by raising long-term unemployment. A second source that leads to a persistent increase in inequality is capital account liberalisation. The effects of these policies on inequality should therefore be taken into account when designing policy.
Last month's World Economic Forum at Davos will be remembered as the one where the rich realised that incomes were unequal. One suspects the rich had always been dimly aware of this fact, but even they seem to have been astounded by the degree of inequality.

Income inequality is at historic highs. The richest 10% took home half of US income in 2012, a division of spoils not seen in that country since the 1920s. In OECD countries, inequality increased more in the three years up to 2010 than in the preceding 12 years.

The recent increases come on top of growing inequality for over two decades in many advanced economies (Levy and Temin 2007, D'Alessio et al. 2013).

What explains this rise?


One common explanation is that technological change in recent decades has conferred an advantage on those adept at working with computers and information technology. Moreover, global supply chains have moved low-skilled tasks out of advanced economies. Thus, the demand for highly skilled workers in advanced economies has increased, raising their incomes relative to those less skilled. Our recent research uncovers two other contributors to increased inequality:

- The opening up of capital markets to foreign entry and competition, referred to as capital account liberalisation.
- The policy actions taken by governments to lower their budget deficits. Such actions are referred to as fiscal consolidation in economists' jargon and, by their critics, as austerity policies.

Whom does it hurt?


The Great Recession of 2007-09 has led to a significant increase in public debt in advanced economies due to the decline in tax revenues, the costs of financial bailouts of banks and companies, and the fiscal stimulus provided by many countries at the onset of the crisis. Public debt increased, on average, from 70% of GDP in 2007 to about 100% of GDP in 2011, its highest level in 50 years.

Against this backdrop, many governments have embarked on fiscal consolidation in recent years. Such consolidations, combinations of spending cuts and tax hikes to reduce the budget deficit, are a common feature of government actions. So, history offers a good guide to studying the impacts of these policies on inequality. Over the past 30 years, there have been 173 episodes of fiscal consolidation in our sample of 17 advanced economies. On average across these episodes, the policy actions reduced the budget deficit by about 1% of GDP.

There is clear evidence that the decline in budget deficits was followed by increases in inequality. The Gini coefficient, the most commonly used measure of inequality, increased by 0.3 percentage points two years following the fiscal consolidation, and by nearly 1 percentage point after eight years (Figure 1).

Figure 1. Fiscal consolidations are followed by increases in inequality (impact on the Gini coefficient in the years following a fiscal consolidation)

Note: The chart shows point estimates and one-standard-error bands. See Ball, Furceri, Leigh, and Loungani (2013) for details. Fiscal consolidation episodes are taken from Guajardo, Leigh and Pescatori (forthcoming, JEEA).

These effects are quantitatively significant. The Gini coefficient is measured here on a scale from zero to 100; the average value of the coefficient in our sample is 25. Hence, an average-sized fiscal consolidation of 1% of GDP raises the Gini by about 4%.

One explanation for these results could be that while fiscal consolidations coincide with inequality, it is actually a third factor that is responsible for movements in both. For example, a recession, or a slowdown, could raise inequality and at the same time lead to an increase in the debt-to-GDP ratio, thus increasing the odds of a fiscal consolidation. However, the impact of fiscal consolidation on inequality holds even after controlling for the impacts of recessions and slowdowns. Other tests of the robustness of these results are reported in two recent papers (Ball, Furceri, Leigh, and Loungani 2013; Woo, Bova, Kinda, and Zhang 2013).

There can be many channels through which fiscal consolidation raises inequality. For instance, cuts in social benefits and in public sector wages and employment, often associated with fiscal consolidation, may disproportionately affect lower-income groups. Another channel could be through the impact of fiscal consolidations on long-term unemployment, since long spells of unemployment are likely to be associated with significant earnings losses. Some evidence in favour of this comes from our finding that fiscal consolidations lead to an increase in long-term unemployment. The long-term unemployment rate is about 0.5 percentage points higher four years after an episode of consolidation; in contrast, the impact on short-term unemployment is very small (Figure 2).

Figure 2. Fiscal consolidations are followed by an increase in long-term unemployment

Note: The chart shows point estimates and one-standard-error bands. See Ball, Furceri, Leigh, and Loungani (2013) for details. Fiscal consolidation episodes are taken from Guajardo, Leigh and Pescatori (forthcoming, JEEA).
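The Gini coefficient used throughout can be computed directly from individual incomes. A minimal sketch, using the mean-absolute-difference formula and made-up data:

```python
# Gini = (sum over all pairs of |x_i - x_j|) / (2 * n^2 * mean income).
# 0 means perfect equality; values approaching 1 mean extreme concentration.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return diff_sum / (2 * n * n * mean)

gini([100, 100, 100, 100])  # 0.0: everyone earns the same
gini([0, 0, 0, 400])        # 0.75: one person earns everything
```

On the 0-100 scale used in the text these would read as 0 and 75, so the reported 0.3-percentage-point rise corresponds to a 0.003 change in this 0-1 index.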

Open to inequity?
The past three decades have been associated with a steady decline in the number of restrictions that countries impose on cross-border financial transactions, as reported in the IMF's Annual Report on Exchange Arrangements and Exchange Restrictions. An index of capital account openness constructed from these reports shows a steady increase; that is, restrictions on cross-border transactions have been steadily lifted.

To uncover whether there is a link between this increased openness and inequality, we study episodes when there were large changes in the index of capital account openness, which are more likely to represent deliberate policy actions by governments to liberalise their financial sectors. Using this criterion, there were 58 episodes of large-scale capital account reform in our sample of 17 advanced economies.

What happens to inequality in the aftermath of these episodes? The evidence is that, on average, capital account liberalisation is followed by a significant and persistent increase in inequality. The Gini coefficient increases by about 1.5% a year after the liberalisation, and by 2% after five years (Figure 3).

Figure 3. Capital account liberalisations are followed by increases in inequality (impact on Gini coefficient in years following capital account liberalisation)

Note: The chart shows point estimates and one-standard-error bands. See Furceri, Jaumotte, and Loungani (2013) for details.

The robustness of this result is documented extensively in our research (Furceri, Jaumotte and Loungani 2013). In particular, the impact of capital account liberalisation on inequality holds even after the inclusion of a myriad of other determinants of inequality, such as output, openness to trade, changes in the size of government, changes in industrial structure, demographic changes, and regulations in product, labour and credit markets.

There are many channels through which opening up the capital account can lead to higher inequality. For example, an opening up allows financially constrained companies to borrow capital from abroad. If capital is more complementary to skilled workers, liberalisation increases the relative demand for such workers, leading to higher inequality in incomes. Indeed, there is evidence that the impact of liberalisation on wage inequality is greater in industries that are more dependent on external finance, and where the complementarity between capital and skilled labour is higher (Larrain 2013).

Policy lessons
These results do not imply that countries should not undertake capital account liberalisation or fiscal consolidation. After all, such policy actions are not taken on a whim, but reflect an assessment that they will benefit the economy. Capital account liberalisation allows domestic companies to access pools of foreign capital, and often, through foreign direct investment in particular, access to the technology that comes with it. It also allows domestic savers to invest in assets outside their home country. If properly managed, this expansion of opportunities can be beneficial. Likewise, fiscal consolidation is generally undertaken with the aim of reducing government debt to safer levels. Lower debt levels in turn can help the economy by bringing down interest rates; and over time the lighter burden of interest payments on the debt can also allow the government to cut taxes.

However, at a time when rising inequality is a source of concern to many governments, weighing these benefits against the distributional effects is also important. Awareness of these effects might lead some governments to design policy actions in a way that redresses the distributional impacts. For instance, greater resort to progressive taxes and the protection of social benefits for vulnerable groups can help counter some of the effects of fiscal consolidations on inequality. By promoting education and training for low- and middle-income workers, governments can also counter some of the forces behind the long-term rise in inequality.

References
Ball, Laurence, Davide Furceri, Daniel Leigh and Prakash Loungani (2013), "The Distributional Effects of Fiscal Consolidation", IMF Working Paper 13/151 (Washington: International Monetary Fund).
D'Alessio, Giovanni, Romina Gambacorta and Giuseppe Ilardi (2013), "Are Germans poorer than other Europeans? The principal Eurozone differences in wealth and income", VoxEU.org, 24 May.
Furceri, Davide, Florence Jaumotte and Prakash Loungani (2013), "The Distributional Effects of Capital Account Liberalization", forthcoming IMF Working Paper (Washington: International Monetary Fund).
Larrain, Mauricio (2013), "Capital Account Liberalization and Wage Inequality", Columbia Business School Working Paper (June).
Levy, Frank and Peter Temin (2007), "Inequality and institutions in 20th century America", VoxEU.org, 15 June.
Woo, Bova, Kinda and Zhang (2013), "Distributional Effects of Fiscal Consolidation and the Role of Fiscal Policy: What Do the Data Say?", IMF Working Paper 13/195 (Washington: International Monetary Fund, September).

Are Germans poorer than other Europeans? The principal Eurozone differences in wealth and income
Giovanni D'Alessio, Romina Gambacorta, Giuseppe Ilardi, 24 May 2013

The ECB's recent survey on household finances and consumption threw up some unexpected results: counter-intuitively, the average German household has less wealth than the average Mediterranean household. In line with a recent VoxEU.org contribution from De Grauwe and Ji, this article analyses the principal differences in wealth and income between the main Eurozone countries.
The Household Survey (European Central Bank 2013) is a joint project of the ECB and all the Eurozone central banks, providing harmonised information on the balance sheets of 62,000 households in 15 Eurozone countries (all except Ireland and Estonia). Media hype was generated by the ranking of countries' median household wealth, especially by the fact that:

- Germany was in last place with €51,400.
- Italy and Spain were significantly above France, with wealth equal to €173,500 and €182,700 respectively, compared to the French households' €115,800.

The mean household wealth averages paint a very different picture from current narratives about the relative wealth of nations in the Eurozone. The relative dispersion in the estimates is much smaller: the German household mean is €195,200, while for France, Italy and Spain it is €233,400, €275,200 and €291,400 respectively. Moreover, Germany climbs six places in the wealth ranking. As already noted by De Grauwe and Ji (2013), Germany's position at the bottom of the median ranking is simply due to its large wealth inequality compared with the others. This is confirmed by observing that the concentration of wealth, measured by a Gini index of 0.76, is much higher in Germany, while for France, Italy and Spain the estimate is smaller (0.68, 0.61 and 0.58 respectively).

Household size matters


This analysis does not take account of household composition in the various countries. The distribution of household wealth across countries is affected by differences in the demographic characteristics of households (age, education, household size):

- In northern countries, households are generally small, often composed of a single member.
- In the south it is not unusual to find many people, even from different generations (grandparents, parents and children), living together.

The splitting up of household members produces a sort of partition of wealth among the households they generate, as happens when young members exit the household to form a new family. A simple way to adjust for household size is to consider per capita averages:

The per capita wealth figure for Italy and Spain is €108,700, slightly higher than for France (€104,100) and Germany (€95,500).

Hence, the differences between per capita averages are much smaller than those observed between household medians. Dealing with sample estimates, we observe that most of the above differences are not statistically significant, although National Accounts estimates confirm that per capita wealth in Germany is a little lower than in Italy, Spain or France (by 3%, 4% and 11% respectively) (European Central Bank 2013).

The cross-country variation in wealth is also influenced by other factors: the ownership rate of the main residence, the trends in prices of the wealth components, and different propensities to under-report. On the first point, the level of household wealth is connected to ownership of the main residence: the ownership rate is 44% in Germany, probably due to the primary role of social housing there, compared to 55%, 69% and 83% in France, Italy and Spain, respectively. Moreover, in the past decade the price of dwellings grew considerably less in Germany than in the other countries. Finally, German households' wealth is characterised by a higher diffusion of financial assets, which are more frequently subject to under-reporting in sample surveys.

It should also be noted that the wealth of nations is not composed exclusively of household wealth, but also includes the wealth of other institutional sectors (e.g., the public sector). Following De Grauwe and Ji's (2013) estimates, Germany is among the richest countries in the Eurozone in terms of national wealth. Thus, there are no substantial differences in per capita wealth averages, and the media hype is the result of a specious reading of part of the results.

Income and poverty


Other economic indicators paint pictures that are more favourable to Germany:

German mean gross household income is about €43,500. In the other three countries the average is between €31,000 and €37,000.

If we consider household equivalent income, a measure of the resources available at the individual level that takes household size and composition into account, the income gap appears larger:2 German mean gross equivalent income is about €28,000 (the median €22,000), compared with means ranging from €19,000 to €21,000 (and medians from €15,000 to €17,000). Even taking into account the different purchasing power of income in the four countries, the results are similar for both means and medians. In short, Germans have significantly higher incomes than the citizens of the other three major Eurozone countries, whose equivalent income statistics are similar to one another.

Figure 1. Net wealth statistics (€1,000)
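The equivalent-income adjustment works by dividing household income by an equivalence scale; a minimal sketch of the modified OECD scale described in footnote 2, with invented ages and income:

```python
def oecd_modified_scale(ages):
    """Modified OECD equivalence scale: 1.0 for the first adult (the head),
    0.5 for each other member aged 14 and over, 0.3 for each member under 14."""
    head, *others = ages
    return 1.0 + sum(0.5 if age >= 14 else 0.3 for age in others)

def equivalent_income(household_income, ages):
    """Household income per equivalent adult."""
    return household_income / oecd_modified_scale(ages)

# Two adults and two young children: scale = 1 + 0.5 + 0.3 + 0.3 = 2.1,
# so a household income of 42,000 yields about 20,000 per equivalent adult.
print(equivalent_income(42_000, [40, 38, 6, 3]))
```

The scale deliberately grows more slowly than household size, reflecting economies of scale in consumption, which is why large southern households look less rich per equivalent adult than their raw household income suggests.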

As to the distribution of poverty, the proportion of relatively poor individuals is calculated by adopting both a unique poverty line (half the Eurozone median equivalent income, adjusted for the different price levels3) and a specific poverty line for each country. The second defines poverty only in terms of the household's relative position (and prices) within its own country's income distribution, while the first treats all households as belonging to a single area, although it does take into account differences in price levels across countries. Figure 3 shows that the two definitions do not produce the same picture. Under the first definition:

Poverty appears to be more widespread in Italy and Spain and less widespread in France and Germany. Adopting national poverty lines, poverty is more widespread in Italy and Germany than in Spain and France.
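The two definitions can be illustrated with a toy computation; the income figures below are invented, and are chosen only to show how the choice of poverty line can flip a country ranking:

```python
from statistics import median

def poverty_rate(incomes, line):
    """Head-count ratio: share of individuals below the poverty line."""
    return sum(1 for y in incomes if y < line) / len(incomes)

# Hypothetical equivalent incomes (not HFCS figures):
# country A is richer but more unequal, country B poorer but more equal.
country_a = [10, 20, 30, 40, 50, 60]
country_b = [3, 4, 4, 5, 5, 6]

# Definition 1: one common line, half the median of the pooled area.
common_line = 0.5 * median(country_a + country_b)
# Definition 2: country-specific lines, half of each national median.
line_a = 0.5 * median(country_a)
line_b = 0.5 * median(country_b)

# Under the common line B looks poorer; under national lines A does.
print(poverty_rate(country_a, common_line), poverty_rate(country_b, common_line))
print(poverty_rate(country_a, line_a), poverty_rate(country_b, line_b))
```

With these numbers the ranking of the two countries reverses across the two definitions, which is exactly the kind of divergence Figure 3 documents for the four Eurozone countries.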

These results are only a small part of what can be obtained from the rich database that is available to researchers. It would be worthwhile extending the study of households' economic behaviour in the Eurozone by means of analyses that consider the various dimensions (sociodemographic, economic and institutional) of cross-country heterogeneity, while avoiding hasty conclusions.

Figure 2. Income statistics (€1,000)

Figure 3. Distribution of poverty (per cent)

Editor's note: The views expressed in this article are those of the authors and do not involve the responsibility of the Bank of Italy.

References
De Grauwe, Paul and Yuemei Ji (2013), "Are Germans really poorer than Spaniards, Italians and Greeks?", VoxEU.org, 16 April.

European Central Bank (2013), "The Eurosystem Household Finance and Consumption Survey: Results from the First Wave", Statistics Paper Series 2, April.

HFCN (2013), "The Eurosystem Household Finance and Consumption Survey: Results from the first wave", ECB Statistics Paper Series, No. 2.

1 Further details on the survey are contained in the document on the HFCS principal results (HFCN 2013) and in the associated methodological report. The documents, together with a set of additional descriptive statistics and the instructions for access to the data, are available at http://www.ecb.int/home/html/researcher_hfcn.en.html.

2 In this exercise we used the modified OECD equivalence scale, which assigns a coefficient of 1 to the head of household, 0.5 to household members aged 14 and over, and 0.3 to those under 14.

3 For the price adjustment we use the Eurozone harmonised index of consumer prices (HICP).

Does inequality lead to a financial crisis?


Michael Bordo, Christopher M. Meissner, 24 March 2012 Did inequality in the US lead to the global financial crisis? This column presents evidence from 14 countries between 1920 and 2008 and argues that while inequality can be blamed for many things, the global crisis is not one of them.
In his 2010 book, Fault Lines, Raghuram Rajan argued that rising inequality in the past three decades led to political pressure for redistribution that eventually came in the form of subsidised housing finance. Political pressure was exerted so that low-income households who otherwise would not have qualified received improved access to mortgage finance. The resulting lending boom created a massive run-up in housing prices and enabled consumption to stay above stagnating incomes. The boom reversed in 2007, leading to the banking crisis of 2008. Along these lines, Kumhof and Rancière (2011) explore the links between inequality, leverage and crises within the context of a DSGE model. They motivate their model with examples from the US in the 1920s and the more familiar events leading up to the subprime crisis.

There is reason to pause before accepting the generality of this new view. Income inequality plays no significant role in the large literature on financial instability and credit booms. Standard variables in the international finance literature are current account deficits, pegged exchange rates, and the business cycle. Borio and White (2003) also elaborate an influential view on credit booms: periods of expected low and stable inflation, strong economic growth, and liberalised finance can give rise to complacency among borrowers, lenders, and regulators. Endogenous market forces that might normally rein in these imbalances seem to be absent, and massive build-ups in credit lead to financial instability.

In recent research (Bordo and Meissner forthcoming) we present some empirical evidence on whether rising income concentration has any explanatory power in accounting for credit booms and financial crises after holding these other factors constant. We focus on a much larger sample than the two unique periods in US economic history that motivate Rajan et al.
Our data cover a panel of 14 mainly advanced countries from 1920 to 2008 covering a wide number of boom-bust episodes and financial crises. We proceed in three steps. First, we offer cross-country regressions relating changes in income inequality to credit growth. Next, we relate credit growth to systemic banking crises. Finally, we present further historical evidence.

Inequality and credit growth


After controlling for a number of the variables cited above, rising income concentration, measured by changes in the income share of the top 1% of tax units, plays no significant role in explaining credit growth. Instead, the two key determinants of credit booms are the upswing of the business cycle or economic expansion and low interest rates. This is very much consistent with a broader literature on credit cycles. While inequality often ticks upwards in the expansionary phase of the business cycle, this factor does not appear to be a significant determinant of credit growth once we condition on other macroeconomic aggregates. Figure 1 presents an added-variable plot linking credit growth with changes in the income share of the top 1% after controlling for several other variables including growth in GDP and interest rates. Figure 2 presents a similar plot relating credit growth to changes in real GDP. The relationship is clearly much tighter in Figure 2 than in Figure 1.

Figure 1. Change in loans versus changes in top 1% income shares, 14 countries, 19722008

Notes: The y-axis graphs the residuals from a panel OLS regression of the annual change of the log of real loans on country fixed effects, year fixed effects, and lagged changes of the following: the log of real GDP, short-term interest rates, the log of the real money supply, the log of the ratio of investment to GDP, and the ratio of the current account to GDP. The x-axis plots residuals from a similar regression using changes in the top 1% income share as the dependent variable.
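The added-variable construction described in the notes can be sketched with synthetic data; the variable names, the single control, and the data-generating process below are illustrative only, not the authors' actual panel specification:

```python
import numpy as np

# Synthetic illustration of an added-variable (partial regression) plot:
# credit growth is driven by GDP growth, not by changes in the top-1% share.
rng = np.random.default_rng(0)
n = 200
gdp_growth = rng.normal(size=n)
top1_change = rng.normal(size=n)
credit_growth = 0.8 * gdp_growth + rng.normal(scale=0.5, size=n)

def residuals(y, x):
    """Residuals of y after an OLS regression on a constant and x."""
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial the control (GDP growth) out of both axes, then regress the
# residuals on each other; by the Frisch-Waugh-Lovell theorem this slope
# equals the multiple-regression coefficient on top1_change.
y_res = residuals(credit_growth, gdp_growth)
x_res = residuals(top1_change, gdp_growth)
slope = np.polyfit(x_res, y_res, 1)[0]
print(slope)  # close to zero: inequality adds no explanatory power here
```

Plotting `y_res` against `x_res` gives the kind of scatter shown in Figure 1; a flat cloud, as here, is what a "no significant role" finding looks like.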

Figure 2. Change in loans versus the change in GDP, 14 countries, 19722008

Notes: The y-axis plots residuals from a panel OLS regression of the annual change of the log of real loans (credit) on country fixed effects, year fixed effects, and lagged changes in the share of income of the top 1% of tax units, short-term interest rates, the log of the real money supply, the log of the ratio of investment to GDP, and the ratio of the current account to GDP. The x-axis plots residuals from a similar regression using contemporaneous changes in the log of real GDP and contemporaneous explanatory variables.

Credit and crises


On the other hand, we do agree with Rajan et al that financial instability and banking crises often follow above-average growth in credit. Our evidence, which reproduces that found in Schularick and Taylor (forthcoming), finds a fairly strong relationship between growth in real credit and the probability of a banking crisis. This is consistent with many different models of financial instability, but we take no stand on this in our paper. What can be said is that inequality is not significantly related to systemic banking crises in our large sample. Since income concentration is not a good predictor of credit growth, it is hard to see how it can be related to crises by the channels proposed in the work cited above.

History, credit, and crises


Historical evidence from several major credit booms finds scant support for the inequality/crisis hypothesis. In the 1920s in the US, consumer and mortgage debt did indeed rise as the top 1% share in total income climbed from 15% in 1922 to 18.42% in 1929 (Piketty and Saez 2003). In that period, consumer credit was closely related to the rise of new, big-ticket consumer durables such as automobiles, washing machines, and radios. The rise of consumer credit arguably came from supply-side innovations rather than from increased household demand to maintain consumption in the face of stagnant incomes as in Kumhof and Rancière (Olney 1989). The housing boom that ended in 1926, well before the Depression started, reflected a significant amount of postwar pent-up demand, higher-quality housing, and a favourable interest-rate environment (White 2009). It appears to have had little to do with the subsequent Depression and the series of banking crises that began in mid-1929.

Time-series evidence from other countries is not consistent with the inequality link either. In Japan, credit growth rose in advance of top income shares in the 1980s, while top income shares started rising in 1995 as credit growth languished. In Sweden in the 1980s, sharp rises in top incomes followed rather than led credit growth; and top income shares continued to grow in the aftermath of the banking and real estate bust of 1991, even as credit fell. In Australia, credit growth was unrelated to top income shares in the 1970s, and top income shares followed rather than led credit growth in the housing boom of the late 1980s.

Conclusions and lessons


If income inequality drove the credit boom that preceded the subprime crisis in the US, the event was an outlier by historical standards. Comparative evidence from the last century shows little relationship between rising inequality and credit booms. Even in the US, a more plausible interpretation of events in the first decade of the 21st century is that interest rates were at historic lows, and that this, coupled with financial innovation, allowed low-income workers to buy houses at prices that were unrealistic given forecasts of permanent incomes and the likely reversion in interest rates.

If there is a policy lesson in all of this, it might be related to the fact that market-determined rates of leverage can lead to a systemic financial crisis and ensuing negative spillovers. But an increase in the supply of credit that generates a financial crisis has very different policy implications from those that might follow from an increase in the demand for credit that allegedly arose to maintain consumption in an increasingly unequal society. In the former case, the financial regulations and reforms to limit leverage and systemic risk that have been discussed and enacted since 2008 are potentially more appropriate remedies for achieving financial stability. While inequality is arguably problematic for many other reasons, we remain sceptical of claims that it engenders financial crises.

References
Bordo, M D and C M Meissner (forthcoming), "Does Inequality Lead to a Financial Crisis?", Journal of International Money and Finance.

Borio, C and W W White (2003), "Whither monetary and financial stability? The implications of evolving policy regimes", in Monetary Policy and Uncertainty: Adapting to a Changing Economy, A Symposium Sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, Wyoming, 28-30 August, pp. 131-211.

Kumhof, M and R Rancière (2011), "Inequality, leverage, and crises", VoxEU.org, 4 February.

Olney, M (1989), "Credit as a Production-Smoothing Device: The Case of Automobiles, 1913-1938", Journal of Economic History, 49(2): 377-91.

Piketty, T and E Saez (2003), "Income Inequality in the United States, 1913-1998", Quarterly Journal of Economics, CXVIII(1): 1-39.

Rajan, R (2010), Fault Lines, Princeton University Press.

Schularick, M and A M Taylor (forthcoming), "Credit Booms Gone Bust: Monetary Policy, Leverage Cycles and Financial Crises, 1870-2008", American Economic Review.

White, Eugene N (2009), "Lessons from the Great American Real Estate Boom and Bust of the 1920s", NBER Working Paper No. 15573.

Who's Afraid of Say's Law?


Published February 20, 2014

There's been a lot of discussion about Say's Law in the blogosphere lately, some of it finding its way into the comments section of my recent post "What Does Keynesian Mean", in which I made passing reference to Keynes's misdirected tirade against Say's Law in the General Theory. Keynes wasn't the first economist to make a fuss over Say's Law. It was a big deal in the nineteenth century, when Say advanced what was then called the Law of the Markets, pointing out that the object of all production is, in the end, consumption, so that all productive activity ultimately constitutes a demand for other products. There were extended debates about whether Say's Law was really true, with Say, Ricardo, and James and John Stuart Mill all weighing in in favor of the Law, and Malthus and the French economist J. C. L. de Sismondi arguing against it. A bit later, Karl Marx also wrote at length about Say's Law, heaping his ample supply of scorn upon Say and his Law. Thomas Sowell's first book, I believe drawn from the doctoral dissertation he wrote under George Stigler, was about the classical debates over Say's Law.

The literature about Say's Law is too vast to summarize in a blog post. Here's my own selective take on it. Say was trying to refute a certain kind of explanation of economic crises, and of what we would now call cyclical or involuntary unemployment: an explanation attributing such unemployment to excess production, for which income earners don't have enough purchasing power in their pockets to buy. Say responded that the reason income earners had supplied the services necessary to produce the available output was to earn enough income to purchase the output. This is the basic insight behind the famous paraphrase of Say's Law (I don't know if it was Keynes's paraphrase or someone else's): supply creates its own demand.
If it were instead stated as "products or services are supplied only because the suppliers want to buy other products or services", I think it would be more in sync with Say's intent than the standard formulation. Another way to think about Say's Law is as a kind of conservation law.

There were two famous objections to Say's Law: first, current supply might be offered in order to save for future consumption, and, second, current supply might be offered in order to add to holdings of cash. In either case, there could be current supply that is not matched by current demand for output, so that total current demand would be insufficient to generate full employment. Both objections are associated with Keynes, but he wasn't the first to make either of them. The savings argument goes back to the nineteenth century, and the typical response was that if current demand was insufficient because the desire to save had increased, the public deciding to reduce current expenditures on consumption, the shortfall in consumption demand would lead to an increase in investment demand, driven by falling interest rates and rising asset prices. In the General Theory, Keynes proposed an argument about liquidity preference and a potential liquidity trap, suggesting a reason why the necessary adjustment in the rate of interest would not necessarily occur. Keynes's argument about a liquidity trap was and remains controversial, but the argument that the existence of money implies that Say's Law can be violated was widely accepted. Indeed, in his early works on business-cycle theory, F. A. Hayek made the point, seemingly without embarrassment or feeling any need to justify it at length, that the existence of money implied a disconnect between overall supply and overall demand, describing money as a kind of "loose joint" in the economic system.

This argument, apparently viewed as so trivial or commonplace by Hayek that he didn't bother proving it or citing authority for it, was eventually formalized by the famous market-socialist economist Oskar Lange (who, for a number of years, was a tenured professor at that famous bastion of left-wing economics, the University of Chicago), who introduced a distinction between Walras's Law and Say's Law ("Say's Law: A Restatement and Criticism"). Walras's Law says that the sum of all excess demands and excess supplies, evaluated at any given price vector, must identically equal zero. The existence of a budget constraint makes this true for each individual, and so, by the laws of arithmetic, it must be true for the entire economy. Essentially, this was a formalization of the logic of Say's Law. However, Lange showed that Walras's Law reduces to Say's Law only in an economy without money. In an economy with money, Walras's Law means that there could be an aggregate excess supply of all goods at some price vector, the excess supply of goods being matched by an equal excess demand for money. Aggregate demand would be deficient, and the result would be involuntary unemployment. Thus, according to Lange's analysis, Say's Law holds, as a matter of necessity, only in a barter economy. But in an economy with money, an excess supply of all real commodities is a logical possibility, which means that there could be a role for some type (the choice is yours) of stabilization policy to ensure that aggregate demand is sufficient to generate full employment.

One of my regular commenters, Tom Brown, asked me recently whether I agreed with Nick Rowe's statement that "the goal of good monetary policy is to try to make Say's Law true."
I said that I wasn't sure what the statement meant, thereby avoiding the need for a lengthy explanation of why I am not quite satisfied with that way of describing the goal of monetary policy.

There are at least two problems with Lange's formulation of Say's Law. The first was pointed out by Clower and Leijonhufvud in their wonderful paper "Say's Principle: What It Means and Doesn't Mean", in which they accepted Lange's definition of Say's Law while introducing the alternative concept of Say's Principle as the supply-side analogue of the Keynesian multiplier. The key point was that Lange's analysis was based on the absence of trading at disequilibrium prices. If there is no trading at disequilibrium prices, because the Walrasian auctioneer or clearinghouse only processes information in a trial-and-error exercise aimed at discovering the equilibrium price vector, no trades being executed until the equilibrium price vector has been discovered (a discovery which, even if an equilibrium price vector exists, may not be made under any price-adjustment rule adopted by the auctioneer, rational expectations being required to guarantee that an equilibrium price vector is actually arrived at, sans auctioneer), then, indeed, Say's Law need not obtain in notional disequilibrium states (corresponding to trial price vectors announced by the Walrasian auctioneer or clearinghouse).

The insight of Clower and Leijonhufvud was that in a real-time economy, in which trading is routinely executed at disequilibrium prices, transactors may be unable to execute the trades that they planned to execute at the prevailing prices. But when planned trades cannot be executed, trading and output contract, because the volume of trade is constrained by the lesser of the amount supplied and the amount demanded. This is where Say's Principle kicks in: if transactors do not succeed in supplying as much as they planned to supply at prevailing prices, then, depending on the condition of their balance sheets and the condition of credit markets, they may have to curtail their demands in subsequent periods; a failure to supply as much as had been planned last period will tend to reduce demand in this period. If the distance from equilibrium is large enough, the demand failure may even be amplified in subsequent periods, rather than damped. Thus, Clower and Leijonhufvud showed that the Keynesian multiplier was, at a deep level, really just another way of expressing the insight embodied in Say's Law (or Say's Principle, if you insist on distinguishing what Say meant from Lange's reformulation of it in terms of Walrasian equilibrium). I should add that, as I have mentioned in an earlier post, W. H. Hutt, in a remarkable little book, clarified and elaborated on the Clower-Leijonhufvud analysis, explaining how Say's Principle was really implicit in many earlier treatments of business-cycle phenomena. The only reservation I have about Hutt's book is that he used it to wage an unnecessary polemical battle against Keynes.

At about the same time that Clower and Leijonhufvud were expounding their enlarged view of the meaning and significance of Say's Law, Earl Thompson showed that under classical conditions, i.e., a competitive supply of privately produced bank money (notes and deposits) convertible into gold, Say's Law in Lange's narrow sense could also be derived in a straightforward fashion. The demonstration follows from the insight that when bank money is competitively issued, it is created by an exchange of assets and liabilities between the bank and the bank's customer. In contrast to the naive assumption of Lange (adopted as well by his student Don Patinkin in a number of important articles and a classic treatise) that there is just one market in the monetary sector, there are really two markets in the monetary sector: a market for money supplied by banks and a market for money-backing assets. Thus, any excess demand for money would be offset not, as in the Lange schema, by an excess supply of goods, but by an excess supply of money-backing services. In other words, the public can increase their holdings of cash by giving their IOUs to banks in exchange for the IOUs of the banks, the difference being that the IOUs of the banks are money and the IOUs of customers are not, though they do provide backing for the money created by banks. The market is equilibrated by adjustments in the quantity of bank money and the interest paid on bank money, with no spillover onto the real sector. With no spillover from the monetary sector onto the real sector, Say's Law holds by necessity, just as it would in a barter economy.

A full exposition can be found in Thompson's original article. I summarized and restated its analysis of Say's Law in my 1985 article on classical monetary theory and in my book Free Banking and Monetary Reform. Regrettably, I did not incorporate the analysis of Clower and Leijonhufvud and Hutt into my discussion of Say's Law in either the article or the book. But in a world of temporary equilibrium, in which future prices are not correctly foreseen by all transactors, there are no strict intertemporal budget constraints that force excess demands and excess supplies to add up to zero. In short, in such a world things can get really messy, which is where the Clower-Leijonhufvud-Hutt analysis can be really helpful in sorting things out.
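Lange's distinction between Walras's Law and Say's Law, discussed above, can be stated compactly; the notation (excess-demand functions z_i, price vector p, money as the n-th good) is mine, not Lange's:

```latex
% Walras's Law: at any price vector p, the market values of all n
% excess demands z_i(p), money included, sum identically to zero.
\sum_{i=1}^{n} p_i \, z_i(p) \equiv 0
% Say's Law (in Lange's sense): the same identity over the n-1
% non-money markets alone.
\sum_{i=1}^{n-1} p_i \, z_i(p) \equiv 0
% With money (market n), Walras's Law permits an aggregate excess
% supply of goods matched by an equal excess demand for money:
\sum_{i=1}^{n-1} p_i \, z_i(p) = - p_n \, z_n(p)
```

In a barter economy there is no market n, the last identity collapses to zero on both sides, and the two Laws coincide, which is exactly Lange's point.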

Journal of Economic Perspectives, Volume 3, Number 3, Summer 1989, pages 153-164

Thomas Robert Malthus and David Ricardo


Robert Dorfman

Malthus and Ricardo first met in 1811, in circumstances that might be considered unpromising. By then, Malthus was recognized as the leading economist in England, and Ricardo was an established man of property who had recently gained recognition as the most effective of the critics who blamed the Bank of England for the inflation then in progress. Malthus had reviewed the controversy over the causes of the inflation,1 objecting to some of Ricardo's arguments, though not to his basic position. Ricardo had published a rejoinder.2 Whereupon Malthus wrote a letter to Ricardo that began:

East India College Hertford June 16th 1811. Dear Sir: One of my principal reasons for taking the liberty of introducing myself to you, next to the pleasure of making your acquaintance, was, that as we are mainly on the same side of the question, we might supersede the necessity of a long controversy in print respecting the points in which we differ, by an amicable discussion in private. (Works,3 vol. VI, p. 21.)

1 "Publications on the depreciation of paper currency," Edinburgh Review, vol. XVII, Feb. 1811.
2 The High Price of Bullion, 4th edition, Appendix. Included in Works, vol. III.
3 "Works" will denote David Ricardo, Works and Correspondence, edited by P. Sraffa with the collaboration of M. H. Dobb.

Robert Dorfman is Professor Emeritus of Political Economy, Harvard University, Cambridge, MA.

They met for their "amicable discussion" about a week later, but did not resolve their

disagreements. In fact, they were still disagreeing when Ricardo died twelve years later. This article will describe the enduring relationship that Malthus' letter initiated. It was, very likely, the most remarkable and most fruitful collaboration in the history of economics.4

Malthus' Background
Their backgrounds are an essential part of the story. Malthus was the older of the two. He was born in 1766, that is, in the midst of that troubled but gloriously optimistic period, the Age of Enlightenment, on a "small but beautiful"5 estate about 20 miles south of London. He was the second son of one Daniel Malthus, a cultivated landed gentleman of good family and connections but no great distinctions, not even wealth. Daniel had some intellectual stature. He corresponded with Rousseau and was friendly, though not intimate, with Hume.

So Malthus was born into the English country gentry, a highly privileged status in life. But he was born with two disadvantages. First, he was a second son. By English law and custom he could not inherit even a share of his father's estate (which was not very great in any event), and therefore had to support himself by engaging in one of the few professions that were considered proper for a member of his privileged caste. Second, he was born with a cleft palate that somewhat disfigured his face and caused a marked stammer throughout his life. Between these two disadvantages, Malthus' choice of career was narrowly constricted. Service as an officer in the army or Royal Navy was highly respected, but not open to someone with his stammer. A career as a barrister was ruled out for the same reason. A life as a businessman was unthinkable for the son of an ancient country family. In fact, about the only possibility was the church (for which the stammer was apparently not considered so disabling a limitation). Accordingly, Malthus prepared to take orders.

As a boy, Malthus was an excellent student, the pride of his masters. He won scholarships, went on to Cambridge, and performed there with such distinction that immediately upon graduation he was elected a fellow of Trinity College and was appointed to an adequate living in a country parish.
Malthus lived the placid life of a Cambridge don and country cleric until he was about thirty years old. Then an abrupt change occurred in his circumstances. William Godwin, a minister turned author, published his Enquiry Concerning Political Justice.6
4 The only other collaboration that bears comparison is the one between Karl Marx and Friedrich Engels, but the Marx-Engels relationship was predominantly one of master and disciple, while the Malthus-Ricardo relationship was more complex, as we shall see.
5 The phrase is Bishop W. Otter's, in his "Memoir on the life of Malthus," published in the London School of Economics reprint of Malthus' Principles.
6 As an instance of how tight the "tight little isle" was, it is worth noting that Godwin was the father of Mary Wollstonecraft Shelley, the author of Frankenstein and wife of Percy Bysshe Shelley.

Thomas Robert Malthus. Portrait after Linnell, reproduced by courtesy of the Trustees of the British Museum.

David Ricardo. Portrait by T. Hodgetts, from the National Portrait Gallery, London.

This volume was an immediate sensation, and remains one of the fundamental statements of belief in human perfectibility and of philosophic anarchism. In it, Godwin taught that men and women could learn to live entirely rationally, and that when that had been accomplished, there would be no need for laws, property rights, or other constraints on perfect freedom, and all people would live in peace, plenty, and harmony. Malthus' father, Daniel, was greatly impressed by these doctrines, and expounded them to his son. Whereupon a contentious streak in Malthus' nature revealed itself: he could not abide such unbridled, unsubstantiated optimism. Father and son wrangled night after night. Finally, the son was driven to write down his objections to Godwin's utopian vision in the form of an extended memorandum to his father. The father was not persuaded, but was so impressed by the passionate eloquence of the manuscript that he urged Robert to publish it. He did, under the title Essay on the Principle of Population. That was 1798.

Though the first edition was anonymous, it made Malthus famous at the age of 32. It also made him odious to many people for deriding the hopes for human progress and arguing that charity to the poor was futile. A couple of quotations will remind you of the vivid eloquence that made this tract so effective when it was published, and still effective nearly two centuries later. First, a passage that describes the fate that befalls a nation when its population becomes "excessive":

The vices of mankind are active and able ministers of depopulation. They are the precursors of the army of destruction; and often finish the dreadful work


Journal of Economic Perspectives

themselves. But should they fail in this war of extermination, sickly seasons, epidemics, pestilence, and plague advance in terrific array, and sweep off their thousands and ten thousands. Should success be still incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow, levels the population with the food of the world.7 Thus Malthus made clear the evils of overpopulation. And, as to Godwin's faith in the ability of the rule of rationality to supplant the principle of population, he had this to say: No move towards the extinction of the passion between the sexes has taken place in the five or six thousand years the world has existed. Men in the decline of life have in all ages declaimed against a passion which they have ceased to feel, but with as little reason as success. Those who from coldness of constitutional temperament have never felt what love is, will surely be allowed to be very incompetent judges with regard to the power of this passion to contribute to the sum of pleasurable sensations in life.8

Ricardo's Background
David Ricardo was born six years after Malthus and to a very different station in life. His father was a stockbroker who had migrated from Amsterdam to London a few years before David was born. In London the father joined the community of Jewish merchants and stockbrokers, who were reasonably prosperous and formed a small island of Jewish culture and tradition in the great metropolis of London. They stood at the periphery of English life, because of both their religion and their profession, just as the landed gentry stood at the center. When David became old enough, he was sent back to Amsterdam to get a proper education in the much larger Jewish community there, and returned to London at the age of fourteen. There he went to work in his father's countinghouse to learn the trade of stockbroking. All might have been well except that four years later he fell in love with a Quaker, and informed his horrified parents that he planned to marry her. He was disowned, and expelled from the countinghouse. Nothing to do but to go into business for himself in the only trade he knew. He quickly proved himself to be the Boy Wonder of Threadneedle Street. Before he was thirty he had become rich enough to buy a country estate, to become bored with merely making money, and to turn his mind to other things. One of the principal things he turned his mind to was economics. Somehow, in 1799, he came across The Wealth of Nations, devoured it, and was so thrilled by the insights he found in it that he continued to read and think about economics. When,
7 Malthus, Essay on the Principle of Population, Modern Library edn., Ch. VII, p. 52. 8 Malthus, ibid., Modern Library edn., Ch. XI, p. 77.

Thomas Robert Malthus and David Ricardo


around 1810, a controversy broke out in Parliament and in the press about the cause of the wartime inflation then in progress, Ricardo, as an experienced financier with a background in economics, was ready for his first publication: a series of letters to the Morning Chronicle tracing the inflation to the Bank of England's excessive issue of banknotes. These letters brought Ricardo to the attention of James Mill, who was prominent in London literary circles. Mill introduced Ricardo to his circle of economists and other intellectuals. The letters plus Ricardo's pamphlet, The High Price of Bullion, a Proof of the Depreciation of Banknotes,9 led to the first meeting between Malthus and Ricardo, as related above. Though they were brought together by some disagreements, they became close friends almost immediately. From their first meeting until Ricardo's death in 1823 they saw each other frequently, often several times a week, exchanged some eighty letters each way, stayed frequently at each other's homes, and were never long out of each other's minds.

The "Corn Laws" Controversy


Their extraordinary method of collaboration emerged just a few years later. The occasion was the controversy over the Corn Laws. The English Corn Laws were a scheme of variable tariffs and export subsidies, dating back to Elizabethan times, intended to protect and promote English agriculture. During the Napoleonic Wars, a coincidence of wartime demand and moderate harvests generated farm prices that were satisfactorily high. But as the war waned, the normal war-end economic disorganization, aggravated by some bumper crops, broke the agricultural markets. Wheat prices fell by about 50 percent between 1812 and 1815.10 The agricultural interests demanded stiffened tariff protection, thereby precipitating lively debates in Parliament and the press. Malthus and Ricardo entered the public debate on opposite sides. This debate is important for the history of economics, since in the course of it Malthus formulated his theory of rent and Ricardo elaborated that theory and embedded it in an argument that forms the kernel of his Principles of Political Economy and Taxation.

The earliest recorded discussions between Malthus and Ricardo that relate to the Corn Laws occurred in the summer of 1813. (The date is not important except to help keep the sequence of developments straight.) A letter from Ricardo to Malthus in August of that year mentions oral discussions between them concerning a thesis that was to become a centerpiece of Ricardian theory.11 The thesis was that as a country's population grew and its capital accumulated, the rate of profit in farming would fall because farmers would have to resort to less and less productive land, and, moreover,
9 Reprinted in Works, vol. III. 10 B. R. Mitchell, 1962, p. 486. 11 Letter dated August 17, 1813, Works, vol. VI, pp. 94–95. A letter from Ricardo to Hutches Trower, dated March 8, 1814, is somewhat more explicit. See Works, vol. VI, p. 104.



the general rate of profits in the country also would fall, since the rate of profits in other sectors tended to be equal to that in farming. Malthus, apparently, disagreed with this conclusion. Later letters dated in 1813 and 1814 mention more discussions of farm profits and profits in general, but do not inform us of the reasoning of either participant. The first publication to emerge from these discussions was Malthus' pamphlet, Observations on the Corn Laws, published the following year, 1814. It was an even-handed review of the advantages and drawbacks of imposing a high tariff on imported grain. The intensive exchange occurred the following February, when Parliamentary action on corn imports was imminent and a heated public debate on the Corn Laws was in progress. Malthus contributed to this debate by publishing two pamphlets a week apart. The first was An Inquiry into the Nature and Progress of Rent, in which he presented the Malthusian-Ricardian theory of rent for the first time.12 Toward the end of the pamphlet, Malthus expressed his preference for retaining the high tariffs on corn in order to protect the prosperity of the farmers and the rural population in general. A week later he took a stronger stand. In Grounds of an Opinion on the Policy of Restricting the Importation of Foreign Corn, his original diffident endorsement of the Corn Laws was replaced by forthright advocacy. He there argued that the protection afforded by the Corn Laws was essential to the continued health of English agriculture, and that the vitality of English ways and institutions, as well as national security, was rooted in the prosperity of her farms and villages. Ricardo responded within two weeks with his Essay on the Influence of a Low Price of Corn on the Profits of Stock. This pamphlet announced his theory, which had been germinating for the previous two years, of the adverse effect of population growth and capital accumulation on the rate of profit. In developing his argument, Ricardo relied repeatedly on the theory of rent that Malthus had just published. This fact did not deter him from drawing policy conclusions diametrically opposed to those advocated by Malthus, or, indeed, from rebutting explicitly some of Malthus' contentions. He argued vehemently that England's future depended on the progress of her industries, which was being stifled by the Corn Laws. On the other hand, Ricardo concluded, "If, then, the prosperity of the commercial classes will most certainly lead to accumulation of capital, and the encouragement of productive industry; these can by no means be so surely obtained as by a fall in the price of corn." (Works, vol. IV, p. 37.) Thus the policy issue between the two friends was clearly and openly joined. Though they could not reach agreement about policy, their efforts to explain their views, first to each other and then to the public, led them to their comprehensive theory of the distribution of national income among the three great classes of claimants: the workers, the merchants, and the landed gentry. They both contributed
12 Neither Malthus nor Ricardo was aware that Edward West was publishing the same ideas at virtually the same time, and apparently none of the three knew that James Anderson had anticipated them all in 1777. See Anderson's Enquiry, footnote beginning on p. 45, or his Observations, p. 376.



to this effort and to each other's argumentation. Both relied on Malthusian population theory to explain the level of real wages. Both used Malthus' theory of rent. Both recognized that the rate of profit in agriculture was determined by the productivity of the marginal land cultivated, thereby injecting marginal considerations into economic thought, albeit in a limited application. And they agreed that the rate of profit had to be the same in all industries where competition prevailed. Thus all the ingredients of Ricardian distribution and growth theory were in place and agreed upon. Their debate over the Corn Laws was now over. Its effect on economic policy was modest. Parliament voted to retain, and even strengthen, the laws, and they remained in force for another 30 years. But its effect on economic thought was enormous and enduring.

Ricardo's and Malthus' Principles of Political Economy


At this point, developments took on a momentum of their own. James Mill reentered the story. He appreciated, and was enormously impressed by, the clear and comprehensive theory sketched in the Influence of a Low Price of Corn, and urged Ricardo to expand his pamphlet of thirty-odd pages into a fully developed treatise on the principles of economics. Ricardo demurred; he did not feel competent to compose a full-fledged treatise. Somewhat later, he said of himself, "I am but a poor master of language." 13 Mill persisted, and Ricardo was persuaded and published his Principles of Political Economy and Taxation two years later, in 1817. It was an expansion, and in some respects a revision, of the Influence of a Low Price of Corn, but its basic argument was the same. It was an immediate success. In fact, it was the most authoritative and influential text on economics published in the 75-year span between Smith's Wealth of Nations and John Stuart Mill's Principles of Political Economy. Nor did its influence end even then since Mill's Principles was based on Ricardo's doctrines. In short, the friendly but intense debate between Malthus and Ricardo during the Corn Laws controversy set the course that English economics followed for the rest of the nineteenth century. The letters that they exchanged during the public controversy and during the preceding year indicate what was going on behind the pamphlets. They labored together to understand the economic consequences of the Corn Laws. Their discussions led them to a deeper understanding of economics than anyone had attained before. But they could not agree on the substantive matter of policy. So, having failed to persuade each other by argument or by letter, they laid their individual conclusions before the public. Indeed, their correspondence shows that each encouraged the other to make his views public. In so doing neither moderated his criticisms of his friend's arguments. 
Neither ever wavered from the conviction that the other was striving as earnestly and honestly as himself to attain a true and objective understanding of the principles at work.
13 Works, vol. VIII, p. 20, letter dated Oct. 9, 1820.



The termination of their controversy over the Corn Laws did not, by any means, terminate their joint efforts to understand how the economy works. It is significant that Ricardo devoted the last chapter of his Principles to criticizing some aspects of Malthus' pamphlet, On the Nature and Progress of Rent. Shortly after seeing the Principles Malthus wrote to Ricardo, "I am meditating a volume as I believe I have told you, and I want to answer you, without giving my work a controversial air."14 The answer was Malthus' Principles of Political Economy, published in 1820. This is so much a point-by-point response to Ricardo's Principles that it can hardly be read with comprehension unless the earlier work is clearly in mind.

The "Gluts" Controversy


The exchange was not yet over but, with the publication of Malthus' Principles, it shifted to a new topic, or rather a long smoldering topic that flared up: "gluts." With the return of peace after Waterloo, the English economy had sagged into a postwar depression or, as they called it, "glut." What should be done about it? Ricardo held that a condition of general overproduction was impossible except transiently. An oversupply of one commodity would have to be counterbalanced automatically by a shortage of some other. Ricardo explained this, in effect Say's Law, to Malthus, painstakingly in letter after letter, but Malthus could not see it. Malthus, for his part, explained to Ricardo repeatedly that total demand might be smaller than the total output that the working population and other resources could produce if fully employed. The working population could not afford to buy much more than bare subsistence. If the well-off classes were too abstemious, the prices of luxuries could fall to the point where there was no profit in producing them, and glut would ensue. In the extreme, Malthus pointed out, if everyone lived on a subsistence scale there would have to be a vast oversupply of commodities since each worker could produce much more than bare subsistence for himself and his family. To no avail. Ricardo couldn't see it. Keynes revived this debate a hundred years after the principals had died, and claimed Malthus as his predecessor in appreciating the possibility of underemployment equilibrium. At any rate, when Malthus wrote his Principles, he devoted the final chapters to the problem of gluts and to the need for a class of "unproductive consumers" who would provide the demand that would keep the rest of the economy employed profitably. He pointed out that the English landed gentry were exceptionally well equipped to fulfill this function. 
To this notion, Ricardo could only respond, "I can see no soundness in the reasons you give for the usefulness of demand on the part of unproductive consumers. How their consuming, without reproducing, can be beneficial to a country, in any possible state of it, I confess I cannot discover."15 One can almost see him shaking his head as he wrote those words.
14 Works, vol. VII, p. 215, letter dated Jan. 3, 1817. 15 Works, vol. VIII, p. 301, letter dated Nov. 24, 1820.



Of course, Malthus did not have the last word. When Ricardo received Malthus' book, he went through it meticulously, recording his disagreements with Malthus' contentions paragraph by paragraph. The result was a book-length manuscript which Ricardo decided not to publish.16 He did circulate it among his friends, including Malthus, however. As a result of being withheld from publication, the manuscript was very nearly lost. No one knew where it was when Ricardo died, and its whereabouts remained a mystery for 89 years until one of his great-grandsons came across it when cleaning out the lumber room of a country house that had been in the Ricardo family. Thanks to that stroke of luck, we know Ricardo's responses to the book that Malthus wrote to answer his own. We also know Malthus' rejoinders to those responses; the second edition of Malthus' Principles includes many changes and allusions in response to Ricardo's criticisms. But that is where the exchange ends. Ricardo died before Malthus revised his Principles.

The "Value" Controversy


All the while that Malthus and Ricardo were arguing about the Corn Laws and the nature of gluts, they were conducting a third interminable dispute. This one concerned the definition, measurement, and cause of "value." From our perspective, the concern over value, which extended from Adam Smith to Stanley Jevons at least, was a great waste of words and time. But Malthus, Ricardo, and their contemporaries took it very seriously, and with some reason. They had enough experience with inflations, crop failures and bumper crops, and other economic disturbances to recognize that money prices fluctuated too erratically to indicate long-run relationships or to reveal underlying trends. They believed that each commodity had a property that, following Adam Smith, they called its "natural value," which explained the ratio of its money price to the prices of other commodities. About that they agreed, but when they attempted to define this natural value, explain its level and changes, and devise ways to measure it in practice (since money prices were not reliable indicators), they became engaged in endless debate. Their final debate concerned the practical measurement of commodities' values. Ricardo held that there was no accurate measure of natural value, but that a commodity's price in terms of gold was the best practical approximation, because the costs of labor and capital contributed to the total cost of gold production in proportions that were about the average for all commodities. (Notice that this reasoning conflicts with the common impression that Ricardo explained commodities' values by a "labor theory of value"; in fact, he held a cost-of-production theory much like Adam Smith's.) Malthus advocated using the cost of labor, that is, wages, as the standard for measuring the values of other commodities, on the ground that "a
16 The manuscript was entitled "Notes on Mr. Malthus' work 'Principles of Political Economy, considered with a view to their practical application.'" It was published after long delay in 1928 under the editorship of J. H. Hollander and T. E. Gregory. It is included in Works as vol. II.



given quantity of labor must always be of the same natural and absolute value,"17 a presumption that Ricardo denied. As I said, Ricardo and Malthus tirelessly and fruitlessly expounded to each other their contradictory convictions about the meaning and measurement of value throughout the time they knew each other. They were still at it on August 31, 1823, when Ricardo was beginning to suffer severe headaches from an abscess on his brain. On that day, Ricardo wrote Malthus a long letter, which began, "I have only a few words more to say on the subject of value, and I have done." After about two pages of careful reasoning, he concluded, "And now, my dear Malthus, I have done. Like other disputants, after much discussion we each retain our own opinions. These discussions, however, never influence our friendship; I could not like you more than I do if you agreed in opinion with me. Pray give Mrs. Ricardo's and my kind regards to Mrs. Malthus. Yours truly . . ."18 Two weeks later, Ricardo was dead. At his funeral, Malthus is reported to have said, "I never loved anybody out of my own family so much. Our interchange of opinions was so unreserved, and the object after which we were both enquiring was so entirely the truth and nothing else, that I cannot but think we sooner or later must have agreed."19

Envoi
So ends the story of Thomas Robert Malthus and David Ricardo, the two great friends in the history of economics. I cannot help dissenting from Malthus' affectionate and hopeful remark at the funeral. I believe that if there is a corner in heaven where good economists go, they are there to this very day getting no closer to agreement about the meaning and proper measurement of value. There were good reasons why they could never agree. We have already seen that they were born and bred in two subcultures that were as disparate as could be found in England. They came to economics, therefore, with differing preconceptions, particularly with respect to the roles of the gentry and the entrepreneurial class in the British economy and society. Inevitably, these commitments colored their thinking; witness their positions in the Corn Laws and gluts controversies. Besides, their minds operated in entirely different modes. Ricardo's style was quick, brilliant, concise, syllogistic. Malthus' mode was slower, and seemed motivated by deep common-sensical convictions that he had difficulty articulating precisely enough to serve as a basis for rigorous argument. Ricardo was the archetypical theorist; Malthus the typical practical economist. Ricardo loved the clean, simple case where conclusion followed inexorably from hypothesis; Malthus could not avert his gaze from the rich complication of real economic life. Ricardo recognized this source
17 A. Smith, Wealth of Nations, Modern Library edn., p. 30. 18 Works, vol. IX, pp. 380–382. 19 Reported in Letters of David Ricardo to Thomas Robert Malthus, 1810–1823, edited by James Bonar, p. 240.



of misunderstanding, and wrote to Malthus: "Our differences may in some respects, I think, be ascribed to your considering my book as more practical than I intended it to be. My object was to elucidate principles, and to do this I imagined strong cases that I might shew the operation of those."20 The miracle is not that they disagreed, but that they could stand each other. It appears, though, that their long and intimate collaboration, and their friendship as well, thrived on their continual disputations. It is as though each served as the anvil for the other's hammer, and their ideas were hammered out in their efforts to persuade each other. They were two men obsessed by a common enthusiasm, tirelessly pursuing a common goal: to understand the economy. But they did not share a common vision of the good society and thus were condemned to wrestle interminably, though remarkably fruitfully, over the roles of the social classes. Their struggles to convey to each other their views of the forces that drove their economy are an inspiring case study in both the difficulty and the possibility of human communication. These two friends, sustained by enormous affection and respect for one another, never could nullify the differences in preconception and mental style that separated them, but still could help each other attain a deeper understanding of their economy than anyone had achieved before. To do this required invincible faith in each other's candor and open-mindedness, great patience, inexhaustible good will, and unflagging civility.21 These qualities, which made possible their twelve years of fruitful collaboration, remain essential to scientific discourse, particularly in economics. Malthus and Ricardo show that with sufficient good will we, too, can communicate with and perhaps persuade each other.
20 Works, vol. VIII, p. 184, letter dated May 4, 1820. 21 To be sure, like every long relationship, theirs was not exempt from occasional strains. There are a few letters from Ricardo (to correspondents other than Malthus) expressing impatience with Malthus. Examples are a letter to Hutches Trower dated March 2, 1821 (Works, vol. VIII, p. 349) and one to J. R. McCulloch dated April 25, 1821 (Works, vol. VIII, p. 373).



References
Anderson, James, An Enquiry Into the Nature of the Corn Laws. Edinburgh, 1777.
Anderson, James, Observations on the Means of Exciting a Spirit of National Industry. Edinburgh, 1777.
Bonar, James, ed., Letters of David Ricardo to T. R. Malthus, 1810–1823. Oxford: The Clarendon Press, 1887.
Keynes, J. M., "Robert Malthus: The First of the Cambridge Economists." In Essays in Biography. New York: Horizon Press, 1951.
Malthus, T. R., An Essay on the Principle of Population, as it Affects the Future Improvement of Society. London: 1798. Reprinted many times.
Malthus, T. R., "Publications on the Depreciation of Paper Currency," Edinburgh Review, XVII, February 1811.
Malthus, T. R., An Inquiry into the Nature and Progress of Rent, and the Principles by which it is Regulated. London: 1815. (Reprinted by the Johns Hopkins Press, Baltimore, 1903, J. H. Hollander, ed.)
Malthus, T. R., The Grounds of an Opinion on the Policy of Restricting the Importation of Foreign Corn. London: John Murray, 1815.
Malthus, T. R., Principles of Political Economy Considered with a View to their Practical Application. London: 1820. (2nd edn., 1836, reprinted by the London School of Economics, London: 1936.)
Mitchell, B. R., Abstract of British Historical Statistics. Cambridge: Cambridge University Press, 1962.
Ricardo, David, Works and Correspondence, vols. I–XI, edited by P. Sraffa with the collaboration of M. H. Dobb. Cambridge: Cambridge University Press, 1951–1973. Referred to above as Works.
Smith, Adam, An Inquiry into the Nature and Causes of the Wealth of Nations. London: 1776. Republished many times.

GDP and its Enemies: the Questionable Search for a Happiness Index

Johan Norberg

Policy Brief, September 2010

"To be sure of hitting the target, shoot first, and call whatever you hit the target." (Ashleigh Brilliant)

Executive Summary
The financial crisis and global warming have led to a crisis of confidence in our traditional ways of measuring wealth because they do not take speculative risk and environmental costs into consideration. A number of alternative indexes have been proposed that would measure people's well-being and the environmental sustainability of the planet. Even though the gross domestic product (GDP) measure has its problems, a look at the alternatives reveals that they are constructed with a specific political agenda in mind and are easily manipulated by governments. In fact, a strong argument for sticking with GDP is that it is narrow in scope and value free. It tells us what we can do, but not what we should do, and does not even try to define well-being. It fits a liberal, pluralistic society where people have different interests, preferences and attitudes to well-being. Our present environmental and financial problems can and should be solved within the intellectual framework of economic growth.

Telling us so
A recent commission on measures of well-being suggested that had there been more awareness of the limitations of standard metrics, like GDP, there would have been less euphoria over economic performance in the years prior to the crisis. It also points out that the global warming crisis is made worse due to the fact that no account is made of the cost of these emissions in standard national income accounts (Stiglitz, Sen and Fitoussi 2009, 9). A green think tank that proposes an alternative to the GDP measure claims not to be surprised by these twin crises: "For those versed in ecological economics, a discipline which recognises the dependence of our economic systems on the Earth's resources, it is tempting to adopt a smug 'I told you so' attitude" (NEF 2009, 8). Global warming and the financial crisis have undermined confidence in our traditional way of looking at wealth, specifically the GDP: the value of the goods and services that we produce every year. There is something counter-intuitive about a measure according to which CO2-belching factories and hazardous financial innovation make us all richer, and that does not count depletion of our ecosystems as a cost. Such a measure is not just wrong, the argument goes; it is also destructive, since it makes us focus too much on simply maximising production. As a result, in many government circles, think tanks and Brussels conferences, discussions are currently going on about doing away with GDP and replacing it with some sort of measure that counts social and environmental sustainability as a good and measures the well-being of the planet and humanity rather than what comes out of our factories. These discussions have renewed political interest in well-being research. This is not a new field, but it has been revolutionised in the last few decades by neurological research, positive psychology and the enormous number of polls that have been conducted to find out how happy we are with our lives and how this relates to socioeconomic factors. Writers such as the British labour economist Richard Layard (2005, 3) claim the data shows that money only buys happiness when we climb out of the worst forms of poverty. Beyond this point, we adapt to our new standard, become more stressed and work harder to keep up with the Joneses.
Layard wants higher taxes, longer holidays and lower growth to make us focus on more important things in life; and he proposes more job security, a national program of values education and government control of television and the advertising industry to tame destructive temptations and bring an end to the publicity of improper role models. There have already been some attempts to construct happiness indexes to guide policy. In the Himalayan Kingdom of Bhutan, Gross National Happiness is an official policy which measures happiness in 9 dimensions with 72 variables, with the purpose of encouraging a form of modernisation that does not undermine traditional Buddhist culture. In 2006 the New Economics Foundation developed a Happy Planet Index, which measures human well-being and environmental sustainability. The British Conservatives' Quality of Life Group have proposed, as a model for the index, that the Tory government should develop a measure of well-being that takes such environmental accounting into consideration (Gummer and Goldsmith 2007, 223, 57).

In 2008 the French President Nicolas Sarkozy appointed a commission, led by Nobel Laureate Joseph Stiglitz, to identify the limits of GDP and find better measures of economic performance and social progress. The commission did not come up with one measure that could do the trick on its own, but instead proposed several different measures and had a long list of ideas on how to improve on GDP; for example, by also measuring income distribution and valuing housework and leisure. If there is a common denominator, it is that they all emphasise that we care too much about growth and need another intellectual framework that can guide policies and populations to more worthwhile values and sustainability.

A Tale of Two Revolutions


The idea that happiness is the goal of public policy is obviously not new. The idea has been part of public discourse since at least the time of the ancient Greeks, but during the Enlightenment it became the consensus opinion. As society became more secular, the purpose of life became more an earthly matter. Locke maintained that happiness on earth is a foretaste of heavenly delight and so leads us to God; Helvétius declared the eighteenth century the century of happiness; and thinkers like Bentham and Mill began to develop the utilitarian idea that we should always choose the path that leads to the greatest happiness for the greatest number. But there were two radically different approaches to implementing happiness policies. Both the French and the American revolutions were led by individuals who considered happiness the goal. In the American Declaration of Independence, Thomas Jefferson famously declared that man has an inalienable right to pursue happiness. The French Declaration of the Rights of Man and of the Citizen begins with the statement that "the goal of society is general happiness," and in 1794 the new calendar concluded every year with a Festival of Happiness. But despite the similarities, the attitudes to the role of government in promoting happiness differed. For Jefferson, inspired by Locke, the right to pursue happiness was just that: the idea that the government is not there to provide happiness, which is something individuals themselves can do. The government's role is to remove obstacles and create a certain level of security in which this is possible. The government only gives people the right to pursue happiness; as Benjamin Franklin supposedly said, "You have to catch it yourself." The Jacobins, who took control of the French revolution from the liberal constitutionalists in May 1793, had a more collectivist approach, inspired by Rousseau.
The historian Darrin McMahon describes how one of the most eager revolutionaries, the young lawyer Joseph-Marie Lequinio, visited the port city of Rochefort in the autumn of 1793 and gave a speech about why happiness on earth is the goal of humanity. But it was not the happiness of the individual that concerned him, since individuals only adapt to changes in their own lives, just as Rousseau had said. The goal for Lequinio was a

GDP and its Enemies


September 2010

happy society, where individuals and their interests are sacrificed for the greater good. Therefore, the government must force citizens to seek happiness in the right way. Lequinio wrote home to Paris that he had found more men in Rochefort to operate the guillotine than he needed. And then he continued his journey, proclaiming happiness and simultaneously bragging about how he had personally executed enemies of the state. No sacrifice was too great for happiness (McMahon 2006, 253–61). "Happiness is a new idea in Europe," the Jacobin leader Saint-Just explained. So was terror. So from these two revolutions, we have inherited two sets of attitudes to happiness: one individualist and one collectivist. One sees the individual as the agent, while the other puts its trust in government. It is the latter approach that made it possible for some of the worst authoritarian regimes and dictators to claim that they were really fighting for the happiness of humanity, even though they treated every specimen of it like dirt. It is in this tradition that Joseph Stalin liked to present himself as the "constructor of happiness" (McMahon 2006, 403). As commentators like H. L. Mencken and C. S. Lewis have warned us, oppressors who think they work for the good of the oppressed can be worse than oppressors who are just cynics and robbers, because the true believers torment us without end, with the approval of their own conscience. In dystopian literature, like Orwell's Nineteen Eighty-Four and Huxley's Brave New World, the brutality of governments is matched only by their rhetoric about human happiness.

Calling it the Target


GDP is not easy to measure, and every GDP figure is full of problems and guesses. But at least we know what we are looking for. This is not the case when we are trying to find measures of a good society and well-being. It depends on what one considers a good society, how one defines well-being and which parameters are used. And even if we had a common idea, we are by definition trying to capture something that is subjective. We are not counting cars and gigabytes here; we are trying to look into the emotional state of the population. Even though indexes of well-being are not necessarily an excuse for oppression, they are often tools in an ideological toolbox. Although researchers are trying to come up with objective measures of what is going on, the indexes created with the intention of replacing GDP are often designed to give a green light to specific attempts to steer us towards a lifestyle that the authors prefer. A look at the three proposed replacements for GDP that we mentioned above illustrates this risk.

Gross National Happiness


It is said that the King of Bhutan came up with the idea of Gross National Happiness (GNH) in 1972, but the index is a very recent invention. The think tank that constructs


the index, the Centre for Bhutan Studies, was established only in 1999. GNH worked as an excellent rationalisation for policies that did not contribute to development. It was in an interview with the Financial Times in 1986 that the King made the idea world famous. When faced with questions about the lack of development in Bhutan, he responded, "Gross National Happiness is more important than Gross National Product." It was very convenient. Taking the high moral ground, one could now brush aside all questions about the lack of growth. Bhutan did not care about such materialist concepts, only about the cultural and spiritual well-being of the people. But in fact the consequences of under-development were brutal. By then, Bhutan's life expectancy was just 49 years and the child mortality rate almost 20%. No more than two out of ten women got primary education, and the adult literacy rate was just 25% (UNDP 1990). Since then, globalisation and modernisation have taken place in Bhutan, and social and economic progress has been made. But the government warned that traditional culture and community cohesion were being undermined by this process, so GNH came to be interpreted as a way to control this development and make sure that it was not too fast and spontaneous (Wangyal 2001). Bhutan outlawed television and the Internet until 1999. In 1989 the government of Bhutan launched the policy of "One Nation, One People" with the aim of creating a national identity. In the south, the Nepali-speaking minority suffered, despite representing a large section of the population. Their language was banned in schools and books were burned. Hindu Sanskrit schools were closed. And Bhutan was the only country in the world to make the national dress mandatory in public, which it still is. Demonstrations against these policies resulted in arbitrary arrests and torture.
Tek Nath Rizal, a leading human rights activist against these policies, was imprisoned for ten years for the peaceful expression of his political beliefs, according to Amnesty International. More than 60,000 people, around a tenth of Bhutan's population, had to flee to Nepal or India. There is widespread discrimination against the members of the Nepali-speaking minority who stayed behind (Amnesty International 1999; see also http://www.bhutaneserefugees.com).

The Happy Planet Index


In July 2006, newspapers around the world reported that the small island of Vanuatu was the happiest place in the world. That was one of the findings of the first Happy Planet Index (HPI), created by the New Economics Foundation. The top spots were reserved for Latin American and Asian countries, whereas rich, Western countries came a long way down the list. The first European country on the list was Austria, in 61st place, 55 places behind Cuba and 36 places behind Tajikistan. But it turned out that the HPI did not even have a study of happiness from Vanuatu; it simply extrapolated the score from other countries (Bialik 2006). One can always impute missing data when one is looking for something else. If I want to know whether there is a global correlation between climate and happiness, I can, without destroying


the analysis, obtain the happiness score for a few countries where data is missing by extrapolating from countries for which this information is available. But if the missing variable is the very thing I am looking for, it is not possible to cut corners in this way; it is a methodological error, plain and simple. But the HPI was never meant to be simply a measure of happiness; it is also a measure of ecological sustainability. It is an index of self-reported happiness and life expectancy ("happy life years") divided by the country's ecological footprint per capita, which is basically the same as its consumption of resources whose production emits CO2. A country that is very poor and cannot consume much is therefore a Happy Planet champion. This is a controversial way of measuring environmental sustainability because it prejudges the question of whether wealth is good or bad for the environment. According to this measure, the worst ecological rogues are Denmark, Norway, Canada and the United States. The most environmentally sustainable countries on the planet are Malawi, Haiti, Afghanistan and the Republic of Congo (Hails 2008). It is true that they do not produce or consume much. That is why their people are starving and dying from trivial causes. In other words, this is not a measure of how good the environment is for human beings. The biggest environmental problems in our world still come from traditional sources. Indoor cooking results in bad indoor air that kills 1.6 million people annually (Warwick and Doig 2004). Unsafe drinking water kills even more. When a country does not solve these environmental problems with electricity and modern technology, it is considered more sustainable on the basis of its ecological footprint.
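The arithmetic described above — "happy life years" divided by the ecological footprint per capita — can be sketched in a few lines. The country figures below are invented for illustration (they are not the NEF's actual data, and the published index applies further adjustments); the point is only to show why a low-consumption country can outrank a rich one:

```python
def happy_planet_index(life_satisfaction, life_expectancy, footprint):
    """'Happy life years' per unit of ecological footprint, as described above.

    life_satisfaction: average self-reported score on a 0-10 scale.
    life_expectancy: years.
    footprint: ecological footprint per capita.
    """
    happy_life_years = (life_satisfaction / 10) * life_expectancy
    return happy_life_years / footprint

# Invented illustrative figures: a poor country with lower satisfaction,
# shorter lives and a tiny footprint versus a rich one.
poor = happy_planet_index(life_satisfaction=5.0, life_expectancy=60, footprint=0.8)
rich = happy_planet_index(life_satisfaction=8.0, life_expectancy=80, footprint=8.0)
print(poor, rich)  # 37.5 vs 8.0: the low-consumption country scores far higher
```

The denominator dominates: dividing by a footprint near zero rewards poverty itself, which is exactly the criticism made in the text.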
There is an alternative environmental index which tries instead to measure a host of environmental factors to see what damage human beings cause to nature: the Environmental Sustainability Index, which is compiled by the Center for Environmental Law and Policy at Yale University in the US. The results are almost the opposite of the ecological footprint index, with rich countries in the lead and poor African countries at the bottom. Haiti, which has the third-smallest ecological footprint, does not look as impressive here: it is the 155th most environmentally sustainable of 163 countries. The latest edition of the index concludes that "wealth has a strong association with environmental health results" (EPI 2010).

The Stiglitz Commission


The Stiglitz Commission, appointed by the French President in 2008, clearly provides the most sophisticated recent challenge to the traditional GDP measure. This is partly because the Stiglitz Commission does not attempt to create a super-index that integrates everything, and partly because it highlights many of the classic shortcomings of GDP and presents a number of ideas on how to correct them. But even here we see how political motives lie behind the proposals. In many ways the report is suspiciously flattering to France. Again and again it shows how particular ways of adjusting GDP or including other measurements would make France look better in


comparison with the US. At times, it looks more like a set of instructions on how to beat the US by changing measurements. In Figure 1.7 of the report it becomes almost comical when we are shown four comparisons of how French income per capita climbs as a proportion of US income per capita if the statistics office agrees with the commission's proposals. At the outset, the figure is just 66%, but France pulls closer if the output of government-provided services is included, and closer still if unpaid housework is included; and if they also included leisure, France would end up with a relative income level of 87%. Sometimes, the suggestions are strange. The commission actually suggests that the value of leisure is the same as the wage foregone. That makes sense on an individual basis. No one would want to take a week off work unless they thought that the resulting leisure was worth more than the value of the wage they would have earned. But as a way of measuring wealth it is more problematic. Is it reasonable to say that this individual is just as well off after this choice? Would it not make more sense to say that he chose to become poorer because he valued leisure more? Am I not poorer as a result of not working as a doctor, even though I prefer my present job? (If not, I am just as well off after buying a new car, because I would not spend money on a new car unless I thought the car was worth more than the money I spent.) Furthermore, part-time employment is sometimes the result of a lack of job opportunities, and in that case it is a loss for both the economy and the individual, not just another form of wealth. In most instances, though, the discussion is interesting and the proposals make sense. The problem is that they are applied selectively. The authors point out that studies show that unemployment reduces subjective well-being and that the costs of unemployment exceed the income loss suffered by those who lose their jobs (Stiglitz, Sen and Fitoussi 2009, 44).
But interestingly (since this is a dimension where France does not come out particularly well), they do not suggest an adjustment of the GDP measure to account for this, and in this matter they do not show how France fares compared to the US (see also Bate 2009). Obviously it would be nice to value our stock of natural wealth or produce alternative measures when asset prices rise as a result of an unsustainable boom, as the commission proposes. But this is something that not even the best minds know how to do. How can we trust governments to measure stock in an objective and even-handed way when we know that even government finances are manipulated fairly regularly? And how can we trust them to identify a bubble when the financial crisis showed that neither central banks nor politicians understood that there was a bubble and that they simply continued to inflate it? The commission writes: "It is no longer a question of measuring the present, but of predicting the future." This is a good idea for policymakers and researchers, but a terrible one for statisticians. If this is what this new way of measuring wealth looks like when Nobel laureates write, one can only guess what would happen to it in the hands of governments, eager to exaggerate the wealth of their countries and their own achievements. According to the commission, one reason why we need to take a second look at our measurements is that confidence in official statistics has been undermined. But it is difficult to see how that confidence would increase if we gave our politicians more discretion to adjust measures and choose the ones that flatter their own countries.

In Defence of GDP
There are several problems with GDP, with what it measures and what it fails to measure. There are quality problems with the data, and a lot of guesswork is involved. Using it as the only yardstick to evaluate our societies would be bizarre. But finding problems is one thing. If we also want to replace GDP, we need to know that the alternative is better. There is something to be said for the Stiglitz Commission's point that it makes sense to have a pluralistic system which encompasses a range of different measures. The Organisation for Economic Co-operation and Development (OECD) and other institutions are working on that. Other indexes and measurements can give us information about other dimensions of well-being. We already use several, and the fact that GDP is our primary way of measuring the progress of the economy does not mean that they receive less attention. It would be difficult to make the case that measuring GDP makes us less interested in the unemployment rate or the proportion of the population in higher education. Replacing GDP with another preferred index of well-being and life satisfaction would therefore not add to our knowledge; rather, it would reduce the amount of information available to us. And what is in view here is not just any information, but information about how much we produce and how much we have to exchange for the things we want in life. This information provides a rough estimate of our possibilities as a society, what we can do and how many problems we can solve. This is why GDP correlates strongly with most of the things that most people want: economic security, improved education, better health, longer lives and less poverty. This is no coincidence. When we are able to do more things, we usually do what seems more important to us, both as individuals and as societies.
According to the World Bank, 730 million fewer people lived in extreme poverty (less than $1.25/day) in 2009 than in 1981, even though world population increased by two billion during this time. One study of low-income countries showed that one percentage point of additional growth correlated with a reduction in extreme poverty of 2.4 percentage points (Chen and Ravallion 2008; World Bank 2009; World Bank 2005, 85). In fact, GDP growth even correlates with happiness, contrary to the claims of Richard Layard and almost everybody else who prefers well-being measures (the Stiglitz Commission is an exception). The latest research shows not only that rich countries are happier, but also that countries get happier as they get richer. According to Gallup's World Poll, the largest global study of its kind, 1% of growth results in a 0.2–0.4% increase in subjective well-being. The World Values Survey, which has been conducted at intervals since 1981, shows an average increase in well-being of almost 7%, and concludes that "[t]he trend toward rising happiness is overwhelming" (Stevenson and Wolfers 2008; Inglehart et al. 2008; see also Norberg 2006; Norberg 2009). An even stronger argument for sticking with GDP is that it is pluralistic and well adapted to a society where people have different goals, and the role of government is


to help us achieve our diverse goals rather than picking and choosing them for us. GDP measures what we can do, but it does not tell us what to do. If we increase wealth, we can use that wealth for the purposes that we prefer: we can consume more but we can also reduce our work hours; we can travel more but we can also incorporate green technology into our lives. A growing economy gives us the means to create the kind of life we want; it does not tell us how to live it. It fits a liberal, pluralistic society where people have different interests, preferences and attitudes to well-being and the meaning of life. If we replaced GDP with some sort of well-being measure, we would have to come to an agreement on what well-being is; and there is a risk that governments would be tempted to take a one-size-fits-all approach and try to make us all wear the result. We have seen how the Bhutanese happiness index has been used as a rationalisation to force minorities to live in the preferred way. Admittedly, this is an extreme example, but it is interesting that it is so often used as a positive example by those who seek to do away with GDP. As the French classical liberal Benjamin Constant warned in 1819: "The holders of authority ... are so ready to spare us all sort of troubles, except those of obeying and paying! They will say to us: what, in the end, is the aim of your efforts, the motive of your labours, the object of all your hopes? Is it not happiness? Well, leave this happiness to us and we shall give it to you. No, Sirs, we must not leave it to them. No matter how touching such a tender commitment may be, let us ask the authorities to keep within their limits. Let them confine themselves to being just. We shall assume the responsibility of being happy for ourselves." (Constant 1819, 326) One of the best features of GDP is that it does not try to include all the different aspects of human welfare.
It is a measure of material wealth, and we should not associate it with everything that is good in life. When governments make trade-offs, they do so in a fairly transparent way. Measuring GDP does not mean that we always want to maximise it. It means that we always know what we are doing when we sacrifice it for something that is considered a greater good. If we sacrifice income for leisure, it is good if we understand that this is what we are doing, so that we can make well-informed choices. The longing for a summary index of everything that is desirable belongs to the dream of a world where no difficult choices have to be made, where there is no need to set priorities or make trade-offs. The politicians' sole task would be to maximise the (index of the) good. Of course those choices would still have to be made, but in that case they would be removed from public debates and left to the statistical offices and government authorities that produce the index, when they estimate different values for different public goods. It is the technocrat's dream, and therefore something that should make us suspicious. As Winston Churchill might have put it, Gross Domestic Product is the worst of all means of measuring wealth, except for all those other means that have been tried from time to time.


The argument that too great an emphasis on GDP growth made us neglectful of the factors that led to the financial crisis and climate change is not a strong one. GDP has not stood in the way of an increased awareness of environmental issues or of the unprecedented action taken against such problems since the 1970s. Indeed, as evidence that global warming has human causes grew stronger in the early twenty-first century, the discussion almost swallowed the entire policy agenda. When policymakers reduced interest rates and encouraged the housing market after the dot-com bubble, they did so not just to increase GDP, but to reduce unemployment, avoid bankruptcies and encourage home ownership. Moreover, I do not think anyone seriously believes bankers made leveraged bets on the housing market because they wanted to encourage economic growth. Is it not more likely that they had their eyes fixed on stock market prices, house prices and bonuses? Dealing with these challenges does not require another sort of intellectual framework or another way of measuring wealth. As the Great Recession shows, in the long run, GDP indicates whether rising asset prices are the result of a real improvement in savings and productivity or simply the result of a speculative boom. Considering how governments, central bankers and leading economic analysts regularly fail to make the distinction, perhaps it is better to stick with GDP, rather than giving someone the authority to second-guess everybody. And we need more economic growth in the future, to be able to bear the burden of the unprecedented levels of public debt to which the crisis has given rise. Using GDP also helps us put our present problems in perspective and gives us a better sense of how far we have come. The financial crisis made a terrible mess. It meant that 2009 was only the second-best year in human history when it comes to our total yearly creation of economic value.
Far from having destroyed human progress, the crisis only set it back by about a year. The environmental problems that cause the most damage are still the result of a lack of wealth and technology, and, on average, richer countries are much more environmentally sustainable than poor ones. We have no reason to think that global warming cannot be dealt with in the same way. The amount of energy required to produce a given amount of wealth in high-income countries has declined by 1% per year over the past 150 years, and that pace has accelerated. In fact, countries such as Sweden, the United Kingdom and France have been reducing their CO2 emissions per capita since 1980. The improvements in nuclear and solar energy, fuel cells and other technologies are potentially revolutionary. They are still too expensive to be put to use globally, but what is the solution when something is too expensive? It is technological progress that reduces the price and economic growth that increases our purchasing power. No matter what we do, nature will surprise us with problems and difficulties. If annual global economic growth remains at around 2% per capita, in 100 years' time the average person will be approximately eight times richer than today's average person. With the resources, the level of scientific knowledge and the technological solutions that may then be at our disposal, many of the problems that intimidate us


today will be much easier to handle. This is particularly important since there is a risk that the worst damage from global warming will occur in poor countries, in part as a direct result of their poverty. Judging from the Stiglitz Commission's work, even the best attempts to adjust the GDP measure would open the way to the politicisation of statistics in an unprecedented way. It will not be long until other countries find out that they can change their statistics to take account of indicators where they do better. Once we had competitive devaluations; now we could see competitive accounting revaluations, where governments define progress in the way that suits their interests. The only result would be less reliable statistics and meaningless international comparisons. The Stiglitz Commission warned that those attempting to guide the economy and our societies are like pilots trying to steer a course without a reliable compass (Stiglitz, Sen and Fitoussi 2009, 9). But even a compass that makes mistakes once in a while is better than having pilots steer a course with a compass that always points in a direction that reassures them.
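The "approximately eight times richer" projection earlier in this section is plain compound-growth arithmetic, easy to verify:

```python
# 2% annual per-capita growth, compounded over 100 years.
growth_rate = 0.02
years = 100
multiple = (1 + growth_rate) ** years
print(round(multiple, 1))  # about 7.2, i.e. roughly eightfold
```

Small changes in the assumed rate compound dramatically over a century: at 1% the multiple is about 2.7, at 3% about 19.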

References
Amnesty International. (1999). Bhutan: Amnesty International welcomes release of prisoners of conscience. Amnesty International, News Service, 21 December. Available at http://www.amnesty.org/en/library/asset/ASA14/004/1999/en/d2033b98-dfd8-11dd-82c9a1d1b98af6ef/asa140041999en.html
Bate, R. (2009). What is prosperity and how do we measure it? AEI Outlook, American Enterprise Institute, No 3, October.
Bialik, C. (2006). Putting a number on happiness. Wall Street Journal, 20 July.
Chen, S., and Ravallion, M. (2008). The developing world is poorer than we thought, but no less successful in the fight against poverty. Policy Research Working Paper 4703. World Bank, August. Available at SSRN: http://ssrn.com/abstract=1259575
Constant, B. (1819). The liberty of the ancients compared to that of the moderns: speech given at the Athénée Royal in Paris. In Constant: political writings (pp. 307–28). Cambridge: Cambridge University Press, 1988.
EPI (Environmental Performance Index). (2010). Yale Center for Environmental Law and Policy & Center for International Earth Science Information Network, 2010. Available at http://epi.yale.edu/Home
Gummer, J., and Goldsmith, Z. (2007). Blueprint for a green economy. The Quality of Life Policy Group, September. London: Conservative Party.
Hails, C. (Ed.). (2008). Living Planet report 2008. Gland, Switzerland: WWF International.
Hall, B. (2009). France to count happiness in GDP. Financial Times, 14 September.
Inglehart, R., et al. (2008). Development, freedom and rising happiness: a global perspective (1981–2007). Perspectives on Psychological Science 3(4)


Layard, R. (2005). Happiness: lessons from a new science. London: Allen Lane.
McMahon, D. M. (2006). Happiness: a history. New York: Grove Press.
NEF (The New Economics Foundation). (2009). Happy Planet Index 2.0. London: NEF. Available at http://www.neweconomics.org/sites/neweconomics.org/files/The_Happy_Planet_Index_2.0_1.pdf
Norberg, J. (2006). Happiness paternalism: blunders from a new science. Brussels: CNE.
Norberg, J. (2009). Den eviga matchen om lyckan. Natur & Kultur.
Stevenson, B., and Wolfers, J. (2008). Economic growth and subjective well-being: reassessing the Easterlin paradox. Working paper prepared for Brookings Papers on Economic Activity, 16 April.
Stiglitz, J., Sen, A., and Fitoussi, J.-P. (2009). Report by the Commission on the measurement of economic performance and social progress. Available at http://www.stiglitz-sen-fitoussi.fr/documents/rapport_anglais.pdf
UNDP (United Nations Development Programme). (1990). Human development report 1990: concept and measurement of human development. New York: Oxford University Press. Available at http://hdr.undp.org/en/reports/global/hdr1990/chapters/
Wangyal, T. (2001). Ensuring social sustainability: can Bhutan's education system ensure intergenerational transmission of values? Journal of Bhutan Studies 3(1): 106–31.
Warwick, H., and Doig, A. (2004). Smoke: the killer in the kitchen: indoor air pollution in developing countries. London: ITDG Publishing. Available at http://practicalaction.org/?id=smoke_report_home#Download
World Bank. (2005). World development report 2006: equity and development. Washington, DC: World Bank. Available at http://go.worldbank.org/XP2234QDV0
World Bank. (2009). Global monitoring report 2009: a development emergency. Washington, DC: World Bank. Available at http://go.worldbank.org/1J2GN1XTO0

Johan Norberg
Johan Norberg is a senior fellow at the Cato Institute and a writer who focuses on globalization, entrepreneurship, and individual liberty. Norberg is the author and editor of several books exploring liberal themes, including his newest book on the history and science of happiness, Den eviga matchen om lyckan, and his recent book, Financial Fiasco: How America's Infatuation with Homeownership and Easy Money Created the Economic Crisis. His book In Defense of Global Capitalism, originally published in Swedish in 2001, has since been published in over twenty different countries.


GDP and life satisfaction: New evidence


Eugenio Proto, Aldo Rustichini, 11 January 2014

The link between higher national income and higher national life satisfaction is critical to economic policymaking. This column presents new evidence that the connection is hump-shaped. There is a clear, positive relation in the poorer nations and regions, but it flattens out at around $30,000–$35,000, and then turns negative.
A commission on the measurement of economic performance and social progress was created on the French government's initiative. Since 2008, this distinguished group of social scientists has put subjective well-being into the limelight as a possible supplement to traditional measures of development such as GDP (Stiglitz et al. 2009). The British government has also shown considerable interest in developing a subjective well-being measure in recent years as an instrument for policy.

Income, GDP, and life satisfaction


The debate on the link between income, or GDP, and life satisfaction is still open.

On one hand, Easterlin (1974) reported no significant relationship between happiness and aggregate income in time-series analysis.

For example, Easterlin shows that US income per capita during the period 1974–2004 almost doubled, but the average level of happiness showed no appreciable upward trend. This puzzling finding, appropriately called the Easterlin Paradox, has been confirmed in similar studies by psychologists (Diener et al. 1995) and political scientists (Inglehart 1990), and has been confirmed for European countries (Easterlin 1995).1

On the other hand, life satisfaction appears to be strictly monotonically increasing with income when one studies this relation at a point in time across nations (Inglehart 1990, Deaton 2008, Stevenson and Wolfers 2008).

To reconcile the cross-sectional evidence with the Easterlin Paradox, some have suggested that the positive relation in happiness vanishes beyond some value of income (Layard 2005, Inglehart 1990, Inglehart et al. 2008, Di Tella et al. 2010). This last interpretation has been questioned by Deaton (2008) and Stevenson and Wolfers (2008), who claim that there is a positive relation between GDP and life satisfaction in developed countries. From the opposite perspective, it is being questioned by Easterlin et al. (2010), who provide some evidence that there is no long-run effect even for developing countries.

The complexity of the incomelife satisfaction relation


To start from the evidence:

It is known that life satisfaction is increasing in personal income at a decreasing rate (e.g. Blanchflower and Oswald 2004, Layard et al. 2008, Kahneman and Deaton 2010).

However, the relationship is complicated by the existence of other effects acting with an opposite sign:2

People's aspirations adapt to the new situation, an idea originally proposed by Brickman and Campbell (1971).3

The effect of relative income on individual life satisfaction (the so-called "Keeping up with the Joneses" hypothesis), an idea that dates back to Duesenberry (1949).4

Recently, Benjamin et al. (2012) show that maximising an individual's utility might not always coincide with maximising life satisfaction. This opens the theoretical possibility that, in equilibrium, lower levels of income might correspond to higher levels of life satisfaction.

Reassessing the relationship


Our recent work differs from the previous literature in that we undertake the analysis without imposing a particular functional form on the econometric model (Proto and Rustichini 2013). Specifically, we partition all individual observations into quantiles of per capita GDP by the country of residence (with the first quantile of the distribution containing the individuals living in the poorest countries), and then we estimate the relation between GDP and happiness by using the quantiles. The second important methodological feature of our analysis is the introduction of a country-specific effect, to control for time-invariant country-specific unobservable variables. This eliminates potential sources of country-specific measurement error and omitted-variable bias. This control is possible thanks to the panel structure of the World Values Survey dataset. The introduction of this control is of crucial importance for analysis based on survey data, because the questionnaires are generally different across countries, and there are pervasive effects of culture and language on the statements made in the surveys. Many measurement errors in indices of life satisfaction are possible. For example, a well-known error is due to differential item functioning, defined as the interpersonal and intercultural variation in interpreting and using the response categories for the same question. A systematic measurement error in the life satisfaction reports leads to either a positive or negative bias, depending on the correlation between the measurement error and other variables in the regression. For example, an over-report of life satisfaction in Western countries could generate a positive bias in cross-country estimates of the impact of income on life satisfaction. Omitted-variable bias could be equally problematic.
If cultural elements determine a time-invariant preference for public good supply in some countries, or if income distribution (usually very persistent) is correlated with both life satisfaction and GDP, this would result in a bias in the relation between GDP and life satisfaction. Controlling for country-specific effects eliminates all biases that could be generated by the time-invariant unobservable variables mentioned in the examples.
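The quantile construction described above can be sketched in a few lines. This is not the authors' code: the data are synthetic, and a simple mean of satisfaction by quantile stands in for the ordered-probit coefficients plotted in Figure 1.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic individual-level data: country of residence, the country's GDP
# per capita, and a 1-10 life-satisfaction report (purely illustrative).
n_countries, n_per = 80, 200
gdp = rng.uniform(1_000, 54_000, n_countries)
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_per),
    "gdp": np.repeat(gdp, n_per),
})
df["satisfaction"] = np.clip(
    5 + 2 * np.log(df["gdp"] / 10_000) + rng.normal(0, 2, len(df)), 1, 10
).round()

# Partition individuals into 50 quantiles of the per capita GDP of the
# country they live in, as in the paper.
df["quantile"] = pd.qcut(df["gdp"], 50, labels=False)

# Mean satisfaction by quantile (a stand-in for the ordered-probit
# coefficients; the real analysis also includes country fixed effects).
profile = df.groupby("quantile")["satisfaction"].mean()
print(profile.head())
```

With the synthetic data generated above, the profile rises steeply across the poorest quantiles and flattens among the richest, mimicking the pattern in Figure 1.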

Main findings
Figure 1 summarises the main findings of the analysis.

Figure 1. Relationship between GDP per capita and life satisfaction

Note: Each point represents the coefficient of an ordered probit regression of life satisfaction on all individuals in the World Values Survey (waves 1981–84, 1989–93, 1994–99, 1999–2004, 2005–08), grouped into 50 different quantiles according to the per capita GDP of the country they live in.

Most of the variation of life satisfaction due to GDP is explained by the effect in countries with per capita GDP below $10,000 (data are PPP-adjusted).

People in countries with a GDP per capita of below $6,700 were 12% less likely to report the highest level of life satisfaction than those in countries with a GDP per capita of around $20,000.

Countries with GDP per capita over $20,000 see a much less obvious link between GDP and happiness.

Between this level and the very highest GDP per capita level ($54,000), the probability of reporting the highest level of life satisfaction changes by no more than 2%, and seems to be hump-shaped, with a bliss point at around $33,000. This corresponds broadly to the well-known Easterlin Paradox: the link between life satisfaction and GDP is more or less flat in richer countries.

To further assess the importance of taking unobserved heterogeneity into account, we perform a second analysis of the relationship between aggregate income and life satisfaction on more homogeneous territorial units. We restrict the sample to all countries within the EU before the first enlargement, to eliminate potentially confounding factors at the country level. Figure 2 shows the main findings of this second analysis, which confirm the findings of the country-based analysis outlined above.

Figure 2. Relationship between regional GDP and life satisfaction in Europe

Note: Each point represents a European region (following the NUTS 2 nomenclature) of the EU before the first enlargement, weighted by population. Life satisfaction data are from the European Values Survey. Data for regional GDP are from Eurostat; they are PPP-adjusted at the country level.

Data show a clearly positive relation between aggregate income and life satisfaction in the poorer regions, but this relation flattens and appears to turn negative for richer regions, with a bliss point between $30,000 and $33,000.

An explanation for the bliss point


If, following the above-mentioned literature, the relation between GDP and life satisfaction is the result of the combined effects of aspirations to increase personal income, or of an increasing target in terms of income comparison, then the net effect on life satisfaction is not necessarily monotonic. In Proto and Rustichini (2012), we provide a micro-founded model where income is endogenous and increases with aspirations. If the probability of fulfilling aspirations is decreasing in aspirations, this can generate a negative effect on life satisfaction that can counterbalance the positive direct effect of income. We test this hypothesis using the European data, and find the usual positive effect of personal income and a negative effect of the gap between personal income and regional GDP. We argue that this second effect can be related to the negative effect induced by the distance from the target income. We use modern personality theory to test the hypothesis, predicting that this effect should be stronger for more neurotic individuals, who are naturally more averse to losses. We find support in the data for this explanation.

Conclusions
Our econometric analysis implies that long-term GDP growth is certainly desirable among poorer countries, but is it a desirable feature among developed countries as well? Recent evidence shows that the negative effect of high aspirations can also be rationally anticipated by individuals who, nevertheless, may still choose options that may not seem to maximise happiness, but which are compatible with high-income aspirations. This implies that individuals may still prefer to live in richer countries, even if this would result in a decreased level of life satisfaction. In other words, the fact that individuals aspire to a higher income may not be considered, from an individual perspective, a negative feature of an economy, even if this might result in a lower level of reported life satisfaction among the richest countries.

References
Benjamin, D J, O Heffetz, M S Kimball, and A Rees-Jones (2012), "What Do You Think Would Make You Happier? What Do You Think You Would Choose?", American Economic Review 102(5): 2083–2110.
Blanchflower, D G and A J Oswald (2004), "Well-Being over Time in Britain and the USA", Journal of Public Economics 88: 1359–1386.
Brickman, P D and D T Campbell (1971), "Hedonic relativism and planning the good society", in M H Appley (ed.), Adaptation-Level Theory: A Symposium, New York: Academic Press: 287–302.
Clark, A E, P Frijters, and M A Shields (2008), "Relative Income, Happiness, and Utility: An Explanation for the Easterlin Paradox and Other Puzzles", Journal of Economic Literature 46(1): 95–144.
Clark, A E and A J Oswald (1996), "Satisfaction and Comparison Income", Journal of Public Economics 61: 359–381.
Clark, L A and D Watson (2008), "Temperament: An Organizing Paradigm for Trait Psychology", in O P John, R W Robins, and L A Pervin (eds), Handbook of Personality: Theory and Research, New York: Guilford Press: 265–286.
Deaton, A (2008), "Income, Health and Well-Being around the World: Evidence from the Gallup World Poll", Journal of Economic Perspectives 22: 53–72.
Diener, E, M Diener, and C Diener (1995), "Factors Predicting the Subjective Well-Being of Nations", Journal of Personality and Social Psychology 69: 851–864.
Di Tella, R and R MacCulloch (2010), "Happiness Adaptation to Income Beyond Basic Needs", in E Diener, J Helliwell, and D Kahneman (eds), International Differences in Well-Being, Oxford: Oxford University Press.
Duesenberry, J S (1949), Income, Saving, and the Theory of Consumer Behavior, Cambridge, MA: Harvard University Press.
Easterlin, R A (1974), "Does Economic Growth Improve the Human Lot? Some Empirical Evidence", in P A David and M W Reder (eds), Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz, New York: Academic Press: 89–125.
Easterlin, R A (1995), "Will Raising the Incomes of All Increase the Happiness of All?", Journal of Economic Behavior and Organization 27: 35–47.
Easterlin, R A (2005a), "Feeding the Illusion of Growth and Happiness: A Reply to Hagerty and Veenhoven", Social Indicators Research 74: 429–443.
Easterlin, R A (2005b), "A Puzzle for Adaptive Theory", Journal of Economic Behavior and Organization 56(4): 513–521.
Easterlin, R A, L A McVey, M Switek, O Sawangfa, and J S Zweig (2010), "The Happiness–Income Paradox Revisited", Proceedings of the National Academy of Sciences 107(52): 22463–22468.
Ferrer-i-Carbonell, A (2005), "Income and well-being: an empirical analysis of the comparison income effect", Journal of Public Economics 89: 997–1019.
Inglehart, R (1990), Culture Shift in Advanced Industrial Society, Princeton: Princeton University Press.
Inglehart, R, R Foa, C Peterson, and C Welzel (2008), "Development, Freedom, and Rising Happiness: A Global Perspective (1981–2007)", Perspectives on Psychological Science 3: 264–285.
Kahneman, D and A Deaton (2010), "High Income Improves Evaluation of Life but not Emotional Well-Being", Proceedings of the National Academy of Sciences 107: 16489–16493.
Layard, R (2005), Happiness: Lessons from a New Science, London: Penguin.
Layard, R, G Mayraz, and S Nickell (2008), "The Marginal Utility of Income", Journal of Public Economics 92(8–9): 1846–1857.
Luttmer, E (2005), "Neighbors as Negatives: Relative Earnings and Well-Being", Quarterly Journal of Economics 120(3): 963–1002.
McBride, M (2010), "Money, Happiness, and Aspiration Formation: An Experimental Study", Journal of Economic Behavior and Organization 74(3): 262–276.
Oswald, A (1997), "Happiness and Economic Performance", Economic Journal 107: 1815–1831.
Proto, E and A Rustichini (2012), "Life Satisfaction, Household Income and Personality Traits", Warwick Economics Research Paper Series (TWERPS) 988.
Proto, E and A Rustichini (2013), "A Reassessment of the Relationship between GDP and Life Satisfaction", PLoS ONE 8(11): e79358.
Senik, C (2009), "Direct Evidence on Income Comparisons and their Welfare Effects", Journal of Economic Behavior and Organization 72: 408–424.
Stevenson, B and J Wolfers (2008), "Economic Growth and Subjective Well-Being: Reassessing the Easterlin Paradox", Brookings Papers on Economic Activity 1: 1–87.
Stiglitz, J E, A Sen, and J-P Fitoussi (2009), Report of the Commission on the Measurement of Economic Performance and Social Progress, CMEPSP. http://www.stiglitz-sen-fitoussi.fr
Stutzer, A (2004), "The Role of Income Aspirations in Individual Happiness", Journal of Economic Behavior and Organization 54(1): 89–109.

1. There is some disagreement on the conclusion when an analysis based on time series is used. Oswald (1997) shows evidence of a small positive temporal correlation between life satisfaction and GDP in industrialised countries, and Stevenson and Wolfers (2008) find significant happiness gains in Japan in the post-war period.
2. On the considerable literature that examines the complicating factors, see Clark et al. (2008) for an extensive survey.
3. Easterlin (2005), Stutzer (2004), and McBride (2010) provide some empirical evidence on how aspirations increase in income.
4. Clark and Oswald (1996), Blanchflower and Oswald (2004), Luttmer (2005), Ferrer-i-Carbonell (2005), and Senik (2009), among others, present empirical validations of this hypothesis.

Consumption and Saving: Theory and Evidence


Christopher D. Carroll

Chris Carroll is Professor of Economics at Johns Hopkins University, and a Research Associate in the National Bureau of Economic Research programs on Monetary Economics and on Economic Fluctuations and Growth.

Consumption and saving decisions are at the heart of both short-run and long-run macroeconomic analysis (as well as much of microeconomics). In the short run, spending dynamics are of central importance for business cycle analysis and the management of monetary policy. And in the long run, aggregate saving determines the size of the aggregate capital stock, with consequences for wages, interest rates, and the standard of living. Since the pioneering work of Friedman and of Modigliani and Brumberg in the 1950s, the principal goal of the economic analysis of saving has been to formulate mathematically rigorous theories of behavior. But that goal was difficult until recently because the optimal response of saving to uncertainty was difficult to compute. Research was generally carried out under the assumption that uncertainty might boost saving somewhat, but that behavior in the presence of uncertainty was likely to be broadly similar to optimal behavior in a world in which households had perfect foresight about their future circumstances. In two papers that grew out of my 1990 dissertation,1 I showed that the presence of uncertainty could change the nature of optimal behavior in qualitatively and quantitatively important ways. Specifically, I examined the optimal behavior of consumers with standard attitudes toward risk (constant relative risk aversion), who faced income uncertainty of the kind that appears to exist in household-level data sources.
The first paper found that target or buffer-stock saving can be optimal under some circumstances; the second paper found that, depending on households' income profiles and their degree of impatience, it can be optimal for average household spending patterns to mirror average household income profiles over much of the life cycle. This was a surprising finding because in models without uncertainty optimizing consumers spend based on their expected lifetime resources without regard to the expected timing of income; that is, spending patterns by age are not intrinsically determined by income patterns by age. (This work, and my subsequent related work, assumes that consumers have successfully solved any self-control problems of the kind that David Laibson and others have so persuasively described.)

This paper was related to two other, more abstract, papers. The more fundamental of these,2 written with Miles Kimball, showed that in the presence of uncertainty, households with low levels of wealth will respond more to a windfall infusion of cash than households with ample resources. The other paper3 demonstrated that the logic of precautionary saving undermines the standard Euler equation method of testing for optimizing consumption behavior. Mathematical and computational aspects of optimal behavior have remained a theme in my research to the present. A recent paper provides the rigorous foundations for the mathematical methods employed in my earlier work.4 Another paper with Miles Kimball5 explores the theoretical implications of borrowing limitations; and a very short new paper describes a conceptual trick that can be used to simplify and accelerate the solution of many kinds of optimal intertemporal choice models.6 As an aid to other researchers, I have posted on my web page computer software that implements this trick to solve a variety of standard optimization problems; my web page also contains software that reproduces the computational and empirical results in most of my published papers, as well as a set of lecture notes (and associated software) that provide a comprehensive treatment of the methods for solving these models.7

In the end, however, mathematical models are useful only insofar as they can be related to empirical evidence about the real world. Toward the end of matching theory and data, Andrew Samwick and I wrote two papers8,9 whose goal was to get a quantitative sense of the nature and magnitude of household responses to uncertainty. The first of these papers found that a standard source of microeconomic data, the Panel Study of Income Dynamics, implied that income uncertainty was very large indeed.
According to the benchmark specification, a conservative estimate was that in any given year about a third of households could expect their permanent income to rise or fall by as much as ten percent. (Permanent changes in income here mean the kind of change associated with a promotion, or with being laid off and settling for a new lower-paying job.) The second paper with Samwick estimated that as much as 40 percent of the wealth held by the typical household represented a response to the fact that some households face greater uncertainty than others. An important caveat about these results is that many of the wealthiest households are missing from the PSID dataset on which the estimates are based. Since a large proportion of aggregate wealth is held by the richest few percent of households, these estimates very likely overstate the proportion of aggregate wealth that can be attributed to precautionary motives. Indeed, another paper10 showed that the theoretical model used in the first paper with Samwick severely underpredicts the wealth holdings of the wealthiest households in the U.S., even if wealthy individuals are assumed to be more patient than others. That paper argued that a bequest motive in which bequests are a luxury good is essential to explaining why saving rates of wealthy households are so high. A subsequent paper11 showed that the bequests-as-luxuries model can also explain a variety of facts about the portfolio choices of wealthy households, particularly their comparatively high tolerance for financial risk.

Another potential problem with my work with Samwick is that we were forced by data limitations to make the assumption that income risk is something over which people have no control. If instead people make employment choices based partly on the riskiness of the different alternatives (for example, if risk-averse people seek civil service jobs while the risk-lovers become entrepreneurs), the estimated effect of uncertainty on saving might be incorrect. The likeliest effect would be to underestimate the importance of precautionary behavior, since the theory tends to suggest that those who dislike risk more will both avoid risky occupations and save more. But in an attempt to get around this problem, Karen Dynan and Spencer Krane and I wrote a paper12 that used temporary regional variations in unemployment risk (over which individual households have no control) to measure the size of uncertainty.
Empirical results in that paper suggested that precautionary motives for saving were more important for people in the upper half of the income distribution, and that precautionary behavior is manifested partly in a reluctance to borrow against home equity when unemployment is high, rather than in an explicit accumulation of greater liquid assets.

If uncertainty matters this much for spending decisions on average, it seems plausible that the changes in uncertainty that accompany business cycles might be an important source of fluctuations in consumer spending. Wendy Dunn and I showed13 that while there does not seem to be any systematic relationship between spending and various measures of households' financial condition, measures of consumers' degree of uncertainty (especially their assessment of whether the unemployment rate is likely to rise) have a powerful impact on spending (particularly purchases of big-ticket items like vehicles and houses). In fact, the model in that paper suggested that, if anything, the mystery is why uncertainty-driven fluctuations in expenditures on durable goods are not even larger. According to the model, most of the people who were on the verge of buying a car should be willing to postpone their purchase in response to even a very modest increase in uncertainty; while the evidence confirms that durable goods spending is indeed more volatile than spending on nondurables like food, the size of the discrepancy is not as large as the rational optimization model tends to suggest it should be.

This finding seems to fit with the results of an earlier paper with David N. Weil,14 which found that, across countries, the relationship between aggregate saving and aggregate growth is not what would be expected from the standard framework in which spending depends on expectations about future income. The problem is that people living in fast-growing economies should expect their future incomes to be large relative to their current incomes, and should therefore be borrowing to finance their current expenditures, while people in slow-growing economies should anticipate that they may need to save a lot if they wish to maintain their current standard of living in the future. The logic therefore suggests that we should expect to see a negative association between saving and growth. One objection to this thread of reasoning might be that countries' saving rates differ partly for cultural reasons, and it seems natural to expect that countries whose saving rates are high because of a cultural preference for saving would consequently exhibit high growth. Byung-Kun Rhee and Changyong Rhee and I used data on immigrants to Canada15 to investigate the possibility that cultural differences explain saving differences. Under the cultural theory of saving, one might expect immigrants from high-saving countries (e.g. Japan) to save more than immigrants from low-saving countries (e.g. Sweden).
But we found no evidence of such a pattern, either in Canada or in a subsequent study using Census data from the U.S.16 Furthermore, the evidence clearly suggests that the relationship between saving and growth is dynamic, not static: countries that go through periods of prolonged growth tend to experience rising saving rates, while countries that experience sustained economic slowdowns tend to suffer declining saving rates.

Both the sluggish response of spending to uncertainty and the pattern in which increases (or decreases) in growth produce increases (respectively, decreases) in saving might be explained by a model in which spending habits exert a powerful influence on behavior. A paper with Jody R. Overland and David N. Weil17 explored how the incorporation of spending habits modifies the predictions of a model of optimal spending behavior. A subsequent paper18 incorporated both habits and uncertainty, and argued that the broad patterns of saving and growth seen in the East Asian tiger economies could be explained in a model where both precautionary motives and habit formation were important. This work meshes with a prominent strand of the macroeconomics and finance literatures over the past decade, which has argued that habit formation can explain a wide range of empirical observations that are difficult to reconcile with standard models without habits. A new paper with Jirka Slacalek,19 however, casts doubt on the view that habits are the right explanation for the sluggishness of aggregate spending dynamics. This paper points out that habits imply that spending dynamics should be similar in microeconomic and macroeconomic data. Yet empirical studies using microeconomic data, applying exactly the same methods as applied to macroeconomic data, find very different results. While the data hint that there may be some modest habit formation effects in a few categories of spending, models in which habits are a dominant force in microeconomic spending decisions can be decisively rejected.

The new paper relates to another strand of my research, which argues that economists should pay more attention than has been customary to the evidence provided by surveys of households. A 2001 NBER working paper proposed modeling household survey data on inflation expectations using a simple model of disease transmission. The idea is that rather than forming their own independent views of the likely future inflation rate, typical people's views are formed by exposure to the views of experts as represented in the news media.
In this model, households' forecasts of inflation, while not fully rational in the economist's usual strict sense of the term, do not deviate very long or very far from the experts' view. The paper presented empirical evidence that information in newspaper reports about inflation seems to filter out to the population gradually rather than instantly. The proposed model can be interpreted as providing a concrete theoretical justification for the model of sticky expectations that has become increasingly popular in the macroeconomics literature in recent years. (The NBER working paper was subsequently split into two papers, one containing the empirical evidence and a stripped-down version of the model, and the other containing a detailed examination of the epidemiological modeling framework and its application.)20
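The epidemiological mechanism can be caricatured in a few lines (a toy parameterisation of our own, not the paper's calibration): each period a fraction of households encounters the expert forecast and adopts it, while the rest keep their previous view, so the population-average expectation converges to the expert view only gradually.

```python
# Toy sticky-expectations / epidemiological updating: each period a share
# lam of households adopts the current expert forecast; the rest retain
# their old expectation. All numbers here are illustrative.
lam = 0.25                  # fraction updating to the expert view each period
expert_forecast = 2.0       # experts' inflation forecast (per cent)
avg_expectation = 5.0       # initial population-average expectation

path = [avg_expectation]
for _ in range(12):
    avg_expectation = lam * expert_forecast + (1 - lam) * avg_expectation
    path.append(avg_expectation)

# The gap to the expert view shrinks geometrically at rate (1 - lam),
# so news "filters out" to the population gradually rather than instantly.
print([round(x, 2) for x in path[:4]])
```

After one period the average expectation is 0.25 × 2.0 + 0.75 × 5.0 = 4.25; after a dozen periods it is close to, but still above, the expert forecast.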

The paper with Slacalek proposes to reconcile the microeconomic and macroeconomic evidence about consumption dynamics by applying the same model of sticky expectations. The essential idea is that people have a very good understanding of the circumstances they face in their own lives (for example, they know whether they have been fired), but they do not pay as much attention to macroeconomic developments (for example, they may not know the latest aggregate unemployment statistic). Since household-specific uncertainty is much greater than aggregate uncertainty (a rough estimate is that household-specific risks are about 100 times larger than macroeconomic risks), it makes sense for busy consumers to pay less than perfect attention to the macroeconomy. Whether or not this particular explanation for the conflict between microeconomic and macroeconomic data on consumption dynamics is accepted, this conflict seems likely to be a topic of growing attention over the next few years. While great progress has been made in understanding the quantitative implications of alternative models of consumption and saving behavior, much remains to be understood.
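The buffer-stock model and the endogenous-gridpoints trick mentioned in this essay can be sketched compactly. This is a stylised infinite-horizon version with i.i.d. income shocks, not Carroll's own code; the parameters, grids, and crude four-point income quadrature are our choices.

```python
import numpy as np

# Stylised buffer-stock problem: CRRA utility u(c) = c**(1-rho)/(1-rho),
# discount factor beta, gross return R, i.i.d. log-normal-ish income.
# Solve the Euler equation u'(c_t) = beta*R*E[u'(c_{t+1})] by time
# iteration with endogenous gridpoints: fix end-of-period assets a,
# compute the right-hand side, invert u' to get c, then m = a + c.
rho, beta, R = 2.0, 0.96, 1.03
shocks = np.exp(0.1 * np.array([-1.5, -0.5, 0.5, 1.5]))  # crude income quadrature
probs = np.full(4, 0.25)

a_grid = np.linspace(1e-4, 20.0, 200)   # end-of-period assets
m_grid = np.linspace(1e-4, 21.0, 200)   # cash-on-hand grid for the policy
c = m_grid.copy()                        # initial guess: consume everything

for _ in range(2000):
    m_next = R * a_grid[:, None] + shocks[None, :]       # next-period cash-on-hand
    c_next = np.interp(m_next, m_grid, c).clip(1e-10)    # next-period consumption
    emu = probs @ (c_next ** -rho).T                     # E[u'(c')] on the a grid
    c_endog = (beta * R * emu) ** (-1.0 / rho)           # invert u'
    m_endog = a_grid + c_endog                           # endogenous gridpoints
    # Below the first endogenous point the liquidity constraint binds: c = m.
    c_new = np.where(m_grid < m_endog[0], m_grid,
                     np.interp(m_grid, m_endog, c_endog))
    err, c = np.max(np.abs(c_new - c)), c_new
    if err < 1e-8:
        break

# Carroll-Kimball concavity: the marginal propensity to consume out of a
# small windfall is higher at low wealth than at high wealth.
mpc = np.gradient(c, m_grid)
print(f"MPC at m=2:  {np.interp(2.0, m_grid, mpc):.2f}")
print(f"MPC at m=15: {np.interp(15.0, m_grid, mpc):.2f}")
```

The resulting consumption function is increasing and concave, so low-wealth households spend a larger share of a cash windfall than wealthy ones, which is the concavity result discussed above.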

Endnotes

1. Christopher D. Carroll, "The Buffer-Stock Theory of Saving: Some Macroeconomic Evidence", Brookings Papers on Economic Activity, 1992(2): 61–156, 1992; and Christopher D. Carroll, "Buffer-Stock Saving and the Life Cycle/Permanent Income Hypothesis", NBER Working Paper No. 5788, October 1996; the second paper was published in the Quarterly Journal of Economics, CXII(1): 1–56, 1997. URL http://econ.jhu.edu/people/ccarroll/BSLCPIH.zip
2. Christopher D. Carroll and Miles S. Kimball, "On the Concavity of the Consumption Function", Econometrica, 64(4): 981–992, 1996.
3. Christopher D. Carroll, "Death to the Log-Linearized Consumption Euler Equation! (And Very Poor Health to the Second-Order Approximation)", NBER Working Paper No. 6298, 1997; published in Advances in Macroeconomics, 1(1): Article 6, 2001.
4.
5. Christopher D. Carroll and Miles S. Kimball, "Liquidity Constraints and Precautionary Saving", NBER Working Paper No. 8496, 2001.
6. Christopher D. Carroll, "The Method of Endogenous Gridpoints for Solving Dynamic Stochastic Optimization Problems", Economics Letters, pages 312–320, September 2006. doi: 10.1016/j.econlet.2005.09.013. URL http://econ.jhu.edu/people/ccarroll/EndogenousArchive.zip
7. Christopher D. Carroll, "Solving Microeconomic Dynamic Stochastic Optimization Problems", Archive, Johns Hopkins University, 2011. URL http://econ.jhu.edu/people/
8. Christopher D. Carroll and Andrew A. Samwick, "The Nature of Precautionary Wealth", NBER Working Paper No. 5193, July 1995; published in the Journal of Monetary Economics, 40(1): 41–71, 1997.
9. Christopher D. Carroll and Andrew A. Samwick, "How Important Is Precautionary Saving?", NBER Working Paper No. 5194, July 1995; published in the Review of Economics and Statistics, 80(3): 410–419, August 1998.
10. Christopher D. Carroll, "Why Do the Rich Save So Much?", NBER Working Paper No. 6549, May 1998; published in Joel B. Slemrod (ed.), Does Atlas Shrug? The Economic Consequences of Taxing the Rich, Harvard University Press, 2000.
11. Christopher D. Carroll, "Portfolios of the Rich", NBER Working Paper No. 7826, August 2001; published in Household Portfolios: Theory and Evidence, MIT Press, Cambridge, MA, 2002.
12. Christopher D. Carroll, Karen E. Dynan, and Spencer S. Krane, "Unemployment Risk and Precautionary Wealth: Evidence from Households' Balance Sheets", Review of Economics and Statistics, 85(3), August 2003.
13. Christopher D. Carroll and Wendy E. Dunn, "Unemployment Expectations, Jumping (S,s) Triggers, and Household Balance Sheets", NBER Working Paper No. 6081, July 1997; published in Benjamin S. Bernanke and Julio J. Rotemberg (eds), NBER Macroeconomics Annual 1997, pages 165–229, MIT Press, Cambridge, MA, 1997.
14. Christopher D. Carroll and David N. Weil, "Saving and Growth: A Reinterpretation", NBER Working Paper No. 4470, September 1993; published in the Carnegie-Rochester Conference Series on Public Policy, 40: 133–192, June 1994.
15. Christopher D. Carroll, Changyong Rhee, and Byungkun Rhee, "Are There Cultural Effects on Saving? Some Cross-Sectional Evidence", Quarterly Journal of Economics, CIX(3): 685–700, August 1994.
16. Christopher D. Carroll, Changyong Rhee, and Byungkun Rhee, "Does Cultural Origin Affect Saving Behavior? Evidence from Immigrants", NBER Working Paper No. 6568, May 1998; published in Economic Development and Cultural Change, 48(1): 33–50, October 1999.
17. Christopher D. Carroll, Jody R. Overland, and David N. Weil, "Saving and Growth with Habit Formation", American Economic Review, 90(3): 341–355, June 2000.
18. Christopher D. Carroll, "Risky Habits and the Marginal Propensity to Consume Out of Permanent Income", NBER Working Paper No. 7839, 2000; published in the International Economic Journal, 14(4): 1–41, 2000.
19. Christopher D. Carroll and Jiri Slacalek, "Sticky Expectations and Consumption Dynamics", Manuscript, Johns Hopkins University, 2007.
20. The basic model and evidence are published in Christopher D. Carroll, "Macroeconomic Expectations of Households and Professional Forecasters", Quarterly Journal of Economics, 118(1): 269–298, 2003; the full model and its extensions can be found in "The Epidemiology of Macroeconomic Expectations", in Larry Blume and Steven Durlauf (eds), The Economy as an Evolving Complex System, III, Oxford University Press, 2006.

What can company data tell us about financing and investment decisions?
Katie Farrant, Mika Inkinen, Magda Rutkowska, Konstantinos Theodoridis, 9 February 2014

The decline in UK investment that followed the recent crisis is hardly a surprise. What is baffling is that, at the same time, corporate bond issuance has remained strong. This column discusses this puzzling pattern and provides possible explanations for it. Heterogeneity among companies is one possible argument, whereby firms with capital-market access invest while those without do not. However, evidence from 2012 shows that investment fell even among companies with capital-market access. Other factors, such as increased financial uncertainty, could therefore also play a role in companies' investment decisions.

Following the financial crisis, UK companies revised their spending and financing decisions dramatically. They reduced investment by around 13% in real terms between 2008 and 2012 (Besley and Van Reenen 2013, Haddow et al. 2013). But during that same period, corporate bond issuance by UK companies was strong, reaching record levels in 2012. Taken at face value, this might appear puzzling, as one might expect strong bond issuance to feed into stronger investment. In a recent Bank of England Quarterly Bulletin article (Farrant et al. 2013), we examine some alternative explanations for this pattern of corporate behaviour, which have different implications for the real economy:

- that companies have been issuing bonds in order to restructure their balance sheets, rather than to invest;
- that companies that issue bonds do not matter for UK investment; and
- that the aggregate data reflect heterogeneity across firms, with the companies that issue bonds investing, while the weakness in the aggregate data reflects investment by companies that do not issue bonds.

We draw on three main data sources: aggregate statistics on corporate liabilities and investment; a company-level database for publicly listed firms constructed at the Bank of England; and publicly available surveys.

Have UK companies been issuing bonds in order to change the structure of their balance sheets?
There is some evidence that UK companies have restructured their balance sheets, and that this is one of the factors behind the recent strength in bond issuance. This is seen in both aggregate and company-level data. According to ONS data, for example, bonds accounted for 7% of the stock of UK companies' financial liabilities prior to the crisis in 2007. That share had risen to 10% by 2013 Q2. The use of loans as a source of finance, meanwhile, has fallen from its peak of 38% of UK private non-financial corporations' (PNFCs') financial liabilities in 2009 Q1 to 27% in 2013 Q2.

One reason why companies are likely to have restructured their balance sheets since 2009 relates to the sharp contraction in bank credit following the financial crisis.

The Deloitte CFO Survey of large corporates, for example, showed that bank borrowing went from being the most attractive source of funding in 2007 and 2008 (compared with raising funds through bond and equity issuance) to the least attractive in 2009.

Companies may have also restructured their balance sheets in response to the Bank of England's asset purchase programme (quantitative easing, or QE), to the extent that it reduced the term premium in corporate bond yields.

Federal Reserve Governor Stein (2012) has put forward an argument along these lines, suggesting that companies may respond differently when interest rates move because of a change in term premia rather than expected policy rates, particularly when term premia are negative. When term premia are negative (say, a company can issue a ten-year bond at an annualised rate of 2%, but expects the sequence of rolled-over short-term rates to average 3%), the company may be incentivised to restructure its balance sheet. This is because it could issue long-term debt at 2% and use these funds to pay back short-term debt, repurchase equity, or buy short-term securities, as all these adjustments yield an effective return of 3%. As a result, the hurdle rate for capital investment remains pinned at 3%: the return a company can earn if it invests in financial assets instead. According to this argument, once term premia become negative, further QE may encourage bond issuance but has less effect on investment.

In our article, we present a model that tries to test this hypothesis formally for UK companies. We find that UK companies respond to a decline in long-term interest rates by increasing investment, even when the decline in interest rates comes about because of a fall in term premia, and term premia are negative. This suggests that QE has not encouraged UK companies to issue bonds to restructure their balance sheets at the expense of any increase in their investment spending.
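Stein's arithmetic can be made concrete with a small sketch. The 2% and 3% rates come from the example above; the debt amount is an illustrative assumption.

```python
# Stein's negative-term-premium arithmetic, using the rates from the
# example above (2% ten-year bond yield vs. an expected 3% average
# of rolled-over short rates). The 100 of debt issued is illustrative.
bond_yield = 0.02           # annualised yield on a ten-year bond
expected_short_rate = 0.03  # expected average of rolled-over short rates

debt_issued = 100.0
# Issue long-term debt and hold short-term assets (or retire
# short-term debt / repurchase equity): the firm earns the spread.
annual_carry = debt_issued * (expected_short_rate - bond_yield)

# The hurdle rate for real investment stays at the return available
# on financial assets, i.e. the expected short rate, not the bond yield.
hurdle_rate = expected_short_rate

print(f"annual carry from restructuring: {annual_carry:.2f}")
print(f"hurdle rate for capital investment: {hurdle_rate:.0%}")
```

The point of the sketch is that the restructuring trade pays off whatever happens to real investment, which is why a fall in the bond yield below the expected short-rate path need not lower the hurdle rate for capital spending.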

Do companies that issue bonds matter for the UK's growth prospects?


One reason why corporate bond issuance has been strong in the UK since 2009, while investment has been weak, could be that companies issuing bonds do not matter for UK investment. This could be the case if the corporate bond market is not available to most companies and so is not an important source of funds for UK companies in aggregate. Alternatively, it could reflect companies that have access to the bond market not investing very much in the UK.

We do not find evidence in support of this explanation. The corporate bond market has become increasingly important as a source of finance for UK companies over time, with record bond issuance in 2012. That strength has broadly continued in 2013. According to our estimates, the number of companies issuing bonds has also increased, particularly since the beginning of the financial crisis in 2007. And so has the number of companies accessing the corporate bond market for the first time.

Our firm-level database of publicly listed companies can also help assess the importance to the UK economy of companies that issue bonds. We find that these companies tend to be large: in 2012, none of the publicly listed companies that have issued bonds in the past would be classified as a small or medium-sized enterprise. But, despite this, the companies that have access to the bond market play an important role in influencing UK growth prospects. We estimate that all listed UK companies accounted for around 45% of UK business investment in 2012.1 And while only a few of these listed companies also issue bonds, those that do accounted for around a third of UK business investment. Taken together with the recent strength in bond issuance, there would, therefore, appear to be little support for the idea that strong corporate bond issuance at a time of weak investment reflects bond issuers not being important for the UK economy.
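The investment-share estimates above rest on an approximation described in footnote 1: scaling a company's total capital expenditure by the average of its domestic sales and domestic assets shares. A minimal sketch of that calculation, with a hypothetical firm and a hypothetical function name:

```python
def domestic_investment_estimate(capex, domestic_sales_share, domestic_assets_share):
    """Approximate a listed company's UK investment, in the spirit of
    Pattani, Vera and Wackett (2011): scale total capital expenditure
    by the average of the domestic sales and domestic assets shares
    reported in the company's financial statements."""
    return capex * (domestic_sales_share + domestic_assets_share) / 2

# Hypothetical firm: capex of 100, with 60% of sales and 80% of
# assets in the UK -> roughly 70 of its capex attributed to the UK.
print(domestic_investment_estimate(100.0, 0.6, 0.8))
```

As the footnote notes, the approximation can fail for firms whose sales and assets are at home but whose investment is abroad (or vice versa).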

Is the aggregate picture masking different behaviour across companies?


Aggregate data could also be masking heterogeneity across UK companies. Figure 1 shows growth rates of business investment, using both company-level data and aggregate data. Up until 2009, there was a close correlation between the aggregate business investment growth rate (blue line) and investment by companies issuing both bonds and equity (magenta line), suggesting no obvious bias in investment behaviour between the median company in the company-level database and the aggregate data.

Figure 1. Annual growth in UK real business investment and median annual growth in real capital expenditure for firms in the company-level database

Sources: Dealogic, ONS, Thomson Reuters Datastream Worldscope and Bank calculations.

Since 2010, however, while aggregate UK business investment has remained weak, investment by companies with access to capital markets has recovered sharply. This suggests that improvements in capital market conditions have allowed companies with access to those markets to undertake investment. This strength in investment in 2010 and 2011, combined with the weakness in aggregate UK business investment over that period, suggests that companies without access to capital markets may have reduced their investment markedly in 2010 and 2011.

In 2012, however, investment growth fell for companies that access capital markets, despite their continued strong bond issuance. This suggests that other factors, besides the availability of finance, are likely to have influenced companies' investment behaviour in 2012. The Deloitte CFO Survey suggests that large companies anticipated a slowdown in investment in late 2011. The deterioration in expectations for investment over the following twelve months appeared to be linked with an increase in financial and economic uncertainty, and a decrease in optimism regarding the economic outlook.

Looking ahead, optimism and CFOs' expectations of UK companies' investment have since risen, suggesting that investment growth by larger companies with bond market access may have picked up again in 2013, despite the continuing weakness in the aggregate investment data.2 There may also be a lag between companies raising finance and undertaking investment projects, which may suggest that some of the record bond issuance in 2012 could have been used to support investment in 2013.

Conclusions
Understanding why aggregate UK business investment has remained weak, while corporate bond issuance has been strong, is important in the context of understanding the role public capital markets play for UK companies. And different explanations for this pattern have different implications for the real economy.

There is some evidence that UK companies have been raising bond finance because of a desire to restructure their balance sheets and, in particular, to reduce their reliance on banks. Much of the evidence suggests that the pattern of weak investment in 2010 and 2011 at a time of strong corporate bond issuance reflects heterogeneity among companies, with those with capital market access investing and those without not, such that overall aggregate investment remained weak. That might suggest that an improvement in the availability of external finance to companies without capital market access could provide support for UK business investment.

In 2012, however, investment growth across companies with capital market access appeared to fall. That suggests that other factors, besides the availability of external finance, have played a role in explaining the weakness of business investment in 2012. These factors may include increased uncertainty about the economic and financial outlook and weak business confidence. Looking ahead, larger companies have become more optimistic in 2013, suggesting that we may see their investment pick up in 2013 and 2014, despite the continued weak aggregate investment data seen in 2013.

References
Besley, Tim and John Van Reenen (2013), "Investing in UK prosperity: skills, infrastructure, and innovation", VoxEU.org, 31 January.
Farrant, K, M Inkinen, M Rutkowska and K Theodoridis (2013), "What can company data tell us about financing and investment decisions?", Bank of England Quarterly Bulletin, Vol. 53, No. 4, pages 361-70.
Haddow, Abigail, Chris Hare, John Hooley, and Tamarah Shakir (2013), "A new age of uncertainty? Measuring its effect on the UK economy", VoxEU.org, 27 August.
Pattani, A, G Vera and J Wackett (2011), "Going public: UK companies' use of capital markets", Bank of England Quarterly Bulletin, Vol. 51, No. 4, pages 319-30.
Stein, J C (2012), "Evaluating large-scale asset purchases", remarks at The Brookings Institution, Washington DC, October.

1 In line with Pattani, Vera and Wackett (2011), this is estimated as a company's total capital expenditure scaled by the average share of a company's domestic sales and domestic assets (as reported in their financial statements). This approximation may, of course, not be accurate in all cases. For example, a company may hold a majority of its assets (or conduct a majority of its sales) at home, but invest predominantly abroad (or vice versa).

2 As stated in the November 2013 Inflation Report on page 38, the Monetary Policy Committee continues to put relatively little weight on the recent weakness suggested by the official investment data.

Private Fixed Investment's Recovery: Not So Bad After All


Daniel Carroll

A little over a year ago in these pages we documented the sluggish recovery of private fixed investment since the end of the recession. Up to that point, investment was not rebounding relative to GDP as quickly as it typically does during recoveries, and residential investment seemed to be the key factor holding the total down. But over the past 13 months new data have been released and a new component, intellectual property, has been added to GDP, and both of these developments have changed our view of investment in the recovery. This article reexamines the path of private fixed investment and its relationship to GDP over the business cycle, taking these new developments into account.

Back in November 2012, private fixed investment appeared to have stalled. In particular, the usual V-shaped response characteristic of the series in previous recoveries had not materialized. Including current data changes the picture. The V-shape appears, although there is still a long way to go to complete the pattern. Overall, private fixed investment now appears to have been recovering faster than GDP since 2010:Q4.

Meanwhile, the new data show that residential investment rose substantially in 2013. Year-over-year growth by quarter exploded, hitting 20 percent for the first time since 2004:Q2. The sustained, positive pattern over 2013 was a welcome sight after five years of mostly negative growth. The rise in residential investment elevated total fixed investment, compensating for weaker growth in nonresidential investment. Back in November 2012, nonresidential investment had been growing by double digits each quarter (year-over-year) from 2011:Q3 to 2012:Q2, and it appeared likely to continue, bolstering investment in the future. Since then, however, it has averaged just 4.9 percent.

In addition to new data, a change in the way nonresidential investment is measured has altered the picture considerably. During 2013, the Bureau of Economic Analysis added intellectual property to nonresidential fixed investment and adjusted the series all the way back to the beginning of the data. This change increased nonresidential investment by between 7 percent and 24 percent. It also resulted in a small rise in measured GDP, but because GDP is much larger than nonresidential investment the percentage increase was much smaller (between 2 percent and 4 percent). As a consequence, fixed investment as a fraction of GDP from 1950 to 2012 rose from 15.3 percent to 16.7 percent.
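The effect of the revision on the investment share follows from simple proportions: investment was revised up by more than GDP, so the ratio rises. A back-of-the-envelope check, where the revision percentages are illustrative values consistent with the ranges in the text rather than official BEA figures for any particular year:

```python
# Adding intellectual property raised measured fixed investment
# proportionally more than measured GDP, so the investment share of
# GDP rises. The revision sizes below are illustrative assumptions.
old_share = 0.153      # fixed investment / GDP, 1950-2012, pre-revision
inv_revision = 0.125   # assumed upward revision to fixed investment
gdp_revision = 0.03    # assumed upward revision to GDP

new_share = old_share * (1 + inv_revision) / (1 + gdp_revision)
print(f"post-revision investment share: {new_share:.3f}")
```

With these assumed revisions the share moves from 15.3 percent to roughly 16.7 percent, matching the figures quoted above.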

While the addition of intellectual property as a component of nonresidential fixed investment made a considerable impact on the size of investment relative to GDP, it had little effect on its business cycle properties. The correlation between fixed investment and GDP over the business cycle was reduced only slightly. This is due to the business cycle properties of intellectual property. While it shares the same qualitative pattern as nonresidential investment, it generally has a smaller quantitative relationship. As a result, the addition of intellectual property weakens the correlations of nonresidential fixed investment and total fixed investment with GDP only slightly.

Fiscal Stimulus and the Promise of Future Spending Cuts


Volker Wieland
Goethe University of Frankfurt

Recent evaluations of the fiscal stimulus packages enacted in 2009 in the United States and Europe such as Cogan et al. (2009) and Cwik and Wieland (2009) suggest that the GDP effects will be modest due to crowding out of private consumption and investment. Corsetti, Meier, and Müller (2009, 2010) argue that spending shocks are typically followed by consolidations with substantive spending cuts, which enhance the short-run stimulus effect. This note investigates the implications of this argument for the estimated impact of recent stimulus packages and the case for discretionary fiscal policy. JEL Codes: C52, E62.

Recent fiscal stimulus packages such as the U.S. American Recovery and Reinvestment Act (ARRA) or Germany's Konjunkturpakete I and II have triggered a new literature on Keynesian-style multiplier effects. A multiplier effect emerges if the increase in government purchases triggers additional purchases by households funded with the income earned from the production paid for by the government. Countervailing crowding-out effects may arise because of the upward pressure on real interest rates due to increased government-debt financing, the expectation of future tax increases, and an appreciation of the real exchange rate. The recent stimulus measures provide new observations that will help gain new insights into the effects of discretionary fiscal policy. Cogan et al. (2009) and Cwik and Wieland (2009) identify the magnitudes and timing of the implied spending increases and tax reductions directly from publicly available documents regarding the
Author contact: Goethe University of Frankfurt, House of Finance, Grueneburgplatz 1, D-60323 Frankfurt am Main, Germany. E-mail: wieland@wiwi.uni-frankfurt.de.

International Journal of Central Banking, Vol. 6 No. 1, March 2010

particular legislation. They proceed to estimate the GDP effects of the announced policy changes using a range of empirically estimated structural macroeconomic models that account for different assumptions regarding the behavioral responses of households and firms. Their findings suggest that crowding-out effects dominate and the short-run boost to GDP is significantly smaller than the increase in government purchases.

Another widely used approach aims to identify typical government spending impulses along with their effects on GDP and other variables from historical aggregate time series (cf. Galí, López-Salido, and Vallés 2007). An impulse is then a change in government spending that is not forecastable by a vector autoregression on the basis of selected aggregates. Identification assumptions are needed to separate the government spending surprise from other surprises defined with respect to the forecast of the VAR. Using such a VAR approach, Corsetti, Meier, and Müller (CMM) (2009) find that government spending impulses boost private consumption. The GDP multiplier is about one on impact and increases for a while due to the crowding in of consumption. In addition, they note that government spending later on declines below baseline. Similarly, output falls somewhat in subsequent years. However, 90 percent confidence intervals on output and consumption are very large and always include the zero line.

Corsetti, Meier, and Müller (2009) then go on to simulate a combination of short-run fiscal stimulus and medium-run spending cuts in a calibrated New Keynesian macroeconomic model. The simulation results indicate that the anticipation of future spending cuts induces greater short-run multiplier effects of government spending impulses. Corsetti, Meier, and Müller (2010) extend the analysis to a calibrated two-country model and explore international spillover effects of government spending increases combined with future spending cuts.
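The VAR identification idea described above can be illustrated with a toy example. This is not the CMM specification: the bivariate system, the recursive (Cholesky) ordering with spending first, and all coefficients below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the VAR approach: simulate a bivariate VAR(1) in
# [government spending, GDP], estimate it by OLS, and trace the
# response of GDP to a spending "impulse" identified recursively by
# ordering spending first (spending is assumed predetermined within
# the quarter). All coefficients are illustrative.
A_true = np.array([[0.8, 0.0],
                   [0.3, 0.6]])
T = 2000
x = np.zeros((T, 2))
shocks = rng.normal(size=(T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + shocks[t]

# OLS: regress x_t on x_{t-1}
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Recursive (Cholesky) identification of the spending impulse
resid = Y - X @ A_hat.T
P = np.linalg.cholesky(np.cov(resid.T))

# Impulse responses over 12 quarters to a one-s.d. spending shock
horizon = 12
irf = np.zeros((horizon, 2))
irf[0] = P[:, 0]
for h in range(1, horizon):
    irf[h] = A_hat @ irf[h - 1]

print("GDP response on impact and after 4 quarters:",
      irf[0, 1], irf[4, 1])
```

The identifying assumption (the ordering of the Cholesky factor) is exactly the kind of restriction the text refers to: it is what separates the spending surprise from other surprises in the VAR's forecast errors.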
The anticipated cuts not only strengthen the domestic stimulus effect but also enhance positive cross-border spillovers. The mechanism of transmission is a reduction of long-term real interest rates across the two economies, not a depreciation of the foreign currency. These findings regarding the effect of future spending reversals raise two questions: a positive and a normative one. First, should observers expect greater short-run multiplier effects from the U.S. and European stimulus packages of 2008 and 2009 than estimated by


Cogan et al. (2009) and Cwik and Wieland (2009)? Secondly, should governments be advised to combine short-run stimulus packages with medium-term spending reductions?

1. Should GDP Impact Estimates of the 2008-09 Stimulus Account for Spending Reversals?

The answer is no. There are at least two reasons. The first reason is related to what is publicly known about these particular policy changes, while the second one lies in the mixed empirical evidence obtained with different approaches for identifying historical impulses.

First, the ARRA legislation and European measures such as the German Konjunkturpakete are clearly identified and announced plans of governments approved by their parliaments. There is no need to make identifying assumptions and consider historical VARs in order to estimate the timing and magnitude of these additional government purchases and tax reductions. Instead, these numbers may be obtained directly from the announced plans. The ARRA includes spending increases and tax cuts spread over 2009 to 2018. Indeed, it also involves some very small spending cuts in 2016 to 2018. Additional medium-term spending cuts have not been announced in conjunction with the ARRA legislation. Thus, model-based evaluations that aim to account for the anticipation of rational, forward-looking households and firms should reflect the legislation as it was passed at the beginning of 2009. The spending cuts planned for 2016 to 2018 are already included in such an assessment by Cogan et al. (2009). Should the U.S. government announce additional medium-term spending cuts at a later stage, say in 2011 for the years 2015 to 2018, and Congress pass them into law, then they would affect expectations and decisions of households at that time but not retroactively in 2009.

As to the European stimulus packages, Cwik and Wieland review countries' financial stability plans and collect information on the specific measures and magnitudes. Indeed, the net effect of these measures is not always positive. For example, in Italy measures intended to raise tax collection overwhelm the planned spending increases and tax reductions. The stimulus packages do not involve announcements for spending increases or cuts after 2010.


Of course, one could still argue that American households might have concluded in January 2009 that they should expect future spending cuts even though none had been promised by the U.S. government. The argument would go as follows: historical experience indicates that past U.S. government spending impulses identified by VAR studies are followed later on by spending cuts. Therefore, households expected such consolidations in the past and will foresee such a consolidation as an unannounced companion to the ARRA. However, these historical dynamics may simply be due to automatic stabilizers that the VAR missed and falsely interpreted as a follow-up to discretionary fiscal stimuli. The VARs typically do not use the real-time data that formed the basis of market participants' expectations at the time the discretionary fiscal measures were initiated.

More importantly, the above-mentioned VAR evidence on the effect of historical government spending impulses does not stand unchallenged. Ramey (2009), for example, uses new variables on military spending dates and professional forecasts in order to better measure historical anticipations regarding fiscal policies. Her findings indicate government spending multipliers for the United States ranging from 0.6 to 1.1. Corsetti, Meier, and Müller (2009) in turn challenge some of her findings and come out on the side of earlier VAR evidence pointing to greater multipliers. Another study by Barro and Redlick (2009), however, estimates defense spending multipliers of 0.6 to 0.7 that may reach unity only in scenarios with the U.S. unemployment rate rising to 12 percent. These authors conclude that multipliers for non-defense government purchases cannot be reliably estimated because of the lack of good instruments. Thus, the empirical evidence regarding historical government impulses implies a wide range of GDP effects that remains consistent with the GDP impact of the ARRA and the European stimulus packages found by Cogan et al.
(2009) and Cwik and Wieland (2009) using estimated structural models.

2. Should Governments Combine Short-Run Stimulus with Medium-Term Spending Cuts?

Corsetti, Meier, and Müller (2009, 2010) report an interesting set of results regarding the consequences of spending impulses followed by


Figure 1. CMM Government Spending Increases and Cuts with GDP Effects Simulated in the CMM (2010) Model

spending reversals in calibrated structural macroeconomic models. Clearly, these findings could have important normative implications for the design of discretionary fiscal policies. The model they use in the article appearing in this journal is a two-country business-cycle model. It features significant Keynesian elements by assuming that all firms are constrained in adjusting prices due to Calvo-style contracts and a third of the households are restricted to consume all their current income and abstain from borrowing or saving.

In order to investigate possible policy implications, it is important to assess the magnitude of the near-term increase relative to the medium- to long-term reduction in government purchases and quantify their GDP effects jointly as well as separately in an empirically estimated model. Figure 1 shows the particular path of government spending simulated by Corsetti, Meier, and Müller (2010) and the GDP impact obtained in their model. The bar graph in figure 1 indicates government expenditures equal to the solid line in the top left panel of figure 1 in Corsetti, Meier, and Müller (2010), while the thick solid line shows the response of GDP equal to the solid line in the top middle panel of figure 1 in their paper.1

In this simulation, government purchases increase by an amount equal to 1 percent of GDP in the first quarter. In subsequent quarters
1 I thank Giancarlo Corsetti and Gernot Müller in particular for sending all the MATLAB computer codes necessary to replicate their simulation analysis, as well as the results displayed in their figures.


the additional amount of purchases declines in magnitude. Over the first nine quarters the total sum of additional purchases amounts to 4.58 percent of GDP, roughly 0.5 percent per quarter. Spending declines below trend about ten quarters after the initial impulse. This decline follows from a fiscal rule that enforces a certain degree of budget consolidation. The spending cuts relative to baseline are substantial and last for a long time. Between quarters 10 and 30 they sum to about 5 percent. Thus, the overall plan over thirty quarters implies a net reduction of government purchases below baseline of about 0.4 percent of GDP. It could be called a government savings plan rather than a government spending package. The ratio of anticipated spending cuts to spending increases in absolute value is 1.1.

One can calculate the magnitude of spending cuts that would have to be incorporated in the ARRA as announced in January 2009 so as to achieve a profile similar to the path in Corsetti, Meier, and Müller (2009, 2010). As reported by Cogan et al. (2009), federal purchases and transfers supporting spending in states and localities amount to $246 billion during the first nine quarters (the first quarter of 2009 up to and including the first quarter of 2011). Net spending increases for the next twenty-one quarters (the second quarter of 2011 up to and including the second quarter of 2016) sum up to $180 billion. Thus, the path shown in figure 1 could be matched if the U.S. administration were to implement spending cuts equal to $450 billion from the second quarter of 2011 onward. This includes $180 billion needed to offset planned future purchases and $270 billion corresponding to 1.1 times the purchases executed in the first nine quarters. $450 billion equals 57 percent of the total of $787 billion allocated to spending increases, transfers, and short-run tax cuts by the ARRA legislation. Accordingly, households should have expected the U.S.
administration to cut spending by $450 billion (or 3.1 percent of current U.S. GDP) starting in the second quarter of 2011 and spread over four and a half years. It seems a far stretch from reality to assume that U.S. households would have adopted this belief in January 2009 without any supporting announcement by the government or accompanying legislation. Hence, the findings of Corsetti, Meier, and Müller (2009, 2010) should better not be interpreted as indicative of the likely effects of recent fiscal stimulus packages such as the ARRA.
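The $450 billion figure can be reproduced directly from the numbers quoted above:

```python
# Reproducing the back-of-the-envelope calculation above: how large
# would announced medium-term cuts need to be for the ARRA to match
# the CMM profile, where anticipated cuts are 1.1 times the initial
# increases in absolute value? All dollar figures are from the text.
first_nine_quarters = 246.0  # $bn purchases/transfers, 2009Q1-2011Q1
later_increases = 180.0      # $bn net spending increases, 2011Q2-2016Q2
arra_total = 787.0           # $bn total ARRA package
cut_ratio = 1.1              # CMM cuts-to-increases ratio

required_cuts = later_increases + cut_ratio * first_nine_quarters
print(f"required cuts: ${required_cuts:.1f}bn "
      f"({required_cuts / arra_total:.0%} of the ARRA total)")
```

The sum comes to roughly $450 billion, or 57 percent of the total package, which is the hurdle the text argues households could not plausibly have anticipated without any announcement.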


Nevertheless, it is of great interest to ask how future fiscal stimuli should be designed in light of these studies. Clearly, the multiplier implied by the path of GDP (thick solid line in figure 1) is substantially greater than one. Output stays above government spending up to quarter 22 and takes only slightly lower values in the last few quarters. The cumulative sum of output deviations over thirty quarters totals 7.4 percent of quarterly GDP relative to a net reduction of government spending of 0.4 percent. This finding seems to suggest that it would be better to announce stimulus packages that combine initial spending increases with an announcement that they will be followed later by substantial spending cuts. In the remainder of this comment, I will analyze the impact of such spending increases and reductions in more detail.

First, it is instructive to check whether these measures imply a reduction or an increase in the future tax burden perceived by forward-looking households. If one applies a standard quarterly discount factor of 0.99, then the discounted sum of government spending over thirty quarters corresponds to 0.33 percent of GDP.2 Given that the discounted sum of additional purchases over the first nine quarters is 4.49 percent of GDP, the subsequent spending cuts pay for over 90 percent of the initial stimulus. Thus, it is perhaps not surprising that these particular fiscal measures do not induce crowding out of private consumption.

Corsetti, Meier, and Müller (2009, 2010) rely on calibrated New Keynesian DSGE models. Their findings may be quite sensitive to the particular model and parameterization. It is preferable to base policy recommendations on an estimated model. To assess the robustness of their findings, I evaluate the impact of their particular path for discretionary government spending in a larger-scale New Keynesian DSGE model originally estimated by Smets and Wouters (2007) with U.S. macroeconomic data.
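The discounting arithmetic can be illustrated with a stylized spending path. The geometric shape and the decay parameter below are assumptions chosen only to roughly match the reported totals (about 4.6 percent of GDP of increases over nine quarters, about 5 percent of cuts over quarters 10 to 30); the actual CMM path differs in detail, so the resulting figures are approximate.

```python
# Stylized version of the CMM spending path: geometric increases over
# nine quarters followed by uniform cuts over quarters 10-30 summing
# to -5% of GDP. The shape is an assumption for illustration only.
rho, beta = 0.82, 0.99  # assumed decay rate; quarterly discount factor
path = [rho ** t for t in range(9)] + [-5.0 / 21] * 21

undiscounted_net = sum(path)
discounted_net = sum(g * beta ** t for t, g in enumerate(path))
discounted_stimulus = sum(g * beta ** t for t, g in enumerate(path[:9]))

share_paid_by_cuts = 1 - discounted_net / discounted_stimulus
print(f"undiscounted net spending: {undiscounted_net:+.2f}% of GDP")
print(f"discounted net spending:   {discounted_net:+.2f}% of GDP")
print(f"cuts pay for {share_paid_by_cuts:.0%} of the discounted stimulus")
```

Under these assumptions the undiscounted path is a net spending cut of roughly 0.4 percent of GDP, while discounting leaves a small positive present value, with the cuts covering over 90 percent of the initial stimulus, in line with the figures in the text.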
Technically, I simulate the model under the assumption that the government announces the exact path of planned government spending increases and cuts displayed in the bar graph in figure 1 in the first quarter of the
2 The above-mentioned discount factor of 0.99 corresponds to the steady-state discount factor in Corsetti, Meier, and Müller (2010) (see table 1). However, the discount factor during the transition to the steady state is endogenous in order to ensure stationarity of equilibria in this two-country model.


Figure 2. CMM Spending Increases and Cuts with GDP Effects in the Smets and Wouters (2007) Model

simulation. This approach requires using a solution method for nonlinear rational expectations models as in Cogan et al. (2009). It makes it possible to study the same stimulus plan as in Corsetti, Meier, and Müller (2010) without having to introduce an explicit government spending rule that feeds back on output and government debt. The advantage of this approach is that it renders the experiment easily portable to different models.

The GDP effect resulting in the Smets and Wouters (2007) model is shown in figure 2. It is not as large as in the CMM (2010) model. Nevertheless, the multiplier remains above unity. The thick solid line depicting U.S. GDP stays higher than government spending as a share of GDP for the full length of the simulation. The impact of the fiscal package on the economy depends importantly on the particular response of monetary authorities. For this reason, the simulation of the Smets and Wouters model shown in figure 2 assumes that the interest rate is set according to the same policy rule as in Corsetti, Meier, and Müller (2010). Thus, the reduction in the multiplier effect results from differences in the structure and parameterization of the empirically estimated model relative to their calibrated model.

The monetary policy rule in Corsetti, Meier, and Müller (2010) implies that the nominal interest rate, $i_t$, responds to deviations of the inflation rate, $\pi_t$, from target by a factor of 1.5:

$$i_t = \bar{i} + 1.5(\pi_t - \bar{\pi}). \qquad (1)$$


Figure 3. CMM Spending Increases and Cuts with GDP Effects under Standard Taylor Rule (0.5*Output Gap, Smets and Wouters 2007 Model)

Here, a bar over a variable refers to its steady-state value. Interestingly, this rule does not involve an interest rate response to the output gap, unlike the original Taylor (1993) rule. Taylor's rule featured a coefficient of 0.5 on this gap. It is more relevant from an empirical perspective, because it matches Federal Reserve policy during the early Greenspan years (1987 to 1993) very well. Thus, I modified the policy rule to make it more similar to the original Taylor rule:

$$i_t = \bar{i} + 1.5(\pi_t - \bar{\pi}) + 0.5 y_t. \qquad (2)$$
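The two interest rate rules, (1) and (2), can be written as simple functions; rates are in annualised percent, and the 4 percent steady-state rate in the example is purely illustrative.

```python
def cmm_rule(i_bar, pi_gap):
    """Rule (1): responds only to the inflation gap, as in CMM (2010)."""
    return i_bar + 1.5 * pi_gap

def taylor_rule(i_bar, pi_gap, y_gap):
    """Rule (2): adds the 0.5 output-gap coefficient of Taylor (1993)."""
    return i_bar + 1.5 * pi_gap + 0.5 * y_gap

# With inflation 1 point above target and a 2-point positive output
# gap, the output-gap response adds a full percentage point:
print(cmm_rule(4.0, 1.0))          # 5.5
print(taylor_rule(4.0, 1.0, 2.0))  # 6.5
```

The extra tightening under rule (2) when the stimulus opens a positive output gap is precisely what dampens the multiplier in the simulation reported in figure 3.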

In the simulation, the output gap, $y_t$, is defined as the percentage difference between the actual level of output and the level of output that would be realized in the Smets and Wouters model if prices and wages were flexible. The outcome under Taylor's rule is reported in figure 3. Once one takes into account that the U.S. Federal Reserve tends to raise the federal funds rate along with increases in the output gap, the response of output to fiscal policy is reduced. During the first nine quarters of spending increases, the multiplier effect is now roughly equal to unity. Thus, using an empirically estimated model of the U.S. economy together with an empirically relevant interest rate rule implies a fiscal multiplier that is quite a bit smaller than in Corsetti, Meier, and Müller (2010). However, it is still bigger than in the case of the assessment of the ARRA conducted


Figure 4. GDP Effect of CMM Spending Cuts Alone (Rule without Output Gap, Smets and Wouters 2007 Model)

by Cogan et al. (2009). Thus, the anticipation of future spending cuts continues to play an important role in boosting output and consumption from the start.

In the next step, I investigate the effect of the spending cuts separately from the initial increase in purchases. Market participants are informed that the government plans to reduce purchases in two and a half years' time, and they form expectations accordingly. The resulting path for GDP is shown in figure 4. Output increases from the first quarter onward, reaching a peak in the ninth quarter of the simulation. During this period no additional government purchases are executed. Thus, for the first nine quarters the spending multiplier is equal to infinity. It arises from planned and anticipated government savings rather than spending. As a consequence of the anticipated future reduction in government spending, private consumption and investment increase today. This increase is accompanied by a rise in labor supply, generating greater output. Note that monetary policy is again assumed to follow the interest rate rule without output-gap response, as in Corsetti, Meier, and Müller (2010).

3. Conclusions

The empirical case for greater multiplier effects of the recent U.S. and European stimulus packages due to market participants' expectation of drastic spending cuts starting as early as 2011 appears rather weak. However, the effects of spending reversals reported by Corsetti, Meier, and Müller (2009, 2010) and the stimulative power of anticipated spending cuts revealed in this paper have normative implications. Rather than trying to quickly increase government purchases in a recession, fiscal authorities may instead counter the downturn by announcing future cuts in government spending. The effect of such spending reductions could even be greater than suggested by the preceding simulations. Both models assume that spending cuts translate to a reduction of lump-sum taxes. Accounting for the distortionary nature of taxes in practice would imply a greater and longer-lasting stimulus from anticipated government spending cuts. Of course, it is crucial that government announcements regarding future consolidation by spending cuts are credible. In Europe, the Stability Pact implied by the Maastricht Treaty provides an avenue for improving individual governments' credibility. In the case of Europe, the question of spillovers between countries is also of great interest. Using a multi-country model, Cwik and Wieland (2009) find that spillovers between Germany, France, and Italy are negligible or even negative, even though export demand is significantly positively related to income in the other countries. Because of the common currency, fiscal stimulus in one member country induces higher interest rates and an appreciation vis-à-vis other currencies, thereby offsetting the direct demand effects for other member countries' exports. Corsetti, Meier, and Müller (2010) report significant positive spillovers in a two-country model with a flexible exchange rate. These spillovers have a non-Keynesian flavor. They are driven by a reduction in the long-run world real interest rate rather than a depreciation of the currency of the country that does not engage in fiscal stimulus.
It would be of interest to investigate to what extent such effects would arise in an empirically estimated multi-country model and how sensitive they are to the magnitude and credibility of future spending cuts.

References

Barro, R. J., and C. J. Redlick. 2009. "Macroeconomic Effects from Government Purchases and Taxes." NBER Working Paper No. 15369.
Cogan, J. F., T. Cwik, J. B. Taylor, and V. Wieland. 2009. "New Keynesian versus Old Keynesian Government Spending Multipliers." NBER Working Paper No. 14782.
Corsetti, G., A. Meier, and G. Müller. 2009. "Fiscal Stimulus with Spending Reversals." IMF Working Paper No. 09/106.
Corsetti, G., A. Meier, and G. Müller. 2010. "Cross-Border Spillovers from Fiscal Stimulus." International Journal of Central Banking 6 (1).
Cwik, T., and V. Wieland. 2009. "Keynesian Government Spending Multipliers and Spillovers in the Euro Area." CEPR Discussion Paper No. 7389.
Galí, J., J. D. López-Salido, and J. Vallés. 2007. "Understanding the Effects of Government Spending on Consumption." Journal of the European Economic Association 5 (1): 227-70.
Ramey, V. A. 2009. "Identifying Government Spending Shocks: It's All in the Timing." NBER Working Paper No. 15464.
Smets, F., and R. Wouters. 2007. "Shocks and Frictions in U.S. Business Cycles: A Bayesian DSGE Approach." American Economic Review 97 (3): 586-606.
Taylor, J. B. 1993. "Discretion versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy 39 (1).

Baker: Fiscal Policy, the Long-Term Budget, and Inequality

This post from Dean Baker, co-director of the Center for Economic and Policy Research in Washington, D.C., is the fifth contribution to the joint American Prospect/Democratic Strategist forum, Progressive Perspectives on the Future of the New Deal/Great Society Entitlement Programs. It is cross-posted from the Prospect.

The American Prospect deserves credit for sponsoring this forum. It gives progressives an opportunity to engage in a serious discussion about many of the key economic, social, and political issues facing the country. At least as important, it allows us to tie them together in a way that generally is not done but is essential in a serious discussion. Asserting that budget policy, fiscal policy, and inequality are integrally linked is not just rhetoric. In fact, they are inextricably tied through economic relations that are too little appreciated. The failure to appreciate these ties often leads to policies that are ineffective or even self-defeating. This essay describes how the policies are necessarily linked, beginning with fiscal policy and macroeconomic policy. It then turns to a discussion of social insurance programs, inequality, and the long-term budget.

Macroeconomics and the Iron Truths of Accounting Identities

Most college-educated people have been through an intro econ class where they were punished with the basic macroeconomic accounting identities, and then quickly forgot them as soon as the class was over. Unfortunately, this appears to be as true for people engaged in economic policy debates as for the larger public. The good or bad thing about accounting identities is that there is no way around them: They must be true. One of the basic macroeconomic accounting identities is that net national savings must be equal to the trade surplus. This means that the total of private and public savings, net of investment, must be equal to the trade surplus.
For algebra fans, this means that:

(S - I) + (T - G) = X - M

where S is the sum of all private savings, both household and corporate; I is investment, which means both corporate investment and the construction of residential housing; T is taxes and G is government spending, so T - G is the budget surplus. (Since we have been running large deficits in recent years, T - G has been negative.) X is exports from the United States, while M is imports into the United States. This means that X - M is the trade surplus. This number has also been a large negative in recent years, as the country has been running a large trade deficit.

I apologize for the detour to intro macro, but it is important that people understand the logic of this accounting identity. If the country has a trade deficit, then it means we have negative national savings. There is no way around this fact. If we have negative national savings, then either the government must have negative savings, the private sector must have negative savings, or both sectors can have negative savings. Again, there is no way around this fact. Currently our trade deficit would be between 4 and 5 percent of GDP if the economy were at full employment. This comes to $650 billion to $800 billion a year in the current economy. This means that the budget deficit, plus whatever negative savings we see on the private side, must sum to between $650 billion and $800 billion. If the deficit hawks got their dream and we somehow balanced the budget, and the trade deficit stayed the same (I'll come back to this), then it would mean that the private sector would need to have negative annual savings of between $650 and $800 billion. In most of the post-war period private sector savings have been close to zero, with households providing the savings businesses needed to finance investment. The two notable exceptions were during the years of the stock bubble at the end of the 1990s and the housing bubble in the last decade.
In both cases household savings plummeted as the wealth generated by the two bubbles led households to increase their consumption at the expense of their savings. Both bubbles also led to an increase in investment. In the stock bubble years, the uptick was in corporate investment. In the housing bubble years, residential construction reached post-war highs measured as a share of GDP. It is difficult to imagine that anyone would actually advocate bringing back bubbles to sustain the economy. Furthermore, since their main impact was on boosting consumption at the expense of savings, one result of the bubble-driven growth of the last two decades was that households did not save as much as they would have otherwise, leaving them less well prepared for retirement. That can hardly be a desirable outcome.

The alternative way to have negative savings on the private side is to have an investment boom. That might be wonderful, but that is not a story for the real world. The investment share of GDP has varied little over the last 50 years. Even at the peak of the stock bubble years, when concerns over the Y2K problem spurred software investment and people threw money at every crazy Internet start-up, non-residential investment rose just 1.4 percentage points above its average share of 12.7 percent of GDP over the past 40 years.

If we can't expect private savings to turn negative in a big way, and we keep the government budget balanced, then there is one other way for the income accounting identity to hold. If the economy shrinks due to insufficient demand, then savings will fall more than investment. At some point, this will give us a large enough excess of private investment over private savings for the national income accounts to be in balance. If it is not clear, we are bringing the national accounts into balance in this story with a shrinking economy and rising unemployment. That is what happens if we run a balanced budget in the context of having a large trade deficit. The deficit hawks may yell and scream that they don't want to shrink the economy and have mass unemployment, but this is what they will get if we have deficit reduction without a clear plan for reducing the trade deficit.

Of course, the trade deficit is not a law of nature. The trade deficit exploded in the years following the East Asian financial crisis. It fell back substantially in the years from 2006 until the recession. The main factor in both cases was changes in the value of the dollar: first a sharp rise in the wake of the crisis, and then a gradual decline in the years after 2002. The trade deficit clearly responds to changes in the value of the dollar. The high dollar that we saw after the East Asian financial crisis made U.S. goods less competitive in the world economy. When it fell back to more normal levels, the trade deficit began to shrink.
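Baker's accounting identity can be checked mechanically. The sketch below is ours, not Baker's; the numbers are illustrative percentages of GDP, and the function name is a hypothetical label for the residual private balance.

```python
# Saving-investment identity: (S - I) + (T - G) = X - M.
# Rearranged: once the budget balance (T - G) and the trade balance
# (X - M) are fixed, the private balance (S - I) is pinned down.

def private_balance(budget_balance, trade_balance):
    """Return the S - I implied by the national accounts identity."""
    return trade_balance - budget_balance

# Balanced budget (T - G = 0) with a trade deficit of 5% of GDP
# (X - M = -5) forces private net saving to -5% of GDP.
print(private_balance(budget_balance=0.0, trade_balance=-5.0))   # -5.0

# A budget deficit of 6% of GDP absorbs the imbalance instead,
# leaving the private balance near its historical norm of about zero.
print(private_balance(budget_balance=-6.0, trade_balance=-5.0))  # 1.0
```

The arithmetic shows why, with a persistent trade deficit, a balanced budget and near-zero private net saving cannot coexist: one of the three balances has to give.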

Many actors in policy debates have argued for alternative methods of reducing trade deficits, such as new trade agreements or industrial policy. In fact, the former have often increased the trade deficit. Well-designed industrial policy can raise productivity and increase competitiveness, but even in a best-case scenario this is a long-term outcome. Even with optimistic assumptions, currency adjustments will swamp the plausible impact of industrial policy. This means that if we want to see a substantially lower trade deficit, we should want to see a lower-valued dollar. This should be front and center of every progressive's agenda. If we don't see a drop in the dollar, then we should anticipate that the large trade deficit will persist. In this case, our alternatives are large budget deficits or unemployment. That's it: those who are not prepared to push for a lower-valued dollar but still want a balanced budget are, in effect, asking for more unemployment.

Flipping this over, we can attain full employment by running large budget deficits. Given the size of the current trade deficit, budget deficits of the size needed to bring the economy back to full employment would probably be in the neighborhood of $1 trillion a year, or 6 percent of GDP. The U.S. government can certainly run deficits of this size for a very long time. In the downturn, financial markets have shown little reluctance to hold U.S. government bonds at extraordinarily low interest rates; in fact, the real interest rate on long-term bonds has been close to zero. This makes borrowing for both short-term stimulus and longer-term investment measures attractive. This would mean not just physical infrastructure (including retrofitting buildings to make them more energy efficient) but research and development in a wide range of sectors. Since the main point is to stimulate demand, we can also experiment in various areas. For example, we could allocate money to allow some cities to offer free bus fares for two years.
It would be interesting to see the extent to which bus travel can be encouraged if it were simple, quick, and free. The potential reduction in greenhouse gas emissions would be substantial. We could also use public funds to encourage shorter work weeks/work years. It is important to realize that our central problem right now is too much supply, not too little. Traditional stimulus addresses this problem by increasing demand. We can also address the problem by giving employers an incentive to reduce work hours in family-friendly ways. We work 20 percent more hours on average than do people in Western Europe. If our work years were comparable in length to those in Western Europe, unemployment would be immediately eliminated. Of course such a transition could not be accomplished overnight, but there is no reason that government funds could not be used to provide incentives for shortening work hours with policies like paid vacations, paid family leave, and paid sick days, rather than paying workers unemployment benefits. There is already a short-work program attached to the unemployment insurance system in 25 states (including New York and California), but the take-up rate on this program is low. If the problem is that we don't want all the goods and services that we are capable of producing, then a simple answer would be to produce less and let people share the leisure.

If this discussion seems dismissive of deficit concerns, it is because it is based in economic logic and not Washington-generated hysteria. We are not and cannot be Greece. That is not a subjective assessment of the relative strength of the U.S. and Greek economies, where we could turn out to be Greece at some point in the future. It is a statement about the fundamental differences in our currency regimes. The United States has its own currency. Greece does not. If we actually saw the investor panic that the Washington deficit hawks crave, we could always have the Federal Reserve Board just buy up U.S. debt. Greece did not have this option with the euro. This could create inflation, but only in a context where the country was seeing a serious problem of excess demand: too many dollars chasing too few goods and services. Inflation does not just drop out of the sky. This means that the idea that we have to fear investors turning on a dime and running from the dollar is nonsense. We could envision scenarios in which we overheat the economy and have serious problems with inflation, but that will not happen overnight, and it certainly will not happen in a context where we are 9 million jobs below the trend level of employment, as is the case presently.

There is one other crucial point about the need to get to full employment. Low levels of unemployment disproportionately benefit those in the bottom half, and especially the bottom third, of the income distribution. There is no better policy for ensuring that those at the middle and bottom share in the gains of economic growth. In fact, the decision to have fiscal and monetary policies that do not bring the economy to full employment can be viewed as a decision to run policies that redistribute income upward, since that is their clear effect.

Long-term Budget Deficits and Social Insurance

While there may be no reason to worry about the budget deficit in the near or even intermediate future, the longer-term projections showing large deficits should provide some cause for concern.
It is worth noting that even these longer-term projections show much smaller deficits now than they did a few years ago, due to the slower projected pace of health care cost growth. Nonetheless, there are still substantial, if manageable, increases in spending projected over the next two decades. The main sources are, of course, Social Security and Medicare. It is difficult to see how anyone can be too concerned about the projected path of Social Security spending. It went from 4.1 percent of GDP in 2000 to 5.1 percent in 2013. It is expected to rise by another 1.1 percentage points of GDP over the next twenty years. It's not clear why anyone would view this as a major problem. It is also worth asking if we think the problem is that seniors have too much money now. Their median income is less than $20,000 a year. With the collapse of the defined benefit pension system and only a small fraction of the population able to accumulate significant assets in 401(k) or other retirement accounts, we should be asking whether benefits should be raised rather than lowered. Ideally, we would have a second leg to the retirement income system where middle- and moderate-income people can accumulate savings to help support themselves in retirement, but we don't have that now for most retirees or near retirees. Unless and until we do have a system that allows most workers to supplement their retirement income, we must recognize that Social Security is the only real retirement system for much of the population. (It provides more than 90 percent of the income for 40 percent of seniors.) The program is projected to face a shortfall in the years after 2033. Inequality is a big part of this story. If so much income had not been redistributed upward over the past three decades, placing it above the cap on taxable payroll, the projected shortfall would be 30-40 percent smaller.
The other reason inequality is important to the solvency of Social Security is that workers' willingness to pay taxes will depend in part on the growth of their wages. If workers were getting their share of productivity growth, so that real wages were rising 1.0-1.5 percent annually, it is likely that they would be more receptive to giving back 0.1 percentage points of this increase in the form of higher payroll taxes. It is much better to get a 1.5 percent wage hike with a 0.1 percentage-point increase in the payroll tax than no increase in wages and no increase in taxes. In short, in a context where workers are getting their share of the economy's growth, it would be difficult to see the problems facing Social Security as anything other than trivial. This gets us to Medicare and other government health care spending. The figure below adjusts CBO's long-term deficit projections assuming that per person costs in the United States were the same as in Germany or the U.K. In either example, we would be looking at huge long-term surpluses in the primary budget.

Progressives should be upset by the projections of exploding growth. Essentially, they imply that we will be devoting an ever-larger share of the economy to paying rents to providers in the health care industry. While progressives must be ardent defenders of quality health care, we should also be vigilant in attacking rent-seeking that redistributes an enormous amount of income upward and is often harmful to the public's health. If our per person health care costs were comparable to those of any other wealthy country, we would be looking at huge budget surpluses, not deficits. Just to take the most obvious example, doctors in the United States earn roughly twice as much as the average for other wealthy countries. There is no reason to believe that we get better quality care for these generous paychecks. The country could save close to $80 billion a year (0.5 percent of GDP) if our doctors were paid the same as doctors in Europe or Canada. We could get much of the way towards bringing doctors' pay in line with other countries by exposing them to international competition. It is incredible that we openly discuss bringing in immigrant STEM workers, nurses, and even farmworkers to bring down the pay in these sectors, but no one ever raises the issue of bringing in foreign doctors. There is no reason that progressives should not make opening up the medical profession to market forces a big item on our agenda. Doctors are the largest single occupation among the one percent, and virtually all of them are in the richest 5 percent of workers. We can make the economy more efficient, saving trillions of taxpayer dollars in the decades ahead, and promote equality by bringing the pay of our doctors in line with the rest of the world. The same applies to drugs. We spend more than $300 billion a year on drugs that would probably not cost one tenth as much in a free market. The reason is government-granted patent monopolies.
The justification for patent monopolies is that they are needed to finance research. However, there are much more efficient mechanisms for financing research. Everyone who has had an intro economics class can identify all the bad things that happen when the government imposes a tariff or quota that raises the price of a product by 20 or 30 percent. All the same bad things happen, and more, when the government grants a patent monopoly that allows drug companies to charge prices that are thousands of percent above the free market price. The situation is made even worse by the problem of asymmetric information: drug companies know far more about their drugs than do patients or even doctors. In this context we should expect to see drug companies mislead the public about the safety and efficacy of their drugs and promote them for inappropriate uses. This is of course what we do see. The result is that people often get bad care and needlessly suffer injury or even death. That's what happens when the government intervenes in the market. There is a much longer list of ways that we can get our health care costs in line with the rest of the world. All will look like political non-starters. That should not be relevant in the context of the question asked here. We do have a problem of out-of-control health care costs for both the public and private sectors. Progressives should have our to-do list to throw out in public wherever and whenever possible. Let the Peter Peterson types tell us that we can't do anything about doctors' salaries because they are too powerful. The point is that we need a clear answer for the people who are scared by the projections of exploding health care costs. There are ways to deal with these costs, if the rich and powerful would just let us. We do not have to worry about a lack of ideas. Our problem is the lack of political power to implement them.

Posted by staff on January 27, 2014 3:37 PM

Menzie Chinn, The New Palgrave Dictionary of Economics (2013).

fiscal multipliers
The concept of fiscal multipliers is examined in the context of the major theoretical approaches. Differing methods of calculating multipliers are then recounted (structural equations, VAR, simulation). The sensitivity of estimates to conditioning on the state of the economy (slack, financial system) and policy regimes (exchange rate system, monetary policy reaction function) is discussed.

Introduction
The fiscal multiplier plays a central role in macroeconomic theory; at its simplest level, it is the change in output for a change in a fiscal policy instrument. For instance,

dY_t / dZ_t

where Y is output (or some other activity variable) and Z is a fiscal instrument: either government spending on goods and services, government transfers, or taxes or tax rates. Since there are typically lags in the effects, one should distinguish between impact multipliers (above) and the cumulative multiplier:

Σ_{j=0}^{n} dY_{t+j} / Σ_{j=0}^{n} dZ_{t+j}

The interpretation of the fiscal multiplier is complicated by the fact that it is not a structural parameter. Rather, in most relevant contexts, the multiplier is a function of structural parameters and policy reaction parameters. The issue of fiscal multipliers took on heightened importance in the wake of the 2008 global financial crisis, in which monetary policy and nondiscretionary fiscal policy proved insufficient to stem the sharp drop in income and employment. Substantial confusion regarding the nature and magnitude of fiscal multipliers arose; many of the disagreements remain. This survey reviews the theoretical bases for the fiscal multiplier in differing frameworks. Then the differing methodologies for assessing the magnitude of differing multipliers are reviewed. Special cases and allowances for asymmetric effects are examined.
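The two definitions are straightforward to operationalise. In the sketch below, the arrays dY and dZ are made-up impulse responses of output and of the fiscal instrument (they are hypothetical numbers, not estimates from any study), used only to show how the impact and cumulative multipliers are computed.

```python
# Impact vs. cumulative fiscal multipliers from impulse responses.
# dY[j] and dZ[j]: hypothetical responses of output and of government
# spending j quarters after the shock, in common GDP units.

dY = [0.8, 0.9, 0.7, 0.4, 0.2]   # output response (illustrative)
dZ = [1.0, 0.8, 0.5, 0.2, 0.0]   # spending response (illustrative)

# Impact multiplier: dY_t / dZ_t at horizon zero.
impact_multiplier = dY[0] / dZ[0]

def cumulative_multiplier(dY, dZ, n):
    """Sum of dY[0..n] divided by sum of dZ[0..n]."""
    return sum(dY[:n + 1]) / sum(dZ[:n + 1])

print(impact_multiplier)                 # 0.8
print(cumulative_multiplier(dY, dZ, 4))  # 3.0 / 2.5, i.e. about 1.2
```

Note that the cumulative multiplier here exceeds the impact multiplier because spending is withdrawn faster than the output response fades, which is exactly why the two concepts can give different answers for the same episode.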

Theory

The neoclassical synthesis
The simplest way to understand multipliers is to consider an aggregate supply-aggregate demand model in the Neoclassical Synthesis, essentially a framework with short-run Keynesian-type attributes and long-run Classical properties. While this framework is not particularly rigorous, it turns out that many of the basic insights gleaned in other approaches can be understood in this framework.


For the moment, think of aggregate demand as separable from aggregate supply. Demand depends on fiscal policy and monetary policy, while the long-run aggregate supply curve is determined by the level of technology, the labour force, and the capital stock. In the short run, a higher price level is associated with higher economic activity. Over time, the price level adjusts toward the expected price level, and any deviation of output from full employment is eventually eroded. Hence, in the long run, the Classical model holds, so that any fiscal policy has zero effect. This framework is sometimes called the Neoclassical synthesis. The more responsive the price level to the output gap, the smaller the change in income for any given government spending increase. In the extreme case, where there is no response of wages and prices to tightness in the labour and product markets, the multiplier is relatively large. In this Keynesian model, the multiplier is a positive function of the marginal propensity to consume. From the national income accounting perspective, a distinction has to be made between spending on goods and services and transfer expenditures. The former will have a larger impact on output than the latter. In the other extreme case, where wages and prices are infinitely responsive to the output gap, the short-run aggregate supply and long-run aggregate supply curves are the same. Then clearly the fiscal multiplier is zero. (Note that the supply-side perspective can be interpreted in the framework of the Neoclassical Synthesis. The long-run aggregate supply depends on the capital stock and labour force employed, as well as the level of technology. If marginal tax rate reductions increase employment and/or investment, then the multiplier for tax rate changes could be positive, even in the absence of demand effects.) In addition, the multiplier also depends critically on the conduct of monetary policy.
When policy controls the money supply, the multiplier depends on the income and interest sensitivities of money demand. In the more general case where there is a monetary policy reaction function, the multiplier will depend on the reaction function parameters. For instance, if the central bank is completely accommodative (i.e. keeps the interest rate constant), the multiplier is larger than if it is non-accommodative (as discussed further in the section on monetary regimes).
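In the textbook fixed-price version of this Keynesian model, the spending multiplier is 1/(1 - c) and the transfer multiplier is c/(1 - c), where c is the marginal propensity to consume; the gap between the two illustrates the point above that purchases have a larger output effect than transfers. A minimal sketch (the function names are ours, and the formulas abstract from taxes, imports, and any monetary response):

```python
# Textbook fixed-price Keynesian multipliers as a function of the
# marginal propensity to consume c (0 < c < 1):
#   spending multiplier:  1 / (1 - c)
#   transfer multiplier:  c / (1 - c)
# Transfers enter demand only through consumption, hence the smaller
# value: the two always differ by exactly one round of direct spending.

def spending_multiplier(c):
    return 1.0 / (1.0 - c)

def transfer_multiplier(c):
    return c / (1.0 - c)

for c in (0.5, 0.8):
    print(c, spending_multiplier(c), transfer_multiplier(c))
# c = 0.5 -> spending about 2, transfer about 1
# c = 0.8 -> spending about 5, transfer about 4
```

As c rises, both multipliers grow without bound, which is why empirical estimates of the consumption function were so central to the early structural literature discussed below.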

New Classical approaches

The real business cycle (RBC) approaches can be thought of as stochastic versions of the Classical models. One of the defining features of these types of models is the incorporation of microfoundations, in particular intertemporal considerations. With infinitely lived agents and no nominal rigidities, nondistortionary taxes have no impact on the present value of income. Hence, tax cuts have no impact on consumption, and thus on income. This tax cut result is often characterised as Ricardian equivalence (Barro, 1974). The implications of government spending are more difficult to analyse. In particular, if government spending is financed by higher non-distortionary taxes, then after-tax income declines. As a consequence, labour effort increases, and output (measured as the sum of private and public consumption) rises. In the standard setup, where government consumption yields no utility, social welfare decreases even though output rises. When distortionary taxes are used to pay for government spending, then both output and social welfare will decline. Then the government spending multiplier would be negative. While the stereotype of the RBC approach is consistent with small multipliers, small variations in the assumptions can deliver large multipliers.


For instance, assuming that government capital and private capital and labour are complements can deliver large fiscal multipliers (Baxter and King, 1993). Notice, however, that the multipliers in this case do not arise from the familiar demand-side effects, but rather from supply-side effects.

New Keynesian models

New Keynesian models represent the result of combining microfounded models incorporating intertemporal optimisation with Keynesian-type nominal and real rigidities. Such models are associated with Gali and Woodford, for instance. The basis of these models is the real business cycle model, with money introduced using money-in-utility functions. The deviations from the RBCs usually come in the form of rigidities, both nominal and real. Nominal rigidities are often introduced by way of sticky prices; prices adjust at random points in time (often called Calvo pricing). Real rigidities often include adjustment costs (say, for investment) and deviations from full intertemporal optimisation: for instance, rule-of-thumb or hand-to-mouth consumers (e.g. Gali et al., 2007). In addition to allowing the models to fit the data better, the inclusion of these rigidities provides a role for fiscal as well as monetary policy. Because the models are built around an essentially neoclassical framework, policies do not have large long-run effects. However, in the short term, monetary and fiscal policies have an effect on output. The magnitude of the impact depends on the various parameters of the model, and, as in the Keynesian model, on the nature of the monetary policy reaction function. An excellent overview of how these factors come into play in determining the multiplier is provided by Woodford (2011). One key limitation highlighted by the financial crisis and the ensuing recession and recovery is the omission of financial frictions. In fact, the financial sector in the typical New Keynesian model is usually very simple (a single bond, for instance; in two-country models, uncovered interest parity might be relaxed by the inclusion of an ad hoc risk premium term). Summing up, one can see that the different types of model will deliver fiscal multipliers of almost any magnitude.
Moreover, even models of a particular class can deliver quite different multiplier values, depending on underlying parameter values and the assumptions regarding monetary policy reaction functions. As a consequence, one can only address the magnitude of multipliers empirically.

Empirics
There are many ways of calculating multipliers, with the approaches often associated with certain theoretical frameworks. However, in general, there are three major approaches: (1) structural econometric, à la Cowles Commission; (2) vector autoregressions (VARs); and (3) simulation results from dynamic stochastic general equilibrium (DSGE) models. There are also other miscellaneous regression approaches.

Structural econometric approaches

The earliest approach to estimating multipliers involved estimating behavioural equations for the economy. Since the multiplier depends critically upon the marginal propensity to consume, estimates of the consumption function are central to the enterprise of calculating the multiplier. This enterprise is closely associated with the Cowles Commission approach to econometrics, which used (Keynesian) theory to achieve identification in multi-equation systems. Large-scale macroeconometric models are the descendants of the early Keynesian Klein-Goldberger model (Goldberger, 1959), and despite the disdain with which such models are held in academic circles, they still provide the basis for most estimates of multipliers. It appears that business sector economists still find such models useful for forecasting and policy analysis. They include the models run by Global Insight-IHS and Macroeconomic Advisers. The equations in such models include, for instance, a consumption equation, an investment equation, and price adjustment equations. Identification would require that there be a sufficient number of exogenous variables. Two assaults on this approach include the Lucas econometric policy evaluation critique, and the charge of incredible identifying assumptions (Sims, 1980). In the former case, the relevant question is whether the estimation procedure (which typically incorporates a complicated lag structure) actually identifies parameters that are invariant to policy changes (such as government spending changes). (Ericsson and Irons (1995) have argued that the Lucas critique is actually seldom relevant, given that large policy changes are rare.) In the latter, the concern is that identification is not possible, since there are very few truly exogenous variables. This concern motivates the enterprise of estimating vector autoregressions (described below). While it is customary to disparage these types of model as eschewing intertemporal considerations, this characterisation is not always accurate. Some macroeconometric models incorporate model-consistent expectations, essentially an implementable version of rational expectations.
Taylor (1993) is an early example of a relatively conventional macroeconometric model with forward-looking expectations. Other cases include the IMF's MULTIMOD and the Fed's FRB/US model; see Laxton et al. (1998) and Brayton et al. (1997).
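Because the multiplier's dependence on the marginal propensity to consume is central to this enterprise, a minimal sketch of the textbook static multiplier may be useful. The formula and the parameter values below are standard textbook assumptions, not figures taken from this article:

```python
# Textbook Keynesian spending multiplier (illustrative, not the article's):
# dY/dG = 1 / (1 - c*(1 - t) + m), where c is the marginal propensity to
# consume, t a proportional tax rate, and m the marginal propensity to import.

def spending_multiplier(mpc: float, tax_rate: float = 0.0,
                        import_propensity: float = 0.0) -> float:
    """Static multiplier: a higher MPC raises it; tax and import leakages lower it."""
    leakage = 1.0 - mpc * (1.0 - tax_rate) + import_propensity
    return 1.0 / leakage

print(spending_multiplier(0.6))              # no leakages: 2.5
print(spending_multiplier(0.6, 0.25, 0.1))   # taxes and imports shrink it
```

The same leakage logic is what makes open-economy multipliers smaller, a point the entry returns to below.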

Vector autoregressions (VARs)


Sims (1980) argued that the Cowles Commission approach to estimating large systems of equations required 'incredible' identifying assumptions. His alternative approach involved estimating a small system of equations, where each variable is modelled as a function of lags of all variables in the system. In Sims' original formulation, a recursive ordering is assumed. Since there are no exogenous variables, the response is expressed in terms of the error term or 'shock'. That is, the response is expressed in terms of the unpredictable component of government spending or tax revenues, and not in terms of a given change in either of those instruments. There is no reason why the nature of shocks should follow a recursive ordering. Alternative approaches include long-run restrictions, wherein one variable is not affected by a shock in another variable in the long run. This approach was pioneered in Blanchard and Quah (1989). Short-run restrictions can also be incorporated, such that a shock to one variable has no immediate impact on another, as in Clarida and Gali (1994). Blanchard and Perotti (2002) used institutional features to add additional restrictions. Yet other types of restriction, including sign restrictions on negative or positive responses, are also feasible (Mountford and Uhlig, 2009). Ramey (2011b) focused on news in defence spending as a means of circumventing issues of identifying exogenous shocks. In all these cases, belief in the results depends upon how plausible one finds the identifying restrictions, including the restrictions on the number of relevant equations. These VARs typically employ relatively few equations, due to the large number of parameters that have to be estimated. Another way of dealing with the issue of distinguishing between endogenous and exogenous fiscal measures is to use a 'narrative' approach, as pioneered by Romer and Romer (1989) for monetary policy. Romer and Romer (2010) estimated the impact of tax changes on output using this approach.
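As a concrete illustration of the recursive identification scheme described above, the following self-contained sketch simulates a two-variable system (government spending and output), estimates the VAR by OLS, and traces the response of output to a Cholesky-orthogonalised spending shock. All numbers are simulated for illustration; this is not the estimation code of any of the papers cited.

```python
# Minimal recursively identified VAR sketch (simulated data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
T, k, p = 400, 2, 1                        # sample size, variables (g, y), lags

# Simulate a stationary VAR(1): x_t = A x_{t-1} + u_t
A_true = np.array([[0.5, 0.0], [0.2, 0.6]])
u = rng.normal(size=(T, k))
x = np.zeros((T, k))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + u[t]

# Estimate by OLS: regress x_t on x_{t-1} (each equation has lags of all variables)
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ A_hat.T
sigma = resid.T @ resid / (T - 1 - k * p)

# Recursive (Cholesky) identification, spending ordered first:
# a 'spending shock' is the orthogonalised residual of the first equation.
P = np.linalg.cholesky(sigma)
impact = P[:, 0] / P[0, 0]                 # normalise to a unit spending shock
horizons = 8
irf = np.zeros((horizons, k))
for h in range(horizons):
    irf[h] = np.linalg.matrix_power(A_hat, h) @ impact

print("Response of output to a unit spending shock:", irf[:, 1])
```

With real data, the credibility of the resulting impulse responses rests entirely on the plausibility of the recursive ordering, which is exactly the concern the text raises.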

Simulations using dynamic stochastic general equilibrium models


In response to the criticism of the ad hoc nature of the large-scale macroeconometric models, most recent analyses of policy effects have been conducted using dynamic stochastic general equilibrium (DSGE) models which incorporate, to a greater or lesser degree, New Keynesian formulations. The equations in these models are either calibrated (that is, parameter values are selected) or estimated, or a combination thereof is used. The majority of these models incorporate Ricardian equivalence, contrary to the bulk of empirical evidence. Hence, almost by assumption, fiscal multipliers are typically small relative to those obtained in traditional macroeconometric models. In cases where Ricardian equivalence is dispensed with, multipliers are typically larger. (See, for instance, Kumhof et al. (2010). Note that instead of the future tax burden rising with spending, future spending might be restrained; Corsetti et al. (2010) and Corsetti et al. (forthcoming) trace out the dynamics in this case.)

Miscellaneous approaches
Since multipliers are changes in output for a change in a fiscal instrument, estimation can proceed in a variety of ways. The simplest entails regression of output changes on instrument changes; the challenge is controlling for other effects. Since discretionary fiscal policy reacts, by definition, to other factors that might be unobservable to the econometrician, there are serious challenges to this approach. For instance, Almunia et al. (2010) use panel regression analysis (in addition to VARs) for a set of countries; Nakamura and Steinsson (2011) for a set of US states; and Acconcia et al. (2013) for Italian provinces. In contrast, Barro and Redlick (2009) use a long time series for the USA. (Reichling and Whalen (2012) survey 'local' multipliers, which tend to focus on employment rather than output effects in subnational units. Other relevant studies (typically focusing on employment effects) include Chodorow-Reich et al. (forthcoming), Mendel (2012) and Moretti (2010).)
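The 'simplest' approach above can be sketched in a few lines. The data here are simulated, and the slope recovers the true multiplier only because the spending changes are exogenous by construction; with real data, the policy endogeneity described in the text biases exactly this regression.

```python
# Regressing output changes on fiscal-instrument changes (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n = 200
d_g = rng.normal(size=n)                 # spending changes, exogenous here
d_y = 1.5 * d_g + rng.normal(size=n)     # assumed "true" multiplier of 1.5

# OLS of output changes on a constant and spending changes
X = np.column_stack([np.ones(n), d_g])
beta = np.linalg.lstsq(X, d_y, rcond=None)[0]
print("Estimated multiplier:", beta[1])  # close to 1.5
```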

A survey of basic results


Obviously the literature is too voluminous to review comprehensively. I focus first on the USA. CBO (2012a, Table 2) has provided a range of estimates that the CBO considers plausible, based upon a variety of empirical and theoretical approaches (see Table 1). For purchases of goods and services, the range is 0.5 to 2.5; in line with demand-side models, the cumulative multipliers for government spending on transfers to individuals are typically lower, and range from 0.4 to 2.1. Tax cuts for individuals have a multiplier of between 0.3 and 1.5, if aimed at households with a relatively high marginal propensity to consume. (See the survey of approaches in the appendix to CBO (2012a).)

Table 1  Ranges for US cumulative output multipliers (source: CBO (2012a), Table 2).

Type of activity                                                      Low estimate   High estimate
Purchase of goods and services by the Federal Government                  0.5             2.5
Transfer payments to state and local governments for infrastructure       0.4             2.2
Transfer payments to state and local governments for other purposes       0.4             1.8
Transfer payments to individuals                                          0.4             2.1
One-time payments to retirees                                             0.2             1.0
Two-year tax cuts for lower- and middle-income people                     0.3             1.5
One-year tax cut for higher-income people                                 0.1             0.6

When assessing whether a government spending multiplier is large or small, the value of unity is often taken as a threshold. From the demand-side perspective, when the spending multiplier is greater than one, the private components of GDP rise along with government spending on goods and services; when it is less than one, some private components of demand are crowded out. (Since transfers affect output indirectly through consumption, multipliers for government transfers to individuals should be smaller than multipliers for spending on goods and services.) Reichling and Whalen (2012) discuss the range of multiplier estimates associated with various approaches. Ramey (2011a) also surveys the literature, and concludes that spending multipliers range from 0.8 to 1.5. Romer (2011) cites a higher range of estimates, conditioned on those relevant to post-2008 conditions. The above estimates pertain to the USA. Obviously, one can expand the sample to other countries and other times. Van Brusselen (2009) and Spilimbergo et al. (2009) survey a variety of developed-country multiplier estimates. Almunia et al. (2010) find, using a variety of econometric methodologies, that fiscal multipliers during the interwar years were in excess of unity, when looking across countries. Barro and Redlick (2009) incorporate WWII data in their analysis of US multipliers; critics have noted that rationing during the WWII period makes extrapolation of their results to peacetime conditions questionable.
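The unity threshold discussed above is just accounting: if the multiplier is m, GDP rises by m times the spending change, so the private components of GDP change by (m - 1) times the spending change. A two-line illustration, with hypothetical values:

```python
# Crowding in vs. crowding out around the unity threshold (hypothetical values).
def private_demand_change(multiplier: float, d_g: float) -> float:
    """Change in private GDP components implied by a spending change d_g."""
    return (multiplier - 1.0) * d_g

print(private_demand_change(1.5, 100.0))   # multiplier > 1: crowding in, +50.0
print(private_demand_change(0.8, 100.0))   # multiplier < 1: crowding out, about -20
```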

Distinctions

Large, closed vs. small, open economies


Theory suggests that, at least from the demand side, fiscal multipliers should be smaller in open economies (where openness is measured in terms of trade in goods and services), holding all else constant. This is because the leakage from a small open economy, due to imports (or purchases of internationally tradable goods more generally) rising with income, mitigates the recirculation of spending in the economy. In a closed economy, the marginal propensity to import is arguably smaller. Ilzetzki et al. (forthcoming) estimate panel VARs and find that small open economies do indeed have smaller multipliers. In addition, for large economies, some portion of the leakage of spending that occurs through imports would return as increased demand for exports. That means that the large-country multiplier would be larger than that for a small country, holding all other characteristics, such as trade openness, constant.

Fixed vs. flexible exchange rate regimes


Ilzetzki et al. find that countries under fixed exchange rates have larger multipliers than those under flexible exchange rates. This finding is in accord with the Mundell–Fleming model, which predicts that under fixed exchange rates, the monetary authority is forced to accommodate fiscal policy. With high capital mobility (which is likely in the set of countries examined), monetary policy has to be very accommodative in order to maintain the exchange rate peg. Corsetti et al. (2012) obtain similar results regarding the magnitude of the multiplier, even after controlling for other factors (debt levels, etc.), despite the fact that they find the policy rate rises; they argue that imperfect peg credibility accounts for this effect. In a slightly different context, Nakamura and Steinsson (2011) confirm this result. Examining states in the USA, they find that the fiscal multiplier is 1.5 for government spending on goods and services. Since the USA is a monetary union, they interpret this multiplier as one pertaining to small economies on fixed exchange rates.

Monetary regimes (inflation targeting, zero interest rate bound)


Perhaps the most important insight arising from the debates over fiscal policy during and after the Great Recession is that the multiplier depends critically on the conduct of monetary policy. This insight is obvious if one thinks about policy in a standard IS-LM framework, where the interest rate is constant either because of accommodative monetary policy (Davig and Leeper, 2009), or because the economy is in a liquidity trap. Christiano et al. (2011) provide a rationale for this effect in the context of a liquidity trap in a DSGE. Coenen et al. (2012) show that in DSGEs, the degree of monetary accommodation is critical. When central banks follow a Taylor rule or inflation forecast-based rules, multipliers are relatively small. However, when monetary policy is accommodative (that is, interest rates are kept constant), the cumulative multiplier is greater. This finding is consistent with the idea that fiscal policy in a liquidity trap is equivalent to a helicopter drop. As DeLong (2010) notes, when the price level is fixed, a helicopter drop changes nominal demand one-for-one, and therefore must have real effects. However, a helicopter drop is a combination of (i) an open market operation (OMO) purchasing bonds for cash, and (ii) a bond-financed tax cut. The monetary effects of an OMO plus the fiscal effects of a tax cut must therefore add up to the effects of a helicopter drop. In a liquidity trap, where one believes an OMO is powerless, fiscal expansion must therefore be powerful. This insight is of particular importance because estimates of multipliers based upon historical data are likely to be less relevant in current circumstances, where interest rates have been kept near zero since 2008.
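DeLong's decomposition can be written as a simple accounting identity (the notation is illustrative, not the article's). Let ΔM be the size of the drop, and track the public's holdings of (cash, bonds):

```latex
% Helicopter drop = open market operation + bond-financed tax cut.
% Entries are changes in the public's holdings of (cash, bonds).
\underbrace{(+\Delta M,\; 0)}_{\text{helicopter drop}}
  \;=\;
\underbrace{(+\Delta M,\; -\Delta M)}_{\text{OMO: bonds bought for cash}}
  \;+\;
\underbrace{(0,\; +\Delta M)}_{\text{bond-financed tax cut}}
```

At the zero lower bound the OMO merely swaps two assets the public regards as equivalent, so its effect is nil; the identity then attributes the full effect of the helicopter drop to the fiscal component.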


[Figure 1: Historical multiplier for total government spending, 1950–2005 (source: Auerbach and Gorodnichenko (2012b)).]

There is some evidence that the effects of fiscal policy in Europe have been unusually large in recent years (see Blanchard and Leigh, forthcoming). One of the reasons is that the zero lower bound has prevented central banks from cutting interest rates to offset the negative short-term effects of fiscal consolidation.

Asymmetric fiscal effects


Many of the earlier studies assumed that the impact of fiscal policy was homogeneous across different states of the economy. Recent work has sought to relax this assumption. Given that the size of the multiplier is more relevant in certain circumstances than in others, accounting for heterogeneous effects is critically important.

State-dependent multipliers
The demand-side interpretation of the multiplier relies upon the possibility that additional factors of production will be drawn into use as demand rises. If factors of production are constrained, or are relatively more constrained as economic slack disappears, then one might entertain asymmetry in the multiplier. Auerbach and Gorodnichenko (2012a,b) and Fazzari et al. (2012) use VARs which allow the parameters to vary over expansions and contractions. (Auerbach and Gorodnichenko use a smooth transition threshold, where the threshold is selected a priori; Fazzari et al. estimate a discrete threshold.) Baum et al. (2012) condition on the output gap. The common finding in these instances is that multipliers are substantially larger during recessions. To highlight the variation in the multiplier for the USA, I reproduce Figure 5 from Auerbach and Gorodnichenko (2012b), which plots their estimates of the multiplier over time (Figure 1). A different perspective on why long-term multipliers are larger during periods of slack is delivered by DeLong and Summers (2012). (Quantification of long-term impacts of depressed activity on potential GDP can be found in CBO (2012b).) They argue that long periods of depressed output can themselves affect potential GDP, following the analysis of Blanchard and Summers (1986). The prevalence of high rates of long-term unemployment is one obvious channel by which hysteretic effects can be imparted. When combined with an accommodative monetary policy or liquidity trap, the long-term multiplier can be substantially larger than the impact multiplier. Hence fiscal multipliers are largest exactly at times when expansionary fiscal policy is most needed. Estimates of multipliers based on averaging over periods of high and low activity are therefore useful, but not necessarily always relevant to the policy debate at hand.
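The smooth-transition device mentioned above can be sketched as a logistic weight on the 'recession' regime, so that the VAR's coefficients are a recession-weighted average rather than a hard switch. The logistic functional form is the one commonly used in this literature, but the parameter value below is an illustrative assumption, not Auerbach and Gorodnichenko's calibration:

```python
# Smooth-transition regime weight (illustrative sketch): the weight on the
# 'recession' regime varies smoothly with an indicator z of the business cycle
# (e.g. standardised output growth), approaching 1 in deep recessions.
import math

def recession_weight(z: float, gamma: float = 1.5) -> float:
    """Logistic weight on the recession regime; gamma controls the transition speed."""
    return math.exp(-gamma * z) / (1.0 + math.exp(-gamma * z))

for z in (-2.0, 0.0, 2.0):
    print(z, round(recession_weight(z), 3))
```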

Low versus high debt levels


Ilzetzki et al. (forthcoming) determine that fiscal multipliers are essentially zero when debt is above the (sample) average. Corsetti et al. (2012) also find that multipliers are smaller when public debt is high, controlling for other factors, although the measured differences are modest. In high-debt situations, contractionary fiscal policy can in principle stimulate activity in the short run if it raises confidence in the government's solvency and reduces the need for disruptive adjustments later on (Blanchard, 1990). A recent theoretical analysis of fiscal policy under conditions of high sovereign risk is by Corsetti et al. (forthcoming). A number of empirical studies find evidence of such expansionary effects (Giavazzi and Pagano, 1990; Alesina and Perotti, 1995; Alesina and Ardagna, 2010; and others). Other papers suggest that such findings of expansionary effects are sensitive to how fiscal consolidation is defined (IMF, 2010), and that the famous cases of 'expansionary contractions' were typically driven by external demand rather than confidence effects (Perotti, 2011).

Ordinary versus stressed financial systems


Historical estimates of the fiscal multiplier also condition on data from periods when the financial system was operating normally, or was at least not highly impaired. However, the financial conditions during the crisis were arguably abnormal. To the extent that credit constraints were more binding (e.g. Eggertsson and Krugman, 2012), households could be expected to behave in a more Keynesian fashion, with less reference to permanent income. This would tend to result in a larger multiplier. See also Fernández-Villaverde (2010). Corsetti et al. (2012) confirm empirically (using VARs) that during times of financial crisis, fiscal multipliers are larger. They conjecture that liquidity-constrained households are more pervasive during crises. They add the caveat that this finding holds true when public finances are strong.

Conclusion
The magnitude of the fiscal multiplier, in theory and in the data, depends on the characteristics of the economy. In some senses this observation is obvious. What is less recognised is that the state of the economy is as important as, or more important than, many other aspects that have been the focus of analysis. The most critical aspects include the degree of slack in the economy, the state of the financial system, and the conduct of monetary policy.


Acknowledgments
I thank Giancarlo Corsetti, Brad DeLong, Jeffrey Frankel, Ethan Ilzetzki, Daniel Leigh, Felix Reichling, Carlos Vegh and the editor Garett Jones for very helpful comments.

Menzie Chinn

See also
monetary and fiscal policy overview; neoclassical synthesis; new Keynesian macroeconomics; vector autoregressions

Bibliography
Acconcia, A., Corsetti, G. and Simonelli, S. 2013. Mafia and public spending: evidence on the fiscal multiplier from a quasi-experiment. Mimeo (January).
Alesina, A. and Ardagna, S. 2010. Large changes in fiscal policy: taxes versus spending. In: Tax Policy and the Economy, Vol. 24 (ed. J. R. Brown). National Bureau of Economic Research, Cambridge, MA.
Alesina, A. and Perotti, R. 1995. Fiscal expansions and fiscal adjustments in OECD countries. Economic Policy, 10(21), 205–48.
Almunia, M., Bénétrix, A. S., Eichengreen, B., O'Rourke, K. H. and Rua, G. 2010. From great depression to great credit crisis: similarities, differences and lessons. Economic Policy, 25(62), 219–65.
Auerbach, A. J. and Gorodnichenko, Y. 2012a. Fiscal multipliers in recession and expansion. In: Fiscal Policy after the Financial Crisis (eds. A. Alesina and F. Giavazzi). University of Chicago Press, Chicago.
Auerbach, A. J. and Gorodnichenko, Y. 2012b. Measuring the output responses to fiscal policy. American Economic Journal: Economic Policy, 4, 1–27.
Barro, R. J. 1974. Are bonds net wealth? Journal of Political Economy, 82(6), 1095–117.
Barro, R. J. and Redlick, C. J. 2009. Macroeconomic effects of government purchases and taxes. NBER Working Paper No. 15369 (September).
Baum, A., Poplawski-Ribeiro, M. and Weber, A. 2012. Fiscal multipliers and the state of the economy. IMF Working Paper No. 12/286 (December).
Baxter, M. and King, R. G. 1993. Fiscal policy in general equilibrium. American Economic Review, 83(3), 315–34.
Blanchard, O. J. 1990. Comment on Francesco Giavazzi and Marco Pagano, 'Can severe fiscal consolidations be expansionary? Tales of two small European countries'. NBER Macroeconomics Annual, 5, 111–16.
Blanchard, O. and Leigh, D. Forthcoming. Growth forecast errors and fiscal multipliers. American Economic Review: Papers and Proceedings.
Blanchard, O. and Perotti, R. 2002. An empirical characterization of the dynamic effects of changes in government spending and taxes on output. Quarterly Journal of Economics, 117(4), 1329–68.
Blanchard, O. and Quah, D. 1989. The dynamic effects of aggregate demand and supply disturbances. American Economic Review, 79(4), 655–73.
Blanchard, O. and Summers, L. 1986. Hysteresis and the European unemployment problem. NBER Macroeconomics Annual.
Brayton, F., Mauskopf, E., Reifschneider, D., Tinsley, P. and Williams, J. 1997. The role of expectations in the FRB/US macroeconomic model. Federal Reserve Bulletin (April).
CBO. 2012a. Estimated Impact of the American Recovery and Reinvestment Act on Employment and Economic Output from October 2011 through December 2011. CBO, Washington, DC.
CBO. 2012b. What Accounts for the Slow Growth of the Economy After the Recession? CBO, Washington, DC.
Chodorow-Reich, G., Feiveson, L., Liscow, Z. and Woolston, W. G. Forthcoming. Does state fiscal relief during recessions increase employment? Evidence from the American Recovery and Reinvestment Act. American Economic Journal: Economic Policy.
Christiano, L., Eichenbaum, M. and Rebelo, S. 2011. When is the government spending multiplier large? Journal of Political Economy, 119(1), 78–121.
Clarida, R. and Gali, J. 1994. Sources of real exchange-rate fluctuations: how important are nominal shocks? Carnegie-Rochester Conference Series on Public Policy, 41, 1–56.
Coenen, G. et al. 2012. Effects of fiscal stimulus in structural models. American Economic Journal: Macroeconomics, 4(1), 22–68.
Corsetti, G., Kuester, K., Meier, A. and Müller, G. J. 2010. Debt consolidation and fiscal stabilization of deep recessions. American Economic Review: Papers and Proceedings, 100(2), 41–5.
Corsetti, G., Meier, A. and Müller, G. 2012. What determines government spending multipliers? Economic Policy (October), 521–65.
Corsetti, G., Kuester, K., Meier, A. and Müller, G. J. Forthcoming. Sovereign risk, fiscal policy, and macroeconomic stability. Economic Journal.
Davig, T. and Leeper, E. M. 2009. Monetary–fiscal policy interactions and fiscal stimulus. NBER Working Paper No. 15133 (July).
DeLong, B. J. 2010. Helicopter drop time: Paul Krugman gets one wrong. Grasping Reality with Both Invisible Hands: Fair, Balanced, and Reality-Based: A Semi-Daily Journal (14 July). Available at: http://delong.typepad.com/sdj/2010/07/helicopterdrop-time-paul-krugman-gets-one-wrong.html (accessed 15 February 2013).
DeLong, B. J. and Summers, L. 2012. Fiscal policy in a depressed economy. Brookings Papers on Economic Activity (in press).
Eggertsson, G. B. and Krugman, P. 2012. Debt, deleveraging, and the liquidity trap: a Fisher–Minsky–Koo approach. Quarterly Journal of Economics, 127(3), 1469–513.
Ericsson, N. and Irons, J. 1995. The Lucas critique in practice: theory without measurement. In: Macroeconometrics: Developments, Tensions, and Prospects (ed. K. D. Hoover). Springer, New York.
Fazzari, S. M., Morley, J. and Panovska, I. 2012. State dependent effects of fiscal policy. Australian School of Business Research Paper No. 2012 ECON 27.
Fernández-Villaverde, J. 2010. Fiscal policy in a model with financial frictions. American Economic Review, 100(2), 35–40.
Galí, J., López-Salido, J. D. and Vallés, J. 2007. Understanding the effects of government spending on consumption. Journal of the European Economic Association, 5(1), 227–70.
Giavazzi, F. and Pagano, M. 1990. Can severe fiscal consolidations be expansionary? Tales of two small European countries. NBER Macroeconomics Annual, 5, 75–111.
Goldberger, A. S. 1959. Impact Multipliers and Dynamic Properties of the Klein–Goldberger Model. North-Holland Publishing Company, Amsterdam.
Ilzetzki, E., Mendoza, E. G. and Vegh, C. A. Forthcoming. How big (small?) are fiscal multipliers? Journal of Monetary Economics.
IMF. 2010. Chapter 3: Will it hurt? Macroeconomic effects of fiscal consolidation. World Economic Outlook (October).
Kumhof, M., Laxton, D., Muir, D. and Mursula, S. 2010. The Global Integrated Monetary and Fiscal Model (GIMF): theoretical structure. IMF Working Paper No. 10/34.
Laxton, D., Isard, P., Faruqee, H., Prasad, E. and Turtelboom, B. 1998. MULTIMOD Mark III: the core dynamic and steady-state models. Occasional Paper 164. IMF, Washington, DC.
Mendel, B. 2012. Local multipliers: theory and evidence. Mimeo, Harvard University (September).
Moretti, E. 2010. Local multipliers. American Economic Review: Papers & Proceedings, 100 (May), 1–7.
Mountford, A. and Uhlig, H. 2009. What are the effects of fiscal policy shocks? Journal of Applied Econometrics, 24(6), 960–92.
Nakamura, E. and Steinsson, J. 2011. Fiscal stimulus in a monetary union: evidence from U.S. regions. NBER Working Paper No. 17391.
Perotti, R. 2011. The Austerity Myth: Gain Without Pain? NBER Working Paper No. 17571.
Ramey, V. A. 2011a. Identifying government spending shocks: it's all in the timing. Quarterly Journal of Economics, 126(1), 1–50.
Ramey, V. A. 2011b. Can government purchases stimulate the economy? Journal of Economic Literature, 49(3), 673–85.


Reichling, F. and Whalen, C. 2012. Assessing the short-term effects on output of changes in federal fiscal policies. Working Paper No. 2012-08. Congressional Budget Office, Washington, DC.
Romer, C. D. 2011. What do we know about the effects of fiscal policy? Separating evidence from ideology. Speech at Hamilton College, 7 November.
Romer, C. D. and Romer, D. H. 1989. Does monetary policy matter? A new test in the spirit of Friedman and Schwartz. NBER Macroeconomics Annual 1989. MIT Press, Cambridge, MA.
Romer, C. D. and Romer, D. H. 2010. The macroeconomic effects of tax changes: estimates based on a new measure of fiscal shocks. American Economic Review, 100 (June), 763–801.
Sims, C. 1980. Macroeconomics and reality. Econometrica, 48(1), 1–48.
Spilimbergo, A., Symansky, S. and Schindler, M. 2009. Fiscal multipliers. IMF Staff Position Note No. 09/11. IMF, Washington, DC.
Taylor, J. B. 1993. Macroeconomic Policy in a World Economy. W.W. Norton, New York.
Van Brusselen, P. 2009. Fiscal Stabilisation Plans and the Outlook for the World Economy. NIME Policy Brief No. 01-2009. Federal Planning Bureau, Brussels (April).
Woodford, M. 2011. Simple analytics of the government expenditure multiplier. American Economic Journal: Macroeconomics, 3(1), 1–35.

Government Spending Is No Free Lunch

Now the Democrats are peddling voodoo economics.

By Robert J. Barro
Updated Jan. 22, 2009 12:01 a.m. ET

Back in the 1980s, many commentators ridiculed as voodoo economics the extreme supply-side view that across-the-board cuts in income-tax rates might raise overall tax revenues. Now we have the extreme demand-side view that the so-called "multiplier" effect of government spending on economic output is greater than one -- Team Obama is reportedly using a number around 1.5.

To think about what this means, first assume that the multiplier was 1.0. In this case, an increase by one unit in government purchases and, thereby, in the aggregate demand for goods would lead to an increase by one unit in real gross domestic product (GDP). Thus, the added public goods are essentially free to society. If the government buys another airplane or bridge, the economy's total output expands by enough to create the airplane or bridge without requiring a cut in anyone's consumption or investment.

The explanation for this magic is that idle resources -- unemployed labor and capital -- are put to work to produce the added goods and services.

If the multiplier is greater than 1.0, as is apparently assumed by Team Obama, the process is even more wonderful. In this case, real GDP rises by more than the increase in government purchases. Thus, in addition to the free airplane or bridge, we also have more goods and services left over to raise private consumption or investment. In this scenario, the added government spending is a good idea even if the bridge goes to nowhere, or if public employees are just filling useless holes. Of course, if this mechanism is genuine, one might ask why the government should stop with only $1 trillion of added purchases.

What's the flaw?
The theory (a simple Keynesian macroeconomic model) implicitly assumes that the government is better than the private market at marshaling idle resources to produce useful stuff. Unemployed labor and capital can be utilized at essentially zero social cost, but the private market is somehow unable to figure any of this out. In other words, there is something wrong with the price system.

John Maynard Keynes thought that the problem lay with wages and prices that were stuck at excessive levels. But this problem could be readily fixed by expansionary monetary policy, enough of which will mean that wages and prices do not have to fall. So, something deeper must be involved -- but economists have not come up with explanations, such as incomplete information, for multipliers above one.

A much more plausible starting point is a multiplier of zero. In this case, the GDP is given, and a rise in government purchases requires an equal fall in the total of other parts of GDP -- consumption, investment and net exports. In other words, the social cost of one unit of additional government purchases is one. This approach is the one usually applied to cost-benefit analyses of public projects. In particular, the value of the project (counting, say, the whole flow of future benefits from a bridge or a road) has to justify the social cost. I think this perspective, not the supposed macroeconomic benefits from fiscal stimulus, is the right one to apply to the many new and expanded government programs that we are likely to see this year and next.

What do the data show about multipliers? Because it is not easy to separate movements in government purchases from overall business fluctuations, the best evidence comes from large changes in military purchases that are driven by shifts in war and peace. A particularly good experiment is the massive expansion of U.S. defense expenditures during World War II.
The usual Keynesian view is that the World War II fiscal expansion provided the stimulus that finally got us out of the Great Depression. Thus, I think that most macroeconomists would regard this case as a fair one for seeing whether a large multiplier ever exists.

I have estimated that World War II raised U.S. defense expenditures by $540 billion (1996 dollars) per year at the peak in 1943-44, amounting to 44% of real GDP. I also estimated that the war raised real GDP by $430 billion per year in 1943-44. Thus, the multiplier was 0.8 (430/540). The other way to put this is that the war lowered components of GDP aside from military purchases. The main declines were in private investment, nonmilitary parts of government purchases, and net exports -- personal consumer expenditure changed little. Wartime production siphoned off resources from other economic uses -- there was a dampener, rather than a multiplier.

We can consider similarly three other U.S. wartime experiences -- World War I, the Korean War, and the Vietnam War -- although the magnitudes of the added defense expenditures were much smaller in comparison to GDP. Combining the evidence with that of World War II (which gets a lot of the weight because the added government spending is so large in that case) yields an overall estimate of the multiplier of 0.8 -- the same value as before. (These estimates were published last year in my book, "Macroeconomics, a Modern Approach.")

There are reasons to believe that the war-based multiplier of 0.8 substantially overstates the multiplier that applies to peacetime government purchases. For one thing, people would expect the added wartime outlays to be partly temporary (so that consumer demand would not fall a lot). Second, the use of the military draft in wartime has a direct, coercive effect on total employment. Finally, the U.S. economy was already growing rapidly after 1933 (aside from the 1938 recession), and it is probably unfair to ascribe all of the rapid GDP growth from 1941 to 1945 to the added military outlays.
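Barro's back-of-the-envelope calculation can be reproduced in a few lines; the dollar figures are his estimates as quoted above:

```python
# Barro's WWII multiplier arithmetic: defense spending rose by $540bn/year
# (1996 dollars) at the 1943-44 peak and real GDP by $430bn/year, so the
# implied multiplier is 430/540, and the shortfall fell on other GDP
# components (a "dampener" rather than a multiplier).
delta_defense = 540.0   # change in defense spending, $bn per year
delta_gdp = 430.0       # change in real GDP, $bn per year

multiplier = delta_gdp / delta_defense
crowded_out = delta_defense - delta_gdp   # fall in other components of GDP

print(round(multiplier, 2))   # 0.8
print(crowded_out)            # 110.0
```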
In any event, when I attempted to estimate directly the multiplier associated with peacetime government purchases, I got a number insignificantly different from zero.

As we all know, we are in the middle of what will likely be the worst U.S. economic contraction since the 1930s. In this context and from the history of the Great Depression, I can understand various attempts to prop up the financial system. These efforts, akin to avoiding bank runs in prior periods, recognize that the social consequences of credit-market decisions extend well beyond the individuals and businesses making the decisions.

But, in terms of fiscal-stimulus proposals, it would be unfortunate if the best Team Obama can offer is an unvarnished version of Keynes's 1936 "General Theory of Employment, Interest and Money." The financial crisis and possible depression do not invalidate everything we have learned about macroeconomics since 1936.

Much more focus should be on incentives for people and businesses to invest, produce and work. On the tax side, we should avoid programs that throw money at people and emphasize instead reductions in marginal income-tax rates -- especially where these rates are already high and fall on capital income. Eliminating the federal corporate income tax would be brilliant. On the spending side, the main point is that we should not be considering massive public-works programs that do not pass muster from the perspective of cost-benefit analysis. Just as in the 1980s, when extreme supply-side views on tax cuts were unjustified, it is wrong now to think that added government spending is free.

Mr. Barro is an economics professor at Harvard University and a senior fellow at Stanford University's Hoover Institution.

How the euro synchronised EZ cycles


Ayako Saiki, Sunghyun Henry Kim, 2 February 2014

Before the introduction of the euro, it was hoped that by promoting increased intra-regional trade it would increase business-cycle synchronisation within the Eurozone, and thus help it to fulfil the criteria for an optimum currency area. This column presents recent research that compares the evolution of business-cycle synchronisation in the Eurozone and east Asia. While the euro has had some impact on business-cycle synchronisation in the Eurozone, it has done so not through increased intra-regional trade intensity, but rather through some other channel, most likely financial integration.

Prior to the introduction of the euro, the question of whether the Eurozone fulfils the conditions for an optimum currency area was highly debated (e.g. Bayoumi and Eichengreen 1992). In their seminal study, Frankel and Rose (1996) argue that the conditions for an optimum currency area (specifically, business-cycle synchronisation) can be satisfied ex post, because business-cycle correlation can be endogenous. Based on their empirical analysis of 21 industrial countries, they conclude that a currency union would generate stronger trade linkages (defined as trade intensity) among member countries, which would in turn generate higher business-cycle correlations among them.

Now, 15 years after its inception, has the euro increased intra-regional trade intensity and business-cycle correlation as predicted? In our recent working paper (Saiki and Kim 2014), we attempt to answer these questions by investigating trade patterns, business-cycle correlations, and their connecting factors in the Eurozone. Our finding is that, while the euro has had some impact on business-cycle synchronisation in the Eurozone, it is not through the channel that Frankel and Rose (1996) predicted.

To enrich our analysis, we compare our results for the Eurozone with those for east Asia during the same period (1980–2007).[1] East Asia's economies have rapidly integrated during this period without belonging to a currency union or fixed exchange-rate regime. Therefore, a comparison with the Eurozone should shed some light on the influence of a currency union from an additional angle.

Stylised facts
The stylised facts are as follows:

Business cycles: The increase in business-cycle correlation in the Eurozone has been quite gradual, and there is no indication that the introduction of the euro increased business-cycle synchronisation.[2] On the other hand, since 1980, business-cycle correlation has rapidly increased in east Asia, and is now approaching that of the Eurozone (see Figure 1).

Figure 1. Business-cycle correlation in east Asia and the Eurozone

Source: Authors' calculations based on the OECD database. Notes: The lines denote the ten-year rolling-window correlation between each individual country's business cycle and the regional business cycle (the year on the x axis is the last year of the window). The shaded area denotes the average of all correlations in the region. Correlations in this graph are calculated based on Hodrick–Prescott filtered logged real GDP in local currency terms.

Regional trade intensity: Various studies (e.g. Micco et al. 2003) find that there has been some positive impact of the euro on intra-Eurozone trade volumes, of around 10–15%. However, when we measure trade intensity, intra-Eurozone trade after the introduction of the euro exhibits a different pattern. Figure 2 presents regional trade intensities measured by bilateral trade volume as a fraction of total trade (left panel) and GDP (right panel). As Figure 2 shows, regional trade intensity has been declining in the Eurozone since around 1998, while it has been increasing in east Asia. Our conjecture is that this is perhaps due to China's remarkably rapid integration into the global economy, which increased intra-regional trade intensity in east Asia, and decreased it in the Eurozone.

Figure 2. Trade intensity in east Asia and the Eurozone

Trade patterns: We examine the degree of intra-industry trade (henceforth IIT), meaning trade within the same industry, measured by the Grubel–Lloyd Index, as well as the degree of vertical intra-industry trade (henceforth VIIT), meaning IIT in which goods are differentiated by quality, in the Eurozone and east Asia. The degree of VIIT has been increasing over time in east Asia, and it is currently higher in east Asia (0.8) than in the Eurozone (0.75), where VIIT is declining. This rising degree of VIIT in east Asia is consistent with the rapidly growing global supply chain in the region. The declining VIIT in the Eurozone may be an indication of supply-chain development between Eurozone and non-Eurozone countries, including China, although we need to look into this further.
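For reference, the Grubel–Lloyd index has a simple closed form: for one industry, GL = 1 − |X − M| / (X + M), where X and M are exports and imports, and the aggregate index is a trade-weighted average across industries. A minimal sketch (the numbers are our own toy example):

```python
def grubel_lloyd(exports, imports):
    """Grubel-Lloyd index of intra-industry trade for one industry:
    GL = 1 - |X - M| / (X + M). 1 = pure two-way (intra-industry) trade,
    0 = pure one-way (inter-industry) trade."""
    return 1.0 - abs(exports - imports) / (exports + imports)

def aggregate_gl(flows):
    """Trade-weighted aggregate over industries; flows = [(X_i, M_i), ...]."""
    total = sum(x + m for x, m in flows)
    return sum((x + m) / total * grubel_lloyd(x, m) for x, m in flows)

print(grubel_lloyd(100, 100))  # 1.0: perfectly balanced two-way trade
print(grubel_lloyd(100, 0))    # 0.0: exports only, no two-way trade
print(round(aggregate_gl([(100, 80), (30, 90)]), 3))  # 0.733
```

Vertical IIT is then the subset of this two-way trade in which the exported and imported varieties differ in quality (in practice, identified by unit-value gaps).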

Analysis and results


In order to test how these different trade dynamics affect business-cycle correlation, we regress the correlation of business cycles (as measured by Baxter–King filtered GDP, Hodrick–Prescott filtered GDP, and year-on-year growth) on:

trade intensity; the degree of IIT, measured by the Grubel–Lloyd Index; and the ratio of vertical IIT to total IIT;

along with dummy-variable controls for regional effects and the adoption of the euro. We look at business-cycle correlations for three sub-periods: 1980Q1–1989Q4, 1990Q1–1999Q4, and 2000Q1–2007Q4. The data thus form a panel of three periods with approximately 55 country pairs. The results are presented in Table 1.

Table 1. Main results

Note: Numbers in parentheses are t-statistics. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels, respectively. Coefficients are derived from panel regressions with country fixed effects. The Eurozone dummy is set to 1 for countries geographically in the Eurozone, both before and after the adoption of the euro. The EMU dummy is set to 1 after the countries in the Eurozone adopted the euro.

The main take-away messages from these results can be summarised as follows.

First, unlike Frankel and Rose (1996), who show that trade intensity has an unambiguously positive and significant impact on business-cycle correlation, our results indicate that trade intensity is significant for the Eurozone but not for east Asia.[3] In fact, after controlling for (V)IIT, the coefficient on trade intensity becomes negative. This is consistent with the prediction from classical international trade theory that trade between countries with different factor endowments will facilitate product specialisation, which makes countries susceptible to asymmetric shocks that can dampen business-cycle synchronisation. Our sample east Asian countries have different endowment structures: the standard deviation of GDP per capita, spanning from Singapore to Vietnam, is almost three times that of the Eurozone.

Second, VIIT's positive and significant impact on business-cycle correlation is more visible in east Asia. This result is robust to different measures of business cycles (Baxter–King or Hodrick–Prescott filtered GDP, or year-on-year growth) and sample periods (including the financial crisis period). This perhaps reflects the lack of progress in VIIT in the Eurozone during the sample period. When we pool the samples for the Eurozone and east Asia, we observe that higher trade intensity increases business-cycle correlation. However, after controlling for IIT or VIIT measures, the effects of trade intensity become insignificant, indicating that it is IIT, especially the VIIT part, that drives business-cycle synchronisation, not trade intensity per se.

Third is the noteworthy result that the EMU dummy is positive and significant throughout the analysis, even after we control for the regional (Eurozone) dummy. This indicates that some other channels, most likely financial integration, are working to enhance business-cycle correlation in the Eurozone, offsetting the negative impact from declining trade intensity. However, it will require thorough investigation to determine the exact role of financial integration, a topic we leave for further research.
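The paper's data are not reproduced here, but the estimation strategy (a panel regression of pairwise business-cycle correlations on trade variables, with country-pair fixed effects and an EMU dummy) can be illustrated on simulated data. Everything below, including the "true" coefficient values, is our own toy construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_periods = 55, 3
pair = np.repeat(np.arange(n_pairs), n_periods)

# Simulated regressors: trade intensity, VIIT share, and an EMU dummy
trade = rng.uniform(0.0, 0.1, n_pairs * n_periods)
viit = rng.uniform(0.4, 0.9, n_pairs * n_periods)
emu = rng.integers(0, 2, n_pairs * n_periods).astype(float)

# Toy data-generating process: VIIT and EMU raise business-cycle correlation,
# trade intensity per se has no effect (echoing the paper's finding)
y = 0.5 * viit + 0.1 * emu + 0.02 * rng.standard_normal(n_pairs * n_periods)

# Pair fixed effects via dummies (the within estimator in dummy form);
# no intercept is included, so the full set of pair dummies is not collinear
D = (pair[:, None] == np.arange(n_pairs)[None, :]).astype(float)
X = np.column_stack([trade, viit, emu, D])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[:3])  # estimated coefficients: trade intensity, VIIT, EMU dummy
```

In this setup the VIIT and EMU coefficients are recovered close to their true values while the trade-intensity coefficient is statistically indistinguishable from zero, mirroring the qualitative pattern in Table 1.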

Our findings have policy implications for Eurozone enlargement. As new member states with lower income per capita join the Eurozone, developing an intra-industry network, or production segmentation across countries, between old and new member states would help to amplify business-cycle correlation among them. In other words, actively developing a supply chain between old and new Eurozone members can increase the benefits of the currency union.

Disclaimer: This article reflects solely the opinion of the authors and does not necessarily reflect the views of De Nederlandsche Bank. The authors acknowledge the valuable comments from Jakob De Haan, Peter van Els, and Pierre Lafourcade.

References
Bayoumi, T and B Eichengreen (1992), "Shocking Aspects of European Monetary Unification", NBER Working Paper 3949.

Frankel, J and A Rose (1996), "The Endogeneity of the Optimum Currency Area Criteria", NBER Working Paper 5700.

Micco, A, E Stein, and G Ordóñez (2003), "The Currency Union Effect on Trade: Early Evidence from EMU", Economic Policy, 18(37): 315–356.

Mink, M and J De Haan (2013), "Contagion During the Greek Sovereign Debt Crisis", Journal of International Money and Finance, 34: 102–113.

Saiki, A and S H Kim (2014), "Business Cycle Synchronization and Vertical Trade Integration: A Case Study of the Eurozone and East Asia", De Nederlandsche Bank Working Paper 407.

[1] We use the following sample countries: China, Indonesia, Japan, Korea, Malaysia, Philippines, Singapore, Taiwan, Thailand, and Hong Kong for east Asia; and Austria, Belgium, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Portugal, and Spain for the Eurozone.

[2] We use three measures of business cycles: Hodrick–Prescott filtered GDP series, Baxter–King filtered GDP series, and annual growth rates.

[3] Note that their study focuses on 21 advanced countries, and does not include emerging economies.

On Economic Rationality, Bubbles, and Macroprudence


SUBMITTED BY BIAGIO BOSSONE ON MON, 02/03/2014


(Non-)rationality in economic decisions

As last year's choice of the Nobel award for economic sciences well reflects, economists are deeply divided as to whether, and how, rationality should be modified as a basic assumption for modeling asset allocation and pricing decisions. The three Nobel laureates for 2013 (Eugene Fama, Lars Peter Hansen, and Robert Shiller) epitomize the economics profession's broad spectrum of positions currently existing on the subject: from Fama's unflinching faith in the full rationality of economic action to Shiller's recognition of the influence of non-rational and irrational factors upon human economic determinations, passing through Hansen's acceptance of distorted beliefs as explanations of some otherwise inconsistent economic behaviors empirically observed.

The unresolved differences bear on the scientific status of contemporary macroeconomic analysis, especially since the crisis of 2007-09 has demonstrated the inadequacy of its underlying microfoundations. Particular attention has since been placed by economists on what they really know about asset bubbles, as these cannot be endogenized within purely rational choice models, and policymakers have re-considered whether bubbles can (or should) be managed in the public interest.

(Ir)rationality and bubbles

In fact, while it is unnecessary to abandon the rationality hypothesis to understand real-world economic phenomena, and financial crises in particular, combining human rational thinking with changing emotional states should feature prominently in the economics research agenda, both to gain a deeper appreciation of real decision-making processes and to design policy tools that can reorient individuals' decisions, when necessary, toward superior public good objectives.
This is what I have tried to do with the general utility-based approach to asset allocation and pricing, which I have recently developed to study agents' responses to shocks when expectations reflect the interaction of knowledge and changing market sentiment. The approach rests on three building blocks:

All assets deliver utility. Assets of all types are considered as vehicles to future consumption, each characterized by its own peculiar speed (that is, the immediacy and the cost of converting it into consumption) and power (that is, its capacity to accumulate and store wealth over time at some risk). Greater speed would come at some power cost, and vice versa. Also, at each instant the agent would be faced with the likelihood of having to liquidate the asset to face a consumption shock: changes in this likelihood affect differently the utility from assets with different speed and power loads. The instantaneous utility of every asset is thus calculated over the agent's time-horizon as the expected value of the discounted summation of stochastic (uncertain) consumption utility, to which the asset gives access, net of the (uncertain) consumption utility lost to the asset liquidation cost.

Variable cost of asset liquidation. Every asset is characterized by an optimal speed, defined as the shortest possible time-interval within which the asset can be sold at the minimum possible liquidation (or transaction) cost. Assets' optimal speeds are structural parameters determined by the economy's level of institutional and technological development: all else equal, a more efficient and safe financial infrastructure allows asset liquidation to be effected more rapidly and at lower costs.[1] Some assets can be liquidated and converted into consumption immediately and at no cost, while others require longer time-intervals and involve positive costs. Having to liquidate an asset at a higher than optimal speed (owing, for instance, to immediate and unexpected consumption needs) results in higher liquidation costs or forces the agent to accept larger discounts on the asset sale price. Uncertainty affects asset utility also by influencing expected asset liquidation costs.

Rational expectations and emotions.
(1) Expectations depend on the state of knowledge: better knowledge and information help the agents to form more precise expectations about future relevant variables, while lower quality knowledge and information cause their predictions to be less determinate. (2) Expectations depend also on emotions: defining optimism (pessimism) as the state of mind that induces the agents to expect superior (inferior) outcomes of future events than would otherwise be reasonable for them to expect exclusively on the basis of the given knowledge and information, optimistic (pessimistic) expectations derive from a distortion process. This process introduces a deviation between purely rational predictions and predictions that are affected by emotional states of mind (e.g., animal spirits): optimism (pessimism) distorts the way agents process and interpret information.

All assets can then be directly compared based on the utility they deliver, as differently affected by perceived uncertainty and emotional states. In equilibrium, asset allocations and prices must be such that, at each date, all agents extract from the last resource unit still left unallocated the same utility from all assets, which in turn must equal the marginal utility derived from optimal intertemporal consumption at the given date. The agent's consumption and portfolio decisions can be simultaneously determined as inter- and intra-temporal solutions to an optimal programming problem: in deciding which asset to convert into consumption, when, and how much, the agent evaluates the expected utility gains/losses associated with the different speed vs. power tradeoffs featured by the different assets under the possible future states of nature and emotions.

Bubbles as asset price overshooting

The approach can be used, inter alia, to perform an instructive dissection of the microeconomic dynamics of bubbles, and to draw a policy proposition. Let's take a new production technology being launched in the market, which promises to yield large increases in output productivity. The offer price of the new asset and its existing stock of shares are both initially very low, making the asset's relative marginal utility (that is, standardized to the asset's unit price) abnormally high relative to that of consumption and the other assets available.
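In symbols (the notation here is ours, not the author's, and is only a stylized reading of the verbal description above), the building blocks amount to something like the following: an asset's utility is the expected discounted consumption utility it gives access to, net of expected liquidation costs, and equilibrium equalizes price-standardized marginal utilities across assets and consumption.

```latex
% Instantaneous utility of asset i (notation ours): expected discounted
% consumption utility accessed through the asset, net of liquidation costs,
% where v_{i,s} is the speed at which the asset must be liquidated and
% v_i^* its optimal speed (liquidation faster than v_i^* is costly)
U_i(t) = \mathbb{E}_t \sum_{s \ge t} \beta^{\,s-t}
  \Big[ u\big(c_{i,s}\big) - \ell_i\big(v_{i,s}\big) \Big],
\qquad \ell_i'(v) > 0 \ \text{ for } v > v_i^{*}.

% Equilibrium: the marginal utility of the last resource unit allocated to
% each asset, standardized by its price p_i(t), is equalized across assets
% and equals the marginal utility of current consumption
\frac{\partial U_i(t)/\partial a_i}{p_i(t)}
  = \frac{\partial U_j(t)/\partial a_j}{p_j(t)}
  = u'(c_t) \quad \text{for all } i, j.
\end{aligned}
```

Read this way, optimism operates through the expectation operator (tilting the distribution of future outcomes) and through lower expected values of the liquidation-cost term.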
Under complete knowledge and full information, and assuming a fully flexible supply of the asset stock, both its price and quantity would settle instantly at their rational-expectations equivalents. However, under optimistic expectations and a sluggish supply of the asset stock (due to short-term rigidities), the dynamics can give origin to speculative booms.

Specifically, if positive news on the asset leads the agents to expect increasing trends in its price, an upbeat market lowers the asset's optimal liquidation cost and raises its utility. Also, as optimism prevails, agents keep delaying the expected time of stock liquidation, once again raising its utility. Finally, optimism lowers the asset's expected liquidation cost, once again raising its utility. Meanwhile, as the asset stock supply adjusts only slowly to demand, the stock return grows as the asset's yield and price both rise. All this contributes to keeping the relative marginal utility of the asset (and, hence, its demand) increasing.

But as the stock supply responds slowly to increasing demand, equilibrium requires that its price overshoot its rational-expectations equivalent. Price overshooting feeds into optimistic expectations and causes the stock price to stay (or to rise farther up) above equilibrium even as its supply starts adjusting to demand. This can generate a sequence of temporary equilibria characterized by levels of the asset stock and price higher than their rational-expectations equivalents, thus forming a bubble.

Pricking bubbles: macroprudential policy options

As the crisis has challenged the consensus whereby monetary policy can ensure financial stability by targeting the price inflation of goods and services, alternative (macroprudential) policy tools have been designed to prevent financial imbalances. My approach suggests a type of tool that could be activated when the asset price dynamics indicate the presence of speculative activity.
Bubble growth could be pre-empted by increasing the asset's liquidation cost, thus throwing sand in the wheels of its trading; this could be accomplished by an authority imposing an ad-hoc (adjustable) fee on the trading price of the asset. Set at an appropriate level, the fee would discourage excess demand for the asset, as it would lower its utility. By altering the asset's (current and future) optimal trading cost, the fee would force the economy to settle at a lower equilibrium level of its stock. To the extent that the tool would become entrenched in financial policymaking, and credible, it could stabilize market behaviors ex ante, much as a street speed limit and fine discourage drivers from exceeding the limit. Clearly, the fee would distort the given asset's market price, but it would act as a corrective device to the distortive effect of optimistic expectations discussed above.

Compared with using monetary policy to prick bubbles, as strongly suggested by Roubini, the ad-hoc fee tool would be market- (or asset-) specific, thus averting the disruptive consequences on the rest of the economy from using coarser (non-selective) instruments like interest rates or credit restraints, as noted by Posen. The ad-hoc fee tool, also, could be calibrated to any level necessary to discourage excess trading of an asset, a flexibility feature that is unavailable to the monetary authorities (see Bernanke). The tool could be more specifically targeted than the central banks' macroprudential instruments, which are intended to constrain the supply of funds from financial intermediaries.[2]

Finally, the use of a quasi-fiscal tool would call for reconsidering the question of whether macroprudential policy should overburden central banks, or should instead be assigned to an inter-institutional systemic-risk supervisory institution with broader (not just financial) instruments and a horizontal (not just vertical) responsibility extending across banks, securities firms, financial and nonfinancial markets, and geographies, as was alluded to by the New York Fed's president, William Dudley, soon after the crisis.

____________________________

*I wish to thank Luigi Passamonti and Richard Wood for their comments and suggestions. Obviously, I remain solely responsible for the content of the commentary.

[1] For an early theoretical and empirical study on the effects of financial infrastructure development, see my work with Sandeep Mahajan and Farah Zahir.

[2] For a comprehensive state-of-the-art evaluation of central banks' macroprudential tools, see the report from the Committee on the Global Financial System.

Monetary Policy and a Brightening Economy


2014 Economic Forecast
Lyons Companies, the University of Delaware's Center for Economic Education and Entrepreneurship, and Alfred Lerner College of Business and Economics

February 11, 2014

Charles I. Plosser President and CEO Federal Reserve Bank of Philadelphia

The views expressed today are my own and not necessarily those of the Federal Reserve System or the FOMC.

Newark, DE, February 11, 2014

Note: President Plosser presented similar remarks at the Simon Business School in Rochester, NY, on February 5, 2014.

Highlights
President Charles Plosser provides his economic outlook for 2014 and reports that the decision of the Federal Open Market Committee (FOMC) to reduce the pace of asset purchases was a step in the right direction.

President Plosser expects growth of about 3 percent in 2014. He also expects the unemployment rate to continue its steady decline and to reach about 6.2 percent by the end of 2014. Inflation expectations will be relatively stable, and inflation will move up toward the FOMC target of 2 percent over the next year.

Based on the economic progress that has been made and his economic outlook, President Plosser believes it is appropriate to end asset purchases, and he supported the FOMC's decision in January to continue to reduce the pace of purchases. A case can be made for ending the current asset purchase program sooner to reflect the improvement in the economic outlook and to lessen some of the communications problems the FOMC will face with its forward guidance.

Introduction I am delighted to return once again to the University of Delaware to speak at this annual event. This is a great event for the region and I am pleased to see it grow year after year.

I would like to begin my remarks this morning with a bit of history. We are observing the 100th anniversary of the Federal Reserve, and that gives me the opportunity, or the excuse, to offer a perspective on this important, but sometimes misunderstood, institution. I should note that my views are my own and do not necessarily reflect those of the Federal Reserve System or my colleagues on the Federal Open Market Committee.

On December 23, 1913, President Woodrow Wilson signed the act that created the Federal Reserve System, and the 12 Federal Reserve Banks opened their doors on November 16, 1914. I have often described the Federal Reserve as a uniquely American form of a central bank: a decentralized central bank. To understand why, we need to consider two earlier attempts at central banking in the United States. Just a few blocks from the Philadelphia Fed's present building on Independence Mall, you will find the vestiges of both institutions, dating back to the early years when Philadelphia was the major financial and political center of the country.

Alexander Hamilton, our first secretary of the Treasury, championed the First Bank of the United States to help our young nation manage its financial affairs. The First Bank received a 20-year charter from Congress and operated from 1791 to 1811. Although this charter was not renewed, the War of 1812 and the ensuing inflation and economic turmoil convinced Congress to establish the Second Bank of the United States, which operated from 1816 to 1836. However, Congress could not override the veto of President Andrew Jackson, and the Second Bank's charter also lapsed. Both institutions failed to overcome the public's mistrust of centralized power and special interests.

Nearly 80 years later, Congress tried again. To balance political, economic, and geographic interests, Congress created a Federal Reserve System of 12 regional Reserve Banks with oversight provided by a Board of Governors in Washington, D.C.
This decentralized central bank was structured to overcome concerns that its actions would be dominated either by political interests in Washington or by financial interests on Wall Street.

These 12 Reserve Banks perform several roles. They distribute currency, act as a bankers' bank, and generally perform the functions of a central bank, which includes serving as the bank for the U.S. Treasury. They also play a critical role in supervising many banks and bank holding companies across the country.

Each Reserve Bank has a nine-member board of directors selected in a nonpartisan way to represent a cross-section of banking, commercial, and community interests. Pat Harker, president of the University of Delaware, serves on our board. These directors fulfill the traditional governance role, but they also provide valuable insights into economic and financial conditions, which contribute to our assessment of the economy. The Reserve Banks seek to stay in touch with Main Street in other ways. Some have Branch boards, and all have advisory councils. Reserve Banks also collect and analyze data and conduct surveys of economic activity. This rich array of information and the diverse views from across the country help policymakers paint a mosaic of the economy that is essential as we formulate national monetary policy.

Within the Federal Reserve, the body that makes monetary policy decisions is the Federal Open Market Committee, or the FOMC. Here again, Congress has designed the system with a number of checks and balances. Since 1935, the composition of the FOMC has included the seven Governors in Washington, who are appointed by the President of the United States and confirmed by the Senate, as well as the president of the New York Fed, and four other Reserve Bank presidents who serve one-year terms as members on a rotating basis. Whether we vote or not, all Reserve Bank presidents attend the FOMC meetings, participate in the discussions, and contribute to the Committee's assessment of the economy and policy options. The FOMC has eight regularly scheduled meetings each year to set monetary policy. The first one in 2014 was held just two weeks ago.
In normal times, the Committee votes to adjust short-term interest rates to achieve the goals of monetary policy that Congress has set for us.

Congress spelled out the current set of monetary policy goals in 1978. The amended Federal Reserve Act specifies that the FOMC shall "maintain long run growth of the monetary and credit aggregates commensurate with the economy's long run potential to increase production, so as to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates." Since moderate long-term interest rates generally result when prices are stable and the economy is operating at full employment, many have interpreted these instructions as being a dual mandate to manage fluctuations in employment in the short run while preserving price stability in the long run.

Economic Conditions

In order to determine the appropriate monetary policy to promote these goals, the FOMC must monitor and assess economic developments. So, let me turn to an assessment of our economy as we enter 2014. In a nutshell, my view is that the economy is on firmer footing than it has been for the past several years. So let's look at some of the details of how last year ended and the implications for the coming year.

Real output grew at a 3.2 percent annual pace in the fourth quarter of last year, following a 4.1 percent growth rate in the third quarter. That means economic growth doubled in the second half of 2013 compared with the first half. Consumer spending and business investment in equipment ended the year with strong increases. Inventory investment and net exports also contributed to growth. On the other side of the ledger, a decline in residential investment and a sharp decrease in federal government spending subtracted from growth in the fourth quarter of the year. Going forward, I expect that there will be less fiscal drag in 2014 than we saw in 2013 and that housing will continue its recovery.

Personal consumption, which accounts for more than two-thirds of GDP, advanced at an annual rate of 3.3 percent in the fourth quarter, the highest personal consumption

growth rate since fourth quarter 2010. Rising house prices and stock prices have helped improve consumer balance sheets, and steady job growth has added to wage and salary growth, all of which have supported spending.

On the job front, the January employment report showed payroll gains of 113,000 jobs, following an increase of 75,000 jobs in December, which was below many analysts' expectations. Yet, these numbers were affected by the unseasonably cold and snowy weather. Indeed, winter weather has continued to be unusually disruptive, and that is making it difficult to assess underlying economic trends. I suspect it may be another couple of months before we have a better read on the economy; hopefully the weather will turn better before that!

We have to remember that these employment numbers are subject to revisions, and such revisions can be significant. For instance, in the latest report the November jobs number was revised up by 33,000, from 241,000 to 274,000, while the December number was also revised up by 1,000 to 75,000. Because of month-to-month volatility and data revisions, I prefer not to read too much into the most recent numbers but, instead, look at averages over several months. Here the news remains positive. Based on the latest revision, firms added an average of 194,000 jobs per month in 2013, somewhat better than the pace in 2012. This consistent pace of job growth was enough to drop the unemployment rate 1.2 percentage points last year. In January, it fell a bit further to 6.6 percent. This is noticeably lower than the FOMC anticipated in its Summary of Economic Projections in September 2012, when we started the current asset purchase program. That is, the labor market has performed better than expected, according to the unemployment rate measure.

Should we be skeptical of the unemployment rate as an indicator of labor market conditions?
Some people are because the decline in the unemployment rate reflects not only increases in employment but declines in labor force participation as well.

Declines in participation can include discouraged workers who have stopped looking for work because it is difficult to find a job. There is concern in some quarters that the unemployment rate will move back up significantly when these discouraged workers reenter the labor force. Based on research by my staff, I am less concerned about this possibility.[1]

First, it is important to realize that labor force participation rates can decline for reasons other than a rise in discouraged workers. Indeed, we have seen steadily declining participation rates since 2000 that reflect demographic changes, most notably the aging of the baby boomers. This trend is continuing and has long been expected to accelerate. Second, detailed analysis of the Current Population Survey's micro data indicates that about three-quarters of the decline in participation since the start of the recession in December 2007 can be accounted for by increased retirements and movements into disability. Some of these increases might have been driven by the state of the economy. For example, some baby boomers may have moved their retirement decision forward after losing a job. Nevertheless, few of these individuals are likely to reenter the labor force. So, while I do expect some discouraged workers to reenter the labor force as the economy improves, I still believe the overall unemployment rate remains a good summary statistic of labor market conditions.

The business sector is also entering the year on a positive note. At the national level, manufacturing activity accelerated over the final three months of 2013. The Philadelphia Fed's Business Outlook Survey of regional manufacturing, which is a reliable indicator of national manufacturing trends, also showed that manufacturing activity picked up in the second half of 2013. In January, the survey's general activity index posted its eighth consecutive positive number. Expectations for manufacturing
1. Shigeru Fujita, "On the Causes of Declines in the Labor Force Participation Rate," Research Rap Special Report, Federal Reserve Bank of Philadelphia, February 6, 2014.

activity six months ahead also remained positive. This gives me some hope that business fixed investment, which has been generally lackluster during the course of the recovery, will pick up somewhat this year. Indeed, we saw a nice rebound in equipment spending in the fourth quarter.

Inflation has been running below the FOMC's long-run goal of 2 percent. The Fed's preferred measure of inflation is the year-over-year change in the price index for personal consumption expenditures, or PCE inflation. It came in at 1.1 percent last year. It is important to defend our 2 percent inflation target from both below and above. But I believe inflation is likely to move up. Economic growth is firming, and some of the factors that have held inflation down, such as the one-time cut in payments to Medicare providers, are likely to abate over time. An additional and important determinant of actual inflation is consumer and business expectations of inflation. I am encouraged that inflation expectations remain near their longer-term averages and consistent with our 2 percent target. Thus, I anticipate, as the FOMC indicated in its most recent statement, that inflation will move back toward our target over time. Indeed, given the large amount of monetary accommodation we have added and continue to add to the economy, I think there is some upside risk to inflation in the longer term.

Although growth in the first quarter is likely to be somewhat slower than the rapid pace we saw in the second half of last year, overall, I anticipate economic growth of around 3 percent this year, a pace that is slightly above trend. Everyone would like to see robust growth of 5 to 6 percent, but I don't see that happening. Nevertheless, I do see steady progress and an improving economy. This growth is sufficient to result in a continued decline in the unemployment rate, which should reach about 6.2 percent by the end of 2014. Of course, with any forecast, there are risks.
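The PCE inflation figure cited above is simply the year-over-year percent change in the PCE price index. A minimal sketch of the calculation, with hypothetical index levels chosen only so the arithmetic reproduces a 1.1 percent reading:

```python
# Year-over-year PCE inflation is the percent change in the PCE price index
# from the same month one year earlier. The index levels below are
# hypothetical, used only to illustrate the arithmetic.

def yoy_inflation(index_now: float, index_year_ago: float) -> float:
    """Return year-over-year inflation in percent."""
    return (index_now / index_year_ago - 1.0) * 100.0

pce_index_year_ago = 106.00  # hypothetical index level
pce_index_now = 107.17       # hypothetical index level

print(f"{yoy_inflation(pce_index_now, pce_index_year_ago):.1f}%")  # 1.1%
```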
The current volatility in emerging market currencies could pose a risk if it were to spill over more broadly into other financial

markets. But at this point, I do not consider it a significant risk to the U.S. economy. While there continue to be some downside risks, for the first time in a few years, I see a potential for some upside risks to the economic outlook. We need to consider this possibility as we calibrate monetary policy.

Monetary Policy

The Federal Reserve has taken extraordinary policy actions to support the economic recovery. The Fed has lowered its policy rate, the federal funds rate, to essentially zero, where it has been for more than five years. Since the policy rate cannot go any lower, the Fed has attempted to provide additional accommodation through large-scale asset purchases. We are now in our third round of this quantitative easing, or, as it is commonly called, QE3. These purchases have greatly expanded the size and lengthened the maturity of the assets on the Fed's balance sheet. The Fed is also using forward guidance as a policy tool, which is intended to inform the public about the way monetary policy is likely to evolve in the future. In this dimension, the FOMC has indicated that it intends to leave the policy rate near zero well past the time that the unemployment rate falls below the 6.5 percent threshold. The FOMC had previously indicated this was the earliest point at which it would consider raising interest rates, especially if projected inflation continues to run below the Committee's 2 percent target. On asset purchases, the FOMC has indicated that it will continue the purchases until the outlook for the labor market has improved substantially in the context of price stability. Yet, with the economy having improved substantially over the last year and the outlook brightening, the time has come for the FOMC to slow the pace at which it is adding monetary accommodation, which is to say, ease our foot off the accelerator.2 My
2. For more on the subject, see Charles I. Plosser, "The Outlook and the Hazards of Accelerationist Policy," remarks before The University of Delaware Center for Economic Education and Entrepreneurship, Newark, DE, February 14, 2012.

personal view is that the process should have started sooner and proceeded more expeditiously. Nevertheless, the FOMC did decide in December to take a very modest step by reducing asset purchases from $85 billion to $75 billion per month, and then reduced this by another $10 billion to $65 billion a month in January. The FOMC indicated that if incoming information broadly supports the expectation of ongoing improvement in labor market conditions and inflation moving back toward its longer-run objective, then we will likely reduce the pace of purchases further in measured steps at future meetings. Former Chairman Ben Bernanke indicated in his December press conference that if we are making progress in terms of inflation and continued job gains, then the program would be concluded late in 2014. Notice that even though we are reducing the pace at which we are purchasing longer-term assets, we are still adding monetary policy accommodation. So, the foot is still on the accelerator.

My preference would be that we conclude the purchases sooner rather than later. I believe a good case can be made for speeding up the pace of our taper if the economic outlook plays out as I expect. As I noted earlier, the unemployment rate fell 1.2 percentage points last year, and again in January to 6.6 percent. This is a much sharper decline than anticipated when we started the purchase program in September 2012. We are now only one-tenth of a point from the 6.5 percent threshold in our forward guidance for interest rates. The FOMC has indicated that it doesn't anticipate raising rates when the economy crosses that threshold. However, I do believe that we have a communications challenge. We have not described how policy will be conducted after the unemployment rate passes 6.5 percent. Last summer, it was thought that we would stop asset purchases by the time unemployment reached 7 percent. Well, that didn't happen.
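The "measured steps" arithmetic can be sketched mechanically: starting from $75 billion per month after the December 2013 meeting, cut purchases by $10 billion at each FOMC meeting on the 2014 calendar. This is a reader's illustration of the pattern described above, not the Fed's announced schedule:

```python
# Mechanical sketch of the taper path: $10 billion cut per meeting, applied
# to the actual 2014 FOMC meeting months. Purely illustrative arithmetic.

pace = 75  # $ billions per month after the December 2013 meeting
cut = 10
meetings_2014 = ["Jan", "Mar", "Apr", "Jun", "Jul", "Sep", "Oct", "Dec"]

path = []
for meeting in meetings_2014:
    pace = max(pace - cut, 0)
    path.append((meeting, pace))

print(path)
# Under this pattern, purchases reach zero at the last scheduled 2014
# meeting, i.e. "late in 2014", consistent with Bernanke's guidance.
```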
What is the argument for continuing to increase monetary policy accommodation when labor market conditions are improving more quickly than

anticipated and inflation has stabilized and is projected to move back toward its goal? The longer we continue purchases in such an environment, the more likely it is that we will fall behind the curve in reducing the extraordinary degree of monetary policy accommodation. With the economy awash in reserves, the costs of such a misfire could be considerably higher than usual, fomenting higher inflation and perhaps financial instability. My preference was, and remains, to scale back our purchase program at a faster pace to reflect the strengthening economy. Given the falling unemployment rate, our communications should be focused on how economic conditions will determine the interest rate path. Continuing to buy assets is neither helpful nor essential.

Conclusion

In summary, I believe that the economy is continuing to improve at a moderate pace. We are likely to see growth of around 3 percent in 2014. Prospects for labor markets will continue to improve, and I expect the unemployment rate will continue its decline, reaching 6.2 percent by the end of 2014. I also believe that inflation expectations will be relatively stable and that inflation will move up toward our goal of 2 percent over the next year. On monetary policy, we must back away from increasing the degree of policy accommodation in a manner commensurate with an improving economy. Reducing the pace of asset purchases to $65 billion a month is moving in the right direction, but that may prove to be insufficient if the economy continues to play out according to the FOMC forecasts. I believe the economy has met the criteria for ending the asset purchases, as there has been significant improvement in labor market conditions. I also believe that further increases in the balance sheet are unlikely to provide appreciable benefits for the recovery and may have unintended consequences.
A case can be made for ending the current asset purchase program sooner to reflect the improvement in the economic outlook and to lessen some of the communications problems we will face

with our forward guidance. Even after the asset purchase program has ended, monetary policy will still be highly accommodative. As the expansion gains traction, the challenge will be to reduce accommodation and to normalize policy in a way that ensures that inflation remains close to our target, that the economy continues to grow, and that we avoid sowing the seeds of another financial crisis. This means the Fed still has considerable work to do.


Yellen's Debut as Chair


Janet Yellen made her first public comments as Federal Reserve Chair in a grueling, nearly day-long testimony to the House Financial Services Committee. Her testimony made clear that we should expect a high degree of policy continuity. Indeed, she said so explicitly. The taper is still on, but so too is the expectation of near-zero interest rates into 2015. Data will need to get a lot more interesting in one direction or the other for the Fed to alter its current path. In her testimony, Yellen highlighted recent improvement in the economy, but then turned her attention to ongoing underemployment indicators:

Nevertheless, the recovery in the labor market is far from complete. The unemployment rate is still well above levels that Federal Open Market Committee (FOMC) participants estimate is consistent with maximum sustainable employment. Those out of a job for more than six months continue to make up an unusually large fraction of the unemployed, and the number of people who are working part time but would prefer a full-time job remains very high. These observations underscore the importance of considering more than the unemployment rate when evaluating the condition of the U.S. labor market.

A visual reminder of the issue:

This is a straightforward reminder of the Fed's view that the unemployment rate overstates improvement in labor markets and thus should be discounted when setting policy. Consequently, policymakers believe they have room to hold interest rates at rock-bottom levels for an extended period. To be sure, there are challenges to this view, both internally and externally. For instance, Philadelphia Federal Reserve President Charles Plosser today reiterated his view that asset purchases should end soon and also fretted that the Fed will be behind the curve with respect to interest rates. Via Bloomberg:

"I'm worried that we're going to be too late to raise rates," Plosser told reporters after a speech at the University of Delaware in Newark. "I don't want to chase the market, but we may have to end up having to do that if investors act on anticipation of higher rates."

That remains a minority view at the Fed. Matthew Boesler at Business Insider points us to UBS economists Drew Matus and Kevin Cummins, who challenge Yellen's belief that the long-term unemployed will keep a lid on inflation:

We do not view the long-term unemployed as necessarily "ready for work" and therefore believe that their ability to restrain wage pressures is limited. In other words, the unusually high number of long-term unemployed suggests that the natural rate of unemployment has increased. Indeed, when we have tested various unemployment rates' ability to predict inflation we found that the standard unemployment rate outperforms all other broader measures reported by the Bureau of Labor Statistics. Although we disagree with Yellen regarding the long-term unemployed, our research does suggest that, perhaps unsurprisingly, the number of part-timers does have an impact on restraining inflation.

I tend to think that we will not see clarity on this issue until unemployment approaches even nearer to 6%. That level has traditionally been associated with rising wage pressures in the past:

The Fed would likely see a faster pace of wage gains as lending credence to the view that the drop in labor force participation is mostly structural. At that point the Fed may begin rethinking the expected path of interest rates, depending on their interest in overshooting. But in the absence of such early signs of inflationary pressures, the Fed will be content to raise rates only gradually. With regards to monetary policy, Yellen reminds everyone that she helped design the current policy:

Turning to monetary policy, let me emphasize that I expect a great deal of continuity in the FOMC's approach to monetary policy. I served on the Committee as we formulated our current policy strategy and I strongly support that strategy, which is designed to fulfill the Federal Reserve's statutory mandate of maximum employment and price stability.

Yellen makes clear that the current pace of tapering is likely to continue:

If incoming information broadly supports the Committee's expectation of ongoing improvement in labor market conditions and inflation moving back toward its longer-run objective, the Committee will likely reduce the pace of asset purchases in further measured steps at future meetings.

Later, during the question-and-answer period, Yellen does, however, open the door for a pause in the taper. Via Pedro DaCosta and Victoria McGrane at the Wall Street Journal:

"I think what would cause the committee to consider a pause is a notable change in the outlook," Ms. Yellen told lawmakers...

..."I was surprised that the jobs reports in December and January, the pace of job creation, was running under what I had anticipated. But we have to be very careful not to jump to conclusions in interpreting what those reports mean," Ms. Yellen said. Recent bad weather may have been a drag on economic activity, she added, saying it would take some time to get a true sense of the underlying trend.

The January employment report was something of a mixed bag, with the unemployment rate edging down further to 6.6% while nonfarm payrolls disappointed again (!!!!) with a meager gain of 113k. That said, I still do not believe this should dramatically alter your perception of the underlying pace of activity. Variance in nonfarm payrolls is the norm, not the exception:

Her disappointment in the numbers raises the possibility - albeit not my central case - that another weak number in the February report could prompt a pause. My baseline case, however, is that even if it was weak, it would not affect the March outcome but instead, if repeated again, the outcome of the subsequent meeting. Remember, the Fed wants to end asset purchases. As long as they believe forward guidance is working, they will hesitate to pause the taper. Yellen was not deterred by the recent turmoil in emerging markets:

We have been watching closely the recent volatility in global financial markets. Our sense is that at this stage these developments do not pose a substantial risk to the U.S. economic outlook. We will, of course, continue to monitor the situation.

Yellen reiterates the current Evans rule framework for forward guidance, giving no indication that the thresholds are likely to be changed. Jon Hilsenrath at the Wall Street Journal interprets this to mean that when the 6.5% unemployment rate threshold is breached, the Fed will simply switch to qualitative forward guidance. I tend to agree.

Bottom Line: Circumstances have not changed sufficiently to prompt the Federal Reserve to deviate from the current path of policy.

When Will the Fed End Its Zero Rate Policy?


Jens Christensen
U.S. Treasury yields and other interest rates increased in the months leading up to the Federal Reserve's December 2013 decision to cut back its large-scale bond purchases. This increase in rates probably at least partly reflected changes in what bond investors expected regarding future monetary policy. Recent research on this episode tentatively suggests that investors moved forward the date when they believed the Fed would exit its zero interest rate policy, even though Fed policymakers made few changes in their projections of appropriate monetary policy.

The severe shock of the 2007-08 financial crisis prompted the Federal Reserve to quickly lower its target for its primary policy rate, the overnight federal funds rate, to near zero, where it has remained since. Despite this highly stimulatory stance of conventional monetary policy, the economic recovery has been sluggish and inflation has been low. For that reason, the Federal Open Market Committee (FOMC), the Fed's policy body, has provided additional monetary stimulus by using unconventional measures to push down longer-term interest rates. One element of this unconventional policy has been large-scale asset purchases (LSAPs). Another has been public guidance about how long the FOMC expects to keep its federal funds rate target exceptionally low. The effect of this forward guidance depends on how financial market participants interpret FOMC communications, in particular when they expect the Fed to exit from its near-zero rate policy, a shift often called liftoff (see Bauer and Rudebusch 2013).

This Economic Letter examines recent research estimating when bond investors expect liftoff to take place (see Christensen 2013). This research suggests that bond investor expectations for the date of exit have moved forward notably in recent months, probably because they anticipated the FOMC's decision at its December 2013 meeting to cut back large-scale asset purchases. This research suggests that market participants expect the FOMC to start raising rates in the spring of 2015, but the exact timing is highly uncertain.

Unconventional monetary policy

Unconventional monetary policy designed to put downward pressure on longer-term interest rates has two aspects: large-scale asset purchases and forward guidance, that is, Fed communications about its expectations for future policy. LSAPs affect longer-term interest rates by shifting the term premium, the higher yield investors demand in exchange for holding a longer-duration debt security (see Gagnon et al. 2011).
LSAPs were first announced in late 2008. The most recent program, initiated in September 2012, originally involved purchasing $40 billion in mortgage-backed securities (MBS) every month. It expanded in December 2012 to include $45 billion in monthly Treasury security purchases. The FOMC stated that it intended to continue the program until the outlook for the labor market improved substantially, provided inflation remained stable. Since then, the labor market has improved and the unemployment rate has dropped. As

a result, the FOMC decided at its December 2013 meeting to reduce the pace at which it adds to its asset holdings to $75 billion per month.

Forward guidance affects longer-term rates by influencing market expectations about the level of short-term interest rates over an extended period. In August 2011, the FOMC stated that it intended to keep its federal funds rate target near zero until mid-2013, the first time it projected a liftoff date. More recently, Fed policymakers have indicated that they anticipate keeping the federal funds rate at that exceptionally low level at least as long as the unemployment rate remains above 6.5%, inflation one to two years ahead is projected to be no more than one-half percentage point above the FOMC's 2% longer-run target, and longer-term inflation expectations remain in check. In December 2013, the FOMC added that, based on current projections, it expects to maintain the zero interest rate policy well past when the unemployment rate falls below 6.5%.

Figure 1: FOMC member projections of appropriate policy rate
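The threshold guidance can be read as a simple checklist: the funds rate stays near zero while all three conditions hold. A minimal sketch of that reading (this is a reader's paraphrase of the FOMC language, not an official reaction function, and crossing a threshold does not by itself trigger a rate increase):

```python
# Sketch of the December 2012 threshold guidance as a checklist. A reader's
# paraphrase only: the FOMC treats these as thresholds, not triggers.

def guidance_keeps_rate_near_zero(unemployment: float,
                                  projected_inflation: float,
                                  expectations_anchored: bool) -> bool:
    return (
        unemployment > 6.5             # unemployment still above threshold
        and projected_inflation <= 2.5  # no more than 1/2 point above 2% target
        and expectations_anchored       # longer-term expectations in check
    )

print(guidance_keeps_rate_near_zero(6.6, 1.5, True))   # guidance still binds
print(guidance_keeps_rate_near_zero(6.4, 1.5, True))   # threshold crossed
```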

FOMC projections versus Treasury market data

Forward guidance also includes a set of projections on future federal funds rate levels that each FOMC participant makes four times per year, released in conjunction with the FOMC statement. Based on their views of appropriate monetary policy, these policymakers also forecast overall inflation; core inflation, which excludes volatile food and energy prices; the unemployment rate; and output growth. Figure 1 shows FOMC median, 25th percentile, and 75th percentile federal funds rate projections made in September and December. Only minor changes occurred from September to December.

Figure 2: Treasury yield curves on three dates in 2013

The relatively stable FOMC projections stand in contrast to changes in the U.S. Treasury bond market over the same period. Figure 2 shows the Treasury yield curve, that is, yields on the full range of Treasury maturities, on the days of the September and December 2013 FOMC meetings as well as the December 27 reading. (The research is based on weekly Treasury yields recorded on Fridays. December 27 was the last Friday in 2013.) Medium- and longer-term Treasury yields increased notably during that period. Other analysis suggests that much of this increase in longer-term Treasuries reflected an increase in the term premium. But did the rise in longer-term rates also involve a shift in the market's views about expected short-term rates that seems out of step with FOMC guidance? To address this question, I use an innovative model of the Treasury yield curve developed in Christensen (2013) that delivers a distribution of estimates, derived from Treasury security prices, for the exit from the zero interest rate policy.

A model of the Treasury yield curve

In this model, it is assumed that the economy can be in one of two states: a normal state like that which prevailed before December 2008, and a state like the current one in which the monetary policy rate is stuck at its lower bound near zero. In the normal state, yield curve variation is captured by three factors that are not directly observable, but can be derived from the underlying data: the general level of rates; the slope of the yield curve; and the curvature, or shape, of the yield curve. Furthermore, it is assumed that, in the normal state, investors consider the possibility of the policy rate reaching zero to be negligible. This assumption implies that the transition to the zero-bound state that occurred in December 2008 was a surprise and did not affect bond prices before that, when the economy was in the normal state.
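The three factors have familiar rule-of-thumb proxies built from individual yields, which may help fix intuition. A sketch with hypothetical yields (the Christensen model instead estimates the factors statistically from the whole curve):

```python
# Rule-of-thumb proxies for the level, slope, and curvature factors of a
# yield curve. Yields are hypothetical and quoted in basis points so the
# arithmetic is exact; these proxies are illustrative, not the model's method.

yields_bp = {0.25: 5, 2: 40, 10: 300}  # maturity in years -> yield in basis points

level = yields_bp[10]                                            # long rate
slope = yields_bp[10] - yields_bp[0.25]                          # long minus short
curvature = 2 * yields_bp[2] - yields_bp[0.25] - yields_bp[10]   # belly vs. ends

print(level, slope, curvature)  # 300 295 -225
```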

The zero-bound state is characterized by two key features. First, the shortest rate in the Treasury bond market is assumed to be constant at zero. Second, the state is viewed by bond investors and monetary policymakers as undesirable and temporary. They believe that the FOMC would like to return to normal as quickly as possible, consistent with the Fed's price stability and maximum employment mandates. This implies that news about the U.S. economy prompts bond investors to revise their views about when the FOMC is likely to exit from its zero interest rate policy. In the model, that exit defines the transition from the zero-bound state to the normal state of the economy. One component of the variation of Treasury bond yields in the zero-bound state is how probable bond investors believe a return to the normal state to be. However, because bond investors are forward looking and consider the possibility of such a shift when they trade, the three factors that affect the yield curve in the normal state continue to affect it in the zero-bound state.

Figure 3: Intensity of exit time from the zero interest rate policy

Results

To derive estimates of the date of the FOMC's first federal funds rate increase, I use weekly Treasury yields starting in January 1985 of eight maturities ranging from three months to ten years. The novel feature of the model I use is consideration of the implicit probability bond investors attach to a transition back to the normal state. This allows the entire distribution of probable dates of exit from the zero-bound state to be examined. Figure 3 shows the likelihood of leaving the zero-bound state at any point in time as of December 27, 2013. The exit date distribution is heavily skewed, so that very late exit times remain significantly probable. Still, the median exit date is in March 2015. In other words, the economy is just as likely to remain in the zero-bound state at that date as to

have exited before it. One takeaway is the considerable level of uncertainty about the exit date. The model suggests that there is about a one-in-three chance of remaining in the zero-bound state past 2015.

Figure 4: Median exit time from the zero interest rate policy

Figure 4 shows the variation in the estimated median exit time since December 16, 2008, when the economy shifted to the zero-bound state. Included are five dates from 2009 to 2012 of major FOMC announcements regarding LSAPs or guidance about future monetary policy. The estimated median exit time from the zero-bound state moved notably later in the weeks after each announcement, except when the FOMC extended its forward guidance in January 2012. This suggests that unconventional policies derive part of their effect by sending signals that bond market participants interpret to mean that the federal funds rate will remain at its zero bound longer than previously expected (see Christensen and Rudebusch 2012). Consistent with these observations, Figure 4 also shows that the estimated median exit date from the near-zero federal funds rate moved forward significantly between the September and December 2013 FOMC meetings as market participants began anticipating the Fed's decision to scale back LSAPs. According to the model, in anticipating the decision to trim LSAPs, the market also thought the first federal funds rate hike might come sooner than previously anticipated. This latter change in expectations held even though the FOMC's projections of the appropriate future fed funds rate hardly changed from September to December. As of December 27, 2013, the median exit time for the market was estimated at one year and three months, which implies that the odds of keeping the near-zero interest rate policy past March 2015 are identical to the odds of exiting before that date.
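As a rough consistency check on the numbers above, assume (my simplification, not the model's) a constant exit intensity, so that the time to liftoff is exponentially distributed and the probability of no exit by time t is 0.5 raised to t over the median:

```python
# Back-of-the-envelope check: with a constant exit intensity,
# P(no exit by t) = 0.5 ** (t / median). The model's actual distribution is
# skewed, so this only checks that the quoted numbers hang together.

median_exit_years = 1.25  # one year and three months from December 27, 2013
horizon_years = 2.0       # "past 2015" is roughly two years from end-2013

p_still_at_zero = 0.5 ** (horizon_years / median_exit_years)
print(f"{p_still_at_zero:.2f}")  # 0.33 -- close to the one-in-three chance
                                 # of remaining at zero past 2015 quoted above
```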

Conclusion

A novel model of the Treasury yield curve allows an assessment of investor expectations of the exit date from the Fed's near-zero interest rate policy. The results suggest that, as of the end of 2013, the expected exit date has moved forward notably since September 2013 despite only minor changes between September and December in FOMC participants' projections of appropriate future monetary policy. However, the estimated distribution of the probable exit date is skewed, so the likelihood of an earlier or later exit is sizable. This finding is consistent with the inherent uncertainty about the outlook for inflation and unemployment, the economic variables that guide FOMC rate decisions.

Jens Christensen is a senior economist in the Economic Research Department of the Federal Reserve Bank of San Francisco.

References

Bauer, Michael, and Glenn Rudebusch. 2013. "Expectations for Monetary Policy Liftoff." FRBSF Economic Letter 2013-34 (November 18).

Christensen, Jens H. E. 2013. "A Regime-Switching Model of the Yield Curve at the Zero Bound." FRB San Francisco Working Paper 2013-34.

Christensen, Jens H. E., and Glenn D. Rudebusch. 2012. "The Response of Interest Rates to U.S. and U.K. Quantitative Easing." Economic Journal 122, pp. F385-F414.

Gagnon, Joseph, Matthew Raskin, Julie Remache, and Brian Sack. 2011. "The Financial Market Effects of the Federal Reserve's Large-Scale Asset Purchases." International Journal of Central Banking 7(1), pp. 3-43.

BUDGET BACKGROUNDER
MAY 2012
MAKING DOLLARS MAKE SENSE

WHAT MAKES A TAX SYSTEM FAIR?


While everyone believes a tax system ought to be fair, there is disagreement as to what constitutes a fair or equitable tax system. This Budget Backgrounder describes what economists generally believe makes a tax system fair, examines how fair California's tax system is, and discusses why fairness matters.

Defining a Fair Tax System

Most people agree that a fair tax system should ask taxpayers to contribute to the cost of public services based on ability to pay. Economists typically measure the equity of a tax system or a particular tax based on the percentage of a family's income that it takes. Taxes are often described as:

- Progressive, when higher-income households pay a larger share of their incomes in taxes (Figure 1);
- Proportional, or flat, when the share of income paid in taxes is the same at all income levels, regardless of how much or how little households earn; or
- Regressive, when lower-income households pay a larger share of their incomes in taxes.

Some people argue - and public opinion research suggests that many people believe - that proportional, or flat, taxes are the fairest, since everyone pays the same tax rate. However, this argument does not account for the fact that lower-income households spend most or all of their incomes on necessities, while higher-income households have more discretionary income and can afford to pay more in taxes without cutting into what they can spend on shelter, food, health care, and other basic needs.

The overall fairness of a tax system depends on the balance between the various taxes that make up the state's revenues. A system that relies more heavily on progressive taxes will be more progressive, while one that is a mix of progressive and regressive taxes will be proportional. California's tax system is modestly regressive after taking into account taxpayers' ability to deduct state income and property taxes for federal income tax purposes.1 Sales and excise taxes, such as alcohol and tobacco taxes, are regressive taxes (Table 1). An income tax with a graduated rate structure, such as California's, is a progressive tax. Fuel and energy taxes, including taxes based on carbon emissions, are regressive taxes. Contrary to popular perception, California's Vehicle License Fee is also a regressive tax.2

The overall regressivity of California's tax system results from the relatively large share of income that lower-income households pay in the form of sales and excise taxes. While higher-income households pay a larger share of their incomes in personal income taxes, they can deduct these amounts from their federal income taxes, significantly reducing the total amount of taxes that they pay. The regressivity of California's tax system also reflects the fact that low- and middle-income households spend all, or nearly all, of their incomes on necessities, including on many goods that are subject to tax. Sales and excise taxes are generally not deductible for federal tax purposes, and this exacerbates the disparities between low- and middle-income households and high-income households. Without considering federal deductibility, the distribution of California's taxes would be U-shaped, with the lowest- and highest-income households paying the largest share of their incomes in taxes. This shape reflects the significant burden of sales and excise taxes on low-income households and the impact of California's extremely progressive income tax on high-income households.
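The three categories amount to comparing effective rates across the income distribution. A minimal sketch with hypothetical rate schedules (not California data):

```python
# Classify a tax by comparing the share of income it takes at the bottom and
# at the top of the income distribution. The effective rates below are
# hypothetical examples for illustration only.

def classify(shares_by_income):
    """shares_by_income: % of income paid in tax, ordered lowest to highest income."""
    low, high = shares_by_income[0], shares_by_income[-1]
    if high > low:
        return "progressive"
    if high < low:
        return "regressive"
    return "proportional"

sales_tax_like = [6.0, 4.5, 3.5, 2.5, 1.5]   # burden falls as income rises
income_tax_like = [1.0, 2.5, 4.0, 6.0, 8.5]  # burden rises with income
flat_tax_like = [5.0, 5.0, 5.0, 5.0, 5.0]    # same rate at every level

print(classify(sales_tax_like))    # regressive
print(classify(income_tax_like))   # progressive
print(classify(flat_tax_like))     # proportional
```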

A Fair Tax System Treats Taxpayers in Similar Situations Similarly


Economists also talk about another type of equity horizontal equity. A tax system that is horizontally equitable treats taxpayers

1107 9th Street, Suite 310

Sacramento, CA 95814

P: (916) 444-0500 www.cbp.org

Figure 1: Comparing Different Tax Systems


A Regressive Tax
12% 10%

Share of Family Income

8% 6% 4% 2% 0% Bottom Fifth Second Fifth Middle Fifth Fourth Fifth Next 15 Percent Next 4 Percent Top 1 Percent

in similar economic situations similarly. Horizontal equity is important since it inuences taxpayers perceptions of the fairness of the tax system. A tax system that is horizontally equitable, for example, taxes all forms of income at the same rate. An inequitable system provides, for example, preferential treatment to investment income relative to wages. The federal income tax taxes income from capital gains at a lower rate than that from wages, and California taxes income from private pensions, but not from Social Security. Most economists believe that a good tax system is one that does not attempt to inuence the allocation of resources in the economy. Californias property tax provides preferential treatment to businesses that have owned their property for a long time, while property owned by new businesses is taxed at or near fair market value. Many critics argue that this feature of Californias tax code discourages new investment and economic development. Similarly, many economists argue that tax laws that provide special treatment to certain industries or activities leads to inefciency and encourages businesses to consider the tax consequences of their decisions, rather than respond to market demand. Economists across the political spectrum argue that the best taxes are those applied to a broad base and at a low rate. All tax systems provide some types of special treatment. For example, both the state and federal income taxes provide special benets to families with children and allow taxpayers who itemize their deductions to deduct their charitable contributions. Californias corporate income tax provides special treatment to businesses located in certain geographic areas and for research and development in certain, but not all, sectors of the economy. Economists argue that decisions to provide special treatment should be made explicitly and reviewed periodically. 
Periodic evaluation provides an opportunity to assess whether such policies have achieved their policy goals or whether they have resulted in unintended and potentially undesirable consequences.

Table 1: Examples of Taxes That Are Typically Progressive or Regressive

Typically Progressive Taxes: Corporate Income Tax; Personal Income Tax; Property Taxes Paid by Businesses

Typically Regressive Taxes: Fuel Taxes (Gasoline, Carbon); Property Taxes Paid by Homeowners; Sales Tax; Sin Taxes (Tobacco, Alcoholic Beverages); Vehicle License Fees

[Figure 1: Three bar charts, "A Progressive Tax," "A Proportional or 'Flat' Tax," and "California State and Local Taxes," each plotting the share of family income paid in taxes (0 to 12 percent) by income group, from the Bottom Fifth through the Top 1 Percent. In the California chart, effective rates fall from 10.2 percent for the bottom fifth to 7.4 percent for the top 1 percent.]
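The patterns named in Table 1 can be made concrete with a small numerical sketch. The schedules below are hypothetical, not California's actual rates; they only illustrate why the effective rate stays flat under a proportional tax, rises under a progressive one, and falls under a regressive one.

```python
# Hypothetical tax schedules (not California's actual rates), applied to a
# low-, middle-, and high-income family to compare effective tax rates.

def effective_rate(tax_paid: float, income: float) -> float:
    """Tax paid as a percentage of income, rounded for display."""
    return round(100 * tax_paid / income, 2)

incomes = [12_600, 43_800, 321_300]

# Proportional ("flat") tax: 8% of all income -> constant effective rate.
flat = [effective_rate(0.08 * y, y) for y in incomes]

# Progressive tax: 0% on the first $20,000, 10% above it -> rate rises.
progressive = [effective_rate(0.10 * max(y - 20_000, 0), y) for y in incomes]

# Regressive (sales-style) tax: 7% of spending, with spending assumed to be
# capped at $30,000 because high earners save more -> rate falls with income.
regressive = [effective_rate(0.07 * min(y, 30_000), y) for y in incomes]

print(flat)         # [8.0, 8.0, 8.0]
print(progressive)  # [0.0, 5.43, 9.38]
print(regressive)   # [7.0, 4.79, 0.65]
```

The regressive schedule mirrors the mechanism behind the sales and sin taxes in Table 1: the tax falls on spending, and spending is a shrinking share of income as income rises.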

Why Does Fairness Matter?


Tax fairness is important for several reasons. First, a regressive tax system raises money from those who have the least of it. Income is distributed unequally, and California's income distribution is becoming more unequal. The wealthiest 5 percent of Californians received 36.6 percent of the income reported for tax purposes in 2010, up from 27.7 percent of income in 1993. The share of income reported by the bottom 60 percent of personal income taxpayers dropped from 22.8 percent to 18.1 percent during the same period.3 Income from capital gains is distributed even more unequally, with the top 5 percent of personal income taxpayers receiving 94.2 percent of income from capital gains in 2009.4 Business income is also distributed unequally; the 0.2 percent of California corporations with incomes of $10 million or more earned 65.7 percent of the corporate income reported for tax purposes in 2010.5

What Other Factors Matter?

A complete discussion of what constitutes a good tax system is beyond the scope of this paper. However, in brief, experts argue that a good tax system should provide an appropriate level of revenue on a timely basis, distribute the cost of taxation fairly, promote economic growth and efficiency, be easily administered, and ensure accountability.6 Debates over what constitutes a good or bad state business climate typically focus on tax rates and total tax collections on a per capita basis or as a share of a state's economy. While more comprehensive studies show that various business climate indexes have little relationship to a state's rate of economic growth, and even less of a relationship to state residents' economic well-being, these comparisons do matter. Economists generally believe that states should not be outliers: states should avoid tax levels that are significantly above those in other states. California has relatively high personal and corporate income taxes, but relatively low property taxes due to the limits imposed by Proposition 13. Experts often argue that the state's tax system would be better balanced by a greater reliance on property taxes, which tend to be relatively stable, and a lesser reliance on income taxes.

Jean Ross prepared this Budget Backgrounder with assistance from Samar Lichtenstein. The California Budget Project (CBP) was founded in 1994 to provide Californians with a source of timely, objective, and accessible expertise on state fiscal and economic policy issues. The CBP engages in independent fiscal and policy analysis and public education with the goal of improving public policies affecting the economic and social well-being of low- and middle-income Californians. General operating support for the CBP is provided by foundation grants, subscriptions, and individual contributions. Please visit the CBP's website at www.cbp.org.

ENDNOTES
1 Figure 1 shows that California's lowest-income families pay the largest share of their incomes in state and local taxes (Institute on Taxation and Economic Policy). Federal income tax law allows taxpayers who itemize their deductions to deduct income and property taxes paid, including vehicle license fees that are treated as a tax on personal property. Businesses can deduct all of the state and local taxes they pay from their federal income taxes.
2 Jennifer Dill, Todd Goldman, and Martin Wachs, The Incidence of the California Vehicle License Fee (California Policy Research Center, University of California, Berkeley, 1999).
3 Franchise Tax Board, Revenue Estimating Exhibits (May 2012), Exhibit A-10, p. 3.
4 Franchise Tax Board, Revenue Estimating Exhibits (December 2011), Exhibit A-9, p. 1.
5 Franchise Tax Board, Revenue Estimating Exhibits (May 2012), Exhibit B-2, p. 3.
6 National Conference of State Legislatures and National Governors Association, Financing State Government in the 1990s (December 1993), p. 16.

Appendix A: California State and Local Taxes as a Share of Family Income for Non-Elderly Taxpayers

                                           Bottom      Second      Middle      Fourth      Next 15%    Next 4%      Top 1%
Income range (2007)                        <$20,000    $20,000-    $34,000-    $55,000-    $94,000-    $208,000-    $620,000
                                                       $34,000     $55,000     $94,000     $208,000    $620,000     or more
Average income in group                    $12,600     $27,100     $43,800     $72,000     $133,000    $321,300     $2,257,000
Sales, excise, and gross receipts taxes    6.5%        5.4%        4.1%        3.2%        2.3%        1.5%         0.8%
Property taxes                             3.6%        2.8%        3.0%        3.1%        3.0%        2.4%         1.4%
  Property taxes on families               3.5%        2.7%        2.9%        3.0%        2.9%        2.1%         0.7%
  Business property taxes                  0.1%        0.0%        0.1%        0.1%        0.1%        0.3%         0.7%
Income taxes                               0.1%        0.5%        1.2%        2.0%        3.4%        5.3%         7.5%
  Personal income tax                      0.1%        0.5%        1.1%        1.9%        3.3%        5.1%         7.1%
  Corporate income tax                     0.0%        0.0%        0.0%        0.0%        0.1%        0.2%         0.5%
Total before offset for federal
  deductibility of state taxes             10.2%       8.7%        8.3%        8.3%        8.8%        9.2%         9.8%
Offset for federal deductibility           0.0%        0.0%        -0.2%       -0.6%       -1.2%       -1.0%        -2.3%
Total after offset for federal
  deductibility of state taxes             10.2%       8.7%        8.1%        7.7%        7.5%        8.2%         7.4%

Note: The Next 15 Percent, Next 4 Percent, and Top 1 Percent groups together make up the top fifth.
Source: Institute on Taxation and Economic Policy

PERSPECTIVE

Abenomics and Japan's Growth Prospects


RISABURO NEZU October 2013

Mr Abe's Inheritance: Two Lost Decades


Prior to Mr Abe's appointment as Prime Minister of Japan in December 2012, Japan had suffered two decades of economic stagnation and deflation. This economic malaise set in immediately after the collapse of the housing and stock price bubble in the early 1990s. In November 2012, the Tokyo stock market index sat at around 8,000, less than a quarter of its peak in 1989, and land prices in urban commercial districts were at one-fifth of their high-water mark. Throughout these two decades, the real economic growth rate was less than 1 per cent per annum, and the rate of inflation fell into negative territory after the global financial crisis of 2008. What was particularly frustrating to the Japanese business community was the rampant appreciation of the yen, which accelerated despite such poor growth and the earthquake and tsunami of March 2011. After his election, Abe pressed the Bank of Japan to embark on a bold policy of monetary easing, succeeding in achieving both a weaker yen and a rising stock market.

Short-lived Excitement over Abenomics

The new economic policies of the Shinzo Abe administration produced two immediate results that surpassed expectations: a sharp rebound of stock prices and a fall in the yen's exchange rate. This was achieved by pressing the Bank of Japan to embark on a bold policy of monetary easing. In just six months from the autumn of 2012, when it seemed likely that Abe would become the next prime minister, the dollar appreciated by 25 per cent against the yen. The Nikkei Stock Index rose from 8,600 to over 15,000 in May. Replacing Bank of Japan Governor Masaaki Shirakawa, who was reluctant to engage in aggressive monetary easing, with Haruhiko Kuroda, who had been arguing for quantitative expansion for some time, ensured the desired policy. This was a relief to Japanese export industries, which had struggled through years of a strong yen, and seemed likely to improve corporate profits. Rising stock prices brought a surge in consumption. In May 2013, it looked like a virtuous cycle was finally beginning to take hold.

From 23 May, however, everything started to move backwards. Stock prices fell to 13,000 in less than a month. The yen soared again to 93 yen/dollar from 103. More surprisingly, despite quantitative easing, long-term interest rates rose from 0.3 per cent to 0.8 per cent during this period. Following ultra-easy monetary policy and fiscal stimulus, Abe released his third arrow in June. But this third arrow, the structural reform programme, failed to convince the market of the long-term growth viability of the Japanese economy, since it stopped short of the most desired reforms, such as liberalisation of farm land, labour regulation and the strictly controlled medical service industry. Tokyo stock prices tumbled even further. Abe hastily announced that there would be more reform initiatives to come in the second half of 2013 and indicated that further corporate tax cuts and investment incentives were likely.

Risk Factors of Abenomics

Seen from an economic perspective, Abenomics is characterised by a number of serious risk factors. The first concerns whether the Japanese economy will really be that much improved by a weaker yen and rising stock prices alone. Certainly a weaker yen means profits for export businesses, but what about imports? The prices of the gasoline and food that Japan imports are already

RISABURO NEZU | A PROGRESSIVE GROWTH STRATEGY FOR JAPAN

beginning to rise. In fiscal year 2012, Japanese exports totalled 64 trillion yen and imports 72 trillion yen. With imports exceeding exports by 8 trillion yen, a weaker yen is, on this account, a negative for the Japanese economy. Conversely, because the balance on income (the sum of dividends and interest on financial assets, such as stocks and bonds, and on overseas factories and offices owned by Japanese companies) is denominated in foreign currency, it increases in yen terms when converted at a weaker yen. When all these factors are combined, the weaker yen has almost no overall effect. Over a longer period of time, it may well be that a weaker yen would boost Japanese exports and eventually the entire economy. However, to what extent this would happen is far from clear. Japanese manufacturing now prefers to produce locally rather than produce at home and export. The benefit of a cheap yen thus may be much less than expected.
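The first-order arithmetic can be sketched as follows. Only the 64 and 72 trillion yen trade flows come from the article; the 80 and 100 yen-per-dollar exchange rates are illustrative assumptions.

```python
# Sketch: if exports and imports are both priced in dollars, a weaker yen
# scales both flows equally in yen terms, so an 8 trillion yen trade
# deficit widens rather than shrinks when the yen depreciates.

def yen_value(flow_usd: float, yen_per_dollar: float) -> float:
    """Convert a dollar-denominated trade flow into yen."""
    return flow_usd * yen_per_dollar

# FY2012 flows from the text, converted at an assumed 80 yen/dollar:
exports_usd = 64e12 / 80
imports_usd = 72e12 / 80

for rate in (80, 100):  # yen weakens from 80 to 100 per dollar
    balance = yen_value(exports_usd, rate) - yen_value(imports_usd, rate)
    print(f"{rate} yen/dollar: trade balance {balance / 1e12:+.0f} trillion yen")
```

On this first-order view the yen-denominated deficit widens from 8 to 10 trillion yen; the offsetting gain the article describes comes from the income balance, which is also denominated in foreign currency.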

No End of Deflation in Sight

The second risk factor is that a weaker yen and rising stock prices may not necessarily beat deflation on their own. A weaker yen results in higher prices for imported goods that, to some degree, are probably shifted to consumers. The degree to which they shift depends on how far prices can rise without impacting sales. The income of ordinary workers, which makes up 60 per cent of Japan's GDP, has fallen consistently for the past 15 years, declining by 13 per cent from 1998 levels. Whether imported or made at home, if the prices of products rise, consumers simply cut back further on their spending; the inability to raise prices will make it impossible to do away with deflation.

Wages Not Increasing

The third risk factor is wages. In order to compensate for deflation, it is essential that the incomes of ordinary consumers, that is, their wages, rise. This is why Prime Minister Abe and his principal cabinet ministers called upon the leaders of the business community to raise wages. This appears to have led a number of large corporations to increase their bonuses, but they remain reluctant to raise base wages. With conditions still difficult for small and medium-sized companies, the overall increase in wages this year is negligible and the outlook for next year and beyond is uncertain. If prices rise in the absence of an increase in wages, people will have a harder time getting by and the economy is likely to slow down once again.

Abenomics Is Silent on Fiscal Issues!

In January 2013, Abe's government decided the supplementary budget for fiscal year 2012, which included 10 trillion yen of public infrastructure building and other spending to boost local economies. This action, the so-called second arrow of Abenomics, was made possible by issuing 5 trillion yen of Japanese government bonds. The interest rate on these bonds rose in March and April. For fiscal year 2013, the size of the general budget is 93 trillion yen, of which 43 trillion yen, or 46 per cent, is financed by debt. This is equivalent to 9 per cent of GDP. Abe must present a credible plan to rein in government debt, which is already twice as large as GDP. The official position of the Japanese government is that it will stick to the plan to eliminate the deficit in the primary balance and to reach a positive primary balance by 2020. This plan was drawn up by the previous government, and Abe's administration has already strayed from the course by spending an additional 5 trillion yen to finance Abenomics' second arrow. Abe promised to present his own fiscal plan in the autumn of 2013. In October, he announced that the consumption tax is to be raised from 5 per cent to 8 per cent in April 2014. It remains open, however, whether the tax will be raised further to 10 per cent in October 2015, as the consumption tax law assumes. If it is, there may be greater scope for cutting back the deficit. Still, restoring the primary balance in six years' time is a challenge. To accomplish this, greater tax increases would be necessary, coupled with spending cuts. These spending cuts would come from streamlining and cutting back social welfare programmes. On the revenue side, a further increase of the consumption tax from 10 per cent to 15 or 20 per cent is an option. Other increases, in inheritance tax and income tax, are under consideration. All these tax increases and spending cuts will probably have a depressive effect on the economy, and thus they should be implemented only when the economy is strong enough to bear the burden.

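The budget arithmetic cited in the fiscal discussion (a 93 trillion yen budget, 43 trillion yen of it debt-financed) can be reproduced directly; the only derived quantity below is GDP, inferred from the article's statement that the borrowing equals about 9 per cent of GDP.

```python
# FY2013 budget figures from the text, in trillion yen.
budget, new_debt = 93.0, 43.0

# Share of the general budget financed by debt (the article's 46 per cent):
debt_share = 100 * new_debt / budget
print(f"debt-financed share: {debt_share:.0f}%")       # 46%

# Implied GDP, given that the borrowing equals about 9% of GDP:
implied_gdp = new_debt / 0.09
print(f"implied GDP: {implied_gdp:.0f} trillion yen")  # ~478

# A debt stock "twice as large as GDP" would then be roughly:
print(f"implied debt stock: {2 * implied_gdp:.0f} trillion yen")
```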

Will Japan Replicate the Fates of Greece and Portugal?


Given this gloomy prospect, some economists and experts in the Japanese government argue that Japan will become like Greece and Portugal unless it takes bold measures to reduce the deficit very soon. Two of Abe's predecessors, former Prime Ministers Naoto Kan and Yoshihiko Noda of the Democratic Party of Japan (DPJ), share this view. They had both been finance minister before becoming prime minister. For a long time, there have been two sharply opposed views of the Japanese fiscal deficit, which is the largest in the world both in absolute volume and relative to GDP. One school of thought contends that such a large deficit poses an immediate threat to the Japanese economy. Proponents of this view, most notably finance ministry officials, believe in a balanced budget and have called for tax increases and spending cuts in social welfare programmes as soon as possible. They fear that unless Japan takes action immediately to cut the fiscal deficit, the country will soon follow in the footsteps of Greece and Portugal. The other school argues that, although the current fiscal deficit may be unsustainable in the long run, there is not much to worry about at present. This is because the Japanese fiscal deficit is fully financed by domestic savings. The fact is that the Japanese private sector, households and corporations, generates savings equivalent to 10 per cent of Japanese GDP, while the government of Japan borrows 8 per cent. The remaining 2 per cent is funnelled into foreign countries as a current account surplus. This is the complete opposite of Southern European nations such as Greece or Portugal, which borrow heavily from abroad to finance their fiscal deficits. The interest rate on Japanese government bonds is the lowest in the world. It is difficult to argue that Japan's situation is similar to those of Southern European countries. This debate on fiscal discipline versus economic growth seems to be identical to the discussion under way in Europe.
While many economists take the view that the consumption tax should be increased as planned, some of Abe's most trusted advisors are more cautious. Both Japan and European countries must carefully calculate the effects of aging populations on public expenditure. Many social welfare programmes, including pensions and medical insurance, must be streamlined and cut back, or taxes will need to be raised unbearably high.

The Surplus in the Current Account Is Dwindling


Japan has been running a current account surplus for the past four decades. The composition of that surplus, however, has evolved greatly. Until the middle of the 1990s, the trade surplus, that is, exports minus imports, was the major component of the current account surplus, but since then it has tapered off and begun to decrease slowly. In its place, the assets Japan holds overseas have generated an increasing amount of dividend and interest revenue. Since the earthquake and tsunami of 11 March 2011, Japan has been forced to import huge volumes of gas to offset the closure of its nuclear power plants. Consequently, since 2011 the trade balance has fallen into the red, cutting into the income surplus. Now, Japan's current account surplus is about 1 per cent of GDP, down from 5 per cent in 2007. Economists and policymakers are beginning to worry that, sooner or later, Japan will become unable to finance its fiscal deficit with domestic savings. It is imperative for Japan to reduce its fiscal debt to a manageable level before its current account surplus is wiped out.

Problem for Japanese Banks


A massive injection of money by the Bank of Japan was supposed to lower the interest rate, increase borrowings and stimulate investment and personal spending. The primary goal of Abenomics is to stop deflation and create modest inflation of around 2 per cent per annum. With overall price levels going up, interest rates must go up, too, which means that bond holders will suffer losses. Given that banks and other financial institutions have a high ratio of Japanese government bonds to total assets, rising interest rates may drive them into a banking crisis similar to those seen in Southern Europe recently. In Europe, what began as a fiscal problem developed into a banking crisis because many European banks held a substantial amount of sovereign bonds of peripheral countries. Will the same banking crisis occur in Japan?


Perhaps not. The Japanese government can always pay the interest on Japanese government bonds and the principal when due. It can print the money, if necessary, since all Japanese government bonds are denominated in Japanese yen, including those held by overseas buyers. However, this does not mean that there will not be a problem for the Japanese banking sector. The lion's share of Japanese government bonds is held by Japanese banks, pension funds and other institutional investors. There is a lot to suggest that the price of Japanese government bonds is likely to fall and the holders will suffer losses. Already, the so-called megabanks are cutting back on their holdings of long-term Japanese government bonds, but small, local banks remain exposed to much greater risk, as they still hold a large amount of the bonds.

No Exit Strategy in Sight

At present, there is no clear vision of how these swollen assets can be scaled back to a normal level. Mr Kuroda has said publicly that it is still premature to discuss an exit strategy. It is widely feared that the Japanese government bond market may collapse under its own weight. This might happen if holders were to sell their Japanese government bonds in a rush due to fears of sudden interest rate hikes. Given the basic principles of Abenomics, which purports to cause inflation of 2 per cent, it is inevitable that sooner or later interest rates will rise. The only questions are if and how to avoid a massive sell-off of Japanese government bonds. It is absolutely necessary for the Bank of Japan to present a credible plan to reduce its asset level in an orderly manner.

Success of Abenomics Depends on Structural Reform

While views are divided on the effectiveness of monetary easing and additional spending on public works, economists are in unanimous agreement that the most important part of Abenomics is its third arrow, namely, structural reform. This includes bold initiatives that should cut into the staunch vested interests of labour unions, medical doctor associations, farmers' associations and the like. Abe has yet to disclose the details of his reform programme.

At present, Japan is conducting a series of trade negotiations with the EU, Canada, Australia and neighbouring Asian countries. But by far the most important is the Trans-Pacific Partnership (TPP). Abe's decision to join the TPP negotiations has been much welcomed by business leaders and economists, but vehemently opposed within his own party. Trade liberalization negotiations will enter a critical stage this fall, when he must decide on liberalization of imports of products that have been protected by high tariffs or other forms of barriers in the past. This includes such items as rice, beef, pork, wheat and sugar. Traditionally, these products have been regarded as sacrosanct and no prime minister has ever dared to touch them. But more and more people, including farmers, are beginning to realize that Japanese agriculture cannot survive under the current regime anyway. If Abe can bring these reform-minded farmers onto his side, he has a chance of winning enough support to see the negotiations through. Such liberalization will bring in more competition in the Japanese agricultural industry and create an environment for innovative farmers to take a greater slice of the market and even venture into the export business.

Medical and care services is a sector in which bold reform is urgently needed. This is a much larger industry than agriculture, and a growing one due to Japan's aging population. The industry is heavily regulated by cumbersome rules and regulations, which today work to the detriment of new technology and services. While Japan's technology and equipment are internationally competitive, very few products have been allowed into the domestic market. The resistance to reform in this field comes not only from government agencies, but also from medical practitioners and pharmaceutical companies that enjoy dominant positions under the present system. They have the money and political connections to mobilize strong opposition, if they wish to do so.

Another area which deserves a lot of attention is the labour market. In Japan, although it is legally possible, historically it has been very difficult to lay off workers. The law considers lay-offs to be an absolute last resort. In fact, mid-career lay-offs are almost unheard of. If a case is brought to court, the employer must prove that the company has exhausted all other options to avoid a lay-off. This is an impossible task. A proposal is being made by employers to amend labour law to pave the way for lay-offs with monetary compensation. This is being met by fierce opposition from labour unions.

The electricity industry will also see major changes over the next decade. A law has already been passed that will split the current vertically integrated nine regional monopolies into electricity generating companies and transmission companies. New companies are allowed to generate and sell electricity, opening up the entire

electricity market for competition. There will be more supply from renewable sources such as solar, wind, biomass and geo-thermal. This will be particularly beneficial to Japan, as it must find alternative energy sources to fill the gap left by the closure of its nuclear power plants. Following through with all of these reforms will take a much greater effort than the first and second arrows. We will have to wait and see how Mr Abe spends his political capital to that end.

The views expressed in this publication are not necessarily those of the Friedrich-Ebert-Stiftung. Commercial use of all media published by the Friedrich-EbertStiftung (FES) is not permitted without the written consent of the FES.

About the author Risaburo Nezu is Senior Executive Fellow at the Fujitsu Research Institute. ISBN 978-3-86498-683-3

Friedrich-Ebert-Stiftung | International Dialogue | International Policy Analysis Hiroshimastr. 28 | 10785 Berlin | Germany | Tel.: ++49-30-269-35-7745 | Fax: ++49-30-269-35-9248 E-Mail: info.ipa@fes.de | www.fes.de/ipa

International Journal of Trade, Economics and Finance, Vol. 3, No. 5, October 2012

Protectionism and Free Trade: A Country's Glory or Doom?


Regine Adele Ngono Fouda

Abstract: This paper not only goes over well-trodden ground and examines the arguments commonly used, but carries the inquiry further than controversialists on either side have yet ventured to go. It also seeks to discover why protectionism retains such popular strength in spite of all exposures of its erroneous beliefs; to trace the connection between the tariff question and more important social questions, now rapidly becoming the "burning questions" of our times; and to show what radical measures the principle of free trade logically leads to. By harmonizing the truths which free traders perceive with the facts that make the protectionists' theory plausible, this may open ground upon which those separated by seemingly irreconcilable differences of opinion can unite for a full application of the free-trade principle that would secure both the largest production and the fairest distribution of wealth. Facts presented a posteriori support the generally recommended choice for a country, as well as the author's viewpoint.

Index Terms: Free trade, protectionism, glory (advantages), doom (drawbacks), industries, taxes, import/export, case study (on economic growth).

(Manuscript received July 22, 2012; revised August 25, 2012. Regine Adele Ngono Fouda is with Shanghai Maritime University, China; e-mail: reginemilena@yahoo.fr.)

I. INTRODUCTION

Free trade is a system in which the trade of goods and services between or within countries flows unhindered by government-imposed restrictions and interventions. Interventions include taxes and tariffs, non-tariff barriers such as regulatory legislation and quotas, and even inter-governmentally managed trade agreements such as the North American Free Trade Agreement (NAFTA) and the Central America Free Trade Agreement (CAFTA) (contrary to their formal titles). Free trade opposes all such interventions. One of the strongest arguments for free trade was made by the classical economist David Ricardo in his analysis of comparative advantage, which explains how trade will benefit both parties (countries, regions, or individuals) if they have different opportunity costs of production. Protectionism, on the other hand, is an economic policy of restricting trade between nations. Trade may be restricted by high tariffs on imported or exported goods, restrictive quotas, a variety of restrictive government regulations designed to discourage imports, and anti-dumping laws designed to protect domestic industries from foreign take-over or competition.

II. PROTECTION OF HOME INDUSTRIES IN DISFAVOR OF FOREIGN GOODS

Protectionism, an economic policy of restraining trade between nations through methods such as tariffs on imported goods, restrictive quotas, and a variety of other restrictive government regulations, is designed to discourage imports and prevent foreign take-over of local markets and companies. This policy is closely aligned with anti-globalization. The term is mostly used in the context of economics, where protectionism refers to policies or doctrines which "protect" businesses and "living wages" within a country by restricting or regulating trade with foreign nations. Adam Smith famously warned against the "interested sophistry" of industry, seeking to gain advantage at the cost of the consumers. Virtually all modern-day economists agree that protectionism is harmful in that its costs outweigh the benefits, and that it impedes economic growth. A series of trade crises eventually led the international community to convene and seek a common solution: in 1947 the General Agreement on Tariffs and Trade (GATT) was signed, with the objectives of encouraging international trade and reducing tariffs as far as possible. [1]

A. A Country's Protectionism: Blooming Glory

A country's protectionism will mean the protection of home industries or infant industries (until they are large enough to achieve economies of scale and strong enough to compete internationally), producers and consumers. Purely internal trade does not involve trade activities across national borders. Protectionism therefore avoids the special problems which often arise during import and export: deals might have to be transacted in foreign languages, under foreign laws, customs and regulations (protectionism frees a country from such trauma). Moreover, information from particular firms needed in order to trade might be difficult to obtain, and the exporting country's numerous cultural differences may have to be taken into account when trading. All these complicated tribulations have led a lot of countries to think twice, staying at home and enjoying the goods and products of home industries, and therefore declining to practice import and export. Furthermore, trading with foreign countries means taking their currencies into consideration; worst of all if the exchange rate variation is wider than the protectionist country's. Such a situation creates an exposure to taking on debt to fulfill the contract (partnership agreement) of import and export; e.g. Cameroon, in its thirst for practicing import/export, is regularly stuck in currency transactions, forcing it to take debts from the World Bank in order to keep the agreement moving, or else face a penalty for breach of contract. Also, there is greater political risk (such as the imposition of restrictions on imports); commercial risks are broad too, such as market failure, or products not appealing to foreign customers, as is the case for many goods nowadays in foreign

soils; more again, there are financial risks, such as adverse movement in exchange rates; and finally, transportation risks. B. Protectionism As A Fatal Failure: The Doom. No nation has all of the commodities that it needs. Some countries are abundant in certain resources, while others may lack them. For example, Colombia and Brazil have the ideal climate for growing coffee beans. The US is a major consumer of coffee, yet it does not have the suitable climate to grow any of its own. Consequently, this has made Colombia and Brazil big coffee exporters and the US big coffee importer. In this case, if the US were practicing protectionism, then they cant import coffee; this is true too for Cameroon, producer of cocoa and France consumer. In short, the uneven distribution of resources around the world would not give room for protectionism. Moreover, a country might claim that it has more than enough particular items to meet its needs and enough technology to manufacture or transform its natural products, however, this country might be consuming more than it can produce internally and thus will need to import from others depicting equilibrium in trade.[2] Protectionism results in deadweight loss. It has been estimated by the economist Stephen P. Magee (International Trade and Distortions in Factors Market 1976) [3] that, the benefits of free trade outweigh the losses as much as 100 to 1. Moreover, Alan Greenspan, former chairman of the American Federal Reserve, has criticized protectionist proposals as leading "to an atrophy of our competitive ability. ... If the protectionist route is followed, more efficient industries will have less scope to expand, overall output and economic welfare will suffer." Protectionism has also been accused of being one of the major causes of wars. 
Consider the constant warfare in the 17th and 18th centuries among European countries whose governments were predominantly mercantilist and protectionist; the American Revolution, which came about primarily because of British tariffs and taxes; and the protective policies that preceded World Wars I and II. According to Frederic Bastiat, "When goods cannot cross borders, armies will." History offers no shortage of examples. In 1930, facing only a mild recession, US President Hoover ignored the warning pleas in a petition signed by 1,028 prominent economists and signed the notorious Smoot-Hawley Act, which raised some tariffs to 100% levels. Within a year, over 25 other governments had retaliated by passing similar laws. The result? World trade came to a grinding halt, and the entire world was plunged into the Great Depression for the rest of the decade. The depression in turn led to World War II. The mechanism is simple: when the government of Country A puts up trade barriers against the goods of Country B, the government of Country B will naturally retaliate by erecting trade barriers against the goods of Country A. The result is a trade war in which both sides lose. But all too often a depressed economy is not the only negative outcome of a trade war.

III. FREE TRADE: NO RESTRICTIONS OR TAXES ON A COUNTRY'S IMPORTS AND EXPORTS

Opponents often argue that the comparative-advantage case for free trade has lost its legitimacy in a globally integrated world in which capital is free to move internationally. Herman Daly, a leading voice in the discipline of ecological economics, emphasizes that although Ricardo's theory of comparative advantage is one of the most elegant theories in economics, "free capital mobility totally undercuts Ricardo's comparative advantage argument for free trade in goods, because that argument is explicitly and essentially premised on capital (and other factors) being immobile between nations. Under the new globalization regime, capital tends simply to flow to wherever costs are lowest; that is, to pursue absolute advantage." On the other side, economist and trade theorist Paul Krugman once stated that "If there were an Economist's Creed, it would surely contain the affirmations 'I believe in the Principle of Comparative Advantage' and 'I believe in Free Trade'." [4]

A. Practicing Free Trade: As an Advantage

The literature analyzing the economics of free trade is extremely rich, with extensive work having been done on its theoretical and empirical effects. Though free trade creates winners and losers, the broad consensus among members of the economics profession in the US is that it is a large and unambiguous net gain for society. A telling example of this view: in a 2006 survey of American economists, 87.5% agreed that "the US should eliminate remaining tariffs and other barriers to trade" and 90.1% disagreed with the suggestion that "the US should restrict employers from outsourcing work to foreign countries." As Harvard economics professor Gregory Mankiw puts it [5], "Few propositions command as much consensus among professional economists as that open world trade increases economic growth and raises living standards."
The case grew stronger still with the finding that free trade creates more jobs than it destroys, because it allows countries to specialize in the production of goods and services in which they have a comparative advantage. Moreover, economists such as Milton Friedman [6] and Paul Krugman have argued that free trade helps third-world workers, even though they are not subject to the stringent health and labor standards of developed countries. This is because "the growth of manufacturing and of the penumbra of other jobs that the new export sector creates has a ripple effect throughout the economy" that creates competition among producers, lifting wages and living conditions. A simple economic analysis using the law of supply and demand and the economic effects of a tax can be used to show the theoretical benefits of free trade, as in Figure 1 below. It is believed that growth in productive capacity is fostered best in an environment where people are free to pursue their own interests. It was further stressed that a government policy of laissez-faire (allowing individuals to pursue their activities within the bounds of law, order and respect for property rights) would best provide the environment for increasing a nation's wealth, letting home industries flourish.
Fig. 1. A simple economic analysis using the law of supply and demand and the economic effects of a tax to show the theoretical benefits of free trade
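The textbook analysis behind Figure 1 can also be worked through numerically. Below is a minimal sketch with hypothetical linear demand and supply curves (every parameter value is illustrative, not taken from the paper): a tariff raises the domestic price above the world price, consumers lose more surplus than producers and the government gain, and the difference is the deadweight loss.

```python
# Welfare effects of an import tariff in a small open economy,
# with hypothetical linear demand and supply curves.
# All parameter values are illustrative only.

def welfare(world_price, tariff, a=100.0, b=2.0, c=-10.0, d=2.0):
    """Demand Qd = a - b*P, domestic supply Qs = c + d*P.
    Returns (consumer surplus, producer surplus, tariff revenue)."""
    p = world_price + tariff            # domestic price under the tariff
    qd = a - b * p                      # quantity demanded at home
    qs = max(c + d * p, 0.0)            # quantity supplied at home
    cs = 0.5 * (a / b - p) * qd         # consumer-surplus triangle
    ps = 0.5 * (p - (-c / d)) * qs      # producer-surplus triangle
    rev = tariff * max(qd - qs, 0.0)    # revenue on imported units
    return cs, ps, rev

cs0, ps0, _ = welfare(world_price=10.0, tariff=0.0)    # free trade
cs1, ps1, rev = welfare(world_price=10.0, tariff=5.0)  # with tariff

# Deadweight loss: what consumers lose beyond what producers
# and the government gain.
dwl = (cs0 - cs1) - (ps1 - ps0) - rev
print(f"consumer surplus loss: {cs0 - cs1:.1f}")   # 375.0
print(f"producer surplus gain: {ps1 - ps0:.1f}")   # 75.0
print(f"tariff revenue:        {rev:.1f}")         # 250.0
print(f"deadweight loss:       {dwl:.1f}")         # 50.0
```

With these illustrative numbers the deadweight loss equals the two familiar triangles, 0.5·t·(rise in domestic supply) + 0.5·t·(fall in demand) = 25 + 25 = 50: protection transfers surplus to producers and the treasury, but part of the consumers' loss simply vanishes.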

Self-interest, moreover, is a catalyst: any normal businessperson or entrepreneur will prefer to work in an environment where no barriers are levied to hinder what he or she plans to set forth. Free exchange would also lead individuals to specialize and exchange goods and services based on their own special abilities. Furthermore, competition is the automatic regulating mechanism; taxes and barriers stifle it, because many buyers cannot afford the resulting higher prices. Finally, the father of economics, Adam Smith, saw little need for government control of the economy. In The Wealth of Nations (1776) [7], Smith explained not only the critical role the market played in the accumulation of a nation's wealth but also the nature of the social order that it achieved and helped to maintain.

B. Free Trade As A Grave Setback

A strong argument rests on infant industries: if developing countries have industries that are relatively new, at the moment these industries would struggle against international competition, yet if they invested in them, they might in the future gain a comparative advantage. Protection would therefore allow them to progress and gain. The senile-industry argument is similar: if industries are declining and inefficient, they may require large investment to make them efficient again, and protection would act as an incentive for firms to invest and reinvent themselves. There is also the need to diversify the economy: many developing countries rely on producing primary products in which they currently have a comparative advantage, but relying on agricultural products has several disadvantages. Prices can fluctuate due to environmental factors, and the goods have a low income elasticity of demand, so with international economic growth, demand will increase only a little. [8]

Free trade is also incapable of raising revenue for the government: import taxes can be used to raise money for the state, although this will only be a small amount. Furthermore, free trade does not help the balance of payments, whereas reducing imports can help the current account; in the long term, however, this is likely to lead to retaliation. Finally, there is cultural identity, which is not really an economic argument but a political and cultural one. Many countries wish to protect themselves from what they see as an Americanization or commercialization of their societies. There is also a need for protection against dumping: the EU sold a lot of its food surplus from the CAP at very low prices on the world market, which caused problems for farmers worldwide because they saw a big fall in their market prices. Environmentally, it is argued that free trade can harm the environment, because LDCs may use up natural reserves of raw materials to export, and countries with strict pollution controls may find consumers importing goods from other countries where legislation is lax and pollution allowed. Supporters of free trade would reply, however, that it is up to individual countries to create environmental legislation.
IV. ANALYSIS OF PERTINENT CASES: CASE STUDY

In this analysis I draw on real-world situations. Japanese consumers pay five times the world price for rice because of import restrictions protecting Japanese farmers. European consumers pay dearly for EC restrictions on food imports, and pay heavy taxes for domestic farm subsidies. American consumers suffer from the same double burden, paying six times the world price for sugar because of trade restrictions. The US Semiconductor Trade Pact, which pressured Japanese producers to cut back production of computer memory chips, caused an acute worldwide shortage of these widely used parts; prices quadrupled, and companies using these components in the production of electronic consumer goods, in various countries around the world, were badly hurt. My main case study, however, is the banana war, examined below.

A. The Banana Trade: Case Study

The progressive reduction of tariff barriers has caused world trade to increase by several hundred per cent since 1945, and there is no doubt that this has created both work and prosperity. [9] It has also improved products: although the planned economies of the former Soviet Union and the other countries created industries that produced nearly as much as Western companies, their products were much less sophisticated, reliable or marketable, and consequently they were excluded from the competition. Today, most economists argue that nations which try to shelter declining industries behind tariff barriers are simply resisting the inevitable, and that they could instead use those subsidies to create new jobs in more modern industries. In other words, tariff barriers penalize consumers, as in the Japanese example above. For many years, the banana industry had a special status.
The European Union allowed former British and French colonies in Africa (notably Cameroon), the Caribbean and the Pacific islands to export to Europe as many bananas as they wished, at slightly above world prices. Banana production costs are higher in the Caribbean than on the American-owned plantations in Latin America, owing to the small size of family-run farms, the difficult terrain, and the climate. In 1999, for example, the US-based company Chiquita Brands made a $500,000 donation to the Democratic Party. The very next day, the US government complained to the World Trade Organization about Europe's banana trade, and

put a 100% import tariff on various European goods. Opponents of the American case pointed out that only 7% of the 2.5 billion tonnes of bananas imported into Europe every year came from the Caribbean. The EU's banana policy cost American companies only about $200 million a year, whereas trade between the US and the EU is worth about 200 billion euros. Half the population of the Caribbean relies on the banana industry to supply basic needs such as food, shelter and education. Small states such as Dominica depend on banana exports to the EU for around 70 per cent of all export earnings and much of their employment; no other countries in the world are so dependent on a single product. If the Caribbean banana industry were taken away without farmers being given enough time to develop other ways of using the land, these countries' economies would collapse, and the results of entirely free trade in bananas could be disastrous. It could also be pointed out that American, Japanese and European farmers are currently subsidized by billions of dollars every year, and that America itself erected massive tariff barriers in the 19th century. Furthermore, the Americans wanted to end subsidies to Caribbean banana producers even though the consequences might have included many of the farmers turning to drug production and trafficking, or trying to immigrate illegally to the US. The banana wars ended in July 2001, when the Americans ended their special import taxes on selected European goods after the European Union agreed to import more Latin American bananas from the large US banana companies, while still also buying bananas from its former colonies.

B. Point of View: A Country's Economic Growth

Protectionists fault the free trade model as being reverse protectionism in disguise: using tax policy to protect foreign manufacturers from domestic competition.
By ruling out revenue tariffs on foreign products, government must rely fully on domestic taxation for its revenue, which falls disproportionately on domestic manufacturing. As Paul Craig Roberts [10] notes in "US Falling Behind Across the Board": "Foreign discrimination of US products is reinforced by the US tax system, which imposes no appreciable tax burden on foreign goods and services sold in the US but imposes a heavy tax burden on US producers of goods and services regardless of whether they are sold within the US or exported to other countries." Moreover, it is the stated policy of most First World countries to eliminate protectionism through free trade policies enforced by international treaties and organizations such as the World Trade Organization. Despite this, many of these countries still place protective and/or revenue tariffs on foreign products to protect some favored or politically influential industries. This creates an artificially profitable industry and discourages foreign innovation. Protectionist quotas can even make foreign producers more profitable, mitigating their desired effect: because quotas artificially restrict supply so that it cannot meet demand, the foreign producer can command a premium price for its products. These increased profits are known as quota rents (CEE). [11] My point of view, as a freshman economist, is to try to
understand the degree to which countries have suffered or gained from these two principles. In spite of the evidence of damage caused by trade restrictions, pressure for more "protectionist" laws persists. Who is behind this, and why? Those who gain from "protectionist" laws are special-interest groups, such as some big corporations, unions, and farmers' groups, all of whom would like to get away with charging higher prices and getting higher wages than they could expect in a free marketplace. These special interests have the money and political clout to influence politicians to pass laws favorable to them, and politicians in turn play on the fears of uninformed voters to rally support for those laws. Who, then, are the losers in this international game? You, and all other ordinary consumers: your freedom is being trampled into the dust by these laws, and you are literally being robbed, through taxes and higher prices, in order to line the pockets of a few politically privileged "fat cats." It is thus clear that some are favored while others lose out. "Protectionism is a misnomer. The only people protected by tariffs, quotas and trade restrictions are those engaged in uneconomic and wasteful activity. Free trade is the only philosophy compatible with international peace and prosperity," observed Walter Block [12] (Senior Economist, Fraser Institute, Canada). Another great economist pointed out that the world enjoyed its greatest economic growth during the relatively free trade period of 1945-1970, a period that also saw no major wars. Yet we again see trade barriers being raised around the world by short-sighted politicians. Will the world again end up in a shooting war as a result of these economically deranged policies? Can we afford to allow this to happen in the nuclear age? I think not, because I suppose there is still much to cover, for this better world we are all fighting for.
Another great economist traced the economic wars fought in our world today to the economic philosophy of nationalism. As Ludwig von Mises put it: "What generates war is the economic philosophy of nationalism: embargoes, trade and foreign exchange controls, monetary devaluation, etc. The philosophy of protectionism is a philosophy of war." Furthermore, in the words of Ken Schoolland (former US International Trade Commission economist): "For thousands of years, the tireless efforts of productive men and women have been spent trying to reduce the distance between communities of the world by reducing the costs of commerce and trade. Over the same span of history, the slothful and incompetent protectionist has endlessly sought to erect barriers in order to prohibit competition, thus effectively moving communities farther apart. When trade is cut off entirely, the real producers may as well be on different planets. The protectionist represents the worst in humanity: fear of change, fear of challenge, and the jealous envy of genius. The protectionist is not against the use of every kind of force, even warfare, to crush his rival. If mankind is to survive, then these primeval fears must be defeated."

V. CONCLUSION

Being an economist nowadays is full of ambiguous situations: after an international analysis of economic principles such as protectionism and free trade, one is confronted with the glories and dooms witnessed on both sides of international trade. Nevertheless, one can easily see from the above analysis that protectionism is outdated for many countries around the globe, yet it remains the standing position of many others; whereas free trade is the more appreciated policy in many countries nowadays, and it is progressively drawing other countries to reduce protectionism and opt for free trade, in order to avoid the economic wars bred by restrictive borders: tariffs, quotas, barriers, etc. For Ludwig von Mises, the whole basis of economics is human action, and human action means change. Human conditions also change: populations grow and shift, and younger members replace the older ones, bringing fresh ideas with them. Production methods change too, with new processes being invented and old ones fading into disuse. [13] This leads to a wise elucidation: economics in the world of international trade changes as time goes by, especially as knowledge and technology expand. (Regine Adele Ngono Fouda, student in the Masters in International Trade Science Dept., Nov. 2008)

REFERENCES
[1] K. Bagwell and R. W. Staiger, "Multilateral Tariff Cooperation during the Formation of Regional Free Trade Areas," International Economic Review, vol. 38, 1997.
[2] R. E. Baldwin, "Equilibrium in International Trade: A Diagrammatic Analysis," Quarterly Journal of Economics, vol. 62, 1948.
[3] S. P. Magee, International Trade and Distortions in Factor Markets, New York: Marcel Dekker, 1976.
[4] P. Krugman, "Is Free Trade Passé?" Journal of Economic Perspectives, pp. 131-144, 1987; "A Raspberry for Free Trade," Nov. 21, 1997; "In Praise of Cheap Labor," Mar. 21, 1997.
[5] G. Mankiw, Eastern Economic Journal, vol. 35, pp. 14-23, 2009.
[6] M. and R. Friedman, Free to Choose, New York: Harcourt Brace Jovanovich, Inc., 1980, pp. 40-41. See also R. McGee, "The Cost of Protectionism," The Asian Economic Review, vol. 32, no. 3, Dec. 1990, pp. 347-64; A Trade Policy for Free Societies, New York: Quorum Books, 1994; W. Block and R. McGee, "Ethical Aspects of Initiating Anti-Dumping Actions," International Journal of Social Economics, vol. 24, no. 6, 1997, pp. 599-608; "Must Protectionism Violate Rights?" International Journal of Social Economics, vol. 24, no. 4, pp. 393-407, 1997.
[7] A. Smith, Free Trade and Protection, a reprint of Book IV and Chapters II and III of Book V of The Wealth of Nations, New York: Random House, Inc., 1937, p. 415.
[8] A. G. Kenwood and A. L. Lougheed, The Growth of the International Economy 1820-2000, 4th ed., London: Routledge, 1999.
[9] A. K. Rose, "Do We Really Know That the WTO Increases Trade?" The American Economic Review, vol. 94, no. 1, 2004.
[10] P. C. Roberts, "US Falling Behind Across the Board," July 26, 2005.
[11] J. Bhagwati, "Protectionism," Concise Encyclopedia of Economics, Library of Economics and Liberty; also Termites in the Trading System: How Preferential Agreements Undermine Free Trade.
[12] W. Block, "The Necessity of Free Trade," Journal of Markets & Morality, vol. 1, no. 2, pp. 192-200, Oct. 1998.
[13] Ludwig von Mises: A Primer, The Institute of Economic Affairs (IEA), Great Britain, 2010.


ACKNOWLEDGMENT

I would like to thank, first, my supervisor Mr. Huang Y. F., Rector/President of Shanghai Maritime University; my vice-supervisor Mrs. Chen; and all my other lecturers at the university. I also want to thank my parents, my brothers and sister for all their moral and spiritual support, as well as my friends who in one way or another contributed to finalizing this paper, among them Elijah M., Aissata D., Gerald O., and all the others whom I could not name.

Regine Adele Ngono Fouda is a student at Shanghai Maritime University in Shanghai, China. She is from Cameroon and studies in the Department of Logistics Management. She has a background in Private Law, International Politics, International Trade and now Logistics. She is a very hard-working person who is determined to reach her goal.


Job polarisation and the decline of middle-class workers wages


Michael Boehm, 8 February 2014 Employment in traditional middle-class jobs has fallen sharply over the last few decades. At the same time, middle-class wages have been stagnant. This column reviews recent research on job polarisation and presents a new study that explicitly links job polarisation with the changes in workers' wages. Job polarisation has a substantial negative effect on middle-skill workers.
The decline of the middle class has come to the forefront of debate in the US and Europe in recent years. This decline has two important components in the labour market. First, the number of well-paid middle-skill jobs in manufacturing and clerical occupations has decreased substantially since the mid-1980s. Second, the relative earnings for workers around the median of the wage distribution dropped over the same period, leaving them with hardly any real wage gains in nearly 30 years.

Job polarisation and its cause


Pioneering research by Autor, Katz, and Kearney (2006), Goos and Manning (2007), and Goos, Manning, and Salomons (2009) found that the share of employment in occupations in the middle of the skill distribution has declined rapidly in the US and Europe. At the same time the share of employment at the upper and lower ends of the occupational skill distribution has increased substantially. Goos and Manning termed this phenomenon job polarisation and it is depicted for US workers in Figure 1. Figure 1. Changes in US employment shares by occupations since the end of the 1980s

Notes: The chart depicts the percentage point change in employment in the low-, middle- and high-skilled occupations in the National Longitudinal Survey of Youth (NLSY) and the comparable years and age group in the more standard Current Population Survey (CPS). The high-skill occupations comprise managerial, professional services and technical occupations. The middle-skill occupations comprise sales, office/administrative, production, and operator and labourer occupations. The low-skill occupations include protective, food, cleaning and personal service occupations.

In an influential paper, Autor, Levy, and Murnane (2003) provide a compelling explanation: they found that middle-skilled manufacturing and clerical occupations are characterised by a high intensity of procedural, rule-based activities which they call routine tasks. As it happens, these routine tasks can relatively easily be coded into computer programs. Therefore, the rapid improvements in computer technology over the last few decades have provided employers with ever cheaper machines that can replace humans in many middle-skilled activities such as bookkeeping, clerical work and repetitive production tasks. These improvements in technology also enable employers to offshore some of the routine tasks that cannot be directly replaced by machines (Autor 2010). Moreover, cheaper routine tasks provided by machines complement the non-routine abstract tasks that are intensively carried out in high-skill occupations. For example, data processing computer programs strongly increased the productivity of highly-skilled professionals. Machines also do not seem to substitute for the non-routine manual tasks that are intensively carried out in low-skill occupations. For example, computers and robots are still much less capable of driving taxis and cleaning offices than humans. Thus, the relative economy-wide demand for middle-skill routine occupations has declined substantially. This routinisation hypothesis, due to Autor, Levy, and Murnane, has been tested in many different settings and it is widely accepted as the main driving force of job polarisation.

The effect of job polarisation on wages

Around the same time as job polarisation gathered steam in the US, the distribution of wages started polarising as well. That is, real wages for middle-class workers stagnated while earnings of the lowest and the highest percentiles of the wage distribution increased. This is depicted in Figure 2. Figure 2. Percentage growth of the quantiles of the US wage distribution since the end of the 1980s

Notes: The chart depicts the change in log real wages along the quantiles of the wage distribution between the two cohorts for the NLSY and the comparable years and age group in the CPS.

It thus seems natural to think that the polarisation of wages is just another consequence of the declining demand for routine tasks. However, some evidence is not entirely consistent with this thought: virtually all European countries experienced job polarisation as well, yet most of them haven't seen wage polarisation but rather a continued increase in inequality across the board. Moreover, other factors that may have generated wage polarisation in the US have been proposed (e.g. an increase in the minimum wage, de-unionisation, and classical skill-biased technical change). In my recent paper I try to establish a closer link between job polarisation and workers' wages (Boehm 2013). In particular, I ask three interrelated questions:

First, have the relative wages of workers in middle-skill occupations declined as should be expected by the routinisation hypothesis? Second, have the relative wage rates paid per constant unit of skill in the middle-skill occupations dropped with polarisation? Third, can job polarisation explain the changes in the overall wage distribution?

I answer these questions by analysing two waves of a representative survey of teenagers in the US, carried out in 1979 and 1997. The survey responses provide detailed and multidimensional characteristics of these young people that influence their occupational choices and wages when they are 27 years old, at the end of the 1980s and the end of the 2000s respectively. Using these characteristics, I compute the probabilities of workers in the 1980s and today choosing middle-skill occupations, and then compare the wages associated with these probabilities over time. My empirical strategy relies on predicting the occupations that today's workers would have chosen had they lived in the 1980s, and then comparing their wages to those of workers who actually chose these occupations at that time. The results from this approach show a substantial negative effect of job polarisation on middle-skill workers. The positive wage effect associated with a 1% higher probability of working in high-skill jobs (compared to middle-skill jobs) almost doubled between the 1980s and today. The negative wage effect associated with a 1% higher probability of working in low-skill services jobs compared with middle-skill jobs attenuated by over a third over the same period. I find similar results when controlling for college education, which is arguably a measure of absolute skill. This suggests that it is indeed the relative advantage in the middle-skill occupations for which the returns in the labour market have declined. In the next step of my analysis, I estimate the changes in relative market wage rates that are offered for a constant unit of skill in each of the three occupational groups. Again, the position of the middle-skill occupations deteriorates substantially: the wage rates paid in the high-skill occupations increased by 20% compared to the middle, while the wage rate in the low-skill occupations rose by 30%.
This decline in the relative attractiveness of working in middle-skill occupations is consistent with the massive outflow of workers from these jobs. Finally, I check what effect the changing prices of labour may have had on the overall wage distribution and whether they can explain the wage polarisation that we observe in the US. Figure 3 shows that the change in the wage distribution due to these price effects reproduces the overall distribution reasonably well in the upper half while it fails to match the increase of wages for the lowest earners compared to middle earners. Figure 3. Actual and counterfactual changes in the US wage distribution

Notes: The chart plots the actual and counterfactual changes in the wage distribution in the NLSY when workers in 1980s are assigned the estimated price changes in their occupations.

At first glance, this is surprising given the strong increase in relative wage rates for low-skill work and the increase in the wages of workers in low-skill occupations. The reason is that these workers now move up in the wage distribution, which lifts not only the (low) quantiles where they started out but also the (middle) quantiles where they end up. The inverse happens for workers in middle-skill occupations but with the same effect on the wage distribution.
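The re-ranking logic described above can be illustrated with a small simulation. Everything below is hypothetical: the group means, group sizes and price changes are invented for illustration, not estimated from the NLSY. Middle-skill wage rates are held completely flat, yet the middle quantiles of the overall distribution still rise, because low-skill workers whose wages grew move up into them.

```python
import random
import statistics

random.seed(0)

# Hypothetical log wages for three occupation groups
# (group means and sizes are invented for illustration).
low    = [random.gauss(2.0, 0.3) for _ in range(3000)]
middle = [random.gauss(2.6, 0.3) for _ in range(4000)]
high   = [random.gauss(3.2, 0.3) for _ in range(3000)]

before = low + middle + high

# Hypothetical occupation-specific price changes: low- and high-skill
# wage rates rise, middle-skill rates stay flat (the routinisation story).
after = ([w + 0.25 for w in low] +
         [w + 0.00 for w in middle] +
         [w + 0.20 for w in high])

# Compare growth at each decile of the overall wage distribution.
q_before = statistics.quantiles(before, n=10)
q_after = statistics.quantiles(after, n=10)
for i, (b, a) in enumerate(zip(q_before, q_after), start=1):
    print(f"decile {i}: log-wage change {a - b:+.3f}")
```

In this sketch the median rises even though no middle-skill worker received a raise: workers from the low group, now earning more, occupy ranks that middle-skill workers used to hold. Quantile growth therefore mixes the true price changes with the re-ranking of workers, which is the composition effect described in the paragraph above.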

Conclusions
Despite the above findings, my paper does not provide the last word on the effect of job polarisation on the bottom of the wage distribution. This is because, for example, my estimates do not take into account potential additional wage effects from workers moving out of the middle-skill occupations into low-skill occupations. Therefore, we cannot yet finally assess the role that job polarisation versus policy factors (such as the rise of the minimum wage) played in the lower part of the wage distribution in the US. However, what emerges unambiguously from my work is that routinisation has not only replaced middle-skill workers' jobs but also strongly decreased their relative wages. Policymakers who intend to counteract these developments may want to consider the supply side: if there are investments in education and training that help low and middle earners to catch up with high earners in terms of skills, this will also slow down or even reverse the increasing divergence of wages between those groups. In my view, the rising number of programs that try to tackle early inequalities in skill formation is therefore well-motivated from a routinisation perspective.

References

Acemoglu, D and D H Autor (2011), "Skills, Tasks and Technologies: Implications for Employment and Earnings", in Handbook of Labor Economics, edited by Orley Ashenfelter and David Card, Vol. 4B, Ch. 12, 1043-1171.
Autor, D H (2010), "The polarization of job opportunities in the US labour market: Implications for employment and earnings", Center for American Progress and The Hamilton Project.
Autor, D H and D Dorn (2013), "The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market", The American Economic Review 103(5): 1553-97.
Autor, D H, L F Katz, and M S Kearney (2006), "The Polarization of the US Labor Market", The American Economic Review 96(2): 189-194.
Autor, D H, F Levy and R Murnane (2003), "The Skill Content of Recent Technological Change: An Empirical Exploration", Quarterly Journal of Economics 118(4): 1279-1333.
Boehm, M J (2013), "The Wage Effects of Job Polarization: Evidence from the Allocation of Talents", Working Paper.
Goos, M and A Manning (2007), "Lousy and lovely jobs: The rising polarization of work in Britain", The Review of Economics and Statistics 89(1): 118-133.
Goos, M, A Manning and A Salomons (2009), "Explaining Job Polarization in Europe: The Roles of Technology, Globalization and Institutions", American Economic Review Papers and Proceedings 99(2): 58-63.
Michaels, G, A Natraj, and J Van Reenen (2013), "Has ICT Polarized Skill Demand? Evidence from Eleven Countries over 25 Years", forthcoming in Review of Economics and Statistics; earlier version available as CEP Discussion Paper No. 987 (http://cep.lse.ac.uk/pubs/download/dp0987.pdf).
Spitz-Oener, A (2006), "Technical change, job tasks, and rising educational demands: Looking outside the wage structure", Journal of Labor Economics 24(2): 235-270.

1. This figure and the ones below are based on two representative samples of 27-year-old males in the United States (the National Longitudinal Survey of Youth (NLSY) and the Current Population Survey (CPS)). For qualitatively similar statistics on all prime-age workers, refer to Acemoglu and Autor (2011).
2. Examples of tests of the routinisation hypothesis include Michaels et al (2013), who find that industries with faster growth of information and communication technology had greater decreases in the relative demand for middle-educated workers; Spitz-Oener (2006), who shows that job tasks have become more complex in occupations that rapidly computerized; and Autor and Dorn (2013), who show that local labour markets that specialised in routine tasks adopted information technology faster and experienced stronger job polarisation.
3. For the details of this estimation, please refer to the paper.

OXFORD REVIEW OF ECONOMIC POLICY, VOL. 16, NO. 4

HOW COMPLICATED DOES THE MODEL HAVE TO BE?


PAUL KRUGMAN
Princeton University

Simple macroeconomic models based on IS-LM have become unfashionable because of their lack of micro-foundations, and are in danger of being effectively forgotten by the profession. Yet while thinking about micro-foundations is a productive enterprise, complex models based on such foundations are not necessarily more accurate than simple, ad-hoc models. Three decades of attempts to base aggregate supply on rational behaviour have not displaced the Phillips curve; inter-temporal models of consumption do not offer reliable predictions about aggregate demand. Meanwhile, the ease of use of small models makes them superior for many practical applications. So we should not allow them to be driven out of circulation.

I. A VANISHING ART
Two years before writing this piece I was assigned by my then department to teach Macroeconomics I for graduate students. Ordinarily this course is taught by someone who specializes in macroeconomics; and whatever topics my popular writings may cover, my professional specialities are international trade and finance, not general macroeconomic theory. However, MIT had a temporary staffing problem, which is itself revealing of the current state of macro, and I was called in to fill the gap.

The problem was this: MIT's first macro segment is a half-semester course, which is supposed to cover the workhorse models of the field: the standard approaches that everyone is supposed to know, the models that underlie discussion at, say, the Fed, Treasury, and the IMF. In particular, it is supposed to provide an overview of such items as the IS-LM model of monetary and fiscal policy, the AS-AD approach to short-run versus long-run analysis, and so on. By the standards of modern macro theory, this is crude and simplistic stuff, so you might think that any trained macroeconomist could teach it. But it turns out that that isn't true.

© 2000 OXFORD UNIVERSITY PRESS AND THE OXFORD REVIEW OF ECONOMIC POLICY LIMITED



You see, younger macroeconomists, say those under 40 or so, by and large don't know this stuff. Their teachers regarded such constructs as the IS-LM model as too ad hoc, too simplistic, even to be worth teaching; after all, they could not serve as the basis for a dissertation. Now MIT's younger macro people are certainly very smart, and could learn the material in order to teach it, but they would find it strange, even repugnant. So in order to teach this course MIT has relied, for as long as I can remember, on economists who learned old-fashioned macro before it came to be regarded with contempt. For a variety of reasons, however, MIT couldn't turn to the usual suspects that year, and I had to fill the gap.

Now you might say, if this stuff is so out of fashion, shouldn't it be dropped from the curriculum? But the funny thing is that while old-fashioned macro has increasingly been pushed out of graduate programmes (it takes up only a few pages in either the Blanchard-Fischer (1989) or Romer (1996) textbooks that I assigned, and none at all in many other tracts), out there in the real world it continues to be the main basis for serious discussion. After 25 years of rational expectations, equilibrium business cycles, growth and new growth, and so on, when the talk turns to the next move by the Fed, the European Central Bank, or the Bank of Japan, when one tries to see a way out of Argentina's dilemma, or ask why Brazil's devaluation turned out relatively well, one almost inevitably turns to the sort of old-fashioned, small-model macro that I taught that spring.

Why does the old-fashioned stuff persist in this way? I don't think the answer is intellectual conservatism. Economists, in fact, are in general neophiles, always looking for something radical and different. Anyway, I have seen over and over again how young economists, trained to regard IS-LM and all that with contempt (if they even know what it is), find themselves turning to it after a few years in Washington or New York.
There's something about primeval macro that pulls us back to it; if Hicks hadn't invented IS-LM in 1937, we would end up inventing it all over again. But what is it that makes old-fashioned macro so compelling? The answer, I would argue, is that we need small, ad-hoc models as part of our intellectual tool-box.

Since the 1970s, macroeconomic theory has been driven in large part by an attempt to get rid of the ad-hockery. The most prominent and divisive aspect of that drive has been the effort to provide micro-foundations for aggregate supply. But efforts to provide micro-foundations for aggregate demand, by grounding individual decisions in intertemporal optimization, have been almost equally determined. The result has been a turning away from simple, ad-hoc models of the IS-LM genre.

What I would argue is that this tendency has gone too far. Of course we should do the more complicated models; of course we should strive for a synthesis that puts macroeconomics on a firmer micro-foundation. But for now, and for the foreseeable future, the little models retain a vital place in the discipline. To make this point, let me first revisit where the little models come from in the first place.

II. HOW THE HOC GOT ADDED


Aficionados know that much of what we now think of as Keynesian economics actually comes from John Hicks, whose 'Mr Keynes and the Classics' (Hicks, 1937) introduced the IS-LM model: a concise statement of an argument that may or may not have been what Keynes meant to say, but has certainly ended up defining what the world thinks he said. But how did Hicks come up with that concise statement? To answer that question we need only look at the book he himself was writing at the time, Value and Capital, which has in a low-key way been as influential as Keynes's General Theory.

Value and Capital may be thought of as an extended answer to the question: how do we think coherently about the interrelationships among markets, about the impact of the price of hogs on that of corn and vice versa? How does the whole system fit together? Economists had long understood how to think about a single market in isolation; that's what supply and demand is all about. And in some areas, notably international trade, they had thought through how things fitted together in an economy producing two goods. But what about economies with three or more goods, where some pairs of goods might be substitutes, others complements, and so on?

This is not the place to go at length into the way that Hicks (and others working at the same time) put the story of general equilibrium together. But to understand where IS-LM came from, and why it continues to reappear, it helps to think about the simplest case in which something more than supply and demand curves becomes necessary: a three-good economy.

Figure 1. [Equilibrium in a three-good economy: the relative prices PX/PZ (horizontal axis) and PY/PZ (vertical axis), with curves labelled X, Y, and Z showing the price combinations at which each market clears.]

Let us simply call the goods X, Y, and Z, and let Z be the numeraire. Now equilibrium in a three-good model can be represented by drawing curves that indicate combinations of prices for which each of the three markets is in equilibrium. Thus in Figure 1 the prices of X and Y, both in terms of Z, are shown on the axes. The line labelled X shows price combinations for which demand and supply of X are equal; similarly with Y and Z. Although there are three curves, Walras's Law tells us that they have a common intersection, which defines equilibrium prices for the economy as a whole. The slopes of the curves are drawn on the assumption that own-price effects are negative, cross-price effects positive; thus an increase in the price of X increases demand for Y, driving the price of Y up, and vice versa. It is also, of course, possible to introduce complementarity into such a framework, which was one of its main points.

This diagram is simply standard, uncontroversial microeconomics. What does it have to do with macro? Well, suppose you wanted a first-pass framework for thinking coherently about macro-type issues, such as the interest rate and the price level. At minimum such a framework would require consideration of the supply and demand for goods, so that it could be used to discuss the price level; the supply and demand for bonds, so that it could be used to discuss the interest rate; and, of course, the supply and demand for money. What, then, could be more natural than to think of goods in general, bonds, and money as if they were the three goods of Figure 1? Put the price of goods (that is, the general price level) on one axis, and the price of bonds on the other, and you have something like Figure 2; or, more conventionally, putting the interest rate instead of the price of bonds on the vertical axis, something like Figure 3. And already we have a picture that is essentially Patinkin's flexible-price version of IS-LM.
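The three-good logic can be made concrete with a minimal numerical sketch (my own illustration, not from the paper): a two-consumer exchange economy with Cobb-Douglas preferences over goods X, Y, and Z, with Z as numeraire. All shares and endowments are invented for illustration. The sketch checks Walras's Law at arbitrary prices, then finds the common intersection of the market-clearing curves by simple tatonnement.

```python
# A minimal sketch of the three-good economy of Figure 1 (parameters are
# invented for illustration).  Two consumers with Cobb-Douglas preferences
# trade goods X, Y, Z; Z is the numeraire, so p[2] = 1 throughout.
consumers = [
    {"shares": (0.5, 0.3, 0.2), "endow": (1.0, 0.0, 2.0)},
    {"shares": (0.2, 0.5, 0.3), "endow": (0.0, 2.0, 1.0)},
]

def excess_demand(p):
    """Aggregate excess demand for (X, Y, Z) at prices p."""
    z = [0.0, 0.0, 0.0]
    for c in consumers:
        wealth = sum(pi * wi for pi, wi in zip(p, c["endow"]))
        for i in range(3):
            # Cobb-Douglas demand: spend a fixed share of wealth on each good.
            z[i] += c["shares"][i] * wealth / p[i] - c["endow"][i]
    return z

# Walras's Law: the VALUE of excess demand is zero at ANY prices, so only
# two of the three market-clearing curves in Figure 1 are independent.
p_arbitrary = [2.0, 0.7, 1.0]
z = excess_demand(p_arbitrary)
assert abs(sum(pi * zi for pi, zi in zip(p_arbitrary, z))) < 1e-12

# Tatonnement: raise the price of any good in excess demand, keep Z at 1.
p = [1.0, 1.0, 1.0]
for _ in range(5000):
    z = excess_demand(p)
    p = [p[0] * (1 + 0.05 * z[0]), p[1] * (1 + 0.05 * z[1]), 1.0]

assert all(abs(zi) < 1e-7 for zi in excess_demand(p))  # all three markets clear
print("equilibrium relative prices (PX/PZ, PY/PZ):", round(p[0], 4), round(p[1], 4))
```

With these particular shares and endowments the two independent market-clearing conditions are linear in the relative prices, and solving them by hand gives PX/PZ of roughly 4.32 and PY/PZ of roughly 2.39, which is where the iteration settles. The point of the exercise is only that the common intersection of the curves in Figure 1 pins down both relative prices at once.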



Figure 2. [Market-clearing schedules for Goods, Bonds, and Money, drawn with the price of goods on the horizontal axis and the price of bonds on the vertical axis.]

Figure 3. [The same three schedules (Money, Bonds, Goods) redrawn with the interest rate on the vertical axis and the price level on the horizontal axis.]

If you try to read pre-Keynesian monetary theory, or for that matter talk about such matters either with modern laymen or with modern graduate students who haven't seen this sort of thing, you quickly realize that this seemingly trivial formulation is actually a powerful tool for clarifying thought, precisely because it is a general-equilibrium framework that takes the interactions of markets into account. (I have heard the story of a famed general-equilibrium theorist who, late in a very distinguished career, saw an IS-LM diagram and asked who came up with that brilliant idea.) Here are some of the things it suddenly makes clear.

(i) What Determines Interest Rates?

Before Keynes and Hicks, and even to some extent after, there has seemed to be a conflict between the idea that the interest rate adjusts to make savings and investment equal, and the idea that it is determined by the choice between bonds and money. Which is it? The answer, of course (but it is only 'of course' once you've approached the issue the right way), is both: we're talking general equilibrium here, and the interest rate and price level are jointly determined in both markets.

(ii) How Can an Investment Boom Cause Inflation (and an Investment Slump Cause Deflation)?

Before Keynes this was a subject of vast confusion, with all sorts of murky stuff about lengthening periods of production, forced saving, and so on. But once you are thinking three-good general equilibrium, it becomes a simple matter. When investment (or consumer) demand is high, when people are eager to borrow to buy real goods, they are in effect trying to shift from bonds to goods. So both the bond-market and goods-market equilibrium schedules, but not the money-market schedule, shift; and the result is both inflation and a rise in the interest rate.

(iii) How Can We Distinguish between Monetary and Fiscal Policy?

Well, in a fiscal expansion the government sells bonds and buys goods, producing the same shifts in schedules as an investment boom. In a monetary expansion it buys bonds and sells newly printed money, shifting the bonds and money (but not goods) schedules.

Of course, this is all still a theory of money, interest, and prices (Patinkin's title), not employment, interest, and money (Keynes's). To make the transition we must introduce some kind of price stickiness, so that incipient deflation is at least partly translated into output decline; and then we must consider the multiplier impacts of that output decline, and so on. But the basic form of the analysis still comes from the idea of a three-good general-equilibrium model in which the three goods are goods in general, bonds, and money.

Sixty years on, the intellectual problems with doing macro this way are well known.
First of all, the idea of treating money as an ordinary good begs many questions: surely money plays a special sort of role in the economy. (For some reason, however, this objection has not played a big role in changing the face of macro.) Second, almost all the decisions that presumably underlie the schedules here involve choices over time: this is true of investment, consumption, even money demand. So there is something not quite right about pretending that prices and interest rates are determined by a static equilibrium problem. (Of course, Hicks knew about that, and was quite self-conscious about the limitations of his 'temporary equilibrium' method.) Finally, sticky prices play a crucial role in converting this into a theory of real economic fluctuations; while I regard the evidence for such stickiness as overwhelming, the assumption of at least temporarily rigid nominal prices is one of those things that works beautifully in practice but very badly in theory.

But step back from the controversies, and put yourself in the position of someone who must reach a judgement about the likely impact of a change in monetary policy, or an investment slump, or a fiscal expansion. It would be cumbersome to try, every time, to write out an intertemporal-maximization framework, with micro-foundations for money and price behaviour, and try to map that into the limited data available. Surely you will find yourself trying to keep track of as few things as possible, to devise a working model, a scratch-pad for your thoughts, that respects the essential adding-up constraints, that represents the motives and behaviour of individuals in a sensible way, yet has no superfluous moving parts. And that is what the quasi-static, goods-bonds-money model is, and that is why old-fashioned macro, which is basically about that model, remains so popular a tool for practical policy analysis.
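For concreteness, here is what such a scratch-pad can look like in a few lines of code: a fixed-price, three-equation IS-LM sketch (all functional forms linear, and every parameter value an invented illustration, not something taken from the paper). It reproduces the comparative statics described above: a fiscal expansion raises both output and the interest rate, while a monetary expansion raises output and lowers the rate.

```python
# An ad-hoc, quasi-static scratch-pad model of the kind defended in the text.
# Linear IS-LM with a fixed price level; all parameter values are
# illustrative assumptions.
def is_lm(G, M, P=1.0):
    """Solve for output Y and the interest rate r.

    IS:  Y = c0 + c1*(Y - T) + i0 - i1*r + G      (goods market)
    LM:  M/P = l0 + l1*Y - l2*r                   (money market)
    """
    c0, c1, T = 50.0, 0.6, 100.0     # consumption function
    i0, i1 = 100.0, 20.0             # investment demand
    l0, l1, l2 = 0.0, 0.5, 40.0      # money demand
    # Substitute r from the LM curve into the IS curve and solve for Y.
    numer = c0 - c1 * T + i0 + G - i1 * (l0 - M / P) / l2
    Y = numer / (1 - c1 + i1 * l1 / l2)
    r = (l0 + l1 * Y - M / P) / l2
    return Y, r

Y0, r0 = is_lm(G=100.0, M=200.0)  # baseline
Yf, rf = is_lm(G=120.0, M=200.0)  # fiscal expansion: government sells bonds, buys goods
Ym, rm = is_lm(G=100.0, M=220.0)  # monetary expansion: central bank buys bonds with new money

assert Yf > Y0 and rf > r0  # fiscal expansion raises output and the interest rate
assert Ym > Y0 and rm < r0  # monetary expansion raises output, lowers the rate
```

Crude as it is, this is exactly the kind of back-of-the-envelope apparatus that the text argues practitioners keep reaching for.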
Of course, if we knew (really knew) that one got much more reliable results by doing the things that ad-hoc macro does not, the old-fashioned approach would be a tool to be used only for the most preliminary examination of issues. But do we know that? Let me turn briefly to the two main areas of contention, the two main areas in which the search for micro-foundations has been most aggressively pursued: aggregate supply and intertemporal spending decisions. In each case I want to ask: how sure are we that the micro-founded version is really better?



What I would argue is that in each case the first efforts to derive some micro-foundation had a big pay-off. That is, in each case a preliminary, rough application of the idea that there was a deeper structure underneath the quasi-static model produced a result that was both compelling and empirically confirmed. But in each case, also, the subsequent work, the elaboration of the micro-foundations, has produced little if any gain in predictive power. Thinking about micro-foundations does you a lot of good; taking care to put them into the model all the time, it seems, does not.

III. AGGREGATE SUPPLY

The story of the aggregate supply wars is familiar to most economists, though everyone tells it a bit differently. Let me give my own version. As I see it, the effort to put micro-foundations beneath aggregate supply went through four stages.

(i) The Natural-rate Hypothesis

When macroeconomists first began thinking seriously about why nominal shocks really have real effects, they were led into a variety of models in which imperfect information might cause a confusion between real and nominal shocks. The famous Phelps volume (1970) included many of these models. The key implication of these models was some version of the principle that you can't fool all of the people all of the time, suggesting not just that prices would be flexible in the long run, but that persistent inflation would get built into expectations too. Thus was the natural-rate hypothesis born.

While there was some resistance to the natural-rate idea (and some sophisticated challenges have been posed in recent years), it was an idea that most economists found compelling. And its empirical predictions were soon seen to be roughly true. The Phillips curve did turn out to be unstable; for the USA, at least, a time series of inflation and unemployment seemed to go through clockwise spirals, just as you would have expected. (European data have never fitted very well, but that was easily explained away as the result of a secular upward trend in the natural rate.)

Better yet, the natural-rate hypothesis in effect predicted stagflation in advance, a victory that gave huge credibility to the whole enterprise of micro-founded macro. It therefore created a favourable environment for the second stage.

(ii) The Lucas Project

Few ideas in economics have been so influential, yet left so little lasting impact, as the idea that nominal shocks have real effects because of rational confusion. When I was in graduate school the Lucas supply curve, with its lovely metaphors about islands and signal processing, but its alas very unlovely formal apparatus, was the central development in macroeconomics. But hardly anyone seems to think it a useful tool today. What went wrong?

Analytically, it all comes down to the case of the economic agent who knew too much. In the real world recessions last far too long for us to believe that people are voluntarily withholding labour because they believe they are facing idiosyncratic shocks. And in theoretical models, even if we somehow imagine that agents cannot read about the ongoing recession in the Financial Times, just about any sort of financial market will, in conjunction with individual experience, eliminate the confusion that Lucas-type models rely on. Empirically, the main supporting evidence for the signal-processing aspect of the Lucas-type approach was the observation that the short-run aggregate supply curve appears steeper in countries with highly unstable inflation rates. But as Ball et al. (1988) pointed out, countries with very unstable inflation rates are also countries with high inflation rates, and one can think of many models in which high inflation would in effect reduce money illusion.

In short, the Lucas project, which tried to build an aggregate supply curve in which nominal shocks had real effects on genuine micro-foundations, failed. There were two responses to this.

(iii) The Great Schism

At the risk of great oversimplification, in the 1980s macroeconomists reacted to the failure of the Lucas project in two ways. One reaction was to say that because we had not managed to find a micro-foundation for non-neutrality of money, money must be neutral after all; it might look to you as if the Fed has power to affect the real economy, but that must be a statistical illusion. And that led in the direction of real business-cycle theory. The other reaction was to say that we need some other explanation of apparent non-neutrality, resting on something like menu costs or bounded rationality. And that led in the direction of New Keynesian theory.

Each of these approaches had something to commend it, but surely at this point we can say that neither approach really succeeded. I could go through the analytical and empirical objections to real business-cycle theory at some length, but perhaps the crucial point to make is that the evidence for price stickiness has stubbornly refused to go away. My personal impression was that the tide really began to turn against equilibrium business-cycle models with the accumulation of data about co-movement of nominal and real exchange rates. As inflation rates in advanced countries fell during the 1980s, while nominal exchange rates continued to fluctuate wildly, it became more or less impossible to ignore the startling extent to which price levels remained stable even while large swings in the exchange rate took place. It is no accident that Obstfeld and Rogoff (1996), which arguably is the defining book for the state of modern macro, basically uses exchange-rate evidence as the main basis for arguing that prices are, sure enough, sticky and hence that nominal shocks have real effects.

But if prices are sticky, why? New Keynesian macro offered an intellectually almost-satisfying answer. Suppose that there are distortions in labour and product markets, which give price-setters some monopoly or monopsony power.
Then the loss to an individual price-setter from not reducing its price in the face of a fall in aggregate demand would be second-order in the size of the decline, but the cost to the economy as a whole of such price stickiness would be first-order. All that one needs, then, is some fairly trivial reason not to change prices (menu costs, bounded rationality, or something) and one can justify the kind of price inflexibility that makes recessions possible.

It's a terrific insight; I remember being thrilled when I realized that we finally had a way other than confusion to reconcile sticky prices with more or less rational behaviour. But what does one do, exactly, with that insight? Fifteen years after the publication of the original menu-cost papers of Mankiw (1985) and Akerlof and Yellen (1985), it seems that this line of thought has produced not so much a micro-foundation for macro as a micro-excuse. That is, one can now explain how price stickiness could happen. But useful predictions about when it happens and when it does not, or models that build from menu costs to a realistic Phillips curve, just don't seem to be forthcoming.

(iv) Undeclared Peace

So where are we now? The passion seems to have gone out of the aggregate supply wars. The typical modelling trick, now used by both sides, is to suppose that prices must for some reason be set one period ahead. This assumption is somewhat problematic empirically, since there is considerable evidence for inflation inertia that goes beyond preset prices. But what I want to emphasize here is that the whole attempt to provide rational foundations for aggregate supply has ended up with something that is almost as ad hoc as the original assumption of sticky prices.

Notice the pattern. We got a big pay-off, the natural-rate hypothesis, from the first, crude application of rational modelling. The subsequent elaboration, the stuff that makes the models so complicated, has added to our understanding of the intellectual issues, but not noticeably to our ability to match the phenomena.
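The second-order versus first-order logic at the heart of the menu-cost argument can be written out in two lines (a standard textbook sketch; the notation is mine, not the paper's). Let $\pi(p)$ be an individual price-setter's profit function and $p^*$ its optimal price after a fall in demand, with the pre-shock price $p^* + \Delta$ still posted:

```latex
\begin{align*}
  \pi(p^* + \Delta)
    &= \pi(p^*) + \pi'(p^*)\,\Delta + \tfrac{1}{2}\,\pi''(p^*)\,\Delta^2 + O(\Delta^3) \\
    &= \pi(p^*) + \tfrac{1}{2}\,\pi''(p^*)\,\Delta^2 + O(\Delta^3),
       \qquad \text{since } \pi'(p^*) = 0 .
\end{align*}
```

The private loss from leaving the price unchanged is therefore roughly $\tfrac{1}{2}\lvert\pi''(p^*)\rvert\,\Delta^2$: second-order in the shock, so even a trivial menu cost can make non-adjustment individually rational, while the output consequences of the resulting economy-wide price rigidity remain first-order.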

IV. AGGREGATE DEMAND


While there has been considerable theorizing and empirical work over other components of aggregate demand, consumption spending has clearly been the centre of research and debate. The history of this debate is almost, though not quite, as familiar as that of the aggregate supply wars; let me again give my own version.

It began, of course, with the Keynesian consumption function, with consumption simply a function of disposable income. But this consumption function ran into empirical paradoxes and a forecasting débâcle almost as soon as it was formulated. The forecasting débâcle was the failure of consumption functions based on inter-war data to predict the post-war consumption boom. Indeed, consumption functions from the 1930s, with a marginal propensity to consume much less than the average propensity, seemed to lend plausibility to the secular stagnation hypothesis, with its suggestion that it would be persistently difficult to get the public to spend enough to use the economy's growing capacity. Happily, it didn't work out that way.

More broadly, the stylized facts on consumption didn't fit together if you tried to explain them with a consumption function of the form C = C(Y). The way I teach undergrads is to describe three facts: the procyclicality of the savings rate; the fact that high-income households have higher savings rates than low-income households; and the constancy of the savings rate over the very long run. The first two facts seem to suggest a marginal propensity to consume substantially less than the average propensity. But the third fact seems to imply a marginal propensity to consume that is more or less the same as the average propensity.

Rational behaviour came to the rescue. The permanent-income/life-cycle approach pointed out that a rational individual bases his consumption on his wealth, including the present discounted value of expected future income, not just on current income.
And all the stylized facts immediately fell into place: cyclical income gets saved because it is temporary, high-income families save more because they are also typically families with unusually high income for their own life cycle, but the long-term upward trend in income, because it is permanent, does not get reflected in a higher savings rate. As with the natural-rate hypothesis, then, the first application of rational behaviour to the model produced a big pay-off. And then?

As originally used, the permanent-income/life-cycle approach was a bit loose; it was used to suggest variables that might proxy for wealth, rather than being taken literally. But from the 1970s on it became much more hard-edged: we were supposed to abandon ad-hoc consumption functions and base everything on intertemporal maximization. What that meant, above all, was that the little macro models I described earlier were no longer allowable. Quite aside from the sticky-price issue, such models were now condemned as attempts to shoehorn an essentially dynamic, forward-looking problem into a quasi-static framework.

And at one level this is certainly right. In international macroeconomics, there are some issues, notably current-account determination, where you really want to have an intertemporal view; otherwise it is just very hard to get any good bearings. But should we rule out the use of the simple, ad-hoc, quasi-static models altogether? If fully fledged intertemporal consumption models were simply radically better at describing consumption behaviour, there would be a case for doing away with the ad-hoc models. But after a rousing initial start in Hall's (1978) famous random-walk paper, the evidence seems to have become distinctly mixed. The basic point seems to be that predictable changes in income should not lead to changes in consumption; they should already be incorporated in the current level of consumption, just as predictable changes in profits are supposed to be reflected in the current stock price. But as the survey in Romer (1996) points out, there is considerable evidence that in fact such predictable changes in income have substantial effects, though not as much as a crude Keynesian consumption function would have predicted. And evidence for the strong implications of intertemporal consumption optimization, like Ricardian equivalence when it comes to budget deficits, is at best questionable. Why the disappointing results?
Arguments similar to those used in New Keynesian models of aggregate supply also apply to aggregate demand. A consumer who uses a sensible rule of thumb will typically suffer only a small loss in lifetime utility compared with an obsessively perfect optimizer. If there are some costs to gathering and processing information, the rule of thumb may well be a better choice. Yet the implied consumption behaviour may be very different from that of a full optimizer. On the other hand, like New Keynesian price rigidity, this serves more as a micro-excuse than a micro-foundation for whatever we actually assume about spending.

In short, a careful intertemporal approach to aggregate demand does not turn out to be all that helpful: it offers no clear improvement in predictive power over simpler, ad-hoc approaches. So the history of aggregate demand theorizing is in a fundamental sense very similar to that of aggregate supply. Introducing a whiff of rational behaviour into the model greatly improves its ability to reflect reality. Going the rest of the way, to use a fully specified model of rational choices over time, is at best an ambiguous improvement; it produces a model that is more satisfying intellectually, but not necessarily better in terms of accuracy.
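Hall's random-walk benchmark mentioned above can be stated compactly (the standard textbook derivation, under the usual simplifying assumptions of quadratic utility and an interest rate equal to the rate of time preference; the notation is mine, not the paper's):

```latex
\begin{align*}
  u'(C_t) &= \frac{1+r}{1+\rho}\; E_t\!\left[\, u'(C_{t+1}) \,\right]
            && \text{(consumption Euler equation)} \\
  E_t\!\left[\, C_{t+1} \,\right] &= C_t
            && \text{with quadratic } u \text{ and } r = \rho ,
\end{align*}
```

so consumption should change only in response to news about lifetime resources. The mixed evidence surveyed in Romer (1996) is precisely that predictable income changes do move consumption, contrary to this benchmark.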

what we know pretty well, from decades of trying to give micro-foundations to macro, is that logical completeness and intellectual satisfaction are not necessarily indications that a model will actually do a better of job of tracking what really happens. For many purposes the small, ad-hoc models are as good as or better than the carefully specified, maximizing intertemporal model. Let me discuss an exception that proves the rule. One of the most influential macro models of the 1990s, and deservedly so, is the revisitation of MundellFlemingDornbusch by Obstfeld and Rogoff (1995). Its a beautiful piece of work, integrating a new Keynesian approach to price stickiness (albeit with the ad-hoc assumption that prices are set for only one period) with a full intertemporal approach to aggregate demand; it addresses the classic question of the effect of a monetary shock on output, interest rates, and the exchange rate. Its also pretty hard work: as restated in their 1996 book, the model and its analysis take 76 equations to describe, even though the authors restrict themselves to a special case, and to log-linearized analysis around a steady state. The big pay-off to all this is a gorgeous welfare result, which says that the benefits of a monetary expansion accrue equally to both countries, wherever the monetary expansion takes placea result that is immediately understandable in terms of the welfare economics of monopoly in general. But ask two questions. First, is this model actually any better at predicting the impact of monetary expansions in the real world than a three- or fourequation, back-of-the-envelope MundellFleming model? The authors give us no reason to think so. Second, do you really believe that welfare result? Certainly notindeed, it is driven to an important extent by the details of the model, and can quite easily be undone. 
The result offers a tremendous clarification of the issues; its not at all clear that it offers a comparable insight into what really happens. None of this should be taken as a critique of the new open-economy macroeconomics, which is wonderful stuff. The point is that while I can and will use this approach along with the old ad-hoc models, I am not going to give up those old models. And when

V. IN DEFENCE OF LITTLE MODELS


We now come to the moral of the sermon: the case for little, simple models. That means IS-LM or some variant for domestic macro, with an adaptive-expectations Phillips curve; it means a slightly souped up version of MundellFleming for international macro, with some kind of regressive expectations on the real exchange rate to avoid the conclusion that all countries must have the same interest rate. The point is not that these models are accurate or complete, or that they should be the only models used. Clearly they are incomplete, quite inadequate to examining some questions, and remain as full of hoc as ever. But they are easy to use, particularly on real-world policy questions, and often seem to give more or less the right answer. The alternative is to use models that are less ad hoc, that are careful to derive as much as possible from individual maximization. Nobody can seriously say that such models should be banned from discourse; where they confirm the insights of the simpler models they are reassuring, and where they contradict those results they serve as a warning sign. But


OXFORD REVIEW OF ECONOMIC POLICY, VOL. 16, NO. 4

But when I write my next popular piece on international macro, it will probably be based on ad-hoc macro, not models with proper micro-foundations. In the long run, what this says is that we haven't really got the right models. The small models haven't gotten any better over the past couple of decades; what has happened is that the bigger, more microfounded models have not lived up to their promise. The core of my argument isn't that simple models are good, it's that complicated models aren't all they were supposed to be. But pending the arrival, someday, of models that really do perform much better than anything we now have, the point is that the small, ad-hoc models are still useful. What that means, in turn, is that we need to keep them alive. It would be a shame if IS-LM and all that vanished from the curriculum because they were thought to be insufficiently rigorous, replaced with models that are indeed more rigorous but not demonstrably better.

REFERENCES
Akerlof, G., and Yellen, J. (1985), 'A Near-rational Model of the Business Cycle, with Wage and Price Inertia', Quarterly Journal of Economics, 100, 823-38.
Ball, L., Mankiw, G., and Romer, D. (1988), 'The New Keynesian Economics and the Output-Inflation Tradeoff', Brookings Papers on Economic Activity, 1, 1-65.
Blanchard, O., and Fischer, S. (1989), Lectures on Macroeconomics, Cambridge, MA, MIT Press.
Hall, R. (1978), 'Stochastic Implications of the Life Cycle-Permanent Income Hypothesis', Journal of Political Economy, 86(December), 971-87.
Hicks, J. R. (1937), 'Mr Keynes and the Classics', Econometrica, 5(April), 147-59.
Mankiw, N. G. (1985), 'Small Menu Costs and Large Business Cycles: A Macroeconomic Model of Monopoly', Quarterly Journal of Economics, 100, 529-39.
Obstfeld, M., and Rogoff, K. (1995), 'Exchange Rate Dynamics Redux', Journal of Political Economy, 103, 624-60.
Obstfeld, M., and Rogoff, K. (1996), Foundations of International Macroeconomics, Cambridge, MA, MIT Press.
Phelps, E. (ed.) (1970), Microeconomic Foundations of Employment and Inflation Theory, Norton.
Romer, D. (1996), Advanced Macroeconomics, New York, McGraw-Hill.


The return of schools of thought in macroeconomics


Simon Wren-Lewis, 24 February 2012

Just five years ago, macroeconomists talked about a 'new synthesis', bringing together Keynesian and Classical ideas in a unified, microfounded theoretical framework. Following the Great Recession, it appears that mainstream macroeconomics has once again split into schools of thought. This column explains why macroeconomics, unlike microeconomics, periodically fragments in this way.
In the 1970s and 1980s, macroeconomics was all about schools of thought. A popular textbook (Snowdon et al 1994) had the title A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought. Macroeconomists tended to take sides, and different schools had clear ideological associations. Antagonists often talked across each other, and anyone not already on one side just got totally confused. Schools of thought fragmented mainstream macroeconomics in a way that had no parallel in mainstream microeconomics.

But then things began to change. The discipline appeared to become much more unified. It would be going much too far to suggest that there was a general consensus, but to use a tired cliché, most macroeconomists started talking the same language, even if they were not saying the same thing. Goodfriend and King (1997) coined the term New Neoclassical Synthesis. Other authors wrote along similar lines (eg Woodford 2009 and Arestis 2007). This synthesis only applied to what is generally described as mainstream economics. Heterodox economists continued to organise in schools (for example neo-Marxists, post-Keynesians, and Austrians). The synthesis was reflected in masters-level textbooks (eg Romer 1996), which would typically begin by setting out a Ramsey-style model, then discuss real business cycle models, and finally move on to add sticky prices to get New Keynesian theory.

There were two main factors behind this synthesis. The first was microfoundations, ie deriving the components of macro models from standard optimisation applied to representative agents. This gave macroeconomics the potential to achieve the same degree of unity as microeconomics. The second was the development of New Keynesian theory, which allowed an analysis of aggregate demand within a microfounded framework, and which integrated ideas like rational expectations and consumption-smoothing into Keynesian analysis.
All models were now dynamic stochastic general equilibrium models. Following the Great Recession, things seem rather different. In popular discussion of macroeconomics, schools of thought in macro are definitely back. Bitter disputes have broken out between those advocating fiscal stimulus (Keynesians) and those against. (For just one example of such quarrelling, see DeLong 2012.) For those in freshwater departments like Chicago, the idea of an effective fiscal stimulus was something they thought had died with the rational expectations and New Classical revolutions. It must therefore have been something of a shock to see it being resurrected, and it is understandable that they might dismiss it as invoking long-discredited fairy tales. It looked as if 30 years of progress in the discipline was being ignored. Those advocating stimulus and deploring premature austerity, on the other hand, were understandably taken aback to find their analysis dismissed in this way. They thought they were using mainstream macroeconomic theory, not the partisan analysis of a Keynesian school of thought. In a recent Vox column, Jonathan Portes (Portes 2012) describes his puzzlement at being labelled Keynesian, when he thought he was following synthesis macroeconomics.

So why have schools of thought within mainstream macroeconomics returned? One simple story is that schools of thought are associated with macroeconomic crises, and macro synthesis follows periods of calm. Keynesian theory itself was born out of the Great Depression. The first Neoclassical Synthesis arose from the period of strong growth and low inflation in the postwar period. Monetarism gained strength from the rapid inflation of the 1970s. The more recent synthesis may be a child of the Great Moderation, and now that we have the Great Recession, schools of thought have returned. Because these crises are macroeconomic, and there are no equivalent crises involving microeconomic behaviour or policy, fragmentation of the mainstream into schools will be a macro, not micro, phenomenon.

However, I think this is too simplistic a view of what is happening today. One interesting feature of the current divide is that the label Keynesian appears to be used more by those opposed to certain policies, in particular fiscal stimulus, than by those on the other side. Typically Keynesians see themselves as putting forward synthesis analysis, without the need for branding. What has become clear is that the New Neoclassical Synthesis was in many ways a celebration of New Keynesian theory, which was not shared by many freshwater departments in the US. There may be good reasons why New Keynesian economists might have imagined that their analysis was now an uncontested part of the mainstream. In particular, it is used in nearly all central banks as their main tool in carrying out monetary policy. With monetary policy somewhat depoliticised through central bank independence, the successful implementation of New Keynesian theory during the Great Moderation allowed divisions among academic departments to remain dormant. On the other side, there was a belief that New Classical economics had been revolutionary, ie a successful counter-revolution against Keynesian ideas.
Once again there were good reasons supporting this belief. On consumption, rational expectations, the Lucas critique and more, traditional Keynesians had unsuccessfully opposed New Classical ideas. Furthermore, many of the leaders of New Classical thought did not want to update Keynesian thinking; they wanted to destroy it. The label Keynesian was associated with much more than a belief that prices were sticky and that therefore aggregate demand mattered. Instead it became associated with state intervention. Wikipedia, in its third paragraph on Keynesian economics, says: "Keynesian economics advocates a mixed economy, predominantly private sector, but with a significant role of government and public sector...". The New Classical counter-revolution failed in one respect. While Keynesian analysis may have suffered a near-death experience, it survived and subsequently prospered. New Classical critiques led to fundamental and largely progressive changes. Yet, for many reasons, including ideological ones, the would-be counter-revolutionaries did not want to give up their counter-revolution. Partly as a result, the degree to which New Keynesian theory was taught to graduate students differed widely among academic departments, at least in the US. So, perhaps unlike the first (postwar) neoclassical synthesis, the New Neoclassical Synthesis was partial in terms of its coverage among academics. This incompleteness was not apparent during the Great Moderation, because in central banks the synthesis was uncontested. The fault lines only became evident when monetary policy became relatively impotent at the zero bound after the Great Recession, and fiscal stimulus was used both in the US and the UK. Once that happened, what might be called the Anti-Keynesian school re-emerged. Using this account, it is perhaps possible to view the current emergence of schools of thought as a historical aberration.
The microfoundation of macroeconomics would seem to imply that mainstream macro should be as free from fragmentation into schools as microeconomics. As it becomes clear that the New Classical counter-revolution was not successful, the New Neoclassical Synthesis may yet become complete. (For an argument along these lines, see Economist 2012.) After all, New Keynesian models are essentially real business cycle models plus sticky prices, and the addition of price rigidity seems both empirically plausible and inoffensive in itself. Both sides could agree that for economies with a floating exchange rate, monetary policy is the stabilisation tool of choice, with fiscal policy only being used if monetary policy is constrained (Kirsanova et al 2009). When interest rates are stuck at the zero lower bound, synthesis models clearly show fiscal policy can be highly effective at stimulating output (Woodford 2011). What has been called 'demand denial' appears not to make academic sense, particularly at a zero lower bound (Wren-Lewis 2011). This outcome may, however, represent wishful thinking by New Keynesians. An alternative reading is that the Keynesian/Anti-Keynesian division is always going to be with us, because it reflects an ideological divide about state intervention. That divide occurs all the time in microeconomics, but because it involves arguing about many different externalities or imperfections it does not lend itself to fragmentation into schools. In macro, however, there is one critical externality to do with price rigidity, and so disagreements about policy can easily be mapped into differences about theory. Demand denial is attractive because it gives a non-ideological justification for what is essentially an ideological position about economic policy. Unfortunately, there is a danger that dividing mainstream analysis this way makes macroeconomics look more like a belief system than a science.

Author's note: This column combines a number of recent posts from my blog, mainly macro.

References
Arestis, P (ed) (2007), Is There a New Consensus in Macroeconomics?, London: Palgrave Macmillan.
DeLong, B (2012), "Understanding the Chicago Anti-Stimulus Arguments", blogpost, Grasping Reality with the Invisible Hand, 4 January.
Economist (2012), "Stimulus, austerity and the weltgeist", blogpost, Democracy in America, 1 February.
Goodfriend, M and RG King (1997), "The New Neoclassical Synthesis and the Role of Monetary Policy", in Bernanke, B and J Rotemberg (eds), NBER Macroeconomics Annual, Cambridge, MA: MIT Press, 231-82.
Kirsanova, T, C Leith, and S Wren-Lewis (2009), "Monetary and Fiscal Policy Interaction: The current consensus assignment in the light of recent developments", Economic Journal 119: F482-96.
Portes, J (2012), "Fiscal Policy: What Does 'Keynesian' Mean?", VoxEU.org, 7 February.
Romer, D (1996), Advanced Macroeconomics, New York: McGraw-Hill.
Snowdon, B, H Vane, and P Wynarczyk (1994), A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought, Aldershot, UK and Brookfield, USA: Edward Elgar.

Woodford, M (2009), "Convergence in Macroeconomics: Elements of the New Synthesis", American Economic Journal: Macroeconomics 1(1): 267-79.
Woodford, M (2011), "Simple Analytics of the Government Expenditure Multiplier", American Economic Journal: Macroeconomics 3(1): 1-35.
Wren-Lewis, S (2011), "Lessons from failure: fiscal policy, indulgence and ideology", National Institute Economic Review 217(1): R31-R46.

Fiscal policy: What does Keynesian mean?


Jonathan Portes, 7 February 2012

What does it mean to be a Keynesian? This column argues that, like so much in economics, the label has become politicised. The cost is an impoverished policy debate that is resulting in millions of avoidable job cuts.

I joined the UK Treasury in 1987 and subsequently went to Princeton, where I studied with Rogoff and Campbell. Eventually, I ended up in the Cabinet Office, advising the Prime Minister, on the eve of the 2008 crisis. At no point during this period, however, did I think of myself as a Keynesian. Nor was it really a meaningful question. You might as well have asked a physicist if he was a Newtonian. Keynes was a great figure (indeed, one of the greatest Britons of the 20th century) and you had to understand his insights to understand macroeconomics; but the debate had moved on. The Treasury approach to macroeconomic management throughout this period was that while fiscal policy mattered, it wasn't, for largely pragmatic reasons, sensible to adjust policy in order to manage demand; monetary policy was quicker, more transparent, and less subject to political distortion. The theoretical argument behind this was famously set out in Nigel Lawson's 1984 Mais Lecture. And I fully subscribed to this view. Of course, post-2008, things are rather more complicated. So what could it mean to be a Keynesian? I can think of a number of possible definitions.

Definition 1
Going back to the 1930s, Keynes himself obviously defined himself in opposition to the 'Treasury View' (often equated, perhaps somewhat unfairly, to Say's Law, that supply creates its own demand; see Quiggin 2011 for a historical discussion). The Treasury View argues that fiscal policy cannot, as an accounting identity, affect aggregate demand, because the government needs to get the extra money from somewhere, whether through taxes or borrowing. So a Keynesian is anyone who doesn't believe this identity means that fiscal policy can't affect demand. And this appears to be the definition espoused at one point by John Cochrane (2009) of the University of Chicago, who wrote: "First, if money is not going to be printed, it has to come from somewhere. If the government borrows a dollar from you, that is a dollar that you do not spend, or that you do not lend to a company to spend on new investment. Every dollar of increased government spending must correspond to one less dollar of private spending. Jobs created by stimulus spending are offset by jobs lost from the decline in private spending. We can build roads instead of factories, but fiscal stimulus can't help us to build more of both. This form of crowding out is just accounting, and doesn't rest on any perceptions or behavioural assumptions." As readers will know, this became the subject of a furious row in the econoblogosphere, with Paul Krugman, Brad DeLong, and Simon Wren-Lewis accusing Cochrane of "undergraduate errors" (Wren-Lewis 2012b). Cochrane himself seems to have retreated from this position, as DeLong and others have pointed out (Cochrane 2012 and DeLong 2012). But leaving US academic disputes aside, obviously in this sense I am a Keynesian - hence the title of my own blog! But then so is everybody else, including today's Treasury. Nobody, and I mean nobody, really believes that it is impossible by definition for fiscal policy to affect aggregate demand.

Definition 2
A more plausible, and traditional, definition is to say that a Keynesian is someone who believes that, as an empirical matter, fiscal policy does have a substantial impact on aggregate demand; in contrast to those who believe that Ricardian equivalence means that changes to government spending and borrowing will be substantially or wholly offset by changes to private sector spending and saving. More recently, the doctrine of 'expansionary fiscal contraction' went even further, and argued that tightening fiscal policy could, through exchange rate and confidence effects, actually increase demand and growth; a paper by Alesina and Ardagna (2009) was particularly influential in this respect, and even (tentatively and briefly) influenced the UK Treasury, which argued in the 2010 Emergency Budget that: "These [the wider effects of fiscal consolidation] will tend to boost demand growth, could improve the underlying performance of the economy and could even be sufficiently strong to outweigh the negative effects" (HM Treasury 2010). The Treasury has not, as far as I am aware, repeated this argument, since the evidence shows precisely the opposite. The original paper has been widely questioned, debunked by further IMF research, and (more importantly) experience has hardly verified its claims. The conventional wisdom now is very much that of the IMF, which by October 2010 had already concluded that: "Fiscal consolidation typically lowers growth in the short term. Using a new data set, we find that after two years, a budget deficit cut of 1% of GDP tends to lower output by about 0.5% and raise the unemployment rate by one third of a percentage point." This result was later formalised in Leigh et al (2010). The IMF has if anything strengthened its views since, with the current Chief Economist, Olivier Blanchard, saying recently: "[fiscal consolidation] is clearly a drag on demand, it is a drag on growth" (Blanchard 2012).
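The IMF rule of thumb quoted above lends itself to a back-of-the-envelope calculation. The sketch below simply scales the reported two-year estimates linearly; the function name and the assumption of linear scaling are illustrative conveniences, not part of the IMF study.

```python
# Back-of-the-envelope application of the IMF (2010) estimate quoted above:
# a fiscal consolidation of 1% of GDP lowers output by about 0.5% and raises
# unemployment by about one third of a percentage point after two years.
# Linear scaling and the function name are illustrative assumptions.

def consolidation_effects(deficit_cut_pct_gdp):
    """Return (output_loss_pct, unemployment_rise_pp) after roughly two years."""
    output_loss = 0.5 * deficit_cut_pct_gdp
    unemployment_rise = (1.0 / 3.0) * deficit_cut_pct_gdp
    return output_loss, unemployment_rise

# A hypothetical consolidation of 2% of GDP:
loss, u_rise = consolidation_effects(2.0)
# loss = 1.0 (% of output); u_rise = about 0.67 percentage points
```

The point of the arithmetic is only that the estimated effects are far from the zero implied by expansionary fiscal contraction.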
So I'm a Keynesian on this definition, but then so too are the Managing Director and Chief Economist of the IMF. And so indeed are the Treasury, the Bank of England, and the Office for Budget Responsibility; their models incorporate multipliers, and I do not think that senior officials at any of these institutions would deny for a minute that fiscal consolidation has, in practice, had a negative impact on growth in the UK. For example, the Monetary Policy Committee said in November 2011: "Growth had been weak throughout the past year, reflecting a fall in real household incomes, persistently tight credit conditions and the effects of the continuing fiscal consolidation."

Definition 3
So under Definitions 1 and 2 I'm a Keynesian, but then so is pretty much everyone else whom one would take seriously. The final definition of a Keynesian, then, appears to be a much more political one: someone who thinks that slowing fiscal consolidation would be a sensible policy decision in the current UK (or US) economic context. But this definition seems to me to be misconceived, for two reasons. First, if Keynesian means anything, it must surely have a more general significance than indicating one's position on a particular policy choice in a particular country at a particular time. Surely it should indicate a philosophy, a theoretical view, or at least a view of what the empirical evidence means? Perhaps more importantly, it is quite clear that, now that the 'expansionary fiscal contraction' hypothesis has been discredited, the main argument between those of us who favour slowing fiscal consolidation in the UK and those who think that this would be a dangerous mistake is not about whether the direct impact would be positive. It is whether the price of this direct positive impact would be a loss of credibility with financial markets, and hence a damaging rise in long-term interest rates that would more than offset the gains. I think this risk is hugely exaggerated, while the damaging social and economic consequences of inaction are correspondingly not recognised (see my previous articles, Portes 2011a and 2011b), but the point here is not who's right, but that this debate really has nothing to do with Keynes at all. It's about a lot of things: how policymakers should deal with potential market irrationality, the role of the credit rating agencies, multiple equilibria, etc. But I don't see that taking one side or the other of these arguments makes you a Keynesian (or not).
Finally, and returning to what I originally learned at the Treasury, there still remains the view that if we think demand is too low, then the right response is always through monetary rather than fiscal policy. Again, there is a vigorous debate among blogging economists on this topic (see Economist 2012 for an introduction to the debate). And here my perspective has indeed changed; I no longer subscribe to the Treasury View of the last two decades, described above, that fiscal policy never has any role to play in demand management, even though I don't think it should be the tool of first resort. (See the excellent discussion in Simon Wren-Lewis 2012a, especially the penultimate paragraph).

But just as this approach was motivated by pragmatism more than theory (monetary policy was better suited to the task), my change of mind is similarly motivated. If monetary policy alone were indeed enough in practice, we wouldn't be where we are now, with unemployment in the UK a million higher than the official estimate of the natural rate, and no prospect of it coming down in the immediate future. As I have argued previously (Portes 2012), any demand management policy that delivers that outcome is not one that policymakers should regard as remotely adequate. So my views have indeed changed; not, I would argue, ideologically, but in recognition of the fact that life, and macroeconomics, is considerably more complicated than we thought. Again, this view is shared by Blanchard, who argues: "We've entered a brave new world in the wake of the crisis; a very different world in terms of policy making and we just have to accept it. ... Macroeconomic policy [specifically fiscal and monetary policy] has many targets and many instruments." This pragmatic and questioning but evidence-based approach to macroeconomic policy is one I share. If he were here, I imagine Keynes would too.

References
Alesina, Alberto F and Silvia Ardagna (2009), "Large Changes in Fiscal Policy: Taxes Versus Spending", NBER Working Paper No. 15438, October.
Blanchard, O (2011), "The future of macroeconomic policy", blogpost, March.
Blanchard, O (2012), "Driving the Global Economy with the brakes on", blogpost, January.
Cochrane, J (2009), "Fiscal Stimulus, Fiscal Inflation, or Fiscal Fallacies?", University of Chicago webpage, version 2.5, 27 February.
Cochrane, J (2012), "Stimulus and etiquette", blogpost, January.
DeLong, B (2012), "John Cochrane says John Cochrane used to be a bullshit artist", blogpost, January.
Economist (2012), "The zero lower bound in our minds", 7 January.
Guajardo, J, D Leigh, and A Pescatori (2010), "Expansionary Austerity: New International Evidence", IMF Working Paper 11/158, Research Department, International Monetary Fund.
HM Treasury (2010), Emergency Budget.
Lawson, N (1984), Mais Lecture.

Leigh, D, P Devries, C Freedman, J Guajardo, D Laxton, and A Pescatori (2010), "Will it hurt? Macroeconomic effects of fiscal consolidation", World Economic Outlook, October, International Monetary Fund.
Monetary Policy Committee (2011), Minutes, Bank of England.
Portes, J (2011a), "The Coalition's Confidence Trick", New Statesman, August.
Portes, J (2011b), "Against Austerity", Spectator, October.
Portes, J (2012), "The largest and longest unemployment gap since World War 2", blogpost, January.
Quiggin, J (2011), "Blogging the Zombies: Expansionary Austerity - Birth", blogpost, November.
Wren-Lewis, S (2012a), "Mistakes and ideology in macroeconomics", blogpost, 10 January.
Wren-Lewis, S (2012b), "The return of Schools-of-thought macro", blogpost, 27 January.

The Long and Short of It: The Impact of Unemployment Duration on Compensation Growth

M. Henry Linder, Richard Peach, and Robert Rich

How tight is the labor market? The unemployment rate is down substantially from its October 2009 peak, but two-thirds of the decline is due to people dropping out of the labor force. In addition, an unusually large share of the unemployed has been out of work for twenty-seven weeks or more: the long-duration unemployed. These statistics suggest that there remains a great deal of slack in U.S. labor markets, which should be putting downward pressure on labor compensation. Instead, compensation growth has moved modestly higher since 2009. A potential explanation is that the long-duration unemployed exert less influence on wages than the short-duration unemployed, a hypothesis we examine here. While preliminary, our findings provide some support for this hypothesis and show that models taking into account unemployment duration produce more accurate forecasts of compensation growth.

The hypothesis that individuals who are unemployed for long durations have less impact on the behavior of wages than the recently unemployed is not new. Insider-outsider models make this prediction, and a paper by Ricardo Llaudes finds strong support for this proposed explanation in data for European countries. What is new is the relevance of this hypothesis for movements in wage rates in the United States. In particular, conventional models, such as Phillips curve models, have generally underpredicted compensation growth since 2009. These models typically rely on the total unemployment rate as the measure of labor market tightness. If the long-duration unemployment rate has limited impact on the compensation growth process, then its relatively large share in the unemployment rate in recent years could account for the underprediction of standard Phillips curve models.

The chart below plots the total, long-duration, and short-duration unemployment rates, with the division between short- and long-duration unemployment defined, respectively, by unemployment spells of 26 weeks or less and 27 weeks or more. Until the past few years, the U.S. experience has been that most fluctuations in the total unemployment rate were driven by the short-duration unemployment rate. The average of the long-duration unemployment rate was only 1.0 percent from 1960:Q1 to 2007:Q4, with deviations around the average fairly muted and short-lived. However, the long-duration unemployment rate rose to over 4.0 percent during 2009-10, and by December 2013 it had moved down only to 2.6 percent.

So, is it important to distinguish between short-duration and long-duration unemployment in the United States? In a recent study, Robert Gordon of Northwestern University uses a Phillips curve model to examine the behavior of price inflation from the early 1960s through early 2013. His findings indicate that short-duration unemployment has a much greater impact on price inflation than does long-duration unemployment. Further, out-of-sample forecasts using short-duration unemployment track price inflation much more closely than those based on the total unemployment rate, especially during the post-2008 period. Our analysis complements the recent work of Gordon and offers some evidence on the robustness of his results by looking at compensation growth, as well as by specifying a different Phillips curve model and by examining a shorter sample period that runs from 1997 through the present.

In a conventional compensation Phillips curve model, the indicator of resource utilization is the unemployment gap, measured as the difference between the unemployment rate and the non-accelerating inflation rate of unemployment (NAIRU). Conceptually, if the economy were operating with the unemployment rate at NAIRU, inflation would not have a tendency to either increase or decrease. Positive values of the unemployment gap indicate excess supply conditions in the labor market, which should put downward pressure on compensation growth (and vice versa).

We compare forecasts of compensation growth using two alternative measures of the unemployment gap: one based on the total unemployment rate and another based on the short-duration unemployment rate. The total unemployment gap measure is the difference, measured in percentage points, between the total unemployment rate and the Congressional Budget Office (CBO) estimate of NAIRU. The short-duration unemployment gap is constructed as the difference between the short-duration unemployment rate and our estimate of the short-duration NAIRU. The latter is the CBO estimate of NAIRU less the average of the long-duration unemployment rate calculated over the 1997-2007 period.
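The construction of the two gap measures reduces to a few lines of arithmetic. The sketch below follows the definitions in the text; all numerical inputs are invented placeholders (the post uses the CBO NAIRU estimate and BLS unemployment rates by duration), and the function name is illustrative.

```python
# Illustrative construction of the two unemployment-gap measures described
# in the post. All numbers here are made-up placeholders, not the actual
# CBO/BLS series.

def gaps(total_u, short_u, long_u_hist, cbo_nairu):
    """Return (total_gap, short_duration_gap), in percentage points."""
    # Total gap: total unemployment rate minus the CBO NAIRU estimate.
    total_gap = total_u - cbo_nairu
    # Short-duration NAIRU: CBO NAIRU less the historical (1997-2007 in the
    # post) average of the long-duration unemployment rate.
    short_nairu = cbo_nairu - sum(long_u_hist) / len(long_u_hist)
    short_gap = short_u - short_nairu
    return total_gap, short_gap

# Hypothetical late-2013-style inputs: total unemployment 6.7%, short-duration
# unemployment 4.1%, CBO NAIRU 5.5%, historical long-duration rates
# averaging 1.0%.
total_gap, short_gap = gaps(6.7, 4.1, [0.9, 1.0, 1.1], 5.5)
# total_gap is about 1.2 points (ample slack); short_gap about -0.4 (essentially none)
```

With these invented inputs the two measures tell exactly the conflicting story the post describes: the total gap signals substantial slack while the short-duration gap signals none.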

As shown in the chart below, there had been a very close correspondence between the two unemployment gaps until the onset of the most recent recession. Currently, the total unemployment gap indicates a large amount of slack in the labor market, while the short-duration unemployment gap indicates little, if any, slack.

Following an earlier post on compensation growth, we specify a nonlinear compensation Phillips curve model (see this paper by Fuhrer, Olivei, and Tootell for a discussion of modeling the nonlinearity). The model relates the four-quarter growth rate in compensation per hour in the nonfarm business sector, relative to trend productivity growth and long-run inflation expectations, to resource utilization. For trend productivity growth, we use a twelve-quarter moving average of the (annualized) quarterly growth rate of productivity. For expected inflation, we construct a measure for the personal consumption expenditure index (PCE) by adjusting the Survey of Professional Forecasters' ten-year expected CPI inflation series to account for the usual differential between CPI and PCE inflation. As in our previous post, we focus on the post-1997 period because it represents a low-inflation environment, based on the level and stability of the expected inflation series, and because we believe the nonlinearity may be especially relevant in such an environment.
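The two adjustment series described above are simple transformations. A minimal sketch, with invented inputs and an assumed CPI-PCE differential of 0.4 percentage points (the post does not report the exact figure it uses):

```python
# Sketch of the two adjustment series described in the post, with invented
# inputs. The 0.4-point CPI-PCE differential below is an illustrative
# assumption, not the value used by the authors.

def trend_productivity(quarterly_growth):
    """Twelve-quarter trailing moving average of annualized quarterly
    productivity growth rates."""
    window = quarterly_growth[-12:]
    return sum(window) / len(window)

def expected_pce_inflation(spf_cpi_10yr, cpi_pce_differential=0.4):
    """SPF ten-year expected CPI inflation, adjusted down by the usual
    CPI-PCE differential to proxy expected PCE inflation."""
    return spf_cpi_10yr - cpi_pce_differential

growth = [2.0] * 6 + [1.0] * 6          # hypothetical productivity data
trend = trend_productivity(growth)       # averages to 1.5
pce_exp = expected_pce_inflation(2.5)    # about 2.1 with the assumed differential
```

Compensation growth net of `trend` and `pce_exp` is then the left-hand-side variable that the unemployment gap is asked to explain.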

We examine both the within-sample fit and out-of-sample forecasts of the models to evaluate the alternative unemployment gap measures. The out-of-sample forecast performance is based on estimation of the model using data through 2007:Q4. With the resulting estimated model, we input the actual values of the unemployment gap, trend productivity growth, and expected inflation series for the post-2007:Q4 period to generate forecasts of compensation growth. The first forecast corresponds to compensation growth from 2008:Q1 to 2009:Q1.

The next chart plots the four-quarter change in compensation growth, the within-sample fit of the models through 2007:Q4, and the post-2007:Q4 out-of-sample forecasts.

Not surprisingly, the within-sample fit of the two models is very similar, due to the two unemployment gap measures closely tracking each other during this period. The out-of-sample forecasts, however, reveal the different implications of the two unemployment gap measures. While the compensation growth series displays some volatility and both models missed the initial slowing and subsequent rebound in the series, the forecast using the short-duration unemployment gap does a better job tracking the subsequent movements in compensation growth and is about 10 percent more accurate than the forecast that ignores the duration of unemployment. The graph also illustrates that the forecast using the total unemployment gap has consistently underpredicted compensation growth, a feature shared by price-inflation Phillips curve models. Although they are not shown, we obtain similar results if we start the out-of-sample forecasts in 2004:Q4.
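An accuracy comparison of this kind amounts to comparing the root-mean-squared errors of the two out-of-sample forecasts against actual compensation growth. A sketch with invented placeholder series (the post does not publish the underlying numbers):

```python
# Comparing the accuracy of two forecast series by root-mean-squared error.
# The three series below are invented placeholders, not the data behind the
# chart in the post.

def rmse(actual, forecast):
    sq_errors = [(a - f) ** 2 for a, f in zip(actual, forecast)]
    return (sum(sq_errors) / len(sq_errors)) ** 0.5

actual       = [2.0, 1.5, 2.5, 3.0]   # four-quarter compensation growth
total_gap_fc = [1.0, 0.5, 1.5, 2.0]   # consistently underpredicts
short_gap_fc = [1.8, 1.9, 2.3, 2.8]   # tracks the actuals more closely

# Fraction by which the short-duration-gap forecast beats the total-gap one:
improvement = 1 - rmse(actual, short_gap_fc) / rmse(actual, total_gap_fc)
# improvement > 0 means the short-duration-gap forecast is more accurate
```

In the post's data this kind of comparison yields roughly a 10 percent accuracy gain for the short-duration-gap model; here the placeholder series merely illustrate the computation.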

Our results raise an interesting question: why has the distinction between short-duration and long-duration unemployment in the United States previously received so little attention? One answer is that the close correspondence between the total unemployment rate and the short-duration unemployment rate has masked the importance of the latter variable. If movements in the unemployment rate are largely driven by the short-duration unemployment rate, then the unemployment gap is a suitable proxy for measuring slack in the labor market, even if the appropriate measure is the short-duration unemployment gap. It is only since the last recession and its aftermath, when the composition of the total unemployment rate deviated from its historical pattern, that we can observe the differential effects of unemployment duration on compensation growth.

Disclaimer: The views expressed in this post are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.

Blogs review: The employment to population ratio - does the drop in unemployment represent a genuine improvement in the US labor market?
by Jérémie Cohen-Setton on 17th February 2014

What's at stake: The drop and lack of recovery in the employment to population (E/P) ratio has raised questions as to whether the drop in the unemployment rate represents a genuine improvement in the US labor market. A recent study by the NY Fed, which argued that the demographically adjusted E/P ratio was just 0.7 points below trend, has received considerable attention in the blogosphere and among Fed watchers as it corroborates the picture of a recovering labor market.

Assessing the health of the US labor market


Matthew Klein writes that in an effort to take into account the fact that many people stopped looking for work right about the time the economy tanked and have kept dropping out of the labor force throughout the gradual economic recovery, analysts have used other measures than the unemployment rate to gauge the health of the labor market. Sober Look writes that measures such as the employment-population ratio have completely diverged from the "headline" unemployment rate. Real Time Economics reports that the employment-to-population ratio, which measures the proportion of the adult population with jobs, has dropped 4.1 percentage points from its average since the start of the 2008-2009 recession, and has stayed pretty steady since. The drop and lack of recovery has led to questions about whether the labor market is doing as well as the current 6.7% unemployment rate suggests.
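The mechanical reason the two gauges can diverge is worth spelling out: a worker who stops searching leaves the labor force, which pulls the unemployment rate down while leaving employment, and hence the E/P ratio, unchanged. The stylized numbers below are invented for illustration, not BLS data.

```python
# Stylized labor market accounting (all figures invented).
population = 1000        # adult population
employed = 587
unemployed = 67          # jobless and still searching

labor_force = employed + unemployed
u_rate = 100 * unemployed / labor_force   # unemployment rate
ep = 100 * employed / population          # employment-to-population ratio
print(round(u_rate, 1), round(ep, 1))     # -> 10.2 58.7

# Now 20 job-seekers give up and drop out of the labor force:
unemployed -= 20
labor_force = employed + unemployed
# Unemployment rate falls with no improvement in employment; E/P is flat.
print(round(100 * unemployed / labor_force, 1),
      round(100 * employed / population, 1))   # -> 7.4 58.7
```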

Source: Sober Look

Thomas Klitgaard and Richard Peck write that roughly half of the difference in the unemployment rates between the US and the euro area is due to divergent trends in labor force participation over the last three years. The drop in labor force participation in the US has accentuated the fall in the U.S. unemployment rate and widened the gap between the unemployment rates of the US and the euro area. If the euro area had seen a drop in the labor force participation rate proportional to the decline experienced by the United States, its unemployment rate would be about 9.5 percent, below where it was at the beginning of the sovereign debt crisis.

A misleading indicator: demographic factors


Samuel Kapon and Joseph Tracy of the NY Fed write that the E/P ratio is a misleading indicator for the degree of the labor market recovery because it doesn't take into account demographic changes that would have impacted the ratio independently of the recent recession. The effect of population aging on the E/P ratio depends on the distribution of individuals in the economy across the rising and falling sections of their career employment rate profiles. The authors use 280 estimated career employment rate profiles to create a demographically adjusted E/P ratio.

Source: NY Fed

Samuel Kapon and Joseph Tracy write that the result indicates that although the actual E/P ratio has not changed since the end of the recession, the E/P gap, defined as the difference between the two lines, is actually closing due to the decline in the demographically adjusted E/P ratio. That is, a relatively constant E/P ratio since the end of the recession represents improvement in the labor market relative to an underlying demographically adjusted E/P ratio that is declining. John Robertson of the Atlanta Fed writes that a simple, and admittedly crude, alternative to computing the demographically adjusted employment-to-population ratio trend is to look at a segment of the population that is on a relatively flat part of the employment (or participation) rate curve. A common standard for this is the so-called prime-aged population (people aged 25 to 54). These individuals are less likely to be making retirement decisions than older individuals and are less likely to be making schooling decisions than younger people. The prime-aged employment-to-population ratio declined almost 5 percentage points between the end of 2007 and 2009 (versus 4 percentage points overall) and since then has recovered about 25 percent of that decline.
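The demographic adjustment can be sketched with a toy version of the cohort-weighting idea: hold each group's "normal" employment rate fixed and let only the population shares shift. Three invented age groups stand in for the 280 estimated profiles, and every rate and share below is an assumption, not a figure from the study.

```python
import numpy as np

# Career-profile ("normal") employment rates for coarse age groups.
# All numbers are assumed for illustration.
ages = ["16-24", "25-54", "55+"]
normal_rate = np.array([0.45, 0.80, 0.38])

weights_2007 = np.array([0.15, 0.55, 0.30])   # population shares, pre-recession
weights_2013 = np.array([0.14, 0.50, 0.36])   # older population later on

# Demographically adjusted E/P: fixed rates, shifting weights.
adj_ep_2007 = normal_rate @ weights_2007
adj_ep_2013 = normal_rate @ weights_2013
print(round(100 * adj_ep_2007, 1), round(100 * adj_ep_2013, 1))
```

The adjusted trend declines purely because the population aged, so a flat actual E/P ratio over the same period implies a closing gap, which is the post's central point.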

A misleading indicator: baseline factors


Paul Krugman writes that the way the New York Fed researchers present their finding is likely to mislead because it mixes two propositions together. One, which is clearly true, is that the aging of the adult population would have meant a considerable decline in the employment-population ratio over the past 7 years even if the economy had remained near full employment. The other, which is far from obvious, is that the economy was highly overheated in late 2007, with employment far above sustainable levels. Samuel Kapon and Joseph Tracy adopt the normalization that over the thirty-one years in our data sample (1982-2013) any business-cycle deviations between the actual and the adjusted E/P ratios will average to zero. With this normalization, the authors argue that the E/P ratio was 1.6 percentage points above the demographically adjusted E/P ratio just prior to the onset
of the recession and is now only 0.7 points below trend. Pat Higgins of the Atlanta Fed writes that this methodology seems reasonable since one might typically expect business cycle effects to average out over 30 years. However, the 1982-2013 sample period is somewhat unusual in that the unemployment rate was elevated at both the starting and ending points. Paul Krugman writes that the dramatic-sounding result that we don't have much labor market slack is due to their normalization. Just doing the demographic correction reduces the employment gap but it's still big unless you accept the idea that the U.S. was a highly overheated economy on the edge of major inflation before the crisis. Pat Higgins writes that if one were to normalize the Kapon and Tracy E/P trend so that its average value was equal to CBO's trend, then the November 2013 E/P gap is about 1.5 percentage points.
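The role of the normalization can be seen in a small simulation. The series below are synthetic, and the level of the demographic trend is deliberately left arbitrary: the adjustment identifies the trend's shape, while the normalization pins down its level, so a different anchor (such as matching a CBO-style trend) shifts every gap estimate by the same constant.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sample: 128 quarters standing in for 1982-2013.
T = 128
trend_shape = np.linspace(62, 59, T)   # declining demographic trend, level unknown
actual = trend_shape + rng.normal(0, 1.0, T)

# Kapon-Tracy-style normalization: shift the trend so that cyclical
# deviations average to zero over the full sample.
shift = np.mean(actual - trend_shape)
trend_kt = trend_shape + shift
gap = actual - trend_kt
print(abs(gap.mean()) < 1e-9)   # zero mean gap, by construction
```

Because the 1982-2013 endpoints both had elevated unemployment, a zero-mean-gap anchor over that window mechanically labels the sample average as "normal", which is exactly Higgins's and Krugman's objection.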

Remarks by Governor Ben S. Bernanke


Before the National Economists Club, Washington, D.C. November 21, 2002

Deflation: Making Sure "It" Doesn't Happen Here


Since World War II, inflation--the apparently inexorable rise in the prices of goods and services--has been the bane of central bankers. Economists of various stripes have argued that inflation is the inevitable result of (pick your favorite) the abandonment of metallic monetary standards, a lack of fiscal discipline, shocks to the price of oil and other commodities, struggles over the distribution of income, excessive money creation, self-confirming inflation expectations, an "inflation bias" in the policies of central banks, and still others. Despite widespread "inflation pessimism," however, during the 1980s and 1990s most industrial-country central banks were able to cage, if not entirely tame, the inflation dragon. Although a number of factors converged to make this happy outcome possible, an essential element was the heightened understanding by central bankers and, equally as important, by political leaders and the public at large of the very high costs of allowing the economy to stray too far from price stability.

With inflation rates now quite low in the United States, however, some have expressed concern that we may soon face a new problem--the danger of deflation, or falling prices. That this concern is not purely hypothetical is brought home to us whenever we read newspaper reports about Japan, where what seems to be a relatively moderate deflation--a decline in consumer prices of about 1 percent per year--has been associated with years of painfully slow growth, rising joblessness, and apparently intractable financial problems in the banking and corporate sectors. While it is difficult to sort out cause from effect, the consensus view is that deflation has been an important negative factor in the Japanese slump.

So, is deflation a threat to the economic health of the United States? Not to leave you in suspense, I believe that the chance of significant deflation in the United States in the foreseeable future is extremely small, for two principal reasons.
The first is the resilience and structural stability of the U.S. economy itself. Over the years, the U.S. economy has shown a remarkable ability to absorb shocks of all kinds, to recover, and to continue to grow. Flexible and efficient markets for labor and capital, an entrepreneurial tradition, and a general willingness to tolerate and even embrace technological and economic change all contribute to this resiliency. A particularly important protective factor in the current environment is the strength of our financial system: Despite the adverse shocks of the past year, our banking system remains healthy and well-regulated, and firm and household balance sheets are for the most part in good shape. Also helpful is that inflation has recently been not only low but quite stable, with one result being that inflation expectations seem well anchored. For example, according to the University of Michigan survey that underlies the index of consumer sentiment, the median expected rate of inflation during the next five to ten years among those interviewed was 2.9 percent in October 2002, as compared with
2.7 percent a year earlier and 3.0 percent two years earlier--a stable record indeed.

The second bulwark against deflation in the United States, and the one that will be the focus of my remarks today, is the Federal Reserve System itself. The Congress has given the Fed the responsibility of preserving price stability (among other objectives), which most definitely implies avoiding deflation as well as inflation. I am confident that the Fed would take whatever means necessary to prevent significant deflation in the United States and, moreover, that the U.S. central bank, in cooperation with other parts of the government as needed, has sufficient policy instruments to ensure that any deflation that might occur would be both mild and brief. Of course, we must take care lest confidence become over-confidence. Deflationary episodes are rare, and generalization about them is difficult. Indeed, a recent Federal Reserve study of the Japanese experience concluded that the deflation there was almost entirely unexpected, by both foreign and Japanese observers alike (Ahearne et al., 2002). So, having said that deflation in the United States is highly unlikely, I would be imprudent to rule out the possibility altogether. Accordingly, I want to turn to a further exploration of the causes of deflation, its economic effects, and the policy instruments that can be deployed against it. Before going further I should say that my comments today reflect my own views only and are not necessarily those of my colleagues on the Board of Governors or the Federal Open Market Committee.

Deflation: Its Causes and Effects

Deflation is defined as a general decline in prices, with emphasis on the word "general." At any given time, especially in a low-inflation economy like that of our recent experience, prices of some goods and services will be falling.
Price declines in a specific sector may occur because productivity is rising and costs are falling more quickly in that sector than elsewhere or because the demand for the output of that sector is weak relative to the demand for other goods and services. Sector-specific price declines, uncomfortable as they may be for producers in that sector, are generally not a problem for the economy as a whole and do not constitute deflation. Deflation per se occurs only when price declines are so widespread that broad-based indexes of prices, such as the consumer price index, register ongoing declines. The sources of deflation are not a mystery. Deflation is in almost all cases a side effect of a collapse of aggregate demand--a drop in spending so severe that producers must cut prices on an ongoing basis in order to find buyers.1 Likewise, the economic effects of a deflationary episode, for the most part, are similar to those of any other sharp decline in aggregate spending--namely, recession, rising unemployment, and financial stress. However, a deflationary recession may differ in one respect from "normal" recessions in which the inflation rate is at least modestly positive: Deflation of sufficient magnitude may result in the nominal interest rate declining to zero or very close to zero.2 Once the nominal interest rate is at zero, no further downward adjustment in the rate can occur, since lenders generally will not accept a negative nominal interest rate when it is possible instead to hold cash. At this point, the nominal interest rate is said to have hit the "zero bound." Deflation great enough to bring the nominal interest rate close to zero poses special problems for the economy and for policy. First, when the nominal interest rate has been reduced to zero, the real interest rate paid by borrowers equals the expected rate of
deflation, however large that may be.3 To take what might seem like an extreme example (though in fact it occurred in the United States in the early 1930s), suppose that deflation is proceeding at a clip of 10 percent per year. Then someone who borrows for a year at a nominal interest rate of zero actually faces a 10 percent real cost of funds, as the loan must be repaid in dollars whose purchasing power is 10 percent greater than that of the dollars borrowed originally. In a period of sufficiently severe deflation, the real cost of borrowing becomes prohibitive. Capital investment, purchases of new homes, and other types of spending decline accordingly, worsening the economic downturn. Although deflation and the zero bound on nominal interest rates create a significant problem for those seeking to borrow, they impose an even greater burden on households and firms that had accumulated substantial debt before the onset of the deflation. This burden arises because, even if debtors are able to refinance their existing obligations at low nominal interest rates, with prices falling they must still repay the principal in dollars of increasing (perhaps rapidly increasing) real value. When William Jennings Bryan made his famous "cross of gold" speech in his 1896 presidential campaign, he was speaking on behalf of heavily mortgaged farmers whose debt burdens were growing ever larger in real terms, the result of a sustained deflation that followed America's post-Civil-War return to the gold standard.4 The financial distress of debtors can, in turn, increase the fragility of the nation's financial system--for example, by leading to a rapid increase in the share of bank loans that are delinquent or in default. Japan in recent years has certainly faced the problem of "debt-deflation"--the deflation-induced, ever-increasing real value of debts.
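The borrowing-cost arithmetic in Bernanke's example is just the Fisher relation. A quick check (the 10 percent figure is his illustrative one, not data):

```python
# Real borrowing cost at the zero bound under 10 percent deflation:
# r = i - pi is the usual approximation; r = (1 + i)/(1 + pi) - 1 is exact.
i, pi = 0.0, -0.10   # nominal rate, inflation (negative = deflation)
r_approx = i - pi
r_exact = (1 + i) / (1 + pi) - 1
print(round(100 * r_approx, 1), round(100 * r_exact, 1))  # -> 10.0 11.1
```

The exact figure slightly exceeds 10 percent because the dollars repaid have gained 1/0.9 times their original purchasing power; the speech's approximation is the standard one for small rates.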
Closer to home, massive financial problems, including defaults, bankruptcies, and bank failures, were endemic in America's worst encounter with deflation, in the years 1930-33--a period in which (as I mentioned) the U.S. price level fell about 10 percent per year. Beyond its adverse effects in financial markets and on borrowers, the zero bound on the nominal interest rate raises another concern--the limitation that it places on conventional monetary policy. Under normal conditions, the Fed and most other central banks implement policy by setting a target for a short-term interest rate--the overnight federal funds rate in the United States--and enforcing that target by buying and selling securities in open capital markets. When the short-term interest rate hits zero, the central bank can no longer ease policy by lowering its usual interest-rate target.5 Because central banks conventionally conduct monetary policy by manipulating the short-term nominal interest rate, some observers have concluded that when that key rate stands at or near zero, the central bank has "run out of ammunition"--that is, it no longer has the power to expand aggregate demand and hence economic activity. It is true that once the policy rate has been driven down to zero, a central bank can no longer use its traditional means of stimulating aggregate demand and thus will be operating in less familiar territory. The central bank's inability to use its traditional methods may complicate the policymaking process and introduce uncertainty in the size and timing of the economy's response to policy actions. Hence I agree that the situation is one to be avoided if possible. However, a principal message of my talk today is that a central bank whose accustomed policy rate has been forced down to zero has most definitely not run out of ammunition.
As I will discuss, a central bank, either alone or in cooperation with other parts of the government, retains considerable power to expand aggregate demand and economic activity even when its accustomed policy rate is at zero. In the remainder of my talk, I will first discuss measures for preventing deflation--the preferable option if feasible. I will then turn
to policy measures that the Fed and other government authorities can take if prevention efforts fail and deflation appears to be gaining a foothold in the economy.

Preventing Deflation

As I have already emphasized, deflation is generally the result of low and falling aggregate demand. The basic prescription for preventing deflation is therefore straightforward, at least in principle: Use monetary and fiscal policy as needed to support aggregate spending, in a manner as nearly consistent as possible with full utilization of economic resources and low and stable inflation. In other words, the best way to get out of trouble is not to get into it in the first place. Beyond this commonsense injunction, however, there are several measures that the Fed (or any central bank) can take to reduce the risk of falling into deflation. First, the Fed should try to preserve a buffer zone for the inflation rate, that is, during normal times it should not try to push inflation down all the way to zero.6 Most central banks seem to understand the need for a buffer zone. For example, central banks with explicit inflation targets almost invariably set their target for inflation above zero, generally between 1 and 3 percent per year. Maintaining an inflation buffer zone reduces the risk that a large, unanticipated drop in aggregate demand will drive the economy far enough into deflationary territory to lower the nominal interest rate to zero. Of course, this benefit of having a buffer zone for inflation must be weighed against the costs associated with allowing a higher inflation rate in normal times. Second, the Fed should take most seriously--as of course it does--its responsibility to ensure financial stability in the economy. Irving Fisher (1933) was perhaps the first economist to emphasize the potential connections between violent financial crises, which lead to "fire sales" of assets and falling asset prices, with general declines in aggregate demand and the price level.
A healthy, well capitalized banking system and smoothly functioning capital markets are an important line of defense against deflationary shocks. The Fed should and does use its regulatory and supervisory powers to ensure that the financial system will remain resilient if financial conditions change rapidly. And at times of extreme threat to financial stability, the Federal Reserve stands ready to use the discount window and other tools to protect the financial system, as it did during the 1987 stock market crash and the September 11, 2001, terrorist attacks. Third, as suggested by a number of studies, when inflation is already low and the fundamentals of the economy suddenly deteriorate, the central bank should act more preemptively and more aggressively than usual in cutting rates (Orphanides and Wieland, 2000; Reifschneider and Williams, 2000; Ahearne et al., 2002). By moving decisively and early, the Fed may be able to prevent the economy from slipping into deflation, with the special problems that entails. As I have indicated, I believe that the combination of strong economic fundamentals and policymakers that are attentive to downside as well as upside risks to inflation make significant deflation in the United States in the foreseeable future quite unlikely. But suppose that, despite all precautions, deflation were to take hold in the U.S. economy and, moreover, that the Fed's policy instrument--the federal funds rate--were to fall to zero. What then? In the remainder of my talk I will discuss some possible options for stopping a deflation once it has gotten under way. I should emphasize that my comments on this topic are necessarily speculative, as the modern Federal Reserve has never faced this situation nor has it pre-committed itself formally to any specific course of action should deflation arise.
Furthermore, the specific responses the Fed would undertake would presumably depend on a number of factors, including its assessment of the whole range of risks to the economy and any complementary policies being undertaken by other parts of the U.S. government.7

Curing Deflation

Let me start with some general observations about monetary policy at the zero bound, sweeping under the rug for the moment some technical and operational issues. As I have mentioned, some observers have concluded that when the central bank's policy rate falls to zero--its practical minimum--monetary policy loses its ability to further stimulate aggregate demand and the economy. At a broad conceptual level, and in my view in practice as well, this conclusion is clearly mistaken. Indeed, under a fiat (that is, paper) money system, a government (in practice, the central bank in cooperation with other agencies) should always be able to generate increased nominal spending and inflation, even when the short-term nominal interest rate is at zero. The conclusion that deflation is always reversible under a fiat money system follows from basic economic reasoning. A little parable may prove useful: Today an ounce of gold sells for $300, more or less. Now suppose that a modern alchemist solves his subject's oldest problem by finding a way to produce unlimited amounts of new gold at essentially no cost. Moreover, his invention is widely publicized and scientifically verified, and he announces his intention to begin massive production of gold within days. What would happen to the price of gold? Presumably, the potentially unlimited supply of cheap gold would cause the market price of gold to plummet. Indeed, if the market for gold is to any degree efficient, the price of gold would collapse immediately after the announcement of the invention, before the alchemist had produced and marketed a single ounce of yellow metal. What has this got to do with monetary policy? Like gold, U.S.
dollars have value only to the extent that they are strictly limited in supply. But the U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost. By increasing the number of U.S. dollars in circulation, or even by credibly threatening to do so, the U.S. government can also reduce the value of a dollar in terms of goods and services, which is equivalent to raising the prices in dollars of those goods and services. We conclude that, under a paper-money system, a determined government can always generate higher spending and hence positive inflation. Of course, the U.S. government is not going to print money and distribute it willy-nilly (although as we will see later, there are practical policies that approximate this behavior).8 Normally, money is injected into the economy through asset purchases by the Federal Reserve. To stimulate aggregate spending when short-term interest rates have reached zero, the Fed must expand the scale of its asset purchases or, possibly, expand the menu of assets that it buys. Alternatively, the Fed could find other ways of injecting money into the system--for example, by making low-interest-rate loans to banks or cooperating with the fiscal authorities. Each method of adding money to the economy has advantages and drawbacks, both technical and economic. One important concern in practice is that calibrating the economic effects of nonstandard means of injecting money may be difficult, given our relative lack of experience with such policies. Thus, as I have stressed already, prevention of deflation remains preferable to having to cure it. If we do fall into deflation, however, we can take comfort that the logic of the printing press example must assert itself, and sufficient
injections of money will ultimately always reverse a deflation. So what then might the Fed do if its target interest rate, the overnight federal funds rate, fell to zero? One relatively straightforward extension of current procedures would be to try to stimulate spending by lowering rates further out along the Treasury term structure--that is, rates on government bonds of longer maturities.9 There are at least two ways of bringing down longer-term rates, which are complementary and could be employed separately or in combination. One approach, similar to an action taken in the past couple of years by the Bank of Japan, would be for the Fed to commit to holding the overnight rate at zero for some specified period. Because long-term interest rates represent averages of current and expected future short-term rates, plus a term premium, a commitment to keep short-term rates at zero for some time--if it were credible--would induce a decline in longer-term rates. A more direct method, which I personally prefer, would be for the Fed to begin announcing explicit ceilings for yields on longer-maturity Treasury debt (say, bonds maturing within the next two years). The Fed could enforce these interest-rate ceilings by committing to make unlimited purchases of securities up to two years from maturity at prices consistent with the targeted yields. If this program were successful, not only would yields on medium-term Treasury securities fall, but (because of links operating through expectations of future interest rates) yields on longer-term public and private debt (such as mortgages) would likely fall as well. Lower rates over the maturity spectrum of public and private securities should strengthen aggregate demand in the usual ways and thus help to end deflation. Of course, if operating in relatively short-dated Treasury debt proved insufficient, the Fed could also attempt to cap yields of Treasury securities at still longer maturities, say three to six years. 
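The commitment channel described above rests on the decomposition of long rates into the average of expected short rates plus a term premium. A back-of-the-envelope sketch, with every rate below assumed purely for illustration:

```python
import numpy as np

# Two-year yield, roughly: average expected overnight rate over the next
# eight quarters plus a term premium. All numbers are assumptions.
term_premium = 0.5   # percent

# Markets expect liftoff from zero after four quarters...
expected_short = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.5, 2.0, 2.5])
print(expected_short.mean() + term_premium)   # -> 1.375

# ...versus a credible commitment to zero for all eight quarters.
committed_short = np.zeros(8)
print(committed_short.mean() + term_premium)  # -> 0.5
```

Under this arithmetic, a credible commitment lowers the two-year yield without any change in the current overnight rate, which is the logic behind both the commitment option and the yield-ceiling option in the speech.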
Yet another option would be for the Fed to use its existing authority to operate in the markets for agency debt (for example, mortgage-backed securities issued by Ginnie Mae, the Government National Mortgage Association). Historical experience tends to support the proposition that a sufficiently determined Fed can peg or cap Treasury bond prices and yields at other than the shortest maturities. The most striking episode of bond-price pegging occurred during the years before the Federal Reserve-Treasury Accord of 1951.10 Prior to that agreement, which freed the Fed from its responsibility to fix yields on government debt, the Fed maintained a ceiling of 2-1/2 percent on long-term Treasury bonds for nearly a decade. Moreover, it simultaneously established a ceiling on the twelve-month Treasury certificate of between 7/8 percent and 1-1/4 percent and, during the first half of that period, a rate of 3/8 percent on the 90-day Treasury bill. The Fed was able to achieve these low interest rates despite a level of outstanding government debt (relative to GDP) significantly greater than we have today, as well as inflation rates substantially more variable. At times, in order to enforce these low rates, the Fed had actually to purchase the bulk of outstanding 90-day bills. Interestingly, though, the Fed enforced the 2-1/2 percent ceiling on long-term bond yields for nearly a decade without ever holding a substantial share of long-maturity bonds outstanding.11 For example, the Fed held 7.0 percent of outstanding Treasury securities in 1945 and 9.2 percent in 1951 (the year of the Accord), almost entirely in the form of 90-day bills. For comparison, in 2001 the Fed held 9.7 percent of the stock of outstanding Treasury debt. To repeat, I suspect that operating on rates on longer-term Treasuries would provide sufficient leverage for the Fed to achieve its goals in most plausible scenarios.
If lowering yields on longer-dated Treasury securities proved insufficient to restart spending, however,
the Fed might next consider attempting to influence directly the yields on privately issued securities. Unlike some central banks, and barring changes to current law, the Fed is relatively restricted in its ability to buy private securities directly.12 However, the Fed does have broad powers to lend to the private sector indirectly via banks, through the discount window.13 Therefore a second policy option, complementary to operating in the markets for Treasury and agency debt, would be for the Fed to offer fixed-term loans to banks at low or zero interest, with a wide range of private assets (including, among others, corporate bonds, commercial paper, bank loans, and mortgages) deemed eligible as collateral.14 For example, the Fed might make 90-day or 180-day zero-interest loans to banks, taking corporate commercial paper of the same maturity as collateral. Pursued aggressively, such a program could significantly reduce liquidity and term premiums on the assets used as collateral. Reductions in these premiums would lower the cost of capital both to banks and the nonbank private sector, over and above the beneficial effect already conferred by lower interest rates on government securities.15 The Fed can inject money into the economy in still other ways. For example, the Fed has the authority to buy foreign government debt, as well as domestic government debt. Potentially, this class of assets offers huge scope for Fed operations, as the quantity of foreign assets eligible for purchase by the Fed is several times the stock of U.S. government debt.16 I need to tread carefully here. Because the economy is a complex and interconnected system, Fed purchases of the liabilities of foreign governments have the potential to affect a number of financial markets, including the market for foreign exchange. 
In the United States, the Department of the Treasury, not the Federal Reserve, is the lead agency for making international economic policy, including policy toward the dollar; and the Secretary of the Treasury has expressed the view that the determination of the value of the U.S. dollar should be left to free market forces. Moreover, since the United States is a large, relatively closed economy, manipulating the exchange value of the dollar would not be a particularly desirable way to fight domestic deflation, particularly given the range of other options available. Thus, I want to be absolutely clear that I am today neither forecasting nor recommending any attempt by U.S. policymakers to target the international value of the dollar. Although a policy of intervening to affect the exchange value of the dollar is nowhere on the horizon today, it's worth noting that there have been times when exchange rate policy has been an effective weapon against deflation. A striking example from U.S. history is Franklin Roosevelt's 40 percent devaluation of the dollar against gold in 1933-34, enforced by a program of gold purchases and domestic money creation. The devaluation and the rapid increase in money supply it permitted ended the U.S. deflation remarkably quickly. Indeed, consumer price inflation in the United States, year on year, went from -10.3 percent in 1932 to -5.1 percent in 1933 to 3.4 percent in 1934.17 The economy grew strongly, and by the way, 1934 was one of the best years of the century for the stock market. If nothing else, the episode illustrates that monetary actions can have powerful effects on the economy, even when the nominal interest rate is at or near zero, as was the case at the time of Roosevelt's devaluation.

Fiscal Policy

Each of the policy options I have discussed so far involves the Fed's acting on its own.
In practice, the effectiveness of anti-deflation policy could be significantly enhanced by cooperation between the monetary and fiscal authorities. A broad-based tax cut, for
example, accommodated by a program of open-market purchases to alleviate any tendency for interest rates to increase, would almost certainly be an effective stimulant to consumption and hence to prices. Even if households decided not to increase consumption but instead re-balanced their portfolios by using their extra cash to acquire real and financial assets, the resulting increase in asset values would lower the cost of capital and improve the balance sheet positions of potential borrowers. A money-financed tax cut is essentially equivalent to Milton Friedman's famous "helicopter drop" of money.18 Of course, in lieu of tax cuts or increases in transfers the government could increase spending on current goods and services or even acquire existing real or financial assets. If the Treasury issued debt to purchase private assets and the Fed then purchased an equal amount of Treasury debt with newly created money, the whole operation would be the economic equivalent of direct open-market operations in private assets. Japan The claim that deflation can be ended by sufficiently strong action has no doubt led you to wonder, if that is the case, why has Japan not ended its deflation? The Japanese situation is a complex one that I cannot fully discuss today. I will just make two brief, general points. First, as you know, Japan's economy faces some significant barriers to growth besides deflation, including massive financial problems in the banking and corporate sectors and a large overhang of government debt. Plausibly, private-sector financial problems have muted the effects of the monetary policies that have been tried in Japan, even as the heavy overhang of government debt has made Japanese policymakers more reluctant to use aggressive fiscal policies (for evidence see, for example, Posen, 1998). Fortunately, the U.S. 
economy does not share these problems, at least not to anything like the same degree, suggesting that anti-deflationary monetary and fiscal policies would be more potent here than they have been in Japan. Second, and more important, I believe that, when all is said and done, the failure to end deflation in Japan does not necessarily reflect any technical infeasibility of achieving that goal. Rather, it is a byproduct of a longstanding political debate about how best to address Japan's overall economic problems. As the Japanese certainly realize, both restoring banks and corporations to solvency and implementing significant structural change are necessary for Japan's long-run economic health. But in the short run, comprehensive economic reform will likely impose large costs on many, for example, in the form of unemployment or bankruptcy. As a natural result, politicians, economists, businesspeople, and the general public in Japan have sharply disagreed about competing proposals for reform. In the resulting political deadlock, strong policy actions are discouraged, and cooperation among policymakers is difficult to achieve. In short, Japan's deflation problem is real and serious; but, in my view, political constraints, rather than a lack of policy instruments, explain why its deflation has persisted for as long as it has. Thus, I do not view the Japanese experience as evidence against the general conclusion that U.S. policymakers have the tools they need to prevent, and, if necessary, to cure a deflationary recession in the United States. Conclusion Sustained deflation can be highly destructive to a modern economy and should be strongly resisted. Fortunately, for the foreseeable future, the chances of a serious deflation in the
United States appear remote indeed, in large part because of our economy's underlying strengths but also because of the determination of the Federal Reserve and other U.S. policymakers to act preemptively against deflationary pressures. Moreover, as I have discussed today, a variety of policy responses are available should deflation appear to be taking hold. Because some of these alternative policy tools are relatively less familiar, they may raise practical problems of implementation and of calibration of their likely economic effects. For this reason, as I have emphasized, prevention of deflation is preferable to cure. Nevertheless, I hope to have persuaded you that the Federal Reserve and other economic policymakers would be far from helpless in the face of deflation, even should the federal funds rate hit its zero bound.19

References
Ahearne, Alan, Joseph Gagnon, Jane Haltmaier, Steve Kamin, and others, "Preventing Deflation: Lessons from Japan's Experiences in the 1990s," Board of Governors, International Finance Discussion Paper No. 729, June 2002.
Clouse, James, Dale Henderson, Athanasios Orphanides, David Small, and Peter Tinsley, "Monetary Policy When the Nominal Short-term Interest Rate Is Zero," Board of Governors of the Federal Reserve System, Finance and Economics Discussion Series No. 2000-51, November 2000.
Eichengreen, Barry, and Peter M. Garber, "Before the Accord: U.S. Monetary-Financial Policy, 1945-51," in R. Glenn Hubbard, ed., Financial Markets and Financial Crises, Chicago: University of Chicago Press for NBER, 1991.
Eggertson, Gauti, "How to Fight Deflation in a Liquidity Trap: Committing to Being Irresponsible," unpublished paper, International Monetary Fund, October 2002.
Fisher, Irving, "The Debt-Deflation Theory of Great Depressions," Econometrica (March 1933) pp. 337-57.
Hetzel, Robert L. and Ralph F. Leach, "The Treasury-Fed Accord: A New Narrative Account," Federal Reserve Bank of Richmond, Economic Quarterly (Winter 2001) pp. 33-55.
Orphanides, Athanasios and Volker Wieland, "Efficient Monetary Design Near Price Stability," Journal of the Japanese and International Economies (2000) pp. 327-65.
Posen, Adam S., Restoring Japan's Economic Growth, Washington, D.C.: Institute for International Economics, 1998.
Reifschneider, David, and John C. Williams, "Three Lessons for Monetary Policy in a Low-Inflation Era," Journal of Money, Credit, and Banking (November 2000) Part 2 pp. 936-66.
Toma, Mark, "Interest Rate Controls: The United States in the 1940s," Journal of Economic History (September 1992) pp. 631-50.

Footnotes
1. Conceivably, deflation could also be caused by a sudden, large expansion in aggregate supply arising, for example, from rapid gains in productivity and broadly declining costs. I don't know of any unambiguous example of a supply-side deflation, although China in recent years is a possible case. Note that a supply-side deflation would be associated with an economic boom rather than a recession.
2. The nominal interest rate is the sum of the real interest rate and expected inflation. If expected inflation moves with actual inflation, and the real interest rate is not too variable, then the nominal interest rate declines when inflation declines--an effect known as the Fisher effect, after the early twentieth-century economist Irving Fisher. If the rate of deflation is equal to or greater than the real interest rate, the Fisher effect predicts that the nominal interest rate will equal zero.
3. The real interest rate equals the nominal interest rate minus the expected rate of inflation (see the previous footnote). The real interest rate measures the real (that is, inflation-adjusted) cost of borrowing or lending.
4. Throughout the latter part of the nineteenth century, a worldwide gold shortage was forcing down prices in all countries tied to the gold standard. Ironically, however, by the time that Bryan made his famous speech, a new cyanide-based method for extracting gold from ore had greatly increased world gold supplies, ending the deflationary pressure.
5. A rather different, but historically important, problem associated with the zero bound is the possibility that policymakers may mistakenly interpret the zero nominal interest rate as signaling conditions of "easy money." The Federal Reserve apparently made this error in the 1930s. In fact, when prices are falling, the real interest rate may be high and monetary policy tight, despite a nominal interest rate at or near zero.
6.
Several studies have concluded that the measured rate of inflation overstates the "true" rate of inflation, because of several biases in standard price indexes that are difficult to eliminate in practice. The upward bias in the measurement of true inflation is another reason to aim for a measured inflation rate above zero.
7. See Clouse et al. (2000) for a more detailed discussion of monetary policy options when the nominal short-term interest rate is zero.
8. Keynes, however, once semi-seriously proposed, as an anti-deflationary measure, that the government fill bottles with currency and bury them in mine shafts to be dug up by the public.
9. Because the term structure is normally upward sloping, especially during periods of economic weakness, longer-term rates could be significantly above zero even when the overnight rate is at the zero bound.

10. See Hetzel and Leach (2001) for a fascinating account of the events leading to the Accord.
11. See Eichengreen and Garber (1991) and Toma (1992) for descriptions and analyses of the pre-Accord period. Both articles conclude that the Fed's commitment to low inflation helped convince investors to hold long-term bonds at low rates in the 1940s and 1950s. (A similar dynamic would work in the Fed's favor today.) The rate-pegging policy finally collapsed because the money creation associated with buying Treasury securities was generating inflationary pressures. Of course, in a deflationary situation, generating inflationary pressure is precisely what the policy is trying to accomplish. An episode apparently less favorable to the view that the Fed can manipulate Treasury yields was the so-called Operation Twist of the 1960s, during which an attempt was made to raise short-term yields and lower long-term yields simultaneously by selling at the short end and buying at the long end. Academic opinion on the effectiveness of Operation Twist is divided. In any case, this episode was rather small in scale, did not involve explicit announcement of target rates, and occurred when interest rates were not close to zero.
12. The Fed is allowed to buy certain short-term private instruments, such as bankers' acceptances, that are not much used today. It is also permitted to make IPC (individual, partnership, and corporation) loans directly to the private sector, but only under stringent criteria. This latter power has not been used since the Great Depression but could be invoked in an emergency deemed sufficiently serious by the Board of Governors.
13. Effective January 9, 2003, the discount window will be restructured into a so-called Lombard facility, from which well-capitalized banks will be able to borrow freely at a rate above the federal funds rate. These changes have no important bearing on the present discussion.
14. By statute, the Fed has considerable leeway to determine what assets to accept as collateral.
15. In carrying out normal discount window operations, the Fed absorbs virtually no credit risk because the borrowing bank remains responsible for repaying the discount window loan even if the issuer of the asset used as collateral defaults. Hence both the private issuer of the asset and the bank itself would have to fail nearly simultaneously for the Fed to take a loss. The fact that the Fed bears no credit risk places a limit on how far down the Fed can drive the cost of capital to private nonbank borrowers. For various reasons the Fed might well be reluctant to incur credit risk, as would happen if it bought assets directly from the private nonbank sector. However, should this additional measure become necessary, the Fed could of course always go to the Congress to ask for the requisite powers to buy private assets. The Fed also has emergency powers to make loans to the private sector (see footnote 12), which could be brought to bear if necessary.
16. The Fed has committed to the Congress that it will not use this power to "bail out" foreign governments; hence in practice it would purchase only highly rated
foreign government debt.
17. U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, Washington, D.C.: 1976.
18. A tax cut financed by money creation is the equivalent of a bond-financed tax cut plus an open-market operation in bonds by the Fed, and so arguably no explicit coordination is needed. However, a pledge by the Fed to keep the Treasury's borrowing costs low, as would be the case under my preferred alternative of fixing portions of the Treasury yield curve, might increase the willingness of the fiscal authorities to cut taxes. Some have argued (on theoretical rather than empirical grounds) that a money-financed tax cut might not stimulate people to spend more because the public might fear that future tax increases will just "take back" the money they have received. Eggertson (2002) provides a theoretical analysis showing that, if government bonds are not indexed to inflation and certain other conditions apply, a money-financed tax cut will in fact raise spending and inflation. In brief, the reason is that people know that inflation erodes the real value of the government's debt and, therefore, that it is in the interest of the government to create some inflation. Hence they will believe the government's promise not to "take back" in future taxes the money distributed by means of the tax cut.
19. Some recent academic literature has warned of the possibility of an "uncontrolled deflationary spiral," in which deflation feeds on itself and becomes inevitably more severe. To the best of my knowledge, none of these analyses consider feasible policies of the type that I have described today. I have argued here that these policies would eliminate the possibility of uncontrollable deflation.

Last update: November 21, 2002

Clarifying the debate about deflation concerns


Mickey Levy, 21 February 2014
A popular view among economic commentators is that rich countries face a serious risk of deflation, and should adopt aggressive macroeconomic stimulus policies to ward it off. This column argues that despite similar headline inflation rates, the US, Europe, and Japan in fact face very different macroeconomic conditions. In the US, much of the recent disinflation is attributable to positive supply-side developments. In Europe, an aggressive round of quantitative easing might encourage policymakers to delay the reforms that are necessary to avoid a prolonged Japanese-style malaise.
A common theme among many economic policymakers, financial market participants, and the media is that rich industrialised nations face a high risk of deflation, and that deflation always harms economic performance and so must be combatted with aggressive macroeconomic stimulus. Such broad assessments are misleading, and under certain circumstances may lead to misguided policies. More clarity on the topic is required. Year-over-year inflation measures in the US, Europe, and Japan have converged close to 1%: the US and the Eurozone have experienced disinflation, while in Japan modest inflation has replaced four years of mild deflation. However, the conditions influencing their respective outlooks vary widely.
Figure 1. Inflation in the US, the Eurozone, and Japan

Observations and comparisons

There is far too much angst about the possibility of deflation in the US. With nominal GDP accelerating, labour markets improving, and the cost of housing shelter (owner-occupied rent) accelerating, the probability of deflation is near zero. It is highly probable (perhaps 80% to 90%) that core inflation has troughed, and will rise modestly in 2014-2015. Moreover, far too little attention has been paid to the fact that the vast majority of the US's recent disinflation has resulted from technological innovations that raise economic performance and potential. This is very different from disinflation/deflation associated with insufficient aggregate demand, which may adversely affect spending behaviour and exacerbate economic underperformance. In contrast, Europe suffers from sluggish economic growth and ongoing adjustments following a sustained period of excesses. The most likely outcome is that inflation will stay very low and likely recede toward zero, and there is some nontrivial probability (perhaps 20%) of temporary mild deflation. The proper balance of policies must weigh the need for economic reforms and long-run objectives against near-term concerns about deflation. Japan faces a very different situation. Its two-decade-long struggle with on-and-off deflation and economic malaise illustrates the harmful effects of misguided policies that result in insufficient demand and constrain economic activity. The Abe administration is attempting to radically change expectations and boost performance. Success hinges on whether the Bank of Japan's aggressive monetary stimulus boosts confidence sufficiently to generate sustained increases in spending, and whether the stimulus is accompanied by reforms that tackle economic inefficiencies and lift potential growth. Thus far, longer-run reforms are lacking. I expect that in 2014 core inflation will stay above 1%, but below the Abe administration's 2% goal.

Aggregate demand and productive capacity


In the US, the year-over-year core Personal Consumption Expenditures (PCE) index is at 1.2%, and the core CPI is at 1.7%, down from 1.8% and 1.9%, respectively, in the previous year. This disinflation is a lagged consequence of the disappointingly modest growth in aggregate demand that has constrained business pricing power, and high unemployment that has dampened wages, along with lower prices of selected goods and services benefitting from technological innovations. However, nominal GDP, the broadest measure of aggregate demand, has grown comfortably faster than estimated real potential throughout the soft economic recovery. In the second half of 2013 it grew at an annualised rate of above 5%, a significant acceleration from the previous year's 3.1%. Such growth in aggregate demand far in excess of productive capacity virtually rules out deflation. If growth remains healthy, which seems likely, fuelled by monetary stimulus and a continued diminution of economic headwinds, the stronger economic activity and associated improvement in labour markets would influence price- and wage-setting behaviour, and begin to generate inflation and wage pressures.
Figure 2. Nominal GDP growth in the US and the Eurozone
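The annualised growth rates quoted above follow standard compounding arithmetic. A minimal sketch (the quarterly figure used below is illustrative, not a number from the article):

```python
def annualise(period_growth, periods_per_year):
    """Compound a per-period growth rate up to an annual rate."""
    return (1 + period_growth) ** periods_per_year - 1

# A 1.25% quarterly pace of nominal GDP growth compounds to roughly 5.1%
# annualised, consistent with the "above 5%" rate cited for the second half of 2013.
print(round(annualise(0.0125, 4), 4))  # 0.0509
```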

Adjustment in Europe
Europe's recovery from the financial crisis and its many necessary adjustments are at a much earlier stage. Weak aggregate demand puts downward pressure on product pricing. Nominal GDP in the Eurozone rose 0.7% in 2012 and an estimated 1% in 2013, below estimates of potential growth. High unemployment and slack labour markets are expected to persist, and still lower wages are needed in troubled nations to regain competitiveness. To date, the stickiness of wages, which continue to rise in real terms in France and Italy, highlights the slow adjustments to the new economic realities. Despite some positive reforms, an array of regulatory, economic, and fiscal policies continues to constrain productive capacity, and many reforms and austerity measures have been put on hold. Europe's risks of deflation are nontrivial. In the last year, virtually every EU nation experienced downward price pressures, but inflation varies around the Eurozone's harmonised rate of 0.9%. With the exception of Greece, which has deflation of 1.8%, inflation has been modest in all European nations, ranging from Spain's 0.3% to Germany's 1.2%. Europe's road to healthy economic performance will be difficult, and policymakers face tough choices and tradeoffs. Should the ECB move aggressively with a US- or Japanese-style round of quantitative easing in an attempt to avert deflation? No. The challenges facing Europe are real and not monetary, and the ECB's monetary policy is already accommodative, with a negative real policy rate and ample liquidity in the financial system. Unintended consequences of aggressive QE may be costly. Such an ECB shift may signal to Europe's fiscal and economic policymakers that they may postpone necessary reforms. Or it may influence private-sector wage negotiations and result in delays in real wage adjustments. While lowering its policy rate or countering banks' repayment of long-term refinancing operations by otherwise signalling that it will maintain

sufficient liquidity seems appropriate, avoiding QE and its potential unintended policy side effects would be best for Europe's long-run potential. That is, a brush with price stability or even temporary deflation along the road to recovery may be the best, albeit painful, path of adjustment from earlier excesses.

Japan's malaise
Japan's challenges are very different. Its nominal GDP is barely above where it was in 1991. It rose through 1997, was flat through 2007, plummeted in 2008-2009, and currently remains nearly 7% below its prior expansion peak. This prolonged underperformance has resulted from a combination of insufficient aggregate demand (misguided monetary and banking policies), a wide array of growth-suppressing regulatory and economic policies, and a declining population and workforce. Real GDP has grown only as a consequence of a persistent decline in the deflator, with intermittent periods of deflation and recession. This sustained malaise dampened expectations, and constrained households' and businesses' propensity to spend.
Figure 3. Japan's malaise

Japan is trying to escape this environment. Early responses to the Bank of Japan's aggressive QE and lower yen have been positive: confidence has jumped, real growth has accelerated above potential, and inflation has increased to 1.1% year-over-year. But it's far too early to claim success. Critics are correct that the failure to implement economic, regulatory, and immigration reforms constrains long-run potential growth and drains confidence. But for 2014, sustained efforts to boost confidence (which in all likelihood requires a weaker yen, a stronger Nikkei, and rising wages) are likely to support core inflation (excluding the one-time impact of the consumption tax hike) between 1% and 2%, and above-potential growth, despite the drag from the scheduled tax hike.

The Abe administration's goal of achieving 2% inflation is only an intermediate target. Along with monetary stimulus, expectations of rising prices (and higher wages) are hoped to support stronger economic growth (and a renewed positive attitude). Even aside from needed reform, there's a conflict in these objectives. Ultimately, maintaining negative real bond yields, deemed a key element to reducing debt burdens, will be inconsistent with sustained moderate inflation and healthy growth. At some point, financial market expectations will adjust, and bond yields will rise.

Bad deflation vs healthy price declines


Deflation stemming from insufficient demand and growth-constraining economic policies can drain confidence and become negatively reinforcing, as Japan has shown. In such situations, aggressive macroeconomic policy stimulus designed to jar expectations and boost demand is appropriate. Europe's downward price and wage pressures are necessary adjustments to its earlier excesses, and relying excessively on aggressive monetary policy to stimulate demand is not a lasting economic remedy. Europe is not destined to fall into a Japanese-style prolonged malaise, but it must continue to pursue reforms that lift productive capacity and confidence. The US situation is very different. The economic expansion is gaining momentum (temporarily sidetracked by unseasonal winter storms), unemployment is falling steadily, personal income is growing faster than inflation, and household net worth is at an all-time high. Expectations of deflation are not apparent in either household or business behaviour. Concerns about lingering labour-market underperformance are warranted; angst about deflation is not. Prices of some goods and services in the US have been falling, benefitting from technological innovation, improved product design, or heightened competition and distribution efficiencies through the internet. Examples abound: flat-screen TVs, computers, automobiles, reduced fees on financial transactions, online consumer and business purchases, etc. These lower prices and quality improvements explain the vast majority of the recent deceleration in inflation: the PCE deflator for goods continues to decline and is flat for nondurables, while it has been rising at a fairly steady pace of 2% for services. These innovation-based price reductions improve standards of living and free up disposable income to spend on other goods and services. They boost aggregate demand and enhance economic performance. And they contribute positively to longer-run potential growth.
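The arithmetic behind this goods/services decomposition can be sketched as a weighted average (the expenditure weights and the goods figure below are hypothetical assumptions; only the roughly 2% services pace comes from the text):

```python
# Headline inflation as an expenditure-weighted average of component rates.
# Weights (~1/3 goods, ~2/3 services) and the -0.5% goods deflation are
# illustrative assumptions; the 2% services inflation is cited in the text.
components = {"goods": (-0.005, 0.34), "services": (0.02, 0.66)}

headline = sum(rate * weight for rate, weight in components.values())
print(round(headline, 4))  # 0.0115
```

Even with services inflation steady at 2%, mild goods deflation pulls the headline rate close to 1% — the pattern the deflator decomposition illustrates.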
It is unclear why US policymakers and commentators fear disinflation that stems from innovation-based price reductions amid accelerating aggregate demand. European policymakers face tougher choices.
Figure 4. Decomposition of the US PCE deflator

A new look at global growth convergence and divergence


Michele Battisti, Gianfranco di Vaio, Joseph Zeira, 9 January 2014
A key question in economics is whether poor countries will automatically close the income gap with rich countries. However, different empirical methods yield different answers: growth regressions suggest convergence, whereas tests of distribution dynamics suggest divergence. This column discusses recent research that reconciles these two strands of the literature. It extends the benchmark growth regression model to include a parameter that determines the share of new technologies a country can adopt each year. The result is that, although each country converges to a growth path, the growth paths themselves may diverge.
The phenomenon of modern economic growth is fairly new. It started less than two centuries ago, but it changed our lives significantly. One of the main changes is that income gaps between countries have greatly increased. One of the main questions that concern economists who study economic growth is whether these gaps are still growing, or countries are instead converging to the same level of income. This question is empirical, but it has important theoretical implications, as our main growth theories predict convergence between countries. This question is clearly also important from a policy point of view. If the poor countries will converge to the global frontier anyway, there is no need to provide them with extensive assistance. But if there is no convergence and even divergence, then such countries need help badly.

Alternative methods of measuring convergence


The most common method of measuring convergence or divergence of output per capita across countries has been through growth regressions. This method was developed by Barro (1991) and by Mankiw et al. (1992). In this method, the average rate of growth of output per capita over a long period is regressed over the initial level of output per capita and some additional control variables.

If countries converge, the coefficient of initial output should be negative, meaning a country that begins poorer will grow faster while a rich country will grow more slowly.

The size of this negative coefficient measures the rate of convergence. The control variables used in such regressions are viewed as explanatory variables for economic growth, and they usually include variables that measure education, political stability, institutions, and similar variables.

Growth regressions have indeed shown that there is convergence, namely that the coefficient of initial output on the rate of growth is negative.
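A cross-section regression of this kind can be sketched on simulated data (everything below is made up for illustration; the exercise only shows the sign convention, not real estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 140 countries under true convergence: average annual growth
# declines with initial log output per capita (true coefficient -0.02).
n = 140
log_y0 = rng.uniform(6, 10, n)                        # initial log output per capita
growth = 0.20 - 0.02 * log_y0 + rng.normal(0, 0.005, n)

# OLS of average growth on a constant and initial output.
X = np.column_stack([np.ones(n), log_y0])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

# A negative slope on initial output is the convergence signature.
print(beta[1] < 0)  # True
```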

As for the size of the coefficient (the rate of convergence), results are mixed, varying between 0.02 (Barro and Sala-i-Martin 1992) and 0.14 (Caselli et al. 1996). Other studies of convergence, which have followed an alternative path (a direct test of how the distribution of output across countries evolves over time), reached very different conclusions.

These alternative empirical studies found that the levels of output per capita in countries tend to diverge over time.

See Bernard and Durlauf (1995) and Quah (1996), and the exhaustive survey of growth empirics by Durlauf et al. (2005). This discrepancy between growth regressions and distributional studies of convergence is one of the main issues dealt with in our recent paper (Battisti et al. 2013). We show how to reconcile the two types of studies and their conflicting results.

Our approach
Our main point of departure is the benchmark model of growth regressions as described in Durlauf et al. (2005). The basic assumption of this model is that for each country the ratio of output per capita (or output per worker) y to total factor productivity A, that is, y/A, converges to a long-run value yE at a rate b. Note that early growth regressions examined economic growth only at short time horizons, due to lack of data, so the changes in A were small and were usually ignored. Later on, more data were accumulated and growth regressions turned from cross-section regressions to panel estimations, but these studies still did not specify the exact dynamics of productivity A for the estimation (see Durlauf et al. 2005, for a full description of these developments). Our study suggests adding a specification of the dynamics of productivity, one that is simple, intuitive, and can be tested empirically. We assume in the paper that a country's productivity follows the global frontier of technology F, but may follow it only partially. We assume that a country may adopt a share d of all new technologies every year, where d is a country-specific parameter that can vary from 0 to 1. If it is 0, the country has no technical change, while if d is 1, the country follows the global frontier fully.1 Adding this specification of productivity to the benchmark growth regression model leads to the following insights:

Each country converges to a long-run growth path which is determined by A. The rate of convergence to the long-run path is b, and it is equal to the convergence coefficient measured by all growth regressions. However, the long-run growth path itself can diverge from the global frontier F if the country's d is less than 1.

In that case, the country becomes poorer and poorer relative to the countries at the frontier.

Estimation
Embedding our productivity specification into the benchmark model yields an extended growth regression that combines convergence to the long-run growth path with the dynamics of this path itself. This allows us to empirically test this extended model, and estimate for each country the convergence coefficient b and also the coefficient d that measures global divergence. We use two methods of estimation.

Panel cointegration with estimation of heterogeneous coefficients for each country.

Here d is the cointegrating coefficient, and b is the error-correction coefficient.

Estimation by differences.

We find that the two methods of estimation yield similar results, but we present here the results of the panel cointegration only. Our empirical test of the extended growth regression model uses a panel of 140 countries over the years 1950-2008, in which we use GDP per capita as our main measure of economic development. The data are in PPP-adjusted units, 1990 Geary-Khamis dollars, taken from the Groningen database. The global frontier is represented by US GDP per capita, both because the US leads global growth and also because its rate of growth is very stable over time. Interestingly, we do not need any additional control variables in our dynamic estimation. We use them in a later stage of the paper, not reported here, in which we estimate in a cross-country regression the effects of the usual explanatory variables on the parameters d and a. Hence, the dynamic estimation that finds these parameters for each country does not require the use of control variables. This is another important result of the paper, since the use of control variables in the dynamic estimation of growth regressions has been criticised as ad hoc and rather arbitrary.

Results
Our results show that countries both converge and diverge. Each country converges to its long-run growth path, and the rate of convergence is 0.04, which is quite similar to the convergence coefficient found in many growth regressions. But the long-run growth paths of countries diverge from the global frontier, as the average estimated d is 0.7 and it is significantly lower than 1. Hence, our study succeeds in reconciling the two strands of the literature: growth regressions that point at convergence, and distribution dynamics that point at divergence (for details, see Battisti et al. 2013).

Summary
In this column we show how we reconcile the results of growth regressions, which indicate convergence, with the results of distribution dynamics tests, which indicate divergence.

We find that almost all countries converge to their own steady-state paths. However, the steady-state paths of most countries diverge from the global frontier.

We also try to test how the common explanatory variables of growth regressions affect the coefficient d, that is, the divergence of a country. This means that our method enables us to test separately the effect of various empirical variables on the long run and on the short run of economic growth. In addition to the reconciliation, our work offers a methodological extension to growth regressions. We hope it will stimulate research by other economists who will use our approach in order to improve our understanding of economic growth across countries. Cross-country growth differentials are, after all, one of the main puzzles of our times.

Authors' note: The views in this paper are those of the authors and do not necessarily represent their affiliated institutions.

References
Barro, R J (1991), "Economic growth in a cross section of countries", Quarterly Journal of Economics 106(2), pp. 407–443.
Barro, R J and X Sala-i-Martin (1992), "Convergence", Journal of Political Economy 100(2), pp. 223–251.
Battisti, M, G Di Vaio, and J Zeira (2013), "Global Divergence in Growth Regressions", CEPR Discussion Paper 9687.
Caselli, F, G Esquivel, and F Lefort (1996), "Reopening the convergence debate: a new look at cross-country growth empirics", Journal of Economic Growth 1, pp. 363–389.
Durlauf, S N, P Johnson, and J Temple (2005), "Growth Econometrics", in P Aghion and S N Durlauf (eds), Handbook of Economic Growth, Amsterdam: North Holland.
Mankiw, N G, D Romer, and D N Weil (1992), "A Contribution to the Empirics of Economic Growth", Quarterly Journal of Economics 107, pp. 408–437.
Quah, D T (1996), "Twin Peaks: Growth and Convergence in Models of Distribution Dynamics", Economic Journal 106, pp. 1045–1055.

Are Jobs and Growth Still Linked?


Posted on February 7, 2014 by iMFdirect. By Prakash Loungani

Over 200 million people are unemployed around the globe today, over a fifth of them in advanced economies. Unemployment rates in these economies shot up at the onset of the Great Recession and, five years later, remain very high. Some argue that this is to be expected given that the economy remains well below trend, and press for greater easing of macroeconomic policies (e.g. Krugman 2011, Kocherlakota 2014). Others suggest that the job losses, particularly in countries like Spain and Ireland, have been too large to be explained by developments in output, and may largely reflect structural problems in their labor markets. Even in the United States, where unemployment rates have fallen over the past year, there is concern that increasing numbers of people are dropping out of the labor force, thus decoupling jobs and growth.

Strong link between employment and growth

What does the evidence show? In new research, Laurence Ball, Daniel Leigh and I find that the link between output growth and employment growth holds strongly in most advanced economies. We find no evidence that this link, which goes by the wonkish name of Okun's Law, broke down during 2008 to 2013, including in high-unemployment countries like Ireland and Spain. On average across the 20 advanced economies we study, a 1 percentage point increase in output growth leads to a percentage point increase in employment growth (Figure 1). This link between jobs and growth, called the Okun coefficient, varies across countries, ranging from high values of 1.5 for Spain and 0.7 for Ireland to low values of 0.3 for Italy and 0.2 for Austria. The high value of Spain's Okun coefficient reflects the greater use of temporary contracts there compared to other countries: when times are good, many new jobs are created, but there is also a lot of job loss during recessions. We find that Okun's Law survived the stress test of the Great Recession: there is little evidence that the link between growth and jobs changed appreciably over its course. This is evident from Figure 2, where the Okun coefficients for the full period 1980 to 2013 are essentially the same as they were before the Great Recession.
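As a minimal illustration of how such an Okun coefficient is obtained, the sketch below regresses employment growth on output growth by OLS. The data are synthetic and every number (the true coefficient of 0.5, the intercept, the noise) is an invented assumption, not an estimate from the paper:

```python
import numpy as np

# Hypothetical annual data for one country, 1980-2013: employment growth
# is generated with a true Okun coefficient of 0.5, then recovered by OLS.
rng = np.random.default_rng(42)
n_years = 34
output_growth = rng.normal(2.0, 2.0, n_years)       # percent per year
employment_growth = (-0.3 + 0.5 * output_growth
                     + rng.normal(0.0, 0.4, n_years))

# OLS: employment_growth = intercept + okun * output_growth + error
X = np.column_stack([np.ones(n_years), output_growth])
intercept, okun_hat = np.linalg.lstsq(X, employment_growth, rcond=None)[0]
print(f"estimated Okun coefficient: {okun_hat:.2f}")
```

The slope recovered by the regression is the Okun coefficient for that country; running the same regression country by country is what produces the cross-country range (0.2 to 1.5) cited above.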

Figure 3 tracks the relationship between employment and output growth for two countries in our sample, the United States and Spain. The trajectory of output and employment since 2007 is highlighted in red in the figures. For the United States, Okun's Law fits very well; that is, the relationship between output and employment growth is strong (Figure 3a). During the Great Recession, there was a year in which U.S. employment fell quite a bit more than expected, but the historical relationship was restored by 2013. (It is worth noting that a similar relationship for unemployment would indicate that U.S. unemployment today is about a percentage point lower than its historical relationship with output growth would predict, consistent with the view of Erceg and Levin that people dropping out of the labor force could account for some of the sharp drop in U.S. unemployment.)

In the case of Spain, the jobs-growth link is very tight, with little evidence that it broke down over the Great Recession; the trajectory shown in red remains close to the historical relationship (Figure 3b).

Course of policy action

Our evidence matters for the interpretation of employment movements and for policy. The alleged breakdown of the jobs-growth link is often used as support for the view that the problems in the labor market are structural in nature and would not be solved by policy responses aimed at bringing about a cyclical recovery. McKinsey (2011), for example, argues that Okun's Law has broken down and that a return to full employment will thus require "not just healthy GDP growth [but also] major efforts in education, regulation, and even diplomacy". In contrast, our results suggest that, as IMF Managing Director Lagarde noted recently, the most effective way of boosting jobs is to get growth going again: an additional percentage point of growth in the world's advanced economies would lower unemployment there by about half of a percentage point, pulling over 4 million people back into jobs. So, in order to create jobs, we must lift economic growth. How can this be done? In the near term, there is no doubt that it will take smart monetary and fiscal policy to protect the recovery. Of course, the finding that Okun's Law holds does not in itself completely determine the course of policy action; that requires, as Nobel laureate Peter Diamond stated at the IMF's annual research conference, also looking carefully at relationships such as the Beveridge Curve (the relationship between jobs and vacancies) and other indicators of mismatch in the labor market. But the alleged breakdown of Okun's Law is often a jumping-off point for arguing that structural reforms are needed to make a major dent in unemployment. Our results suggest that while there may be good reasons to recommend structural reforms to boost employment, the belief that Okun's Law has broken down should not be one of them.

The Real Challenges to Growth



MILAN: Advanced economies' experience since the 2008 financial crisis has spurred a rapidly evolving discussion of growth, employment, and income inequality. That should come as no surprise: for those who expected a relatively rapid post-crisis recovery, the more things stay the same, the more they change.

Soon after the near-collapse of the financial system, the consensus view in favor of a reasonably normal cyclical recovery faded as the extent of balance-sheet damage and the effect of deleveraging on domestic demand became evident. But, even with deleveraging now well under way, the positive effect on growth and employment has been disappointing. In the United States, GDP growth remains well below what, until recently, had been viewed as its potential rate, and growth in Europe is negligible.

Employment remains low and is lagging GDP growth, a pattern that began at least three recessions ago and that has become more pronounced with each recovery. In most advanced economies, the tradable sector has generated very limited job growth, a problem that, until 2008, domestic demand solved by employing lots of people in the non-tradable sector (government, health care, construction, and retail).

Meanwhile, the adverse trends in income distribution both preceded the crisis and have survived it. In the US, the gap between the mean (per capita) income and the median income has grown to more than $20,000. The income gains from GDP growth have been mostly concentrated in the upper quartile of the distribution. Prior to the crisis, the wealth effect produced by high asset prices mitigated downward pressure on consumption, just as low interest rates and quantitative easing since 2008 have produced substantial gains in asset prices that, given weak economic performance, probably will not last.


The growing concentration of wealth, together with highly uneven educational quality, is contributing to declines in intergenerational economic mobility, in turn threatening social and political cohesion. Though causality is elusive, there has historically been a high correlation between inequality and political polarization, which is one reason why successful developing-country growth strategies have relied heavily on inclusiveness.

Labor-saving technology and shifting employment patterns in the global economy's tradable sector are important drivers of inequality. Routine white- and blue-collar jobs are disappearing, while lower-value-added employment in the tradable sector is moving to a growing set of developing economies. These powerful twin forces have upset the long-run equilibrium in advanced economies' labor markets, with too much education and too many skills invested in an outmoded growth pattern.

All of this is causing distress, consternation, and confusion. But stagnation in the advanced countries is not inevitable, though avoiding it does require overcoming a daunting set of challenges.

First, expectations are, or have been, out of line with reality. It takes time for the full impact of deleveraging, structural rebalancing, and restoring shortfalls in tangible and intangible assets via investment to manifest itself. In the meantime, those who are bearing the brunt of the transition costs, the unemployed and the young, need support, and those of us who are more fortunate should bear the costs. Otherwise, the stated intention of restoring inclusive growth patterns will lack credibility, undercutting the ability to make difficult but important choices.

Second, achieving full potential growth requires that the widespread pattern of public-sector underinvestment be reversed. A shift from consumption-led to investment-led growth is crucial, and it has to start with the public sector.

The best way to use the advanced countries' remaining fiscal capacity is to restore public investment in the context of a credible multi-year stabilization plan. This is a much better path than one that relies on leverage, low interest rates, and elevated asset prices to stimulate domestic demand beyond its natural recovery level. Not all demand is created equal. We need to get the level up and the composition right.

Third, in flexible economies like that of the US, an important structural shift toward external demand is already underway. Exports are growing rapidly (outpacing import growth), owing to lower energy costs, new technologies that favor re-localization, and a declining real effective exchange rate (nominal dollar depreciation combined with muted domestic wage and income growth and higher inflation in major developing-country trading partners). Eventually, these structural shifts will offset a lower (and more sustainable) level of consumption relative to income, unless inappropriate increases in domestic demand short-circuit the process.

Fourth, economies with structural rigidities need to take steps to remove them. All economies must be adaptable to structural change in order to support growth, and flexibility becomes more important in altering defective growth patterns, because it affects the speed of recovery.

Finally, leadership is required to build a consensus around a new growth model and the burden-sharing needed to implement it successfully. Many developing countries spend a lot of time in a stable, no-growth equilibrium, and then shift to a more positive one. There is nothing automatic about that. In all of the cases with which I am familiar, effective leadership was the catalyst.

So, while we can expect a multi-year process of rebalancing and closing the gap between actual and potential growth, exactly how long it will take depends on policy choices and the speed of structural adjustment. In southern Europe, for example, the process will take longer, because there are more missing components of recovery in these countries. But the lag in identifying the challenges, much less in responding to them, seems fairly long almost everywhere.

Of course, the technological and demographic factors that underpin potential growth ebb and flow over longer (multi-decade) timeframes; and, regardless of whether the US and other advanced countries have entered a long-run period of secular decline, there really is no way to influence these forces.

But the immediate issue confronting many economies is different: restoring a resilient and inclusive growth pattern that achieves whatever the trend in potential growth permits.

Read more at http://www.project-syndicate.org/commentary/michael-spence-explains-why-secular-stagnation--is-not-a-problem-that-the-us-and-other-advanced-economies-should-tryto-solve

Monetary versus Fiscal: an odd debate


A long time ago, the debate between monetarists and Keynesians was the debate in macro. But it was a rather limited debate: both sides generally used the same model (IS-LM), and so it was all about parameter values. It was also, dare I say, a little dull. More recently, but before the recession, that debate had largely gone away, but since then it seems to have come back. This post asks why that is.

Before the recession, what I have called the consensus view was this. Under flexible exchange rates, monetary policy was the instrument of choice for demand stabilisation. Textbooks tend to give you a list of reasons why this is, but as I and colleagues argue here, it follows naturally in a New Keynesian model. It is not because fiscal policy cannot stabilise demand, but because (in fairly simple cases) monetary policy is better at doing so. From a welfare point of view, it dominates.

Does that mean, when monetary policy is unconstrained, it is all you ever need in a New Keynesian framework? In theory, no. To take just one example, in a model with wage and price stickiness, real wages will deviate from their natural level following shocks, and in principle changes in income taxes or sales taxes could correct this. Even more obviously, these same taxes can in principle offset cost-push shocks. However, you might describe this as a niche role for fiscal policy: the basic and fundamentally necessary task of stabilising demand is best handled by changing interest rates.

The qualification that monetary policy is unconstrained is critical, of course. If you are a member of a monetary union, it is a completely different game, and now fiscal policy's ability to influence demand makes it useful, although perhaps not essential. I have argued that a failure to understand this was a major factor behind the Eurozone crisis. The other example of a constrained monetary policy is of course the Zero Lower Bound. At the ZLB we have fiscal policy versus unconventional monetary policy.
There are good reasons for thinking that unconventional monetary policy will not dominate fiscal policy as a stabilisation tool. Analysis of one particular unconventional policy, forward commitment (which is not the same as forward guidance), shows fiscal policy is not dominated in this case. So which is better comes down to parameter values and the details of the model used. Given the uncertainties here, the only reasonable approach to take in advising policymakers is to use a combination of both policies. If you are thinking that concerns about debt might rule out using fiscal policy, you are wrong: temporary balanced-budget changes in government spending can still be a useful tool.

This eclectic approach seems to me to be the one taken by, say, Paul Krugman or Brad DeLong. What I do not understand is why there are others who take a different view, and want to argue that monetary policy is still all you need. Where does this certainty about the powers of unconventional monetary policy come from? As I have already noted, it does not come from the theory that I know. So maybe it comes from a belief about parameter values, but given how untried this policy is, how can you be so certain as to not want to hedge your bets with fiscal action?

But perhaps it is not the effectiveness of unconventional monetary policy that is crucial here, but a profound mistrust concerning the effectiveness of fiscal actions. Yet I cannot see the macroeconomics behind that either. In its purest form, all fiscal stimulus involves is bringing forward public spending, just as monetary easing encourages the private sector to bring forward their spending. That is bound to be effective in combating demand deficiency. Is the fact that the spending takes place this year rather than in five years' time that costly, particularly if the spending is investment-like in character? So I remain mystified where this desire to downgrade the usefulness of fiscal policy at the ZLB comes from. I suspect I am missing something, and would like to know what that is.

DEMOCRACY VS. INEQUALITY


DARON ACEMOGLU AND JAMES ROBINSON

President Obama's State of the Union address promised a renewed focus on economic inequality in the last two years of his administration. But many have already despaired about the ability of American democracy to tackle increasing economic inequalities. Indeed, wage and income inequality have continued to rise over the last four decades, during both periods of economic expansion and contraction. But these trends are not unique to the United States. Many OECD countries have also experienced increasing wage and income inequality over the last several decades.

That these widening gaps between rich and poor should be taking place in established democracies is puzzling. The workhorse models of democracy are based on the idea that the median voter will use his democratic power to redistribute resources away from the rich towards himself. When the gap between the rich (or mean income in society) and the median voter (who is typically close to the median of the income distribution) is greater, this redistributive tendency should be greater. Moreover, as Meltzer and Richard's seminal paper emphasized, the more democratic a society is (meaning the wider the voting franchise), the more redistribution there should be. This is a simple consequence of the fact that with a wider franchise, expanded towards the bottom of the income distribution, the median voter will be poorer and thus keener on redistributing away from the rich towards herself.

These strong predictions notwithstanding, the evidence on this topic is decidedly mixed. Our recent paper, joint with Suresh Naidu and Pascual Restrepo, Democracy, Redistribution and Inequality, revisits these questions. Theoretically, we point out why the relationship between democracy, redistribution, and inequality may be more complex, and thus more tenuous, than the above expectations might suggest. First, democracy may be captured or constrained.
In particular, even though democracy clearly changes the distribution of de jure power in society, policy outcomes and inequality depend not just on the de jure but also the de facto distribution of power. This is a point we had previously argued in Persistence of Power, Elites and Institutions. Elites who see their de jure power eroded by democratization may sufficiently increase their investments in de facto power, for example by controlling local law enforcement, mobilizing non-state armed actors, lobbying, or capturing the party system. This will then enable them to continue their control of the political process. If so, we would not see much impact of democratization on redistribution and inequality. Even if not thus captured, a democracy may be constrained either by other de jure institutions, such as constitutions, conservative political parties, and judiciaries, or by de facto threats of coups, capital flight, or widespread tax evasion by the elite.

Second, democracy may lead to Inequality-Increasing Market Opportunities. Nondemocracy may exclude a large fraction of the population from productive occupations, for example from skilled occupations and entrepreneurship, as starkly illustrated by Apartheid South Africa or perhaps also by the former Soviet Union. To the extent that there is significant heterogeneity within this population, the freedom to take part in economic activities on a more level playing field with the previous elite may actually increase inequality within the excluded or repressed group, and consequently within the entire society.

Finally, consistent with Stigler's Director's Law, democracy may transfer political power to the middle class rather than the poor. If so, redistribution may increase and inequality may be curtailed only if the middle class is in favor of such redistribution.

So theory may not speak as loudly as one might have first thought. What about the facts? This is where the previous literature has been pretty contentious. Some have found inequality-reducing effects of democracy and some not. We argue that these questions cannot be easily answered with cross-sectional (cross-national) regressions, because democracies are significantly different from nondemocracies in so many dimensions. Instead, we provide evidence from panel data regressions (with fixed effects) from a consistent postwar sample.

The facts are intriguing. First, there is a robust and quantitatively large effect of democracy on tax revenues as a percentage of GDP (and also on total government revenues as a percentage of GDP). The long-run effect of democracy is about a 16 percent increase in tax revenues as a fraction of GDP.
Second, there is also a significant impact of democracy on secondary school enrollment and on the extent of structural transformation, for example as captured by the nonagricultural share of employment or output. Third, and in stark contrast to these results, there is a much more limited effect of democracy on inequality. Democracy just doesn't seem to affect inequality much. Though this might reflect the poorer quality of inequality data, there is likely more to this lack of correlation between democracy and inequality. In fact, we do find heterogeneous effects of democracy on inequality consistent with the theories mentioned above, which would not have been possible if the poor quality of inequality data made it hard to find any empirical relationship.

Overall, our results suggest that democracy does represent a real shift in political power away from elites and has first-order consequences for redistribution and government policy. But the impact of democracy on inequality may be more limited than one might have expected. Though our work does not shed light on why this is so, there are several plausible hypotheses. The limited impact of democracy on inequality might be because recent increases in inequality are market-induced, in the sense of being caused by technological change. But equally, this may be because, as in the Director's Law, the middle classes use democracy to redistribute to themselves. Still, the Director's Law is unlikely to explain the inability of the US political system to confront inequality, since the middle classes have largely been losers in the widening inequality trends. Could it be that US democracy is captured? This seems unlikely when looked at from the viewpoint of our typical models of captured democracies. But perhaps there are other ways of thinking about this problem that might relate the increasingly paralyzing gridlock in US politics to capture-related ideas.
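The median-voter mechanism discussed above can be reduced to a deliberately stylised calculation. In the sketch below, the quadratic deadweight-loss term and every number are my own illustrative assumptions, not the Meltzer–Richard model itself; the point is only that the preferred tax rate rises with the gap between mean and median income.

```python
def preferred_tax(income, mean_income, cost=1.0):
    """Preferred flat tax rate of a voter in a stylised Meltzer-Richard
    setting: the voter keeps (1 - t) * income, receives a lump-sum
    transfer t * mean_income, and taxation burns
    (cost / 2) * t**2 * mean_income in deadweight loss.
    The first-order condition gives
    t* = (1 - income / mean_income) / cost, capped to [0, 1].
    """
    t = (1.0 - income / mean_income) / cost
    return min(max(t, 0.0), 1.0)

# The median voter demands more redistribution when the mean rises
# further above the median income, i.e. when inequality is higher.
low_inequality = preferred_tax(income=40_000, mean_income=50_000)
high_inequality = preferred_tax(income=40_000, mean_income=80_000)
print(low_inequality, high_inequality)   # the second rate is higher

# A voter at or above the mean wants no redistribution at all.
print(preferred_tax(income=90_000, mean_income=80_000))
```

Extending the franchise toward the bottom of the distribution lowers the median voter's income relative to the mean, which in this sketch mechanically raises the chosen tax rate; that is the prediction the panel evidence above puts to the test.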
