Who cares how fast GDP grows?

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

I came across an interesting post by Ed Dolan, on what we should do about slowing growth in the U.S. His answer is “Nothing”, and he gives a very capable explanation of why this is the case. His argument is that while GDP and human welfare (the general concept, not the government program) are correlated, once you are very rich the correlation drops enough that it is quite possible to raise human welfare without having GDP go up.

This is a really interesting point, and it relates to the marginal utility of consumption goods (which are goods and services that get counted as part of GDP) as compared to the marginal utility of what I’ll call intangibles. Intangibles are things like good health, or a clean environment, that we might value in and of themselves, but they are not necessarily tied to the production of real goods and services that are counted in GDP.

Very simply, let overall utility be

\displaystyle  U = u(C) + v(H) \ \ \ \ \ (1)

where {C} is consumption of tangible goods and services and {H} is the consumption of intangibles. We’ve got some stock of resources (labor, capital, natural resources, etc.) that we can use to produce things. Consumption goods are {C = xR}, where {x} is the share of the resources we use in producing consumption. It’s a really simple model with only consumption goods, so GDP is just equal to {C}, meaning {Y = xR}.

Intangible goods are {H = (1-x)R}, or they “use” the remaining share of resources. Note that I don’t necessarily mean that we have to use up resources to produce intangibles – you can think of {(1-x)} as the fraction of resources that we idle, or leave pristine, or shut down in order to enjoy better health, a nicer environment, or more free time.

Maximizing utility involves picking the optimal value for {x}, what share of our resources to commit to consumption. Before throwing some math at it, think of {R} as the total potential number of donuts I could produce using all available resources. The trade-off I face is how many donuts to actually produce. I’ll produce some ({C}), because donuts are yummy. But I’ll hold off on producing all the possible donuts because I want to be healthy enough to shoot baskets with my kids in the driveway ({H}). What is the optimal split of {R} into donuts and “health”? And will that split ever change?

The first-order condition here is

\displaystyle  u'(xR) = v'[(1-x)R], \ \ \ \ \ (2)

which just says that the marginal utility of consumption goods should be equal to the marginal utility of intangible goods. If they weren’t equal, then you could fiddle with the value of {x} and get higher overall utility.

What happens as {R} goes up? The marginal utility of both types of goods falls. If I already have lots of consumption goods (donuts, cars, iPhones) then the marginal utility of another one gets small. Similar for intangible goods – if I’ve got great health and lots of beautiful national parks to visit, then it’s hard to feel much better or visit an additional park.

The key is going to be how fast these marginal utilities fall. That is, how quickly does an extra donut get old and boring, versus how quickly better health gets old and boring. We often use log utility to describe consumption, or {u(C) = \ln{C}}, which means that the marginal utility of consumption is {u'(C) = 1/C}, or in terms of resources, {u'(C) = 1/xR}. As Chad Jones will tell you, log utility is “very curved”, meaning that the marginal utility quickly runs down towards zero as you load up on more donuts. [Aside: log utility, though, is less curved than other typical utility functions for consumption, so I’m probably understating how fast marginal utility falls with more consumption].

What’s the utility function for intangible goods? I don’t know that there is any kind of consensus about what this looks like. But let me use a very simple utility function that will demonstrate the logic of not caring if GDP grows. Let’s have {v(H) = \theta H}, so that {v'(H) = \theta}. This function is linear in {H}, so that the marginal utility of intangible goods doesn’t depend on how much {H} you consume – you can never be too healthy, so to speak. The most important part here is that marginal utility falls more slowly for {H} than for consumption goods.

Back to our optimal choice of {x}. Using the assumed utility functions, I get that my first-order condition is

\displaystyle  \frac{1}{xR} = \theta, \ \ \ \ \ (3)

which solves out to

\displaystyle  x = \frac{1}{\theta R}. \ \ \ \ \ (4)

That is, the optimal fraction of resources to spend on consumption goods falls as {R} rises. As we get more resources (labor, capital, technology) we use fewer of them on actually producing consumption goods. The payoff in terms of utility is just too low compared to the payoff in utility from having more intangible goods.

Remember that GDP is just {Y = xR}, and under our optimal assumption for {x} this is just {Y = 1/\theta}. In other words, it would be optimal in this model for GDP to stay constant at {1/\theta}, even as the available resources {R} are increasing. We would willingly sacrifice additional GDP because it only enhances consumption goods without increasing intangibles. No growth in GDP is utility-maximizing.
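If you’d rather not trust my algebra, here’s a quick numerical sketch that grid-searches the optimal {x} for increasing {R}. The value {\theta = 0.5} is an arbitrary illustrative choice; the model predicts {x^{\ast} = 1/\theta R} and GDP pinned at {1/\theta} regardless of {R}.

```python
# Numerical check of the donut model with an arbitrary illustrative theta.
# Utility is U = ln(x*R) + theta*(1-x)*R; the closed form in the text says
# x* = 1/(theta*R), so GDP = x*R should stay pinned at 1/theta as R grows.
import math

theta = 0.5

def optimal_x(R, grid=200_000):
    """Grid-search the utility-maximizing consumption share x in (0, 1)."""
    best_x, best_u = None, -math.inf
    for i in range(1, grid):
        x = i / grid
        u = math.log(x * R) + theta * (1 - x) * R
        if u > best_u:
            best_x, best_u = x, u
    return best_x

for R in (4.0, 8.0, 16.0, 32.0):
    x = optimal_x(R)
    print(f"R = {R:5.1f}: x* = {x:.4f} (theory {1 / (theta * R):.4f}), "
          f"GDP = {x * R:.3f} (theory {1 / theta:.3f})")
```

Run it and the GDP column sits at {1/\theta = 2} for every {R}, even as the share {x} devoted to donuts shrinks.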

By fiddling with the exact utility function for intangibles you could get a different answer. Perhaps GDP optimally rises very slowly (if intangible goods have a declining marginal utility), or GDP optimally falls over time (if intangible goods have an increasing marginal utility as you use them – think of enjoying national parks more if you are healthy enough to hike through them).

The ultimate point of Ed Dolan’s post, and this one, is that there is nothing inherently desirable about rising GDP. It is simply a statistical construct capturing the total value of currently produced goods and services. If we prefer things that are not currently produced goods and services, then who cares if GDP rises or falls?

Something that I didn’t address here is how we adapt to a lower fraction {x}. If {x} falls, this implies that we are idling resources, like labor. If I’m going to consume fewer donuts, I’m going to put some bakers out of business. If you’re lucky, the bakers don’t mind because they would have chosen to go backpacking through Yosemite anyway. If you’re not, then these unemployed bakers are looking for something to do. As usual in these kinds of questions, seeing the different equilibrium outcomes is a lot easier than seeing how to transition from one to the other.

Piketty and Income Shares

Doug Gollin, Oxford University econ. professor, my coauthor, and an amateur ninja (I may have made up that last one), left a great reply to my post on Piketty and Growth Economics.

Let’s start with the first point Doug makes:

But for the record, it is not at all straightforward to read these income shares from national accounts data. The “true” labor and capital shares cannot be easily inferred from macro data – and they are also difficult to pin down in firm-level micro data.

Before we get going, let’s be clear that this is not a criticism about transcription errors or cherry-picking of results, as in Chris Giles‘ recent FT report. For more than you probably care to digest on that subject, see this link which has a nice roundup of the hubbub.

Specifically, Doug has the following concern:

The national income and product accounts for most countries report something called employee compensation. This sounds like labor income, but it leaves out some important forms of labor income, such as the labor income of the self-employed.

This is something that Doug has looked closely at before (see here), and that prior research is pretty clear that there are big adjustments to be made regarding the earnings of the self-employed. Simply put, earnings for the self-employed (which Piketty calls “mixed” income) are reported as capital income in national accounts, but contain both labor income (the implicit wage I pay myself while running my own firm) and capital income (the implicit return I get for having emptied my savings account to start a company). So the national accounts data do not accurately reflect the distribution of income between labor and capital.

Further, the amount of self-employed income as a share of national income tends to shrink as countries develop (think of people moving from farms to factories, and going from self-employed to wage-workers). Over the time frame that Piketty is looking at, there would have been distinct changes (declines, almost certainly) in the share of self-employed income within all of the countries he examines.

Piketty’s strategy with the “mixed” income of self-employed is to split it up into labor and capital income using the same ratio he observes in the reported labor and capital income. So if employee compensation is 2/3 of reported labor and capital income in the national accounts, then as I understand it Piketty assumes that self-employed income is also 2/3 labor income and 1/3 capital income.

Is this a problem? Here’s how it might be. If self-employment income is really fundamentally different (perhaps it is 90% labor and only 10% capital) then the shift over time away from self-employment would have necessarily changed the distribution of income between capital and labor. That is, Piketty could be overstating capital’s share in 1890 because he assumes that 1/3 of self-employment income is capital income, while really only 10% of self-employment income is capital income. Some of the deep declines in the capital share he documents around WWI and WWII may simply reflect the shift of workers out of self-employment (with its incorrectly small labor share) and into wage work (which is accurately measured as labor income). That period is one of rapid industrialization, so presumably the shift from self-employment to wage work would have been quite large.

So one point is that Piketty has possibly overstated capital’s share in the early period (roughly 1870-1910). Whether this is enough to materially change his overall story is unclear to me. You’d need data on self-employment shares and take some stand on how self-employment income is split between labor and capital. A second point is that Doug may have provided part of the explanation for the big drop in capital income around WWI and WWII. It may reflect a structural shift away from self-employment and towards wage work.
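To see the mechanics, here’s a toy calculation (all income figures invented for illustration) of the measured capital share under Piketty’s proportional split of mixed income versus a hypothetical 90/10 labor/capital split:

```python
# Back-of-the-envelope sketch of how the treatment of self-employed ("mixed")
# income moves the measured capital share. All numbers are made up: an
# early economy with lots of self-employment vs. a later one with mostly
# wage work.

def capital_share(wages, cap_income, mixed, mixed_cap_frac=None):
    """Capital share of national income under a given split of mixed income.

    If mixed_cap_frac is None, use Piketty's rule: split mixed income in the
    same proportions as the directly observed labor/capital income.
    """
    if mixed_cap_frac is None:
        mixed_cap_frac = cap_income / (wages + cap_income)
    total = wages + cap_income + mixed
    return (cap_income + mixed_cap_frac * mixed) / total

# Hypothetical 1890-style economy: half of national income is mixed income.
early_piketty = capital_share(wages=35, cap_income=15, mixed=50)
early_90_10 = capital_share(wages=35, cap_income=15, mixed=50, mixed_cap_frac=0.10)
print(f"1890-style, Piketty split: {early_piketty:.3f}")   # 0.300
print(f"1890-style, 90/10 split:   {early_90_10:.3f}")     # 0.200

# Hypothetical modern economy: little mixed income; the split barely matters.
late_piketty = capital_share(wages=65, cap_income=30, mixed=5)
late_90_10 = capital_share(wages=65, cap_income=30, mixed=5, mixed_cap_frac=0.10)
print(f"modern, Piketty split:     {late_piketty:.3f}")
print(f"modern, 90/10 split:       {late_90_10:.3f}")
```

With lots of mixed income the assumed split moves the measured capital share by a full 10 percentage points; once self-employment shrinks, the choice barely matters. That is exactly the channel through which Doug’s concern bites hardest in the early period.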

Doug also notes a concern over going from the functional distribution of income (labor vs. capital shares) to the size distribution of income (incomes of top 1% of individuals). As Doug notes: “Not all capital income accrues to rich people, and not all labor income goes to the poor or the working classes.” He is absolutely correct about that, but I don’t think that Piketty necessarily falls into that trap. The first section of the Piketty book is really about the functional distribution of income: capital’s share of national income. He establishes (perhaps shakily) some stylized facts on this share based on national accounts.

The second section of the Piketty book is about the size distribution of income, where he looks at variation in the earnings of the top 5% (or 1% or 10%) using individual tax records and surveys from various countries. Piketty is not confounding the capital share with the share going to the top 5%. He has separate sources for those two series. He then further digs into the tax records to establish where those top 5% are getting their income. Long story short, the top 5% are getting an increasing share of their income from wages (or what is reported as wages, at least, to tax authorities) over time. But it is also true that the top 5% earn almost 100% of the total reported capital earnings in the tax data (or what is reported as capital earnings to tax authorities). This is where Piketty then draws a link between capital’s share of national income and the income share of the top 5% – given that he observes that historically nearly all of capital income is earned by the top 5% or so, then an increase in capital’s share of income will lead to a larger share of income for that 5%.

(Mis)Allocation and Growth Reading List

Back with another set of readings for my grad class this fall. As before, a PDF and BibTeX file of the papers I teach in this area are located under the Papers page.

One of the more active areas in growth right now is studying the allocation of factors of production. Think of productivity growth being decomposed into two sources: “within” productivity growth as each individual firm becomes more efficient, and “between” productivity growth as we shift factors of production from low-productivity firms into high-productivity ones.

A first question is: how much observed growth comes from “within” versus “between” sources? This requires firm-level data on productivity. That is not easy to obtain, because productivity is not something one can count like workers or machines; we have to back it out from estimates of firms’ production functions. So a lot of growth economists now find themselves hanging around the offices of their industrial organization colleagues, trying to look cool, and hoping to bum some data or get some help with Wooldridge’s (2009) technique for estimating firm-level production functions. Once you’ve got these estimates, doing the “within” and “between” calculations is relatively straightforward.
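As a concrete illustration of what “within” and “between” mean, here’s a toy decomposition with two hypothetical firms (all numbers invented). Aggregate productivity is the employment-share-weighted average of firm productivity, so its change splits cleanly into a within term (share-weighted productivity changes) and a between term (productivity-weighted share shifts):

```python
# Toy "within vs between" decomposition of aggregate productivity growth.
# Each tuple is (firm productivity, employment share) in a given period.
p0 = [(1.0, 0.5), (3.0, 0.5)]  # period 0: shares split evenly
p1 = [(1.1, 0.3), (3.3, 0.7)]  # period 1: labor shifts to the productive firm

agg0 = sum(q * s for q, s in p0)
agg1 = sum(q * s for q, s in p1)

# Within: initial-share-weighted productivity growth at each firm.
within = sum(s0 * (q1 - q0) for (q0, s0), (q1, s1) in zip(p0, p1))
# Between: end-of-period-productivity-weighted shifts in employment shares.
between = sum(q1 * (s1 - s0) for (q0, s0), (q1, s1) in zip(p0, p1))

print(f"total growth: {agg1 - agg0:.3f}")  # 0.640
print(f"within:       {within:.3f}")       # 0.200
print(f"between:      {between:.3f}")      # 0.440
```

In this made-up example most of the aggregate gain comes from reallocating labor toward the high-productivity firm, not from either firm getting better, which is exactly the “between” margin the literature measures.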

A different question is: how much potential for “between” productivity growth is there? In other words, how much higher would productivity be if I could rearrange factors of production until the marginal revenue product was equal across all firms? To answer these kinds of questions, you have to actually provide some kind of model of firm behavior so you can figure out how output will respond at each different firm when you start messing around.

For a really simple example, let a firm’s production function be {Y = A L}, where {A} is productivity and {L} is labor. The firm has some market power, and the inverse demand curve is {P = Y^{-\epsilon}}, which says that if the firm produces more {Y}, the price it can charge for that output must fall. {\epsilon} is a measure of how much market power the firm has. If {\epsilon = 0}, then {P = 1} and the firm is a price-taker. As {\epsilon} goes to one, its market power gets stronger. The firm hires workers at the wage {w}.

Growth economists’ favorite salad: the wedge

Profits for the firm are {\pi = (1+\tau)P Y - wL}. This extra term {\tau} is often called a wedge. It acts like a subsidy (if {\tau>0}) or a tax (if {\tau<0}) facing the firm, although in most applications the wedge is not associated with any specific tax or subsidy. It is just a stand-in for any kind of additional markup (or markdown) a firm can charge for its product. If I maximize profits for this firm and solve for its choice of labor, I get

\displaystyle L^{\ast} = \left(\frac{(1+\tau)(1-\epsilon)A^{1-\epsilon}}{w} \right)^{1/\epsilon}. \ \ \ \ \ (1)

As you’d expect, if productivity {A} goes up, the firm will be larger. However, note that if the wedge is positive, then this expands the firm relative to how big it would be if {\tau = 0}. The wedge is acting like a shift up in the demand curve, and so the firm produces more, which requires it to hire more workers. If the wedge is negative, then this is like a shift down in the demand curve, and the firm will be smaller. The wedge means that firms can be large even if they are not productive, or small even if they are productive.

What papers then do is to remove the wedge from each firm, and recalculate the level of {L^{\ast}} for each firm. Once you know that, roll up the output produced across all firms to find out aggregate production without the wedges. Compare this to the observed output level (i.e. with wedges). This tells you how much higher output could be if these wedges didn’t exist. This is the potential “between” productivity growth in a country. And this potential between productivity growth is intriguing, because it doesn’t necessarily mean I have to adopt a new technology or acquire new capital or workers, I just need to reshuffle the capital and workers I have to more efficient firms.
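Here is that exercise in miniature, using the toy model above with three hypothetical firms (made-up {A_i} and {\tau_i}) and a fixed labor supply. The wage is solved to clear the labor market with and without the wedges, and aggregate real revenue {\sum_i P_i Y_i = \sum_i (A_i L_i)^{1-\epsilon}} is compared across the two allocations:

```python
# Sketch of a wedge-removal ("between" reallocation) calculation in the
# simple model from the text: Y_i = A_i*L_i, inverse demand P_i = Y_i^(-eps),
# profits pi_i = (1+tau_i)*P_i*Y_i - w*L_i. Firm data are invented.
eps = 0.2       # market-power parameter
L_bar = 100.0   # fixed aggregate labor supply (assumption)

firms = [        # (productivity A_i, wedge tau_i) -- hypothetical
    (1.0, 0.30), # low-productivity firm propped up by a subsidy-like wedge
    (2.0, 0.00),
    (3.0, -0.20),# high-productivity firm held back by a tax-like wedge
]

def labor_demand(w, A, tau):
    """Equation (1) in the text: L* = [(1+tau)(1-eps)A^(1-eps)/w]^(1/eps)."""
    return ((1 + tau) * (1 - eps) * A ** (1 - eps) / w) ** (1 / eps)

def clearing_wage(wedges_on):
    """Bisect for the wage at which total labor demand equals L_bar."""
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        w = 0.5 * (lo + hi)
        demand = sum(labor_demand(w, A, tau if wedges_on else 0.0)
                     for A, tau in firms)
        if demand > L_bar:
            lo = w  # demand too high -> raise the wage
        else:
            hi = w
    return 0.5 * (lo + hi)

def aggregate_output(wedges_on):
    """Aggregate real revenue: sum_i P_i*Y_i = sum_i (A_i*L_i)^(1-eps)."""
    w = clearing_wage(wedges_on)
    return sum((A * labor_demand(w, A, tau if wedges_on else 0.0)) ** (1 - eps)
               for A, tau in firms)

Y_with = aggregate_output(True)
Y_without = aggregate_output(False)
print(f"output with wedges:       {Y_with:.3f}")
print(f"output without wedges:    {Y_without:.3f}")
print(f"potential 'between' gain: {100 * (Y_without / Y_with - 1):.2f}%")
```

Because the no-wedge allocation equalizes marginal revenue products, the same 100 workers produce strictly more output once the wedges are stripped out; that percentage gain is exactly the “potential between growth” these papers report.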

Looking at these calculations across countries, you can ask whether India is poor relative to the U.S. because it has bigger “wedges” than the U.S., for example. The implication of most of the papers on the reading list is that yes, the wedges in poor countries are bigger/worse. That is, in India and other poor countries there are lots of frictions keeping the marginal revenue product of labor (or capital) from being equalized across firms, and so lots of scope for “between” productivity growth. Those frictions cost India a lot of foregone output. In the U.S., the frictions are smaller (but certainly still exist).

A newer wave of research in this area involves more serious investigations of what these wedges actually represent. One possibility is that high-productivity firms would like to expand, but are limited in their ability to do so by an inability to find financing. In this case, financial sector sophistication is a key to improving allocations. Another likely suspect is the regulatory regime: entry costs, exit costs, and size-dependent rules for firms, for example. A nice concept for future research (hint, hint, grad students) is to measure the impact of particular reforms on the measure of potential “between” growth. If a reform effectively opens up entry, allows easier exit, or eliminates state-owned firms, then the scope for “between” growth should fall over time as the economy gets more efficient and the wedges get smaller.

Is Robert Gordon Right about U.S. Growth?

Robert Gordon has a recent set of papers (2012, 2014) claiming that the growth rate of GDP per capita in the U.S. is going to slow down from roughly 2% per year to 0.9% per year, starting in 2007. The argument is based on a combination of factors including: an aging population, a slowdown in educational attainment, and debt levels. There are possible counter-arguments to be made on each of his individual points, but that’s not really what I’m interested in here. What I’d like to know is this: when will I know if Robert Gordon is right?

How do I figure this out? I need to get a little wonky about statistical testing. The null hypothesis I am testing is that the growth rate is 2% per year. But I don’t want to do a typical kind of statistical test. The question I want to ask is this: how long will it be before I have enough evidence to reject this hypothesis? I need to do a power calculation.

IF the null hypothesis is true, then log GDP per capita evolves over time according to

\displaystyle  \ln{y}_t = 0.02t + e_t \ \ \ \ \ (1)

where {e_t} is some noise term that captures the fact that GDP per capita will not necessarily be exactly on the trend line. For the time series fans out there, I’m completely ignoring any kind of auto-correlation here. Why? Because I did not consume enough Diet Coke this morning to re-learn time series econometrics.

IF Robert Gordon is right, then log GDP per capita evolves over time according to

\displaystyle  \ln{y}_t = 0.009t + e_t. \ \ \ \ \ (2)

As {t} goes up and up into the future in Gordon’s world, log GDP per capita will fall farther and farther behind the 2% per year trend that I am using as the null hypothesis. Eventually, it will fall so far behind that I will have to reject the null.

When will this happen? That depends on how quickly I am willing to reject the null. If I have an itchy trigger finger, then the first time that I see GDP per capita below the 2% trend line, I’ll dismiss the 2% null hypothesis. But this is probably too hasty; remember the {e_t} term. Even if growth is really 2% per year, there will be years where GDP per capita falls below the 2% trend line. A recession, perhaps. So I don’t want to reject the null too easily.

I’ll assume a standard cut-off value for rejecting the null, 5%. If, statistically, there was only a 5% chance of observing the GDP per capita level that I did even though the 2% null is true, then I’ll reject the null. Basically, if I see a value for GDP that is a huge outlier given a 2% growth assumption, then I’ll reject that growth is equal to 2% per year. Note that the 5% is the chance that I reject the null even though it is true – it’s my willingness to accept Type I error.

Now, I can look at the data on GDP per capita over time in the U.S., and get some idea of what these cut-off levels of GDP per capita are.

\displaystyle  Lower_t = 0.02t - c SD(e_t) \ \ \ \ \ (3)

\displaystyle  Upper_t = 0.02t + c SD(e_t). \ \ \ \ \ (4)

The {Lower_t} bound is the one I’m really worried about here. This lower bound is basically what I’d expect GDP per capita to be at trend ({0.02t}), minus some adjustment for the possibility of a negative shock from {e_t}. How big is this adjustment? That depends on {c}, a critical value that is tied to the 5% probability of Type I error. The {SD(e_t)} tells me how variable the error term is; if shocks are really large, then it will be harder to reject the null when we see a low value of GDP per capita. The value of {c} is roughly 2 (1.96, actually), and from the data on GDP per capita the {SD(e_t) = 0.032}. If you multiply these two together, this tells us that GDP per capita tends to be within +/-6.4% of trend.

Ok, to answer my question, I need to figure out the probability that GDP per capita given Gordon’s assumption about growth falls below this lower limit:

\displaystyle  P(\ln{y}_t<Lower_t|\text{Gordon is right}). \ \ \ \ \ (5)

This is basically a power calculation. For any given year, I can calculate the probability that I’d reject the null (because {\ln{y}_t<Lower_t}) given that the null is false (and Gordon is right).

Using Gordon’s assumption from above, I therefore want

\displaystyle  P[0.009t + e_t < 0.02t - c SD(e_t) ], \ \ \ \ \ (6)

which rearranges to

\displaystyle  P[e_t < (0.02 - 0.009)t - c SD(e_t)]. \ \ \ \ \ (7)

Given that I know something about the distribution of the {e_t} terms, this probability is something I can figure out. As I mentioned above, the {SD(e_t) = 0.032}. The mean of {e_t} is just zero, by construction. Finally, I’ll assume that {e_t} is Normally distributed (probably not right, but see the Diet Coke comment from above). The value of {c = 2}, given my choice of a 5% chance of Type I error.

For any given year {t} I can back out this probability from a Normal distribution, using Stata (or Excel, or the appendix of your old stats book). So what do I get? Here are probabilities that I will reject the null of 2% growth for some selected years:

  • 2009: 4.9%
  • 2010: 9.5%
  • 2014: 52.5%
  • 2018: 92.5%

You can see that early on, there was almost no way that I’d see data allowing me to reject the null of 2% growth. It’s only this year, 2014, that I even have a 50/50 shot of observing GDP per capita low enough to reject the null. But in just another 4 years, I’ve got a 93% chance of rejecting the null if it is actually false.
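If you want to replicate something close to these numbers, here is a short Python sketch using the parameters above. I am assuming {t} counts years since 2008 (so {t=1} in 2009), which roughly reproduces the figures in the post; the small remaining differences are presumably rounding in my parameter guesses.

```python
# Power calculation sketch for the Gordon hypothesis, with parameters from
# the text: null growth 2%/yr, alternative 0.9%/yr, SD(e_t) = 0.032, and a
# 5% critical value c = 1.96. The base year t = 0 in 2008 is my assumption.
from statistics import NormalDist

g_null, g_alt = 0.02, 0.009
sd_e, c = 0.032, 1.96

def reject_prob(year, g_null=g_null, g_alt=g_alt):
    """P(ln y_t < Lower_t | true growth g_alt) = Phi(((g_null-g_alt)*t - c*sd)/sd)."""
    t = year - 2008
    return NormalDist().cdf(((g_null - g_alt) * t - c * sd_e) / sd_e)

for year in (2009, 2010, 2014, 2018):
    print(f"{year}: {100 * reject_prob(year):.1f}%")

# Re-running with the 1.6% null discussed below takes one keyword change:
for year in (2014, 2018, 2023):
    print(f"{year} (1.6% null): {100 * reject_prob(year, g_null=0.016):.1f}%")
```

The convenient thing about writing it this way is that testing any other null (1.6%, 1.2%, and so on) is just a change of one argument.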

Be very careful here. These are not probabilities that Gordon is right. These are probabilities that I will be able to detect IF Gordon is right. So I have a 50/50 shot this year of getting a low enough GDP per capita number that it behooves me to reject the 2% growth rate. If GDP per capita is not low enough this year to reject 2% growth, then that just means I need to wait another year before I can test the hypothesis. By 2018, though, I have a 92.5% chance of getting enough information to reject 2% growth if 2% growth is actually wrong. By then, IF Gordon is correct, low growth rates will have made it almost statistically impossible to hold onto the 2% growth rate hypothesis.

These calculations are sensitive to how big a difference in growth rates I’m looking at. 2% versus 0.9% is pretty big. And 2% is roughly the average growth rate of GDP per capita from 1950 to 2007. But growth in GDP per capita has been lower than 2% for about two decades now. From 1990 to 2007, the average growth rate was only about 1.6% per year. If I instead use a null of 1.6%, then it’s going to take me longer to get enough data to possibly reject that null.

Using the null of 1.6% growth (but leaving all else the same), here are the probabilities:

  • 2009: 3.7%
  • 2010: 5.9%
  • 2014: 24.6%
  • 2018: 57.4%
  • 2023: 90.0%

Now it’s not until 2023, another 10 years from now, that I will be fairly sure to have enough data to detect whether 1.6% growth is wrong. Prior to that, it’s unlikely that we’ll experience a GDP per capita number low enough to compel me to reject the 1.6% growth null hypothesis.

So IF 2% growth is wrong, then in another 4-ish years I should be able to tell. By 2018, I’ll either see GDP per capita levels definitively below what I’d expect given 2% growth, or I won’t. If I do see them that definitively low, then it’ll be time to reject the hypothesis of 2% growth. If I don’t see GDP per capita that low, this doesn’t mean that 2% growth is right. It just means that I cannot reject 2% growth. Until then, I don’t know.

If I reject 2% growth in 2018, then is Gordon right that growth is 0.9% per year? Kind of. He’ll be right that 2% is wrong. But it won’t be until about 2023 that I can reject 1.6% growth. And 2043 before I can reject 1.2% growth. And well after 2100 before I can reject 1.0% growth. So I really won’t be able to tell if Gordon is right about 0.9% growth until next century.

Claims that growth in GDP per capita is less than 2% per year should be relatively easy to validate in the next decade. But finding enough evidence to nail down that growth has fallen to 0.9% is actually going to take a long, long time. So the answer to my question is that it depends on what I mean by “Gordon is right”. Very soon we can establish whether he is right about growth being less than 2%. But it’ll take a century to know if he is right about the 0.9% growth rate.

Maybe the biggest lesson here is that statistics is annoying.

Agriculture and Growth Reading List

I will be teaching graduate growth and development this fall, and I’m trying to get a head start on my reading list. Today’s effort is looking at the relationship of agriculture to overall economic development.

I think a really convenient way to see the main issues in this literature is the following figure. It plots output per worker (in logs) in both the agricultural and non-agricultural sectors against the share of labor in non-agriculture. Moving to the right, then, is associated with industrialization and/or structural transformation, whatever you want to call it. The data is from Francesco Caselli’s handbook chapter (data here).

Ag and Non-ag Output per Worker
The figure contains within it several important stylized facts that motivate current research. First, think about how to explain the variance in aggregate output per worker across countries. This depends on the variance in agricultural output per worker and the variance in non-agricultural output per worker. Agricultural output per worker has much higher variance: its range along the y-axis is much bigger, from 6 (roughly $403 per worker) to almost 11 (nearly $60,000 per worker). The variance in non-agricultural output per worker is much smaller, ranging only from about 8.5 ($4,900) to about 11 (again, $60,000). A huge part of the variance in overall output per worker across countries is driven by differences in how productive agricultural workers are.
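As a sanity check on those dollar figures, exponentiating the log values on the y-axis recovers output per worker in dollars:

```python
# The figure's y-axis is in natural logs, so math.exp converts the values
# read off the axis back into dollars per worker.
import math

for label, logval in [("ag, bottom", 6.0), ("non-ag, bottom", 8.5), ("top", 11.0)]:
    print(f"{label}: exp({logval}) ≈ ${math.exp(logval):,.0f} per worker")
# ag, bottom: ≈ $403; non-ag, bottom: ≈ $4,915; top: ≈ $59,874
```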

The second fact that jumps out is the tight correlation of the non-agricultural labor share and agricultural output per worker. Countries that have the lowest agricultural output per worker also tend to have the most people in that sector. The poorest countries – Nepal (NPL), Uganda (UGA), and Mozambique (MOZ) for example – are poor because their agricultural workers produce very little, and most of their workers are agricultural workers. It’s a double-whammy.

Third, there is a gap in output per worker between agricultural and non-agricultural workers, but this gap shrinks as the non-agricultural labor share rises. For the poorest countries, the gap implies that a non-agricultural worker produces something like 30 or 40 times more than an agricultural worker (and there are even more extreme examples, like Nepal, where the ratio is 130-1). But as the non-agricultural share of workers rises, these gaps fall to something like 1.5 to 1.

In thinking about why some countries are rich and some are poor, these stylized facts are of first-order importance. Standard one-sector growth theory sweeps all this under the rug. The papers I teach in this area take these facts as a jumping-off point. They generally work with a model that has a low income elasticity for agricultural goods (Engel’s Law), so that as productivity goes up in either sector, labor is pushed/pulled out of agriculture. Other papers take off from this to consider the productivity gaps, trying to account for them more accurately, to provide some explanation for their existence, and/or to explain why they disappear as countries industrialize.

The reading list itself can be found on the “Papers” page on the site, below the introductory papers. It includes both a pdf and a BibTeX file.

Declining U.S. Dynamism?

There’s a research report from Brookings that’s making its way around the inter-tubes. The title is “Declining Business Dynamism in the United States“, by Ian Hathaway and Robert Litan. The upshot is that business dynamism is declining. The firm entry rate (i.e. the number of firms less than one year old as a proportion of all firms) was about 15% in 1978, about 11% in 2006, and is currently less than 8%. This trend is taken as a troubling sign of reduced dynamism. What exactly “dynamism” is supposed to mean isn’t clear, but on the assumption that it is synonymous with firm entry, then yes, it is declining. There is some hand-waving about how this portends lower growth in the future.

But it isn’t immediately obvious that the declining rate of entry is a problem. I suspect that this is a measure that seems like it would be better if it went up, but that isn’t necessarily true. Imagine if the entry rate went to 100%: would it be good that literally every firm in the U.S. was less than one year old?

Their appendix figure A1 shows the actual number of firm entries per year. This held at roughly 500,000 new firms per year from 1978 to 2007, and then there is a deep plunge to 400,000 by 2010 before recovering slightly in 2011. So it is not that the U.S. is coming up with fewer new companies. Even in the midst of the worst economic downturn since the Great Depression (or 1981, depending on what rows your boat) the U.S. added 400,000 new companies. At the same time, the absolute number of exits has increased from about 300,000 a year in the early 1980s to about 450,000 over the last few years. Because we have more firms now (remember all those entries?), the exit rate has held steady at about 9% of firms exiting each year.

So, does this imply some kind of loss of dynamism in the U.S. economy? As I said, it’s not obvious. One of the things I think is a pretty robust empirical fact is that productivity (labor productivity or total factor productivity) varies by a lot across firms in the U.S. and other countries (see Syverson, 2011). The stylized fact I have in my head is that the 90th percentile manufacturing firm in the U.S. is roughly 2 times more productive than the 10th percentile firm. A large portion of productivity growth comes from reallocation of inputs from low-productivity firms to high-productivity firms. Some of this reallocation takes place by having low-productivity firms exit and having high-productivity firms enter. But I don’t know that the research is conclusive that entry is the primary source of this reallocation – shifting inputs to existing high-productivity firms is a big part of productivity gains.

Foster, Haltiwanger, and Syverson (2008) is one of the best studies I know of these reallocation effects because they are able to distinguish physical productivity (number of widgets produced) from revenue productivity (number of dollars produced). What they find is that entry alone accounts for 14% of revenue productivity growth from 1977 to 1997. Entry accounts for 24% of physical productivity growth in the same period. That is a sizeable portion, but reallocation between existing firms and productivity growth within existing firms account for the remainder of observed productivity growth.

However, FHS are looking only at very specific industries: coffee, boxes, bread, gasoline, sugar, plywood, and a few others. Basically, industries with very homogeneous outputs that can be measured easily (e.g. gallons of gasoline). Their results may under- or over-state the true effect of entry on overall productivity growth in the U.S. As they note, though, their study actually shows much higher effects of entry on productivity than prior work because of their separate data on revenue and physical productivity. Their firms also have entry and exit rates (22% and 19%, respectively) well above the average for the U.S. as a whole, and if entry is very important for dynamism, then shouldn’t these industries show the strongest possible role of entry?

Even if entry does account for a sizeable portion of productivity growth, the Brookings report isn’t saying that entry has fallen to zero. If the entry and exit rate flatten out at, say, 9% each, then this means that every year 9% of all firms go out of business, and that number is exactly replaced by an entirely new group of firms. The total number of firms will remain constant, but each firm would have an average lifespan of about 11-12 years. Do we care if the total number of firms stays constant, so long as there is turnover?
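The lifespan figure comes straight out of the turnover rate. If every firm faces a constant 9% chance of exiting each year (a geometric-survival approximation), expected lifespan is roughly one over the exit rate:

```python
# Sketch of the steady-state turnover claim: with entry and exit rates
# both fixed at 9%, the firm count is constant and the expected lifespan
# of a firm is roughly 1/exit_rate under a constant per-year exit hazard.
exit_rate = 0.09
expected_lifespan = 1 / exit_rate
print(f"expected lifespan: {expected_lifespan:.1f} years")  # ~11.1
```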

There isn’t any reason to believe that a growing number of firms is necessarily good, especially as we move more and more to producing goods and services that scale easily. To be more clear, it makes sense to believe that the number of cement firms or bakeries increases proportionately with population, as these goods don’t transport well and so meeting demand requires new locations (and possibly new firms). But just because we have more people doesn’t mean we need another Microsoft (you could argue we don’t need the one we have already). So the fact that the growth rate of the number of firms is slowing down doesn’t necessarily bother me. It may just indicate a change in the nature of products we produce, or represent better screening by lenders/backers/VC firms.

I could of course be horribly wrong, and we’ll all be living in the woods in three years. I, at this point, am not planning on buying extra canned goods.

Persistence in Economic Development


Last weekend I attended a conference at Brown University on “Deep-Rooted Factors in Economic Development“. The key theme that came out of that weekend was persistence. Nearly all of the papers gave evidence that economic shocks or initial differences in economic outcomes dissipate very, very slowly, if at all.

Oded Galor and Omer Ozak presented a paper on the origin of time preferences (e.g. patience). They find surprisingly strong empirical evidence linking inherent agricultural yields to survey responses regarding patience, with the argument being that places with high returns to settled agriculture (which requires patience) selected for higher patience populations over time. Stelios Michalopoulos, Louis Putterman, and David Weil presented evidence that African individuals descended from agriculturalist societies are more economically successful than those descended from pastoralists, even if we compare people who no longer reside in their traditional homelands. Both papers show the very long reach of history on current economic outcomes.

Related to this were a number of papers that looked at how economies responded to shocks. Eric Chaney and Richard Hornbeck showed that following the expulsion of the Moriscos from Spain in 1609, the population and income per capita in the heavily-affected regions did not adjust immediately (i.e. in a decade) but over a much longer time frame (i.e. a century). Felipe Valencia-Caicedo provided evidence on the persistent effect of Jesuit missions in South America on human capital formation. Enrico Spolaore and Romain Wacziarg looked empirically at how long it took the “idea” of low fertility to spread from France across Europe. Melissa Dell presented work on how areas subject to insurgency during the Mexican Revolution (early 20th century) are still economically less developed than other areas of Mexico.

The fact that very early agricultural conditions are still influential in comparative development, or that economic shocks linger for centuries, is very hard to reconcile with how we typically think about economic growth. Even if one country/area starts with bad conditions, or is subject to a bad shock, all of our intuition is that they should eventually be able to catch back up. Physical and human capital can be accumulated through savings or education, and this accumulation should actually speed up the farther away from its potential that a country finds itself. Technological changes are, as we like to say, non-rival and non-excludable, so that countries can copy innovations relatively easily. While there is certainly a lag involved in either saving up new capital or adopting new technologies, we’re talking about lags measured in years, not centuries.

For technology, or productivity in general, it seems especially hard to understand long-run persistence. Why is it that poor places, or people, do not just adopt the higher-productivity techniques/ideas/processes that they see around them? My reaction to the papers I saw at Brown was that we are probably too cavalier in assuming that these ideas can be costlessly copied. We are probably better off thinking of ideas as embodied in people, like cultural traits. Ricardo Hausmann just had an interesting post on this; he thinks knowledge is generally tacit, not explicit. Therefore it takes some kind of costly person-to-person transmission to move good ideas or techniques across populations.

With costly transmission, persistence makes more sense. We’re seeing the outcome of a slow diffusion of new ideas (on patience, fertility, the value of education, or agricultural techniques) through populations. That slow diffusion comes about because these ideas are transmitted as tacit knowledge from parents to children, masters to apprentices, teachers to students, or old employees to new employees. If a bad shock wipes out tacit knowledge (say by expelling experienced workers or changing the climate), then it isn’t necessarily true that an economy would eventually return back to its pre-shock equilibrium. Once the tacit knowledge is gone, it’s gone.

This is all telling me that I need to think harder about diffusion processes. Is there some kind of transmission of traits conducive to growth going on, either genetically (like in Galor and Moav, 2002) or culturally (like in Doepke and Zilibotti, 2008)? Are language differences and/or genetic differences the key to measuring the frictions in the diffusion process (like in Spolaore and Wacziarg, 2009)?

The reading list just keeps growing.