# What does Real GDP Measure?

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

Nearly all cross-country work on growth and development uses, if only for motivation, Penn World Table (PWT) estimates of real GDP for countries. And the PWT generates a single measure of “real GDP” for each country. How do they do this? Before I answer, let me say that much of what I’m going to say is said more thoroughly in Deaton and Heston (2010). So check that out if you’re into cross-national real GDP comparisons.

To start, let’s simplify and think just about two countries, A and B. To compare real GDP in the two countries, we’d want to value the quantities of goods they produce at some common set of prices. So say phones are $50 and haircuts are $10; then for each country, multiply the quantity of phones by 50, multiply the quantity of haircuts by 10, and add them up. But what are the right prices to use? Why 50 and 10? Why not 60 and 5? You can imagine that we want the prices we use to be somewhat meaningful, and at least related to the observed prices in countries.

So here’s where it gets weird. We could say, whatever, let’s just use the prices from country B. I just need to pick one set of prices, right? But if you measure country A’s GDP using country B’s prices, then country A will look relatively rich compared to country B. This doesn’t mean it makes country A look absolutely richer than country B, just that country A now looks better in comparison. The same thing happens if you flip them around: if you measure country B’s GDP using country A’s prices, then country B will look relatively rich compared to country A. This isn’t a mathematical certainty, but reflects what actually happens if you use the prices underlying the PWT.

Let’s take the U.S. and Nigeria as an example. If we measure real GDP in both the US and Nigeria using Nigerian prices, then the US will appear to have an incredibly large lead over Nigeria in GDP per capita. If we measure real GDP in both the US and Nigeria using US prices, then the gap will appear smaller as this will make Nigeria look particularly good.

This doesn’t necessarily have to happen; it’s not some mathematical rule. But the data underlying the Penn World Tables show that this is the case almost universally. So what is going on? It means that each country has (relatively) high prices for the goods it has little of, and (relatively) low prices for the goods it has a lot of.

It’s easiest to see this in an example. So let the US produce ${Q^{US}_{phones} = 100}$ and ${Q^{US}_{haircuts} = 10}$. The US produces a lot of phones relative to haircuts. And in the US, ${P^{US}_{phones} = 10}$ and ${P^{US}_{haircuts} = 10}$, or haircuts and phones cost the same. [No, this doesn’t have to be a realistic relative price for this to work]. At US prices, real GDP in the US is

$\displaystyle GDP^{US} = Q^{US}_{phones} P^{US}_{phones} + Q^{US}_{haircuts} P^{US}_{haircuts} = 100 \times 10 + 10 \times 10 = 1100. \ \ \ \ \ (1)$

In Nigeria, we have ${Q^{N}_{phones} = 10}$ and ${Q^{N}_{haircuts} = 100}$, or Nigeria has very few phones, but lots of haircuts. And the prices in Nigeria reflect this, with ${P^{N}_{phones} = 100}$ and ${P^{N}_{haircuts} = 10}$. At Nigeria’s prices, real GDP in Nigeria is

$\displaystyle GDP^{N} = Q^{N}_{phones} P^{N}_{phones} + Q^{N}_{haircuts} P^{N}_{haircuts} = 10 \times 100 + 100 \times 10 = 2000. \ \ \ \ \ (2)$

Now, those two numbers are not comparable because they use different absolute prices to value the goods. To do a fair comparison of output in the two countries, we have to use the same prices.

Let’s value Nigeria’s output using the US prices

$\displaystyle GDP^{N}_{P-US} = Q^{N}_{phones} P^{US}_{phones} + Q^{N}_{haircuts} P^{US}_{haircuts} = 10 \times 10 + 100 \times 10 = 1100. \ \ \ \ \ (3)$

So using US prices, Nigeria looks really good. Their GDP is 1100, exactly equal to the US. They achieve this with lots of haircuts and few phones, so utility could be different in the two places, but their measured real GDP is as high as the US.

But we could equally argue that we should use Nigerian prices to value GDP in both countries. So for the US we get

$\displaystyle GDP^{US}_{P-N} = Q^{US}_{phones} P^{N}_{phones} + Q^{US}_{haircuts} P^{N}_{haircuts} = 100 \times 100 + 10 \times 10 = 10100. \ \ \ \ \ (4)$

The US now has GDP of 10,100, while Nigeria (at its own prices) only has a GDP of 2000. The US is roughly 5 times richer than Nigeria, when valued at Nigerian prices. Why? Because the US produces a lot of what Nigerians find expensive (phones), and little of what they don’t (haircuts).
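If you want to check the arithmetic, the whole two-good example fits in a few lines of Python. All quantities and prices are the made-up numbers from above, not real data.

```python
# The toy two-good example: value each country's output at each
# country's prices, matching equations (1)-(4) in the text.

quantities = {
    "US":      {"phones": 100, "haircuts": 10},
    "Nigeria": {"phones": 10,  "haircuts": 100},
}
prices = {
    "US":      {"phones": 10,  "haircuts": 10},
    "Nigeria": {"phones": 100, "haircuts": 10},
}

def gdp(q_country, p_country):
    """Value q_country's output at p_country's prices."""
    return sum(quantities[q_country][good] * prices[p_country][good]
               for good in quantities[q_country])

print(gdp("US", "US"))            # 1100, equation (1)
print(gdp("Nigeria", "Nigeria"))  # 2000, equation (2)
print(gdp("Nigeria", "US"))       # 1100, equation (3)
print(gdp("US", "Nigeria"))       # 10100, equation (4)
```

Same quantities, same goods; the entire gap between "Nigeria equals the US" and "the US is 5 times richer" comes from which price vector you pick.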

Which comparison is right? Neither. There is nothing that says we should use the US prices or the Nigerian prices. For real GDP we simply need to pick some set of prices and use them consistently across all countries. So much of the work in the Penn World Tables is to come up with a common price index. And the nature of this singular set of prices will matter a lot for real GDP comparisons. If the PWT uses prices that look a lot like US prices, then this will make Nigeria (and other developing countries) look relatively well off compared to rich countries. But if the PWT used prices that look like Nigerian prices, then this would exaggerate the gap.

In practice, what do they do? They try to construct some kind of weighted average of the price of each good across all countries. The weights in the PWT are calculated using what is called the Geary-Khamis method, which essentially weights the prices from different countries by their share of the total quantity of that good. For phones, the weight for the U.S. is ${100/(100+10) = 0.91}$ because they produce/use 91% of all the phones. For haircuts, the weight for the U.S. is ${10/(100+10) = 0.09}$ because they produce/use about 9% of all haircuts.

Now in my simple example the weights are basically symmetric, because the US has most of the phones, and Nigeria has most of the haircuts. But in the real data, the US has far more phones and more haircuts than Nigeria. So in practice in the PWT, the weights are very large on U.S. prices, and very small on Nigerian prices. When they do these calculations across all countries, the weights on the US, Western Europe, and Japan dominate because they consume most of the stuff out there in the world. So the prices used by the PWT are really similar to those of a relatively rich Western nation [People have argued that the prices roughly correspond to Italy’s].
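Here's a stripped-down sketch of that weighting scheme, using the toy numbers from the example. To be clear, this is a simplification: the actual Geary-Khamis procedure solves a fixed point in PPPs and international prices, while here I just take a straight quantity-share weighted average of each good's price.

```python
# Simplified quantity-weighted "world price" idea, toy numbers only.
Q = {"US": {"phones": 100, "haircuts": 10},
     "Nigeria": {"phones": 10, "haircuts": 100}}
P = {"US": {"phones": 10, "haircuts": 10},
     "Nigeria": {"phones": 100, "haircuts": 10}}

goods = ["phones", "haircuts"]
countries = ["US", "Nigeria"]

# "World" price of each good: country prices weighted by each
# country's share of the world quantity of that good.
world_price = {}
for g in goods:
    total_q = sum(Q[c][g] for c in countries)
    world_price[g] = sum((Q[c][g] / total_q) * P[c][g] for c in countries)

print(world_price)  # phones ~ 18.2, haircuts ~ 10

# Real GDP of each country at the common world prices.
for c in countries:
    print(c, round(sum(Q[c][g] * world_price[g] for g in goods), 1))
# US ~ 1918.2, Nigeria ~ 1181.8: the gap sits between the two
# single-country answers, but closer to the US-price answer because
# the US dominates the phone weights.
```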

Which all means that every country in the PWT is getting valued at rich country prices. As we saw above, this inflates the real GDP of very poor countries, and makes them look “good” compared to rich countries. That is, the gap between the U.S. and Nigeria is much smaller using rich country (e.g. US) prices than Nigerian prices. So the PWT overall makes poor countries look very good. The true gaps in real GDP are likely larger (much larger?) than what the PWT captures.

This is not some kind of deliberate subterfuge by the PWT. “It does what it says on the tin” is a phrase that comes to mind. But that doesn’t mean it has some cosmic truth to it. The PWT isn’t doing anything wrong, but they are running up against the real fundamental problem: there is no set of prices that gives us a true measure of real living standards across countries.

What we’d like is some number that tells us that living standards in Nigeria are one-tenth, or one-twentieth, or one-fifth of those in the U.S. But what do you mean by living standards? No measure of real GDP captures actual welfare. Even if – as we’d assume was the case in a perfectly competitive market – relative prices capture relative marginal utilities, real GDP doesn’t measure welfare.

Multiplying the total quantity times the marginal utility of a good doesn’t tell me anything about the total utility that people enjoy from that good. The marginal utility of a 3rd car in my family is essentially zero, but that doesn’t mean that we get no utility from having 2. So even if there were some “right” set of prices we could use to value real GDP, it still wouldn’t measure welfare.
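A quick numerical version of that point. Competitive prices reflect marginal utility, so "price times quantity" is quantity times marginal utility, which can sit far below total utility. The log utility function here is an arbitrary choice for illustration.

```python
import math

def total_utility(q):
    # total utility from q units (e.g. cars); log gives diminishing returns
    return math.log(1 + q)

def marginal_utility(q):
    # utility added by the q-th unit
    return total_utility(q) - total_utility(q - 1)

print(round(marginal_utility(3), 2))      # 0.29: a 3rd car adds little
print(round(2 * marginal_utility(2), 2))  # 0.81: quantity x marginal utility
print(round(total_utility(2), 2))         # 1.1: actual total utility of 2 cars
```

Valuing the two cars at the marginal utility of the second one understates what the family actually gets out of them, and the marginal utility of a third is smaller still.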

I think what would be useful for the PWT would be to have the full distribution of real GDP estimates for a country. That is, show me Nigeria’s real GDP valued at the prices found in every single other country in the PWT. I could plot that distribution of real GDPs for Nigeria against the same distribution of real GDPs for the U.S. This would at least show me something about the noise in the relative standing in real GDP for these countries. This sounds like something I can make a grad student do.
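Here's roughly what that exercise would look like, with randomly generated price and output vectors standing in for the actual PWT data:

```python
import random

# Value one country's fixed output vector at the prices of every
# country in the sample, and look at the spread. All numbers are
# random stand-ins, not PWT data.
random.seed(0)
n_countries, n_goods = 50, 10

prices = [[random.uniform(1, 100) for _ in range(n_goods)]
          for _ in range(n_countries)]
output = [random.uniform(1, 50) for _ in range(n_goods)]

# This one country's real GDP at each country's prices.
gdp_dist = sorted(sum(q * p for q, p in zip(output, price_vec))
                  for price_vec in prices)

print("min:   ", round(gdp_dist[0]))
print("median:", round(gdp_dist[len(gdp_dist) // 2]))
print("max:   ", round(gdp_dist[-1]))
# The min-max spread is the "noise" in relative standing that a
# single real GDP number hides.
```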

One last note about these comparisons. Recall that the result that measuring country A’s GDP in country B’s prices makes country A look relatively rich is not a certainty. It holds because there is a specific correlation of prices and quantities in the data. In each country, goods that are produced in large quantities (e.g. haircuts in Nigeria) tend to have low relative prices, and goods produced in small quantities (e.g. phones in Nigeria) tend to have high relative prices. In other words, price and quantity are negatively related. This implies that the main differences between countries are supply differences, not demand differences.

If Nigeria didn’t have a lot of phones because Nigerians didn’t like phones, then phones in Nigeria would be cheap compared to haircuts. And then valuing Nigeria’s output at the U.S. prices, which also has cheap phones compared to haircuts, wouldn’t make Nigeria look so rich. It might make them look poorer, in fact. So the empirical fact that valuing Nigeria’s output at U.S. prices makes Nigeria look relatively rich is evidence that Nigeria and the U.S. have different supply curves for phones and haircuts, not different demand curves [Yes, demand is probably different too. But relative to supply differences, these appear to be small].

# Housing, Productivity, and Nerds


A quick comment about Krugman’s last NYT op-ed about housing costs, mobility, and productivity. He starts with several facts that are, I think, uncontroversial at least in a broad sense.

1) On net, people are moving from NY and SF to Atlanta and Houston
2) Housing prices are higher in NY and SF than they are in Atlanta and Houston
3) Wages are higher in NY and SF than they are in Atlanta and Houston

Great. What Krugman concludes is that this represents a loss of productivity. The high wages in NY/SF must capture high productivity in those places compared to Atlanta/Houston, so the movement of people is lowering productivity. If housing prices were not so high in NY/SF, then people would not move, and they would stay in those high-productivity areas – and perhaps people would even move from Atlanta/Houston back to NY/SF. So making housing more affordable in NY/SF would be a boost to productivity.

This argument is basically the slimmed down version of what Chang-tai Hsieh and Enrico Moretti study in a recent working paper that I saw presented at NBER this summer. Here’s the core concept, simplified from my casual reading of the Hsieh/Moretti paper, but hopefully capturing the basic idea. Each city has some productivity level. For the moment, let’s assume that productivity is highest in San Fran, and lower everywhere else. If each city had constant returns to scale in production and produced an identical good, then the optimal allocation of labor is for everyone to move to San Francisco, to work in the highest-productivity place. That allocation would maximize output.

Of course, this might not be the optimal allocation for welfare. The geographical space is limited, and furthermore there are lots of restrictions on building in the greater Bay area, so real estate prices would go to something approaching infinity if all of us moved there. For Hsieh and Moretti, and by extension Krugman, this is what keeps us from achieving the allocation of labor that maximizes output. The sensitivity of housing prices to population is so high in the Bay area that it isn’t worth the higher wages to live there any more. So people leave for Houston and Atlanta. This raises the movers’ welfare, but at the cost of lower overall output because we move farther away from the output-maximizing equilibrium.
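You can boil this logic down to a few lines. This is my own toy version, not the actual Hsieh/Moretti model: wages equal city productivity, housing costs rise linearly with city population, and workers move until net wages (wage minus housing cost) are equal across cities. All the numbers are invented.

```python
def allocate(A_sf, A_other, b_sf, b_other, L=100.0):
    """Split L workers so that A_c - b_c * L_c equalizes across cities."""
    L_sf = (A_sf - A_other + b_other * L) / (b_sf + b_other)
    L_sf = min(max(L_sf, 0.0), L)  # handle corner solutions
    L_other = L - L_sf
    output = A_sf * L_sf + A_other * L_other
    return L_sf, L_other, output

# SF is more productive, but its housing prices are very sensitive
# to population, so most workers end up in the other city:
print(allocate(A_sf=2.0, A_other=1.0, b_sf=0.05, b_other=0.01))

# Make SF housing prices less sensitive (build the skyscrapers) and
# workers flow back in, raising total output:
print(allocate(A_sf=2.0, A_other=1.0, b_sf=0.01, b_other=0.01))
```

Note that in this toy setting it wouldn't matter *which* city had the high productivity, which is exactly the point about moving the nerds below.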

[This, of course, is leaving out completely the concept that not everyone might want to live in San Francisco. There is a high danger of running into 49ers fans in San Francisco, which we can all agree is equivalent to getting a root canal. But in the Hsieh/Moretti set-up, people don’t have idiosyncratic preferences over cities, and it’s actually not crucial to their findings. This is also probably a good spot to mention that I live in Houston, but as you’ll see below I’m not going to argue that Houston or Atlanta are demonstrably better places to live than SF or NY. Preferences actually aren’t the story here.]

Let’s take the Hsieh/Moretti/Krugman setting as given. The implication is that we should act to lower the house price elasticity in SF (e.g. allow skyscrapers in Palo Alto), so that prices are not so sensitive to population, and then people can move from Atlanta and Houston back to SF where they will all be more productive. Output and welfare will rise.

Of course, there is an equivalent solution – move everyone in SF to Houston or Atlanta. The reason SF is the most productive city is not because of some fixed, inherent quality of the location at 37.78 degrees North, 122.41 degrees West. It’s certainly not because of its fantastic summer climate. San Fran is the most productive city because it so happened that a unique collection of nerds coalesced there starting in the 1960’s. More nerds were attracted to the bright, shiny things that the original nerds were making, and now I have an iPhone. But here’s the thing about nerds – they are easy to move. You can easily strap one to a dolly and wheel them anywhere you want.

If you want to maximize welfare in the Hsieh/Moretti/Krugman model, then you want the cost of housing to be relatively insensitive to city size. This allows all the people to congregate in the most productive city without driving up costs so much that people no longer want to live there. So you can either lower that sensitivity in places like NY or SF, or you can make Houston or Atlanta the most productive city. It’s non-obvious which is the right solution. Arguably, it is far easier to incent Bay area nerds to relocate south than it is to convince existing Bay area home-owners (a much larger group) to take a massive capital loss on their houses.

What do we make of the steady shift south, then? Perhaps for right now, it is productivity-lowering for all those people to leave San Fran and NY. But if housing prices remain so fantastically high in those places, then eventually Houston and/or Atlanta will become the high-productivity city, because at some point even the nerds will move. Maybe the right answer is to speed up this movement, not try to reverse it as Krugman suggests.

# Innovation does not equal GDP Growth

I’m way behind on this (it came out August 8th), but Joel Mokyr posted an op-ed in the Wall Street Journal about being optimistic regarding growth. I liked this particular passage:

The responsibility of economic historians is to remind the world what things were like before 1800. Growth was imperceptibly slow, and the vast bulk of the population was so poor that a harvest failure would kill millions. Almost half the babies born died before reaching age 5, and those who made it to adulthood were often stunted, ill and illiterate.

I’d like to think that growth economists are also here to spread this message. It’s easy to be pessimistic about the near-term economic future when we are slogging our way slowly out of a terrible recession. But extrapolating from the current situation to say that long run sustained growth is over is taking it too far.

Mokyr (and we mere growth economists) are more optimistic about things. Why? [Because we’re tenured professors who can’t be fired. But that’s only part of it.] Because the ultimate source of economic growth over history has been technological innovation, and there is still an essentially infinite scope for this to continue. Mokyr lays out a long list of innovations that are coming down the pipeline: driverless cars, nanotechnologies, materials science, biofuels, etc. etc. We aren’t running out of ideas, and just because you or I can’t think of what they could possibly invent anymore doesn’t mean that other people aren’t busy inventing things.

But will these new innovations really provide a boost to GDP? Maybe not, but that’s a failure of GDP, not of innovation. Let’s give the mike to Mokyr:

Many new goods and services are expensive to design, but once they work, they can be copied at very low or zero cost. That means they tend to contribute little to measured output even if their impact on consumer welfare is very large. Economic assessment based on aggregates such as gross domestic product will become increasingly misleading, as innovation accelerates. Dealing with altogether new goods and services was not what these numbers were designed for, despite heroic efforts by Bureau of Labor Statistics statisticians.

We measure GDP because we can, and because it gives us a good indication of very short-run variations in economic activity. But it is only a measure of “currently produced goods and services”. That is, GDP measures the new products or services provided in a specific window of time (e.g. the 3rd quarter of 2014, or all of 2013). If all the effort in producing a new product comes in development, but it is then copied for free, this means that there is a one-time contribution to GDP in the year it was developed, and then nothing afterwards.

Things like refrigerators, Diet Coke, and cars contribute to GDP every period because we have to make new versions of them over and over again. But in one sense that is a bug, not a feature. Imagine if, having invented Diet Coke, you could make copies for free. That would lower GDP, as Coca-Cola would drop to essentially zero revenue from here forward. But it’s demonstrably better, right? Free Diet Coke? Where do I put in the IV line?

Diet Coke is a good example here. Let’s say that you could replicate the physical inputs of Diet Coke for free, but that Coca-Cola still owned the recipe, and you had to pay them to use it. This would still lower GDP, as Coca-Cola would no longer be earning anything from the physical production of Diet Coke, only from renting out the recipe each time you wanted a Diet Coke. This is still a win, even though GDP goes down. Lots of current innovations are like making Diet Coke for free, but owning the recipe. They are worthwhile despite the fact that they do not necessarily contribute much to GDP, and might even detract from it.
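To make the accounting concrete, here's the Diet Coke example with some entirely invented numbers (the $1 price and $0.25 recipe royalty are arbitrary):

```python
cans = 100

# World 1: every can is physically produced and sold for $1.
gdp_produced = cans * 1.00
print(gdp_produced)  # 100.0

# World 2: cans replicate for free; Coca-Cola only collects a $0.25
# recipe royalty per can.
gdp_recipe = cans * 0.25
print(gdp_recipe)  # 25.0

# People drink 100 cans either way, but measured GDP falls by 75%.
```

Consumption, and presumably welfare, is the same in both worlds; only the measured flow of "currently produced goods and services" shrinks.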

# Lower Skill Demand in the 21st Century


Having just posted something on de-skilling in the Industrial Revolution, I saw this post by Nick Bunker regarding the skills gap (or perceived skills gap) as an explanation for the current low employment rate in the U.S. He links to a paper from last year by Paul Beaudry, David Green, and Benjamin Sand on the reversal in the demand for skills since 2000.

The Beaudry, Green, Sand (BGS) paper has the same kind of stylized fact, only over a much shorter time frame, as the paper on the Industrial Revolution by de Pleijt and Weisdorf I posted about. Basically, the level of skilled worker employment is falling. For BGS, this begins around the year 2000, and you can see the essential point they are making in their figure 13. Here you see that the employment share for “high-skilled” workers (managers, technical workers, professionals) peaked right around 2000, and since then has been falling. The drop since 2008 really seems to just be continuing the pre-2008 trend rather than a response to the financial crisis. In contrast, the trend for low-skilled service and labor employees is strongly positive (their figure 15), and despite the dip in 2008, this doesn’t seem to have reversed the general upward trend of the last few decades. So we appear to have a “de-skilling” going on in the U.S. since 2000.

BGS propose a theory for why this might be the case. In their model, the IT wave of the 1990’s required high-skilled workers to get the IT capital installed – put in the servers, write the underlying code, adapt existing business practices to new IT, put things on websites, etc., etc. [Quick aside: Before I got my Ph.D. and inhabited this dark corner of the internet, I was one of those IT workers. One of my clients was United Airlines, and I worked on incorporating e-tickets into their back-office accounting system. It was as boring as it sounds, and hence here I am.]

Now that this IT capital is installed, we don’t need nearly as many high-skilled workers, as we’re down to maintenance work. [Example: the team of people I worked with at United are now all doing other things because you only need to program the accounting system for e-tickets once.] So according to BGS we now have an “overhang” of skilled workers with college degrees who aren’t really needed. They are pushing down the job food chain into jobs that would normally have gone to medium-skilled non-college-educated workers, which in turn forces those people down into low-skilled service or labor jobs. People with very low skills/education are pushed out of the labor force entirely or forced to work for less because there is an abundance of that kind of labor.

This has to be bad, right? It is, certainly, for everyone who has been caught out with “too many” skills for their jobs. Lots of people were investing in college educations in order to be those high-skilled technicians and managers, and now they can’t find those kinds of jobs. They have to take less skill-intense positions, for which there is more competition, and hence they probably won’t be earning as much relative to the loans they took out to go to college. The people at the very bottom end of the job food chain are really out of luck because they are being replaced entirely.

But. But as we go forward, new workers who get added to the workforce can do so without acquiring as many costly skills. In short, they could get away with skipping college, or do it at a cheaper place, or get a 2-year rather than a 4-year degree. If the de-skilling trend continues (robots!), then it isn’t necessarily true that *new* workers are worse off. They may face a market that demands more low-skilled than high-skilled workers, but they would also need to invest far less in order to be hired. Imagine not needing to go into \$40,000 in student debt just to get a job. They may well be better off without the debt and with the lower-skill job. [Before someone gets all huffy in the comments, yes, I’m including my over-educated academic self in that bucket. If I was just 10 years old now, maybe I would end up with less formal education, a lower skill job, and do economic growth as a hobby. Who knows?] There are two responses to technical change. Raise output, or lower inputs. Since 2000 we’ve apparently been choosing the latter strategy, and that might be how we continue.

So, whether de-skilling is bad depends on perspective. From the perspective of existing workers with skill, yes it is bad. From the perspective of new workers who are making their choices about skill, no it is not. Which makes it no different from any other technology change. Was it obvious that the invention of the automobile was bad? From the perspective of trained farriers, yes. From the perspective of young people who could choose to become a machinist rather than a farrier, no. The change to autos presumably was skill-demanding (although, I don’t know, I never shoed a horse before), but that doesn’t change the fact that some people lose and some people win. We’ve lived through thirty years of increased demand for college educated people, with an attendant increase in their wages relative to the less educated. Is it necessarily bad that this trend reverses?

What about those people at the very bottom of the skill distribution? They’re getting shafted by this de-skilling. Yes. But they were getting shafted before 2000 by the skill-biased technical change favoring college-educated workers. De-skilling suggests that maybe how we help those at the bottom of the distribution should change. Maybe de-skilling means we need to rethink college prep as the point of education. Maybe the focus should be on building marketable skills, not on building college applications. Nowhere is it written that technological change and economic growth must always and forever increase the demand for college-educated employees, so it may be time to adapt.

Does de-skilling mean that labor is going to “lose” compared to capital, or that de-skilling is a cause of increasing concentration of wealth and income? Maybe. To answer that you need to know about the elasticity of substitution between labor and capital. If it is big, then de-skilling could be the symptom of capital being substituted for labor in production, which in turn is going to lead to a lower share of national income for labor. If the elasticity is small, then de-skilling would eventually lead to an increase in the share of national income going to labor. My gut reaction is that the elasticity is relatively big, especially over longer time periods, and so if de-skilling were to continue, labor probably keeps earning a smaller share of national income. That’s an entirely different discussion to have about inequality and distribution, which takes you down the Piketty rabbit-hole.
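To see the mechanics of that elasticity argument, here's a sketch using a CES production function, ${Y = (aK^{\rho} + (1-a)L^{\rho})^{1/\rho}}$, where the elasticity of substitution is ${\sigma = 1/(1-\rho)}$ and, under competitive factor pricing, labor's share of income is ${(1-a)L^{\rho}/(aK^{\rho} + (1-a)L^{\rho})}$. The parameter values are arbitrary illustrations.

```python
def labor_share(K, L, a=0.3, rho=0.5):
    # Labor's share of income under CES production with competitive
    # factor prices; sigma = 1/(1-rho).
    return (1 - a) * L**rho / (a * K**rho + (1 - a) * L**rho)

L = 100.0
print("sigma = 2 (capital substitutes easily for labor):")
for K in [100.0, 200.0, 400.0]:
    print(K, round(labor_share(K, L, rho=0.5), 3))
# Labor's share falls as capital deepens...

print("sigma = 0.5 (capital and labor are complements):")
for K in [100.0, 200.0, 400.0]:
    print(K, round(labor_share(K, L, rho=-1.0), 3))
# ...and rises when the elasticity is below one.
```

Same capital deepening in both runs; whether labor's share rises or falls is entirely a question of the elasticity.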

# The Loss of Skill in the Industrial Revolution


There’s a recent working paper by Alexandra de Pleijt and Jacob Weisdorf that looks at skill composition of the English workforce from 1550 through 1850. They do this by looking at the occupational titles recorded in English parish records over that period, and code each observed worker by the skill associated with their occupation. They use the standardized Dictionary of Occupational Titles to infer the skill level for any given occupation. For example, a wright is a high-skilled manual laborer, a tailor is medium-skilled, while a weaver is a low-skilled manual laborer.

The big upshot to their paper is that there was substantial de-skilling over this period, driven mainly by a shift in the composition of manual laborers. In 1550, only about 25% of all manual laborers are unskilled (think ditch-diggers), while 75% are either low- or medium-skilled (weavers or tailors). However, over time there is a distinct growth in the unskilled as a fraction of manual laborers, reaching 45% by 1850, while the low- and medium-skilled fall to 55% in the same period. You can see in their figure 10 that this shift really starts to take place by 1650, well before the traditional start of the Industrial Revolution.

Looking at more refined measures, de Pleijt and Weisdorf find that the fraction of workers classified as “high-quality workmen” – carpenters, joiners, wrights, turners – rose only from 3.9% to 4.9% of the workforce between 1550 and 1850. These are precisely the kinds of workers that Joel Mokyr claims are the crux of the Industrial Revolution in England. They built, improved, adapted, and micro-innovated all the classic inventions of the IR. While they were only between 4-5% of the workers, and this proportion didn’t expand rapidly, given population growth the absolute numbers of these high-quality workmen went up by a factor of 4 between 1700 and 1850 (from about 200K to 850K).

It’s a really interesting paper, and it’s neat to see how much information you can keep sucking out of these parish records from England. It leaves me with two big questions/ideas. First, does industrialization depend on a concentrated core of skills, rather than a broad distribution of skills? That is, if Mokyr is right about the source of English industrialization, then it’s those extra 650K high-skilled workers that really made all the difference. Industrialization didn’t involve spreading skills all around the (rapidly expanding) population, but in getting together a critical mass of skilled workers. Are we paying too much attention to average human capital levels when we talk about development and growth, and not enough to looking at when/how/if countries achieve that critical mass of skilled workers? Is the overall level of education irrelevant to industrialization?

Second, should we care about de-skilling? In a vacuum, telling someone that the share of unskilled workers in the economy rose from 25% to 45% of all workers would send up red flags. That must be a bad thing, right? Is it? As England added population, much of that new population was unskilled, presumably because there was no longer a demand for certain low- and medium-skilled professions that had been replaced by machines. Could this just mean that the economy was getting more efficient at using the human capital at hand? England didn’t need to waste all that time and effort skilling-up a big mass of workers. They could be used immediately, without much training.

True, real wages didn’t rise between 1550 and 1800 (but from 1800 to 1850 they seem to start taking off, see Clark, 2005). But they also didn’t fall. That is, despite the fact that even before the classic IR the population of England was deskilling, there wasn’t a demonstrable fall in living standards. So doesn’t that imply that England was getting more (output) from less (human capital)? That’s a good thing, right? If England had held the level of human capital constant, then this would have raised real wages per worker. Instead, they chose to lower the amount of human capital while leaving real wages per worker the same. Who’s to say that this is a worse outcome?

If we were talking about innovations that got more output from less energy, then holding output constant while lowering energy consumption would be what everyone hoped to see. Why should human capital be different?

# Farming Doesn’t Pay….For a Reason


An op-ed in the NYT showed up the other day, by Bren Smith, a farmer who lamented the fact that farmers like him (small, local suppliers) were having trouble making money. This despite the surge in farm-to-table restaurants and the “locavore” movement.

Why? Why haven’t the new crop of small, local, hands-on farmers been able to make money? For the same reasons that farmers throughout history have not been able to make money. Their particular product is homogeneous across producers, and almost perfectly substitutable with other products. Farmers have essentially no market power. No market power, no profits. Farming is probably the closest thing we have to the perfectly competitive market of Econ 101 textbooks.

Your hand-grown tomato (or kale, or beef, or whatever) faces both competition from other hand-grown tomatoes, as well as competition from conventionally-grown tomatoes. Even more, there is competition from other foods; even farm-to-table restaurants will change their menus to use lower-cost foods if your tomatoes cost too much. So even if your tomato is the greatest, most loved, best-grown tomato ever, you aren’t going to be able to get a premium for it. And hence it’s going to be hard to make money farming those tomatoes, especially if you are using really inefficient methods that require lots of manual labor.

Understanding the economics of farming takes us back to David Ricardo: farmers don’t make money, landowners make money.

If you read beneath the surface of Mr. Smith’s suggested remedies, he understands this. In his words:

But now it’s time for farmers to shape our own agenda. We need to fight for loan forgiveness for college grads who pursue agriculture; programs to turn farmers from tenants into landowners; guaranteed affordable health care; and shifting subsidies from factory farms to family farms. We need to take the lead in shaping a new food economy by building our own production hubs and distribution systems. And we need to support workers up and down the supply chain who are fighting for better wages so that their families can afford to buy the food we grow.

If you want farmers to have money, then you have to give it to them directly (subsidies), give it to them indirectly (loan forgiveness or cheap health care), or give them a money-producing asset (land or a distribution chain). But competition is a bitch, and there is no world (including a higher-wage world) in which the pure act of producing food is going to make money for farmers.

# Avoid Cliches like the Plague

I move to expunge the phrase “We examine XXXXXX through the lens of a XXXXXX model” from common usage in economics papers.

Three for three on the latest referee requests I received.

# Cyclones and Economic Growth

I finally got a chance to read through a recent paper by Solomon Hsiang and Amir Jina on “The Causal Effect of Environmental Catastrophe on Long-Run Economic Growth: Evidence From 6,700 Cyclones”. The paper essentially does what it says on the tin – regresses the growth rate of GDP on lagged exposure to cyclones for a panel of countries over the period 1950-2008. By cyclone, the authors mean any hurricane, typhoon, cyclone, or tropical storm in this period.

One thing I like about this paper is that they do not bury the lede. Table 1 in the introduction gives you an instant grasp of the magnitude of what they find. They compare the cumulative effect of various disasters on GDP. A storm at the 90th percentile in strength (based on wind speeds and/or energy) reduces GDP by 7.4% after 20 years, similar in size to a banking or financial crisis. This is a big effect. As a point of reference, 20 years after World War II Germany’s GDP was already back on its pre-war trend.

We might think that this is due to particularly slow convergence rates following cyclones. That is, the cyclone is a big shock that pushes the economy below steady state, and then it simply takes a long time for the economy to recover back to that steady state. But Hsiang and Jina’s figure 9 shows that this isn’t the kind of trajectory we see in places hit by cyclones. The full effect of the cyclone isn’t felt until nearly 15 years after. So the cyclones appear to have long-lasting effects pushing economies below their pre-storm trends. This implies some sort of change in behavior – lowering savings/investment rates, increasing depreciation rates, lowering human capital accumulation, limiting technology adoption – something that puts a persistent drag on the level of GDP.

Making things worse, countries are hit by multiple cyclones over time, and the negative impacts of one cyclone (as in their figure 9) are then compounded by the negative impacts of others, really pushing down GDP. They do some counterfactuals with their estimated effects to see what growth would have looked like across countries if there had been no cyclones at all from 1950-2008. Their figure 22 shows the distribution of growth rates with and without cyclones in panel A, and panel B shows the implied growth rate of world GDP with and without cyclones. There’s a sizable effect, with world GDP growth about 1.4% per year higher without these storms.
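As a rough back-of-envelope check on what a 1.4% per year growth differential implies (my arithmetic, not a figure from the paper), compounding that gap over the 1950-2008 sample leaves counterfactual world GDP more than twice its observed level:

```python
# Stylized compounding exercise: assumes a constant 1.4 percentage point
# annual growth gap over the full 1950-2008 sample, which is my
# simplification, not the paper's exact counterfactual.
years = 2008 - 1950                 # 58 years in the sample
ratio = 1.014 ** years              # counterfactual GDP / observed GDP
print(f"after {years} years: {ratio:.2f}x")  # roughly 2.24x
```

Small annual differences in growth rates compound into very large level differences, which is why the counterfactuals below look so dramatic.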

For particular countries, the effects can be startlingly large. Take the Philippines, which has one of the highest exposures to tropical cyclones of any country in the world. In Hsiang and Jina’s counterfactual, GDP per capita would be higher by 2,000%, making the Philippines just about as rich as the U.S. Believable? Maybe not, but it gives you a sense of how much the negative impacts of these cyclones build upon each other through continued exposure. For places like Jamaica, Madagascar, or the Philippines, exposure to cyclones constitutes a persistent negative shock to GDP per capita that is difficult to overcome.
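To get a feel for how large that counterfactual is (again, a back-of-envelope calculation of mine, not one from the paper): GDP per capita 2,000% higher is a factor of 21, and spreading a factor of 21 over the 1950-2008 sample implies an average annual growth drag on the order of 5% per year:

```python
import math

# Back-of-envelope: "2,000% higher" means the counterfactual level is
# 21x the observed level. The implied average annual drag is ln(21)/58.
factor = 21
years = 2008 - 1950
drag = math.log(factor) / years
print(f"implied average annual growth drag: {drag:.1%}")  # about 5.2%
```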

Time for some skepticism. In estimating these effects, Hsiang and Jina use 20-year lags of exposure to cyclones, and hence are able to create figures like their figure 9 above. But their evidence does not rule out long-run convergence back to trend. If the shock of a cyclone is felt over about 15 years, and it then takes 30 years to return to trend, Hsiang and Jina will not be able to identify that. They’d only be capturing the initial negative shock, and not the recovery. This matters because we want to know whether cyclones (a) permanently lower the standard of living or (b) act as temporary (but perhaps long-lived) reductions in the standard of living. To put it into regular language, we want to know if the response to a cyclone is “Screw it, I’m not going to bother building a new house at all” or “Crap, it sure is going to take me a long time to rebuild my house”.
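The identification problem can be seen in a stylized sketch (my toy example, not from the paper): a permanent 5% level loss and the same loss that recovers back to trend over, say, 45 years are nearly indistinguishable within a 20-year estimation window.

```python
# Toy paths for log GDP relative to trend (trend normalized to 0).
# Both numbers (5% loss, 45-year recovery) are illustrative assumptions.

def permanent_loss(t, loss=0.05):
    """'Screw it' response: the level loss never recovers."""
    return -loss

def slow_recovery(t, loss=0.05, recovery_years=45):
    """'Slow rebuild' response: the loss erodes linearly back to trend."""
    return -loss * max(0.0, 1 - t / recovery_years)

# Maximum divergence between the two paths inside a 20-year window,
# roughly the horizon Hsiang and Jina's lag structure can capture.
gap = max(abs(permanent_loss(t) - slow_recovery(t)) for t in range(21))
print(f"max gap over 20 years: {gap:.3f} log points")  # about 0.022
```

With only about 2 log points separating the two stories over the estimation window, a 20-year lag structure has little power to tell a permanent loss from a very slow recovery.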

Hsiang and Jina do look at how exposure affects GDP for different sub-samples based on how repeatedly countries are exposed to cyclones. For countries in the lowest two quintiles of exposure to cyclones, the implied negative effects are very large (I’m having a hard time interpreting the scale on their figure 19, so I’m not sure of the exact magnitude). For the three top quintiles, though, the effects of cyclones are much smaller in estimated size. The estimated effects are negative, and statistically indistinguishable from the effects in their pooled sample. However, the effects are also statistically indistinguishable from zero in most cases – except for the highest-exposure countries.

This doesn’t quite settle the matter, though. Even though any individual storm may not cause any statistically significant drop in GDP per capita for high-exposure countries, this does not mean that they are unaffected by storm exposure. They may have adopted option (a) above – the “screw it” response – and so have a permanently lower trend for GDP per capita. The Hsiang and Jina paper cannot tell us anything about this, because they are only estimating the short-run effect of exposure to any particular storm, not the long-run adaptation to being exposed (which is differenced out and/or slurped up by the country-level time trend in their regressions).

Regardless, the paper is an interesting read, the latest in an increasing number of studies on economic growth that use detailed geographic/climate/weather data. Seeing the effect of these cyclone shocks out to 20 years in table 1 is a little startling, and gives you some appreciation for how geographic shocks remain as pertinent to growth prospects as economic ones.