Did We Evolve the Capacity for Sustained Growth?

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

I posted a few pieces (here and here) recently on genetics and growth. The Economist even picked up on Justin Cook’s work on lactose tolerance and development. Justin’s work on both lactose and the HLA system is about very specific genes, while the other research I mentioned is about the genetic heritability of certain behaviors associated with growth, without specifying any particular genes.

There is another line of research on evolution and growth, pioneered by Oded Galor and Omer Moav. They propose that natural selection over different types of individuals could have led to the onset of sustained economic growth. In particular, they focus on selection over preferences for the quantity and quality of kids. This is very much the second kind of research I mentioned above: it does not identify some specific gene that matters for growth; rather, it suggests a mechanism through which selection could have operated. The original paper is linked here, but they have a nice summary article here that explains the logic without all the math.

Let’s be careful about terminology here. Evolution in general requires both mutation and natural selection. GM is really about natural selection, not mutation. They take as given the presence of two types of people in the population. “Rabbits” like to have large families, but do not invest much in their kids’ human capital. “Elephants” have a few kids, but invest a lot in those kids. Their theory is about how the proportions of those types change over time due to economic forces, and how a rising prevalence of Elephants eventually leads to a speed-up in technological change. Yes, at some point there must have been a mutation that led to the differentiation between the types, but we can think of that as happening well back in history. They don’t propose that some mutation occurred in some specific year or place to make this all work.

How does the underlying logic work? In the early Malthusian period, with very low income per capita, the Elephants actually have the evolutionary advantage. Why? In the Malthusian world, everyone is so poor that higher income leads to higher fertility no matter your type. Each Elephant kid has high human capital, and thus relatively high fertility compared to Rabbits. So the proportion of Elephants tends to increase in the population. And a higher proportion of Elephants means that average human capital is rising over time.

As human capital rises, so does the pace of technological progress. At first this doesn’t do much, as the growth of technology is not sufficient to overcome the force of Malthusian population pressure. But eventually human capital is high enough that technological change happens so rapidly that people reach the upper limit on fertility rates, and choose to spend any additional income on increasing their kids’ human capital rather than on having more kids. This is the tipping point where human capital and technological change enter a virtuous cycle: higher human capital leads to faster technological change, which leads to higher human capital, and so on, and you have sustained growth. Once this occurs, the relationship of income and fertility flips to become negative – the richer you are, the fewer kids you have, just the opposite of the Malthusian period. This flip in sign is not unique to their explanation based on natural selection; the same type of flip is central to the general unified growth model in Galor and Weil.

After this transition point, the evolutionary advantage also flips to Rabbits. Why? Because the fertility rates decline with income, and as Elephants are richer due to their human capital, they have fewer kids than Rabbits. So Rabbits begin taking up a larger and larger proportion of the population. But everyone is already relatively rich, so this doesn’t mean that human capital levels are low generally. There is sufficient human capital to sustain technological progress.
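The selection dynamic can be seen in a toy simulation. To be clear, this is not the Galor-Moav model itself – just an illustrative two-type population in which the fertility advantage flips between regimes, with made-up fertility numbers:

```python
def next_share(share_e, regime):
    """One generation of selection over the Elephant share of the
    population. Fertility numbers are illustrative stand-ins, not
    taken from the Galor-Moav paper."""
    if regime == "malthusian":
        fert_e, fert_r = 2.2, 2.0   # richer Elephants out-reproduce
    else:
        fert_e, fert_r = 1.8, 2.0   # post-transition: the flip
    kids_e = share_e * fert_e
    kids_r = (1 - share_e) * fert_r
    return kids_e / (kids_e + kids_r)

share = 0.10
path = [share]
for _ in range(40):            # Malthusian era: Elephants spread
    share = next_share(share, "malthusian")
    path.append(share)
for _ in range(40):            # post-transition: Rabbits recover
    share = next_share(share, "modern")
    path.append(share)
```

Even a small fertility edge compounds: starting at 10%, the Elephant share climbs past one-half within the Malthusian era, then drifts back down once the advantage reverses – the same hump shape the GM story requires.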

Do we know if this exact mechanism is what generated sustained growth? No. To establish that you’d have to identify the precise genes that govern preferences for the quantity/quality of kids and show that they varied within the population over time in a manner consistent with the GM model. But there are little bits and pieces of circumstantial evidence that work for GM. Greg Clark’s Farewell to Alms documents his research showing that richer families did in fact tend to have more kids in pre-Industrial Revolution England. This fits the selection mechanism proposed by GM. Similarly, Galor and Marc Klemp have a working paper out on the reproductive success of families in 17th and 18th century Quebec (a place and time with particularly detailed records), and the data show that it was families with moderate fertility rates that actually had the most descendants in subsequent generations, not those with the highest fertility rates. Again, this fits the selection mechanism proposed by GM for the Malthusian era.

Note that even if it isn’t true genetic differences in preferences for quantity/quality, you still need to have selection working for population composition to matter for sustained growth. Let’s say that quantity/quality preferences are purely cultural, passed on from parents to kids imperfectly but with some fidelity over time. Then the GM mechanism could still hold up, but it would be the cultural spread of preferences for high quality that generated the take-off, not the spread of specific genes.

There are reasons to be skeptical about this explanation, just as you should be skeptical about any hypothesis. But don’t dismiss it on the basis that natural selection moves far too slowly to have mattered for human populations. Galor and Moav have a number of very telling examples regarding the speed of selection within populations over just a few generations. The classic story is the peppered moth during the Industrial Revolution. Peppered moths tend to be white, with little black spots on them – hence the name. But there are black varieties. With the rise of coal in the UK, black moths became far more prevalent, as they were harder for predators to spot against the blackened sides of buildings. Within a few years the population jumped from predominantly white to predominantly black, and then flipped back to white when clean air regulations came into force. Given that the variation in the population already exists, natural selection can change population composition very quickly. So imagining that human population composition could change substantially over hundreds or thousands of years is reasonable.
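How fast can selection move when the variation already exists? A minimal one-locus sketch, with purely illustrative fitness values (not calibrated to any actual moth data), makes the point:

```python
def select(freq_black, w_black, w_white, generations):
    """Track the black-morph frequency under simple one-locus
    selection: each generation, a morph's share is reweighted by
    its relative fitness. Fitness values are illustrative."""
    f = freq_black
    for _ in range(generations):
        mean_w = f * w_black + (1 - f) * w_white
        f = f * w_black / mean_w
    return f

# Sooty era: black moths survive predation better.
f_sooty = select(0.01, w_black=1.5, w_white=1.0, generations=20)
# Clean-air era: the advantage reverses.
f_clean = select(f_sooty, w_black=1.0, w_white=1.5, generations=20)
```

With a 50% fitness edge, a morph at 1% of the population comes to dominate within twenty generations, and flips right back when the edge reverses – no new mutation required, just selection over existing variation.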

Last, does GM mean that generating growth in poor countries is doomed to failure because their genetic composition is “wrong”? No. GM is a story about the rise of sustained growth at the global level. Suggesting that poor countries need to get their genetic mix right in order to grow is like suggesting that they need to adopt steam engines and telegraphs before they can step up to gas engines and mobile phones. The question of how to catch up to the frontier is an entirely different question than explaining how we got a frontier in the first place.

Genetic Origins of Economic Development


I recently posted about the genetic component of savings behavior. The paper I reviewed there said that one could account for about 1/3 of variation in savings behavior by appealing to genetic differences. Whatever the authors of this study found (rightly or wrongly), they did not identify the gene(s) for savings. They identified the proportion of savings behavior that is correlated with some as-yet-unknown set of genes.

This is not atypical for a paper on economic or social outcomes and genetics. The findings support the idea that “genetics” explain some proportion of behavior, but this does not mean that we know the specific genes involved.

An entirely different kind of study is one where the researcher looks at a specific gene(s), with a known biological function, and examines whether this has a social or economic influence. I’m going to highlight two papers by Justin Cook, who has undertaken exactly this kind of research on genes and economic development.

Justin’s first paper is on disease resistance and development. The human leukocyte antigen (HLA) system, which is determined by a set of 239 genes, identifies foreign pathogens so that your immune system can kill them. Within populations, there is a lot of diversity in this system. That is, people vary in their alleles in the HLA system. At the population level, this is good, because it means that even if I cannot identify the pathogen (and hence die a horrific death), *your* body can identify it and survive to live another day. Populations that are very uniform in the HLA system are thus more susceptible to disease, as one bad bug (or mutation of that bug) can kill them off more effectively. So a lot of heterogeneity in the HLA system in your population is good for surviving diseases, as a population.

You can measure HLA variation at the ethnic-group level, and then roll this up into HLA variation at the country level based on each country’s underlying ethnic composition. This is what Justin does, and he then looks at how life expectancy and mortality are related to it. Sure enough, Justin finds that in 1960 there is a significant relationship between HLA heterozygosity (i.e. variation in HLA alleles) and life expectancy across countries. But as you go forward in time, the relationship weakens. By 1990 the relationship has half the estimated strength, and by 2010 only one-fifth. Further, by 2010 the relationship is no longer statistically significant.
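The roll-up from ethnic groups to countries is just an ancestry-weighted average. A minimal sketch, where the group names, shares, and heterozygosity values are all hypothetical:

```python
def country_measure(ancestry_shares, group_values):
    """Ancestry-adjusted country-level value: a population-share-
    weighted average of ethnic-group-level values. All numbers
    used below are hypothetical, purely for illustration."""
    assert abs(sum(ancestry_shares.values()) - 1.0) < 1e-9
    return sum(s * group_values[g] for g, s in ancestry_shares.items())

# Hypothetical country with 60% ancestry from group A, 40% from B:
hla_heterozygosity = {"A": 0.92, "B": 0.85}
value = country_measure({"A": 0.6, "B": 0.4}, hla_heterozygosity)
# 0.6 * 0.92 + 0.4 * 0.85 = 0.892
```

The same weighted-average construction underlies the “ancestry-adjusted” language used throughout this post: a country’s value reflects where its current population came from, not where the country sits on the map.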

There are a couple of interesting implications of this result for thinking about genetics and development. First, it shows that genetics are not fate. Yes, having low HLA variation in a country was bad for life expectancy in 1960, but with the advent of the epidemiological transition after WWII, the effect starts to fall. With antibiotics, vaccinations, public health measures, etc., the underlying HLA variation matters less and less for life expectancy.

Second, prior to the epidemiological transition, genetics could have played a (statistically) significant role in variation in living standards. Justin shows that HLA variation (which is good) is positively related to the years since the Neolithic revolution in your underlying population, and also positively related to the number of potential domesticable animals in your underlying population. Longer exposure to agriculture and animals generated benefits in dealing with disease, presumably because the populations were exposed longer and to more pathogens. (By “underlying population” I mean the ancestry-adjusted composition of your population today – so the US HLA variation depends mainly on European exposure to diseases). Thus places that had longer histories of civilization, by building up variation in HLA, would have enjoyed higher life expectancies and (assuming that living longer is good), higher living standards. You could spin this out further to speculate that places with higher life expectancies had greater incentives to invest in human capital and achieve even more gains in living standards historically.

The second paper is on lactose tolerance and development. Simply put, if you can digest milk, then you have an additional source of nutrition that lactose-intolerant people do not have. It changes the productivity of dairy-producing animals, making them a better investment. But no other mammal produces lactase (the enzyme that breaks down lactose) beyond weaning, and neither do the vast majority of humans. At some point a sub-population of humans acquired a mutation that allowed them to keep producing lactase beyond weaning, meaning they could continue to consume dairy and use the nutrition available.

Justin backs out the ethnic composition of countries in 1500 (you can do this by using data on migration flows and known ethnic groups). He can then look at lactose tolerance in countries in 1500 by using the existing lactose tolerance of ethnic groups (which is presumed not to have changed much in 500 years). He finds that population density in 1500 is highly related to lactose tolerance in the population. This holds up even after you throw a lot of other controls into the specifications, including continent dummies – which is important in establishing that this is not just a proxy for some broader Asia/Europe difference.

Lactose tolerance acted like a Malthusian productivity boost, raising population density in 1500. Did this have long-run consequences for living standards? Maybe. Places that were densely populated in 1500 tend to be relatively rich today, even if you control for their contemporary lactose tolerance levels. So through that channel, lactose tolerance may have helped push up living standards today. The story here would be something about dense populations having greater capacity for innovation, or density indicating broader potential for productivity increases.

I think what Justin’s papers show is that a useful way of thinking about genetics and development is in the sense of budget constraints. Gene(s) change the relative price of different activities or goods, which can alter social and/or economic outcomes, without implying that they make one person or population superior. People who can drink milk without getting sick are not making better decisions than people who cannot, they simply are less constrained in their budget set. Genes, in this sense, are just like geography, which creates different relative prices for populations in different areas. This is different than saying that genes “determine” behavior (e.g. a “patience” or “savings” gene) and that this creates variation in how people respond to an identical set of constraints.

Genetic Factors in Savings Behavior


There is a recent article by Henrik Cronqvist and Stephan Siegel on the origins of savings behavior (published in JPE, but link is for working paper). They use the Swedish Twin Registry, which gives them data on roughly 15,000 twins, and link that to the deep Swedish data on income, savings, employment, and other information. They use this to examine whether savings behavior has a genetic component. Essentially, they are asking whether genetically similar people (twins) have similar savings behaviors. Figuring this out is hard, as twins share not just genes but also share home environments.

To get around this, Cronqvist and Siegel use the differences between identical and fraternal twins to their advantage. Here is the basic idea. If genes matter for savings behavior, then identical twins should have a higher correlation in their savings behavior than fraternal twins, because fraternal twins share (on average) 50% of their DNA while identical twins share 100%. On the other hand, twins of either type will experience similar environmental factors (i.e. parenting). That is, the assumption is that fraternal twins share 100% of the common environment, just like identical twins, and not just 50%.

You have to be careful here. Savings behavior can be correlated across twins at 100%, and yet that doesn’t mean that genes matter. It may simply mean that two individuals raised in a similar environment share similar attitudes towards savings. So the absolute level of correlation is not important; the pattern between identical and fraternal twins is. It is the comparison of correlations across the two groups that allows the authors to draw out the importance of genetics.
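The comparison of the two correlations maps into variance shares via the classic Falconer/ACE decomposition from behavioral genetics. A sketch, using correlations chosen to reproduce the paper’s headline split rather than the authors’ exact estimates:

```python
def falconer_ace(r_mz, r_dz):
    """Textbook Falconer/ACE decomposition from twin correlations.
    Identical (MZ) twins share ~100% of DNA, fraternal (DZ) ~50%,
    while both types are assumed to share their environment fully."""
    a = 2 * (r_mz - r_dz)   # A: additive genetic share
    c = r_mz - a            # C: shared (family) environment
    e = 1 - r_mz            # E: idiosyncratic remainder
    return a, c, e

# Illustrative correlations in the right ballpark:
a, c, e = falconer_ace(r_mz=0.33, r_dz=0.165)
# a ≈ 1/3 genetic, c ≈ 0 shared environment, e ≈ 2/3 idiosyncratic
```

Note how the decomposition works: only the *gap* between the identical-twin and fraternal-twin correlations is attributed to genes, which is exactly why the absolute level of correlation doesn’t matter.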

Here’s a crude first look at their data:
[Figure: Cronqvist and Siegel 2015]

You can see that identical twins do in fact have higher correlations in their savings rates than fraternal twins. Much of the remainder of the paper is confirming that this figure holds up with various controls included. Perhaps not surprisingly, it does hold up. You can argue with their exact measure of savings (changes in net worth divided by disposable income), but it is a measure used in other papers, and they are not trying to compare across countries so definitional issues in the dataset are less problematic.

The end result is that roughly 1/3 of variation in savings behavior can be accounted for by genetics (a little higher than this for men, and a little less for women). As an example, if you pulled two pairs of identical twins out of the population, you might find that Alice and Agnes saved 15% and 18% of their income, while Bob and Bubba saved 10% and 11%, respectively. About one-third of the difference in average savings (16.5% versus 10.5%) is due to genetic differences between the A girls and the B boys. The A family presumably has alleles that code for more patience on the “savings gene”, while the B family has alleles that code for less patience.

Maybe as interesting as the 1/3 number is that the share attributed to common family experience is essentially zero. Their paper supports a “nature” over “nurture” view on savings behavior. For completeness, the remaining 2/3 of variation in savings behavior is purely idiosyncratic. That is, 2/3 of Alice and Agnes’s higher saving rate is simply a result of Alice being Alice and Agnes being Agnes.

Do we know what or where “the savings gene” is? No. It is almost certainly not even a single gene, but rather some complex set of genes that combine to determine savings behavior. But what Cronqvist and Siegel establish is that it is reasonable to suspect that this complex set of genes actually exists.

From a growth perspective, research that examines heterogeneity in individual behaviors within economies is often useful in thinking about heterogeneity across countries. This is particularly true when you realize that much of the cross-country variation in economic development is driven by the composition of each country’s population.

The Cronqvist and Siegel paper cannot tell us whether there are true genetic differences in savings behavior *between* different populations. The genetic variation in savings behavior within Sweden might be similar to genetic variation in savings behavior within Burundi, or Nepal, or Peru. But it opens up the possibility that there could be some genetic variation in savings behavior between countries. If there is a set of genes that code for savings (or patience, or long-run planning, or whatever) then it is certainly theoretically possible that populations vary as well.

Given the relative importance of population composition in accounting for differences in living standards, we cannot dismiss the idea that there is a genetic component involved. Note that this doesn’t mean that high-saving or low-saving populations are biologically different, any more than blue-eyed and brown-eyed populations are biologically different. That is, high-savings populations are not super-patient mutants (who would make the worst X-Men ever). They have a gene expression that may lead to higher savings rates.

Studies that look at actual genetic differences across populations, and the implications of those differences for economic development, are starting to dribble into the research world. We are nowhere close to a thorough accounting of the role of genetic variation in explaining development, but it is beginning to look as if we should accept that there is a meaningful role for it.

Geography is Kinda-Sorta Destiny


I spent last weekend in Orlando with my wife and kids at Universal Studios. This had two effects. The first was to confirm everything I hate about large groups of people. The second was that it allowed me to read a number of books. So this is another post that is partly a book review.

I read Why the West Rules–for Now: The Patterns of History, and What They Reveal About the Future by Ian Morris. This is a book I was surprised I hadn’t already read. But nevertheless, I finally got around to it on the plane.

By itself, Morris’ book is fine. I think it falls in a grey area: it gets a little dense for a popular book, but isn’t thorough enough for an academic one. Parts of it are like reading a history textbook, where it becomes a list of events and names without a lot of context. I do like his summary of what drives history. “Change is caused by lazy, greedy, frightened people looking for easier, more profitable, and safer ways to do things. And they rarely know what they’re doing.”

The larger theme of the book is interesting. Morris stakes out a position that geography is really why the West “rules” at this point. Somewhat fixed characteristics like soil and general weather patterns ensured that Western Europe and China were bound to be relatively rich compared to most of the world. The additional advantage of western Europe was its relatively easy access to the geographic bonanza of the New World (which itself was due to the particular fact that Native Americans died from European diseases and not vice versa).

Given my not-overwhelming recommendation of Morris’ specific book, let me offer you some additional books that make the case for geography and/or biology being a major factor in economic development.

  1. Plagues and Peoples by William McNeill
  2. Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond
  3. The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor by David Landes. (Not the whole book, but the early chapters focus on geography)
  4. The European Miracle: Environments, Economies and Geopolitics in the History of Europe and Asia by Eric Jones. (Probably my favorite in this list)

I could go on, but I run into the “wedding invitation” problem. If I recommend another book in which geography features strongly, like Empire of Cotton, I feel compelled to recommend the other 10 books that I find similar in scope or quality. Pretty soon we’re talking about a long list. So stick with these for now as your entree to the world of geography as a determinant of development.

Morris and these other authors are often accused of “geographic determinism”. This is often slung about as a kind of epithet, implying that the author means that world economic history had to come out *exactly* like it did because of geography. This bothers people because it seems to exonerate western Europeans from all the awful things they did along the way to becoming rich. It can also be easily twisted into arguments about how Europeans are superior to other races or groups of people.

But that is setting up straw men in place of what these authors actually say. The mistake is to think that by asserting geography matters, this denies any role for human agency. Geography sets the budget constraint, affecting the slope (i.e. relative cost of land versus labor) and intercept (i.e. how many people land can support). But people set the utility function, making the choices about production, consumption, and innovation. To say that geography matters for development is to say that incentives matter, that’s all. Geography creates some subtle, and some not so subtle, differences in the constraints facing people, and they react accordingly. They look for easier, more profitable, and safer things to do within their given geographic conditions.

It is also a mistake to think that geography implies that relative development levels must be constant over time. Certain geographic characteristics are fixed, for all intents and purposes; North America is closer to Europe than to China. But nearly all other characteristics that we could lump under “geography” change over the course of human history. Think of the climate, with little ice ages and the Medieval warm period. And technological changes can alter how geographic characteristics influence development. Think of oil.

Geography doesn’t say that some populations are supposed to be rich, that they deserve to be rich, or that they will always be rich. It says that it isn’t terribly surprising that they are rich right now. Imagine that we could rewind and rerun human history over and over and over again. Each time, set the clock back to 15,000 BC and then let things go. Each time, it would be different as all the millions of coin flips in history came up heads or tails. Geography means the coins are not fair. Europe, blessed with productive agricultural land, lots of internal waterways, access to oceans, and so on, comes up heads 55% of the time. Africa, with tough agricultural conditions, a bad disease environment, and a lack of natural transport networks, comes up heads only 45% of the time. Over those thousands of versions of history, it would tend to be the case that Europeans would be relatively rich.
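You can make the unfair-coin metaphor concrete with a quick Monte Carlo, using the 55/45 odds from the paragraph above (the odds, the number of flips per run, and the number of reruns are all purely illustrative):

```python
import random

def rerun_history(n_runs=10_000, n_flips=100, seed=0):
    """Rerun history n_runs times. Each run is a sequence of unfair
    coin flips (critical junctures); Europe's coin comes up heads
    55% of the time and Africa's 45%. Returns the share of runs in
    which Europe ends up with more heads than Africa."""
    rng = random.Random(seed)
    europe_ahead = 0
    for _ in range(n_runs):
        eu = sum(rng.random() < 0.55 for _ in range(n_flips))
        af = sum(rng.random() < 0.45 for _ in range(n_flips))
        if eu > af:
            europe_ahead += 1
    return europe_ahead / n_runs

share_europe_richer = rerun_history()
```

The tilt compounds over many flips: Europe ends up ahead in the large majority of reruns, but not in all of them – which is exactly the sense in which geography is “kinda-sorta” destiny rather than destiny outright.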

So when these authors say “geography matters”, take that as a statement similar to saying that a coefficient in a regression “matters”. It’s a statistical statement that the coefficient on geography is significant, not that the R-squared of the regression is 100%.

Is Progress Bad?


I saw this article on the Atlantic by Jeremy Caradonna, a professor of history at the U. of Alberta. It’s about whether “progress” is good for humanity. The article takes particular aim at “progress” as a concept associated with sustained economic growth since the Industrial Revolution.

The first point to make is that Caradonna mischaracterizes the conclusions that economic historians and growth economists draw about the moral character of growth after the Industrial Revolution. None of them, at least the ones I’ve read, and I’ve read a lot of them, have ever suggested that humanity is morally superior for having achieved sustained growth. Here’s the quote he pulls from Joel Mokyr’s The Enlightened Economy:

Material life in Britain and in the industrialized world that followed it is far better today than could have been imagined by the most wild-eyed optimistic 18th-century philosophe—and whereas this outcome may have been an unforeseen consequence, most economists, at least, would regard it as an undivided blessing.

And here is Caradonna’s reaction to that quote:

The idea that the Industrial Revolution has made us not only more technologically advanced and materially furnished but also better for it is a powerful narrative and one that’s hard to shake.

The only sense in which Mokyr means “we’re better for it” is precisely that it made us more materially furnished. We are superior in real consumption. Full stop. Nowhere does Mokyr make a claim that this superiority in real consumption implies any kind of superiority in virtue, morality, or ethics.

We are shockingly, amazingly well off on a material basis compared to our ancestors, not only those of 200 years ago but even those of thirty years ago. This despite the fact that the population of the earth is now roughly 7-8 times higher than it was when the Industrial Revolution started.

So Caradonna has set up a straw man to take down. Fine, he’s hardly the first person to do that. What’s his real argument, then? Let me take a stab at summarizing it. After the Industrial Revolution, bad things happened in addition to good things. Caradonna thinks those bad things are particularly bad, and thinks we should give up some of the good things (gas-powered cars) in order to alleviate the bad things (global warming).

Okay. Great. I’m with you, Prof. Caradonna. Seriously, I’m in for a carbon tax and expanded spending on alternative energy R&D. I want to drive around in an electric car, or one powered by hydrogen, or one using gas produced by algae that actually pulls CO2 from the atmosphere.

But the idea that economic growth – progress – is somehow the enemy of that goal is misguided. To paraphrase Homer Simpson: “To economic growth, the cause of – and solution to – all of life’s problems.” Economic growth created the conditions that allowed us to alleviate evils like starvation and infant mortality while at the same time giving us more clothes, better housing, faster ways to get around, means of communication, Diet Coke, and gigantic-ass TV’s. It also bequeathed us technologies that heat up the atmosphere. And that sucks. But it sucks less than starving.

Economic growth means we’ve got a new kind of constrained optimization problem to solve in the 21st century: how to maximize real consumption while minimizing environmental damage. Caradonna has a particular type of solution to that optimization in mind, one tilted more towards minimizing damage than maximizing consumption. But the world seems to be making a different kind of choice, and so he’s trying to persuade others to adopt his solution. More power to him. There is no one who can tell him (particularly me) that his choice of how to solve that optimization problem is wrong. It’s just about preferences.

But anything that alleviates the constraints in this problem is welcome, regardless of preferences. Innovations that mitigate global warming (or other environmental concerns) would help us regardless of our exact preferred solution. If we can invent hyper-efficient spray-on solar panels, that would be an incredible boon to humanity. Cheap, clean power. Everyone wins. You know what I would call something like that? Progress.
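The point that innovation relaxes the constraint for everyone, whatever their preferences, can be shown with a toy version of the optimization problem. Every functional form and number here is made up for illustration:

```python
def best_utility(alpha, tech, grid=200):
    """Best achievable utility when consumption causes environmental
    damage. U = c**alpha * (1 - d)**(1 - alpha), with consumption
    c = d * tech, so better technology buys more consumption per
    unit of damage. All functional forms are illustrative."""
    best = 0.0
    for i in range(1, grid):
        d = i / grid            # damage level actually chosen
        c = d * tech            # consumption that damage buys
        u = c ** alpha * (1 - d) ** (1 - alpha)
        best = max(best, u)
    return best

# A 'green' planner (alpha = 0.2) and a 'growth' planner (alpha = 0.8)
# disagree about where the optimum is, but both gain from better tech.
green_gain = best_utility(0.2, tech=2.0) - best_utility(0.2, tech=1.0)
growth_gain = best_utility(0.8, tech=2.0) - best_utility(0.8, tech=1.0)
```

The two planners choose very different points on the trade-off, yet a doubling of `tech` raises the attainable optimum for both – which is the sense in which spray-on solar panels count as progress regardless of your preferences.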

The underlying issue is not a concept like progress or economic growth, but the fact that constraints exist.

Cyclones and Economic Growth


I finally got a chance to read through a recent paper by Solomon Hsiang and Amir Jina on “The Causal Effect of Environmental Catastrophe on Long-Run Economic Growth: Evidence From 6,700 Cyclones”. The paper essentially does what it says on the tin – regresses the growth rate of GDP on lagged exposure to cyclones for a panel of countries over the period 1950-2008. By cyclone, the authors mean any hurricane, typhoon, cyclone, or tropical storm in this period.

[Table: Hsiang and Jina 2014, Table 1]

One thing I like about this paper is that they do not bury the lede. Table 1 in the introduction gives you an instant grasp of the magnitude of what they find. They compare the cumulative effect of various disasters on GDP. A storm at the 90th percentile in strength (based on wind speeds and/or energy) reduces GDP by 7.4% after 20 years, similar in size to a banking or financial crisis. This is a big effect. As a point of reference, 20 years after World War II Germany’s GDP was already back on its pre-war trend.
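The estimation idea – regress growth on many lags of storm exposure and add up the lag coefficients into a cumulative effect – can be sketched on simulated data. This is a crude stand-in for the authors’ panel specification, with every number below invented:

```python
import random

rng = random.Random(0)

# Simulated country-year panel: growth depends on current plus 20
# lagged years of storm exposure, with per-lag effects that sum to
# about -7.4% (the headline 90th-percentile cumulative figure).
N_COUNTRIES, N_YEARS, N_LAGS = 50, 60, 20
TRUE_LAG_EFFECT = -0.074 / (N_LAGS + 1)

panels = []
for _ in range(N_COUNTRIES):
    storms = [1 if rng.random() < 0.1 else 0 for _ in range(N_YEARS)]
    growth = []
    for t in range(N_YEARS):
        g = 0.02 + rng.gauss(0, 0.005)          # baseline growth + noise
        for lag in range(N_LAGS + 1):
            if t >= lag and storms[t - lag]:
                g += TRUE_LAG_EFFECT            # storm drag, lag by lag
        growth.append(g)
    panels.append((storms, growth))

# Estimate each lag effect as a difference of means (storm vs. no
# storm at that lag). Simulated exposures are independent across
# years, so this crude estimator mimics the panel regression.
cumulative = 0.0
for lag in range(N_LAGS + 1):
    hit, miss = [], []
    for storms, growth in panels:
        for t in range(N_LAGS, N_YEARS):
            (hit if storms[t - lag] else miss).append(growth[t])
    cumulative += sum(hit) / len(hit) - sum(miss) / len(miss)
# cumulative should land near the true value of -0.074
```

Summing the lag coefficients is what turns per-year growth effects into the cumulative hit to the *level* of GDP that Table 1 reports.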

[Figure: Hsiang and Jina 2014, Figure 9]

We might think that this is due to particularly slow convergence rates following cyclones. That is, the cyclone is a big shock that pushes the economy below its steady state, and then it simply takes a long time for the economy to recover back to that steady state. But Hsiang and Jina’s figure 9 shows that this isn’t the kind of trajectory we see in places hit by cyclones. The full effect of the cyclone isn’t felt until nearly 15 years later. So cyclones appear to have long-lasting effects, pushing economies below their pre-storm trends. This implies some sort of change in behavior – lowering savings/investment rates, increasing depreciation rates, lowering human capital accumulation, limiting technology adoption – something that puts a persistent drag on the level of GDP.

[Figure: Hsiang and Jina 2014, Figure 22]

Making things worse is that countries are hit by multiple cyclones over time, and the negative impact of one cyclone (as in their figure 9) then accumulates with the negative impacts of others to really push down GDP. They do some counter-factuals with their estimated effects to see what growth would have looked like across countries if there had been no cyclones at all from 1950-2008. Their figure 22 shows the distribution of growth rates with and without cyclones in panel A, and the implied growth rate of world GDP with and without cyclones in panel B. There’s a sizable effect, with world GDP growth being about 1.4% per year higher without these storms.

For particular countries, the effects can be startlingly large. Take the Philippines, which has one of the highest exposures to tropical cyclones of any country in the world. In Hsiang and Jina’s counter-factual, GDP per capita would be higher by 2,000%, making the Philippines just about as rich as the U.S. Believable? Maybe not, but it gives you a sense of how much the negative impacts of these cyclones build upon each other through continued exposure. For places like Jamaica, Madagascar, or the Philippines, exposure to cyclones constitutes a persistent negative shock to GDP per capita that is difficult to overcome.
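To get a feel for how these annual growth drags compound, here is a quick back-of-envelope calculation. The 1.4%-per-year world figure comes from the paper's counter-factual; the 5% drag for a high-exposure country is a hypothetical number chosen purely for illustration, not an estimate from the paper.

```python
# Back-of-envelope: how annual growth drags compound over decades.
# The 1.4%-per-year world figure is Hsiang and Jina's counter-factual;
# the 5% drag is a hypothetical value for a high-exposure country.

def cumulative_gap(annual_drag, years):
    """Ratio of counterfactual (no-cyclone) GDP to actual GDP after
    `years` of growth that is `annual_drag` lower each year."""
    return (1 + annual_drag) ** years

years = 2008 - 1950  # the paper's 58-year window

print(f"World GDP gap: {cumulative_gap(0.014, years):.2f}x")
print(f"Hypothetical high-exposure gap: {cumulative_gap(0.05, years):.1f}x")
```

Even the modest 1.4% world figure, compounded over the 1950-2008 window, more than doubles the gap, which is why the country-level counter-factuals can look so extreme.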

Time for some skepticism. In estimating these effects, Hsiang and Jina use 20-year lags of exposure to cyclones, which is what allows them to create figures like their figure 9 above. But their evidence does not rule out long-run convergence back to trend. If the shock of a cyclone is felt over about 15 years, and it then takes 30 years to return to trend, Hsiang and Jina will not be able to identify that. They'd be capturing only the initial negative shock, not the recovery. This matters because we want to know whether cyclones (a) permanently lower the standard of living or (b) act as temporary (but perhaps long-lived) reductions in the standard of living. To put it into regular language, we want to know if the response to a cyclone is "Screw it, I'm not going to bother building a new house at all" or "Crap, it sure is going to take me a long time to rebuild my house".
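To see the identification problem concretely, here is a toy trajectory, my own construction and not the paper's estimator, in which the full effect of a storm accumulates over 15 years and the economy then converges back to trend over the following 30. A 20-year estimation window only ever observes the decline.

```python
# A toy path for log GDP relative to trend after one storm: the effect
# deepens for 15 years, then the economy converges back to trend over
# the next 30 years. The shape and the -7.4% trough are assumptions
# for illustration, not estimates from the paper.

def gdp_gap(t, depth=-0.074, onset=15, recovery=30):
    """Gap from trend (in log points) t years after the storm."""
    if t <= onset:                    # effect still accumulating
        return depth * t / onset
    if t <= onset + recovery:         # slow convergence back to trend
        return depth * (1 - (t - onset) / recovery)
    return 0.0                        # fully recovered

# A 20-year window sees only decline and partial trough, never recovery.
print([round(gdp_gap(t), 3) for t in range(0, 21, 5)])
print("50 years out:", gdp_gap(50))  # back on trend, invisible at 20 lags
```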

Hsiang and Jina do look at how exposure affects GDP for sub-samples split by how frequently countries are exposed to cyclones. For countries in the lowest two quintiles of exposure, the implied negative effects are very large (I'm having a hard time interpreting the scale on their figure 19, so I'm not sure of the exact magnitude). For the top three quintiles, though, the estimated effects of cyclones are much smaller. The estimated effects are negative, and statistically indistinguishable from the effects in the pooled sample. However, the effects are also statistically indistinguishable from zero in most cases – except for the highest-exposure countries.

This doesn’t quite settle the matter, though. Even though any individual storm may not cause any statistically significant drop in GDP per capita for high-exposure countries, this does not mean that they are unaffected by storm exposure. They may have adopted option (a) above – the “screw it” response – and so have a permanently lower trend for GDP per capita. The Hsiang and Jina paper cannot tell us anything about this, because they are only estimating the short-run effect of exposure to any particular storm, not the long-run adaptation to being exposed (which is differenced out and/or slurped up by the country-level time trend in their regressions).

Regardless, the paper is an interesting read, the latest in a growing number of studies on economic growth that use detailed geographic/climate/weather data. Seeing the effects of these cyclone shocks out to 20 years in table 1 is a little startling, and gives you some appreciation for how geographic shocks remain as pertinent to growth prospects as economic ones.

Who cares how fast GDP grows?


I came across an interesting post by Ed Dolan, on what we should do about slowing growth in the U.S. His answer is “Nothing”, and he gives a very capable explanation of why this is the case. His argument is that while GDP and human welfare (the general concept, not the government program) are correlated, once you are very rich the correlation drops enough that it is quite possible to raise human welfare without having GDP go up.

This is a really interesting point, and it relates to the marginal utility of consumption goods (which are goods and services that get counted as part of GDP) as compared to the marginal utility of what I’ll call intangibles. Intangibles are things like good health, or a clean environment, that we might value in and of themselves, but they are not necessarily tied to the production of real goods and services that are counted in GDP.

Very simply, let overall utility be

\displaystyle  U = u(C) + v(H) \ \ \ \ \ (1)

where {C} is consumption of tangible goods and services and {H} is the consumption of intangibles. We've got some stock of resources (labor, capital, natural resources, etc.) that we can use to produce things. Consumption goods are {C = xR}, where {x} is the share of resources we use to produce consumption. In this really simple model only consumption goods are counted in GDP, so GDP is just {C}, meaning {Y = xR}.

Intangible goods are {H = (1-x)R}, or they “use” the remaining share of resources. Note that I don’t necessarily mean that we have to use up resources to produce intangibles – you can think of {(1-x)} as the fraction of resources that we idle, or leave pristine, or shut down in order to enjoy better health, a nicer environment, or more free time.

Maximizing utility involves picking the optimal value for {x}, what share of our resources to commit to consumption. Before throwing some math at it, think of {R} as the total potential number of donuts I could produce using all available resources. The trade-off I face is how many donuts to actually produce. I’ll produce some ({C}), because donuts are yummy. But I’ll hold off on producing all the possible donuts because I want to be healthy enough to shoot baskets with my kids in the driveway ({H}). What is the optimal split of {R} into donuts and “health”? And will that split ever change?

The first-order condition here is

\displaystyle  u'(xR) = v'[(1-x)R], \ \ \ \ \ (2)

which just says that the marginal utility of consumption goods should be equal to the marginal utility of intangible goods. If they weren’t equal, then you could fiddle with the value of {x} and get higher overall utility.

What happens as {R} goes up? The marginal utility of both types of goods falls. If I already have lots of consumption goods (donuts, cars, iPhones) then the marginal utility of another one gets small. Similar for intangible goods – if I’ve got great health and lots of beautiful national parks to visit, then it’s hard to feel much better or visit an additional park.

The key is going to be how fast these marginal utilities fall. That is, how quickly does an extra donut get old and boring, versus how quickly does better health get old and boring? We often use log utility to describe consumption, or {u(C) = \ln{C}}, which means that the marginal utility of consumption is {u'(C) = 1/C}, or in terms of resources, {u'(C) = 1/xR}. As Chad Jones will tell you, log utility is "very curved", meaning that the marginal utility quickly runs down towards zero as you load up on more donuts. [Aside: log utility, though, is less curved than other typical utility functions for consumption, so I'm probably understating how fast marginal utility falls with more consumption.]
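To make the curvature point concrete, here is a tiny comparison of the two marginal utilities; {\theta = 0.2} is an arbitrary value chosen purely for illustration.

```python
# Comparing how fast the two marginal utilities fall as consumption
# grows: u'(C) = 1/C under log utility, versus the constant v'(H) = theta.
# theta = 0.2 is an arbitrary value chosen purely for illustration.

def marginal_u(C):
    return 1 / C          # marginal utility of consumption, log utility

theta = 0.2               # constant marginal utility of intangibles

for C in [1, 2, 5, 10, 50]:
    tilt = "consumption" if marginal_u(C) > theta else "intangibles"
    print(f"C={C:2d}: u'(C)={marginal_u(C):.3f} vs theta={theta} -> {tilt}")
```

Past a certain consumption level the next donut is worth less than the next unit of health, and the optimal split tilts toward intangibles.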

What’s the utility function for intangible goods? I don’t know that there is any kind of consensus about what this looks like. But let me use a very simple utility function that will demonstrate the logic of not caring if GDP grows. Let’s have {v(H) = \theta H}, so that {v'(H) = \theta}. This function is linear in {H}, so that the marginal utility of intangible goods doesn’t depend on how much {H} you consume – you can never be too healthy, so to speak. The most important part here is that marginal utility falls more slowly for {H} than for consumption goods.

Back to our optimal choice of {x}. Using the assumed utility functions, I get that my first-order condition is

\displaystyle  \frac{1}{xR} = \theta, \ \ \ \ \ (3)

which solves out to

\displaystyle  x = \frac{1}{\theta R}. \ \ \ \ \ (4)

That is, the optimal fraction of resources to spend on consumption goods falls as {R} rises. As we get more resources (labor, capital, technology) we use fewer of them on actually producing consumption goods. The payoff in terms of utility is just too low compared to the payoff in utility from having more intangible goods.

Remember that GDP is just {Y = xR}, and at the optimal choice of {x} this is just {Y = 1/\theta}. In other words, it would be optimal in this model for GDP to stay constant at {1/\theta}, even as the available resources {R} are increasing. We would willingly sacrifice additional GDP because the extra consumption goods it buys are worth less at the margin than the intangibles we'd give up. No growth in GDP is utility-maximizing.
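As a sanity check, here is a small numerical version of the model: a brute-force grid search over {x} for increasing {R}, with {\theta = 0.5} as an arbitrary illustrative value. The search should recover {x = 1/\theta R} and GDP pinned at {1/\theta}.

```python
import math

# Numerical check of the toy model: maximize U = ln(x*R) + theta*(1-x)*R
# over x by grid search, for increasing R. theta = 0.5 is an arbitrary
# illustrative value. The optimum should be x = 1/(theta*R), so that
# GDP, Y = x*R, stays pinned at 1/theta = 2 no matter how large R gets.

def optimal_x(R, theta, grid=100_000):
    """Grid-search the utility-maximizing consumption share x in (0,1)."""
    best_x, best_u = None, float("-inf")
    for i in range(1, grid):
        x = i / grid
        u = math.log(x * R) + theta * (1 - x) * R
        if u > best_u:
            best_x, best_u = x, u
    return best_x

theta = 0.5
for R in [10, 20, 40, 80]:
    x = optimal_x(R, theta)
    print(f"R={R:2d}: x*={x:.4f} (theory {1/(theta*R):.4f})  Y={x*R:.3f}")
```

As {R} doubles, the optimal share {x} halves, and measured GDP never moves.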

By fiddling with the exact utility function for intangibles you could get a different answer. Perhaps GDP optimally rises very slowly (if intangible goods have a declining marginal utility), or GDP optimally falls over time (if intangible goods have an increasing marginal utility as you use them – think of enjoying national parks more if you are healthy enough to hike through them).

The ultimate point of Ed Dolan’s post, and this one, is that there is nothing inherently desirable about rising GDP. It is simply a statistical construct capturing the total value of currently produced goods and services. If we prefer things that are not currently produced goods and services, then who cares if GDP rises or falls?

Something that I didn’t address here is how we adapt to a lower fraction {x}. If {x} falls, this implies that we are idling resources, like labor. If I’m going to consume fewer donuts, I’m going to put some bakers out of business. If you’re lucky, the bakers don’t mind because they would have chosen to go backpacking through Yosemite anyway. If you’re not, then these unemployed bakers are looking for something to do. As usual in these kinds of questions, seeing the different equilibrium outcomes is a lot easier than seeing how to transition from one to the other.

Are We Doomed?


The Guardian ran a piece on a forthcoming paper in Ecological Economics by Safa Motesharrei, Jorge Rivas, and Eugenia Kalnay. The paper is titled "Human and Nature Dynamics (HANDY): Modeling Inequality and Use of Resources in the Collapse or Sustainability of Societies". The model they construct has the feature that, under certain conditions, either extreme inequality in wealth or overuse of resources will result in the collapse of society, in the sense that the number of people goes to zero.

This is a fairly standard “We’re all gonna die!” kind of model. It mechanically delivers the possibility that society could collapse. This is not some kind of blazing hot insight, it’s the equivalent of saying that you could get in a car wreck today if the conditions work out just right.

Here’s a simple way of thinking about this kind of model. Assume that you are driving on a one-way road, with a car in front of you and one following you. Those other cars are going a constant 40 mph and do not deviate from that speed ever. You drive according to two simple rules. (1) If you are getting closer to the car in front, slow down. (2) If you are getting closer to the car behind you, speed up.

Now, if your accelerator and brakes are sensitive enough, and you have particularly good reflexes, then this system is sustainable. You’ll find yourself travelling 40 mph as well, exactly between the two cars. But, if your accelerator and braking skills are a little sluggish, then eventually you are going to hit one of the cars.

That’s it. That’s the model. The Motesharrei et al model does the same thing, just with the components renamed. In the end all they are asking is: given the existence of these other 40 mph cars, is it possible you will crash? The answer is, of course, yes. In fact, it's almost certain you will, unless your reactions are calibrated exactly right.
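The analogy is easy to simulate. Here is a minimal sketch, my own toy and not the HANDY model itself: the neighboring cars hold a constant speed, and "you" accelerate toward the midpoint of the gap with some reaction strength. A responsive driver stays safely between the cars; a sluggish one cannot correct the initial speed mismatch in time and hits a neighbor.

```python
# Toy version of the car-following story. Lead and trailing cars hold a
# constant speed of 40; "you" start 10 slower and accelerate toward the
# midpoint of the gap with reaction strength `gain`.

def drive(gain, steps=500, dt=0.1):
    """Return True if you avoid hitting either neighboring car."""
    lead, me, tail = 100.0, 50.0, 0.0   # positions; gaps of 50 each
    v = 30.0                            # your speed; neighbors go 40
    for _ in range(steps):
        lead += 40 * dt
        tail += 40 * dt
        # rules (1) and (2) combined: steer toward the midpoint of the gap
        v += gain * ((lead + tail) / 2 - me) * dt
        me += v * dt
        if me >= lead or me <= tail:    # you hit one of the cars
            return False
    return True

print("quick reflexes survive:", drive(gain=2.0))
print("sluggish reflexes survive:", drive(gain=0.01))
```

With a strong reaction the deviation from the midpoint stays small; with a weak one the gap closes faster than the correction can act, and a crash is essentially guaranteed.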

So why don’t we see widespread mayhem on highways? Massive fifty-car pileups multiple times a day in every city? Because the assumption that all the other cars will always go 40 mph is ridiculous. The rest of the system, the other cars, will all react to the situation as well. If you put on your brakes because you get too close to the car ahead, the car behind you will slow down as well, preventing you from being rear-ended.

Models like Motesharrei et al, in order to focus on some simple dynamics, ignore the possibility that the actors in their model will change behavior. They assume all the other cars just go 40 mph all the time. But just as other cars respond to your actions, technology can change (for better or worse in terms of using resources), people will alter their consumption behavior, the composition of the elite and commoner groups will change, and the distribution of wealth will be shifted. The system responds.

This is why economists always scream “you ignored prices!” when they see models like these. Because prices are like brake lights and turn signals, they provide information to those around you. They inform the system about what is scarce and what is abundant. They induce changes in behavior in the rest of the actors in the system. Behavior changes mean it is not inevitable that the system will collapse. Just like it is not inevitable that every time you get in a car you are going to get into a wreck.

Could we create some ecological disaster that dooms the planet? Sure. The ecology of Earth is so complex that I’m sure if we did something wrong we could unravel the whole thing. But this is not inevitable, whatever the equations in Motesharrei et al tell you.