Tyler, Noah, and Bob walk into a Chinese bar…

I know that in internet-time I’m light-years behind this discussion, but Tyler Cowen recently put up a post questioning whether Chinese growth could be explained by Solow catch-up growth, and Noah Smith had a reply that said, “Yes, it could”. I just wanted to drop in on that to generally agree with Noah, and to indulge in some quibbles.

Tyler says that

It seems obvious to many people that Chinese growth is Solow-like catch-up growth, as the country was applying already-introduced technologies to its development.

and Noah rightly says that this isn’t what Solow-like catch-up growth is about.

Solow catch-up growth (convergence) is just about capital investment. That’s the convergence mechanism. And that mechanism says that if you are well below your potential, you’ll grow really fast as you accumulate capital rapidly. So the Solow story for China is that there was a profound shift (or set of shifts) starting in the late 1970’s and early 1980’s that created a much higher potential level of output. That generates really rapid growth.

Does 10% growth make sense as being due to convergence? We can use my handy convergence-growth calculating equation from earlier posts to figure this out. In this case, Tyler was talking about aggregate GDP growth, so in what follows, {y} represents GDP.

\displaystyle  Growth = \frac{y_{t+1}-y_t}{y_t} = (1+g)\left[\lambda \frac{y^{\ast}_t}{y_t} + (1-\lambda)\right] - 1. \ \ \ \ \ (1)

The term {\lambda} is the convergence parameter, which dictates how fast a country closes the gap between actual GDP ({y_t}) and potential GDP ({y^{\ast}}). The rate {g} is the steady state growth rate of aggregate output.

{g} might be something like 3-4% for China, the combination of about 2% growth in output per capita, along with something like 1-2% population growth. The convergence term {\lambda} is around 0.02. We know that Chinese growth was around 10% per year for a while (not any longer). So what does {y^{\ast}} have to be relative to existing output to generate 10% growth? Turns out that you need to have

\displaystyle  \frac{y^{\ast}_t}{y_t} = 4.4 \ \ \ \ \ (2)

to get there. That is, starting in 1980-ish, you need Chinese potential GDP to be 4.4 times as high as actual GDP. If that happened, then growth would be 10%, at least for a while.
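If you want to check that arithmetic yourself, here is a minimal sketch in Python that just rearranges equation (1), using the parameter values from the text ({g} of about 3%, {\lambda} of 0.02):

```python
# Back out the ratio of potential to actual GDP needed for 10% growth,
# by rearranging equation (1). Parameter values are the ones in the text.
g = 0.03        # steady state growth rate of aggregate output
lam = 0.02      # convergence parameter
target = 0.10   # observed Chinese growth rate

def growth(gap):
    """Growth rate implied by equation (1) for a given ratio y*/y."""
    return (1 + g) * (lam * gap + (1 - lam)) - 1

gap_needed = ((1 + target) / (1 + g) - (1 - lam)) / lam
print(gap_needed)          # roughly 4.4
print(growth(gap_needed))  # 0.10, as a sanity check
```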

Is that reasonable? I don’t know for sure. It’s really a statement about how inefficient the Maoist system was, rather than a statement about how high potential GDP could be. GDP per capita in China was only about $220 (US 2005 dollars) in 1980. That’s really, really poor. A 4.4-fold increase only implies that potential GDP per capita was about $970 (US 2005 dollars) in 1980. We’re not talking about a change in potential that is ludicrous. There is a good reason to think that standard Solow-convergence effects could explain Chinese growth.

But not entirely. One issue with this Solow-convergence explanation is that growth should not have stayed at 10% for very long after the reforms. That is, the Solow model says that you close part of the gap between actual and potential GDP every year, so the growth rate should slow down until it hits {g}. That happens pretty fast.

After 10 years of convergence – about 1990 – China’s growth rate should have been about 6.7%, and it was lower in the early 90’s than in the 1980s. But after 20 years – about 2000 – China’s growth rate should have been down to 5.3%. Yet Chinese GDP growth has been somewhere between 8-10% since 2000, depending on how you want to average growth rates, and what data source you believe.
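Here is a rough sketch of those dynamics. The one assumption I am adding (it is not stated explicitly above) is that potential GDP keeps growing at rate {g} while actual GDP grows at the rate implied by equation (1), so the gap closes over time:

```python
# Iterate equation (1) forward from a gap of 4.4, letting potential GDP grow
# at rate g while actual GDP grows at the implied rate. Assumed dynamics for
# illustration; parameter values follow the text.
g, lam = 0.03, 0.02
gap = 4.4                                   # y*/y in 1980-ish

for year in range(1, 21):
    growth = (1 + g) * (lam * gap + (1 - lam)) - 1
    gap = gap * (1 + g) / (1 + growth)      # y* grows at g, y grows at `growth`
    if year in (1, 10, 20):
        print(year, round(growth, 3))       # ~0.10, ~0.067, ~0.053
```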

So why didn’t Chinese growth slow down as fast as the Solow model would predict? That requires us to think of potential GDP, {y^{\ast}}, taking even further jumps up over time. Somewhere in the time frame of 1995-2000, another jump in potential GDP took place in China, which then allowed growth to remain high at 8-10% until now. And now, we see growth in China starting to slow down, as we’d expect in the Solow convergence story.

I think I would take Tyler’s post as being about the source of that additional “jump” in potential GDP that kept growth up around 8-10%. It may be that China had some kind of special ability to absorb foreign technology (perhaps just its size?). But then again, in the late 1990’s, China actively negotiated for WTO accession, which took place in 2001. Hong Kong also reverted to Chinese control in 1997. Both could have created big boosts to potential GDP.

We do not necessarily need to think of some kind of special Chinese ability to absorb or adapt technology to explain its fast growth. Solow convergence effects get us most of the way there. Whatever happened in the 1990’s may reflect some unique Chinese ability to absorb technology, but I’d be wary of going down that route until I exhausted the ability of open trade and Hong Kong to explain the jump in potential.

Okay, last quibble. In Noah’s post, he said that we’d expect Chinese capital per worker to level off as they get close to potential GDP. No, it wouldn’t! The growth of capital per worker will slow down, yes, but will settle down to a rate about equal to the growth rate of output per worker. The growth rate of capital per worker won’t reach zero, if the Solow model is at all right about what is happening.

More on the Effect of Social Policy on Innovation and Growth

My last post was on the false trade-off between social policies and growth. In particular, I took a shot at an essay by Michael Strain, but his essay is simply a good example of an argument that gets made very often: social policies will lower growth. I said this was wrong, and a number of responses I got questioned my reasoning. So this post is meant to spell out the logic more clearly, and point out why precisely I think that Strain’s argument (and others like it) is flawed. Consider this an uber-response to comments on the site, some e-mails I got, and the discussion I had with my neighbor (who probably won’t read this, but whatever).

First, we need to distinguish the effect of social policies on innovation from the effect of social policies on growth in GDP. The two need not be identical, which I’ll get to in more detail below. So to begin, let’s think about the effect of these policies on innovation, which is what Strain and others acknowledge is the source of improvements in living standards.

I’m an economist, so I think of the flow of innovations as responding to incentives. When the value of coming up with a new idea goes up, we get more new ideas. Simple as that.

What’s the value of an idea? That depends on the flow of net profits that it generates. The profits of owning an idea are

\displaystyle  \pi = (1-\tau)(\mu-1)wQ \ \ \ \ \ (1)

where {\tau} is the “tax rate”, and this tax rate is meant to capture both formal taxation and any other frictions that limit profits (e.g. regulations).

{\mu>1} is the markup that the owner can charge over marginal cost for their idea. {(\mu-1)>0} is therefore the difference between price and marginal cost. The more indispensable your idea, the higher the markup you can charge. For instance, there are big markups on many heart medications because your demand for them is pretty inelastic. The markup on a new type of LCD TV is very low because there are lots and lots of almost identical substitutes.

{w} is the marginal cost, which here we can think of as the wage rate you pay to run the business that produces the good or service based on your idea. {Q} is the number of “units” of the idea that you sell (pills or TVs or whatever). Together, {wQ} represents “market size”. If the wage rate or the quantity purchased goes up, then your absolute profits rise. The effect of {Q} makes sense, but why do profits rise when wages rise? Because of the markup. If your costs are higher, the price you can charge is higher too.

The profits from an idea are the incentive to innovate. So anything that makes {\pi} go up should generate more ideas. My issue with Michael Strain’s article, and others like it, is that when they think of “progressive social policy”, they think only of the cost {\tau} of funding that policy. So there is a direct trade-off between funding these social policies and innovation (and possibly growth).

My point is that those social policies have direct, positive, effects on market size, {w} and {Q}. Profits should be written as

\displaystyle  \pi = (1-\tau)(\mu-1)w(\tau)Q(\tau). \ \ \ \ \ (2)

If we raise {\tau} to pay for social policies that educate people or raise their living standards, there is a positive effect on market size. The wage goes up, either directly because we have higher-skilled workers, or indirectly because they have some kind of viable outside option.

Further, the size of the market increases because people appear to have non-homothetic preferences. That is, they buy a few essential goods no matter what. They only spend money on other goods once those essentials are dealt with. With non-homothetic preferences, the distribution of income matters a lot to the size of the market for your idea. If lots of people are very poor, or if the cost of essentials is very high, then they have little or no money to spend on your idea, and {Q} is small. If you provide them with more income or make essentials cheaper, they have more income to spend on your idea, and {Q} goes up.
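To see how much the distribution can matter, here is a toy sketch. Everything in it is invented for illustration: each household wants at most one unit of the new good, and buys it only if the income left over after covering essentials is at least the price:

```python
# Toy illustration of non-homothetic preferences and market size. Each
# household buys essentials first, then at most one unit of the new good if it
# can still afford the price. All numbers are made up for illustration.
def market_size(incomes, essentials_cost, price):
    return sum(1 for y in incomes if y - essentials_cost >= price)

unequal = [40, 40, 40, 40, 140]   # same total income (300), concentrated at the top
equal   = [60, 60, 60, 60, 60]

print(market_size(unequal, essentials_cost=50, price=10))  # 1: only the rich household buys
print(market_size(equal,   essentials_cost=50, price=10))  # 5: everyone buys
print(market_size(unequal, essentials_cost=5,  price=10))  # 5: cheaper essentials also raise Q
```

Total income is identical in the two distributions; what changes {Q} is how many households clear the essentials hurdle.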

To be clear, I think that the positive effects of {\tau} on {w} and {Q} outweigh the direct negative effect of {\tau}. That’s what I mean when I say progressive social policies are good for innovation, and why I said that there is not a direct trade-off between funding social policies and innovation (and possibly growth).

That doesn’t mean that funding social policies is always positive. There is a Laffer-curve type relationship here, and if {\tau} were too high the incentives to innovate would go to zero and that would be bad. But the innovation-maximizing level of {\tau} is not zero.
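Here is a sketch of that Laffer-curve logic. The functional forms for {w(\tau)} and {Q(\tau)} below are entirely made up; the only point is that once {\tau} has some positive effect on wages and market size, the profit function in equation (2) peaks at an interior {\tau} rather than at zero:

```python
# Illustrative Laffer-curve in tau for the profit function in equation (2).
# The forms for w(tau) and Q(tau) are hypothetical, chosen only so that tau has
# some positive effect on market size.
mu = 1.3                                  # markup, assumed

def profits(tau):
    w = 1.0 + 0.5 * tau                   # hypothetical: wages rise with social spending
    Q = 1.0 + 1.5 * tau - tau ** 2        # hypothetical: market size rises, then levels off
    return (1 - tau) * (mu - 1) * w * Q

taus = [i / 100 for i in range(101)]
best = max(taus, key=profits)
print(best, profits(best), profits(0.0))  # the profit-maximizing tau is interior, not zero
```

The location of the peak is an artifact of the made-up functional forms; the point is only that it is not at zero.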

As an aside – there are plenty of costs that companies or innovators have to pay that would have no direct benefit for wages or {Q}. Think of useless red tape regulations. I’m all for getting rid of those. But getting rid of red tape is not something that requires us to sacrifice social policies. It does not cost anything to remove red tape.

But wait, there’s more. The speed of innovation in an economy – {g_A} – is going to be governed by something like the following process

\displaystyle  g_A = \frac{R(\pi,H)}{A^{\phi}} \ \ \ \ \ (3)

where {R(\pi,H)} is a function that describes how many resources we put towards innovation, like how much time is spent doing R&D, or how much is spent on labs. That allocation depends on profits, {\pi}, which dictate how lucrative it is to come up with an innovation. But it also depends on the stock of resources available to do innovation, and here I think specifically of the amount of human capital available, {H}. Social policies can not only raise {\pi} indirectly, but can directly act to raise {H}. Education spending is the obvious case here. But policies that lower uncertainty (income support, health care coverage) allow people to either undertake risky innovation projects themselves, or work for those who are pursuing those projects, because they don’t have to worry about what happens if the risk fails to pay off. Social policy can act directly to raise {H}. Which means that social policies can, for two reasons, raise the growth rate of innovation, {g_A}. Even if the effect on profits is zero, innovation can still rise because the stock of innovators has been increased.

Aside: The term on the bottom, {A^{\phi}}, is a term that captures the effect of the level of innovation, {A}, on the growth rate, {g_A}. If you are of the Chad Jones semi-endogenous growth opinion, then {\phi>0}, and this means that the growth rate will end up pinned down in the long run, and social policies will have a positive level effect on innovation. If you are of the opinion that {\phi=0}, then policies have permanent effects on the growth rate. It isn’t important for my purposes which of those is right.
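A quick sketch of that distinction, holding the flow of resources devoted to innovation fixed for simplicity (all numbers are arbitrary): with {\phi>0} a policy that doubles those resources mostly shows up as a higher level of {A}, while with {\phi=0} it doubles the growth rate permanently:

```python
# Compare phi > 0 and phi = 0 in the idea production process of equation (3),
# holding the research input R constant. The "policy" is a doubling of R.
def simulate(phi, R, A0=1.0, T=500):
    A = A0
    for _ in range(T):
        g = R / A ** phi          # growth rate of ideas, as in equation (3)
        A = A * (1 + g)
    return g, A

for phi in (1.0, 0.0):
    g1, A1 = simulate(phi, R=0.02)
    g2, A2 = simulate(phi, R=0.04)    # "policy": double the resources for R&D
    print(phi, round(g1, 4), round(g2, 4), round(A2 / A1, 2))
# With phi = 1, doubling R barely moves the eventual growth rate (both paths
# slow down without growing research input) but roughly doubles the level of A.
# With phi = 0, doubling R doubles the growth rate permanently.
```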

What does this mean for GDP growth? I said in the prior post that it isn’t clear that GDP growth is the right metric. We really want to encourage innovation, not necessarily GDP growth. Why? Because growth in GDP, {g_Y}, is just

\displaystyle  g_Y = g_A + g_{Inputs}. \ \ \ \ \ (4)

If we raise {g_A}, then what happens to {g_Y} depends on what happens to {g_{Inputs}}. We might imagine that {g_{Inputs}} remains constant, so {g_Y} rises when {g_A} goes up. But there is no reason we couldn’t have {g_{Inputs}} fall while {g_Y} remains constant. What if we take advantage of innovations to only work 30 hours a week? Then GDP growth could remain the same, {g_{Inputs}} falls, and yet we’re all better off. Or if innovation allows us to dis-invest in some capital (parking garages?) while still enjoying transportation services (self-driving cars?). GDP may not grow any faster, but we’d be better off by using fewer inputs to produce the same GDP growth rate.

The point is that the right metric for evaluating the effect of social policies is not GDP growth per se, it is the rate of innovation. It is {g_A} that dictates the pace of living standard increases, not {g_Y}. In lots and lots of models, we presume that growth in inputs is invariable, but that doesn’t mean it is how the world actually works.

Strain completely ignores the possible positive impacts of social policies on the growth of innovation, and that is what I’m saying is wrong about his essay. We can have a reasonable discussion about what the right level of {\tau} is to maximize the growth rate of innovation, but that answer is not mechanically zero. There is no strict trade-off between innovation growth and social policies. Which means there is even less of a strict trade-off between GDP growth and social policies.

Progressive Social Goals and Economic Growth

Someone pointed me towards this Washington Post essay by Michael Strain, of the AEI, on “Why we need growth more than we need democratic socialism”. It’s something of a rebuttal to Bernie Sanders’ positive statements regarding the social democratic systems that are in place in Denmark, Sweden, and several other countries. Strain takes issue with this, suggesting that we cannot pursue the progressive social goals that are part of this social democratic system because we would sacrifice economic growth, and that would be bad. The TL;DR version of my post is that Strain is wrong. Wrong about the nature of economic growth, and wrong about the effect of progressive social policies on growth.

To start, Strain engages in some ham-fisted hippie-punching. Except he’s punching Swedes and Danes, so I guess he’s Scandinavian-punching?

Yes, yes, while it didn’t turn out so well under Stalin and Mao, something of the dem-soc variety may work for the good people of Scandinavia.

This is a breathtakingly ridiculous connection to draw. Strain is lumping Stalin’s USSR and Mao’s China together with post-war Denmark and Sweden. These are economies and political systems fundamentally different in kind, not in degree. I’m fairly sure calling Stalin or Mao’s system “democratic” would be a stretch. “Socialist” is also wrong for their economies. I know, it’s confusing, they used “socialist” right there in the name of the USSR! Sometimes labels are wrong. Chilean sea bass ain’t Chilean or a bass.

The USSR and China were committed communist countries, with a lack of private ownership, and centrally planned economies. In contrast, Denmark and Sweden have free, fair elections, a free press, freedom of assembly, freedom of religion, and do not deliberately let giant swathes of their population starve. Oh, they also happen to have marginal tax rates of about 50% at the top, free health care, child care, and education. Which, sure, makes them exactly like the USSR or China under Mao.

Now that we’ve dealt with that, we can actually look at what Strain has to say about growth.

For one, demographic pressures are pushing the potential growth rate of the economy below its historic average. The nation is headed for a period of naturally slower growth, which means that we need to take pro-growth policies even more seriously now than in previous decades.

Why? If the economy is naturally slowing down due to demographic changes, then what precisely is the issue I am worried about? No one gets utility from the growth rate. If we have people getting utility from retiring, and the growth rate is lower, then explain why I should care. Is this an argument that the demographic pressures will put a greater burden on those still working to pay for Social Security and Medicare? Then we should be having an argument about the optimal tax rate, or benefits, or eligibility ages.

True, public policy cannot deliver 6 percent growth, no matter how great a deal Trump makes with the economy. But policy can get rid of a bad regulation (or 20) here, encourage people to participate in the workforce there, make savings and investment a bit more attractive, make entrepreneurship and innovation a bit more common, make the government’s footprint in the economy a bit smaller — on the margin, a range of policies can increase the rate of economic growth. And when you add up all those marginal changes, good policy can make the economy grow at a non-trivially faster rate.

If by “non-trivially” you mean by about 0.2% faster a year, then I might believe that. But notice that Strain tries to sound reasonable (“public policy cannot deliver 6 percent growth”), but never bothers to try and say how much pro-growth policies can actually raise the growth rate. Does he think pro-growth policies – and what precisely are those, by the way – mean growth of 3%, 4%, or 5%? The answer is that it would be a little over 2%, just a smidge higher than growth is today. And that is assuming that Strain’s unspecified growth policies actually have an incredibly massive effect on potential GDP. There is no magic fairy dust to make growth accelerate dramatically. It’s even plausible that pro-growth policies that raise the profit share of output to induce innovation would lower measured productivity growth simply due to how we calculate that productivity.

And the measured growth rate of GDP doesn’t even matter, really. What matters is the availability of innovations that improve living standards. Strain almost gets this right in the next quote:

Over the past two centuries, growth has increased living standards in the West unimaginably quickly. Many more babies survive to adulthood. Many more adults survive to old age. Many more people can be fed, clothed and housed. Much of the world enjoys significant quantities of leisure time. Much of the world can carve out decades of their lives for education, skill development and the moral formation and enlightenment that come with it. Growth has enabled this. Let’s keep growing.

No, innovation has enabled this. So let’s keep innovating. The fact that all these welfare-improving innovations caused measured GDP to rise does not mean that causing measured GDP to rise will raise welfare. Innovations can allow us to produce more with the same inputs (raising GDP) or allow us to produce the same amount with fewer inputs (possibly lowering GDP). Strain confuses measured GDP growth with innovation. They are not the same. What we want, as he says, is policies that foster innovations that improve human living standards. Whether they also happen to raise GDP growth rates is a side issue. Think of it this way. If the BEA came out tomorrow and said they had discovered that they had mistakenly understated GDP by $1 trillion a year since 1948 due to a calculation error, would your living standard be instantly higher? No. But if tomorrow someone announces that they’ve invented a 60% efficient solar panel, that would change your living standards.

Growth facilitates the flourishing life. By creating a dynamic environment characterized by increasing opportunity, growth gives the young the opportunity to dream and to strive. And it gives the rest of us the ability to apply our skills and talents as we see fit, to contribute to society, to provide for our families. A growing economy allows individuals to increase their living standards, facilitating economic and social mobility.

Oh, come on. This is vacuous drivel. Replace every instance of the word “growth” here with the word “liberty”, or “dignity”, or “patriotism”, or “human rights”, or “unicorns” and this paragraph is true. Replace it with “universal free college” and you’ve got Bernie Sanders’ stump speech. This paragraph is the equivalent of Gary Danielson saying “LSU would be helped by a touchdown on this drive.” It’s meaningless.

If we are interested in raising living standards for everyone, which Strain is saying he is for, then we need to promote the introduction and diffusion of innovations. Is there some either/or choice between promoting innovation and progressive social policies? Do we have to sacrifice innovation if we pursue progressive programs? No and no.

What we know about innovation is that it depends on market size and the stock of people who can do innovation. See any of the econosphere’s recent run of posts on Paul Romer’s original work on endogenous growth. By pursuing the progressive policies Strain is so wary of, we can positively affect both market size and the stock of innovators.

First, the policies let relatively poor families access the existing set of innovations, and the diffusion of these welfare-improving innovations accelerates. Think of Whole Foods. Whole Foods is an innovation in access to relatively healthy food. (Yes, some specific items are just overpriced bulls***, and some specific items are not healthier than other brands, but in general Whole Foods and stores like it make a healthier diet more accessible. I’m married to a nutritionist, I’ve had this conversation more than once.) Many poor families eat unhealthy food because it is cheap. Those progressive social programs give these families the purchasing power to access the innovation that is healthier food. Innovations are useless if no one can afford them.

Second, the incentives for innovation are based on the size of the market. Practically, this means that innovation is geared towards producing ideas for people with money. A concentration of income into a small group means innovation is skewed towards that group. Hello, Viagra. If we’re lucky, perhaps the innovations being sold to that small group have some spillovers in producing innovations that are available for the mass of people. But if you expand purchasing power of the mass of people, this raises the incentives to innovate directly for this mass of people. Rather than hoping we get lucky, the market will actively work to produce innovations that improve welfare of most people, not simply the small group with the most purchasing power. Under certain conditions, a concentration of income actively slows down innovation because there simply aren’t enough people with sufficient purchasing power to make it worth innovating (see Murphy, Shleifer, Vishny).

Finally, those progressive social policies that Strain is worried about expand the stock of people who can do innovation. Kids in poor families who receive income support do better in school. Support for vocational school or college raises the supply of people who are capable of innovation. Alleviating income uncertainty through health insurance and income support means that individuals with risky business ideas can pursue them without fearing they won’t be able to take their kids to the doctor.

So it is important to focus on another of the many fruits of economic growth: It provides the money to make targeted spending programs possible. In a nation as rich as ours, no one should fall too far — no one should go hungry, everyone should have a baseline level of education, no one should be bankrupted by a catastrophic medical event. Slow growth impedes progress toward social goals that require targeted spending, both because of the political climate it fosters and because those goals, even only those that are advisable, are expensive.

This, again, presumes that there is an either/or choice between growth and progressive social policies.

Hungry people are less productive. Uneducated people are less productive. People bankrupted by catastrophic medical events are not productive. Reaching those social goals is as much a contributor to growth as growth is to achieving those social goals. These social goals are not a black hole into which we dump money. They have ramifications – positive ones – for our economy. If Strain wants the U.S. economy to grow faster, then invest in it. Invest in it with better educational opportunities, the elimination of extreme poverty, and the alleviation of the uncertainty associated with medical care. Educated, fed, securely healthy people are productive innovators.

Chad Jones on Paul Romer’s Contribution to Growth Theory

I’m very pleased to host a guest post by Chad Jones celebrating the 25th anniversary of Romer (1990). Enjoy!

If you add one computer, you make one worker more productive. If you add a new idea — think of the computer code for the first spreadsheet or word processor or even the internet itself — you can make any number of workers more productive.

The essential contribution of Romer (1990) is its clear understanding of the economics of ideas and how the discovery of new ideas lies at the heart of economic growth. The history behind that paper is fascinating. Romer had been working on growth for around a decade. The words in his 1983 dissertation and in Romer (1986) grapple with the topic and suggest that knowledge and ideas are important to growth. And of course at some level, everyone knew that this must be true (and there is an earlier literature containing these words). However, what Romer didn’t yet have — and what no research had yet fully appreciated — was the precise nature of how this statement comes to be true. By 1990, though, Romer had it, and it is truly beautiful. One piece of evidence that he at last understood growth deeply is that the first two sections of the 1990 paper are written very clearly, almost entirely in text and with the minimum required math serving as the light switch that illuminates a previously dark room.

Here is the key insight: ideas are different from essentially every other good in that they are nonrival. Standard goods in classical economics are rivalrous: my use of a pencil or a seat on an airplane or an accountant means that you cannot use that pencil, airplane seat, or accountant at the same time. This rivalry underlies the scarcity that is at the heart of most of economics and gives rise to the Fundamental Welfare Theorems of Economics.

Ideas, in contrast, are nonrival: my use of the Pythagorean theorem does not in any way mean there is less of the theorem available for you to use simultaneously. Ideas are not depleted by use, and it is technologically feasible for any number of people to use an idea simultaneously once it has been invented.

As an example, consider oral rehydration therapy, one of Romer’s favorite examples. Until recently, millions of children died of diarrhea in developing countries. Part of the problem is that parents, seeing a child with diarrhea, would withdraw fluids. Dehydration would set in, and the child would die. Oral rehydration therapy is an idea: dissolving a few minerals, salts, and a little sugar in water in just the right proportions produces a life-saving solution that rehydrates children and saves their lives. Once this idea was discovered, it could be used to save any number of children every year — the idea (the chemical formula) does not become increasingly scarce as more people use it.

How does the nonrivalry of ideas explain economic growth? The key is that nonrivalry gives rise to increasing returns to scale. The standard replication argument is a fundamental justification for constant returns to scale in production. If we wish to double the production of computers from a factory, one feasible way to do it is to build an equivalent factory across the street and populate it with equivalent workers, materials, and so on. That is, we replicate the factory exactly. This means that production with rivalrous goods is, at least as a useful benchmark, a constant returns process.

What Romer appreciated and stressed is that the nonrivalry of ideas is an integral part of this replication argument: firms do not need to reinvent the idea for a computer each time a new computer factory is built. Instead, the same idea — the detailed set of instructions for how to make a computer — can be used in the new factory, or indeed in any number of factories, because it is nonrivalrous. Since there are constant returns to scale in the rivalrous inputs (the factory, workers, and materials), there are therefore increasing returns to the rivalrous inputs and ideas taken together: if you double the rivalrous inputs and the quality or quantity of the ideas, you will more than double total production.

Once you’ve got increasing returns, growth follows naturally. Output per person then depends on the total stock of knowledge; the stock doesn’t need to be divided up among all the people in the economy. Contrast this with capital in a Solow model. If you add one computer, you make one worker more productive. If you add a new idea — think of the computer code for the first spreadsheet or word processor or even the internet itself — you can make any number of workers more productive. With nonrivalry, growth in income per person is tied to growth in the total stock of ideas — an aggregate — not to growth in ideas per person.

It is very easy to get growth in an aggregate in any model, even in Solow, because of population growth. More autoworkers mean that more cars are produced. In Solow, this cannot sustain per capita growth because we need growth in cars per autoworker. But in Romer, this is not the case: more researchers produce more ideas, which makes everyone better off because of nonrivalry. Over long periods of recent history — twenty-five years, one hundred years, or even one thousand years — the world is characterized by enormous growth in the total stock of ideas and by enormous growth in the number of people making them. According to Romer’s insight, this is what sustains exponential growth in the long run.

Additional Links

The Romer (1990) paper
Romer’s blog entries on the 25th anniversary of the 1990 paper
Chad’s slides on “Growth and Ideas” (and a more in-depth paper)

Labor’s Share, Profits, and the Productivity Slowdown

There’s been a slowdown in measured productivity growth, particularly in the last few years, but generally since about 2000. This is something that I’ve poked around at several times, and if you’re reading economics blogs like this, then this shouldn’t be a revelation to you.

At the same time, there has been increasing attention given to the fact that labor’s share of GDP has been trending downward over the last 30 years or so. Piketty, perhaps, called the most public attention to the idea, but this is something that other people, like Loukas Karabarbounis and Brent Neiman have been working a lot on lately. The flip side of this declining labor share is a less well-documented sense that this is related to greater rents being collected by firms with more market power (Bob Solow on the topic).

What I want to do here is show how these two trends are related in some fundamental sense through how we measure productivity growth. The TL;DR version is that a falling labor share (and rising profit share of GDP) will necessarily lead to a decline in measured productivity growth, even if underlying innovation doesn’t change. The reason is that if firms have increasing market power, then they are using inputs less efficiently from an aggregate perspective, and measured productivity growth is about how efficiently we use inputs. So increased market power – captured by the decline in labor share – will put a drag on productivity growth.

Lots of math follows. None of it is too daunting, but it did end up pretty dense. When we want to measure productivity, we use a residual, because productivity cannot be directly observed. Call this measured residual productivity term {R}. You calculate it as

\displaystyle  R = \frac{Y}{K^{1-s_L}L^{s_L}} \ \ \ \ \ (1)

where {Y} is GDP, {K} is the capital stock, and {L} is the labor supply (which you could measure in units of human capital if you wanted). The term {s_L} is labor’s measured share in total output.

GDP is assumed to be produced according to a Cobb-Douglas function like

\displaystyle  Y = A K^{\alpha} L^{1-\alpha} \ \ \ \ \ (2)

where {A} is “true” productivity, which is what we are trying to get a measure of. The really important thing to note here is that {K} and {L} are raised to powers that depend on {\alpha}, not {s_L}.

{\alpha} and {1-\alpha} are “true” technological coefficients. They measure how GDP responds to stocks of capital and labor. But we don’t know them. All we know is {s_L}, labor’s share in GDP. We don’t even know capital’s share in GDP; all we know is that {1-s_L} is the left-over amount of GDP paid out as returns to capital and profits.

This wouldn’t be an issue if somehow {s_L = 1-\alpha}. And under a very precise set of conditions, these two things would be equal. If we had competition in output markets, and competition in factor markets, then {s_L = 1-\alpha}. But what are the chances that this describes the real world?

We can make a little headway if we allow for market power. The following relationship is something you can get by simply assuming that firms are cost-minimizers

\displaystyle  s_L = \frac{1-\alpha}{\mu} \ \ \ \ \ (3)

where {\mu} is the mark-up of price over its marginal cost. For example, if {\mu = 2}, then the price charged is twice the marginal cost of production (which is the cost of hiring labor and capital). Under competition, P=MC, so {\mu=1}, and {s_L = 1-\alpha}. But again, do we think we really have true competition at work in the economy? Probably not. So {\mu>1} to some extent.

Now that we know a little about {s_L}, go back to the residual calculation

\displaystyle  R = \frac{Y}{K^{1-s_L}L^{s_L}} = \frac{A K^{\alpha} L^{1-\alpha}}{K^{1-s_L}L^{s_L}} = A\left(\frac{K}{L}\right)^{s_L(1-\mu)}. \ \ \ \ \ (4)

The residual measure of productivity captures not only {A} – true productivity – but also this adjustment for the capital/labor ratio. So {R} is not a clean measure of {A} if {\mu >1}.
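A quick numerical check of that algebra: pick arbitrary values for {A}, {K}, {L}, {\alpha}, and {\mu}, set {s_L = (1-\alpha)/\mu} as in equation (3), and the measured residual comes out exactly as equation (4) says, and not equal to {A}:

```python
# Verify equation (4) numerically. Input values are arbitrary.
A, K, L = 2.0, 500.0, 100.0
alpha, mu = 0.35, 1.4

s_L = (1 - alpha) / mu                   # labor's share under a markup, equation (3)
Y = A * K ** alpha * L ** (1 - alpha)    # "true" production function, equation (2)

R_measured = Y / (K ** (1 - s_L) * L ** s_L)
R_formula = A * (K / L) ** (s_L * (1 - mu))
print(R_measured, R_formula)             # agree up to floating point, and both differ from A
```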

What is the growth rate of the residual measure of productivity? That is

\displaystyle  \frac{\dot{R}}{R} = \frac{\dot{A}}{A} - s_L(\mu-1)\frac{\dot{k}}{k} \ \ \ \ \ (5)

where I used {\dot{k}/k} as the growth rate of the capital/labor ratio, {K/L}. Again, if we had perfect competition and {\mu=1}, then the growth rate of the measured residual, {\dot{R}/R}, would be exactly equal to the growth rate of “true” productivity, {\dot{A}/A}. But once {\mu>1}, this is no longer the case, and what we can measure ({\dot{R}/R}) need not equal what we want to measure ({\dot{A}/A}).

This is a general issue. But it may not be totally deadly, because perhaps at least changes in {\dot{R}/R} could tell us about changes in {\dot{A}/A}. For example, let’s say that {s_L} and {\mu} are constant over time. And assume that the economy is essentially at steady state, so that the capital/labor ratio is growing at a rate proportional to the growth rate of true productivity. Then if the growth rate of true productivity went down, {\dot{R}/R} would fall as well. Working that logic backwards, if the economy is at steady state and {s_L} and {\mu} are constant, then changes in the growth rate of {R} are informative about changes in the growth rate of {A}. The slowdown in measured productivity growth we see in the data would tell us that true productivity growth (innovation?) is also slowing down.

But, this isn’t true if {s_L} and {\mu} are changing. Are they changing? The labor share {s_L} is certainly falling over the last two to three decades. What about the markup, {\mu}? Is that changing?

It’s hard to measure that directly, but I think there is a way to infer that it almost certainly has been rising. Remember that relationship of {s_L = (1-\alpha)/\mu}? That came from assuming that firms are cost-minimizing (not necessarily profit-maximizing even, just cost-minimizing). That cost-minimization problem also implies that the following has to be true

\displaystyle  \text{Returns to scale} = \mu (1-s_{\pi}). \ \ \ \ \ (6)

“Returns to scale” captures the returns to scale of the true production function. What I wrote above has constant returns to scale ({\alpha} plus {1-\alpha} add up to 1), and so the returns to scale are equal to 1. We can have a long argument about whether that is correct or not, but it isn’t actually crucial for the point I’m making here.

{s_{\pi}} is the share of GDP that gets paid out as profits – a.k.a. rents. What this relationship says is that if the share of output going to rents rises, then so must the markup. Or think about it the other way. If firms can charge higher markups, they must be earning more in rents/profits. This is just a mechanical relationship, so it doesn’t necessarily have to be driven by one or the other.

Let’s put this all together. We’ve had a decline in the labor share of GDP, {s_L}, over the last few decades. By necessity, this implies that the share of GDP going to rents or payments to capital has risen. If the share of GDP going to rents, {s_{\pi}}, went up at all, then the markup being charged by firms, {\mu}, must have risen as well.

Let’s throw some numbers at this. Assume that {\dot{k}/k = 0.015} over the last 30 years. Let the true growth rate of innovation be {\dot{A}/A = 0.02} over the entire last 30 years (yes, an assumption). Start out 30 years ago by assuming the labor share is {s_L = 0.65} and that the markup is {\mu=1.1}, so firms charge 10% over marginal cost. This means that measured productivity growth is

\displaystyle  \frac{\dot{R}}{R} = 0.02 - 0.65\times(1.1-1)\times0.015 = 0.019 \ \ \ \ \ (7)

or about 1.9% per year. This is pretty close to what you see in the data for the period from 1948-1973.

Now, let the labor share fall to {s_L = 0.60}, and let the markup rise to {\mu = 1.5}. This is a pretty big markup, but for the moment I’m just trying to establish a point, so bear with me. We get that measured productivity growth is

\displaystyle  \frac{\dot{R}}{R} = 0.02 - 0.6\times(1.5-1)\times0.015 = 0.0155 \ \ \ \ \ (8)

or only about 1.55% per year. Measured productivity growth has fallen, even though the underlying true productivity growth rate did not change at all.
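Here is the same back-of-the-envelope in code, along with the break-even calculation discussed below (all inputs are the ones used in the text):

```python
# Replicate equations (7) and (8), plus the break-even calculation below.
k_growth = 0.015     # growth of the capital/labor ratio
A_growth = 0.020     # assumed true productivity growth

def measured_growth(s_L, mu):
    """Growth of the measured residual, equation (5)."""
    return A_growth - s_L * (mu - 1) * k_growth

early = measured_growth(s_L=0.65, mu=1.1)    # ~0.019
late = measured_growth(s_L=0.60, mu=1.5)     # ~0.0155
print(early, late)

# How much would true productivity growth have to rise for measured growth
# under the higher markup to match the earlier 1.9%?
break_even = early + 0.60 * (1.5 - 1) * k_growth
print(break_even, break_even / A_growth - 1)  # ~0.0235, a rise of nearly 18%
```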

The point is that lower measured productivity growth – {\dot{R}/R} – does not necessarily mean that actual innovation has slowed down. The decline in labor share is consistent with a rise in markups (and profit’s share of output), which will produce a drag on measured productivity growth, {\dot{R}/R}. I don’t think this story explains all of why measured productivity growth has fallen recently, but it probably plays a part.

Measured productivity growth is about how efficiently we use our inputs, and that is only partially related to the true rate of innovation. Measured productivity growth also depends on market power, because that also dictates how efficiently we use our inputs. If firms are gaining market power – meaning they can charge a higher markup – then this implies that they will use inputs less efficiently from a social perspective. Each individual firm is producing less than the amount they would under competition (with price = marginal cost), and so we are not getting everything we can out of our inputs. If market power has increased, this exacerbates that issue, and so measured productivity – the efficiency of input use – will fall.

You cannot look at measured productivity growth, {\dot{R}/R}, and make any definitive conclusions about what is happening to true innovation or productivity growth. You cannot infer that recent innovations are less useful or productive than those that came before just because {\dot{R}/R} is falling. It may be that the policies and norms transferring some share of GDP from labor to profits/rents are pushing down the growth rate of measured productivity as well.

It’s also quite possible that you could actively work to curtail the profit share of GDP – through taxes or regulation or whatever – and yet see measured productivity rise as the markup goes down. Think about the example above, and how measured productivity growth is higher even though the markup (and hence the profit share) is lower.

Or think about the opposite situation, where you propose a policy that actively favors the profit share (lower taxes on businesses or entrepreneurs, weaker labor laws, allowing concentration of industries). It isn’t even theoretically true that this will necessarily lead to higher measured productivity growth. In the example above, any policy that tried to use lower labor shares and higher markups would have to raise the underlying growth rate of innovation by nearly 18% – from 2% to about 2.35% per year – just to break even. That is a massive change, and I think it is fair to be completely skeptical that any of those policies could raise underlying rates of innovation by that much.

There is not an either/or choice between rapid productivity growth and a higher labor share. Repeat after me: there is not an either/or choice between rapid productivity growth and a higher labor share.

A last point is that we do care explicitly about measured productivity growth if we care at all about GDP. Measured productivity growth tells us how efficiently we use inputs to produce GDP, so anything that makes measured productivity go up – better technology ({A}) or lower markups – is good for us in terms of producing GDP.

Constant versus Balanced Growth

Every theory of economic growth that I can think of is written to deliver “balanced growth” in the long run. Balanced growth means not only that the growth rate is constant as time goes off to infinity, but also that control variables (the savings rates, the fraction of labor allocated to R&D, the fraction of spending on education) are constant as well as time goes off to infinity.

Growth theories work hard to achieve this balanced growth, often making assumptions about functional forms to ensure that the model delivers balanced growth. Why?

The reasoning is that this is what we see in the data. Output per worker grows at roughly a constant rate in, at least, the major Western economies. If you’ve read this blog, you’ve seen a figure like this, which shows the constant growth rate over long periods of time for several economies.

But notice that all this figure indicates is that the growth rate has been constant for a Long Time. But a Long Time is not infinity, and constant growth of GDP per capita does not necessarily imply that we have a situation of balanced growth.

Just because growth is constant, this doesn’t mean that the control variables underlying growth are also constant, as is necessary for balanced growth. It is possible that we have achieved constant growth because of a fortuitous coincidence of control variables growing at rates that just offset each other such that output per person grows at a constant rate.

Chad Jones, and in a recent update Jones and Fernald, have explored whether in fact we should think of growth (in the US) as balanced growth or just as constant growth. What Jones originally suggested, and Jones and Fernald reassert, is that the control variables underlying growth in the US are not constant. Hence, the experience of the US up through 2015 may not represent balanced growth. And this in turn implies that we cannot necessarily expect the US to continue to follow the same constant growth rate in the future.

Jones and Fernald break down the roughly 2% growth in output per capita in the U.S. from 1950 to 2007 as follows:

  • 0 percentage points due to capital deepening. In short, the capital/output ratio in the US has remained roughly constant.
  • 0.4 percentage points due to increasing human capital. This is calculated from the fact that average years of schooling were rising in this period.
  • 0.4 percentage points due to scale effects. This captures the fact that increasing population generates more people doing R&D as well as larger markets that increase incentives to do R&D.
  • 1.2 percentage points due to increasing R&D intensity, meaning that the share of the labor force engaged in R&D was growing.

Of these, the increase in human capital and the increase in R&D intensity both reflect growth in control variables. In short, neither can grow forever, as they are bounded. Years of schooling is bounded by life-span (and actively removes labor from production) and the share of workers engaged in R&D cannot go above 1. So by necessity, both of those terms cannot continue to grow forever, and hence growth would have to fall below 2% at some point.
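As a bit of bookkeeping on that decomposition (the numbers come straight from the list above):

```python
# Sum the Jones and Fernald contributions, then drop the two bounded
# "control variable" terms to see what is left for the long run.
contributions = {
    "capital deepening": 0.0,
    "human capital": 0.4,        # bounded: years of schooling cannot rise forever
    "scale effects": 0.4,
    "R&D intensity": 1.2,        # bounded: the R&D share of workers cannot exceed 1
}
print(sum(contributions.values()))    # 2.0 percentage points per year

long_run = {k: v for k, v in contributions.items()
            if k not in ("human capital", "R&D intensity")}
print(sum(long_run.values()))         # 0.4 percentage points per year
```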

We can already see in the data that average years of education is starting to level off at about 14. And so that 0.4 p.p. we got from growing human capital may begin to disappear in the near future.

The percent of workers doing R&D has been generating much of the growth we saw over the last 50-60 years, according to Jones and Fernald. Can this continue? As I said, not forever, as that share is bounded above by 1. But we have to be careful here, as this is not just the share of workers in the US doing R&D, but something like a weighted average of the share of workers doing R&D across all countries. As China and India ramp up their shares, this can continue to pump up R&D intensity for potentially a long time. There is no obvious leveling off of R&D share, as there is with education. So perhaps we can continue on this constant (but not balanced) growth path for decades or a century longer?

Of all the terms above, only the scale effect is not a control variable, and hence is capable of continuing to provide growth forever. This means that the underlying balanced growth rate of the economy may be as low as 0.4% per year. But even that may be an overestimate, as population growth is slowing down over time.

Are we doomed to eventually see the growth rate slow down to 0.4% per year or less? Possibly. But underlying this all is a very distinct assumption about how technology evolves. Jones and Fernald, as well as nearly all growth models, assume that the flow of technology rises as technology accumulates, but at a decreasing rate. This reflects the concept that it is harder to invent new things as the number of technologies increases. By itself this would imply that the growth rate goes to zero, and it is only offset by the growth in the absolute number of R&D workers. If we stop jacking up the share of workers doing R&D, and the population size levels off, then we will no longer be able to offset this tendency for the growth rate of technology to fall towards zero.

But what if the flow of technology doesn’t have this tendency to decrease with the level of technology? We make that assumption because it delivers balanced growth in our models, but that doesn’t mean it is true. What if AI, or robots, or quantum computing, or BIG DATA, or the singularity, or aliens, or something else means that if we hit a certain level of technology, the flow of new technologies explodes? Then even with a constant number of R&D workers (or perhaps even a declining number) we could see technological growth rise and economic growth with it.

The question of what happens to growth over the next few decades boils down to two sub-questions. (A) Will the intensity of R&D effort level off, or will rich countries as well as India and China continue to push greater proportions of their resources and people into R&D? If the latter, then growth can be kept close to 2% for a long time. (B) Will there be a fundamental shift in the nature of technological progress?

A little aside from this discussion is that it isn’t exactly clear why we work so hard to make sure our growth models produce balanced growth, when we don’t necessarily see balanced growth in the data. Maybe it’s okay if your growth model has a very long-run prediction that growth is zero. We might just be on the transition path towards that zero growth rate, but it takes a very long time to get there. In the meantime, your model could be a good indicator of what is going on.

What is the Neo-classical Growth Model good for?

This is a long post. It is partly in response to some questions from first-year grad students, and so it can be considered as notes for a lecture that might potentially be given to them at some point in the near future. For all that, it isn’t pitched at a high mathematical level. I think you could understand this without having ever done any kind of dynamic optimization work before.

Do savings matter for growth? This is an ill-formed question for several reasons. First, implicitly hiding in that question is the assumption that savings = investment. That doesn’t necessarily have to be true, and the events of the financial crisis in 2008 provide some evidence of this. So let us be more careful and say “Do investment rates matter for growth?”.

What do we mean by growth? You could really be referring to several things. “Do investment rates matter for the level of output per capita?” or “Does the investment rate today matter for the growth rate of output per capita today?” or “Does the investment rate – on average – matter for the trend growth rate of output per capita?”.

Let’s start with the simple cross-sectional relationship of investment rates and the level of GDP per capita. The following figure is from 2010, using Penn World Table 8.1 data. What you can see is that there is a (noisy) positive relationship between the two. Places with higher investment rates tend to be richer. If you used average investment rates over the last two decades or some other smoothing method, you’d get a similar picture. So we have some evidence that more investment is associated with a country being richer.

[Figure: Investment and GDP per capita]

Hsieh and Klenow (2007), following others, noted that investment rates measured in PPP prices varied a lot more than investment rates measured in domestic prices. More simply, the share of domestic income that is spent on investment goods is similar across countries. However, the amount of real investment goods that this spending can buy tends to be very low in poor countries. This doesn’t mean that higher investment rates do not generate higher income per capita, it just means that there is a lot less variation in investment rates than we might think. This implies that the ability of differences in investment rates to explain cross-country differences in income per capita is limited. Hsieh and Klenow go on to show how it probably has more to do with differences in productivity in the investment good sector.

[Figure: Domestic Investment and GDP per capita]

What about the effect of investment rates on growth rates, not the level of GDP per capita? Here we dip into the whole world of cross-country growth regressions. See Barro or Mankiw, Romer, Weil, or any of a few thousand papers from the 1990s. The simplest form is to regress the average growth rate of GDP per capita over some period (say 1960-2000) against initial GDP per capita (in 1960) and the investment rate (averaged over 1960-2000 as well). If you run this regression, then yes, you’ll get a positive coefficient on the investment rate. Places with higher investment rates grow faster – conditional on initial GDP per capita.

That conditional is crucial. Those regressions are not saying that higher investment rates raise the long-run growth rate of an economy. They are only saying that the level of GDP per capita will be higher along the balanced growth path of places with higher investment rates. The long-run growth rate along that balanced growth path is probably similar across all those countries. If you remember the post I did about convergence and the analogy with people driving down the highway, these regressions tell us that investment rates allow you to move farther up the line of cars, but ultimately you still have to settle in behind the sheriff and go 65.

Why is there a relationship between investment rates and growth? The “Duh” answer to this is that capital matters for producing output. More investment, more capital, more output. What we really want to ask is “Can we describe the dynamics of investment and growth?”.

The workhorse model for these dynamics is often called the Neo-classical growth model. It is often the first dynamic model you learn in graduate school, and it sits at the heart of all your favorite (or despised) DSGE models. The Neo-classical growth model goes by a lot of names. I tend to call it the Ramsey model, and will do so throughout this post. It’s also sometimes referred to as the Cass-Ramsey-Koopmans model, or more bluntly as the Solow-model-with-endogenous-savings.

The Ramsey model is a model of investment and growth. It is not a model of the effect of investment on growth. The effect of investment on growth is baked into the Ramsey (and the Solow, for that matter) by the accumulation equation for capital

\displaystyle  \Delta K_{t+1} = I_t - \delta K_t \ \ \ \ \ (1)

and the assumption that output depends on capital, {K_t}. If you raise {I_t}, then you accumulate more capital, and if you accumulate more capital output goes up. This is purely mechanical.

In the Solow model, {I_t} is a fixed fraction of output. In the Ramsey model, the choice of how much of output to invest – how big {I_t} will be – is determined by a forward-looking optimization problem, typically with a representative agent. Hence investment and growth are jointly determined in the Ramsey model, and both are dictated by the current state of the economy, which is captured by the current capital stock, {K_t}.

Does the Ramsey provide a good description of the world? More precisely, does it provide a good description of how investment rates are related to GDP per capita or the growth of GDP per capita?

For explaining the cross-country relationship of investment and levels of output per capita, the Ramsey model is no better than the Solow model. The Solow says that countries with higher exogenous investment rates (typically denoted {s}) will be richer. The Ramsey says that countries with higher exogenous patience preference parameters (typically denoted {\beta} or {\rho}) will be richer. This is a distinction without a difference. The Solow might be superior here, in that it leaves open any possible reason for investment rates to differ, while the Ramsey forces you into thinking about patience parameters.

For explaining the time-series relationship of investment and output per capita, or investment and growth, the Ramsey model can actually offer us something. The Solow model’s fixed investment rate is just that, and hence there is no way (aside from a series of remarkable exogenous coincidences) that the investment rate will change as a country becomes richer or poorer. In contrast, the Ramsey model’s whole raison d’etre is to describe how investment rates change as output per worker changes.

What does the Ramsey model predict? That depends on the specific parameter values you choose. You can get the Ramsey model to predict investment rates that rise with output per capita or fall with output per capita if you tweak things just right. If you try to discipline the parameters in the typical way done in macro – a capital share of about 0.3-0.4, an inter-temporal elasticity of substitution of about 1/3 – then you get that savings should fall as output per worker rises. In other words, when a country is below its balanced growth path, it is predicted to save a big fraction of output, and this fraction is predicted to decline as it approaches its balanced growth path. Basically, the “high” elasticity of substitution means people are willing to invest a lot today (i.e. consume little) in return for lower investment rates in the future (i.e. high consumption).

Does this prediction hold up? In some cases, it looks great. Here’s a plot of the investment rate in Germany from just after WWII until now, along with the forward-looking 10-year average growth rate of output per capita. This looks qualitatively like exactly what the Ramsey model predicts.

[Figure: Germany – investment rate and forward-looking 10-year average growth rate of output per capita]

So everything is great, right? Not quite. There are a number of other examples where the pattern looks nothing like what the Ramsey predicts. Here’s a plot for Japan: from 1950 to 1980 the pattern is exactly the opposite of what the Ramsey says, though perhaps it works from 1980 forward?

[Figure: Japan – investment rate and forward-looking 10-year average growth rate of output per capita]

Or consider Korea, where the spike in growth rates precedes the spike in investment rates, and then once growth starts to slow down after 1980 as Korea approaches the balanced growth path, the investment rate stays level. Or India, where there’s been a spike in investment rates recently, well after growth started to accelerate.

[Figure: Korea – investment rate and forward-looking 10-year average growth rate of output per capita]

[Figure: India – investment rate and forward-looking 10-year average growth rate of output per capita]

This doesn’t necessarily mean the Ramsey is wrong, but to explain these patterns we need to start delving into remarkable exogenous coincidences again. Productivity shocks that happen at just the right time, or unexplained shifts in patience parameters at exactly the right moment.

There’s another issue, though, which is that even the Germany figure doesn’t match the prediction of the Ramsey model. Under the typical parameters, the Ramsey says that both investment and the growth rate should have declined much faster than they actually did. Another way of saying this is that convergence speeds are predicted to be incredibly high in the Ramsey model. For Germany, given the starting point in 1950, the growth rate should have already dropped to about 2.5% and the investment rate to 20% by about 1965.

I’ve mentioned before that empirically, the convergence rate is about 2% per year, meaning that 2% of the gap between actual GDP and the balanced growth path closes each year. That estimate comes out of all sorts of settings. The Ramsey model, in contrast, predicts convergence rates of up to 30-40% per year for countries like Germany in 1950. That is not even in the realm of the empirically plausible.
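A quick way to see how far apart those magnitudes are is to translate convergence rates into half-lives of the gap. The snippet below is just that arithmetic, using the continuous-time approximation that the gap decays like exp(-λt), so its half-life is ln(2)/λ.

```python
# Half-life of the gap to the balanced growth path for different convergence
# rates: the empirical 2% versus the 30-40% the Ramsey model can predict.
import math

for lam in (0.02, 0.30, 0.40):
    print(f"lambda = {lam:.0%}: half-life of the gap is about {math.log(2) / lam:.1f} years")
```

At 2% per year, half of the gap takes about 35 years to close; at 30-40% per year it would take only about two.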

A more thorough examination of the correlation between investment rates and growth can be found in the Attanasio, Picci, and Scorcu (2000) paper, which builds on Carroll and Weil (1994). Both find that, if anything, higher investment rates Granger-cause lower growth rates. They also find that higher growth rates Granger-cause higher investment rates. In other words, a shock to growth is likely to be followed by higher investment rates in the future – which is backwards from what is baked into the Ramsey model. Second, shocks to investment rates are likely to be followed by lower growth rates. You could perhaps read that as higher investment causing convergence, and hence a falling growth rate, but it is hard to reconcile with what we’d expect to see in the Ramsey model.

So… what exactly is the Ramsey model good for? Off the shelf, the Ramsey is a poor descriptor of the time-series evidence, and is needlessly complex for explaining cross-sectional relationships. But in seeing how it fails, you can understand what might need to be added to match the time-series data.

To “slow down” the convergence speed in the Ramsey model, for instance, you can drop the assumption that all capital is perfectly substitutable across firms. I did this with Sebnem Kalemli-Ozcan and Indrit Hoxha, and once you lower the elasticity of substitution to 3 or 4, then predicted convergence speeds in the Ramsey become reasonable.

To match the Granger-causation of growth to investment rates, you can break the assumption that preferences are time-separable. This is what Carroll, Overland, and Weil (2000) do by adding habit formation into the Ramsey. Basically, your marginal utility depends on how much you consume today and how much you consumed yesterday. Because of this, your response to shocks is relatively slow, and so when growth ramps up (due to a productivity shock, for example) the savings rate doesn’t instantly jump up, but takes a while to respond.
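To fix ideas, one simple way to write down such preferences – a generic habit-formation specification, not necessarily the exact one used by Carroll, Overland, and Weil – is

\displaystyle  u(C_t, C_{t-1}) = \frac{\left(C_t/C_{t-1}^{\gamma}\right)^{1-\theta}}{1-\theta},

where {\theta} is the coefficient of relative risk aversion (the inverse of the intertemporal elasticity of substitution), {\gamma} governs how strongly yesterday’s consumption drags on the utility you get from today’s, and {\gamma = 0} takes you back to the standard time-separable case.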

To capture the time-series experiences of places like India or Japan you could appeal to some kind of exogenous coincidence. Just because they are implausible doesn’t mean they can’t occur. S*** happens, and so the Ramsey could be totally right, just masked by a series of crazy shocks to productivity, preferences, institutions, etc. The take-away from this could be that you should stop worrying about trying to explain growth as something that has a common process in all places, and focus on explaining the growth of a specific place in detail.

You can also remember that the aggregate data we are looking at is exactly that: aggregate. It is the summation of millions of little individual decisions, and so the representative agent in the Ramsey model is probably not a good approximation. But saying that you should allow for heterogeneity in the model is a lot like saying that you should study each country individually, as the heterogeneity is going to be unique.

Study Slavery to Study the Impact of Robots on Workers

I started writing a post collecting several recent pieces about robots/technology and their impact on workers. Let me quickly post those:

As I started trying to sketch out what to write, I realized that I was just barfing up a big pile of confirmation bias. I read the above articles and was able to convince myself that they supported my general “robo-optimistic” outlook. That is consistent with my opinion that our fetish for manufacturing jobs and output is just that, a fetish.

There are, of course, lots of “robo-pessimists” out there who feel that technology is going to be very bad for labor. Low wages, or even mass unemployment, are possible consequences. I’ve gone back and forth a little in the past with Richard Serlin, who generally falls into the robo-pessimist camp.

Rather than writing a post that says, essentially, “Those posts agree with my priors”, let me try to switch gears. What would constitute a good argument for robo-pessimism? In other words, what kind of argument would make me change my mind about this?

A common analogy used by robo-pessimists is the horse. As engines, and particularly internal combustion engines, came into use, horses were made obsolete. The absolute population of horses has declined dramatically over the last 100 years because they became costly relative to using an engine to drive your cart around. When robots can do what humans do, the analogy goes, humans will become costly relative to using robots, and so humans will become completely unemployable.

I don’t think this is a particularly good argument, mainly because horses have no ability to innovate for themselves. No horse ever looked around and said, “You know, I feel like there is more I could do.” Horses didn’t offer to become drivers of the new horseless carriages, nor did horses think to learn how to repair engines or build them so that they had something to do besides pull wagons around.

But people can innovate and invent entirely new jobs for themselves. If you tell me that people won’t possibly be able to innovate new jobs when the robots arrive, then I think you have a ridiculously low opinion of people. And it isn’t necessary that everyone innovates, just a few who invent new jobs and professions that we cannot possibly think of today. If we could, we would have invented them already.

In place of the horse analogy, let me suggest a line of reasoning that, to me, has a better chance of producing a convincing argument for robo-pessimism: slavery. What you want are examples of free people being put out of work by slaves. I’m not talking about the effect of being enslaved, which is clearly negative. I want to know the outcomes of those who remain free (small-time white farmers in the South) when workers are introduced who can exactly mimic the skills of the free workers (slaves). These are free humans being replaced by the equivalent of a living robot, and those free workers can still innovate new work for themselves. The story thus doesn’t run into the problem that the horse analogy has.

In addition, the slavery comparison doesn’t require us to think about capital-labor complementarity, and it doesn’t rely on the introduction of a new technology that both eliminated some tasks (hand weaving) but created others (monitoring weaving machines). Slaves effectively are robots, for the purposes of the economic discussion here. They can perfectly replace free labor, but do not necessarily create any kind of other work for free labor in their wake.

I haven’t read it in a long time, but I think Gavin Wright’s Old South, New South is where I’d start. Slavery ensured that wages were kept extremely low for free workers in the South. Those low wages may have created conditions that encouraged new businesses or industries to locate there, and this is in fact what Wright suggests happened in the early 20th century, after slavery was abolished. But while slavery was in place, industry did not develop in the low-wage South.

One reason for this is that slave-owners were “labor-lords”, not “land-lords” (Wright’s terms). They had no incentive to build up the value of land, as they could simply relocate further west with their major asset and start over. Thus the improvements that helped make the North more prosperous for free workers – railroads, schools, dams, ports, etc. – were not built. Without those improvements, industry could not or would not relocate to the South.

So to use slaves as an analogy for robots and their effect on free workers: the case for robo-pessimism is not that robots leave free workers with nothing to do, but that they free the robot-owners from any incentive to invest. Since robot-owners are free to move their capital to new locations, what incentive do they have to pay for the equivalent of railroads or ports or infrastructure?

In addition, despite having decades to come up with something else to do, the free workers of the South never created a new set of jobs or activities that allowed them to keep up with the North. They remained poor farmers, and did not (could not?) coordinate to build the infrastructure themselves. They were left to essentially scrape out a living as subsistence farmers without many connections to the broader economy. What you’d want to do is present me with evidence that free white living standards were pushed down by the introduction of slavery. Perhaps wages in areas of the South prior to the arrival of cotton versus after?

Those last few paragraphs are the result of maybe thirty minutes of thinking about robots from this perspective. If you wanted to make a really compelling argument for robo-pessimism, I think taking this analogy and running with it would be the way to go. It’s one avenue by which I think I could be convinced to switch from vaguely robo-optimistic to robo-pessimistic.

So consider this post a bleg. Are there good historical examples of the introduction of slaves into economies where we can observe the effect on the non-slave population?

As a last aside, I think this post is instantly #1 for “Titles that tell you everything important about a post”.

Why Information Industrial Classification Diversity Grows

I read Cesar Hidalgo’s Why Information Grows. Going into it, I really wanted to like it. I really wanted it to give me some insight into one of those fundamental growth questions: what drives the speed of knowledge acquisition?

This is not that book. The beginning is fun for describing basic information theory, and its relationship to entropy. It has some neat examples of how we end up “saving” information from entropy by encoding it in solids like cars, houses, or even the organized binary digits on my computer. But when it comes to translating this into an explanation for why economies grow, there is a breath-taking amount of hand-waving. I could feel the breeze whistling out of my Kindle as I read it.

In the end, Hidalgo says some places are rich because they have complex production structures, meaning they produce goods or services that require a large number of people or firms to interact in some kind of network. These networks embody the “knowledge and knowhow” of the economy. I haven’t quite decided whether this is tautological, but it’s close.

He attempts to offer evidence in favor of his claims by appealing to the data he built with Ricardo Hausmann. This uses detailed export data to build a measure of how complex (read: diverse) a country’s set of exports is.

There are a few issues with trying to use this data on complexity with any explanation of economic growth, much less information theory.

1. The measures of complexity are built on export data. That’s because you can get data on exports that is very fine-grained in terms of products – “6-digit” for those in the business. A 6-digit classification means you’ve got things like 312120 – Breweries, or 424810 – Beer and Ale Wholesalers. Export data is also great because you can get it bilaterally for a lot of countries. You have data on how much beer Belgium exports to the US, and how much beer the US exports to Belgium.

Export data is available at this level of detail because the transactions get funneled through customs procedures, usually at a limited number of geographic points (i.e. ports), which lets you track them closely. You cannot get similar data for an entire economy because there is no equivalent to customs houses tracking the minutiae of all your day-to-day purchases. Yes, conceptually that data is out there in Target’s or Whole Foods’ computers, but we don’t track domestic transactions at that level centrally. That leads to the first issue: just because you don’t export a diverse set of products doesn’t mean you don’t have a complex economy. The vast, vast, vast majority of economic transactions are domestic-to-domestic, even in countries with large export sectors. So while I buy that an index of complexity built on export data is highly correlated with actual complexity, it doesn’t necessarily measure total complexity.

2. What is more of a problem is that the measure of complexity is built on the given NAICS system of coding products. As I’ve mentioned before, these kinds of industrial classifications are skewed towards tracking manufactured goods, and have not caught up to the complexity of services and the like. The 6-digit code 541511 is “Custom Computer Programming Services”. That covers essentially all types of software work: web design, sys admins, app designers, legacy COBOL programmers, etc.

In comparison, code 311111 is “Dog and Cat Food Manufacturing”, and 311119 is “Other Animal Food Manufacturing”, like rabbit, bird, and fish food. So we are careful to track the difference in economic activity based on whether processed lumps of food goo are served to dogs as opposed to bunnies. But we do not distinguish someone designing Flappy Bird from someone doing back-end server maintenance.

This means that your measured level of complexity depends simply on how detailed NAICS gets. Take two towns. In one, a single factory produces both dog food and rabbit food, and the town exports both. This town looks complex because it exports in two separate NAICS categories. In the second town, several firms do outsourcing for major companies, with different firms doing web design, server maintenance, custom C++ programming, and say three or four other activities. Because all those programming activities fall under a single NAICS category, this second town appears to have a less complex economy. The “knowledge and knowhow” in the second town is likely larger, but NAICS cannot capture this.
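As a toy illustration of how much the classification itself drives the answer, here is a little Python sketch of those two towns. The town names and firm lists are made up, only the NAICS codes come from the discussion above, and this is not the actual Hausmann-Hidalgo complexity calculation – just a count of distinct codes.

```python
# Toy example: "complexity" proxied by the number of distinct 6-digit NAICS
# codes a town exports. The towns and firms are hypothetical; the codes are
# the ones discussed in the text.
town_export_codes = {
    "pet-food town": [
        "311111",  # Dog and Cat Food Manufacturing
        "311119",  # Other Animal Food Manufacturing (rabbit food)
    ],
    "software town": [
        "541511",  # Custom Computer Programming Services (web design firm)
        "541511",  # Custom Computer Programming Services (server maintenance firm)
        "541511",  # Custom Computer Programming Services (custom C++ shop)
    ],
}

for town, codes in town_export_codes.items():
    print(f"{town}: {len(set(codes))} distinct codes across {len(codes)} activities")
```

The pet-food town scores twice as “complex” as the software town, purely because the classification happens to split animal food more finely than it splits programming.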

This is like saying that bacteria are less genetically diverse than eukaryotes because bacteria are all in one kingdom, while we happen to classify eukaryotes into five: protozoa, algae, plants, fungi, and animals. But bacteria are known to be more genetically diverse across species than eukaryotes. If you focus on the arbitrary divisions, things can look more or less diverse based solely on your choice of those divisions.

3. Leave aside all the complaints about the measure of complexity. Hidalgo tries to show how important complexity is for explaining economic growth by… running a growth regression. He doesn’t call it that. He plots GDP per capita against economic complexity in 1985, and there is a positive relationship. He then says that countries with GDP per capita below the level expected given their complexity in 1985 grew faster from 1985 to 2000, and that this justifies his theory. But that is just a growth regression, except without any explicit coefficient estimate or standard error.

Several issues here. First, he doesn’t bother to mention whether the relationship is statistically significant or not. Second, we’ve spent twenty years in growth economics complaining about exactly these kinds of regressions because they are completely unidentified. He doesn’t even try to control for any of the obvious omitted variables like savings rates or population growth rates. Most likely, complexity is just another entry on the long list of things that are correlated with high incomes – institutions, savings, a lack of corruption, etc. – and we have no idea whether any of them are causal.
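For concreteness, here is roughly what that exercise amounts to once you write it down as a regression: growth from 1985 to 2000 on initial log GDP per capita and the 1985 complexity index. The data below are randomly generated placeholders of mine, purely to show the mechanics; nothing here reproduces Hidalgo’s numbers.

```python
# Sketch of the implicit growth regression, on made-up data. The point from
# the text stands: without standard errors or controls (savings, population
# growth, institutions, ...), the complexity coefficient says nothing about
# causality.
import numpy as np

rng = np.random.default_rng(0)
n = 120
log_gdp_1985 = rng.normal(8.0, 1.2, n)                        # fake initial income
complexity_1985 = 0.6 * log_gdp_1985 + rng.normal(0, 1, n)    # fake index, correlated with income
growth_85_00 = 0.10 - 0.012 * log_gdp_1985 + 0.006 * complexity_1985 + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), log_gdp_1985, complexity_1985])
coefs, *_ = np.linalg.lstsq(X, growth_85_00, rcond=None)
print("constant, initial log GDP, complexity:", np.round(coefs, 4))
```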

Somewhere in there, perhaps invisible behind the blur of waving hands, is some kind of insight into how information expands and builds upon itself. That would have been an interesting contribution to our thinking on growth. But the book, as it is, fails to provide it.

Hip-Hop History of Macro

Do you find yourself a little lost trying to keep up with the history of macro posts that Romer (here, here, here), DeLong (here, here), and others have been posting? What did Lucas do, or not do, to change macro? Was it that big of a change? What is this saltwater versus freshwater stuff?

I’m here to help. The history of macro closely parallels the history of hip-hop, even down to the importance of the late 1970’s and early 80’s. Let me help you keep track of what is going on.

  • Solow and Tobin are Otis Redding and Sam Cooke.
  • Milton Friedman is James Brown.
  • Lucas and Sargent are Slick Rick and Doug E. Fresh. Their 1978 paper is the “La Di Da Di” of macro papers. Everyone samples from it.
  • Ed Prescott is Public Enemy’s Chuck D, which makes Finn Kydland Flavor Flav. Their 1982 “Time to Build” is the It Takes a Nation of Millions to Hold Us Back of macro papers.
  • Robert Mundell (Run?) and Stan Fischer (Daryl?) are Run DMC.
  • Minneapolis is South Central LA.
  • The collection of economists at the Minneapolis Fed and U. of Minn. are N.W.A. (Play at home! Try to link specific economists to Dre, Eazy-E, Ice Cube, and MC Ren.)
  • New Keynesians are East Coast rap. Mike Woodford is Nas, Larry Summers is the Notorious B.I.G., and Blanchard, Mankiw, and Romer are all in the Wu-Tang Clan.

I spent way too much time thinking about this during a long car drive. But once you start, it’s hard to stop. There are so many unanswered questions. Who are the Eric B. and Rakim of real business cycles? What is the academic equivalent of the Tupac/Biggie feud, and who is the Suge Knight of macroeconomists? Is Bob Hall like Michael Jackson (not the weird stuff, the massively talented stuff) relative to hip-hop?

I had to work hard to stop myself from trying to do this with other fields. But in case you were wondering, Paul Romer is Kurt Cobain and Aghion and Howitt are Pearl Jam. I may have also convinced myself that Acemoglu is Beyonce and Raj Chetty is Taylor Swift.