Has the Long-run Growth Rate Changed?

My actual job bothered to intrude on my life over the last week, so I’ve got a bit of material stored up for the blog. Today, I’m going to hit on a definitional issue that creates lots of problems in talking about growth. I see it all the time in my undergraduate course, and it is my fault for not being clearer.

If I ask you “Has the long-run growth rate of the U.S. declined?”, the answer depends crucially on what I mean by “long-run growth rate”. I think of there as being two distinct definitions.

  • The measured growth rate of GDP over a long period of time: The measured long-run growth rate of GDP from 1985 to 2015 is {(\ln{Y}_{2015} - \ln{Y}_{1985})/30}. Note that here the measurement does not have to take place using only past data. We could calculate the expected measured growth rate of GDP from 2015 to 2035 as {(\ln{Y}_{2035} - \ln{Y}_{2015})/20}. Measured growth rate depends on the actual path (or expected actual path) of GDP.
  • The underlying trend growth of potential GDP: This is the sum of the trend growth rate of potential output per worker (we typically call this {g}) and the trend growth rate of the number of workers (which we’ll call {n}).

The two ways of thinking about long-run growth inform each other. If I want to calculate the measured growth rate of GDP from 2015 to 2035, then I need some way to guess what GDP in 2035 will be, and this probably depends on my estimate of the underlying trend growth rate.
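To make that interaction concrete, here is a minimal sketch of the two definitions and how they feed each other (every number in it is made up for illustration):

```python
import numpy as np

# Hypothetical real GDP levels (trillions) and assumed trend growth rates
y_1985, y_2015 = 8.6, 16.3
g, n = 0.015, 0.005   # trend growth of output per worker and of workers

# Definition 1: the measured long-run growth rate over a past window
measured_past = (np.log(y_2015) - np.log(y_1985)) / 30   # about 2.1% per year

# Definition 2 feeding definition 1: project 2035 GDP off trend growth (g + n),
# then compute the expected measured growth rate from 2015 to 2035
y_2035 = y_2015 * np.exp((g + n) * 20)
expected_future = (np.log(y_2035) - np.log(y_2015)) / 20  # equals g + n = 2.0% by construction

print(measured_past, expected_future)
```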

On the other hand, while there are theoretical avenues to deciding on the underlying trend growth rate (through {g}, {n}, or both), we often look back at the measured growth rate over long periods of time to help us figure trend growth (particularly for {g}).

Despite that, telling me that one of the definitions of the long-run growth rate has fallen does not necessarily inform me about the other. Let’s take the work of Robert Gordon as an example. It is about the underlying trend growth rate. Gordon argues that {n} is going to fall in the next few decades as the US economy ages and hence the growth in number of workers will slow. He also argues that {g} will fall due to us running out of useful things to innovate on. (I find the argument regarding {n} strong and the argument regarding {g} completely unpersuasive. But read the paper, your mileage may vary.)

Now, is Gordon right? Data on the measured long-run growth rate of GDP does not tell me. It is entirely possible that relatively slow measured growth from around 2000 to 2015 reflects some kind of extended cyclical downturn but that {g} and {n} remain just where they were in the 1990s. I’ve talked about this before, but statistically speaking it will be decades before we can even hope to fail to reject Gordon’s hypothesis using measured long-run growth rates.

This brings me back to some current research that I posted about recently. Juan Antolin-Diaz, Thomas Drechsel, and Ivan Petrella have a recent paper that finds “a significant decline in long-run output growth in the United States”. [My interpretation of their results was not quite right in that post. The authors e-mailed with me and cleared things up. Let’s see if I can get things straight here.] Their paper is about the measured growth rate of long-run GDP. They don’t do anything as crude as I suggested above, but after controlling for the common factors in other economic data series with GDP (etc.. etc..) they find that the long-run measured growth rate of GDP has declined over time from 2000 to 2014. Around 2011 they find that the long-run measured growth rate is so low that they can reject that this is just a statistical anomaly driven by business cycle effects.

What does this mean? It means that growth has been particularly low so far in the 21st century. So, yes, the “long-run measured growth rate of GDP has declined” in the U.S., according to the available evidence.

The fact that Antolin-Diaz, Drechsel, and Petrella find a lower measured growth rate similar to the CBO’s projected growth rate of GDP over the next decade does not tell us that {g} or {n} (or both) are lower. It tells us that it is possible to reverse engineer the CBO’s assumptions about {g} and {n} using existing data.

But this does not necessarily mean that the underlying trend growth rate of GDP has actually changed. If you want to establish that {g} or {n} changed, then there is no retrospective GDP data that can prove your point. Fundamentally, predictions about {g} and {n} are guesses. Perhaps educated guesses, but guesses.

Harry Potter and the Residual of Doom

The productivity term in an aggregate production function is tough to get one’s head around. When I write down

\displaystyle  Y = K^{\alpha}(AL)^{1-\alpha} \ \ \ \ \ (1)

for aggregate GDP, the term {A} is the measure of (labor-augmenting) productivity. What exactly does {A} mean, though? Sure, mathematically speaking if {A} goes up then {Y} goes up, but what is that supposed to mean? {Y} is real GDP, so what is this thing {A} that can make real GDP rise even if the stocks of capital ({K}) and labor ({L}) are held constant?
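A trivial way to see {A} at work is to hold {K} and {L} fixed and vary {A} (parameter values below are arbitrary):

```python
# Y = K^alpha (A L)^(1-alpha) with K and L held fixed; all values arbitrary
alpha = 0.3
K, L = 100.0, 50.0

def output(A):
    return K**alpha * (A * L)**(1 - alpha)

print(output(1.0))  # baseline real GDP
print(output(2.0))  # doubling A raises Y by 2^(1-alpha), about 62%, with no new K or L
```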

I think going to Universal Studios last week provided me with a good example. If you take all the employees (about 12,000 people) and capital (building supplies, etc..) at Universal Studios and set up a series of strip malls along I-4 in Orlando, then you’ll generate a little economic activity between people shopping at the Container Store and eating lunch at Applebee’s. But no one is flying to Orlando to go to those strip malls, and no one is paying hundreds of dollars for the right to walk around and *look* at those strip malls. The productivity, {A}, is very low in the sense that the capital and labor do not generate a lot of real GDP.

But call that capital “Diagon Alley” and dress the employees up in funny robes, and it is thick with thousands of people like me shelling out hundreds of dollars just for the right to walk around a copy of a movie set based on a book. Hundreds. Each.

This is pure productivity, {A}. The fictional character Harry Potter endows that capital and labor in Orlando with the magical ability to generate a much higher level of real GDP. No Harry Potter, no one visits, and real GDP is lower. The productivity is disembodied. It’s really brilliant. Calling this pile of capital “Gringotts” and pretending that the workers are wizard guards at a goblin bank creates real economic value. Economic transactions occur that otherwise would not.

We get stuck on the idea that productivity, {A}, is some sort of technological change. But that is such a poor choice of words, as it connotes computers and labs and test tubes and machines. Productivity is whatever makes factors of production more productive. That is pretty great, because it means that we need not hinge all of our economic hopes on labs or computers. But it also stinks, because it means that you cannot pin down precisely what productivity is. It is necessarily an ambiguous concept.

A few further thoughts:

  • It doesn’t matter what is bought/sold, real GDP is real GDP. Spending 40 dollars at Universal to buy an interactive wand at Ollivander’s counts towards GDP just the same as spending 40 dollars on American Carbide router bits (We bought two. Wands, not router bits). There is no such thing as “good” GDP or “bad” GDP. Certain goods (tools!) do not count extra towards GDP because you can fix something with them.
  • Yes, you can create economic value out of “nothing”. Someone, somewhere, is writing the next Harry Potter or Star Wars or Lord of the Rings, and it is going to create significant productivity gains as someone else builds the new theme park, or lunch box, or action figure. This new character or story will endow otherwise unproductive capital and labor with the ability to produce GDP at a faster rate than before. {A} will go up just from imagining something cool.
  • This kind of productivity growth makes me think that we won’t necessarily end up working only 10 or 12 hours a week any time soon. The Harry Potter park doesn’t work without having lots of people walking around in robes playing the roles. It’s integral to the experience. So we pay to have those people there. Those people, in turn, pay to go see a Stones concert, where it is integral to have certain people working (Keith and Mick among others). We keep trading our time with each other to entertain ourselves. Markets are really efficient ways of allocating all of these entertainers to the right venues, times, etc.. so it wouldn’t surprise me if we all keep doing market work a lot of our time in the future.
  • “Long-tail” creative productivity gains like Harry Potter exacerbate inequality, maybe more than robots ever will. You can buy shares in the robot factory, even in a small amount. But you cannot own even a little bit of Harry Potter. You can’t copy it effectively (*cough* Rick Riordan *cough*). So J.K. Rowling gets redonkulously rich because ownership of the productivity idea is highly concentrated.

Why Did Consumption TFP Stagnate?

I’ve been trying to think more about why consumption-sector TFP flatlined from about 1980 forward. What I mentioned in the last post about this was that the fact that TFP was constant does not imply that technology was constant.

I then speculated that technology in the service sector may not have changed much over the last 30 years, partly explaining the lack of consumption productivity growth. By a lack of change, I mean that the service sector has not found a way to produce more services for a given supply of inputs, and/or produced the same amount of service with a decreasing supply of inputs. Take something that is close to a pure service – a back massage. A one-hour back massage in 1980 is almost identical to a one-hour back massage in 2014. You don’t get twice (or any other multiple) of the massage in 2014 that you got in 1980. And even if the therapist was capable of reducing back tension in 30 minutes rather than 60, you bought a 60-minute massage.

We often buy time when we buy services, not things. And it isn’t so much time as it is attention. And it is very hard to innovate such that you can provide the same amount of attention with fewer inputs (i.e. workers). Because for many services you very specifically want the attention of a specific person for a specific amount of time (the massage). You’d complain to the manager if the therapist tried to massage someone else at the same appointment.

So we don’t have to be surprised that even technology in services may not rise much over 30 years. But there were obviously technological changes in the service sector. As several people brought up to me, inventory management and logistics were dramatically changed by IT. This allows a service firm to operate “leaner”, with a smaller stock of inventory.

But this kind of technological progress need not show up as “technological change” in doing productivity accounting. That is, what we call “technology” when we do productivity accounting is not the only kind of technology there is. The “technology” in productivity accounting is only the ability to produce more goods using the same inputs, and/or produce the same goods using fewer inputs. It doesn’t capture things like a change in the shape of the production function itself, say a shift to using fewer intermediate goods as part of production.

Let’s say a firm has a production function of {Y = AK^{\alpha}L^{\beta}M^{\gamma}} where {A} is technology in the productivity sense, {K} is capital, {L} is labor, and {M} is intermediate goods. Productivity accounting could reveal to us a change in {A}. But what if an innovation in inventory management/logistics means that {\gamma} changes?

If innovation changes the shape of the production function, rather than the level, then our TFP calculations could go anywhere. Here’s an example. Let’s say that in 1980 production is {Y_{80} = A_{80}K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}. Innovation in logistics and inventory management makes the production function in 2014 {Y_{14} = A_{14}K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}.

Total factor productivity in 1980 is calculated as

\displaystyle  TFP_{80} = \frac{Y_{80}}{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}} \ \ \ \ \ (1)

and total factor productivity in 2014 is calculated as

\displaystyle  TFP_{14} = \frac{Y_{14}}{K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}. \ \ \ \ \ (2)

TFP in 2014 relative to 1980 (the growth in TFP) is

\displaystyle  \frac{TFP_{14}}{TFP_{80}} = \frac{Y_{14}}{K_{14}^{.3}L_{14}^{.3}M_{14}^{.4}} \times \frac{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}{Y_{80}} \times \frac{M_{14}^{.2}}{K_{14}^{.1}L_{14}^{.1}} \ \ \ \ \ (3)

which is an unholy mess. The first fraction is TFP in 2014 calculated using the 1980 function. The second fraction is the reciprocal of TFP in 1980, calculated normally. So the first two fractions capture the relative TFP in 2014 to 1980, holding constant the 1980 production function. The last fraction represents the adjustment we have to make because the production function changed.

That last term could literally be anything. Less than one, more than one, more than 100, less than 0.0001. If {K} and {L} rose by a lot while {M} didn’t go up much, this will lower TFP in 2014 relative to 1980. It all depends on the actual units used. If I decide to measure {M} in thousands of units rather than hundreds of units, every {M} gets divided by ten, and because {M} enters the 1980 function with an exponent of .4 but the 2014 function with an exponent of .2, the unit change no longer cancels: measured TFP in 2014 falls by a factor of {10^{0.2} \approx 1.6} relative to 1980.
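Here is a minimal sketch of that units problem (all the input values are invented; only the rescaling of {M} matters):

```python
# TFP under the 1980 and 2014 production functions; all numbers are made up
K80, L80, M80, Y80 = 100.0, 100.0, 50.0, 120.0
K14, L14, M14, Y14 = 150.0, 130.0, 60.0, 200.0

def tfp_ratio(m80, m14):
    tfp80 = Y80 / (K80**0.3 * L80**0.3 * m80**0.4)  # 1980 function
    tfp14 = Y14 / (K14**0.4 * L14**0.4 * m14**0.2)  # 2014 function
    return tfp14 / tfp80

base = tfp_ratio(M80, M14)
rescaled = tfp_ratio(M80 / 10, M14 / 10)  # measure M in thousands instead of hundreds

print(rescaled / base)  # 10**(-0.2), about 0.63: the unit change no longer cancels
```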

Once the production function changes shape, then comparing TFP levels across time becomes nearly impossible. So in that sense TFP could definitely be “getting it wrong” when measuring service-sector productivity. You’ve got an apples to oranges problem. So if we think that IT innovation really changed the nature of the service-sector production function – meaning that {\alpha}, {\beta}, and/or {\gamma} changed, then TFP isn’t necessarily going to be able to pick that up. It could well be that this looks like flat or even shrinking TFP in the data.

If you’d like, this supports David Beckworth’s notion that consumption TFP “doesn’t pass the smell test”. We’ve got this intuition that the service sector has changed appreciably over the last 30 years, but it doesn’t show up in the TFP measurements. That could be due to this apples to oranges issue, and in fact consumption TFP doesn’t accurately reflect the innovations that occurred.

To an ambitious graduate student: document changes in the revenue shares of intermediates in consumption and/or services over time. Correct TFP calculations for these changes, or at least provide some notion of the size of that fudge factor in the above equation.

I Love the Smell of TFP in the Morning

Very recently John Fernald of the SF Fed released a quarterly series on total factor productivity (TFP) in the US. One of the neat things about his series is that you can look separately at investment (equipment and consumer durables) and consumption (everything else). When you plot these out, you see a really big divergence.

[Figure: investment vs. consumption TFP, 1947 onward. My graph from Fernald’s data.]

Consumption TFP essentially flat-lines from about 1980 until today. At the same time, investment TFP races ahead. Aggregate TFP is a weighted average of the two, and since investment is only about 20% of total spending, this means aggregate TFP exhibits a slight rise (Each series is normalized to 100 in 1947, so you cannot compare absolute levels across sectors like this).

The flat-line in consumption TFP has generated a few puzzled reactions. David Beckworth in particular said that the consumption series “does not pass the smell test”. He says that Fernald’s measure (and by implication other TFP calculations) must be flawed, and wants a better way to measure productivity.

This is an overreaction, and represents a misunderstanding of what TFP is, and what it measures. The first thing that often happens is that people confuse “labor productivity” with “TFP”. Labor productivity depends on TFP and on other factors of production, like capital. So labor productivity could be rising in the consumption sector even if TFP is not.

But leaving that possible misunderstanding aside, let’s think more carefully about what goes into TFP. As a rough guide, when we measure changes in TFP what we get is the following

Chg. TFP = Chg. Technology + Chg. Utilization + Markups x Chg. Inputs

You can be more technical about things, but this is roughly what you’ll get. What are those three parts?

  • Technology. This is what it sounds like – the ability to produce real goods/services with a given stock of real inputs. If technology improves, this will add to our measure of TFP.
  • Utilization. If the economy, or the sector we are talking about, is using their capital or labor more intensely, then this will show up as increased utilization, and will also pass through to higher TFP. For the given stock of inputs (workers or number of machines) you are getting more output.
  • Markups x Inputs. This term is a tricky one. If you charge price markups over marginal cost, then this is equivalent to saying that you do not produce as much as socially optimal (where P = MC). So if we increase inputs in your sector, this raises output, and gets us closer to the socially optimal point. So when markups exist, higher input use will translate to higher TFP.

The problem that plagues Beckworth and many others is that they are trying to exactly equate TFP with “technology”. That just isn’t the case. Technology can be improving in the consumption goods sector, but this could be offset by changes in utilization, markups, or input use. Flat-lining TFP doesn’t imply that there were no gains in technology.
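Here’s a toy version of that accounting to show how the pieces can offset (every number below is invented purely for illustration):

```python
# Rough guide: Chg. TFP = Chg. Technology + Chg. Utilization + Markup x Chg. Inputs
# All figures hypothetical, in percent per year
chg_technology = 0.30    # technology in the consumption sector still improving
chg_utilization = -0.25  # factors used a little less intensely over time
markup = 0.05            # low service-sector markups over marginal cost
chg_inputs = -1.00       # input use in the sector drifting down slightly

chg_tfp = chg_technology + chg_utilization + markup * chg_inputs
print(chg_tfp)  # 0.0 -- measured TFP flat-lines even though technology rose
```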

So what could be going on with utilization and markups/inputs? If you dig through Fernald’s data, you can find that utilization in the consumption sector has fallen over time. The consumption sector uses factors about 97% as intensely as it did in the 1960s. That shows up as lower TFP.

An additional factor that would play into consumption TFP staying flat would be market power, and here I think Beckworth gets it right that whatever is going on in consumption is because of services. The service sector tends to have really low markups over marginal cost. Additionally – and I have nothing but some intuition to back this up – I think innovation in the service sector may typically take the form of lowering markups. Think Wal-Mart. It sells the same crap you can find in 100 other stores. Its entire business model is selling it for less than everyone else. With low and falling markups, the contribution of additional inputs like capital (e.g. various IT investments) and labor would not have added to TFP growth.

So consumption TFP could reasonably have flat-lined. I don’t think this represents any kind of glaring flaw in the methodology. But you have to separate the idea of TFP from the idea of “technology”. Once you do that, flat-lining consumption TFP is reasonable.

On top of all that, the idea that consumption technology has not grown much over time isn’t that hard to believe. Consider this example. We were just forced to buy a new fridge because the old one conked out (long, very annoying story). The fridge is produced by the consumer durables sector. Our fridge is more efficient, quieter, colder, etc. etc. than a fridge from 10 years ago. There have been clear technological advances in fridge-making that I benefit from. If I wanted a fridge equivalent to what we had 10 years ago, I could get that for probably 1/4 of the price of the new fridge. So there is obvious technological change going on in the investment sector, and obvious TFP gains.

But I bought the fridge through Best Buy (as it turns out, another long, annoying story). Best Buy’s value-added, such as it was, is part of “consumption” because it is a service. And is Best Buy any better at selling fridges or screwing up delivery dates than they were ten years ago? Maybe, maybe not. If you told me that a major appliance retailer in 1990 was about as efficient at selling and delivering fridges as one today, I’d believe you. What is the major technological breakthrough in the service industry that I should think of from the last few decades? Those little headsets that people at the Gap wear?

Does that mean I shouldn’t care about slow growth in consumption TFP? No. We’d prefer to have faster TFP growth than slower TFP growth. But you shouldn’t dismiss TFP because it doesn’t match up to the notion in your head. If TFP doesn’t pass the “smell test”, it may be that you’re sniffing the wrong thing.

Focused or Broad-based Growth?

Do we care if productivity growth is “broad-based”, meaning that all sectors or firms tend to be getting more productive? Or is it better to have a few sectors or firms experience massive productivity increases, even at the expense of other sectors? Think of it as an allocation problem – I’ve got a fixed amount of resources to spend on R&D, so should I spread those out across sectors or spend them all in one place?

The answer depends on how willing we are to substitute across the output of different types of goods. If we are willing to substitute, then it would be better to just load up and focus on a single sector. Make it as productive as possible, and just don’t consume anything else. On the other hand, if we are unwilling to substitute, then we would prefer to spread around the productivity growth so that all sectors produce goods more cheaply.

That’s the intuition. Here’s the math, which you can skip past if you’re not interested. Let the price people will pay for output from sector {j} be {P_j = Y_j^{-\epsilon}}, so that {\epsilon} is the price elasticity (in absolute value). As {\epsilon} goes to one, demand is inelastic, and the price is very responsive to output. As {\epsilon} goes to zero, demand is elastic, and in the limit the price is fixed at {P_j = 1}.

There are {J} total sectors. Each one produces with a function of

\displaystyle  Y_j = \Omega_j Z_j^{1-\alpha} \ \ \ \ \ (1)

where {\Omega_j} is a given productivity term for the sector. {Z_j} is the factor input to production in sector {j}. {Z_j} can capture labor, human capital, and/or some physical capital. Raising it to {1-\alpha} just means there are diminishing marginal returns to moving factors into sector {j}. There is some total stock of {Z}, and units of {Z} are homogenous, so they can be used in any sector. So you could think of an element of {Z} being a laptop, and this can be used by someone to do work in any sector. If {Z} is labor, then this says that workers are equally capable of working in any sector. There are no sector-specific skills.

Now we can ask what the optimal allocation of {Z} is across the different sectors. By “optimal”, I mean the allocation that maximizes the total earnings of the {Z} factor. Each sector is going to pay {w}, the “wage”, for each unit of {Z} that it uses. What maximizes total earnings, {wZ}?

Within each sector, set the marginal product of {Z_j} equal to the wage {w}, which each sector takes as given. This allows you to solve for the optimal allocation of {Z_j} to each sector. Intuitively, the higher is productivity {\Omega_j}, the more of the input a sector will employ. If we put the optimal allocations together, we can solve for the following,

\displaystyle  wZ = \left(\sum_j \Omega_j^{(1-\epsilon)/(1-(1-\alpha)(1-\epsilon))}\right)^{1-(1-\alpha)(1-\epsilon)} Z^{1-\alpha} \ \ \ \ \ (2)

which is an unholy mess. But this mess has a few important things to tell us. Total output consists of a productivity term (the sum of the {\Omega_j} stuff) multiplied through by the total stock of inputs, {Z}. Total earnings are increasing with any {\Omega_j}. That is, real earnings are higher if any of the sectors get more productive. We knew that already, though. The question is whether it would be worth having one of the {\Omega_j} terms be really big relative to the others.

The summation term over the {\Omega_j}’s depends on the distribution of the {\Omega_j} terms. Specifically, if

\displaystyle  \frac{1-\epsilon}{1-(1-\alpha)(1-\epsilon)} > 1 \ \ \ \ \ (3)

then {wZ} will be higher with an extreme distribution of {\Omega_j} terms. That is, we’re better off with one really, really productive sector, and lots of really unproductive ones.

Re-arrange that condition above into

\displaystyle  (1-\alpha) > \frac{\epsilon}{1-\epsilon}. \ \ \ \ \ (4)

For a given {\alpha}, it pays to have concentrated productivity if the price elasticity of output in each sector is particularly low, or demand is elastic. What is going on? Elastic demand means that you are willing to substitute between sectors. So if one sector is really productive, you can just load up all your {Z} into that sector and enjoy the output of that sector.

On the other hand, if your demand is inelastic ({\epsilon} is close to one), then you are unwilling to substitute between sectors. Think of Leontief preferences, where you demand goods in very specific bundles. Now having one really productive sector does you no good, because even though you can produce lots of agricultural goods (for example) cheaply, no one wants them. You’d be better off with all sectors having similar productivity levels, so that each was about equally cheap.
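As a quick numerical check of that condition, compare a concentrated distribution of {\Omega_j} to an evenly spread one, holding the sum of productivities fixed (all values below are made up):

```python
import numpy as np

def earnings_index(omegas, alpha, eps, Z=1.0):
    # Total earnings wZ from equation (2)
    theta = (1 - eps) / (1 - (1 - alpha) * (1 - eps))
    return np.sum(omegas**theta)**(1 - (1 - alpha) * (1 - eps)) * Z**(1 - alpha)

alpha = 0.3
concentrated = np.array([4.0, 1.0, 1.0, 1.0])  # one star sector
spread = np.array([1.75, 1.75, 1.75, 1.75])    # same total, evenly distributed

for eps in (0.1, 0.9):  # elastic vs. inelastic demand, in this post's convention
    print(eps, earnings_index(concentrated, alpha, eps),
          earnings_index(spread, alpha, eps))
# eps = 0.1: concentrated wins, since (1 - alpha) > eps/(1 - eps)
# eps = 0.9: spread wins, since (1 - alpha) < eps/(1 - eps)
```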

So where are we? Well, I’d probably argue that across major sectors, people are pretty unwilling to substitute. Herrendorf, Rogerson, and Valentinyi (2013) estimate that preferences over value-added from U.S. sectors are essentially Leontief. Eating six bushels of corn is not something I’m going to do in lieu of binge-watching House of Cards, no matter how productive U.S. agriculture gets. With inelastic demand, it is better to have productivity in all sectors be similar. I’d even trade off some productivity from high-productivity sectors (ag?) if it meant I could jack up productivity in low-productivity sectors (services?). I don’t know how one does that, but that’s the implication of inelastic demand.

But while demand might be inelastic, that doesn’t mean prices are necessarily inelastic. If we can trade the output of different sectors, then the prices are fixed by world markets, and it is as if we have really elastic demand. We can buy and sell as much output of each sector as we like. In this case, it’s like {\epsilon=0}, and now we really want to have concentrated productivity. I’m better off with one sector that is hyper-productive, while letting the rest dwindle. If I could, I would invest everything in raising productivity in one single sector. So a truly open economy that traded everything would want to load all of its R&D activity into one sector, make that as productive as possible, and just export that good to import everything else it wants.

Now, we do have lots of open trade in the world, but for an economy like the U.S. the vast majority of GDP is still produced domestically. So we’re in the situation where we’d like to spread productivity gains out across all sectors and/or firms.

Part of productivity is the level of human capital in the economy. If aggregate productivity is highest when productivity improvements are spread across lots of sectors, then we want to invest in broad-based human capital that is employable anywhere. That is, we don’t want to put all our money into training a few nuclear engineers with MD’s and an MBA, we want to upgrade the human capital of the whole range of workers. I think this is an argument for more basic education, as opposed to focusing so heavily on getting a few people through college, but I’m not sure if that is just an outcome of some implicit assumption I’ve made.

The Slowdown in Reallocation in the U.S.

One of the components of productivity growth is reallocation. From one perspective, we can think about the reallocation of homogenous factors (labor, capital) from low-productivity firms to high-productivity firms, which includes low-productivity firms going out of business, and new firms getting started. A different perspective is to look more closely at the shuffling of heterogenous workers between (relatively) homogenous firms, with the idea being that workers may be more productive in one particular environment than in another (i.e. we want people good at doctoring to be doctors, not lawyers). Regardless of how exactly we think about reallocation, the more rapidly that we can shuffle factors into more productive uses, the better for aggregate productivity, and the higher will be GDP. However, evidence suggests that both types of reallocation have slowed down recently.

Foster, Grim, and Haltiwanger have a recent NBER working paper on the “cleansing effect of recessions”. This is the idea that in recessions, businesses fail. But it’s the really crappy, low-productivity businesses that fail, so we come out of the recession with higher productivity. The authors document that in recessions prior to the Great Recession, downturns tend to be “cleansing”. Job destruction rates rise appreciably, but job creation rates remain about the same. Unemployment occurs because it takes some time for those people whose jobs were destroyed to find newly created jobs. But the reallocation implied by this churn enhances productivity – workers are leaving low productivity jobs (generally) and then getting high productivity jobs (generally).

But the Great Recession was different. In the GR, job destruction rose by a little, but much less than in prior recessions. Job creation in the GR fell demonstrably, much more than in prior recessions. So again, we have unemployment as the people who have jobs destroyed are not able to pick up newly created jobs. But because of the pattern to job creation and destruction, there is little of the positive reallocation going on. People are not losing low productivity jobs, becoming unemployed, and then getting high productivity jobs. People are staying in low productivity jobs, and new high productivity jobs are not being created. So the GR is not “cleansing”. It is, in some ways, “sullying”. The GR is pinning people into *low* productivity jobs.

This holds for firm-level reallocation as well. In recessions prior to the GR, low productivity firms tended to exit, and high productivity firms tended to grow in size. So again, we had productivity-enhancing recessions. But again, the GR is different. In the GR, the rate of firm exit for low productivity firms did not go up, and the growth rate of high-productivity firms did not rise. The GR is not “cleansing” on this metric either.

Why is the GR so different? The authors don’t offer an explanation, as their paper is just about documenting these changes. Perhaps the key is that a financial crash has distinctly different effects than a normal recession. A lack of financing means that new firms cannot start, and job creation falls, leading to lower reallocation effects. A “normal” recession doesn’t involve as sharp a contraction in financing, so new firms can take advantage of others going out of business to get themselves going. Just an idea, I have no evidence to back that up.

[An aside: For the record, there is no reason that we need to have a recession for this kind of reallocation to occur. Why don’t these crappy, low-productivity firms go out of business when unemployment is low? Why doesn’t the market identify these crappy firms and compete them out of business? So don’t take Foster, Grim, and Haltiwanger’s work as some kind of evidence that we “need” recessions. What we “need” is an efficient way to reallocate factors to high productivity firms without having to make those factors idle (i.e. unemployed) for extended periods of time in between.]

In a related piece of work, Davis and Haltiwanger have a new NBER working paper that discusses changes in worker reallocation over the last few decades. They look at the rate at which workers turn over between jobs, and find that in general this rate has declined from 1980 to today. Some of this may be structural, in the sense that as the age structure and education breakdown of the workforce changes, there will be changes in reallocation rates. In general, reallocation rates go down as people age. 19-24 year olds cycle between jobs way faster than 55-65 year olds. Reallocation rates are also higher among high-school graduates than among college graduates. So as the workforce has aged and gotten more educated from 1980 to today, we’d expect some decline in job reallocation rates.

But what Davis and Haltiwanger find is that even after you account for these forces, reallocation rates for workers are declining. No matter which sub-group you look at (e.g. 25-40 year old women with college degrees) you find that reallocation rates are falling over time. So workers are flipping between jobs *less* today than they did in the early 1980s. Which is probably somewhat surprising, as my guess is that most people feel like jobs are more fleeting in duration these days, due to declines in unionization, etc.. etc..

The worry that Davis and Haltiwanger raise is that lower rates of reallocation lower productivity growth, as mentioned at the beginning of this post. So what has caused this decline in reallocation rates across jobs (or across firms, as the first paper described)? From a pure accounting perspective, Davis and Haltiwanger give us several explanations. First, reallocation rates within the Retail sector have declined, and since Retail started out with one of the highest rates of reallocation, this drags down the average for the economy. Second, more workers tend to be with older firms, which have less turnover than young firms. Last is the above-mentioned shift towards an older workforce that tends to shift jobs less than younger workers.

Fine, but what is the underlying explanation? Davis and Haltiwanger offer several possibilities. One is increased occupational licensing. In the 1950s, only about 5% of workers needed a government (state or federal) license to do their job. By 2008, that share had risen to 29%. So it can be incredibly hard to reallocate to a new job or sector of work if you have to fulfill some kind of licensing requirement (which could involve up to 2000 hours of training along with fees). Second is a decreased ability of firms to fire-at-will. Starting in the 1980s there were a series of court decisions that made it harder for firms to just fire someone, which makes it both less likely for people to leave jobs, and less likely for firms to hire new people. Both act to lower reallocation between jobs. Third is employer-provided health insurance, which generates some kind of “job lock” where people are unwilling to move jobs because they don’t want to lose, or create a gap in, coverage.

Last is the information revolution, which may have had perverse effects on reallocation. We might expect that IT allows more efficient reallocation as people can look for jobs more easily (e.g. Monster.com, LinkedIn) and firms can cast a wider net for applicants. But IT also allows firms to screen much more effectively, as they have access to credit reports, criminal records, and the like, that would have been prohibitive to acquire in the past.

So we appear to have, on two fronts, declining dynamic reallocation in the U.S. This certainly contributes to a slowdown in productivity growth, and may be a better explanation than “running out of ideas from the IT revolution” that Gordon and Fernald talk about. The big worry is that, if it is regulation-creep, as Davis and Haltiwanger suspect, we don’t know if or when the slowdown in reallocation would end.

In summary, reading John Haltiwanger papers can make you have a bad day.

Measuring Misallocation across Firms

One of the most active areas of research in macro development (let’s not call it growth economics, I guess) is misallocation. This is the concept that a major explanation for why some countries are rich, while others are poor, is that rich countries do a good job of allocating factors of production in an efficient manner across firms (or sectors, if you like). Poor countries do a bad job of this, so that unproductive firms use a lot of labor and capital up, while high productivity firms are too small.

One of the baseline papers in this area is by Diego Restuccia and Richard Rogerson, who showed that you could theoretically generate big losses to aggregate measures of productivity if you introduced firm-level distortions that acted like subsidies to unproductive firms and taxes on productive firms. This demonstrated the possible effects of misallocation. A paper by Hsieh and Klenow (HK) took this idea seriously, and applied the basic logic to data on firms from China, India and the U.S. to see how big of an effect misallocations actually have.

We just went over this paper in my grad class this week, and so I took some time to get more deeply into the paper. The one flaw in the paper, from my perspective, was that it ran backwards. That is, HK start with a detailed model of firm-level activity and then roll this up to find the aggregate implications. Except that I think you can get the intuition of their paper much more easily by thinking about how you measure the aggregate implications, and then asking yourself how you can get the requisite firm-level data to make the aggregate calculation. So let me give you my take on the HK paper, and how to understand what they are doing. If you’re seriously interested in studying growth and development, this is a paper you’ll need to think about at some point, and perhaps this will help you out.

This is dense with math and quite long. You were warned.

What do HK want to do? They want to compare the actual measured level of TFP in sector {s}, {TFP_s}, to a hypothetical level of TFP in sector {s}, {TFP^{\ast}_s}, that we would measure if we allocated all factors efficiently between firms.

Let’s start by asking how we can measure {TFP_s} given observable data on firms. This is

\displaystyle  TFP_s = \frac{Y_s}{K_s^{\alpha}L_s^{1-\alpha}}, \ \ \ \ \ (1)

which is just measuring {TFP_s} for a sector as a Solow residual. {TFP_s} is not a pure measure of “technology”, it is a measure of residual productivity, capturing everything that influences how much output ({Y_s}) we can get from a given bundle of inputs ({K_s^{\alpha}L_s^{1-\alpha}}). It includes not just the physical productivity of individual firms in this sector, but also the efficiency of the distribution of the factors across those firms.

Now, the issue is that we cannot measure {Y_s} directly. For a sector, this is some kind of measure of real output (e.g. units of goods), but there is no data on that. The data we have is on revenues of firms within the sector (e.g. dollars of goods sold). So what HK are going to do is use this revenue data, and then make some assumptions about how firms set prices to try and back out the real output measure. It’s actually easier to see in the math. First, just write {TFP_s} as

\displaystyle  TFP_s =\frac{P_s Y_s}{K_s^{\alpha}L_s^{1-\alpha}}\frac{1}{P_s} = \overline{TFPR}_s \frac{1}{P_s} \ \ \ \ \ (2)

which just multiplies and divides by the price index for sector {s}. The first fraction is revenue productivity, or {\overline{TFPR}_s}, of sector {s}. This is a residual measure as well, but measures how productive sector {s} is at producing dollars, rather than at producing units of goods. The good thing about {TFPR_s} is that we can calculate this from the data. Take the revenues of all the firms in sector {s}, and that is equal to total revenues {P_s Y_s}. We can add up the reported capital stocks across all firms, and labor forces across all firms, and get {K_s} and {L_s}, respectively. We can find a value for {\alpha} based on the size of wage payments relative to revenues (which should be close to {1-\alpha}). So all this is conceptually measurable.

The second fraction is one over the price index {P_s}. We do not have data on this price index, because we don’t know the individual prices of each firms output. So here is where the assumptions regarding firm behavior come in. HK assume a monopolistically competitive structure for firms within each sector. This means that each firm has monopoly power over producing its own brand of good, but people are willing to substitute between those different brands. As long as the brands aren’t perfectly substitutable, then each firm can charge a price a little over the marginal cost of production. We’re going to leave aside the micro-economics of that structure for the time being. For now, just trust me that if these firms are monopolistically competitive, then the price index can be written as

\displaystyle  P_s = \left(\sum_i P_i^{1-\sigma} \right)^{1/(1-\sigma)} \ \ \ \ \ (3)

where {P_i} are the individual prices from each firm, and {\sigma} is the elasticity of substitution between different firms’ goods.

Didn’t I just say that we do not observe those individual firm prices? Yes, I did. But we don’t need to observe them. For any individual firm, we can think of its revenue productivity as opposed to its physical productivity, denoted {A_i}. That is, we can write

\displaystyle  TFPR_i = P_i A_i. \ \ \ \ \ (4)

The firm’s productivity at producing dollars ({TFPR_i}) is the price it can charge ({P_i}) times its physical productivity ({A_i}). We can re-arrange this to be

\displaystyle  P_i = \frac{TFPR_i}{A_i}. \ \ \ \ \ (5)

Put this expression for firm-level prices into the price index {P_s} we found above. You get

\displaystyle  P_s = \left(\sum_i \left[\frac{TFPR_i}{A_i}\right]^{1-\sigma} \right)^{1/(1-\sigma)} \ \ \ \ \ (6)

which depends only on firm-level measure of {TFPR_i} and physical productivity {A_i}. We no longer need prices.

For the sector level {TFP_s}, we now have

\displaystyle  TFP_s = \overline{TFPR}_s \frac{1}{P_s} = \frac{\overline{TFPR}_s}{\left(\sum_i \left[\frac{TFPR_i}{A_i}\right]^{1-\sigma} \right)^{1/(1-\sigma)}}. \ \ \ \ \ (7)

At this point, there is just some slog of algebra to get to the following

\displaystyle  TFP_s = \left(\sum_i \left[A_i \frac{\overline{TFPR}_s}{TFPR_i}\right]^{\sigma-1} \right)^{1/(\sigma-1)}. \ \ \ \ \ (8)

If you’re following along at home, just note that the exponents involving {\sigma} flipped sign, and that can hang you up on the algebra if you’re not careful.

Okay, so now I have this description of how to measure {TFP_s}. I need information on four things. (1) Firm-level physical productivities, {A_i}, (2) sector-level revenue productivity, {\overline{TFPR}_s}, (3) firm-level revenue productivities, {TFPR_i}, and (4) a value for {\sigma}. Of these, we can appeal to the literature and assume a value of {\sigma}, say something like a value of 5, which implies goods are fairly substitutable. We can measure sector-level and firm-level revenue productivities directly from the firm-level data we have. The one big piece of information we don’t have is {A_i}, the physical productivity of each firm.

Before describing how we’re going to find {A_i}, just consider this measurement of {TFP_s} for a moment. What this equation says is that {TFP_s} is a weighted sum of the individual firm level physical productivity terms, {A_i}. That makes some sense. Physical productivity of a sector must depend on the productivity of the firms in that sector.

Mechanically, {TFP_s} is a concave function of all the stuff in the parentheses, given that {1/(\sigma-1)} is less than one. Meaning that {TFP_s} goes up as the values in the summation rise, but at a decreasing rate. More importantly, for what HK are doing, this implies that the greater the variation in the individual firm-level terms of the summation, the lower is {TFP_s}. That is, you’d rather have two firms that have similar productivity levels than one firm with a really big productivity level and one firm with a really small one. Why? Because we have imperfect substitution between the output of the firms. Which means that we’d like to consume goods in somewhat rigid proportions (think Leontief perfect complements). For example, I really like to consume one pair of pants and one shirt at the same time. If the pants factory is really, really productive, then I can get lots of pants for really cheap. If the shirt factory is really unproductive, I can only get a few shirts for a high price. To consume pants/shirts in the desired 1:1 ratio I will end up having to shift factors away from the pants factory and towards the shirt factory. This lowers my sector level productivity.

There is nothing that HK can or will do about variation in {A_i} across firms. That is taken as a given. Some firms are more productive than others. But what they are interested in is the variation driven by the {TFPR_i} terms. Here, we just have the extra funkiness that the summation depends on these inversely. So a firm with a really high {TFPR_i} is like having a really physically unproductive firm. Why? Think in terms of the prices that firms charge for their goods. A high {TFPR_i} means that firms are charging a relatively high price compared to the rest of the sector. Similarly, a firm with a really low {A_i} (like our shirt factory above) would also be charging a relatively high price compared to the rest of the sector. So having variation in {TFPR_i} across firms is like having variation in {A_i}, and this variation lowers {TFP_s}.

However, as HK point out, if markets are operating efficiently then there should be no variation in {TFPR_i} across firms. While a high {TFPR_i} is similar to a low {A_i} in its effect on {TFP_s}, the high {TFPR_i} arises for a fundamentally different reason. The only reason a firm would have a high {TFPR_i} compared to the rest of the sector is if it faced higher input costs and/or higher taxes on revenues than other firms. In other words, firms would only be charging more than expected if they had higher costs than expected or were able to keep less of their revenue.

In the absence of different input costs and/or different taxes on revenues, then we’d expect all firms in the sector to have identical {TFPR_i}. Because if they didn’t, then firms with high {TFPR_i} could bid away factors of production from low {TFPR_i} firms. But as high {TFPR_i} firms get bigger and produce more, the price they can charge will get driven down (and vice versa for low {TFPR_i} firms), and eventually the {TFPR_i} terms should all equate.

For HK, then, the level of {TFP_s} that you could get if all factors were allocated efficiently (meaning that firms didn’t face differential input costs or revenue taxes) is one where {TFPR_i = \overline{TFPR}_s} for all firms. Meaning that

\displaystyle  TFP^{\ast}_s = \left(\sum_i A_i^{\sigma-1} \right)^{1/(\sigma-1)}. \ \ \ \ \ (9)

So what HK do is calculate both {TFP^{\ast}_s} and {TFP_s} (as above), and compare.

To do this, I already mentioned that the one piece of data we are missing is the {A_i} terms. We need to know the actual physical productivity of firms. How do we get that, since we cannot measure physical output at the firm level? HK’s assumption about market structure will allow us to figure that out. So hold on to the results of {TFP_s} and {TFP^{\ast}_s} for a moment, and let’s talk about firms. For those of you comfortable with monopolistic competition models using CES aggregators, this is just textbook stuff. I’m going to present it without lots of derivations, but you can check my work if you want.

For each firm, we assume the production function is

\displaystyle  Y_i = A_i K_i^{\alpha}L_i^{1-\alpha} \ \ \ \ \ (10)

and we’d like to back out {A_i} as

\displaystyle  A_i = \frac{Y_i}{K_i^{\alpha}L_i^{1-\alpha}} \ \ \ \ \ (11)

but we don’t know the value of {Y_i}. So we’ll back it out from revenue data.

Given that the elasticity of substitution across firms’ goods is {\sigma}, and all firms’ goods are weighted the same in the utility function (or final goods production function), then the demand curve facing each firm is

\displaystyle  P_i = Y_i^{(\sigma-1)/\sigma - 1} X_s \ \ \ \ \ (12)

where {X_s} is a demand shifter that depends on the amount of the other goods consumed/produced. We’re going to end up carrying this term around with us, but its exact derivation isn’t necessary for anything. Total revenues of the firm are just

\displaystyle  (P_i Y_i) = Y_i^{(\sigma-1)/\sigma} X_s. \ \ \ \ \ (13)

Solve this for {Y_i}, leaving {(P_i Y_i)} together as revenues. This gives you

\displaystyle  Y_i = \left(\frac{P_i Y_i}{X_s}\right)^{\sigma/(\sigma-1)}. \ \ \ \ \ (14)

Plug this into our equation for {A_i} to get

\displaystyle  A_i = \frac{1}{X_s^{\sigma/(\sigma-1)}}\frac{\left(P_i Y_i\right)^{\sigma/(\sigma-1)}}{K_i^{\alpha}L_i^{1-\alpha}}. \ \ \ \ \ (15)

This last expression gives us a way to back out {A_i} from observable data. We know revenues, {P_i Y_i}, capital, {K_i}, and labor, {L_i}. The only issue is this {X_s} thing. But {X_s} is identical for each firm – it’s a sector-wide demand term – so we don’t need to know it. It just scales up or down all the firms in a sector. Both {TFP_s} and {TFP^{\ast}_s} will be proportional to {X_s}, so when comparing them {X_s} will just cancel out. We don’t need to measure it.

What is our {A_i} measure picking up? Well, under the assumption that firms in fact face a demand curve like we described, then {A_i} is picking up their physical productivity. If physical output, {Y_i}, goes up then so will revenues, {P_i Y_i}. But not proportionally, as with more output the firm will charge a lower price. Remember, the pants factory has to get people to buy all those extra pants, even though they kind of don’t want them because there aren’t many shirts around. So the price falls. Taking revenues to the {\sigma/(\sigma-1)} power captures that effect.

Where are we? We now have a firm-level measure of {A_i}, and we can measure it from observable data on revenues, capital stocks, and labor forces at the firm level. This allows us to measure both actual {TFP_s}, and the hypothetical {TFP^{\ast}_s} when each firm faces identical factor costs and revenues taxes. HK compare these two measures of TFP, and find that in China {TFP^{\ast}_s} is about 86-115% higher than {TFP_s}, or that output would nearly double if firms all faced the same factor costs and revenue taxes. In India, the gain is on the order of 100-120%, and for the U.S. the gain is something like 30-43%. So substantial increases all the way around, but much larger in the developing countries. Hence HK conclude that misallocations – meaning firms facing different costs and/or taxes and hence having different {TFPR_i} – could be an important explanation for why some places are rich and some are poor. Poor countries presumably do a poor job (perhaps through explicit policies or implicit frictions) in allocating resources efficiently between firms, and low-productivity firms use too many inputs.
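To see the whole pipeline at once, here is a minimal sketch of the calculation on simulated, model-consistent firm data. The distributional choices, and the shortcut of setting {K_i = L_i}, are my own inventions for illustration; HK of course do this with manufacturing census micro-data:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, alpha, N = 5.0, 0.3, 10_000

# Simulate firm primitives (distributions invented for illustration)
A_true = rng.lognormal(0.0, 0.6, N)     # physical productivity
TFPR_true = rng.lognormal(0.0, 0.3, N)  # wedge-driven revenue productivity dispersion

# Model-consistent data: revenue is proportional to (A/TFPR)^(sigma-1),
# and the input bundle K^alpha L^(1-alpha) equals revenue/TFPR (set K = L for simplicity)
PY = (A_true / TFPR_true)**(sigma - 1)
K = L = PY / TFPR_true

# The measurement, using only "observable" PY, K, L:
TFPR = PY / (K**alpha * L**(1 - alpha))                      # recovers TFPR_true
A = PY**(sigma / (sigma - 1)) / (K**alpha * L**(1 - alpha))  # equation (15), A up to X_s
TFPR_bar = PY.sum() / (K.sum()**alpha * L.sum()**(1 - alpha))

TFP = np.sum((A * TFPR_bar / TFPR)**(sigma - 1))**(1 / (sigma - 1))  # equation (8)
TFP_star = np.sum(A**(sigma - 1))**(1 / (sigma - 1))                 # equation (9)

# Positive by construction; its size depends entirely on the assumed TFPR dispersion
print(f"Output gain from equalizing TFPR: {100 * (TFP_star / TFP - 1):.0f}%")
```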

* A note on wedges * For those of you who know this paper, you’ll notice I haven’t said a word about “wedges”, which are the things that generate differences in factor costs or revenues for firms. That’s because from a purely computational standpoint, you don’t need to introduce them to get HK’s results. It’s sufficient just to measure the {TFPR_i} levels. If you wanted to play around with removing just the factor cost wedges or just the revenue wedges, you would then need to incorporate those explicitly. That would require you to follow through on the firm’s profit maximization problem and solve for an explicit expression for {TFPR_i}. In short, that will give you this:

\displaystyle  TFPR_i = \frac{\sigma}{\sigma-1} MC_s \frac{(1+\tau_{Ki})^{\alpha}}{1-\tau_{Yi}}. \ \ \ \ \ (16)

The first fraction, {\sigma/(\sigma-1)}, is the markup charged over marginal cost by the firm. As the elasticity of substitution is assumed to be constant, this markup is identical for each firm, so generates no variation in {TFPR_i}. The second term, {MC_s}, is the marginal cost of a bundle of inputs (capital and labor). The final fraction contains the “wedges”. {(1+\tau_{Ki})} captures the additional cost (or subsidy if {\tau_{Ki}<0}) of a unit of capital to the firm relative to other firms. {(1-\tau_{Yi})} captures the revenue wedge (think of a sales tax or subsidy) for a firm relative to other firms. If either of those {\tau} terms is not equal to zero, then {TFPR_i} will deviate from the efficient level.

* A note on multiple sectors * HK do this for all manufacturing sectors. That’s not a big change. Do what I said for each separate sector. Assume that each sector has a constant share of total expenditure (as in a Cobb-Douglas utility function). Then

\displaystyle  \frac{TFP^{\ast}_{all}}{TFP_{all}} = \left(\frac{TFP^{\ast}_1}{TFP_1}\right)^{\theta_1} \times \left(\frac{TFP^{\ast}_2}{TFP_2}\right)^{\theta_2} \times ... \ \ \ \ \ (17)

where {\theta_s} is the expenditure share of sector {s}.

Productivity Pessimism from Productivity Optimists

The projected future path of labor productivity in the U.S. is perhaps the most important input to the projected future path of GDP in the U.S. There are lots of estimates floating around, many of them pessimistic in the sense that they project labor productivity growth to be relatively slow (say 1.5-1.8% per year) over the next few decades compared to the relatively fast rates (roughly 3% per year) seen from 1995-2005. Robert Gordon has laid out the case for low labor productivity growth in the future. John Fernald has documented that this slowdown probably pre-dates the Great Recession, and reflects a loss of steam in the IT revolution starting in about 2007. This has made Brad DeLong sad, which seems like the appropriate response to slowing productivity growth.

An apparent alternative to that pessimism was published recently by Byrne, Oliner, and Sichel. Their paper is titled “Is the IT Revolution Over?”, and their answer is “No”. They suggest that continued innovation in semi-conductors could make possible another IT boom, and boost labor productivity growth in the near future above the pessimistic Gordon/Fernald rate of 1.5-1.8%.

I don’t think their results, though, are as optimistic as they want them to be. A different way of saying this is: you have to work really hard to make yourself optimistic about labor productivity growth going forward. In their baseline estimate, they end up with labor productivity growth of 1.8%, which is slightly higher than the observed rate of 1.56% per year from 2004-2012. To get themselves to their optimistic prediction of 2.47% growth in labor productivity, they have to make the following assumptions:

  1. Total factor productivity (TFP) growth in non-IT producing non-farm businesses is 0.62% per year, which is roughly twice their baseline estimate of 0.34% per year, and ten times the observed rate from 2004-2012 of 0.06%.
  2. TFP growth in IT-producing industries is 0.46% per year, slightly higher than their baseline estimate of 0.38% per year, and not quite double the observed rate from 2004-2012 of 0.28%.
  3. Capital deepening (which is just fancy econo-talk for “build more capital”) adds 1.34% per year to labor productivity growth, which is roughly one-third higher than their baseline rate of 1.03%, and nearly double the observed rate from 2004-2012 of 0.74%.

The only reason their optimistic scenario doesn’t get them back to a full 3% growth in labor productivity is because they don’t make any optimistic assumptions about labor force quality/participation growth.
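Laying their numbers side by side shows how the optimistic total is assembled. (The 0.05 point “labor quality” term below is my inference from the gap between the listed components and their stated totals of 1.8% and 2.47%; it is not a number they emphasize.)

```python
# Labor productivity growth accounting, percent per year: (baseline, optimistic)
components = {
    "non-IT TFP":         (0.34, 0.62),
    "IT-producing TFP":   (0.38, 0.46),
    "capital deepening":  (1.03, 1.34),
    "labor quality etc.": (0.05, 0.05),  # inferred residual, left un-boosted in both
}

baseline = sum(b for b, _ in components.values())    # about 1.8
optimistic = sum(o for _, o in components.values())  # about 2.47

print(baseline, optimistic)
```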

Why these optimistic assumptions in particular? For the IT-producing industries, the authors get their optimistic growth rate of 0.46% per year by assuming that prices for IT goods (e.g. semi-conductors and software) fall at the fastest pace observed in the past. The implication of very dramatic price declines is that productivity in these sectors must be rising very quickly. So essentially, assume that IT industries have productivity growth as fast as in the 1995-2005 period. For the non-IT industries, they assume that faster IT productivity growth will raise non-IT productivity growth to its upper bound in the data, 0.62%. Why? No explanation is given. Last, the more rapid pace of productivity growth in IT and non-IT will induce faster capital accumulation, meaning that its rate rises to 1.34% per year. This last point is one that comes out of a simple Solow-type model of growth. A shock to productivity will temporarily increase capital accumulation.

In the end, here is what we’ve got: they estimate that labor productivity will grow very fast if they assume labor productivity will grow very fast. Section IV of their paper gives more detail on the semi-conductor industry and the compilation of the price indices for that industry. Their narrative explains that we could well be under-estimating how fast semi-conductor prices are falling, and thus under-estimating how fast productivity in that industry is rising. Perhaps, but this doesn’t necessarily imply that the rest of the IT industry is going to experience rapid productivity growth, and it certainly doesn’t imply that non-IT industries are going to benefit.

[Figure: the semi-conductor industry’s share of U.S. output over time, from Byrne, Oliner, and Sichel (2014)]

Further, even rapid productivity growth in the semi-conductor industry is unlikely to deliver any serious boost to U.S. productivity growth, because the semi-conductor industry’s share of U.S. output has been shrinking over time. The above figure is from their paper. Software is a booming industry in the U.S., but the chips that run it are not, probably in large part because those chips are made primarily in other countries. If you want to make an optimistic case for IT-led productivity growth in the U.S., you need to make a software argument, not a hardware argument.

I appreciate that Byrne, Oliner, and Sichel want to provide an optimistic case for higher productivity growth. But that case is just a guess, and the fact that they can lay out some numbers to come up with a firm answer doesn’t make it any less of a guess. Put it this way: I could write a nearly exact duplicate of their paper making the case that expected labor productivity growth is only something like 0.4% per year, simply by using all of the lower-bound estimates they have.

Ultimately, there is nothing about recent trends in labor productivity growth that can make you seriously optimistic about future labor productivity growth. But that doesn’t mean optimism is completely wrong. That’s simply the cost of trying to do forecasting using existing data. You can always identify structural breaks in a time series after the fact (e.g. look at labor productivity growth in 1995), but you cannot ever predict a structural break in a time series out of sample. Maybe tomorrow someone will invent cheap-o solar power, and we’ll look back ten years from now in wonder at the incredible decade of labor productivity growth we had. But I can’t possibly look at the existing time series on labor productivity growth and get any information on whether that will happen or not. Like it or not, extrapolating current trends gives us a pessimistic growth rate of labor productivity. Being optimistic means believing in a structural break in those trends, but there’s no data that can make you believe.

Slow Growth in Potential GDP for the U.S.?

Robert Gordon released a paper recently where he presents his estimates of potential GDP for the U.S. going forward. I had planned on writing a longer post discussing why you should take his projections seriously, and maybe some speculation about what would have to happen to reverse his conclusions. Seriously, I had a few paragraphs written, a couple figures cut out and ready to go, and then Jim Hamilton put up precisely the post I wanted to write. So the first thing you should do is go read Jim’s post.

…and if you are still here, let me provide an uber-quick summary of my own before I talk about how you could convince yourself that Gordon is overly pessimistic (he’s probably not, but if you want to rock yourself to sleep at night, this might help).

[Figure 11 from Gordon (2014): projected paths of actual and potential GDP]

So, what does Gordon find? Basically, GDP growth is equal to growth in GDP per hour worked (productivity) times growth in hours worked. Hours worked are unlikely to grow much, given that unemployment has already fallen back to 6%, that average hours per worker have recovered much of their decline, and that the labor force participation rate is unlikely to recover much of its recent decline. So the only way for GDP to grow fast enough to get back to the prior trend path of potential GDP (which is essentially what the CBO projects it will do) is for productivity (GDP per hour) to grow much faster than it has at any time in the last decade.
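
To make that arithmetic concrete, here is a toy version of the accounting. The numbers are round illustrations of my own, not Gordon’s or the CBO’s figures:

```python
# Toy growth accounting: g_GDP = g_productivity + g_hours (in logs).
# Back out the productivity growth needed to close a gap to the old
# potential-GDP trend. All numbers are round illustrations of the logic.

gap_to_trend = 0.10   # suppose GDP sits ~10% below the old trend line
years = 10            # and we want to close that gap within a decade
g_hours = 0.005       # 0.5%/yr hours growth (little slack left)
g_trend = 0.02        # the old trend line itself grows at 2%/yr

# GDP must grow at trend PLUS enough extra to close the gap:
g_gdp_needed = g_trend + gap_to_trend / years
g_prod_needed = g_gdp_needed - g_hours
print(f"required GDP growth:          {100 * g_gdp_needed:.1f}% per year")
print(f"required productivity growth: {100 * g_prod_needed:.1f}% per year")
# ~3.0% GDP growth, hence ~2.5%/yr productivity growth -- far above
# anything observed in the last decade.
```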

You can see his implications in the above figure. The red line is Gordon’s projection for potential GDP based on his assumed lower productivity growth rate, which is closer to recent averages. The CBO potential GDP path is driven by what Gordon says are aggressive assumptions about how fast productivity will grow.

In short, we aren’t going to recover back to the pre-Great Recession trend line for potential GDP any time soon. One might quibble a little about Gordon’s assumptions, and perhaps we won’t diverge from the prior trend line (the yellow one) as much as he suggests. But it’s really hard to come up with reasonable evidence that the CBO is making the right assumption regarding productivity growth.

Now, if I want to go to bed at night believing that we might be able to get back on that prior trend line, what should I tell myself? I’m not going to tell myself that we’ll be magically saved by some kind of technology boom. It could happen, I guess, but that’s not something I could rely on, or have any way of reliably predicting.

What I might tell myself is that productivity – because of the way it is backed out of the data – is not simply a measure of technical productivity; it’s a measure of revenue productivity. I talked about this in a prior post, but the difference is that revenue productivity measures firms’ ability to generate dollars, not their ability to generate widgets. Revenue productivity can thus experience a temporary burst of growth if firms are able to exert some market power, in the same way that revenue productivity can experience a temporary sag if firms lose pricing power during a recession. So if some of the distinct drop in measured productivity growth over the 2008-2014 period was because firms lost pricing power (and not because of a slowdown in innovation/technology growth), then this could be recovered if firms are able to reassert that pricing power.

A few points on this. Why would firms gain (or why did they lose) pricing power? My guess is that it depends on the willingness of consumers to “shop around”, which in turn is based on economic conditions. When things got bad in 2008, people became more sensitive to price changes, so firms lost pricing power, and hence revenue productivity fell. If consumers were to recover in the sense of becoming less sensitive to price changes, then firms could gain pricing power and that would raise measured productivity. Will that happen? My guess is yes, it will; I just don’t know when. Will that boost to revenue productivity be sufficient to put potential GDP back on the pre-Great Recession path? I don’t know.
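
One crude way to formalize this – my own sketch, not anything from a particular paper – is to note that under monopolistic competition the markup is {\epsilon/(\epsilon-1)} for demand elasticity {\epsilon}, and measured revenue productivity bundles that markup together with physical productivity. A shift in price sensitivity alone then moves measured productivity with no change in the underlying technology:

```python
# Sketch: revenue productivity scales with the markup firms can charge,
# holding physical productivity (widgets per worker) fixed.
# Monopolistic-competition markup: eps / (eps - 1).

def revenue_productivity(physical_prod: float, elasticity: float) -> float:
    """Revenue per worker, up to a normalization, for a demand elasticity."""
    markup = elasticity / (elasticity - 1.0)
    return markup * physical_prod

physical_prod = 1.0  # widgets per worker: held FIXED throughout

for label, eps in [("normal times", 4.0), ("recession (price-sensitive)", 6.0)]:
    rp = revenue_productivity(physical_prod, eps)
    print(f"{label:>28}: markup = {eps / (eps - 1):.2f}, revenue prod = {rp:.2f}")
# Elasticity rising from 4 to 6 cuts the markup from 1.33 to 1.20, a
# ~10% drop in measured revenue productivity with zero change in the
# technology that turns inputs into widgets.
```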

The last thing to point out is that revenue productivity is exactly what we want to measure in Gordon’s case, where he ultimately is worried about Debt/GDP ratios. Because the debt is denominated in dollars, what I care about is the economy’s ability to generate dollars, not widgets. There’s an entirely different post to be written about why the Debt/GDP ratio is a stupid way to measure the debt burden, but I’ll leave that alone for now.

In short, if you want to be optimistic about bouncing back to the pre-Great Recession trend for potential GDP, then part of that optimism is that firms regain lost pricing power, and thus experience a boost to their revenue productivity. This can occur in the absence of any change in the underlying pace of real technological change, and isn’t tied to our expectations about the usefulness or arrival of new technologies.

Robots as Factor-Eliminating Technical Change

A really common thread running through the comments I’ve gotten on the blog involves the replacement of labor. This is tied into the question of the impact of robots/IT on labor market outcomes, and the stagnation of wages for lots of laborers. An intuition that a lot of people have is that robots are going to “replace” people, and this will mean that wages fall and more and more of output gets paid to the owners of the robots. Just today, I saw this figure (h/t to Brad DeLong) from the Center on Budget and Policy Priorities which shows wages for the 10th and 20th percentile workers in the U.S. being stagnant over the last 40 years.
[Figure: Center on Budget and Policy Priorities, wages for 10th and 20th percentile U.S. workers]

The possible counter-arguments to this are that even with robots, we’ll just find new uses for human labor, and/or that robots will relieve us of the burden of working. We’ll enjoy high living standards without having to work at it, so why worry?

I’ll admit that my usual reaction is the “but we will just find new kinds of jobs for people” type. Even though capital goods like tractors and combines replaced a lot of human labor in agriculture, we now employ people in other industries, for example. But this assumes that labor is somehow still relevant somewhere in the economy, and maybe that isn’t true. So what does “factor-eliminating” technological change look like? As luck would have it, there’s a paper by Pietro Peretto and John Seater called…. “Factor-eliminating Technical Change”. Peretto and Seater focus on the dynamic implications of the model for endogenous growth, and whether factor-eliminating change can produce sustained growth in output per worker. They find that it can under certain circumstances. But the model they set up is also a really useful tool for thinking about what the arrival of robots (or further IT innovations in general) may imply for wages and income distribution.

I’m going to ignore the dynamics that Peretto and Seater work through, and focus only on the firm-level decision they describe.

****If you want to skip technical stuff – jump down to the bottom of the post for the punchline****

Firms have a whole menu of available production functions to choose from. The firm-level functions all have the same structure, {Y = A X^{\alpha}Z^{1-\alpha}}, and vary only in their value of {\alpha \in (0,\overline{\alpha})}. {X} and {Z} are different factors of production (I’ll be more specific about how to interpret these later on). {A} is a measure of total factor productivity.

The idea of having different production functions to choose from isn’t necessarily new, but the novelty comes when Peretto/Seater allow the firm to use more than one of those production functions at once. A firm that has some amount of {X} and {Z} available will choose to do what? It depends on the amount of {X} versus the amount of {Z} they have. If {X} is really big compared to {Z}, then it makes sense to only use the maximum {\overline{\alpha}} technology, so {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}. This makes some sense. If you have lots of some factor {X}, then it only makes sense to use a technology that uses this factor really intensely – {\overline{\alpha}}.

On the other hand, if you have a lot of {Z} compared to {X}, then what do you do? You do the opposite – kind of. With a lot of {Z}, you want to use a technology that uses this factor intensely, meaning the technology with {\alpha=0}. But, if you use only that technology, then your {X} sits idle, useless. So you’ll run an {X}-intense plant as well, and that requires a little of the {Z} factor to operate. So you’ll use two kinds of plants at once – a {Z}-intense one and an {X}-intense one. You can see their paper for derivations, but in the end the production function when you have lots of {Z} is

\displaystyle  Y = A \left(Z + \beta X\right) \ \ \ \ \ (1)

where {\beta} is a slurry of terms involving {\overline{\alpha}}. What Peretto and Seater show is that over time, if firms can invest in higher levels of {\overline{\alpha}}, then eventually we will necessarily have “lots” of {Z} relative to {X}, and firms will use this production function.
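
If you want to see the linearity drop out of the two-plant problem, here is a little numerical version. This is my own reconstruction of the firm’s problem as I’ve described it, so treat the details as illustrative rather than as the paper’s exact setup:

```python
# Numerical version of the two-plant problem: split Z between an X-intense
# Cobb-Douglas plant (alpha = alpha_bar) and a Z-only plant (alpha = 0,
# so output there is just A*Z). My own reconstruction -- illustrative only.

A, alpha_bar, X = 1.0, 0.5, 10.0

def total_output(Z: float, grid: int = 200_000) -> float:
    """Max over allocations z1 of Z to the X-intense plant."""
    best = 0.0
    for i in range(grid + 1):
        z1 = Z * i / grid
        y = A * X**alpha_bar * z1**(1 - alpha_bar) + A * (Z - z1)
        best = max(best, y)
    return best

for Z in (20.0, 40.0, 80.0):
    Y = total_output(Z)
    print(f"Z = {Z:5.1f}: Y = {Y:.3f}, implied beta = {(Y / A - Z) / X:.3f}")
# Once Z is large relative to X, the implied beta is constant (0.250 with
# these parameters): output is exactly A*(Z + beta*X), linear in both
# factors, and the marginal product of Z is pinned at A.
```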

What’s so special about this production function? It’s linear in {Z} and {X}, so their marginal products do not decline as you use more of them. More importantly, their marginal products do not rise as you acquire more of the other input. That is, the marginal product of {Z} is exactly {A}, no matter how much {X} we have.

What does this possibly have to do with robots, stagnant wages, and the labor market? Let {Z} represent labor inputs, and {X} represent capital inputs. This linear production function means that as we acquire more capital ({X}), this has no effect on the marginal product of labor ({Z}). If we have something resembling a competitive market for labor, then this implies that wages will be constant even as we acquire more capital.

That’s a big departure from the typical concept we have of production functions and wages. The typical model is more like Peretto and Seater’s case where {X} is really big, and {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}, a typical Cobb-Douglas. What’s true here is that as we get more {X}, the marginal product of {Z} goes up. In other words, if we acquire more capital, then wages should rise as workers get more productive.
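
For contrast, here is the same kind of numerical check for the Cobb-Douglas case, again with illustrative parameters of my own:

```python
# Contrast: with Cobb-Douglas, Y = A * X**a * Z**(1 - a), the marginal
# product of Z (the wage, under competition) RISES as X accumulates.

A, a, Z = 1.0, 0.5, 100.0
for X in (100.0, 400.0, 1600.0):
    mpz = (1 - a) * A * (X / Z) ** a  # dY/dZ for Cobb-Douglas
    print(f"X = {X:6.0f}: marginal product of Z = {mpz:.2f}")
# Quadrupling X doubles the wage here (with a = 0.5). In the linear
# technology above, the same accumulation leaves the wage stuck at A.
```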

The Peretto/Seater setting says that, at some point, technology will progress to the point that wages stop rising with the capital stock. Wages can still go up with general total factor productivity, {A}, sure, but just acquiring new capital will no longer raise wages.

While wages are stagnant, this doesn’t mean that output per worker is stagnant. Labor productivity ({Y/Z}) in this setting is

\displaystyle  \frac{Y}{Z} = A \left(1 + \beta \frac{X}{Z}\right). \ \ \ \ \ (2)

If capital per worker ({X/Z}) is rising, then so is output per worker. But wages will remain constant. This implies that labor’s share of output is falling, as

\displaystyle  \frac{wZ}{Y} = \frac{AZ}{A \left(Z + \beta X\right)} = \frac{Z}{\left(Z + \beta X\right)} = \frac{1}{1 + \beta X/Z}. \ \ \ \ \ (3)

With the ability to use multiple types of technologies, as capital is acquired labor’s share of output falls.
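
Putting equations (1)-(3) together numerically (parameter values are purely illustrative): as {X} grows, the wage stays parked at {A}, output per worker keeps rising, and labor’s share slides toward zero.

```python
# Equations (1)-(3) in the linear regime Y = A*(Z + beta*X): the wage is
# pinned at A while labor productivity rises and labor's share falls as
# capital X accumulates. Parameters are illustrative.

A, beta, Z = 1.0, 0.25, 100.0

print(f"{'X':>6} {'wage':>6} {'Y/Z':>8} {'labor share':>12}")
for X in (0.0, 100.0, 400.0, 1600.0):
    wage = A                               # marginal product of Z: constant
    productivity = A * (1 + beta * X / Z)  # equation (2)
    labor_share = 1 / (1 + beta * X / Z)   # equation (3)
    print(f"{X:6.0f} {wage:6.2f} {productivity:8.2f} {labor_share:12.2f}")
# The wage never moves even as output per worker quintuples, and labor's
# share falls from 1.0 toward 0 as X/Z grows.
```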

Okay, this Peretto/Seater model gives us an explanation for stagnant wages and a declining labor share in output. Why did I present this using {X} for capital and {Z} for labor, not their traditional {K} and {L}? This is mainly because the definition of what counts as “labor”, and what counts as “capital”, are not fixed. “Capital” might include human as well as physical capital, and so “labor” might mean just unskilled labor. And we definitely see that unskilled labor’s wage is stagnant, while college-educated wages have tended to rise.

***** Jump back in here if you skipped the technical stuff *****

The real point here is that whether technological change is good for labor or not depends on whether labor and capital (i.e. robots) are complements or substitutes. If they are complements (as in traditional conceptions of production functions), then adding robots will raise wages, and won’t necessarily lower labor’s share of output. If they are substitutes, then adding robots will not raise wages, and will almost certainly lower labor’s share of output. The factor-eliminating model from Peretto and Seater says that firms will always invest in more capital-intense production functions and that this will inevitably make labor and capital substitutes. We happen to live in the period of time in which this shift to being substitutes is taking place. Or one could argue that it already has taken place, as we see those stagnant wages for unskilled workers, at least, from 1980 onwards.

What we should do about this is a different question. There is no equivalent mechanism or incentive here that would drive firms to make labor and capital complements again. From the firm’s perspective, having labor and capital as complements limits flexibility, because each factor then depends on the other. They’d rather have the marginal product of robots and people independent of one another. So once we reach the robot stage of production, we’re going to stay there, absent a policy that actively prohibits certain types of production. The only way to raise labor’s share of output once we get the robots is through straight redistribution from robot owners to workers.

Note that this doesn’t mean that labor’s real wage is falling. They still have jobs, and their wages can still rise if there is total factor productivity change. But that won’t change the share of output that labor earns. I guess a big question is whether the increases in real wages from total factor productivity growth are sufficient to keep workers from grumbling about the smaller share of output that they earn.

I for one welcome….you know the rest.