Plows were the Robots of the 13th Century

Jury duty this morning, which meant lots of quiet reading time and in the end no *actual* jury duty (yay for settlements!).

I am reading *Rural Economy and Country Life in the Medieval West*, by Georges Duby. I came across the following description of how the development of improved harnesses and plows in the Medieval period displaced a large fraction of rural labor (p. 116):

On the other hand, manual laborers without draught animals underwent no technical progress and sustained no rise in yields: on the contrary there was a relative fall in their living conditions… That the increased value of farming equipment strengthened the hold of the wealthy over the peasantry cannot be denied… Everywhere the lord maintained his authority over his men by helping them to acquire livestock or by threatening them with its confiscation. When in some provinces in the thirteenth century servitude was born anew and flourished, it was the need to acquire agricultural equipment, efficient though costly, which led poorer peasants to bind themselves into dependence. The same needs held them in servitude, for although they had the right to decamp… they could do so only… by giving up their plough animals. In fact because of this, agricultural growth appears to have been a very powerful agent of social differentiation.

A couple of things struck me about the passage. First, the analysis of the disruption caused by the introduction of a new technology embodied in capital goods (plows, harnesses, and horses) sounds similar to some worries regarding the introduction of robots. With capital owned by only a few, those without capital become dependent on the wealthy and have their living standards driven down. Second, innovation favors those with the skills to work with the new technology. Skilled ploughmen – who only got that way by having a team of horses and a plough to begin with – were the high human capital workers of their day.

Mainly, though, it is just an interesting example of how the same issues with innovation, technology, and displacement have been occurring forever. The question of what happens when robots are plentiful is not a question unique to robots, it is a question about how we adapt to disruptive technology. The evidence suggests that whoever owns the technology or the capital associated with it will use it as leverage over those who do not, just like always.

By the way, I think the lady next to me in the jury room would have looked less shocked if I had told her I was reading a porn magazine.

These are Not the (Modeling Assumptions About) Droids You are Looking For

Let me start by saying that all future arguments about robots should use the word “droids” instead so that we can use more Star Wars references.

Benzell, Kotlikoff, LaGarda, and Sachs (BKLS) have a new NBER working paper out on robots (sorry, I don’t see an ungated version). They are attempting to do what needs to be done: provide a baseline model of economic growth that explicitly accounts for the ability of software in the form of robots to replace human workers. With such a baseline model we could then see under what conditions we get what they call “immiserating growth” where we are actually made worse off by inventing robots. Perhaps we could then use the model to test out different policies and see how to alleviate or prevent such immiserating growth.

Thus I am totally on board with the goals of the paper. But I don’t know that this particular model is the right way to think about how robots will affect us in the future. There are several reasons:

Wealth Distribution. The model has skilled workers and unskilled workers, yes, but does not distinguish those with high initial wealth (capable of saving a lot in the form of robots) from those with little or none. This eliminates the possibility of having wealthy robot owners run the wage down so far that no human is employable anymore. While I don’t think that is going to happen, the model should allow for it so we can see under what conditions that might be right or wrong.

Modeling Code. The actual model of how capital (robots) and code work together seems too crude. Essentially, robots and code work together in a production function. Code today is equal to some fraction of the code from yesterday (the rest presumably becomes incompatible) plus whatever new code we hire skilled workers to write. The “shock” that BKLS study is a dramatic increase in the fraction of code that lasts from one period to the next. In their baseline, zero percent of the code lasts, meaning that to work with the capital we have to continually reprogram it. Robotics, or AI, or whatever it is that they intend to capture, then shocks this fraction up to 70%.
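To make that law of motion concrete, here is a minimal sketch in Python. Everything in it is my own illustrative stand-in, not BKLS’s notation or calibration: `phi` is my label for the fraction of code that persists, and the amount of new code written each period is arbitrary.

```python
# A sketch of the code law of motion described above:
# code today = phi * yesterday's code + newly written code.
def code_stock_path(phi, new_code=1.0, periods=10, initial=1.0):
    stock = initial
    path = [stock]
    for _ in range(periods):
        stock = phi * stock + new_code
        path.append(stock)
    return path

# BKLS baseline: no code persists (phi = 0), so the stock each period
# is just that period's new code. Their shock raises phi to 0.7.
print(code_stock_path(phi=0.0))  # flat at 1.0
print(code_stock_path(phi=0.7))  # rises toward 1 / (1 - 0.7) = 3.33
```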

Is this how we should think about code? Perhaps it is a stock variable we can think about, sure. But is the coming of robots really a positive shock to the persistence of that code? I feel like I can tell an equally valid story in which robots and AI mean that code becomes less persistent over time, and that we will continually be reprogramming them to suit our needs. Robots, by operating as general-purpose machines, can easily be re-programmed every day with new tasks. A hammer, on the other hand, is “programmed” once into a heavy object useful for hitting things and then is stuck doing that forever. The code embedded in our current non-robot tools is very, very persistent because they are built for single tasks. Hammers don’t even have USB ports, for crying out loud.

Treating Code as a Rival Good. Leaving aside the issue of code’s persistence, their choice of production function for goods does not seem to make sense for how code operates. The production function depends on robots/capital (K) and code (A). Given their assumed parameters, the production function is

\displaystyle  Y = K^{\alpha}A^{1-\alpha}, \ \ \ \ \ (1)

and code is treated like any other rival, exclusive factor of production. Their production function assumes that if I hold the amount of code constant, but increase the number of robots, then code-per-robot falls. Each new robot means existing ones will have less code to work with? That seems obviously wrong, doesn’t it? Every time Apple sells an iPhone I don’t have to sacrifice an app so that someone else can use it.

The beauty of code is precisely that it is non-rival and non-exclusive. If one robot uses code, all the other robots can use it too. This isn’t a problem with treating code as a “stock variable”. That’s fine. We can easily think of the stock of code depreciating (people get tired of apps, it isn’t compatible with new software) and accumulating (coders write new code). But to treat it like a rival, exclusive, physical input seems wrong.

You’re going to think this looks trivial, but the production function should look like the following

\displaystyle  Y = K^{\alpha} A. \ \ \ \ \ (2)

I ditched the {(1-\alpha)} exponent. So what? But this makes all the difference. This modified production function has increasing returns to scale. If I double both robots and the amount of code, output more than doubles. Why? Because the code can be shared across all robots equally, and they don’t degrade each other’s capabilities.
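A quick numerical check makes the point. This is my own illustration with arbitrary parameter values, nothing from the paper:

```python
# Doubling both robots (K) and code (A) under the two production functions.
alpha = 0.3
K, A = 10.0, 5.0

def crs(K, A):
    return K**alpha * A**(1 - alpha)  # equation (1): code treated as rival

def irs(K, A):
    return K**alpha * A               # equation (2): code shared by all robots

print(crs(2 * K, 2 * A) / crs(K, A))  # 2.0   -> output exactly doubles
print(irs(2 * K, 2 * A) / irs(K, A))  # ~2.46 -> output more than doubles
```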

This is going to change a lot in their model, because now even if I have a long-run decline in the stock of robots {K}, the increase in {A} can more than make up for it. I can have fewer robots, but with all that code they are all super-capable of producing goods for us. The original BKLS model assumes that won’t happen because if one robot is using the code, another one cannot.

But I’m unlikely to have a long-run decline in robots (or code) because with IRS the marginal return to robots is rising with the number of robots, and the marginal return to code is rising with the amount of code. The incentives to build more robots and produce more code are rising. Even if code persists over time, adding new code will always be worth it because of the IRS. More robots and more code mean more goods produced in the long-run, not fewer as BKLS find.

Of course, this means we’ll have produced so many robots that they become sentient and enslave us to serve as human batteries. But that is a different kind of problem entirely.

Valuing Consumption. Leave aside all the issues with production and how to model code. Does their baseline simulation actually indicate immiseration? Their measure of “national income” isn’t defined clearly, so I’m not sure what to do with that number. But they do report the changes in consumption of goods and services. We can back out a measure of consumption per person from that. They index the initial values of service and good consumption to 100. Then, in the “immiserating growth” scenario, service consumption rises to 127, but good consumption falls to 72.

Is this good or bad? Well, to value both initial and long-run total consumption, we need to pick a relative price for the two goods. BKLS index the relative price of services to 100 in the initial period, and the relative price falls to 43 in the long-run.

But we don’t want the indexed price, we want the actual relative price. This matters a lot. If the relative price of services is 1 in the initial period, then initial real consumption is

\displaystyle  C = P_s Q_s + Q_g = 1 \times 100 + 100 = 200. \ \ \ \ \ (3)

In the long-run we need to use the same relative price so that we can compare real consumption over time. In the long-run, with a relative price of services of 1, real consumption is

\displaystyle  C = 1 \times 127 + 72 = 199. \ \ \ \ \ (4)

Essentially identical, and my guess is that the difference is purely due to rounding error.

Note what this means. With a relative price of services of 1, real consumption is unchanged after the introduction of robots in their model. This is not immiserating growth.

But wait, who said that the relative price of services had to be 1? What if the initial price of services was 10? Then initial real consumption would be {C = 10 \times 100 + 100 = 1100}, and long-run real consumption would be {C = 10 \times 127 + 72 = 1342}, and real consumption has risen by 22% thanks to the robots!

Or, if you feel like being pessimistic, assume the initial relative price of services is 0.1. Then initial real consumption is {C = 0.1 \times 100 + 100 = 110}, and long-run consumption is {C = 0.1 \times 127 + 72 = 84.7}, a drop of 23%. Now we’ve got immiserating growth.
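Here are the three cases in one small Python sketch. The quantity indexes (100, 127, 72) are from BKLS; the candidate relative prices are my hypothetical guesses, since the actual price is not reported:

```python
# Real consumption before and after robots, valued at a fixed relative
# price of services (goods are the numeraire).
def real_consumption(p_services, q_services, q_goods):
    return p_services * q_services + q_goods

for p in [1.0, 10.0, 0.1]:
    before = real_consumption(p, 100, 100)  # initial indexed quantities
    after = real_consumption(p, 127, 72)    # long-run robot scenario
    change = 100 * (after / before - 1)
    print(f"p_s = {p:>4}: {before:g} -> {after:g} ({change:+.1f}%)")

# p_s =  1.0: 200 -> 199    (-0.5%)   essentially unchanged
# p_s = 10.0: 1100 -> 1342  (+22.0%)  robots raise real consumption
# p_s =  0.1: 110 -> 84.7   (-23.0%)  immiserating growth
```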

The point is that the conclusion depends entirely on the choice of the actual relative price of services. What is the actual relative price of services in their simulation? They don’t say anywhere that I can find; they only report that the indexed value is 100 in the initial period. So I don’t know how to evaluate their simulation. I do know that their having service consumption rise by 27% and good consumption fall by 28% does not necessarily imply that we are worse off.

Their model is disconnected enough from reality (as are most models; this isn’t a BKLS failing) that we cannot simply look at a series from the BLS on service prices to get the right answer here. But we do know that the relative price of services to goods rose a bunch from 1950 to 2010 (see here). From an arbitrary baseline of 1 in 1950, the price of services relative to manufacturing was about 4.33 in 2010. You can’t just plug 4.33 into the above calculation, but it gives you a good idea of how expensive services are compared to manufacturing goods. On the basis of this, I would lean towards assuming that the relative price of services is bigger than 1, and probably significantly bigger, and that the effect of the BKLS robots is an increase in real consumption in the long-run.

Valuing Welfare. BKLS provide some compensating differential measurements for their immiserating scenario, which are negative. This implies that people would be willing to pay to avoid robots. They are worse off.

This valuation depends entirely on the weights in the utility function, and those weights seem wrong. The utility function they use is {U = 0.5 \ln{C_s} + 0.5 \ln{C_g}}, or equal weights on the consumption of both services and goods. With their set-up, people in the BKLS model will spend exactly 50% of their income on services, and 50% on goods.

But that isn’t what expenditure data look like. In the US, services take up about 70-80% of expenditure, and goods only the remaining 20-30%. So the utility function should probably look like {U = 0.75 \ln{C_s} + 0.25 \ln{C_g}}. And this changes the welfare impact of the arrival of robots.

Let {C_g} and {C_s} both equal 1 in the baseline, pre-robots. Then for BKLS, baseline utility is 0, and with my alternative weights it is also 0. So we start at the same value.

With robots, goods consumption falls to 0.72 and service consumption rises to 1.27. For BKLS this gives utility of {U = 0.5 \ln{1.27} + 0.5 \ln{0.72} = -.045}. Welfare goes down with the robots. With my weights, utility is {U = 0.75 \ln{1.27} + 0.25 \ln{0.72} = 0.097}. Welfare goes up with the robots.
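The same comparison as a quick sketch, with consumption indexed to 1 pre-robots and the post-robot values taken from their simulation. The 0.75 weight is my assumption based on the expenditure shares above:

```python
from math import log

def utility(w_services, c_services, c_goods):
    # Log utility over services and goods, weights summing to one.
    return w_services * log(c_services) + (1 - w_services) * log(c_goods)

for w in [0.5, 0.75]:  # BKLS weights, then my expenditure-share weights
    before = utility(w, 1.0, 1.0)   # = 0 under either weighting
    after = utility(w, 1.27, 0.72)
    print(f"weight on services = {w}: {before:.3f} -> {after:+.3f}")

# weight 0.5:  0.000 -> -0.045  robots lower welfare
# weight 0.75: 0.000 -> +0.097  robots raise welfare
```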

Which is right? It depends again on assumptions about how to value services versus goods. If you overweight goods versus services, then yes, the reduction of goods production in the BKLS scenario will make things look bad. But if you flip that around and overweight services, things look great. I’ll argue that overweighting services seems more plausible given the expenditure data, but I can’t know for sure. I am wary, though, of the BKLS conclusions because their assumptions are not inconsequential to their findings.

What Do We Know. If it seems like I’m picking on this paper, it is because the question they are trying to answer is so interesting and important, and I spent a lot of time going through their model. As I said above, we need some kind of baseline model of how future hyper-automated production influences the economy. BKLS should get a lot of credit for taking a swing at this. I disagree with some of the choices they made, but they are doing what needs to be done. I do think that you have to allow for IRS in production involving code, though. It just doesn’t make sense to me to do it any other way. And if you do that, goods production is going to go up in the long-run, not down as they find.

The thing that keeps bugging me is that I have this suspicion that you can’t eliminate the measurement problem with real consumption or welfare entirely. This isn’t a failure of BKLS in particular, but probably an issue with any model of this kind. We don’t know the “true” utility function, so there is no way we’ll ever be able to say for sure whether robots will or will not raise welfare. In the end it will always rest on assumptions regarding utility weights.

Meta-post on Robots and Jobs

I don’t know that I have anything particularly original to say on the worry that robots will soon replace humans in many more tasks, and what implications this has for wages, living conditions, income distribution, the introduction of the Matrix or Skynet, or anything else. So here I’ve just collected a few relevant pieces of information from around the inter-tubes that are useful in thinking about the issue.

Let’s start with some data on “routine” jobs and what is happening to them. Cortes, Jaimovich, Nekarda, and Siu have a recent voxeu post (and associated paper) on the flows into routine versus non-routine work. In the 1980s, about 1/3 of American workers did “routine” work; now this number is only about 1/4. Routine work tended (and tends) to be middle-waged; it pays pretty well. What the authors find is that the decline in the share of people doing these middle-wage routine jobs is due to slower flows *in* to those jobs, not to faster flows *out*. That is, routine workers were not necessarily getting let go more rapidly, but companies were simply not hiring new routine workers.

Unsurprisingly, people with more education were better able to adapt to this. Higher education meant a higher likelihood of shifting into non-routine “cognitive” tasks, which also is a move up the wage scale (upper-middle wages, say). Perhaps more surprising is that women have been more likely, holding education constant, to move into these cognitive tasks. It is low-education males who represent the group that is failing to get routine middle-wage jobs. To the extent that these lower-educated males get work, it tends to be in “brawn” jobs, low-wage manual work.

This last fact is somewhat odd in the context of the robot-overlord thesis. Robots/computers are really good at doing routine tasks, but so far have not replaced manual labor. If there was a group that should have a lot to worry about, I’d think it would be low-education males, who could well be replaced as robots become more capable of heavy manual labor. One thought I have is that this indicates that manual work (think landscaping) is not as low-skill as routine tasks like data entry. I think there is more cognitive processing going on in these jobs than we tend to give them credit for (where to dig, how deep, should I move this plant over a little, what if I hit a root, does this shrub look right over here, etc.), and that their wages are low simply because the supply of people who can do those jobs is so large.

Brad DeLong took on the topic by considering Peter Thiel’s comments in the Financial Times. Thiel is relatively optimistic about the arrival of robots – he uses the computer/human mix at Paypal to detect fraud as the example of how smarter machines or robots will benefit workers. Brad worries that Thiel is making a basic error. Yes, machines relieve us of drab, boring, repetitive work. But whether workers benefit from that (as opposed to the owners of the machines) depends not on the average productivity of that worker, but on the productivity of the marginal worker who is not employed. That is, if I can be replaced at Paypal by an unemployed worker who has no other options, then my own wage will be low, regardless of how productive I am. By replacing human workers in some jobs, robots/machines drive up the supply of humans in all the remaining jobs, which lowers wages.

To keep wages high for workers, we will need to increase demand for human-specific skills. What are those? Brad likes to list six different types of tasks, and he sees persuasion, motivation, and innovation as the ones that will be left for humans to do. Is there sufficient demand for those skills to keep wages elevated? I don’t know.

David Autor has a recent working paper that is relatively optimistic about robots/machines. He thinks there is more complementarity between machines and humans than we think, so it echoes Thiel’s optimism. Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know. And if we cannot fully explain exactly what we know how to do (whether that is identifying a face in a crowd, making scrambled eggs, writing an economics paper, or building a piece of furniture) then we cannot possibly program a machine to do it either. The big limit of machines, for Autor, is that they have no tacit knowledge. Everything must be precisely specified for them to work. There is no “feel” to their actions, so to speak. As long as there are tasks like that, robots cannot replace us, and it will require humans – in conjunction with machines, maybe – to actually do lots of work. Construction workers are his example.

But I am a little wary of that example. Yes, construction workers today work with a massive array of sophisticated machines, and they serve as the guidance systems for those machines, and without construction workers nothing would get done. But that’s a statement about average product, not marginal product. The wage of those workers could still fall because better machines could make *anyone* capable of working at a construction site, and the marginal product of any given worker is very low. Further, adding better or more construction machines can reduce the number of construction workers necessary, which again only floods the market with more workers, lowering the marginal product.

Autor gets interviewed in this video from Ryan Avent at the Economist. It’s a fairly good introduction to the ideas involved with robots replacing workers.

Robots as Factor-Eliminating Technical Change

A really common thread running through the comments I’ve gotten on the blog involves the replacement of labor. This is tied into the question of the impact of robots/IT on labor market outcomes, and the stagnation of wages for lots of laborers. An intuition that a lot of people have is that robots are going to “replace” people, and this will mean that wages fall and more and more of output gets paid to the owners of the robots. Just today, I saw this figure (h/t to Brad DeLong) from the Center on Budget and Policy Priorities which shows wages for the 10th and 20th percentile workers in the U.S. being stagnant over the last 40 years.
CBPP Wage Figure

The possible counter-arguments to this are that even with robots, we’ll just find new uses for human labor, and/or that robots will relieve us of the burden of working. We’ll enjoy high living standards without having to work at it, so why worry?

I’ll admit that my usual reaction is the “but we will just find new kinds of jobs for people” type. Even though capital goods like tractors and combines replaced a lot of human labor in agriculture, we now employ people in other industries, for example. But this assumes that labor is somehow still relevant somewhere in the economy, and maybe that isn’t true. So what does “factor-eliminating” technological change look like? As luck would have it, there’s a paper by Pietro Peretto and John Seater called …. “Factor-eliminating Technical Change”. Peretto and Seater focus on the dynamic implications of the model for endogenous growth, and whether factor-eliminating change can produce sustained growth in output per worker. They find that it can under certain circumstances. But the model they set up is also a really useful tool for thinking about what the arrival of robots (or further IT innovations in general) may imply for wages and income distribution.

I’m going to ignore the dynamics that Peretto and Seater work through, and focus only on the firm-level decision they describe.

****If you want to skip technical stuff – jump down to the bottom of the post for the punchline****

Firms have a whole menu of available production functions to choose from. The firm-level functions all have the same structure, {Y = A X^{\alpha}Z^{1-\alpha}}, and vary only in their value of {\alpha \in (0,\overline{\alpha})}. {X} and {Z} are different factors of production (I’ll be more specific about how to interpret these later on). {A} is a measure of total factor productivity.

The idea of having different production functions to choose from isn’t necessarily new, but the novelty comes when Peretto/Seater allow the firm to use more than one of those production functions at once. A firm that has some amount of {X} and {Z} available will choose to do what? It depends on the amount of {X} versus the amount of {Z} they have. If {X} is really big compared to {Z}, then it makes sense to only use the maximum {\overline{\alpha}} technology, so {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}. This makes some sense. If you have lots of some factor {X}, then it only makes sense to use a technology that uses this factor really intensely – {\overline{\alpha}}.

On the other hand, if you have a lot of {Z} compared to {X}, then what do you do? You do the opposite – kind of. With a lot of {Z}, you want to use a technology that uses this factor intensely, meaning the technology with {\alpha=0}. But if you use only that technology, then your {X} sits idle, useless. So you’ll run an {X}-intense plant as well, and that requires a little of the {Z} factor to operate. So you’ll use two kinds of plants at once – a {Z}-intense one and an {X}-intense one. You can see their paper for derivations, but in the end the production function when you have lots of {Z} is

\displaystyle  Y = A \left(Z + \beta X\right) \ \ \ \ \ (1)

where {\beta} is a slurry of terms involving {\overline{\alpha}}. What Peretto and Seater show is that over time, if firms can invest in higher levels of {\overline{\alpha}}, then by necessity it will be the case that we have “lots” of {Z} compared to little {X}, and we use this production function.
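You can verify the linear form numerically. The following sketch is entirely my own construction with made-up parameter values: it splits {Z} between the two plants, finds the best split by brute force, and backs out the implied {\beta}, which comes out the same for every {Z}:

```python
import numpy as np

A, alpha_bar, X = 1.0, 0.5, 10.0  # arbitrary illustrative values

def best_output(Z, grid=100_000):
    # z1 goes to the X-intense plant (technology alpha-bar, using all of X);
    # the rest of Z goes to the alpha = 0 plant, which produces A * (Z - z1).
    z1 = np.linspace(0.0, Z, grid)
    y = A * X**alpha_bar * z1**(1 - alpha_bar) + A * (Z - z1)
    return y.max()

for Z in [50.0, 100.0, 200.0]:
    Y = best_output(Z)
    implied_beta = (Y / A - Z) / X  # from Y = A * (Z + beta * X)
    print(f"Z = {Z:g}: Y = {Y:.3f}, implied beta = {implied_beta:.4f}")
# The implied beta is constant across Z (0.25 here), confirming the
# linear form Y = A(Z + beta X).
```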

What’s so special about this production function? It’s linear in {Z} and {X}, so their marginal products do not decline as you use more of them. More importantly, their marginal products do not rise as you acquire more of the other input. That is, the marginal product of {Z} is exactly {A}, no matter how much {X} we have.

What does this possibly have to do with robots, stagnant wages, and the labor market? Let {Z} represent labor inputs, and {X} represent capital inputs. This linear production function means that as we acquire more capital ({X}), this has no effect on the marginal product of labor ({Z}). If we have something resembling a competitive market for labor, then this implies that wages will be constant even as we acquire more capital.

That’s a big departure from the typical concept we have of production functions and wages. The typical model is more like Peretto and Seater’s case where {X} is really big, and {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}, a typical Cobb-Douglas. What’s true here is that as we get more {X}, the marginal product of {Z} goes up. In other words, if we acquire more capital, then wages should rise as workers get more productive.

The Peretto/Seater setting says that, at some point, technology will progress to the point that wages stop rising with the capital stock. Wages can still go up with general total factor productivity, {A}, sure, but just acquiring new capital will no longer raise wages.
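To see the contrast between the two regimes, here is a small sketch with made-up numbers, assuming a competitive labor market so that the wage equals the marginal product of {Z}:

```python
A, alpha_bar, Z, beta = 1.0, 0.5, 100.0, 0.25  # illustrative values only

def wage_cobb_douglas(X):
    # dY/dZ for Y = A * X**alpha_bar * Z**(1 - alpha_bar)
    return (1 - alpha_bar) * A * X**alpha_bar * Z**(-alpha_bar)

def wage_linear(X):
    # dY/dZ for Y = A * (Z + beta * X): X drops out entirely
    return A

for X in [100.0, 400.0, 1600.0]:
    print(f"X = {X:g}: Cobb-Douglas wage = {wage_cobb_douglas(X):.2f}, "
          f"linear wage = {wage_linear(X):.2f}")
# Cobb-Douglas wage: 0.50, 1.00, 2.00 (rising as capital accumulates)
# Linear wage:       1.00, 1.00, 1.00 (flat no matter how much capital)
```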

While wages are stagnant, this doesn’t mean that output per worker is stagnant. Labor productivity ({Y/Z}) in this setting is

\displaystyle  \frac{Y}{Z} = A \left(1 + \beta \frac{X}{Z}\right). \ \ \ \ \ (2)

If capital per worker ({X/Z}) is rising, then so is output per worker. But wages will remain constant. This implies that labor’s share of output is falling, as

\displaystyle  \frac{wZ}{Y} = \frac{AZ}{A \left(Z + \beta X\right)} = \frac{Z}{\left(Z + \beta X\right)} = \frac{1}{1 + \beta X/Z}. \ \ \ \ \ (3)

With the ability to use multiple types of technologies, as capital is acquired labor’s share of output falls.
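One last sketch (same illustrative parameters) shows equations (2) and (3) moving in opposite directions as capital per worker grows:

```python
A, beta = 1.0, 0.25  # illustrative values

for x_per_z in [1.0, 4.0, 16.0]:
    productivity = A * (1 + beta * x_per_z)  # Y/Z, equation (2)
    labor_share = 1 / (1 + beta * x_per_z)   # wZ/Y, equation (3)
    print(f"X/Z = {x_per_z:g}: Y/Z = {productivity:.2f}, "
          f"labor share = {labor_share:.2f}")
# Output per worker rises (1.25, 2.00, 5.00) while labor's share
# falls (0.80, 0.50, 0.20): a richer economy, stagnant wages.
```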

Okay, this Peretto/Seater model gives us an explanation for stagnant wages and a declining labor share in output. Why did I present this using {X} for capital and {Z} for labor, not their traditional {K} and {L}? This is mainly because the definition of what counts as “labor”, and what counts as “capital”, are not fixed. “Capital” might include human as well as physical capital, and so “labor” might mean just unskilled labor. And we definitely see that unskilled labor’s wage is stagnant, while college-educated wages have tended to rise.

***** Jump back in here if you skipped the technical stuff *****

The real point here is that whether technological change is good for labor or not depends on whether labor and capital (i.e. robots) are complements or substitutes. If they are complements (as in traditional conceptions of production functions), then adding robots will raise wages, and won’t necessarily lower labor’s share of output. If they are substitutes, then adding robots will not raise wages, and will almost certainly lower labor’s share of output. The factor-eliminating model from Peretto and Seater says that firms will always invest in more capital-intense production functions and that this will inevitably make labor and capital substitutes. We happen to live in the period of time in which this shift to being substitutes is taking place. Or one could argue that it already has taken place, as we see those stagnant wages for unskilled workers, at least, from 1980 onwards.

What we should do about this is a different question. There is no equivalent mechanism or incentive here that would drive firms to make labor and capital complements again. From the firm’s perspective, having labor and capital as complements limits their flexibility, because each factor then depends on the other. They’d rather have the marginal products of robots and people independent of one another. So once we reach the robot stage of production, we’re going to stay there, absent a policy that actively prohibits certain types of production. The only way to raise labor’s share of output once we get the robots is through straight redistribution from robot owners to workers.

Note that this doesn’t mean that labor’s real wage is falling. They still have jobs, and their wages can still rise if there is total factor productivity change. But that won’t change the share of output that labor earns. I guess a big question is whether the increases in real wages from total factor productivity growth are sufficient to keep workers from grumbling about the smaller share of output that they earn.

I for one welcome….you know the rest.