Meta-post on Robots and Jobs

I don’t know that I have anything particularly original to say on the worry that robots will soon replace humans in many more tasks, or on what implications this has for wages, living conditions, income distribution, the introduction of the Matrix or Skynet, or anything else. So here I’ve just collected a few relevant pieces of information from around the inter-tubes that are useful in thinking about the issue.

Let’s start with some data on “routine” jobs and what is happening to them. Cortes, Jaimovich, Nekarda, and Siu have a recent VoxEU post (and associated paper) on the flows into routine versus non-routine work. In the 1980s, about 1/3 of American workers did “routine” work; now that number is only about 1/4. Routine work tended (and tends) to be middle-wage; it pays pretty well. What the authors find is that the decline in the share of people doing these middle-wage routine jobs is due to slower flows *in* to those jobs, not to faster flows *out*. That is, routine workers were not necessarily getting let go more rapidly; rather, companies were simply not hiring new routine workers.
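
To see why slower inflows alone can produce that decline, here is a minimal stock-flow sketch in Python (the transition rates below are made up for illustration, not taken from the paper): the routine share settles at inflow/(inflow + outflow), so cutting the inflow rate shrinks the share even when the outflow rate is held fixed.

```python
# A minimal sketch with made-up transition rates: the routine share evolves as
#   s_next = s + f_in * (1 - s) - f_out * s,
# so its steady state is s* = f_in / (f_in + f_out).

def steady_state_share(f_in, f_out):
    """Long-run share of workers in routine jobs for given inflow/outflow rates."""
    return f_in / (f_in + f_out)

f_out = 0.10                              # outflow rate from routine work, held fixed
print(steady_state_share(0.050, f_out))   # ~0.33, roughly the 1980s share
print(steady_state_share(0.033, f_out))   # ~0.25, today's share, from a slower inflow alone
```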

Unsurprisingly, people with more education were better able to adapt to this. Higher education meant a higher likelihood of shifting into non-routine “cognitive” tasks, which is also a move up the wage scale (upper-middle wages, say). Perhaps more surprising is that women, holding education constant, have been more likely to make this move into cognitive tasks. It is low-education males who make up the group that is failing to get routine middle-wage jobs. To the extent that these lower-educated males get work, it tends to be in “brawn” jobs: low-wage manual work.

This last fact is somewhat odd in the context of the robot-overlord thesis. Robots/computers are really good at doing routine tasks, but so far they have not replaced manual labor. If there were a group that should have a lot to worry about, I’d think it would be low-education males, who could well be replaced as robots become more capable of heavy manual labor. One thought I have is that this indicates that manual work (think landscaping) is not as low-skill as routine tasks like data entry. I think there is more cognitive processing going on in these jobs than we tend to give them credit for (where to dig, how deep, should I move this plant over a little, what if I hit a root, does this shrub look right over here, etc.), and that their wages are low simply because the supply of people who can do those jobs is so large.

Brad DeLong took on the topic by considering Peter Thiel’s comments in the Financial Times. Thiel is relatively optimistic about the arrival of robots; he uses the computer/human mix at PayPal for detecting fraud as the example of how smarter machines or robots will benefit workers. Brad worries that Thiel is making a basic error. Yes, machines relieve us of drab, boring, repetitive work. But whether workers benefit from that (as opposed to the owners of the machines) depends not on the average productivity of those workers, but on the productivity of the marginal worker who is not yet employed. That is, if I can be replaced at PayPal by an unemployed worker who has no other options, then my own wage will be low, regardless of how productive I am. By replacing human workers in some jobs, robots/machines drive up the supply of humans in all the remaining jobs, which lowers wages.
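
To make that mechanism concrete, here is a toy Python sketch (the production function and numbers are purely illustrative, not anything from DeLong’s post): with a standard production function the wage tracks the marginal product of the last worker hired, so crowding displaced workers into the remaining jobs drags the wage down even though the average product of the employed stays well above it, with the gap accruing to the owners of the machines.

```python
# Toy illustration with made-up numbers: wages are set by marginal, not average, product.
# Output in the remaining "human" jobs: Y = A * L**alpha, so
#   marginal product  MPL = alpha * A * L**(alpha - 1)
#   average product   APL = A * L**(alpha - 1)   (i.e. Y / L)

A, alpha = 10.0, 0.7

def mpl(L):
    return alpha * A * L ** (alpha - 1)

def apl(L):
    return A * L ** alpha / L

L_before, L_after = 100.0, 150.0   # robots displace workers, who crowd into these jobs
print(f"wage (MPL) before: {mpl(L_before):.2f}, average product: {apl(L_before):.2f}")
print(f"wage (MPL) after:  {mpl(L_after):.2f}, average product: {apl(L_after):.2f}")
# The wage falls with the larger labor supply, even though each employed worker's
# average product remains higher than the wage; that wedge goes to the machine owners.
```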

To keep wages high for workers, we will need to increase demand for human-specific skills. What are those? Brad likes to list six different types of tasks, and of those he sees persuasion, motivation, and innovation as the ones that will be left for humans to do. Is there sufficient demand for those skills to keep wages elevated? I don’t know.

David Autor has a recent working paper that is relatively optimistic about robots/machines. He thinks there is more complementarity between machines and humans than we recognize, so it echoes Thiel’s optimism. Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know. And if we cannot fully explain exactly what we know how to do (whether that is identifying a face in a crowd, making scrambled eggs, writing an economics paper, or building a piece of furniture), then we cannot possibly program a machine to do it either. The big limit of machines, for Autor, is that they have no tacit knowledge. Everything must be precisely specified for them to work. There is no “feel” to their actions, so to speak. As long as there are tasks like that, robots cannot replace us, and it will take humans, perhaps working in conjunction with machines, to actually do lots of work. Construction workers are his example.

But I am a little wary of that example. Yes, construction workers today work with a massive array of sophisticated machines, they serve as the guidance systems for those machines, and without construction workers nothing would get done. But that’s a statement about average product, not marginal product. The wage of those workers could still fall, because better machines could make *anyone* capable of working at a construction site, leaving the marginal product of any given worker very low. Further, adding better or more construction machines can reduce the number of construction workers necessary, which again only floods the market with more workers, lowering the marginal product.

Autor gets interviewed in this video from Ryan Avent at The Economist. It’s a fairly good introduction to the ideas involved in the robots-replacing-workers debate.

15 thoughts on “Meta-post on Robots and Jobs”

  1. Facebook™ can already identify a face in a crowd. And it’s by no means the most sophisticated software for that task. Autor is already wrong in this example.

    The only thing holding back robots is the lack of a full complement of fully developed senses: sight, hearing, chemoreception (smell), pressure/temperature/texture/slippage (touch), nociception (pain, sensing movement past limits or dangerous environments), proprioception, balance and acceleration. Possibly one or two others. The hardware exists for each of these; it’s just a matter of painstakingly building up the software layers for each of them and then integrating them. And that is happening now.

    I give it a decade, maybe two, before leading-edge robots have as much “tacit knowledge” of the world around them as a child of seven does. And seven-year-olds were put to work 150 years ago. They work now in the least developed countries.

    Construction now is very much like clothing was in the era of bespoke tailoring. Several companies are trying to develop factory methods for construction (with minimal final assembly on site). It’s probably only a matter of time…

    DeLong is right, but omits playing as something that is left for humans to do. And being a status symbol (think of early modern monarchs and their hordes of hyperspecialised attendants – the keeper of the king’s slippers, etc.).

  2. Pingback: Links for 10-04-14 | The Penn Ave Post

  3. “David Autor has a recent working paper that is relatively optimistic about robots/machines. He thinks there is more complementarity between machines and humans than we recognize, so it echoes Thiel’s optimism. Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know. And if we cannot fully explain exactly what we know how to do (whether that is identifying a face in a crowd, making scrambled eggs, writing an economics paper, or building a piece of furniture), then we cannot possibly program a machine to do it either.”

    I’m sorry, but this is outdated thinking. You don’t have to describe everything in specific detail in a program; you can be more general and learning-based. The evidence? Your very own example of picking out a face, supposedly impossible for computers to do nearly as well as humans? Already they do it better:

    http://www.theatlantic.com/technology/archive/2014/06/bad-news-computers-are-getting-better-than-we-are-at-facial-recognition/372377/

    This line of thinking would have said computers will never drive cars, as many did say, or that it was far in the future. Making scrambled eggs? All of this stuff is falling very fast. It’s partly AI, and partly just massive learning and pattern-picking from big data at ultra-high speed. See Brynjolfsson and McAfee’s book, and Oxford’s Frey and Osborne’s recent paper, “The Future of Employment: How Susceptible are Jobs to Computerisation?”. That paper at its core relies on evaluations from top scientists and engineers at Oxford, people who, especially jointly, I’m pretty sure, know a lot more about this stuff than Autor, and may be significantly less ideologically biased. B&M, and very clearly F&O, both point to computers and robots next coming for the unskilled in a huge way. Quoting F&O:

    “Figure IV reveals that both wages and educational attainment exhibit a strong negative relationship with the probability of computerisation. We note that this prediction implies a truncation in the current trend towards labour market polarization, with growing employment in high and low-wage occupations, accompanied by a hollowing-out of middle-income jobs. Rather than reducing the demand for middle-income occupations, which has been the pattern over the past decades, our model predicts that computerisation will mainly substitute for low-skill and low-wage jobs in the near future. By contrast, high-skill and high-wage occupations are the least susceptible to computer capital.” (page 42)

    FWIW, my opinions on this are largely in a guest post at Carola Binder’s (Berkeley econ PhD student):

    http://carolabinder.blogspot.com/2014/03/guess-post-second-machine-age-book.html

    • “Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera…

      Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. “I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,” he says.

      The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. “Since they have access to lots of data of this form, they can successfully train a high-capacity model,” says Kumar.”

      at: http://www.technologyreview.com/news/525586/facebook-creates-software-that-matches-faces-almost-as-well-as-you-do/

    • Thanks for the info. On how easily computers can substitute for certain tasks, I’m with you that we probably are *under*-estimating how much they can do; I was just trying to give some sense of Autor’s take on the subject. But I’m not sure what Autor’s ideological position is supposed to be here. Is there something inherently political about how capable you think computers can be?

      • More libertarian economists, who slant their analysis to help sell their ideology, don’t like any economics that shows a benefit to government action or a government role. The more people see such benefits, the more they support more government, which (extreme) libertarians are against no matter how large the benefit to total societal utils. This, I think, is a big part of the reaction against Keynesian economics, and even against any monetary policy except for, say, constant zero inflation or a gold standard. Actually, my biggest achievement in life was being quoted by Paul Krugman on this:

        http://krugman.blogs.nytimes.com/2010/06/14/antipathy-to-low-rates/?_php=true&_type=blogs&_r=0

        With regard to the robots/computer revolution: if it gets to the point where robots/computers make it impossible for a large percentage of the population to get a job, and a libertarian admits that, then politically it’s very hard for him to argue, oh well, libertarianism, freedom, man! We let them starve to death before we have the government do some redistribution. You’ll always lose at the ballot box that way. So, instead, a libertarian, to get public support, has to argue: No, freedom, no government, they can still get jobs if they make an effort to do such-and-such. Computers aren’t going to get that good, no need for massive government programs in education and training, Heckman-style early development programs, and certainly not more redistribution! We’re fine; we have a way around these computer advances for just about everyone.

        This is true to an extent, and important, but the such-and-such is going to get a lot more challenging. People who don’t have the health or youth or brains, more and more aren’t going to be able to get the skills, even with great and admirable effort.

      • To put it more concisely: if you admit that computers/robots are going to get really good, so that for most people to earn a market wage above the minimum they are going to need a lot of education and training, at the least, then a lot more government will be needed to avoid massive unemployment and poverty. Libertarians don’t want people to want, and vote for, a lot more government, so they deny this: it’s not that difficult a problem, if people just make an effort they can get around it, no need for big increases in spending on education and Heckman-style early human development, nothing to see here, move on…

        To a strong libertarian we shouldn’t even have government-paid K-12, let alone unemployment insurance, Social Security, and Medicare. That’s all government taking away people’s freedom, forcing them to pay for these things. Anything that makes government action and expansion look more attractive and necessary to the voters, they have a strong incentive to deny and discount, from Keynesian economics to active monetary policy to massive computer/robot substitution.

  4. “Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know…. The big limit of machines, for Autor, is that they have no tacit knowledge. Everything must be precisely specified for them to work. There is no “feel” to their actions, so to speak.”

    But that is exactly the point of machine learning: each action is not precisely specified. All that must be specified is whether the job was performed in a satisfactory fashion, along with lots of examples of correct and incorrect performance; the machine will create the tacit knowledge. Richard Serlin’s example of deep learning brings that home: it’s a whole domain of research focused on providing context for one-shot problems.
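
    As a minimal illustration of that point, here is a generic supervised-learning sketch in Python (scikit-learn on synthetic data; it is not from Autor’s paper or any of the commenters): no rule for the task is ever written down, only labeled examples of satisfactory and unsatisfactory outcomes, and the model infers its own decision rule.

    ```python
    # A minimal supervised-learning sketch: no rule for the task is hand-coded,
    # only labeled examples of satisfactory (1) and unsatisfactory (0) outcomes.
    # The data are synthetic and purely illustrative.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Labeled examples of "performances", each described by 10 numeric features.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The model learns the boundary between good and bad examples on its own.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("accuracy on unseen examples:", model.score(X_test, y_test))
    ```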

  5. Pingback: The Part To Remember Over The Robots Are Coming To Take All Our Jobs Story | Social Dashboard

  6. Pingback: Best of the Web: 14-10-03, nr 1080 | Best of the Web

  7. Pingback: News: Real Estate, Risk, Economics. Oct. 7, 2014 | PropertyPak

  8. Pingback: Techno-neutrality | The Growth Economics Blog

  9. Pingback: ZeeConomics | Inequality and zero growth

  10. Pingback: Beating a Dead Robotic Horse | The Growth Economics Blog
