Plows were the Robots of the 13th Century

Jury duty this morning, which meant lots of quiet reading time and, in the end, no *actual* jury duty (yay for settlements!).

I am reading Rural Economy and Country Life in the Medieval West, by Georges Duby. I came across the following description of how the development of improved harnesses and plows in the Medieval period displaced a large fraction of rural labor (p. 116):

On the other hand, manual laborers without draught animals underwent no technical progress and sustained no rise in yields: on the contrary there was a relative fall in their living conditions… That the increased value of farming equipment strengthened the hold of the wealthy over the peasantry cannot be denied… Everywhere the lord maintained his authority over his men by helping them to acquire livestock or by threatening them with its confiscation. When in some provinces in the thirteenth century servitude was born anew and flourished, it was the need to acquire agricultural equipment, efficient though costly, which led poorer peasants to bind themselves into dependence. The same needs held them in servitude, for although they had the right to decamp… they could do so only… by giving up their plough animals. In fact because of this, agricultural growth appears to have been a very powerful agent of social differentiation.

A couple of things struck me about the passage. First, the analysis of the disruption caused by the introduction of a new technology embodied in capital goods (plows, harnesses, and horses) sounds similar to some worries regarding the introduction of robots. With capital owned by only a few, those without capital become dependent on the wealthy and have their living standards driven down. Second, innovation favors those with the skills to work with the new technology. Skilled ploughmen – who only got that way by having a team of horses and a plough to begin with – were the high human capital workers of their day.

Mainly, though, it is just an interesting example of how the same issues with innovation, technology, and displacement have been occurring forever. The question of what happens when robots are plentiful is not unique to robots; it is a question of how we adapt to disruptive technology. The evidence suggests that whoever owns the technology or the capital associated with it will use it as leverage over those who do not, just like always.

By the way, I think the lady next to me in the jury room would have looked less shocked if I had told her I was reading a porn magazine.

These are Not the (Modeling Assumptions About) Droids You are Looking For

Let me start by saying that all future arguments about robots should use the word “droids” instead so that we can use more Star Wars references.

Benzell, Kotlikoff, LaGarda, and Sachs (BKLS) have a new NBER working paper out on robots (sorry, I don’t see an ungated version). They are attempting to do what needs to be done: provide a baseline model of economic growth that explicitly accounts for the ability of software in the form of robots to replace human workers. With such a baseline model we could then see under what conditions we get what they call “immiserating growth” where we are actually made worse off by inventing robots. Perhaps we could then use the model to test out different policies and see how to alleviate or prevent such immiserating growth.

Thus I am totally on board with the goals of the paper. But I don’t know that this particular model is the right way to think about how robots will affect us in the future. There are several reasons:

Wealth Distribution. The model has skilled workers and unskilled workers, yes, but does not distinguish between those with high initial wealth (capable of saving a lot in the form of robots) from those with little or none. This eliminates the possibility of having wealthy robot owners run the wage down so far that no human is employable anymore. While I don’t think that is going to happen, the model should allow for it so we can see under what conditions that might be right or wrong.

Modeling Code. The actual model of how capital (robots) and code work together seems too crude. Essentially, robots and code work together in a production function. Code today is equal to some fraction of the code from yesterday (the rest presumably becomes incompatible) plus whatever new code we hire skilled workers to write. The “shock” that BKLS study is a dramatic increase in the fraction of code that lasts from one period to the next. In their baseline, zero percent of the code lasts, meaning that to work with capital we have to continually reprogram it. Robotics, or AI, or whatever it is they intend to capture, then shocks this fraction up to 70%.
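
To make that concrete, here is a minimal sketch in Python of the law of motion as I read it; the parameter names and numbers are mine, not theirs:

```python
# A minimal sketch of a code stock with persistence (my notation, not BKLS's).
# A fraction `persistence` of today's code survives to tomorrow, and skilled
# workers add `new_code` each period.
def next_code_stock(code_today, new_code, persistence):
    return persistence * code_today + new_code

# BKLS baseline: persistence = 0, so the stock is just whatever gets written
# each period. Their "robot" shock raises persistence to 0.7, and the stock
# of code starts to accumulate.
code = 100.0
for period in range(1, 6):
    code = next_code_stock(code, new_code=100.0, persistence=0.7)
    print(f"period {period}: code stock = {code:.1f}")
```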

Is this how we should think about code? Perhaps it is a stock variable we could think about, sure. But is the coming of robots really a positive shock to the persistence of that code? I feel like I can tell an equally valid story about how robots and AI will mean that code becomes less persistent over time, and that we will continually be reprogramming them to suit our needs. Robots, by operating as general purpose machines, can easily be re-programmed every day with new tasks. A hammer, on the other hand, is “programmed” once into a heavy object useful for hitting things and then is stuck doing that forever. The code embedded in our current non-robot tools is very, very persistent because they are built for single tasks. Hammers don’t even have USB ports, for crying out loud.

Treating Code as a Rival Good. Leaving aside the issue of code’s persistence, their choice of production function for goods does not seem to make sense for how code operates. The production function depends on robots/capital (K) and code (A). Given their assumed parameters, the production function is

\displaystyle  Y = K^{\alpha}A^{1-\alpha}, \ \ \ \ \ (1)

and code is treated like any other rival, exclusive factor of production. Their production function assumes that if I hold the amount of code constant, but increase the number of robots, then code-per-robot falls. Each new robot means existing ones will have less code to work with? That seems obviously wrong, doesn’t it? Every time Apple sells an iPhone I don’t have to sacrifice an app so that someone else can use it.

The beauty of code is precisely that it is non-rival and non-exclusive. If one robot uses code, all the other robots can use it too. This isn’t a problem with treating code as a “stock variable”. That’s fine. We can easily think of the stock of code depreciating (people get tired of apps, it isn’t compatible with new software) and accumulating (coders write new code). But to treat it like a rival, exclusive, physical input seems wrong.

You’re going to think this looks trivial, but the production function should look like the following

\displaystyle  Y = K^{\alpha} A. \ \ \ \ \ (2)

I ditched the {(1-\alpha)} exponent. So what? But this makes all the difference. This modified production function has increasing returns to scale. If I double both robots and the amount of code, output more than doubles. Why? Because the code can be shared across all robots equally, and they don’t degrade each other’s capabilities.
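
If you want to see the difference in returns to scale, a couple of lines of arithmetic will do it. These numbers are purely illustrative, not the BKLS calibration:

```python
# Illustrative numbers only, not the BKLS calibration.
alpha = 0.3

def y_rival(K, A):       # equation (1): code treated as a rival input
    return K**alpha * A**(1 - alpha)

def y_nonrival(K, A):    # equation (2): code shared by every robot
    return K**alpha * A

K, A = 10.0, 10.0
print(y_rival(2 * K, 2 * A) / y_rival(K, A))        # 2.0: constant returns
print(y_nonrival(2 * K, 2 * A) / y_nonrival(K, A))  # about 2.46: increasing returns
```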

This is going to change a lot in their model, because now even if I have a long-run decline in the stock of robots {K}, the increase in {A} can more than make up for it. I can have fewer robots, but with all that code they are all super-capable of producing goods for us. The original BKLS model assumes that won’t happen because if one robot is using the code, another one cannot.

But I’m unlikely to have a long-run decline in robots (or code), because with IRS the marginal return to robots rises with the amount of code, and the marginal return to code rises with the number of robots. The incentives to build more robots and produce more code reinforce each other. Even if code persists over time, adding new code will always be worth it because of the IRS. More robots and more code mean more goods produced in the long run, not the decline that BKLS find.

Of course, this means we’ll have produced so many robots that they become sentient and enslave us to serve as human batteries. But that is a different kind of problem entirely.

Valuing Consumption. Leave aside all the issues with production and how to model code. Does their baseline simulation actually indicate immiseration? Their measure of “national income” isn’t defined clearly, so I’m not sure what to do with that number. But they do report the changes in consumption of goods and services. We can back out a measure of consumption per person from that. They index the initial values of service and good consumption to 100. Then, in the “immiserating growth” scenario, service consumption rises to 127, but good consumption falls to 72.

Is this good or bad? Well, to value both initial and long-run total consumption, we need to pick a relative price for the two goods. BKLS index the relative price of services to 100 in the initial period, and the relative price falls to 43 in the long-run.

But we don’t want the indexed price; we want the actual relative price. This matters a lot. If the relative price of services is 1 in the initial period, then initial real consumption is

\displaystyle  C = P_s Q_s + Q_g = 1 \times 100 + 100 = 200. \ \ \ \ \ (3)

In the long-run we need to use the same relative price so that we can compare real consumption over time. In the long-run, with a relative price of services of 1, real consumption is

\displaystyle  C = 1 \times 127 + 72 = 199. \ \ \ \ \ (4)

Essentially identical, and my guess is that the difference is purely due to rounding error.

Note what this means. With a relative price of services of 1, real consumption is unchanged after the introduction of robots in their model. This is not immiserating growth.

But wait, who said that the relative price of services had to be 1? What if the initial price of services was 10? Then initial real consumption would be {C = 10 \times 100 + 100 = 1100}, and long-run real consumption would be {C = 10 \times 127 + 72 = 1342}, and real consumption has risen by 22% thanks to the robots!

Or, if you feel like being pessimistic, assume the initial relative price of services is 0.1. Then initial real consumption is {C = .1 \times 100 + 100 = 110}, and long-run consumption is {C = .1 \times 127 + 72 = 84.7}, a drop of 23%. Now we’ve got immiserating growth.
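
If you want to check these numbers yourself, the whole back-of-the-envelope calculation fits in a few lines of Python, using the indexed quantities from their simulation:

```python
# BKLS indexed quantities: services go from 100 to 127, goods from 100 to 72.
# Everything hinges on the (unreported) initial relative price of services.
def real_consumption(p_services, q_services, q_goods):
    return p_services * q_services + q_goods

for p in (1.0, 10.0, 0.1):
    before = real_consumption(p, 100, 100)
    after = real_consumption(p, 127, 72)
    print(f"p_s = {p:>4}: real consumption {before:.0f} -> {after:.0f} ({after / before - 1:+.1%})")
```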

The point is that the conclusion depends entirely on the choice of the actual relative price of services. What is the actual relative price of services in their simulation? They don’t say anywhere that I can find; they only report that the indexed value is 100 in the initial period. So I don’t know how to evaluate their simulation. I do know that their having service consumption rise by 27% and good consumption fall by 28% does not necessarily imply that we are worse off.

Their model is too disconnected from reality (as are most models; this isn’t a BKLS failing) for us to simply look at a series from the BLS on service prices to get the right answer here. But we do know that the relative price of services to goods rose a bunch from 1950 to 2010 (see here). From an arbitrary baseline of 1 in 1950, the price of services relative to manufacturing was about 4.33 in 2010. You can’t just plug 4.33 into the above calculation, but it gives you a good idea of how expensive services are compared to manufactured goods. On the basis of this, I would lean towards assuming that the relative price of services is bigger than 1, and probably significantly bigger, and that the effect of the BKLS robots is an increase in real consumption in the long run.

Valuing Welfare. BKLS provide some compensating differential measurements for their immiserating scenario, which are negative. This implies that people would be willing to pay to avoid robots. They are worse off.

This valuation depends entirely on the weights in the utility function, and those weights seem wrong. The utility function they use is {U = 0.5 \ln{C_s} + 0.5 \ln{C_g}}, or equal weights on the consumption of both services and goods. With their set-up, people in the BKLS model will spend exactly 50% of their income on services, and 50% on goods.

But that isn’t what expenditure data look like. In the US, services take up about 70-80% of expenditure, and goods only the remaining 20-30%. So the utility function should probably look like {U = 0.75 \ln{C_s} + 0.25 \ln{C_g}}. And this changes the welfare impact of the arrival of robots.

Let {C_g} and {C_s} both equal 1 in the baseline, pre-robots. Then for BKLS baseline utility is 0, and in my alternative utility is also 0. So we start at the same value.

With robots, goods consumption falls to 0.72 and service consumption rises to 1.27. For BKLS this gives utility of {U = 0.5 \ln{1.27} + 0.5 \ln{0.72} = -.045}. Welfare goes down with the robots. With my weights, utility is {U = 0.75 \ln{1.27} + 0.25 \ln{0.72} = 0.097}. Welfare goes up with the robots.
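
Here is that comparison in a few lines of Python, so you can try out whatever weights you like:

```python
from math import log

# 1.27 and 0.72 are the BKLS long-run service and goods consumption levels
# (both initialized to 1 before the robots arrive).
def utility(c_services, c_goods, weight_services):
    return weight_services * log(c_services) + (1 - weight_services) * log(c_goods)

print(utility(1.27, 0.72, weight_services=0.5))   # about -0.045: BKLS equal weights
print(utility(1.27, 0.72, weight_services=0.75))  # about +0.097: expenditure-share weights
```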

Which is right? It depends again on assumptions about how to value services versus goods. If you overweight goods versus services, then yes, the reduction of goods production in the BKLS scenario will make things look bad. But if you flip that around and overweight services, things look great. I’ll argue that overweighting services seems more plausible given the expenditure data, but I can’t know for sure. I am wary, though, of the BKLS conclusions because their assumptions are not inconsequential to their findings.

What Do We Know. If it seems like I’m picking on this paper, it is because the question they are trying to answer is so interesting and important, and I spent a lot of time going through their model. As I said above, we need some kind of baseline model of how future hyper-automated production influences the economy. BKLS should get a lot of credit for taking a swing at this. I disagree with some of the choices they made, but they are doing what needs to be done. I do think you have to allow for IRS in production involving code, though; it just doesn’t make sense to me to do it any other way. And if you do that, goods production is going to go up in the long run, not fall as they find.

The thing that keeps bugging me is that I have this suspicion that you can’t eliminate the measurement problem with real consumption or welfare entirely. This isn’t a failure of BKLS in particular, but probably an issue with any model of this kind. We don’t know the “true” utility function, so there is no way we’ll ever be able to say for sure whether robots will or will not raise welfare. In the end it will always rest on assumptions regarding utility weights.

When an Op-Ed About Growth Fails

There’s a column in the NYT today by Daniel Cohen, titled “When the Growth Model Fails“. It is…well, I don’t know what it is. A lament? A rant?

Daniel Cohen is a good economist, so it is a shame that the column reads like the work of a politician who occasionally reads the business section of a newspaper. It is a series of disconnected tropes without any meaningful point. It is thick with “truthiness”, but nothing in the form of actual facts.

Let’s take a look:

And yet, at least in the West, the growth model is now as fleeting as Proust’s Albertine Simonet: Coming and going, with busts following booms and booms following busts, while an ideal world of steady, inclusive, long-lasting growth fades away.

But in its desperate search for scapegoats, the West skirts the key question: What would happen if our quest for never-ending economic growth has become a mirage? Would we find a suitable replacement for the system, or sink into despair and violence?

What does it mean that the “growth model” is fleeting? Is Bob Solow fading in and out of existence? I presume that the implication is that economic growth is fleeting, and is coming and going.

Am I supposed to believe that booms and busts are a new feature of Western economies? That is patently untrue, and Cohen knows this. Business cycles did not start happening in the last decade. A few minutes looking at long-run data (like here) will show you that even in France the frequency and severity of booms and busts were both much, much higher before World War II than after. Took me 10 minutes to download the data for France, plot it, and run some quick regressions. 10 minutes.

[Figure: France GDP per capita]

“…steady, inclusive, long-lasting growth fades away”. You have to unpack this with care. Steady, inclusive, and long-lasting are three separate characteristics, and there is nothing that necessitates that they appear together or in any particular combination when growth occurs. Steady? Again, look at some data. What I see is steady growth from 1820 to 1940 at about 1.2% per year, punctuated by severe recessions and booms. After 1980, I see steady growth at 1.4% per year, higher than the pre-war rate. In between WWII and 1980, I see a country experiencing a level shift to a higher balanced growth path, probably due in part to integration within Europe and technology adoption.

Long-lasting? France has been experiencing steady GDP per capita growth for 190 years. Am I supposed to believe that the downturn you can see at the tail end of the figure in 2007 represents the end of that? That the dip in French GDP per capita in 2007 implies that we either have to “replace the system”, whatever that means, or sink into despair and violence? Get some perspective.

I think what Prof. Cohen means is that the era of rapid transitional growth that France experienced from 1950 to 1980 is over. Yes, it is. But did you really think that growth of 3.8% was going to last forever, when there is not a single example – ever – of a country growing at that rate in the long run? Again, perspective.

Inclusive? Now here is where we get some traction. Cohen cites that 80 percent of Americans have not seen real wage growth in 30 years. You can quibble with the exact figure, but he’s right on. The last three decades have not been good for everyone, particularly in the U.S. We do not have a problem with “the growth model”, meaning a problem with economic growth. We have a problem with the “distribution model”. So write an op-ed proposing changes to tax rules, or supporting education, or opposing excessive licensing of occupations.

Moving on:

Will economic growth return, and if it doesn’t, what then? Experts are sharply divided.

No, not really. Cohen cites Robert Gordon as a growth pessimist. Gordon is, but he doesn’t predict that growth is ending. Gordon thinks that the growth rate of GDP per capita will drop from the historical 1.8-2% per year to about 0.9-1.2% per year. This is primarily due to a slowdown in the accumulation of human capital as the population ages and the rates of college and high school completion level off. So even the pessimists don’t believe growth is over, just that it will be slower. Gordon also assumes that total factor productivity growth will be lower than in the past, which is completely unknowable. Gordon gets very “cranky old man” about how useless innovations today are (those kids and their Insta-Snap-gram-Book!).

To decide who is right, one must first recognize that the two camps aren’t focusing on the same things: For the pessimists, it’s the consumer who counts; for the optimists, it’s the machines.

Uh, no. To decide who is right we need data. Like several more years of data to see if in fact growth rates have fallen significantly. I wrote a post about this a while back. We won’t be able to definitively say whether growth has fallen below 2% per year until about 2025. Until then, there will be too much noise in growth rates to extract a signal.

What matters is whether they will substitute for human labor or whether they will complement it, allowing us to be even more productive.

Uh, no. Regardless of whether machines/robots/Skynet are a substitute or complement for human labor, we as an aggregate economy will be more productive. Whether particular individuals find themselves displaced and unable to find work depends on their own set of skills. How we treat those people is a distributional question, not a growth question.

The logical conclusion, then, is that both sides in this debate are right: We’re living an industrial revolution without economic growth. Powerful software is doing the work of humans, but the humans thus replaced are unable to find productive jobs.

Uh, no. See above regarding economic growth. It hasn’t ended just because we had a recession, and a very bad one at that. On the job replacement thing, see here. We experienced similar kinds of disruptions in the past. Can we handle this with more sympathy towards those temporarily displaced by technology? Yes. Absolutely. Again, that is a distributional problem, not a growth problem.

The point is this: If workers are to be productive again, then we must come up with new motivation schemes. No longer able to promise their employees higher earnings over time, companies will now have to adjust, compensate, and make work more inspiring.

Wait, who said workers were unproductive? Did I miss the part where everyone forgot how to do their job? And this seems close to 180 degrees from how companies would respond to an economy that stopped growing. No growth would mean a lack of new firms and/or new types of jobs, so workers wouldn’t have outside options. Firms would have even more power to motivate through fear of losing your job, because there wouldn’t be new jobs out there to escape to.

Cohen suggests that firms will have to focus on giving workers autonomy to keep them happy. He cites the Danish situation as one that produces happy workers. They are treated respectfully and given autonomy, and in return they are very productive. They have a significant safety net in place so that people don’t have to keep bad jobs just to pay the bills. Denmark self-reports as being very happy.

I am all for “the Danish model”. Here’s the thing. It’s a good idea no matter what happens to economic growth. Why should I wait to see if growth slows down to encourage companies to adopt a more positive work environment? If anything, higher growth rates would make it easier to transition to a system like this because economic growth gives people outside options.

The biggest sin of this op-ed is the lack of perspective. It presumes that we are living through not just a shift in long-run growth rates, but a cataclysmic collapse of them. If you want to make that case, then you have to bring some…what’s the word? Evidence.

But bonus points for the Proust quote to give it that affected tinge of world-weary seriousness.

Techno-neutrality

I’ve had a few posts in the past few months (here and here) about the consequences of mechanization for the future of work. In short, what will we do when the robots take our jobs?

I wouldn’t call myself a techno-optimist. I don’t think the arrival of robots necessarily makes everything better. But I do not buy the strong techno-pessimism that comes up in many places. Richard Serlin has been a frequent commenter on this blog, and he generally has a gloomy take on where we are going to end up once the robots arrive. I’m not bringing up Richard to pick on him. He writes thoughtful comments on this subject (and lots of others), and it is those comments that pushed me to try and be more clear on why I’m “techno-neutral”.

The economy is more creative than we can imagine. The coming of robots to mechanize away our jobs is the latest in a long, long, long history of technology replacing workers. And yet here we still are, working away. Timothy Taylor posted this great selection a few weeks ago. This is a quote from Time Magazine:

The rise in unemployment has raised some new alarms around an old scare word: automation. How much has the rapid spread of technological change contributed to the current high of 5,400,000 out of work? … While no one has yet sorted out the jobs lost because of the overall drop in business from those lost through automation and other technological changes, many a labor expert tends to put much of the blame on automation. … Dr. Russell Ackoff, a Case Institute expert on business problems, feels that automation is reaching into so many fields so fast that it has become “the nation’s second most important problem.” (First: peace.)
The number of jobs lost to more efficient machines is only part of the problem. What worries many job experts more is that automation may prevent the economy from creating enough new jobs. … Throughout industry, the trend has been to bigger production with a smaller work force. … Many of the losses in factory jobs have been countered by an increase in the service industries or in office jobs. But automation is beginning to move in and eliminate office jobs too. … In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries. … Today’s new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.

That quote is from 1961. This is almost word for word the argument you will get about robots and automation leading to mass unemployment in the future. Fifty years ago we were just as worried about this kind of thing, and in the 50 years since, no massive armies of unemployed workers have ended up wandering the streets. The employment/population ratio in 1961 was about 55%, and then it steadily rose until the late 90’s, when it topped out at about 64%. Even after the Great Recession, the ratio is still 59% today, higher than it was in 1961.

This didn’t happen without disruption and dislocation, and the robots will cause similar dislocation and disruption. Luddites weren’t wrong about losing their jobs; they were just wrong about the economy losing jobs in aggregate. But I don’t see why next-generation robots are any different from industrial robots, mainframes, PCs, tractors, mechanical looms, or any of the ten million other innovations made in history that replaced labor. We can handle this with some sympathy and try to smooth things out for those dislocated, or we can do what usually happens and hang them out to dry. The robots aren’t the problem here, we are.

What exactly are those new jobs that will be created? If I knew, then I wouldn’t be writing this blog post, I’d be out starting a company. The fact that I cannot conceive of an innovation myself is not evidence that innovation has ceased. But I do believe in the law of large numbers, and somewhere among the 300-odd million Americans is someone who *is* thinking of a new kind of company with new kinds of jobs.

Robots change prices as well as wages. An argument for pessimism goes like this. People have subsistence requirements, meaning they have a wage floor below which they cannot survive. Robots will be able to replace humans in production and this will drive the wage below that subsistence requirement. Either no firm will hire workers at the subsistence wage or people who do work will not meet subsistence.

The problem with this argument is that it ignores the impact of robots on the price of that subsistence requirement. Subsistence requirements are in real terms (I need clothes and housing and food), not nominal terms (I need $2,000). The “subsistence wage” is a real wage, meaning it is the nominal wage divided by the price level of a subsistence basket of goods. Robots lowering the marginal cost of production lowers the nominal human wage, but it also lowers the price of goods. It is not necessary or even obvious that real wages have to fall because of robots. History says that despite all of the labor-saving technological change that has gone on over the last few hundred years, real wages have risen, as the lower costs outweigh the downward pressure on wages.
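
A stylized example, with numbers made up entirely to illustrate the point:

```python
# Made-up numbers: robots cut the nominal wage by 20%, but cut the price of
# the subsistence basket by 30%.
nominal_wage_before, price_before = 100.0, 10.0
nominal_wage_after, price_after = 80.0, 7.0

real_wage_before = nominal_wage_before / price_before  # 10.0
real_wage_after = nominal_wage_after / price_after     # about 11.4
print(real_wage_after > real_wage_before)              # True: the real wage rose
```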

Who is going to buy what the robots produce? Call this the “Henry Ford” argument. If you are going to invest in opening up a factory staffed entirely by robots, then who precisely is supposed to buy that output? Ford raised wages at his highly mechanized (for the time) plants so that he had a ready-made market for his cars. The Henry Fords of robot factories are going to need a market for the stuff they build. Rich people are great, but diminishing marginal utility sets in pretty quick. That means robot owners either need to lower prices or raise wages for the people they do hire in order to generate a big enough market. Depending on the fixed costs involved in getting these proverbial robot factories up and running, robot owners may be a strong force for keeping wages high in the economy, just like Henry Ford was back in the day.

The wealthy are wealthy because they own productive assets. A tiny fraction of the value of those assets is due to the utility to the owner of the widgets they kick out. The majority of the value of those assets is due to the fact that you can *sell* that output for money and use that money to buy other widgets. Rockefeller wasn’t wealthy because he had a lot of oil. He was wealthy because he could sell it to other people. No other people, no wealth. Just barrel after barrel of useless black gunk.

The same holds for robot owners. Those robots and robot factories have value because you can sell them or the goods they make in the wider economy. And that means continuing to exchange with the non-wealthy. You cannot be wealthy in a vacuum. Bill Gates on an island with robots and a stack of 16 billion dollar bills is Gilligan with a lot of kindling.

Wealthy robot owners will do what wealthy (fill in capital stock here) owners have done for eons. They’ll trade access to the capital, or the goods it produces, to the non-wealthy in exchange for services, effort, flattery, and new ideas on what to do with that wealth.

Wealth concentration would be a problem with or without robots. The worry here is that because the wealthy will be the only ones able to build the robots and robot factories, they will control completely the production of goods and the demand for labor. That’s not a problem that arises with robots; that is a problem that arose with, well, settled agriculture 10,000 years ago. Wealth concentration makes owners both monopolists (market power selling goods) and monopsonists (market power buying labor), which is a bad combination. It gives them the ability to drive (real) wages down to minimum subsistence levels. This is bad, absolutely. But this was bad when (fill in example of a landed elite) did it in (fill in historical era here). This is bad in “company towns”. This is bad now, today. So if you want to argue against wealth concentration and the pernicious influence it has on wages, get started. Don’t wait for the robots, they’ve got nothing to do with it.

Again, be clear that in arguing against techno-pessimism I am not arguing that robots will generate a techno-utopia with ponies and rainbows. I just do not buy the dystopian view that somehow it’s all going to come crashing down around our ears because of the very particular innovations coming in the near future.

The Slowdown in Reallocation in the U.S.

One of the components of productivity growth is reallocation. From one perspective, we can think about the reallocation of homogeneous factors (labor, capital) from low-productivity firms to high-productivity firms, which includes low-productivity firms going out of business and new firms getting started. A different perspective is to look more closely at the shuffling of heterogeneous workers between (relatively) homogeneous firms, the idea being that workers may be more productive in one particular environment than in another (i.e. we want people good at doctoring to be doctors, not lawyers). Regardless of how exactly we think about reallocation, the more rapidly we can shuffle factors into more productive uses, the better for aggregate productivity, and the higher will be GDP. However, evidence suggests that both types of reallocation have slowed down recently.

Foster, Grim, and Haltiwanger have a recent NBER working paper on the “cleansing effect of recessions”. This is the idea that in recessions, businesses fail. But it’s the really crappy, low-productivity businesses that fail, so we come out of the recession with higher productivity. The authors document that downturns prior to the Great Recession tended to be “cleansing”. Job destruction rates rose appreciably, but job creation rates remained about the same. Unemployment occurred because it took some time for those people whose jobs were destroyed to find newly created jobs. But the reallocation implied by this churn enhances productivity – workers are leaving low-productivity jobs (generally) and then getting high-productivity jobs (generally).

But the Great Recession was different. In the GR, job destruction rose by a little, but much less than in prior recessions. Job creation in the GR fell demonstrably, much more than in prior recessions. So again, we have unemployment as the people who have jobs destroyed are not able to pick up newly created jobs. But because of the pattern to job creation and destruction, there is little of the positive reallocation going on. People are not losing low productivity jobs, becoming unemployed, and then getting high productivity jobs. People are staying in low productivity jobs, and new high productivity jobs are not being created. So the GR is not “cleansing”. It is, in some ways, “sullying”. The GR is pinning people into *low* productivity jobs.

This holds for firm-level reallocation as well. In recessions prior to the GR, low-productivity firms tended to exit, and high-productivity firms tended to grow in size. So again, we had productivity-enhancing recessions. But again, the GR is different. In the GR, the rate of exit for low-productivity firms did not go up, and the growth rate of high-productivity firms did not rise. The GR is not “cleansing” on this metric either.

Why is the GR so different? The authors don’t offer an explanation, as their paper is just about documenting these changes. Perhaps the key is that a financial crash has distinctly different effects than a normal recession. A lack of financing means that new firms cannot start, and job creation falls, leading to lower reallocation effects. A “normal” recession doesn’t involve as sharp a contraction in financing, so new firms can take advantage of others going out of business to get themselves going. Just an idea; I have no evidence to back that up.

[An aside: For the record, there is no reason that we need to have a recession for this kind of reallocation to occur. Why don’t these crappy, low-productivity firms go out of business when unemployment is low? Why doesn’t the market identify these crappy firms and compete them out of business? So don’t take Foster, Grim, and Haltiwanger’s work as some kind of evidence that we “need” recessions. What we “need” is an efficient way to reallocate factors to high productivity firms without having to make those factors idle (i.e. unemployed) for extended periods of time in between.]

In a related piece of work, Davis and Haltiwanger have a new NBER working paper that discusses changes in worker reallocation over the last few decades. They look at the rate at which workers turn over between jobs, and find that in general this rate has declined from 1980 to today. Some of this may be structural, in the sense that as the age structure and education breakdown of the workforce change, there will be changes in reallocation rates. In general, reallocation rates go down as people age. 19-24 year olds cycle between jobs way faster than 55-65 year olds. Reallocation rates are also higher among high-school graduates than among college graduates. So as the workforce has aged and gotten more educated from 1980 to today, we’d expect some decline in job reallocation rates.

But what Davis and Haltiwanger find is that even after you account for these forces, reallocation rates for workers are declining. No matter which sub-group you look at (e.g. 25-40 year old women with college degrees), you find that reallocation rates are falling over time. So workers are flipping between jobs *less* today than they did in the early 1980s. That is probably somewhat surprising, as my guess is that most people feel like jobs are more fleeting in duration these days, due to declines in unionization and the like.

The worry that Davis and Haltiwanger raise is that lower rates of reallocation lower productivity growth, as mentioned at the beginning of this post. So what has caused this decline in reallocation rates across jobs (or across firms, as the first paper described)? From a pure accounting perspective, Davis and Haltiwanger give us several explanations. First, reallocation rates within the retail sector have declined, and since retail started out with one of the highest rates of reallocation, this drags down the average for the economy. Second, more workers tend to be with older firms, which have less turnover than young firms. Last is the above-mentioned shift towards an older workforce that tends to shift jobs less than younger workers.

Fine, but what is the underlying explanation? Davis and Haltiwanger offer several possibilities. One is increased occupational licensing. In the 1950s, only about 5% of workers needed a government (state or federal) license to do their job. By 2008, that figure was 29%. So it can be incredibly hard to reallocate to a new job or sector of work if you have to fulfill some kind of licensing requirement (which could involve up to 2,000 hours of training along with fees). Second is a decreased ability of firms to fire at will. Starting in the 1980s, there were a series of court decisions that made it harder for firms to simply fire someone, which makes it both less likely for people to leave jobs and less likely for firms to hire new people. Both act to lower reallocation between jobs. Third is employer-provided health insurance, which generates a kind of “job lock” whereby people are unwilling to move jobs because they don’t want to lose, or create a gap in, coverage.

Last is the information revolution, which may have had perverse effects on reallocation. We might expect that IT allows more efficient reallocation, as people can look for jobs more easily (e.g. Monster.com, LinkedIn) and firms can cast a wider net for applicants. But IT also allows firms to screen much more effectively, as they have access to credit reports, criminal records, and the like that would have been prohibitive to acquire in the past.

So we appear to have, on two fronts, declining dynamic reallocation in the U.S. This certainly contributes to a slowdown in productivity growth, and may be a better explanation than the “running out of ideas from the IT revolution” story that Gordon and Fernald talk about. The big worry is that, if it is regulation-creep, as Davis and Haltiwanger suspect, we don’t know if or when the slowdown in reallocation will end.

In summary, reading John Haltiwanger papers can make you have a bad day.

Meta-post on Robots and Jobs

I don’t know that I have anything particularly original to say on the worry that robots will soon replace humans in many more tasks, and what implications this has for wages, living conditions, income distribution, the introduction of the Matrix or Skynet, or anything else. So here I’ve just collected a few relevant pieces of information from around the inter-tubes that are useful in thinking about the issue.

Let’s start with some data on “routine” jobs and what is happening to them. Cortes, Jaimovich, Nekarda, and Siu have a recent VoxEU post (and associated paper) on the flows into routine versus non-routine work. In the 1980s, about 1/3 of American workers did “routine” work; now the number is only about 1/4. Routine work tended (and tends) to be middle-waged; it pays pretty well. What the authors find is that the decline in the share of people doing these middle-wage routine jobs is due to slower flows *in* to those jobs, not to faster flows *out*. That is, routine workers were not necessarily getting let go more rapidly; rather, companies were simply not hiring new routine workers.

Unsurprisingly, people with more education were better able to adapt to this. Higher education meant a higher likelihood of shifting into non-routine “cognitive” tasks, which also is a move up the wage scale (upper-middle wages, say). Perhaps more surprising is that women have been more likely, holding education constant, to move into these cognitive tasks. It is low-education males who represent the group that is failing to get routine middle-wage jobs. To the extent that these lower-educated males get work, it tends to be in “brawn” jobs, low-wage manual work.

This last fact is somewhat odd in the context of the robot-overlord thesis. Robots/computers are really good at doing routine tasks, but so far have not replaced manual labor. If there is a group that should have a lot to worry about, I’d think it would be low-education males, who could well be replaced as robots become more capable of heavy manual labor. One thought I have is that this indicates that manual work (think landscaping) is not as low-skill as routine tasks like data entry. I think there is more cognitive processing going on in these jobs than we tend to give them credit for (where to dig, how deep, should I move this plant over a little, what if I hit a root?, does this shrub look right over here, etc.), and that their wages are low simply because the supply of people who can do those jobs is so large.

Brad DeLong took on the topic by considering Peter Thiel’s comments in the Financial Times. Thiel is relatively optimistic about the arrival of robots – he uses the computer/human mix at Paypal to detect fraud as the example of how smarter machines or robots will benefit workers. Brad worries that Thiel is making a basic error. Yes, machines relieve us of drab, boring, repetitive work. But whether workers benefit from that (as opposed to the owners of the machines) depends not on the average productivity of that worker, but on the productivity of the marginal worker who is not employed. That is, if I can be replaced at Paypal by an unemployed worker who has no other options, then my own wage will be low, regardless of how productive I am. By replacing human workers in some jobs, robots/machines drive up the supply of humans in all the remaining jobs, which lowers wages.

To keep wages high for workers, we will need to increase demand for human-specific skills. What are those? Brad likes to list six different types of tasks, and concludes that persuasion, motivation, and innovation are the ones that will be left for humans to do. Is there sufficient demand for those skills to keep wages elevated? I don’t know.

David Autor has a recent working paper that is relatively optimistic about robots/machines. He thinks there is more complementarity between machines and humans than we assume, so it echoes Thiel’s optimism. Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know. And if we cannot fully explain exactly what we know how to do (whether that is identifying a face in a crowd, making scrambled eggs, writing an economics paper, or building a piece of furniture), then we cannot possibly program a machine to do it either. The big limit of machines, for Autor, is that they have no tacit knowledge. Everything must be precisely specified for them to work. There is no “feel” to their actions, so to speak. As long as there are tasks like that, robots cannot replace us, and it will require humans – in conjunction with machines, maybe – to actually do lots of work. Construction workers are his example.

But I am a little wary of that example. Yes, construction workers today work with a massive array of sophisticated machines, and they serve as the guidance systems for those machines, and without construction workers nothing would get done. But that’s a statement about average product, not marginal product. The wage of those workers could still fall because better machines could make *anyone* capable of working at a construction site, and the marginal product of any given worker is very low. Further, adding better or more construction machines can reduce the number of construction workers necessary, which again only floods the market with more workers, lowering the marginal product.

Autor gets interviewed in this video from Ryan Avent at the Economist. It’s a fairly good introduction to the ideas involved with robots replacing workers.

Bad Population Reporting

So I spotted this article in the Guardian, by one Damian Carrington, who gives us an example of how not to write about new research. The article is about the release of this paper in Science, by Gerland et al.

Let’s take a little walk through the article to see how Mr. Carrington mangles nearly everything important about this research.

  1. The title of the article is “World population to hit 11bn in 2100”. The Gerland et al article is about using Bayesian techniques to arrive at confidence intervals for the size of global population, meaning that their article is about how much uncertainty there is in a population projection. The entire point of their work is that statements like “World population to hit 11bn in 2100” are stupid because they do not tell you about the uncertainty in that estimate.
  2. “A ground-breaking analysis….” is how the Gerland et al article is introduced. I’m sure Gerland and his co-authors are very capable scholars. But this is not ground-breaking analysis. How do I know that? Because they base all their work on the existing 2012 United Nations Population Projections, so they do not fundamentally change our estimates of future population. What they do add is the Bayesian analysis that gives more accurate confidence intervals for those United Nations projections. This technique was developed by some of the co-authors a while ago; see this paper. *That* technique could arguably be called ground-breaking, but the current Science paper is not.
  3. “The work overturns 20 years of consensus that global population, and the stresses it brings, will peak by 2050 at about 9bn people.” What consensus? The UN’s 2012 population projection that this Science article is based on predicts that population will be 11 billion by 2100, and that it will still be growing at that point. The UN population projection in 2010 also predicted population would be 11 billion in 2100. Population projections by the UN from around 2000 suggested that population would hit 9 billion in 2050, but never said it would max out there. The UN just didn’t project populations out past 2050 back then.
  4. “Lack of healthcare, poverty, pollution and rising unrest and crime are all problems linked to booming populations, he [Prof. Adrian Raftery, U. of Washington] said.” Mr. Carrington does not feel compelled to support these statements by citing any evidence that (a) the links exist and (b) are causal. I’d like to think that Prof. Raftery at least tried to provide this kind of evidence, but we don’t know.
  5. “The research, conducted by an international team including UN experts, is published in the journal Science and for the first time uses advanced statistics to place convincing upper and lower limits on future population growth.” Statistics? No – advanced statistics. See the difference? One is more advanced. Convincing? Convincing of what? That upper and lower limits exist? Of what relevance is it that the team was international? Do the advanced statistics require people with different passports to run the code? This is just such a ridiculous sentence. The stupid, it burns us.
  6. “But the new research narrows the future range to between 9.6bn and 12.3bn by 2100. This greatly increased certainty – 80% – allowed the researchers to be confident that global population would not peak any time during the 21st century.” They didn’t increase certainty at all. The prior UN projections were mechanical, and had no uncertainty associated with them at all. Gerland et al have created confidence intervals where none existed. This didn’t increase certainty; it quantified it.

By the way, it took me all of about 10 minutes on the Google machine to find the references I just cited here, and to look up the old UN projections. And I didn’t use any special PhD kung-fu to do this. So I don’t want to hear “well, science is hard for the layman to understand”. This is click-bait crap reporting, period.

Is Progress Bad?

I saw this article on the Atlantic by Jeremy Caradonna, a professor of history at the U. of Alberta. It’s about whether “progress” is good for humanity. The article takes particular aim at “progress” as a concept associated with sustained economic growth since the Industrial Revolution.

The first point to make is that Caradonna mischaracterizes the conclusions that economic historians and growth economists make about the moral character of growth after the Industrial Revolution. None of them, at least the ones I’ve read, and I’ve read a lot of them, have ever suggested that humanity is morally superior for having achieved sustained growth. Here’s the quote he pulls from Joel Mokyr’s The Enlightened Economy

Material life in Britain and in the industrialized world that followed it is far better today than could have been imagined by the most wild-eyed optimistic 18th-century philosophe—and whereas this outcome may have been an unforeseen consequence, most economists, at least, would regard it as an undivided blessing.

And here is Caradonna’s reaction to that quote:

The idea that the Industrial Revolution has made us not only more technologically advanced and materially furnished but also better for it is a powerful narrative and one that’s hard to shake.

The only sense in which Mokyr means “we’re better for it” is precisely that it made us more materially furnished. We are superior in real consumption. Full stop. Nowhere does Mokyr make a claim that this superiority in real consumption implies any kind of superiority in virtue, morality, or ethics.

We are shockingly, amazingly well off on a material basis compared to our ancestors, not only those of 200 years ago but even those of thirty years ago. This despite the fact that the population of the earth is now roughly 7-8 times higher than it was when the Industrial Revolution started.

So Caradonna has set up a straw man to take down. Fine, he’s hardly the first person to do that. What’s his real argument, then? Let me take a stab at summarizing it. After the Industrial Revolution, bad things happened in addition to good things. Caradonna thinks those bad things are particularly bad, and thinks we should give up some of the good things (gas-powered cars) in order to alleviate the bad things (global warming).

Okay. Great. I’m with you Prof. Caradonna. Seriously, I’m in for a carbon tax and expanded spending on alternative energy R-and-D. I want to drive around either an electric car, or one powered by hydrogen, or using gas produced by algae that actually pulls CO2 from the atmosphere.

But the idea that economic growth – progress – is somehow the enemy of that goal is misguided. To paraphrase Homer Simpson: “To economic growth, the cause of – and solution to – all of life’s problems”. Economic growth created the conditions that allowed us to alleviate evils like starvation and infant mortality while at the same time giving us more clothes, better housing, faster ways to get around, means of communication, Diet Coke, and gigantic-ass TVs. It also bequeathed us technologies that heat up the atmosphere. And that sucks. But it sucks less than starving.

Economic growth means we’ve got a new kind of constrained optimization problem to solve in the 21st century: how to maximize real consumption while minimizing environmental damage. Caradonna has a particular type of solution to that optimization in mind, one tilted more towards minimizing damage than maximizing consumption. But the world seems to be making a different kind of choice, and so he’s trying to persuade others to adopt his solution. More power to him. There is no one who can tell him (particularly me) that his choice of how to solve that optimization problem is wrong. It’s just about preferences.

But anything that alleviates the constraints in this problem is welcome, regardless of preferences. Innovations that mitigate global warming (or other environmental concerns) would help us regardless of our exact preferred solution. If we can invent hyper-efficient spray-on solar panels, that would be an incredible boon to humanity. Cheap, clean power. Everyone wins. You know what I would call something like that? Progress.

The underlying issue is not a concept like progress or economic growth, but the fact that constraints exist.

Robots as Factor-Eliminating Technical Change

A really common thread running through the comments I’ve gotten on the blog involves the replacement of labor. This is tied into the question of the impact of robots/IT on labor market outcomes, and the stagnation of wages for lots of laborers. An intuition that a lot of people have is that robots are going to “replace” people, and this will mean that wages fall and more and more of output gets paid to the owners of the robots. Just today, I saw this figure (h/t to Brad DeLong) from the Center on Budget and Policy Priorities, which shows wages for the 10th and 20th percentile workers in the U.S. being stagnant over the last 40 years.
[Figure: CBPP wage trends for 10th and 20th percentile workers]

The possible counter-arguments to this are that even with robots, we’ll just find new uses for human labor, and/or that robots will relieve us of the burden of working. We’ll enjoy high living standards without having to work at it, so why worry?

I’ll admit that my usual reaction is of the “but we will just find new kinds of jobs for people” type. Even though capital goods like tractors and combines replaced a lot of human labor in agriculture, we now employ people in other industries, for example. But this assumes that labor is somehow still relevant somewhere in the economy, and maybe that isn’t true. So what does “factor-eliminating” technological change look like? As luck would have it, there’s a paper by Pietro Peretto and John Seater called… “Factor-Eliminating Technical Change”. Peretto and Seater focus on the dynamic implications of the model for endogenous growth, and on whether factor-eliminating change can produce sustained growth in output per worker. They find that it can under certain circumstances. But the model they set up is also a really useful tool for thinking about what the arrival of robots (or further IT innovations in general) may imply for wages and income distribution.

I’m going to ignore the dynamics that Peretto and Seater work through, and focus only on the firm-level decision they describe.

****If you want to skip technical stuff – jump down to the bottom of the post for the punchline****

Firms have a whole menu of available production functions to choose from. The firm-level functions all have the same structure, {Y = A X^{\alpha}Z^{1-\alpha}}, and vary only in their value of {\alpha \in (0,\overline{\alpha})}. {X} and {Z} are different factors of production (I’ll be more specific about how to interpret these later on). {A} is a measure of total factor productivity.

The idea of having different production functions to choose from isn’t necessarily new, but the novelty comes when Peretto/Seater allow the firm to use more than one of those production functions at once. A firm that has some amount of {X} and {Z} available will choose to do what? It depends on the amount of {X} versus the amount of {Z} they have. If {X} is really big compared to {Z}, then it makes sense to only use the maximum {\overline{\alpha}} technology, so {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}. This makes some sense. If you have lots of some factor {X}, then it only makes sense to use a technology that uses this factor really intensely – {\overline{\alpha}}.

On the other hand, if you have a lot of {Z} compared to {X}, then what do you do? You do the opposite – kind of. With a lot of {Z}, you want to use a technology that uses this factor intensely, meaning the technology with {\alpha=0}. But if you use only that technology, then your {X} sits idle, useless. So you’ll run an {X}-intense plant as well, and that requires a little of the {Z} factor to operate. So you’ll use two kinds of plants at once – a {Z}-intense one and an {X}-intense one. You can see their paper for derivations, but in the end the production function when you have lots of {Z} is

\displaystyle  Y = A \left(Z + \beta X\right) \ \ \ \ \ (1)

where {\beta} is a slurry of terms involving {\overline{\alpha}}. What Peretto and Seater show is that over time, if firms can invest in higher levels of {\overline{\alpha}}, then by necessity it will be the case that we have “lots” of {Z} compared to little {X}, and we use this production function.
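For the curious, here is a sketch of where that linear form comes from, under the simplifying assumption that the firm runs exactly two plants, one with {\alpha = 0} and one with {\alpha = \overline{\alpha}} (the paper handles the general case). Send all of {X} to the {\overline{\alpha}} plant along with an amount {z_1} of the {Z} factor, and put the remaining {Z - z_1} in the {\alpha = 0} plant:

\displaystyle  Y = A\left(Z - z_1\right) + A X^{\overline{\alpha}} z_1^{1-\overline{\alpha}}.

The first-order condition for {z_1} gives {z_1 = (1-\overline{\alpha})^{1/\overline{\alpha}} X}, and plugging that back in delivers {Y = A(Z + \beta X)} with {\beta = \overline{\alpha}(1-\overline{\alpha})^{(1-\overline{\alpha})/\overline{\alpha}}} in this stripped-down two-plant version; the paper’s {\beta} collects a few more terms.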

What’s so special about this production function? It’s linear in {Z} and {X}, so their marginal products do not decline as you use more of them. More importantly, their marginal products do not rise as you acquire more of the other input. That is, the marginal product of {Z} is exactly {A}, no matter how much {X} we have.

What does this possibly have to do with robots, stagnant wages, and the labor market? Let {Z} represent labor inputs, and {X} represent capital inputs. This linear production function means that as we acquire more capital ({X}), this has no effect on the marginal product of labor ({Z}). If we have something resembling a competitive market for labor, then this implies that wages will be constant even as we acquire more capital.

That’s a big departure from the typical concept we have of production functions and wages. The typical model is more like Peretto and Seater’s case where {X} is really big, and {Y = A X^{\overline{\alpha}}Z^{1-\overline{\alpha}}}, a typical Cobb-Douglas. What’s true here is that as we get more {X}, the marginal product of {Z} goes up. In other words, if we acquire more capital, then wages should rise as workers get more productive.
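A quick way to see the difference between the two regimes is to compute the marginal product of {Z} numerically in each one. Again, the parameter values here are purely illustrative:

```python
A, alpha_bar, beta = 1.0, 0.6, 0.4  # illustrative values only
Z, h = 10.0, 1e-6                   # labor input; step for finite difference

def mpz_linear(X):
    # Y = A*(Z + beta*X): finite-difference marginal product of Z
    return (A * (Z + h + beta * X) - A * (Z + beta * X)) / h

def mpz_cobb_douglas(X):
    # Y = A * X**alpha_bar * Z**(1 - alpha_bar)
    f = lambda z: A * X**alpha_bar * z**(1 - alpha_bar)
    return (f(Z + h) - f(Z)) / h

for X in [10.0, 100.0, 1000.0]:
    print(X, round(mpz_linear(X), 3), round(mpz_cobb_douglas(X), 3))
# Linear technology: the marginal product of Z is stuck at A = 1.0,
# no matter how large X gets.
# Cobb-Douglas: the marginal product of Z rises as X grows.
```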

The Peretto/Seater setting says that, at some point, technology will progress to the point that wages stop rising with the capital stock. Wages can still go up with general total factor productivity, {A}, sure, but just acquiring new capital will no longer raise wages.

While wages are stagnant, this doesn’t mean that output per worker is stagnant. Labor productivity ({Y/Z}) in this setting is

\displaystyle  \frac{Y}{Z} = A \left(1 + \beta \frac{X}{Z}\right). \ \ \ \ \ (2)

If capital per worker ({X/Z}) is rising, then so is output per worker. But wages will remain constant. This implies that labor’s share of output is falling, as

\displaystyle  \frac{wZ}{Y} = \frac{AZ}{A \left(Z + \beta X\right)} = \frac{Z}{\left(Z + \beta X\right)} = \frac{1}{1 + \beta X/Z}. \ \ \ \ \ (3)

With the ability to use multiple types of technologies, as capital is acquired labor’s share of output falls.
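Putting equations (2) and (3) side by side in a small sketch (illustrative numbers again) shows the whole pattern at once: output per worker rises with capital per worker, the wage sits at {A}, and labor’s share shrinks:

```python
A, beta = 1.0, 0.4  # illustrative values only

for x_per_z in [0.0, 1.0, 5.0, 25.0]:
    productivity = A * (1 + beta * x_per_z)  # equation (2): Y/Z
    wage = A                                 # marginal product of Z, always A
    labor_share = 1 / (1 + beta * x_per_z)   # equation (3): wZ/Y
    print(x_per_z, productivity, wage, labor_share)
# Y/Z climbs with capital per worker, the wage never moves,
# and labor's share falls toward zero.
```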

Okay, this Peretto/Seater model gives us an explanation for stagnant wages and a declining labor share in output. Why did I present this using {X} for capital and {Z} for labor, rather than their traditional {K} and {L}? Mainly because the definitions of what counts as “labor” and what counts as “capital” are not fixed. “Capital” might include human as well as physical capital, and so “labor” might mean just unskilled labor. And we definitely see that unskilled labor’s wage is stagnant, while college-educated wages have tended to rise.

***** Jump back in here if you skipped the technical stuff *****

The real point here is that whether technological change is good for labor or not depends on whether labor and capital (i.e. robots) are complements or substitutes. If they are complements (as in traditional conceptions of production functions), then adding robots will raise wages, and won’t necessarily lower labor’s share of output. If they are substitutes, then adding robots will not raise wages, and will almost certainly lower labor’s share of output. The factor-eliminating model from Peretto and Seater says that firms will always invest in more capital-intense production functions, and that this will inevitably make labor and capital substitutes. We happen to live in the period of time in which this shift to being substitutes is taking place. Or one could argue that it has already taken place, given the stagnant wages for unskilled workers, at least, from 1980 onwards.

What we should do about this is a different question. There is no equivalent mechanism or incentive here that would drive firms to make labor and capital complements again. From the firm’s perspective, having labor and capital as complements limits flexibility, because the productivity of each factor then depends on the other. Firms would rather have the marginal products of robots and people independent of one another. So once we reach the robot stage of production, we’re going to stay there, absent a policy that actively prohibits certain types of production. The only way to raise labor’s share of output once we get the robots is through straight redistribution from robot owners to workers.

Note that this doesn’t mean that labor’s real wage is falling. Workers still have jobs, and their wages can still rise if there is total factor productivity change. But that won’t change the share of output that labor earns. I guess a big question is whether the increases in real wages from total factor productivity growth are sufficient to keep workers from grumbling about the smaller share of output that they earn.

I for one welcome….you know the rest.

Innovation does not equal GDP Growth

I’m way behind on this (it came out August 8th), but Joel Mokyr posted an op-ed in the Wall Street Journal about being optimistic regarding growth. I liked this particular passage:

The responsibility of economic historians is to remind the world what things were like before 1800. Growth was imperceptibly slow, and the vast bulk of the population was so poor that a harvest failure would kill millions. Almost half the babies born died before reaching age 5, and those who made it to adulthood were often stunted, ill and illiterate.

I’d like to think that growth economists are also here to spread this message. It’s easy to be pessimistic about the near-term economic future when we are slogging our way slowly out of a terrible recession. But extrapolating from the current situation to say that long run sustained growth is over is taking it too far.

Mokyr (and we mere growth economists) are more optimistic about things. Why? [Because we’re tenured professors who can’t be fired. But that’s only part of it.] Because the ultimate source of economic growth over history has been technological innovation, and there is still essentially infinite scope for this to continue. Mokyr lays out a long list of innovations that are coming down the pipeline: driverless cars, nanotechnologies, materials science, biofuels, etc. etc. We aren’t running out of ideas, and just because you or I can’t think of what they could possibly invent anymore doesn’t mean that other people aren’t busy inventing things.

But will these new innovations really provide a boost to GDP? Maybe not, but that’s a failure of GDP, not of innovation. Let’s give the mike to Mokyr:

Many new goods and services are expensive to design, but once they work, they can be copied at very low or zero cost. That means they tend to contribute little to measured output even if their impact on consumer welfare is very large. Economic assessment based on aggregates such as gross domestic product will become increasingly misleading, as innovation accelerates. Dealing with altogether new goods and services was not what these numbers were designed for, despite heroic efforts by Bureau of Labor Statistics statisticians.

We measure GDP because we can, and because it gives us a good indication of very short-run variations in economic activity. But it is only a measure of “currently produced goods and services”. That is, GDP measures the new products or services provided in a specific window of time (e.g. the 3rd quarter of 2014, or all of 2013). If all the effort in producing a new product comes in development, but it is then copied for free, this means that there is a one-time contribution to GDP in the year it was developed, and then nothing afterwards.
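A toy accounting sketch makes the point. The numbers below are mine and purely illustrative: suppose development costs $100 of effort in year one, and compare a good that must be physically produced every year with one that is copied for free after development:

```python
# Purely illustrative numbers; not data from anywhere
years = range(1, 6)

# Good A: $100 of development in year 1, then $50 of production sold every year
gdp_recurring = [150 if t == 1 else 50 for t in years]

# Good B: $100 of development in year 1, then copied at zero cost afterwards
gdp_copied = [100 if t == 1 else 0 for t in years]

print(sum(gdp_recurring))  # 350: shows up in measured GDP year after year
print(sum(gdp_copied))     # 100: a one-time blip, then nothing
# Consumers might get just as much use out of good B,
# but GDP makes good A look 3.5 times bigger over five years.
```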

Things like refrigerators, Diet Coke, and cars contribute to GDP every period because we have to make new versions of them over and over again. But in one sense that is a bug, not a feature. Imagine if, having invented Diet Coke, you could make copies for free. That would lower GDP, as Coca-Cola would drop to essentially zero revenue from here forward. But it’s demonstrably better, right? Free Diet Coke? Where do I put in the IV line?

Diet Coke is a good example here. Let’s say that you could replicate the physical inputs of Diet Coke for free, but that Coca-Cola still owned the recipe, and you had to pay them to use it. This would still lower GDP, as Coca-Cola would no longer be earning anything from the physical production of Diet Coke, only from renting out the recipe each time you wanted a Diet Coke. This is still a win, even though GDP goes down. Lots of current innovations are like making Diet Coke for free, but owning the recipe. They are worthwhile despite the fact that they do not necessarily contribute much to GDP, and might even detract from it.