Growth Links for Halloween

I don’t know if these are scary or not. Anyway, they’ve been sitting as open tabs in Chrome for too long, and I haven’t thought of anything clever to say about them, so here they are straight.

  1. Our World in Data. This site has some really top-notch graphics on a number of topics: population, war and peace, poverty, health. I ended up there because of a specific graph regarding the decline in war deaths since 1945, but the whole site is excellent.
  2. Gavyn Davies’ post on whether growth is permanently lower. The post is a good example of how you have to be clear about whether you are talking about “growth in aggregate GDP” or “growth in GDP per capita”. Growth in aggregate GDP is falling, and has been falling for decades in most rich countries, but that is due in large part to declining population growth rates.
  3. Morgan Spurlock has a set of web videos explaining the economy in simple terms. You can quibble about lots of little points here, but they are funny.
  4. Lant Pritchett on why we shouldn’t get hung up on thresholds for poverty. It’s a response to an SNL sketch spoofing those “39 cents a day” commercials asking you to give money, where all the people in the village keep telling the guy in the commercial to ask for more money. h/t to Chris Blattman for the link.

Scale, Profits, and Inequality

After my post last week on inequality, I got a number of (surprisingly reasonable) responses. I pulled one line out of a recent comment, not to call out that particular commenter, but because it encapsulates an argument for *not* caring about inequality.

“Gates and the Waltons really did probably add more value to humanity than the janitor at my school.”

The general argument here is about incentives. Without the possibility of massive profits, people like Bill Gates or Sam Walton will not bother to innovate and create Microsoft and Walmart. So we should not raise taxes because those people deserve, in some sense, the fruits of their genius. More important, without them innovating, the economy wouldn’t grow.

But if we take seriously the incentives behind innovation, then it isn’t simply the genius of the individual that matters for growth. The scale of the economy is equally relevant. In any typical model of innovation and growth, the profits of a firm are going to be something like Profits = Q(Y)(P-MC), where (P-MC) is price minus marginal cost. Q(Y) is the quantity sold, and this depends on the aggregate size of the economy, Y.

The markup of price over marginal cost, (P-MC), is going to depend on how much market power you have, and on the nature of demand for your product. This markup depends on your individual genius, in the sense that it depends on how indispensable people find your product. Apple is probably the better example here. They sell iPhones for way over marginal cost because they’ve convinced everyone through marketing and design that substitutes for iPhones are inferior.

The scale term, Q(Y), does not depend on genius. It depends on the size of the market you have to sell to. If we stuck Steve Jobs, Jon Ive, and some engineers on a remote island, they wouldn’t earn any profits no matter how many i-Devices they invented, because there would be no one to sell them to.
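
To make the decomposition concrete, here is a minimal sketch in Python. The demand function Q(Y) = share·Y/P is a hypothetical stand-in, not something from the model above: consumers spend a fixed share of aggregate income Y on the product. The point is only that, holding the markup fixed, profits scale one-for-one with the size of the economy.

```python
# Minimal sketch of Profits = Q(Y) * (P - MC).
# The demand function Q(Y) = share * Y / P is a hypothetical stand-in:
# consumers spend a fixed share of aggregate income Y on the product.

def profits(Y, share, P, MC):
    """Profits as quantity sold (the scale term) times the markup (P - MC)."""
    Q = share * Y / P      # scale: depends on the size of the economy, Y
    markup = P - MC        # "genius": depends on market power and demand
    return Q * markup

# Same product, same markup, two very different market sizes.
island = profits(Y=1e6,  share=0.01, P=500, MC=300)   # a tiny, isolated economy
usa    = profits(Y=1e13, share=0.01, P=500, MC=300)    # a huge economy

print(island)   # 4,000.0
print(usa)      # 40,000,000,000.0 -- ten million times the market, ten million times the profits
```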

People like Gates and the Waltons earn profits on the scale effect of the U.S. economy, which they did not invent, innovate on, or produce. So the “rest of us”, like the janitor mentioned above, have some legitimate reason to ask whether those profits are best used in remunerating Bill Gates and the Walton family, or could be put to better use.

There isn’t necessarily any kind of efficiency loss from raising taxes on Gates, Walton, and others with large incomes. They may, on the margin, be slightly less willing to innovate. But if the taxes are put to use expanding the scale of the U.S. economy, then we might easily increase innovation through the scale effect on profits. Investing in health, education, and infrastructure will all raise the aggregate size of the U.S. economy, and make innovation more lucrative. Even straight income transfers can raise the effective scale of the U.S. economy by transferring purchasing power to people who will spend it.

Can we argue about exactly how much of the profits are due to “genius” (the markup) and how much to scale? Sure, there is no precise answer here. But you cannot dismiss the idea of taxing high-income “makers” because their income represents the fruits of their individual genius. It doesn’t. Their incomes derive from a combination of ability and scale. And scale doesn’t belong to individuals.

The value-added of “the Waltons” is particularly relevant here. Sam Walton innovated, but the profits of Walmart are almost entirely derived from the scale of the U.S. (and world) economy. It’s the presence of thousands and thousands of those janitors in the U.S. that generates a huge portion of Walmart’s profits, not the Walton family’s unique genius.

Alice Walton is worth around $33 billion. She never worked for Walmart. She is a billionaire many times over because her dad was smart enough to take advantage of the massive scale of the U.S. economy. I’m not willing to concede that Alice has added more value to humanity than anyone in particular. So, yes, I’ll argue that Alice should pay a lot more in taxes than she does today. And no, I’m not afraid that this will prevent innovation in the future, because those taxes will help expand the scale of the economy and incent a new generation of innovators to get to work.

Random Growth Links

Just a few interesting things to think about:

  1. I feel like the Einstellung effect is something that could be worked into some sort of model of technology adoption. “…the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution.”
  2. An example of why I’m still a techno-optimist in general. “Dubbed the compact fusion reactor (CFR), the device is conceptually safer, cleaner and more powerful than much larger, current nuclear systems that rely on fission, the process of splitting atoms to release energy. … The superhot plasma is controlled by strong magnetic fields that prevent it from touching the sides of the vessel and, if the confinement is sufficiently constrained, the ions overcome their mutual repulsion, collide and fuse.”
  3. Don’t confuse welfare (or utility) with happiness. People trying to maximize welfare take happiness into account, but not just happiness.
  4. Related to this, never underestimate the endowment effect. People hate losing *way* more than they love winning.
  5. Malnutrition and the necessity of pollinators (e.g. bees) are highly correlated. “Crops vary in the degree to which they benefit from pollinators, and many of the most pollinator-dependent crops are also among the richest in micronutrients essential to human health. This study examines regional differences in the pollinator dependence of crop micronutrient content and reveals overlaps between this dependency and the severity of micronutrient deficiency in people around the world.”
  6. Do we dare question economic growth? Yes, yes we do. There are all sorts of reasons that progress and technological advancement do not necessarily imply GDP growth – and we could certainly see future growth involving declining input usage rather than expanding output production.

Why I Care about Inequality

“Inequality” is a term that has been tossed about quite a bit, from the Occupy movement, to Piketty’s book, to debates over the minimum wage, to Greg Mankiw’s defense of the 1%. Just today Mark Thoma published an op-ed on inequality. A few days ago John Cochrane had a post about why we care about inequality.

One of Cochrane’s main points is that the term “inequality” has been used in so many contexts, and to refer to so many different things, that it is losing meaning. I’ll agree with him on this completely. If you want to talk about “inequality”, you have to be very clear about what precisely you mean.

There are three things that people generally mean by “inequality”:

  1. The 1% versus the 99%. That is, the difference in average annual income of the top 1% of all households versus the average annual income of the bottom 99%.
  2. The stagnation of real wages at and below the median.
  3. The college premium, or the gap in earnings between those who finished college and those who did not (or did not attend).

When I say I care about inequality, I mean mainly the second – the stagnation of median wages – but this is going to take me into territory covered by the first – the growth in top 1% income. There are things to say about the college premium, but I’m not going to say them here.

Why do I care about the stagnation of median wages?

  • Because I’m going to be better off if everyone shares in prosperity. I want services like education, health care, and home repairs to be readily available and cheap. The way to achieve that is to invest in developing a large pool of skilled workers – teachers, nurses, electricians, carpenters. Those at the bottom of the distribution don’t have sufficient income to make those investments privately, so that requires public provision of those investments (i.e. schools) or transfers to support private investments. You want to have an argument about whether public provision or transfers are more efficient? Okay. But the fact that there is an argument on implementation doesn’t change the fact that stagnant wages are a barrier to these investments right now.
  • Because people at the bottom of the income distribution aren’t going to disappear. We can invest in these people, or we can blow our money trying to shield ourselves from them with prisons, police officers, and just enough income support to keep them from literally starving. I vote for investment.

One response to this is that I don’t care about inequality per se, I care about certain structural issues in labor markets, education, and law enforcement. So why don’t we address those fundamental structural issues, rather than waving our hands around about inequality, which is meaningless? Because these structural issues are a problem of under-investment. The current allocation of income/wealth across the population is not organically producing enough of this investment, so that allocation is a problem. In short, if you care about these structural issues, you cannot escape talking about the distribution of income/wealth. In particular, you have to talk about another kind of inequality, the 1%/99% kind.

Let me be very clear about this too, because I don’t want anyone to think I’m trying to be clever and hide something. I would take some of the income and/or wealth from people with lots of it, and then (a) give some of that to currently poor people so they can afford to make private investments and (b) use the rest to invest in public good provision like education, infrastructure, and health care.

Would I use a pitchfork and torches to do this? No. Would I institute “confiscatory taxation” on rich people? No, that’s a meaningless term that Cochrane and others use to suggest that somehow rich people are going to be persecuted for being rich. I am talking about raising marginal income tax rates and estate tax rates back to the archaic levels seen in the 1990s.

Why do I not feel bad about taxing rich people further?

  • Because rich people spend their money on useless stuff. Not far from where I live, there is a new house going up. It will be over 10,000 square feet when it is complete. 2,500 of those square feet will be a closet that has two separate floors, one for regular clothes and one for formal wear. If that is what you are spending your money on, then yes, I believe raising your taxes to fund education, infrastructure, and health spending is a net gain for society.

    Don’t poor people spend money on stupid stuff? Of course they do. Isn’t the government an inefficient provider of some of these goods, like education? Maybe. But even if both those things are true, public investment and/or transfers to poor people will result in some net investment that I’m not currently getting from the mega-closet family. I’m happy to talk about alternative institutional settings that would ensure a greater proportion of the funds get spent on actual investments.

  • Because I’m not afraid that some embattled, industrious core of “makers” will decide to “go Galt” and drop out of society, leaving the rest of us poor schleps to fend for ourselves. Oh, however will we figure out how to feed ourselves without hedge fund managers around to guide us?

    This is actually a potential feature of higher marginal tax rates, by the way, not a bug. You’re telling me that a top tax rate at 45% will convince a number of wealthy self-righteous blowhards (*cough* Tom Perkins *cough*) to flee the country? Great. Tell me where they live, I’ll help them pack. And even if these self-proclaimed “makers” do stop working, the economy is going to be just fine. How do I know? Imagine that the entire top 1% of the earnings distribution left the country, took all of their money with them, and isolated themselves on some Pacific island. Who’s going to starve first, them or the remaining 300-odd million of us left here? The income and wealth of the top 1% have value only because there is an economy of another 300-odd million people out there willing to provide services and make goods in exchange for some of that income and wealth.

So, yes, I care about 1%/99% inequality itself, because I cannot count on the 1% to privately make good investment decisions regarding the human capital of the bottom 99%. And the lack of investment in the human capital of the bottom part of the income distribution is a colossal waste of resources.

The Slowdown in Reallocation in the U.S.

One of the components of productivity growth is reallocation. From one perspective, we can think about the reallocation of homogeneous factors (labor, capital) from low-productivity firms to high-productivity firms, which includes low-productivity firms going out of business, and new firms getting started. A different perspective is to look more closely at the shuffling of heterogeneous workers between (relatively) homogeneous firms, with the idea being that workers may be more productive in one particular environment than in another (i.e. we want people good at doctoring to be doctors, not lawyers). Regardless of how exactly we think about reallocation, the more rapidly we can shuffle factors into more productive uses, the better for aggregate productivity, and the higher will be GDP. However, evidence suggests that both types of reallocation have slowed down recently.

Foster, Grim, and Haltiwanger have a recent NBER working paper on the “cleansing effect of recessions”. This is the idea that in recessions, businesses fail. But it’s the really crappy, low-productivity businesses that fail, so we come out of the recession with higher productivity. The authors document that in recessions prior to the Great Recession, downturns tend to be “cleansing”. Job destruction rates rise appreciably, but job creation rates remain about the same. Unemployment occurs because it takes some time for those people whose jobs were destroyed to find newly created jobs. But the reallocation implied by this churn enhances productivity – workers are leaving low productivity jobs (generally) and then getting high productivity jobs (generally).
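
For reference, the job creation and destruction rates discussed here are standard calculations: employment gains at expanding or entering firms, losses at shrinking or exiting firms, each divided by average employment. A minimal sketch, with firm-level employment numbers made up purely for illustration, might look like this:

```python
# Sketch of standard job creation / destruction rates from firm-level employment.
# The numbers below are invented for illustration only.

employment_last_year = {"A": 100, "B": 50, "C": 80, "D": 0}    # firm D is an entrant
employment_this_year = {"A": 120, "B": 30, "C": 0,  "D": 40}   # firm C exits

gains = losses = avg_size = 0.0
for firm, before in employment_last_year.items():
    after = employment_this_year[firm]
    change = after - before
    gains += max(change, 0)               # jobs created at expanding / entering firms
    losses += max(-change, 0)             # jobs destroyed at shrinking / exiting firms
    avg_size += 0.5 * (before + after)    # average employment, the usual denominator

job_creation_rate = gains / avg_size       # ~0.29 in this toy example
job_destruction_rate = losses / avg_size   # ~0.48 in this toy example
print(job_creation_rate, job_destruction_rate)
```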

But the Great Recession was different. In the GR, job destruction rose by a little, but much less than in prior recessions. Job creation in the GR fell demonstrably, much more than in prior recessions. So again, we have unemployment as the people who have jobs destroyed are not able to pick up newly created jobs. But because of the pattern to job creation and destruction, there is little of the positive reallocation going on. People are not losing low productivity jobs, becoming unemployed, and then getting high productivity jobs. People are staying in low productivity jobs, and new high productivity jobs are not being created. So the GR is not “cleansing”. It is, in some ways, “sullying”. The GR is pinning people into *low* productivity jobs.

This holds for firm-level reallocation as well. In recessions prior to the GR, low productivity firms tended to exit, and high productivity firms tended to grow in size. So again, we had productivity-enhancing recessions. But again, the GR is different. In the GR, the rate of firm exit for low productivity firms did not go up, and the growth rate of high-productivity firms did not rise. The GR is not “cleansing” on this metric either.

Why is the GR so different? The authors don’t offer an explanation, as their paper is just about documenting these changes. Perhaps the key is that a financial crash has distinctly different effects than a normal recession. A lack of financing means that new firms cannot start, and job creation falls, leading to lower reallocation effects. A “normal” recession doesn’t involve as sharp a contraction in financing, so new firms can take advantage of others going out of business to get themselves going. Just an idea, I have no evidence to back that up.

[An aside: For the record, there is no reason that we need to have a recession for this kind of reallocation to occur. Why don’t these crappy, low-productivity firms go out of business when unemployment is low? Why doesn’t the market identify these crappy firms and compete them out of business? So don’t take Foster, Grim, and Haltiwanger’s work as some kind of evidence that we “need” recessions. What we “need” is an efficient way to reallocate factors to high productivity firms without having to make those factors idle (i.e. unemployed) for extended periods of time in between.]

In a related piece of work, Davis and Haltiwanger have a new NBER working paper that discusses changes in worker reallocation over the last few decades. They look at the rate at which workers turn over between jobs, and find that in general this rate has declined from 1980 to today. Some of this may be structural, in the sense that as the age structure and education breakdown of the workforce changes, there will be changes in reallocation rates. In general, reallocation rates go down as people age. 19-24 year olds cycle between jobs way faster than 55-65 year olds. Reallocation rates are also higher among high-school graduates than among college graduates. So as the workforce has aged and gotten more educated from 1980 to today, we’d expect some decline in job reallocation rates.

But what Davis and Haltiwanger find is that even after you account for these forces, reallocation rates for workers are declining. No matter which sub-group you look at (e.g. 25-40 year old women with college degrees), you find that reallocation rates are falling over time. So workers are flipping between jobs *less* today than they did in the early 1980s. Which is probably somewhat surprising, as my guess is that most people feel like jobs are more fleeting in duration these days, due to declines in unionization, etc., etc.

The worry that Davis and Haltiwanger raise is that lower rates of reallocation lower productivity growth, as mentioned at the beginning of this post. So what has caused this decline in reallocation rates across jobs (or across firms, as the first paper described)? From a pure accounting perspective, Davis and Haltiwanger give us several explanations. First, reallocation rates within the Retail sector have declined, and since Retail started out with one of the highest rates of reallocation, this drags down the average for the economy. Second, more workers tend to be with older firms, which have less turnover than young firms. Last, there is the above-mentioned shift towards an older workforce, which tends to switch jobs less often than younger workers.

Fine, but what is the underlying explanation? Davis and Haltiwanger offer several possibilities. One is increased occupational licensing. In the 1950s, only about 5% of workers needed a government (state or federal) license to do their job. By 2008, that figure was 29%. So it can be incredibly hard to reallocate to a new job or sector of work if you have to fulfill some kind of licensing requirement (which could involve up to 2000 hours of training along with fees). Second is a decreased ability of firms to fire at will. Starting in the 1980s there were a series of court decisions that made it harder for firms to just fire someone, which makes it both less likely for people to leave jobs, and less likely for firms to hire new people. Both act to lower reallocation between jobs. Third is employer-provided health insurance, which generates some kind of “job lock” where people are unwilling to move jobs because they don’t want to lose, or create a gap in, coverage.

Last is the information revolution, which may have had perverse effects on reallocation. We might expect that IT allows more efficient reallocation, as people can look for jobs more easily (e.g. Monster.com, LinkedIn) and firms can cast a wider net for applicants. But IT also allows firms to screen much more effectively, as they have access to credit reports, criminal records, and the like, that would have been prohibitive to acquire in the past.

So we appear to have, on two fronts, declining dynamic reallocation in the U.S. This certainly contributes to a slowdown in productivity growth, and may perhaps be a better explanation than “running out of ideas from the IT revolution” that Gordon and Fernald talk about. The big worry is that, if it is regulation-creep, as Davis and Haltiwanger suspect, we don’t know if or when the slowdown in reallocation would end.

In summary, reading John Haltiwanger papers can make you have a bad day.

Meta-post on Robots and Jobs

I don’t know that I have anything particularly original to say on the worry that robots will soon replace humans in many more tasks, and what implications this has for wages, living conditions, income distribution, the introduction of the Matrix or Skynet, or anything else. So here I’ve just collected a few relevant pieces of information from around the inter-tubes that are useful in thinking about the issue.

Let’s start with some data on “routine” jobs and what is happening to them. Cortes, Jaimovich, Nekarda, and Siu have a recent voxeu post (and associated paper) on the flows into routine versus non-routine work. In the 1980s, about 1/3 of American workers did “routine” work; now this number is only about 1/4. Routine work tended (and tends) to be middle-waged; it pays pretty well. What the authors find is that the decline in the share of people doing these middle-wage routine jobs is due to slower flows *in* to those jobs, but not due to faster flows *out*. That is, routine workers were not necessarily getting let go more rapidly, but companies were simply not hiring new routine workers.

Unsurprisingly, people with more education were better able to adapt to this. Higher education meant a higher likelihood of shifting into non-routine “cognitive” tasks, which also is a move up the wage scale (upper-middle wages, say). Perhaps more surprising is that women have been more likely, holding education constant, to move into these cognitive tasks. It is low education males who represent the group that is failing to get routine middle-wage jobs. To the extent that these lower-educated males get work, it tends to be in “brawn” jobs, low-wage manual work.

This last fact is somewhat odd in the context of the robot-overlord thesis. Robots/computers are really good at doing routine tasks, but so far have not replaced manual labor. If there was a group that should have a lot to worry about, I’d think it would be low-education males, who could well be replaced as robots become more robust to doing heavy manual labor. One thought I have is that this indicates that manual work (think landscaping) is not as low-skill as routine tasks like data entry. I think there is more cognitive processing going on in these jobs than we tend to give them credit for (where to dig, how deep, should I move this plant over a little, what if I hit a root, does this shrub look right over here, etc.), and that their wages are low simply because the supply of people who can do those jobs is so large.

Brad DeLong took on the topic by considering Peter Thiel‘s comments in the Financial Times. Thiel is relatively optimistic about the arrival of robots – he uses the computer/human mix at Paypal to detect fraud as the example of how smarter machines or robots will benefit workers. Brad worries that Thiel is making a basic error. Yes, machines relieve us of drab, boring, repetitive work. But whether workers benefit from that (as opposed to the owners of the machines) depends not on the average productivity of that worker, but on the productivity of the marginal worker who is not employed. That is, if I can be replaced at Paypal by an unemployed worker who has no other options, then my own wage will be low, regardless of how productive I am. By replacing human workers in some jobs, robots/machines drive up the supply of humans in all the remaining jobs, which lowers wages.

To keep wages high for workers, we will need to increase demand for human-specific skills. What are those? Brad likes to list six different types of tasks, and persuasion, motivation, and innovation are the ones left for humans to do. Is there sufficient demand for those skills to keep wages elevated? I don’t know.

David Autor has a recent working paper that is relatively optimistic about robots/machines. He thinks there is more complementarity between machines and humans than we think, so it echoes Thiel’s optimism. Much of Autor’s optimism stems from what he calls “Polanyi’s Paradox”, which is essentially that we are incapable of explaining in full what we know. And if we cannot fully explain exactly what we know how to do (whether that is identifying a face in a crowd, making scrambled eggs, writing an economics paper, or building a piece of furniture), then we cannot possibly program a machine to do it either. The big limit of machines, for Autor, is that they have no tacit knowledge. Everything must be precisely specified for them to work. There is no “feel” to their actions, so to speak. As long as there are tasks like that, robots cannot replace us, and it will require humans – in conjunction with machines, maybe – to actually do lots of work. Construction workers are his example.

But I am a little wary of that example. Yes, construction workers today work with a massive array of sophisticated machines, and they serve as the guidance systems for those machines, and without construction workers nothing would get done. But that’s a statement about average product, not marginal product. The wage of those workers could still fall because better machines could make *anyone* capable of working at a construction site, and the marginal product of any given worker is very low. Further, adding better or more construction machines can reduce the number of construction workers necessary, which again only floods the market with more workers, lowering the marginal product.

Autor gets interviewed in this video from Ryan Avent at the Economist. It’s a fairly good introduction to the ideas involved with robots replacing workers.

Re-basing GDP and Estimating Growth Rates

Leandro Prados de la Escosura recently posted a voxeu column about splicing real GDP series after re-basing. Re-basing of real GDP means adopting a new set of reference prices to value output in each year. Think of what Nigeria did last year, when they re-based from 1990 prices to 2010 prices, and all of a sudden measured real GDP was about twice as big.

de la Escosura’s point is that when we re-base and “retrocast” real GDP numbers to past years, we may obscure evidence of rapid economic growth. You should go read his post, and his associated paper, to understand his point in full. But let’s use the Nigerian 2013 re-basing to get the basic idea. Let’s say that in 1990 Nigeria produced 1000 units of food, and zero motorcycles. In 2010 Nigeria produced 1000 units of food again, but produced 200 motorcycles. So there clearly is real growth in output.

In 1990, the price of food was 1 naira per unit and motorcycles were 500 naira. 1990 real GDP in 1990 prices is 1000(1) + 0(500) = 1000. 2010 real GDP in 1990 prices is 1000(1) + 200(500) = 101,000. This is a dramatic growth rate of real GDP (10,100% actually).

After re-basing, what do we get? In 2010 the price of food was 2 naira per unit, and motorcycles were 100 naira each. So 1990 real GDP in 2010 prices is 1000(2) + 0(100) = 2000. 2010 real GDP in 2010 prices is 1000(2) + 200(100) = 22,000. Still a lot of growth, but only 1100%. The growth rate of real GDP between 1990 and 2010 went from over 10,000% to about 1100%, an order of magnitude drop. Growth looks much slower in Nigeria after re-basing.
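
The arithmetic above is easy to mechanize. A short script that re-values each year's quantities at each base year's prices reproduces the numbers in the example:

```python
# Re-valuing the example's quantities at each set of base-year prices.

quantities = {1990: {"food": 1000, "motorcycles": 0},
              2010: {"food": 1000, "motorcycles": 200}}
prices     = {1990: {"food": 1, "motorcycles": 500},
              2010: {"food": 2, "motorcycles": 100}}

def real_gdp(year, base_year):
    """Value the given year's quantities at the base year's prices."""
    return sum(q * prices[base_year][good] for good, q in quantities[year].items())

print(real_gdp(1990, 1990), real_gdp(2010, 1990))   # 1000, 101000  (1990 prices)
print(real_gdp(1990, 2010), real_gdp(2010, 2010))   # 2000, 22000   (2010 prices)
```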

Why? Because with dramatic economic growth came dramatic changes in relative prices. Motorcycles dropped severely in price, while food went up slightly. Combined, this makes food look more valuable compared to motorcycles by 2010. So valuing 1990 output in 2010 prices tends to make 1990 look pretty good, because in 1990 they had lots of food relative to motorcycles.

de la Escosura’s argument is that in 1990, for sure, the 1990 prices are the right way to value real GDP. Similarly, in 2010, for sure, the 2010 prices are the right way to value real GDP. So leave those years priced in their own prices. For the nineteen intervening years, 1991-2009 inclusive, compute their real GDP in both 1990 and 2010 prices. Then average those two estimates depending on how far from each year we are.

So for 1991, let real GDP be (1991 GDP at 1990 prices)(18/19) + (1991 GDP at 2010 prices)(1/19). For 1992, let real GDP be (1992 GDP at 1990 prices)(17/19) + (1992 GDP at 2010 prices)(2/19), and so forth. For de la Escosura, this better captures the growth in real GDP over time. For our example, 1990 real GDP in 1990 prices is 1000, and 2010 real GDP in 2010 prices is 22,000, and the growth rate is 2,200%. It essentially splits the difference of the two different benchmarks, preserving some of the rapid growth seen using the 1990 prices.
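
As a sketch, the splice can be written as a simple weighted average, using the weights exactly as given above (a denominator of 19 for the intervening years 1991-2009). The two GDP inputs below are hypothetical placeholders rather than actual Nigerian data.

```python
# Distance-weighted splice of real GDP between the 1990 and 2010 benchmarks.
# Follows the weights in the text: 18/19 and 1/19 for 1991, 17/19 and 2/19 for 1992, etc.

def spliced_gdp(year, gdp_at_1990_prices, gdp_at_2010_prices):
    """Weighted average of the two base-year valuations for an intervening year."""
    w_new = (year - 1990) / 19.0                 # weight on the 2010-price valuation
    return (1.0 - w_new) * gdp_at_1990_prices + w_new * gdp_at_2010_prices

# 1991, with hypothetical valuations under each base year:
print(spliced_gdp(1991, gdp_at_1990_prices=1050.0, gdp_at_2010_prices=2100.0))
# = 1050*(18/19) + 2100*(1/19) ≈ 1105.3
```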

This isn’t necessarily a new concept. Johnson, Larson, Papageorgiou, and Subramanian discuss this issue in their paper on the Penn World Tables. Their suggestion for a chained PWT price index amounts to a similar suggestion.

The big point is that by re-basing you are necessarily screwing with the implied growth rate of real GDP because you are screwing with the value of real GDP in the first year (1990 in our example). If there has been a lot of economic growth and relative prices have changed, then almost certainly the first year will have a higher measured real GDP when we re-base. With a higher initial level of GDP, the growth rate will necessarily be smaller.

If you worry about computing growth rates, then this is an issue you have to worry about a lot, and something like de la Escosura’s method or the Johnson et al. suggestion is what you should do. If you worry about comparing income levels across countries, then this critique is not crucial (although you have other things to worry about).

Age Structure, Experience, Productivity … and France!

Miles Kimball posted a link to a relatively old Scott Sumner post that was discussing a Paul Krugman post from 2011. Which means I am only about 3 years behind, which is good, because I would have estimated I was about 5 years behind.

Anyway, Scott’s post deals with some facts about France. Namely, while GDP per capita in France is only roughly 70% of the U.S. level, GDP per hour worked is essentially equal to that in the U.S. French workers are just as productive per hour as U.S. workers, but just work fewer hours in aggregate.

There are generally two responses to this. The optimistic one: “The French have made a decision to spend their high productivity by taking more vacations and retiring earlier, leading to lower GDP per capita, but probably higher utility.” The pessimistic one: “The French labor system is so mucked up by taxes and regulations that despite being as productive per hour as the U.S., firms do not find it profitable, and workers do not find it desirable, to have more hours provided.”

It’s non-obvious which view is correct. Scott’s post makes two great points, though, about how to think about this. The first is one that I’m not going to deal with here. Comparing France to the U.S. is not an apples to apples comparison. The U.S. is better compared to the EU, or at least Western Europe, as a whole. French productivity looks much worse when compared to New England or the Mid-Atlantic as a region, and only looks good in comparison to the U.S. because the U.S. includes Mississippi and Alabama (which I will arbitrarily call the Sicily and Greece of Europe). It’s a great point.

The second idea that Scott talks about is whether we should be impressed by French output per hour being as high as the U.S. In France, the high youth unemployment rate and early retirement rate mean that the employed population is concentrated in the 30-55 age range. If this age range tends to be particularly productive compared to other age groups, then shouldn’t French output per hour be much higher than in the U.S., where we employ lots of sub-30 and over-55 workers?

Jim Feyrer has a paper from a few years back that looks precisely at the relationship of age structure and measures of productivity. What he finds is that the most productive group of workers are those aged 40-49. A 1% increase in the number of those workers (holding other age groups constant) is associated with about a 0.2% increase in productivity. Ages 50-plus imply lower productivity, but the statistical significance is low. Ages under 39, though, are significantly negative for productivity. Jim uses these relationships to partly explain the productivity slowdown in the US during the 1970s, when the Baby Boomers were filling up the labor force and were still under 40, meaning they were relatively low productivity.

But the results speak to this French question that Scott poses as well. By employing so few under 39-year-olds, France is essentially only using the very high productivity workers in the economy. Thus their GDP per hour is likely inflated by that fact, and their workers are not necessarily just as productive as those in the U.S. What you’d want is some kind of equivalent measure for the U.S. to make this concrete. What is the age-structure-adjusted GDP per hour worked in the U.S. and France? Based on Jim’s results, the U.S. would be ahead in that comparison.

This is related to the well-known result in labor economics that wages rise with labor market experience, but at a decreasing rate. That is, people’s wages tend to rise with experience, but once you hit about 25-30 years of experience (meaning you are most likely somewhere between 40 and 55), the increase gets close to zero. You can see a bunch of these wage/experience relationships in a paper by Lagakos, Moll, Porzio, and Qian, who compare the relationship across countries. One of the features of the data is that in rich countries (like France and the U.S.) the wage/experience relationship is really, really steep when experience is below 10 years. In other words, wages are particularly low for people who have little labor market experience, like young workers aged 18-25.
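
To illustrate that shape, here is a minimal sketch of a standard concave (Mincer-style) log wage profile in experience. The coefficients are made up for illustration and are not estimates from Lagakos, Moll, Porzio, and Qian; the point is just that returns are steep early and flatten out around 25-30 years of experience.

```python
import math

# Illustrative concave wage-experience profile: log(wage) = a + b*exp - c*exp^2.
# Coefficients are hypothetical, chosen so returns are steep early and flatten
# out around 25-30 years of experience.

def wage(experience, a=2.0, b=0.08, c=0.0013):
    """Wage implied by a quadratic-in-experience log wage profile."""
    return math.exp(a + b * experience - c * experience ** 2)

for years in [0, 5, 10, 20, 30]:
    one_more_year = wage(years + 1) / wage(years) - 1    # return to an extra year
    print(f"{years:2d} years of experience: return to the next year = {one_more_year:5.1%}")
# Roughly 8% per year at the start, falling to essentially zero by 30 years.
```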

The U.S. tends to employ a lot more 18-25 year olds as a fraction of our labor force than France. Even prior to 2007, unemployment among those under 25 was roughly 20% in France, and only 10% in the U.S., see here. So the U.S. is employing far more workers that have not yet hit the sweet spot in labor market experience and their wages are very low. On the assumption that wages are some indication of how productive workers are, this means that the U.S. employs proportionately more low-productivity workers. So, again, France’s measured GDP per hour should really be higher than the U.S. level if in fact France and the U.S. have similar productivity levels.

Scott’s point is that we can’t take the equivalence between France’s and the U.S.’s GDP per hour at face value. This doesn’t necessarily mean that the pessimistic view noted above is correct. France could well be making some kind of optimal decision to take lots of leisure time and retirement. But that decision is not one made with the same “budget constraint” as the U.S. – France is very likely not as productive as the U.S.

If you do want to subscribe to the pessimistic viewpoint, then you could argue that not only have French regulations mucked up the labor market, but they have also given the statistical illusion of high productivity. Hence, France is in fact much worse off than the U.S. Even if they fixed their labor market, their GDP per capita would not reach U.S. levels.