Last Links of 2014

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

So now that I’ve yanked myself out of the Nanaimo bar-induced coma of last week to return to the living, I’ve got a stack of links that have been piling up. So here is a last post of 2014 for you to scan through on your phone while you watch Pit Bull’s NYE countdown. (What, you’re *not* watching Pit Bull? And is it Pit Bull or Pitbull?)

Two books I read recently that were quite good:

  1. The First European Revolution by R.I. Moore. This was a fun read. Moore traces the origin of a unique “European” culture to the time around 900-1000 AD. This was after the Carolingian Empire had broken up, and the remaining areas were scrambling in some sense to re-order themselves. Key events are the clear separation of clergy from nobles (enforced through celibacy of clergy) and strict primogeniture (which eliminates contested lordship of warrior clans). Need to process this alongside Mitterauer’s Why Europe? which is also about creation of unique European family structure in place of clans.
  2. Adam Tooze’s The Deluge, about America’s plunge into world leadership during and after WWI. What stood out for me was the ridiculous self-righteousness of Woodrow Wilson. I had this sense that he had been too idealistic to solve the real-world problems at Versailles. But this book makes it even more clear that what we call “idealism” was really an entitled sense of superiority that Protestant white dudes felt over the unwashed masses.

I also read a lot of garbage, but these two books make me sound smart, so there you go.

Assorted links of interest:

  1. Stephen Gordon piles on a willfully stupid article regarding Canada and the “resource curse”. Key quote, and this is one to pin up on your wall: “If God provides you with an abundance of something that the rest of the world values highly and is willing to pay through the nose to obtain, then this is a blessing, not a curse. If the ‘resource curse’ has any meaning, it has to do with politics, not economics.” There is never a time when you *don’t* want to have more of a valuable stock of a natural resource.
  2. More on the “resource curse”. Countries with resources are richer, but grow more slowly. So to the extent that you think being rich but growing slowly is a curse, there you have it. A recent paper by Alexander James says that the slow growth we see in resource exporters is due to slow growth in their resource earnings. Drops in commodity prices make overall growth look bad (think of Russia today). What James shows is that over the last few decades, the non-resource sectors of these resource exporters grew just as fast as everyone else’s. So “resource curse” or no?
  3. Speaking of willfully stupid. Scientific American, of all places, published this econo-crank piece on the digital economy and secular stagnation. The lede is that Twitch sold for $970 million, but employs only 170 people. Presumably this means that economic growth might end. Uh, what? The market value per employee of tech companies has little to do with the rate of economic growth. If anything, the higher this is, the *greater* growth will be, as the incentives to start tech companies are so huge. We can have reasonable discussions about the relative value of different types of labor and how they might fare as innovation occurs, but that is an entirely different discussion from the “end of growth”.
  4. Send LaTeX code in Gmail. You heard me. Go check it out.
  5. A very nice comment by Scott Sumner on Noah Smith’s comment regarding taxes and labor effort. Scott’s good point is that people often try to have it both ways in arguing for European-style social safety nets. When you point out that European GDP per person is only about 70% of the US level, they say that this is because Europeans work fewer hours but enjoy just as much utility. But if you bring up the high taxes in Europe, they claim that Europeans work just as much as US workers. Which is it?

Mean-Reversion in Growth Rates and Convergence

Brad DeLong posted about the recent paper by Pritchett and Summers (PS) on “Asiaphoria” and mean-reversion in growth rates. PS found several things:

  • Growth rates are not persistent. The growth rate over the last 10 years has very little information about the growth rate over the next 10 years. Growth rates “regress to the mean” as PS say.
  • Growth in developing countries tends to take place in bursts of growth and bursts of stagnation. This is different from rich countries where growth variation tends to consist of mild variation around a trend rate.
  • There is no reason to believe that rapidly growing economies today (China and India) will necessarily continue to grow rapidly.

Brad’s response is to take their evidence as a fundamental challenge to the standard Solow model explanation for why growth rates differ.

Lant Pritchett and Larry Summers are now trying to blow this up: to say that just as the neoclassical aggregate production function is a very bad guide to understanding the business cycle, as the generation-old failure of RBC models tells us, so the neoclassical aggregate production function and the Solow growth model built on top of it is a bad guide to issues of growth and development as well.

This is an overreaction. The mean-reversion and “bursts” that PS find are perfectly consistent with a Solow model including shocks.

Let’s start with the finding that regressing decadal growth rates on prior-decadal growth rates gives you a coefficient of something like 0.2-0.3. PS call this mean-reversion. I think it’s an artifact of convergence. Let’s imagine an economy that is following the Solow model precisely. It is very poor in 1960, and growth from 1960-1970 is about 10% per year. By 1970 it is much better off, and so growth from 1970-1980 slows to 5% per year. By 1980 this has gotten the country to steady state, so from 1980-1990 it grows at 2% per year. From 1990-2000 it is still at steady state, so grows at 2% a year again.

Now regress decadal growth rates (5,2,2) on prior-decade growth rates (10,5,2). What do you get? A line with a slope of about 0.397. Why? Because growth rates slow down as you approach steady state. Play with the numbers a little and you can make the slope 0.3 if you want to. The point is that convergence will generate just such a pattern in growth rates.
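That back-of-the-envelope regression is easy to verify. A minimal sketch, using the three data points from the example above (`numpy.polyfit` is just one way to get the OLS slope; the variable names are mine):

```python
# Toy check of the convergence-driven "mean reversion": regress decadal
# growth rates on prior-decade growth rates for the hypothetical economy.
import numpy as np

prior = np.array([10.0, 5.0, 2.0])   # growth in 1960-70, 1970-80, 1980-90
current = np.array([5.0, 2.0, 2.0])  # growth in 1970-80, 1980-90, 1990-2000

slope, intercept = np.polyfit(prior, current, 1)  # degree-1 OLS fit
print(round(slope, 3))  # ~0.398
```

A slope well below one, even though nothing here but deterministic convergence is going on.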

What about the unpredictability of growth rates? PS find that the correlation of growth rates across periods is very low. This is more problematic for convergence, on the face of it. If convergence is true, then growth rates across decades should be tightly correlated. In other words, even if the slope of the toy regression I ran above is less than one, the R-squared should be large.

In my toy example, the country systematically converges to 2% growth, and the R-squared of my little regression is 0.86. PS find much smaller R-squared values in their work. The conclusion is that growth rates in the next decade are very unpredictable. So does this mean that convergence and the Solow model are wrong? No. The reason is that once you allow for any kind of meaningful shocks to GDP per capita, short-run growth rates get very noisy, and you lose track of the convergence. That doesn’t mean it isn’t there; it is just hard to see.

Let me give you a clearer demonstration of what I mean. I’m going to build an economy that strictly obeys convergence, with the growth rate related to the difference between actual GDP per capita and trend GDP per capita.

More formally, let

\displaystyle  y_{t+1} = (1+g)\left[\lambda y^{\ast}_t + (1-\lambda)y_t \right] + \epsilon_{t+1} \ \ \ \ \ (1)

where {g} is the long-run growth rate of potential GDP, {y^{\ast}_t} is potential GDP in year {t}, {y_t} is actual GDP in year {t}, and {\epsilon_{t+1}} are random shocks to GDP in year {t+1}. This formula mechanically captures convergence to trend GDP per capita, but with the additional wrinkle of shocks occurring in any given period that push you either further away or closer to trend. {\lambda} is the convergence parameter, which I said in some recent posts was about 0.02, meaning that 2% of the gap between actual and trend GDP per capita closes every period.

I simulated this over 100 periods, with {g=0.02}, {\lambda=0.02}, {y^{\ast}_0 = 20} and {y_0 = 5}. The country starts well below potential. I then let there be a shock to {y} every period, drawn from a normal with mean 0, variance 0.25. Here are the results of one run of that simulation.
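For concreteness, here is a minimal sketch of that simulation, following equation (1) literally. The parameter values are the ones stated above; the random seed, variable names, and the interpretation of {y} as log GDP per capita are my assumptions, so any one run will differ in its details.

```python
# Simulate y_{t+1} = (1+g)[lam*y*_t + (1-lam)*y_t] + eps_{t+1}.
# Assumes y is (log) GDP per capita; parameters are from the text.
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary
g, lam, T = 0.02, 0.02, 100     # trend growth, convergence rate, periods

y_star = np.empty(T)
y = np.empty(T)
y_star[0], y[0] = 20.0, 5.0     # country starts well below potential
eps = rng.normal(0.0, np.sqrt(0.25), T)  # shocks: mean 0, variance 0.25

for t in range(T - 1):
    y_star[t + 1] = (1 + g) * y_star[t]  # potential grows at trend rate
    y[t + 1] = (1 + g) * (lam * y_star[t] + (1 - lam) * y[t]) + eps[t + 1]

# Rolling 10-period average growth rates, as plotted below
decade_growth = (y[10:] - y[:-10]) / 10
```

With the noise switched off, the gap between {y} and {y^{\ast}} closes mechanically at rate {\lambda}; with the noise on, that convergence is buried under the shocks, which is the whole point.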

First, look at the 10-year growth rates over time. There is a downward trend if you look at it, but this is masked by a lot of noise in the growth rate. You have what look distinctly like two growth booms, about period 25 and period 50.

10-year Growth Rates

Second, look at the correlation of the average growth rate in one “decade” and the average growth rate in the prior “decade”. This is essentially what Pritchett and Summers do. I’ve also included the fitted regression line, so you can see the relationship. There is none. The coefficient on the prior-decade growth rate is 0.05, so pretty severe mean-reversion. The R-squared is something like 0.16. A high growth rate one decade does not indicate high growth the following decade, and the current decadal growth rate provides very little information on growth over the next decade.

Correlation of Growth Rates over time

But this model has mechanical convergence built into it, just with some extra noise dropped on top to make things interesting. And with sufficient noise, things are really interesting. If you looked at this plot, you’d start talking about growth accelerations and growth slowdowns. What happened in period 25 to boost growth? Did this economy democratize? Was there an opening to trade? And what about the bust around period 40? For a poor country, that is low growth. Was there a coup? We see plenty of “bursts” of growth and “bursts” of stagnation (or low growth) here. It’s a function of the noise I built in, not a failure of convergence.

By the way, take a look at the log of output per worker over time. This shows a bumpy but steady upward trend. The volatility of the growth rate doesn’t look as dramatic here.

Log output per worker

If I turned up the variance of the noise term, I’d be able to get even wilder swings in output, and wilder swings in growth rates. In a couple simulations I played with, you get a negative relationship of current growth rates to past growth rates – but in every case there was convergence going on.

Why are growth rates so un-forecastable, as PS find? Because of convergence, the noise doesn’t just cancel out over time. If a country gets a big negative shock today, then the growth rate is going to be low this year. But now the country is particularly far below trend GDP per capita, and so convergence kicks in and makes the growth rate larger than it normally would be. And because convergence works slowly, it will be larger than normal for several periods afterwards. There is a natural tendency for growth rates to be uncorrelated in the presence of shocks, but that is again partly because of convergence, not evidence of its absence. There are lots of reasons that the Solow model could be the wrong way to look at growth. But this isn’t one of them.

I think the issue here is that convergence gets “lost” behind all the noise in the data. Over long periods of time, convergence wins out. [“The arc of history is long, but it bends towards Robert Solow”? Too much?] Growth rates start relatively high and end up asymptoting towards the trend growth rate. But for any small window of time – say 10 years – noise in GDP per capita can swamp the convergence effects. In the growth literature we tend to look at differences of 5 or 10 years to “smooth out” fluctuations. That’s not sufficient if one wants to think about convergence, which operates over much longer time periods.

PS are absolutely right that we cannot simply extrapolate out China and India’s recent growth rates and assume they’ll continue indefinitely. We should, as growth economists, account for the gravitational pull that convergence puts on growth rates as time goes forward. But just like gravity, convergence is a relatively weak force on growth rates. It can be overcome in the short-run by any reasonably-sized shock to GDP per capita.

You don’t think “Oh my God, gravity is broken!” every time you see an airplane overhead. So don’t take abnormal growth rates or uncorrelated growth rates as evidence that convergence isn’t occurring.

Why Did Consumption TFP Stagnate?

I’ve been trying to think more about why consumption-sector TFP flatlined from about 1980 forward. What I mentioned in the last post about this was that the fact that TFP was constant does not imply that technology was constant.

I then speculated that technology in the service sector may not have changed much over the last 30 years, partly explaining the lack of consumption productivity growth. By a lack of change, I mean that the service sector has not found a way to produce more services for a given supply of inputs, and/or produced the same amount of service with a decreasing supply of inputs. Take something that is close to a pure service – a back massage. A one-hour back massage in 1980 is almost identical to a one-hour back massage in 2014. You don’t get twice (or any other multiple) of the massage in 2014 that you got in 1980. And even if the therapist was capable of reducing back tension in 30 minutes rather than 60, you bought a 60-minute massage.

We often buy time when we buy services, not things. And it isn’t so much time as it is attention. And it is very hard to innovate such that you can provide the same amount of attention with fewer inputs (i.e. workers). Because for many services you very specifically want the attention of a specific person for a specific amount of time (the massage). You’d complain to the manager if the therapist tried to massage someone else at the same appointment.

So we don’t have to be surprised that even technology in services may not rise much over 30 years. But there were obviously technological changes in the service sector. As several people brought up to me, inventory management and logistics were dramatically changed by IT. This allows a service firm to operate “leaner”, with a smaller stock of inventory.

But this kind of technological progress need not show up as “technological change” in doing productivity accounting. That is, what we call “technology” when we do productivity accounting is not the only kind of technology there is. The “technology” in productivity accounting is only the ability to produce more goods using the same inputs, and/or produce the same goods using fewer inputs. It doesn’t capture things like a change in the shape of the production function itself, say a shift to using fewer intermediate goods as part of production.

Let’s say a firm has a production function of {Y = AK^{\alpha}L^{\beta}M^{\gamma}} where {A} is technology in the productivity sense, {K} is capital, {L} is labor, and {M} is intermediate goods. Productivity accounting could reveal to us a change in {A}. But what if an innovation in inventory management/logistics means that {\gamma} changes?

If innovation changes the shape of the production function, rather than the level, then our TFP calculations could go anywhere. Here’s an example. Let’s say that in 1980 production is {Y_{80} = A_{80}K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}. Innovation in logistics and inventory management makes the production function in 2014 {Y_{14} = A_{14}K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}.

Total factor productivity in 1980 is calculated as

\displaystyle  TFP_{80} = \frac{Y_{80}}{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}} \ \ \ \ \ (1)

and total factor productivity in 2014 is calculated as

\displaystyle  TFP_{14} = \frac{Y_{14}}{K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}. \ \ \ \ \ (2)

TFP in 2014 relative to 1980 (the growth in TFP) is

\displaystyle  \frac{TFP_{14}}{TFP_{80}} = \frac{Y_{14}}{K_{14}^{.3}L_{14}^{.3}M_{14}^{.4}} \times \frac{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}{Y_{80}} \times \frac{M_{14}^{.2}}{K_{14}^{.1}L_{14}^{.1}} \ \ \ \ \ (3)

which is an unholy mess. The first fraction is TFP in 2014 calculated using the 1980 function. The second fraction is the reciprocal of TFP in 1980, calculated normally. So the first two fractions capture the relative TFP in 2014 to 1980, holding constant the 1980 production function. The last fraction represents the adjustment we have to make because the production function changed.

That last term could literally be anything. Less than one, more than one, more than 100, less than 0.0001. If {K} and {L} rose by a lot while {M} didn’t go up much, this will lower TFP in 2014 relative to 1980. It all depends on the actual units used. If I decide to measure {M} in thousands of units rather than hundreds, I just changed measured TFP in 2014 relative to 1980 by a factor of {10^{0.2-0.4} \approx 0.63}, purely because of the units I picked.
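A small sketch of that units problem. The exponents are the ones from the example above; the input and output quantities are arbitrary made-up numbers, since only the ratio matters:

```python
# Show that a pure change of units for M moves "measured TFP growth"
# once the production-function shape has changed. Quantities are invented.

def tfp(Y, K, L, M, a, b, c):
    """Solow-residual-style TFP: Y / (K^a * L^b * M^c)."""
    return Y / (K**a * L**b * M**c)

# 1980 function uses K^.3 L^.3 M^.4; 2014 function uses K^.4 L^.4 M^.2
K80, L80, M80, Y80 = 100.0, 100.0, 50.0, 300.0
K14, L14, M14, Y14 = 200.0, 150.0, 60.0, 700.0

growth = tfp(Y14, K14, L14, M14, .4, .4, .2) / tfp(Y80, K80, L80, M80, .3, .3, .4)

# Re-measure M in units ten times larger, so its numeric values fall by 10:
growth_rescaled = (tfp(Y14, K14, L14, M14 / 10, .4, .4, .2)
                   / tfp(Y80, K80, L80, M80 / 10, .3, .3, .4))

print(growth_rescaled / growth)  # 10**(-0.2), about 0.63: units alone did that
```

Because the exponent on {M} differs across the two functions, the rescaling does not cancel out of the ratio, which is exactly the apples-to-oranges problem.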

Once the production function changes shape, then comparing TFP levels across time becomes nearly impossible. So in that sense TFP could definitely be “getting it wrong” when measuring service-sector productivity. You’ve got an apples to oranges problem. So if we think that IT innovation really changed the nature of the service-sector production function – meaning that {\alpha}, {\beta}, and/or {\gamma} changed, then TFP isn’t necessarily going to be able to pick that up. It could well be that this looks like flat or even shrinking TFP in the data.

If you’d like, this supports David Beckworth’s notion that consumption TFP “doesn’t pass the smell test”. We’ve got this intuition that the service sector has changed appreciably over the last 30 years, but it doesn’t show up in the TFP measurements. That could be due to this apples to oranges issue, and in fact consumption TFP doesn’t accurately reflect the innovations that occurred.

To an ambitious graduate student: document changes in the revenue shares of intermediates in consumption and/or services over time. Correct TFP calculations for these changes, or at least provide some notion of the size of that fudge factor in the above equation.

What Should I Teach First-years?

I’m doing a little post-mortem on this semester’s first-year graduate macro class. I’m thinking about what I should be teaching in this course. The big meta-question is: what is the right kind of material to teach? I see two perspectives here:

  • Teach the big questions. They need to understand the open issues for macroeconomics, and then can be introduced in 2nd/3rd year to models/techniques that are suited to talking about those issues. A course like this would emphasize intellectual history more than specific mathematical techniques. The problem is that we don’t necessarily screen out people who cannot do the math at a sufficiently advanced level to try and *answer* any interesting questions.
  • Teach the techniques. My course, and most 1st-year courses, lean heavily this way. Once they have some of the “language” down, then we can talk coherently about the big questions with them in the 2nd/3rd year. The problem with this course is that we don’t necessarily screen out people who cannot understand what it means to ask an interesting question. The danger is we get optimization robots, not researchers.

Maybe I should just trust that PhD programs have evolved towards the right solution, and focus on techniques. The cost of having someone incapable of using techniques is so high later on that it must be avoided at all costs. But there is a part of me that feels like techniques are always something that can be learned by force of effort later on. Screening out people who can’t think without being given a specific math problem to do might be more useful.

Of course, if one does want to teach “big questions” to first-years, what are they?

For people who’ve done PhDs, or are doing them now: what do you *wish* you had learned in first-year macro? What would have been useful?

I am far too lazy to try and think of this all by myself, so I’m posting it here in the hopes that smart people will offer up some suggestions either way. Any ideas are appreciated, will be stolen without shame, and will probably sit unused for years as a scribbled note to myself under a pile of other things on my desk.

You’ve Got Potential

A couple things I read this week, along with a frustrating parking-lot conversation with someone who asked me about the economy, got me thinking about how we talk about GDP. I think we can and should be a little more clear in economics about what we mean by “Potential GDP”. Obviously this term comes up a lot in discussions about the current state of most economies, as a lot of the policy discussion depends on how far (if at all) GDP is “below potential”.

There is a difference between “potential GDP” and what I guess we could call “potential potential GDP”. It may be easiest to start with an analogy to get these terms straight. Think of your health. Your regular level of health is your “potential health” – how you feel and how capable you are when you are not explicitly sick. Getting the flu would be like a recession, as you are clearly “below potential health”.

“Potential potential health” is different from your “potential health”. “Potential potential health” is your health if you started working out regularly, stopped eating so many Christmas cookies, skipped that second beer, took the stairs, actually got up from your desk once in a while, did the stretches your therapist suggested, meditated daily, ate more vegetables and less bacon, etc. etc. “Potential potential health” is the best health you could possibly achieve given your genetics. “Potential health” is your non-sick state of health.

In terms of GDP, what do we have?

  • Potential GDP: This is our non-recessionary level of GDP. We spend most of our time in this state, but it is not the best we can do. It is simply the typical level of GDP we have been achieving lately.
  • Potential Potential GDP: This is the best possible level of GDP we could get given our current level of technology (which I would equate with your genetics). It is the GDP we could have if we eliminated market inefficiencies like information issues, and collusion, and regulatory capture, and rent-seeking, and externalities, etc. etc. Take all the Harberger triangles you can find and eliminate them, so to speak.

Why do I think we should distinguish these concepts? Because “potential GDP” gets confused very often with “potential potential GDP”. It is literally impossible to get GDP higher than “potential potential GDP”, and thus it is impossible to sustain a GDP higher than “potential potential GDP” for any period of time. “Potential potential GDP” is the budget constraint for the economy. We cannot possibly produce more than this.

But that is not true about “potential GDP”. It is *not* the short-run, medium-run, or long-run budget constraint for the economy. It is not something structurally fixed. But people treat it as such. They presume that any aberrations away from “potential GDP” must be offset over the long-run by equal and opposite aberrations. Booms (GDP above “potential GDP”) *must* be met by slumps (GDP below “potential GDP”). Similarly, slumps must eventually erase themselves. None of that is true, as “potential GDP” is not the budget constraint for the economy.

This matters for how one thinks about business cycles. We cannot uniquely decompose actual GDP into “potential GDP” and deviations from potential – in other words, into trend and cycle. Doing so presumes that the cyclical components “cancel out” over time. Econometrically, the methods used to separate trend and cycle *require* that the cycles cancel out around the trends. Roger Farmer’s recent post makes this point more clearly than I just did. As he says, by accepting the trend/cycle decomposition of GDP – i.e. by assuming that “potential GDP” is the budget constraint – business cycle economists have implicitly limited themselves to a small class of explanations for fluctuations.

Once you stop thinking of “potential GDP” as being necessarily a supply-side phenomenon, then failures of aggregate demand, or “animal spirits”, or self-fulfilling expectations can move around “potential GDP” as well. This is Farmer’s point about the economy having essentially an infinity of equilibrium GDP levels. We can get stuck at a new, lower level of GDP. There is nothing that necessitates that the economy move “back to potential”, as “potential GDP” is a fluid concept. There is also nothing necessary about recessions as some kind of economic retribution for booms. Stop. “Potential GDP” doesn’t work that way. If we can coordinate on a higher level of GDP, then great. We win – more cookies and Diet Coke for me. That isn’t some kind of cheat. It’s not “living beyond our means”. It’s just us finding a way to shift a little closer to the *real* limit, “potential potential GDP”.

A Few Growth Links

  1. Kelly and O’Grada on sustained economic growth in England before 1700. That growth was slow relative to modern rates, but they argue it was appreciable and associated with increasing human capital and high wages. They give several examples of significant division of labor within industry in this pre-IR era.
  2. Via Chris Blattman. Experimental evidence that participating in trade leads to higher productivity. The modern firm-level trade models generally take the productivity of firms as given, and only those who are already productive find trade worth it. The most enthusiastic reading of this is that simply providing firms with information about markets boosts productivity. The least enthusiastic is that the experimenters simply paid the fixed export costs for the firms.
  3. The UN Least Developed Countries Report 2014 is about Growth with Structural Transformation. One of the key messages is “Economic growth is not enough: it must be accompanied by structural transformation.” Um, name one example of economic growth that did *not* involve structural transformation. Probably more to come from me on this report, but you can all study it at home over the break.
  4. Correlation of pathogen exposure and degree of innovation across primate species. Essentially, being social animals has benefits (cooperation, imitation, and innovation) and costs (infectious diseases). Big question obviously is whether one drove the other.
  5. Gehringer and Prettner on longevity and innovation. A basic scale story. The longer people live, the more incentives they have to invest in capital (physical and human). This expands the scale of the economy, which expands incentives to innovate and earn profits. A relatively optimistic response to population aging through lower mortality rates as opposed to pessimistic worries about stagnation.
  6. Noah Smith (from a long time ago) writes a review of a paper by Acemoglu, Robinson, and Verdier regarding different types of capitalism (think Sweden and the US) and innovation. Upshot is that Sweden’s “soft” capitalism is worse for innovation than US type. More interesting than the particulars of the ARV paper is Noah’s broader comments on writing down models designed to fit existing data. The fact that the data matches your model doesn’t imply your model is right. It means you were smart enough to get the math to work out.
  7. Alex Tabarrok links to CBO report on patenting and TFP growth. Not much of a correlation. Several ways to think about this. First, patents are a very imperfect measure of innovation. Second, TFP is a very imperfect measure of innovation – remember, TFP includes utilization changes, markups, and input changes. It does *not* equal technology, so it is probably not surprising that TFP and patents are not correlated.

I Love the Smell of TFP in the Morning

Very recently John Fernald of the SF Fed released a quarterly series on total factor productivity (TFP) in the US. One of the neat things about his series is that you can look separately at investment (equipment and consumer durables) and consumption (everything else). When you plot these out, you see a really big divergence.

Fernald 2014 TFP

(Note: My graph from Fernald’s data).

Consumption TFP essentially flat-lines from about 1980 until today. At the same time, investment TFP races ahead. Aggregate TFP is a weighted average of the two, and since investment is only about 20% of total spending, this means aggregate TFP exhibits a slight rise (Each series is normalized to 100 in 1947, so you cannot compare absolute levels across sectors like this).
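As a back-of-the-envelope check on that weighting: the 20% investment share is from the text above, but the sectoral growth rates below are invented for illustration, not Fernald’s numbers.

```python
# Aggregate TFP growth as an expenditure-weighted average of sectoral
# TFP growth rates. Sectoral rates are illustrative assumptions.
w_inv = 0.20                  # investment share of total spending
g_cons, g_inv = 0.000, 0.020  # flat consumption TFP, 2%/yr investment TFP

g_agg = (1 - w_inv) * g_cons + w_inv * g_inv
print(f"{g_agg:.1%}")  # 0.4%: a slight aggregate rise despite flat consumption
```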

The flat-line in consumption TFP has generated a few puzzled reactions. David Beckworth in particular said that the consumption series “does not pass the smell test”. He says that Fernald’s measure (and by implication other TFP calculations) must be flawed, and wants a better way to measure productivity.

This is an overreaction, and represents a misunderstanding of what TFP is, and what it measures. The first thing that often happens is that people confuse “labor productivity” with “TFP”. Labor productivity depends on TFP and on other factors of production, like capital. So labor productivity could be rising in the consumption sector even if TFP is not.

But leaving that possible misunderstanding aside, let’s think more carefully about what goes into TFP. As a rough guide, when we measure changes in TFP what we get is the following

Chg. TFP = Chg. Technology + Chg. Utilization + Markups x Chg. Inputs

You can be more technical about things, but this is roughly what you’ll get. What are those three parts?

  • Technology. This is what it sounds like – the ability to produce real goods/services with a given stock of real inputs. If technology improves, this will add to our measure of TFP.
  • Utilization. If the economy, or the sector we are talking about, is using their capital or labor more intensely, then this will show up as increased utilization, and will also pass through to higher TFP. For the given stock of inputs (workers or number of machines) you are getting more output.
  • Markups x Inputs. This term is a tricky one. If you charge price markups over marginal cost, then this is equivalent to saying that you do not produce as much as socially optimal (where P = MC). So if we increase inputs in your sector, this raises output, and gets us closer to the socially optimal point. So when markups exist, higher input use will translate to higher TFP.

The problem that plagues Beckworth and many others is that they are trying to exactly equate TFP with “technology”. That just isn’t the case. Technology can be improving in the consumption goods sector, but this could be offset by changes in utilization, markups, or input use. Flat-lining TFP doesn’t imply that there were no gains in technology.

So what could be going on with utilization and markups/inputs? If you dig through Fernald’s data, you can find that utilization in the consumption sector has fallen over time. The consumption sector uses factors about 97% as intensely as it did in the 1960s. That shows up as lower TFP.

An additional factor that would play into consumption TFP staying flat would be market power, and here I think Beckworth gets it right that whatever is going on in consumption is because of services. The service sector tends to have really low markups over marginal cost. Additionally – and I have nothing but some intuition to back this up – I think innovation in the service sector may typically take the form of lowering markups. Think Wal-Mart. It sells the same crap you can find in 100 other stores. Its entire business model is selling it for less than everyone else. With low and falling markups, the contribution of additional inputs like capital (e.g. various IT investments) and labor would not have added to TFP growth.

So consumption TFP could reasonably have flat-lined. I don’t think this represents any kind of glaring flaw in the methodology. But you have to separate the idea of TFP from the idea of “technology”. Once you do that, flat-lining consumption TFP is reasonable.

On top of all that, the idea that consumption technology has not grown much over time isn’t that hard to believe. Consider this example. We just were forced to buy a new fridge because the old one conked out (long, very annoying story). The fridge is produced by the consumer durables sector. Our fridge is more efficient, quieter, colder, etc. etc. than a fridge from 10 years ago. There have been clear technological advances in fridge-making that I benefit from. If I wanted a fridge equivalent to what we had 10 years ago, I could get that for probably 1/4 of the price of the new fridge. So there is obvious technological change going on in the investment sector, and obvious TFP gains.

But I bought the fridge through Best Buy (as it turns out, another long, annoying story). Best Buy’s value-added, such as it was, is part of “consumption” because it is a service. And is Best Buy any better at selling fridges or screwing up delivery dates than they were ten years ago? Maybe, maybe not. If you told me that a major appliance retailer in 1990 was about as efficient at selling and delivering fridges as one today, I’d believe you. What is the major technological breakthrough in the service industry that I should think of from the last few decades? Those little headsets that people at the Gap wear?

Does that mean I shouldn’t care about slow growth in consumption TFP? No. We’d prefer to have faster TFP growth than slower TFP growth. But you shouldn’t dismiss TFP because it doesn’t match up to the notion in your head. If TFP doesn’t pass the “smell test”, it may be that you’re sniffing the wrong thing.