# Trend, Cycles, and Assumptions about Fluctuations


I am kind of veering off of the typical growth topic in this post (although not as badly as the last post). I was playing around trying to write questions for the first-year macro comp next month, and ended up thinking about how we distinguish trend from cycle in GDP. In particular, how do the assumptions built into how we decompose GDP into trend and cycle shape how we conceive of fluctuations?

Let’s take the simplest set-up, where log GDP follows a linear trend $\displaystyle \ln{GDP}_t = \alpha + g t + u_t, \ \ \ \ \ (1)$

and ${g}$ is the growth rate. ${\alpha}$ is the intercept, and it fixes the level of GDP in period 0. What I’m going to say below is all related to this linear trend assumption, but the concepts would follow even if you allowed for some polynomial in ${t}$ on the right-hand side, or if you did some kind of fancy filtering, like Hodrick-Prescott.

How do I do the decomposition? First, I get the estimated values ${\hat{\alpha}}$ and ${\hat{g}}$ from the data by running an ordinary least squares regression. I get the cycle as the deviations of log GDP from the estimated trend using these values. The cyclical deviations are just the residuals of that regression, $\displaystyle \hat{u}_t = \ln{GDP}_t - \hat{\alpha} - \hat{g}t. \ \ \ \ \ (2)$
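As a concrete sketch, here is that decomposition on simulated data. The series, parameter values, and sample size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated annual log GDP: intercept alpha, trend growth g, plus noise.
# All values here are invented for illustration.
T = 50
t = np.arange(T)
alpha, g = 9.0, 0.02
log_gdp = alpha + g * t + rng.normal(0, 0.02, size=T)

# OLS regression of log GDP on a constant and a time trend (equation 1).
X = np.column_stack([np.ones(T), t])
(alpha_hat, g_hat), *_ = np.linalg.lstsq(X, log_gdp, rcond=None)

# Cyclical component: the residuals from the estimated trend (equation 2).
u_hat = log_gdp - alpha_hat - g_hat * t

print(g_hat)        # estimated growth rate, close to the true 0.02
print(u_hat.sum())  # numerically zero, by construction of OLS
```

That last line previews the point of the rest of the post: because the regression includes an intercept, the residuals sum to zero no matter what the data look like.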

Voila. You have a trend/cycle decomposition of GDP. Now that I’ve extracted the trend, I can write down a model to explain business cycles and try to match the information in ${\hat{u}_t}$.

This seems really sensible as an approach. But there is a lot embedded in this procedure, and I think it has consequences (intended or unintended) for how people think about cycles.

The first-order condition for an ordinary least squares regression with an intercept is $\displaystyle \sum_{t=0}^T \hat{u}_t = 0. \ \ \ \ \ (3)$

It always seems like a bit of a throw-away condition when you teach econometrics. Yes, you say, the residuals add up to zero. If that were not true, we could just adjust the intercept term until it was.

But this is not an innocuous assumption from an economic standpoint. If you use this when you try to estimate trend GDP, then you are asserting that deviations of GDP from trend must by necessity cancel out over time. That is, after you have estimated trend GDP and recovered your cyclical component ( ${\hat{u}_t}$), the booms must be exactly offset by busts.

If you build a theory of business cycles around this trend/cycle decomposition, then you are limited to a theory that only admits symmetrical deviations. You are pushed towards using symmetrical, (log)-normally distributed shocks to create cycles, for example.

You are also nudged towards treating “booms” as a pathology similar in every aspect to “busts”, only with the signs reversed. In particular, you are pushed towards the belief that busts are necessary to offset the booms. It suggests that we must “pay for” the excesses of the boom period with lower GDP in some other period.

This is wrong. Statistically, the fact that the best way to fit a line requires deviations to add up to zero does not mean that booms and busts must be perfectly symmetric. The economy does not have a lifetime budget constraint, which is what this symmetry implies. It has a dynamic budget constraint which simply says that real spending in one period has to add up to real production in that period. But the fact that you have a dynamic budget constraint does not mean that you can roll this up into a fixed lifetime budget constraint.

An example is helpful here. I have a dynamic budget constraint relating my real expenditure of calories on a given day to my real supply of calories on that day. The expenditure is all my basic metabolism plus whatever I burn going to the gym. The supply is whatever I eat plus the stock of calories I’ve got stored up (i.e. the flabby parts). The dynamic budget constraint says that the calories I burned today at the gym have to come from somewhere.

But this dynamic budget constraint doesn’t have any implication for how much I can burn over the course of my life. Now that classes are over, I’ve gone to the gym a few extra mornings, and expended more calories than normal. Does that mean I – by necessity – have to exercise less at some point in the future? Of course not. If I have just made a fundamental change in my exercise habits, then I can continue to hit the gym 5 days a week rather than 3 until I die. I can stay “above trend” forever. Similarly, if I decide to say “f*** it” and stop going to the gym, my calorie expenditure will fall below trend. And it can stay there forever. If I don’t exercise today, there is nothing about my dynamic budget constraint that requires me to go exercise tomorrow to make up for it. On this point many an exercise plan has failed.

GDP is like calorie expenditures. Yes, real expenditures must add up to real production in a given period. Great. But that doesn’t mean that GDP must conform to some infinite-period constraint. So if GDP is “above trend” for a while, that does not imply that it must fall “below trend” in order to balance that infinite-period constraint. Similarly, if we fall “below trend” for a while, there is nothing that requires us to necessarily have a boom in order to make up for the lost production. There is no lifetime budget constraint for GDP.

Back to the decomposition. The assumption that $\displaystyle \sum_{t=0}^T \hat{u}_t = 0 \ \ \ \ \ (4)$

says that there is a lifetime budget constraint. It says that all deviations above trend must be precisely and exactly offset by deviations below trend.

But that need not be the right assumption. Remember, our goal as economists is not to minimize the sum of squared residuals here, but to explain economic fluctuations from trend. So why not assume $\displaystyle \sum_{t=0}^T \hat{u}_t = -.03 \times T, \ \ \ \ \ (5)$

which would imply that the typical period is 3% below trend. That is, booms are more than offset over time by busts. You could make the summation even more negative, and get that the economy is continually below trend and never experiences a boom. Why not? Friedman proposed a “plucking model” of fluctuations, where there are occasional negative deviations from trend/potential GDP, but these are not necessarily offset by symmetrical booms.
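A small simulation makes the point. The “pluck” process and every number below are invented for illustration; the only claim is about what OLS does to such a series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "plucking" economy: log GDP sits at potential most of
# the time, with occasional purely negative deviations. The process
# and parameter values are invented for illustration.
T = 200
t = np.arange(T)
potential = 9.0 + 0.02 * t
plucks = -np.abs(rng.normal(0, 0.03, size=T)) * (rng.random(T) < 0.3)
log_gdp = potential + plucks

# OLS forces the fitted residuals to sum to zero, so the estimated
# "trend" runs through the middle of the data, below true potential.
X = np.column_stack([np.ones(T), t])
alpha_hat, g_hat = np.linalg.lstsq(X, log_gdp, rcond=None)[0]
u_ols = log_gdp - alpha_hat - g_hat * t

# The alternative normalization from the post: keep the OLS slope,
# but anchor the trend line so deviations average -3% instead of zero.
u_alt = u_ols - 0.03

print(u_ols.sum())   # numerically zero
print(u_alt.mean())  # -0.03 by construction
```

Nothing in the data forces the zero-sum normalization; it is a choice about where to draw the trend line.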

The point here is that the statistical techniques used to recover measures of cyclical deviations embed an assumption that is not true. The GDP of an economy is not subject to a lifetime budget constraint. Therefore, booms and busts are not required to cancel each other out. What is the right assumption to make? I have no idea. Maybe it’s the -3% assumption I mentioned above. Maybe it’s -2%, or +5%.

I am sure that someone can tell me that “there is literature on this!!” already. Which is great. But that literature is not part of the standard toolkit that I am familiar with for first-year graduate macro. And I have downloaded and read a lot of lecture notes from first-year courses. I’ve never seen this discussed. Happy to see or hear of alternatives.

## 11 thoughts on “Trend, Cycles, and Assumptions about Fluctuations”

1. Ted Sanders on said:

Super good point, and something that I (and probably most people) haven’t thought about before.

2. Ben Kuhn on said:

> If you build a theory of business cycles around this trend/cycle decomposition, then you are limited to a theory that only admits symmetrical deviations. You are pushed towards using symmetrical, (log)-normally distributed shocks to create cycles, for example.

If you want a different distribution of error terms, can’t you simply use a generalized linear model?

> …booms and busts are not required to cancel each other out. What is the right assumption to make? I have no idea. Maybe it’s the -3% assumption I mentioned above. Maybe it’s -2%, or +5%.

This seems like ultimately a semantics argument.

There’s an automatic one-to-one correspondence between models in which ${\sum_t u_t = -0.03T}$ and models in which ${\sum_t u_t = 0}$. It corresponds to replacing ${\alpha}$ with ${\alpha + 0.03}$ and then subtracting 0.03 from each ${u_t}$. Whether you put the 0.03 term in the trend or in the idiosyncratic noise is ultimately not super relevant; the math works out the same (and thus the predictions made by your model will be the same) either way.
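Numerically, the correspondence described here looks like this (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(50)

# Model A: mean-zero deviations around a trend alpha + g*t.
alpha, g = 9.0, 0.02
u = rng.normal(0, 0.02, size=50)
u = u - u.mean()              # enforce sum(u) = 0
gdp_a = alpha + g * t + u

# Model B: move 0.03 from the deviations into the intercept,
# exactly the re-parameterization described above.
alpha_b = alpha + 0.03
u_b = u - 0.03                # deviations now average -0.03
gdp_b = alpha_b + g * t + u_b

# The implied log GDP path is identical in both parameterizations.
print(np.allclose(gdp_a, gdp_b))
```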

• dvollrath on said:

The math is identical, sure. You get the same slope. The point is that you are embedding a very different economic interpretation of the math into sum = 0 versus sum = -0.03T. But no, there is no deep math point here.

• Ben Kuhn on said:

Would the different economic interpretations imply different things for your expectations about future GDP, though? What I’m saying is that the supposedly-different economic interpretations also seem like semantics: your economic claims should also be translatable between sum = 0 and sum = -0.03T.

• dvollrath on said:

The difference is that if you see GDP 3% below trend today, your expectation is that it will be 3% below tomorrow. Not that it will be closer to 0%.

• Ben Kuhn on said:

Sure, but the “trend” that you think it will fall below is higher by exactly 3% than it would be if you were using the other model, so your expectation about the actual GDP number is unchanged. The only thing that’s changed is whether you call it “below trend” or “on trend”.

3. john on said:

I have a question: since GDP is supposedly a unit root process, doesn’t this fact defeat the purpose of using deterministic trends to talk about it?

If that’s the case, how would the “alternative” model have to be formulated in order to account for the asymmetry between boom and bust?

Thank you. Interesting post.

• dvollrath on said:

I think this is partly Roger Farmer’s point. But it isn’t 100% clear whether GDP is a unit root or not.

4. Xenus Uk on said:

To separate the trend from the cycle without imposing distributional assumptions, you need a filter. In economics the standard one is Hodrick-Prescott. Your post was useful in that it clarified why that is the case. There is a good discussion of how the filter works in ch. 2 of Nelson Mark’s international macro book. Basically, you can treat discrete-time data the same way as continuous-time data, and you can treat any waveform as a combination of waveforms of different frequencies. This is basic electrical or audio engineering. The filter just removes the frequencies you don’t want, in this case the zero-frequency trend. No distributional assumptions needed.
http://en.wikipedia.org/wiki/Fourier_analysis

• dvollrath on said:

Right – but the HP has the same issue. Everything is symmetric, so you are assuming that deviations above trend must be exactly offset by deviations below trend. Why does that need to be the case?
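A direct check: a from-scratch HP filter (a minimal sketch; the series is invented for illustration) shows its cycle summing to zero as well:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter from first principles: the trend tau
    minimizes sum((y - tau)^2) + lam * sum((2nd diffs of tau)^2),
    which has the closed-form solution (I + lam * D'D) tau = y."""
    T = len(y)
    D = np.zeros((T - 2, T))          # second-difference operator
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return y - trend, trend           # (cycle, trend)

# Invented log-GDP-like series, for illustration only.
rng = np.random.default_rng(2)
y = 9.0 + 0.02 * np.arange(120) + rng.normal(0, 0.02, size=120)
cycle, _ = hp_filter(y)

# The constant vector is in the null space of D, so summing the
# first-order conditions gives sum(y - trend) = 0: the HP cycle
# sums to zero, just like OLS residuals.
print(cycle.sum())
```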
