Mathiness versus Science in Growth Economics

Paul Romer created a bit of a firestorm over the last week or so with his paper and posts regarding “Mathiness in the Theory of Economic Growth”. I finally was able to sit down and think harder about his piece (and several reactions to it).

Before I get to the substance, let me make two caveats. First, Romer has been relentless in continuing to publish blog posts and tweets about his paper, so I'm kind of hopelessly behind at this point. I will probably make some points that someone has made in response, or talk about something that Romer has already brought up elsewhere. If so, the lack of links or attribution is not intentional. I just haven't caught up yet. (See DeLong and Wren-Lewis for two responses.)

Second, one of the papers that Romer discusses is by Bob Lucas and Ben Moll. I know Ben a little, as he gave a talk at UH last year. We recently e-mailed regarding Romer's criticisms of their paper. Some of what I will write below is based on notes that Ben is writing up as a response. That isn't to say that this post is a defense of Lucas and Moll, just a disclaimer.

What’s the issue here? Romer says in his paper:

For the last two decades, growth theory has made no scientific progress toward a consensus. The challenge is how to model the scale effects introduced by non-rival ideas… To accommodate them, many growth theorists have embraced monopolistic competition, but an influential group of traditionalists continues to support price taking with external increasing returns. The question posed here is why the methods of science have failed to resolve the disagreement between these two groups.

One thing we have come to a consensus on is that economic growth is driven by innovation, and not simply accumulating physical or human capital. That innovation, though, involves non-rival ideas. Non-rival ideas (e.g. calculus) can be used by anyone without limiting anyone else’s use of the idea. But modeling a non-rival idea is weird for standard economics, which was built around things that are rival (e.g. a hot dog). In particular, allowing for non-rival ideas in production means that we have increasing returns to scale (if you double all physical inputs *and* the number of ideas then you get more than twice as much output). But if we have increasing returns to scale, why don’t we see growth rates accelerating over time? We should be well on our way to infinite output by now with IRS.
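To make that increasing-returns arithmetic concrete, here is a toy Cobb-Douglas sketch. The functional form and numbers are my own illustration, not anything from Romer's paper: output has constant returns in the rival inputs alone, but once you double the stock of ideas along with them, output more than doubles.

```python
# Toy illustration of increasing returns with non-rival ideas.
# The functional form and numbers are hypothetical, purely illustrative.
# Output has constant returns in the rival inputs (K, L) alone, but
# doubling the stock of ideas A along with them more than doubles output.

def output(A, K, L, alpha=0.3):
    """Cobb-Douglas in rival inputs, scaled by the (non-rival) stock of ideas A."""
    return A * K**alpha * L**(1 - alpha)

y1 = output(A=1.0, K=100.0, L=100.0)
y2 = output(A=2.0, K=200.0, L=200.0)  # double ideas AND rival inputs
print(y2 / y1)  # more than 2: increasing returns to scale
```

The ratio here is 4, not 2: doubling the rival inputs alone would exactly double output, and doubling the non-rival idea stock doubles it again.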

To answer that, Romer pioneered the use of monopolistic competition in describing ideas. Basically, even though ideas are non-rival, they *are* excludable. Give someone market power over an idea (i.e. a patent) and this allows them to earn profits on that idea. Because not everyone can actually use the idea for free, this keeps the growth rate from exploding. The profits that an owner of an idea earns from people using it are what incent people to come up with more ideas. So these market power models explain why IRS doesn't result in infinite output, and why people would bother to innovate in the first place.

An alternative is that ideas are non-rival, and non-excludable. That is, anyone is capable of adopting them immediately and for free. To keep the economy from exploding to infinite output, in these models you have to build in some friction to the free flow of ideas between people. Yes, you can freely adopt any idea you find, but it takes you a while to find the new ideas lying around. What you can retain in models like this is the idea of price-taking competition. No market power is necessary for any agent in the model.

Romer's paper then proposes that the lack of consensus on this is due to one side (the latter, price-taking increasing returns group) making arguments for their side not on the basis of scientific evidence, but on "mathiness". Let's hold off for a moment on that term.

Is this really a big disagreement? In one sense, yes. You certainly still have papers from both camps in top journals, by top economists. In another sense, no. When it comes to doing any kind of empirical work in growth, there is no question that firm-level, market-power models of innovation that grew out of Romer’s work are the standard. The problem with the price-taking models is that they say nothing about firm dynamics (e.g. entry and exit), and these dynamics are a huge part of growth. With price-taking, there isn’t a reason for any specific firm to exist, and so things like entry and exit aren’t well-defined.

Are there math mistakes? The latter part of Romer's paper discusses how several recent examples of price-taking models are sloppy in connecting words and math, and how some of them in fact contain mathematical errors. He discusses, in particular, the Lucas and Moll paper and an issue with taking a double limit. This is something that Ben broached with me, and his explanation of the issue seems reasonable, in the sense that Lucas and Moll do not seem wrong. But between Romer, Lucas, Moll, and me, I am the last person you should ask about this.

More important, from the perspective of this "mathiness" question, the math mistakes themselves are irrelevant. Romer's larger point would be worth discussing even if the math were perfect. Pointing out a flaw doesn't change his argument, and in fact probably detracts from it. He isn't asking Lucas and Moll (or the others) to simply correct their paper, he wants to change the way they think about doing research.

Doesn't everyone make silly assumptions? This was Noah Smith's initial reaction to the mathiness post. The assumptions made by the market power theories are just as impossible to justify as the competition theory. The price-taking theory assumes that people just randomly walk around, bump into each other, and magically new ideas spring into existence. The market power theory assumes that people wander into a lab, and then magically new ideas just spring into existence, perhaps arriving in a Poisson-distributed process to make the math easier. Why is the magical arrival of ideas in the lab less fanciful than the magical arrival of ideas from people meeting each other? In the models, they are both governed by arbitrary statistical processes that bear no resemblance to how research actually works.

At their heart, both of these theories have some kind of arbitrary process involved. But that is not Romer’s point. Every theory is going to make some kind of fanciful abstraction regarding the real world. If it didn’t, it wouldn’t be a model, it would be reality.

Okay, smart guy. What is Romer’s point? I think it is this: math is not science.

Here’s how the science on this would work. Collect data on the growth rate, number of innovations produced, and/or productivity growth. Test whether countries/states/firms that operate with price-taking grow at the same rate as those that operate with market power over ideas. If they do grow at the same rate, then you fail to reject the price-taking theory of innovation. You don’t accept it, you fail to reject it. And then you go on your way scrounging for more data to see if that particular test was just a result of sample noise.

If the price-taking market doesn’t grow or produce innovations, then you reject the price-taking theory. And then you go on your way scrounging for more data to see if that particular test was just a result of sample noise.

Let’s say that you fail to reject the price-taking theory. Now what? Now you start pulling out other predictions from both theories, preferably ones where they differ. And you test those predictions. If you could reject the market power theory predictions, but fail to reject the price-taking predictions, then you’d probably conclude that price-taking is the better explanation of innovative activity (but new data could overturn that). And vice versa. I’m not saying this is easy (how do you identify which economies are price-taking versus market power?), but this is how you’d do it.
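As a toy illustration of that reject/fail-to-reject logic, here is a small permutation test on made-up growth-rate data. The numbers and group labels are entirely hypothetical, chosen only to show the mechanics of the test, not evidence on either theory.

```python
import random
import statistics

# Toy permutation test of "do the two groups grow at the same rate?"
# The growth-rate data below are made up, purely to illustrate the
# reject / fail-to-reject logic -- not real evidence on either theory.

def permutation_pvalue(a, b, reps=5000, seed=1):
    """Share of random relabelings with a mean gap at least as big as observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    count = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed - 1e-9:  # tolerance for floating-point ties
            count += 1
    return count / reps

# hypothetical growth rates (percent) for the two kinds of economies
price_taking = [1.8, 2.1, 2.0, 1.9, 2.2, 2.0]
market_power = [2.0, 1.9, 2.1, 2.2, 1.8, 2.1]
p = permutation_pvalue(price_taking, market_power)
print(p)  # large for this made-up data: fail to reject "same growth rate"
```

A large p-value here means we fail to reject the hypothesis that the two groups grow at the same rate, which, as the post says, is not the same thing as accepting it.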

That's the science. That doesn't mean the math is worthless. You have to have the math – the model – in order to come up with the predictions and hypotheses that we're going to test with the data. Without the model we don't know how to interpret what we see. Without the model, we don't know what tests to run. So a paper like Lucas and Moll is useful in allowing us to distinguish what a price-taking world might look like compared to a market power world.

Here is where we reach the crux of Romer’s argument, to me. Most people, including many inside academia who should know better, assume that math equals science. And rather than remind readers that math is not equal to science, authors often play along with that fiction. They play along by using very complicated math – “mathiness” – making their idea look more “science-ish”. They let people believe their model shows how the world does work, rather than how it might work.

[Update 5/21 6:30pm: That's not a specific indictment of Lucas/Moll, but a re-statement of Romer's argument. And a valid question here is whether we should expect every paper to actively re-state the concept that models are about how the world might work, not how it does work. Moreover, if someone misuses a theory paper, is that the fault of the authors?]

So all these guys are liars? No, I don’t think that Lucas and Moll, for example, are part of some conspiracy. I know Ben is somewhat flummoxed by what their paper did to come under such fire from Romer. They wrote a theoretical paper that strung out the implications of a certain set of assumptions. I think they are perfectly amenable to, and would support, anyone who could come up with good empirical evidence on their model.

How other people use these theories is a different story. Brad DeLong has suggested that the problem with the “mathiness” of these papers is that they allow people to reverse engineer support for their preferred political position. If we have a price-taking competitive economy, then any interference (i.e. taxes or subsidies) will generate deadweight loss. Are we in a price-taking economy? These papers by smart economists show that we *could* be, and if you confuse math with science then you assert that we *are* in a price-taking economy. Hence no interference is justified.

Why can’t we all just get along? Perhaps we should have different models for different situations. Dani Rodrik has made this point before (H/T Israel Arroyo for the link), urging economists to focus on choosing the right model, not trying to shove everything into one grand unified theory. The market power theory is useful in understanding innovation in pharmaceuticals, for example, or innovation in a leading-edge Western country like the U.S.

But the price-taking theory is useful in a situation where the innovation we are talking about is not actually a brand new idea, but rather an existing idea (even an old one) that people have not adopted yet. Think of something like proper fertilizer application among peasant farmers. Some farmers use the fertilizer properly, some don't. But this isn't because some farmers have property rights over the knowledge of how to use fertilizer. How long it takes the good practices to diffuse over the whole population of farmers may well be modeled as a series of interactions between farmers over time, and knowledge gets passed along at each step. Taking the arrival of truly new innovations as exogenous may be a reasonable assumption to make for some developing countries.
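That meetings story can be sketched in a few lines of simulation. This is my own toy illustration of the verbal mechanism, not the Lucas-Moll model: start with a few informed farmers, let everyone meet one random other farmer each period, and the share using the good practice traces out the familiar S-shaped diffusion curve.

```python
import random

# Minimal sketch of idea diffusion through random meetings (illustrative
# only; this is not the Lucas-Moll model, just the verbal story: knowledge
# spreads when a farmer happens to meet someone who already has it).

def diffuse(n_farmers=1000, n_informed=10, periods=30, seed=0):
    rng = random.Random(seed)
    informed = set(range(n_informed))  # a few farmers start with the idea
    shares = []
    for _ in range(periods):
        # each period, every farmer meets one randomly chosen other farmer
        for i in range(n_farmers):
            j = rng.randrange(n_farmers)
            if j in informed:
                informed.add(i)  # the idea passes along at the meeting
        shares.append(len(informed) / n_farmers)
    return shares

path = diffuse()
print(path[0], path[-1])  # adoption rises along an S-shaped path toward 1.0
```

Notice there is no market power anywhere in this sketch: the idea is free to anyone who finds it, and the only friction is how long it takes the meetings to happen.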

If Lucas and Moll had framed their theory this way, would that mitigate the mathiness of the paper? I think it might.

Now what? I think Romer’s paper is right that we are not careful enough about distinguishing math from science in economics. It is easy to slip, and I have no doubt that some people take advantage of this slippage to push their viewpoints.

One thing to insist on (of papers you referee, of speakers, or of one’s own work) is that falsifiable predictions are clearly stated. What could the data show that would make your theory wrong? Force the authors to be clear on how science can be used to evaluate your theory. That isn’t to say that every theoretical paper needs to have an empirical test added to it. I am always in favor of smaller, more concise papers. But the follow-up empirical work for your theory should be obvious. Then hope some grad student doesn’t take the bait and prove you wrong.

34 thoughts on “Mathiness versus Science in Growth Economics”

  1. As usual, your post is a wealth of good reflections. However, I think you are being too kind to Romer. His ‘Mathiness’ post is largely scattered and has only gotten some focus after he published it and others brought clarity to what he was trying to get at. Before then, the piece read like a settling of scores more than anything else. It is astounding that he never mentions Deirdre McCloskey, who had written at length about ‘Mathiness’ and ‘formalism’ in economics. There is an ideological side to Lucas that is annoying, but he has also been cleverly innovative, often using math, in some of his work. Romer brought up a useful point, which you address well, about math versus science, and allowed it to get lost in his ‘monopolistic’ ranting!

    • Fair point – there is a lot of McCloskey in what he is saying. And my guess is that McCloskey would agree regarding the mis-use of words.

  2. Nice post, Dietz. One quibble and one comment.

    Quibble: the “problem” of no firm dynamics in price-taking models isn’t real. It’s easy to model entry and exit with perfect competition.

    Comment: I don’t buy your interpretation of Romer’s point. He is not just saying that math is not science. We all know that math is a language of science. (Also, I don’t like you suggesting that there are academics who should know better, without identifying who you have in mind.) His big beef in my view is that a number of people he’s refereed papers for don’t listen to him. He’s got the right way to do growth…why do people still listen to these traditionalists?

    One more thing. Who the hell cares? Are there significantly different policy implications? Just assuming price-taking behavior does not preclude the possibility of desirable interventions (despite what DeLong claims, but he apparently has never heard of externalities). I have nowhere seen Romer explain *why* the two modeling approaches make a big difference for *every* policy question. As you mention at the end, for many if not most countries, the issue is catching up, and the process of knowledge diffusion is easily modeled in a price-taking setting (I have done so myself here: shameless self-promotion 🙂)

    http://econpapers.repec.org/article/redissued/v_3A1_3Ay_3A1998_3Ai_3A2_3Ap_3A338-370.htm

    • David – fair point on me taking a shot without backing it up.

      I’m struggling with exactly how to define “mathiness” (or how to read Romer’s definition of it). The math ≠ science thing is kind of lurking there, in the sense that sometimes people expect/demand fancy math before they believe a point (here the examples I have in mind are anonymous referees, so naming names is a bit tough).

      But as I’ve been thinking about it, I think Romer’s point (or at least a useful point that I’m inferring from his work) may be that we’re going backwards sometimes. You’ve got a mathematical assumption that makes your model work, and then you layer some verbal explanation on top of this to quasi-justify the assumption. It’s not making a stark approximation about how the world works (the savings rate is exogenous) but something different (I have this cool difference equation, what should I label the variables to make it sound plausible?).

      Not sure that’s exactly it, but it might be closer to Romer’s complaint. And fair point about policy implications – are there fundamentally different implications? Depends on the question, I guess, but I would agree that for many/most questions the implications would be similar.

      • “I have this cool difference equation, what should I label the variables to make it sound plausible?”

        Yes, I’d say that more or less hits the nail on the head. Nevertheless, I still find it ironic that he chooses the label “mathiness” to describe what is essentially a translation failure into common words. Moreover, the fabrication of cool-sounding labels, like “animal spirits,” is not limited to putting flesh around mathematical bones.

  3. Pingback: 10 Friday AM Reads | The Big Picture

  4. What’s funny about all this Internet froth about “mathiness” is that no actual math is ever discussed, let alone shown!

  5. More important, from the perspective of this `mathiness’ question, the math mistakes themselves are irrelevant. Romer’s larger point would be worth discussing even if the math were perfect. Pointing out a flaw doesn’t change his argument, and in fact probably detracts from it. He isn’t asking Lucas and Moll (or the others) to simply correct their paper, he wants to change the way they think about doing research.

    I understood his point to be that these authors don’t even take the math seriously. Re: Lucas & Moll, he concludes that “An argument that takes the math seriously would note that the double limit does not exist and would caution against trying to give an interpretation to the value calculated using one order or the other.” One way of restating his argument is that in economics, math has become a kind of BS in the way Harry Frankfurt formalized the term 10 years ago. Another way of restating his complaint is that math has come to play the same role as cavalry in modern warfare (as of c. 150 years ago): “to give tone to what would otherwise be a mere vulgar brawl!”*

    You say, “Pointing out a flaw doesn’t change his argument, and in fact probably detracts from it. He isn’t asking Lucas and Moll (or the others) to simply correct their paper, he wants to change the way they think about doing research.” He is not pointing out a flaw (as he goes on to say, “Anyone who does math knows that it is distressingly easy to make an oversight like this. It is not a sign of mathiness by the author”). Rather, it is that nobody cares enough to detect and correct the flaw. This is a sign that the logical structure of the argument is central to the aim of neither the researcher nor the community of which s/he is a member.

    An important part of the point he is making is that those who engage in mathiness are writing in bad faith. I think your account misses this.

    To be clear, I am not here arguing for Romer’s position but rather for this interpretation of his argument.

    *By the way, much empirical work, and not only in economics, seems to be conducted in much the same spirit, but statisticalliness doesn’t have quite the same ring.

    • I think you made a great point here. It’s not so much the authors that should be indicted for this, it’s the rest of us for continuing to play along with all these papers. And in Romer’s terminology, we then end up in this bad equilibrium where the optimal choice is to write “mathiness” articles and all agree to accept each other’s work.

      • To the extent that the authors he criticizes developed and imposed these standards and norms on the rest of the profession, they can be faulted. To some extent, it seems that Romer is yearning for a golden age of economics, when mathiness was rare but mathematical models were not, one that I am not aware ever existed.

      • That’s fair. I’m not sure Romer is saying we have to go back to some golden age. He’s asking that we do better. Maybe there were times in the past when we did do better, but he’s not saying we should drop everything after 1956 and start over.

  6. Perhaps you’re looking for this:
    www.project-syndicate.org/commentary/dani-rodrik-on-the-promise-and-peril-of-social-science-models

  7. I’m sympathetic to the issue as it applies to economic models, but I’ve recently been following an extended discussion about the value of patent, copyright and other IP protections, where advocates argue even MORE ardently, citing the shabbiest of evidence.

    Apparently, legal and business discussions aren’t even up to the level of mathiness that Mr. Romer decries.

    • That is something at the other end of the spectrum, so to speak. I think Romer is worried that “mathiness” is leading us towards pure political arguments, which are almost by definition unresolvable.

  8. Pingback: 05/22/15 - Friday Interest-ing Reads -Compound Interest Rocks

  9. Here is a modified version of a comment I left on this elsewhere:

    1) Romer is criticizing mathiness in economics, not math, which he certainly sees as valuable if applied well.

    2) With a model, all that is essentially said is: if this stuff is true, then this other stuff is true, or will happen. The model doesn’t, in and of itself, say this is what will happen in the real world, where the initial ifs, or assumptions, are never completely true, and may be extremely untrue.

    Thus, as I always say, even though I have no name, a model is only as good as its interpretation to reality. The big problem with Prescott and Lucas and gang is that they prove something is true in the model and then assert that it’s also true in the real world, where the fantastical assumptions they make are usually comically, and materially, untrue – but hey, it makes their libertarian/plutocratic ideology look better, and that’s what counts.

    A model can teach important lessons about the real world, but only IF it’s a good model, and IF it’s interpreted to reality intelligently, using what we (often painfully obviously) know about the real world and not ignoring any other knowledge we have.

    One model in particular, which I’ve actually taken apart and completely understand, is Wallace’s 1981 AER. It’s often used as “proof” that quantitative easing can’t have any effect. To even most economists, it’s an impenetrable wall of math, so they can’t tell. But I’m about to put up my big post explaining the intuition, which basically shows that the result of “irrelevance” depends on ridiculous assumptions; given how little those assumptions hold in the real world, quantitative easing, especially large and unconventional, can have a big effect. And this is what the empirical evidence shows.

    But if you understand the model, you can see right away that it’s not going to hold in the real world in a very substantial way. It just depends too much on fantastical assumptions that, as the evidence makes comically obvious, come nowhere close to holding. Given other evidence about the real world, it’s pretty obvious that you can construct a solid logic chain from very benign assumptions to a big effect from quantitative easing – as big as you want, basically, by increasing the size and/or unconventionality of the QE.

    • I don’t know, Richard. If I were you, I’d consider toning it down a bit. The Wallace theorem is a macro version of the Modigliani-Miller theorem. Also similar to the Ricardian equivalence theorem. Everyone knows that the assumptions do not hold exactly. The question is whether they hold approximately. One implication of the theorem is that the effects that people predict from open market operations are not likely to be as strong as they think. The theorems can also be interpreted as explaining why one would expect QE to work. Wallace would be the first one to admit that. So I really do not understand the tone of your essay above.

      • David and Richard. We write down models (like Wallace or RE) in very stark terms. Under *these* assumptions, *this* conclusion follows. So here’s a question for you both. Should we ask model-writers to include a section that describes explicitly why those assumptions might be wrong? (You do see this in some papers, but not universally).

        I guess what I’m wondering is whether model-writers should be expected to foresee the (mis)uses of their model, and try to cut that off? Wallace wrote down a model. Is it his fault if someone out there mis-uses it to claim monetary policy has no effect?

      • David and Richard, a follow up. From paragraph 1 of Wallace’s paper: “In this paper I will show that alternative paths of the government’s portfolio *can be* irrelevant in precisely the sense in which the Modigliani-Miller theorem shows that alternative corporate liability structures are irrelevant.” The italics are in the original.

        Just interesting to note that Wallace was abundantly clear that his theory was a statement about what might hold, and did not claim this had to be true.

      • I’m going with what I see in the blogosphere and economics articles. Scholarly journals are typically reluctant to make claims about the real world, and are conservative in what they claim. And I don’t think I’m wrong to see lots of influential people claiming that QEs can’t work, with appeals to Wallace, or Wallace-type arguments. So this is what I’m talking about. The reality is that I think pretty much the heart of Wallace doesn’t come close to holding.

        Almost no one thinks: I’ll counteract the QE with trades because, as in Wallace, I expect the government to foist the profit or loss from the QE in some part on me through increased/decreased transfers net of taxes, and I know about how much, in each state. And I also know pretty well when the government will completely reverse the QE. And because I know all of this, I act optimally and buy what the government sells at about the current market prices, enough to keep my consumption-investment path about the same as it was before.

        The only thing in Wallace that really looks like a substantial factor is the assumption that at some time the government will reverse the QE. But even there, it’s not that certain even to very savvy investors, when, how much, or maybe even if. And this is to very savvy investors, who are a small minority. And for money managers, there’s “The Limits of Arbitrage”, and so on.

        So, I do see a lot of people claiming that a QE can do little or nothing, and appealing to Wallace. But it looks like, in reality, a QE can have just about as big an effect as the Fed wants, if the fed makes it big and unconventional enough. Keep making it bigger and/or more unconventional, and eventually there will be a big effect.

        As far as Neil Wallace, I don’t blame him. It’s a pretty interesting model, and he doesn’t make strong claims about the real world.

        But with the actual Modigliani-Miller, you always hear that it will be far from holding due to bankruptcy costs, taxes, etc. You don’t hear claims of irrelevance, or anything close to irrelevance. Maybe because it came out in such a different era in economics, the 50s.

        Should authors discuss in the paper how the model might differ from the real world? I think so, when there are clear and important differences. But it’s mostly the role of the economists using the model, who make real-world and policy claims with it, to interpret it intelligently to reality.

      • Richard, you say you are only going by what you read in economics journals (and blogs, but I’ll discount those). So tell me…who are these economists writing in journals that QE doesn’t work because of the Wallace theorem?

        Also, in terms of QE having a big impact empirically, my own reading of the evidence is that it is fairly mixed.

      • In 2014 Bernanke said, “Well, the problem with QE is it works in practice, but it doesn’t work in theory”, at:

        http://homepage.ntu.edu.tw/~yitingli/file/Money%20and%20Banking/williamson_2014.pdf

        Roger Farmer wrote, also in 2014, “A wealth of evidence shows not just that quantitative easing matters, but also that qualitative easing matters. (see for example Krishnamurthy and Vissing-Jorgensen, Hamilton and Wu, Gagnon et al). In other words, QE works in practice but not in theory. Perhaps its time to jettison the theory.”, at:

        http://rogerfarmerblog.blogspot.com/2014/08/why-death-matters-for-central-bank.html

        So, if Ben Bernanke and Roger Farmer are saying this, then it’s a good bet that a substantial group of prominent economists are implying that QE can’t work due to the results of theory. And they seem confident of the evidence that it works in practice. And then there’s the question, What if you just kept increasing the size and/or unconventionality of the QE? Is the empirical evidence mixed that it would never work, no matter how big and/or unconventional?

        Who says Wallace is a big part of the theory saying QE can’t work? I first got it from Stephen Williamson. He was making these kinds of claims, which I found amazing. So I asked, What paper can I read to see the reasoning why? And he said Wallace. So, I got it, and have been spending my five minutes of free time here and there periodically trying to understand it, the intuition behind the math, why in this paper monetary operations can have no effect on any asset prices. And, the big question, does this apply to the real world or just the model.

        But Miles Kimball also makes a big deal about Wallace, and coined the term Wallace neutrality. And here is John Cochrane in an exchange with Noah Smith:

        Noah Smith, August 6, 2014 at 11:17 AM:

        What is the “standard theory” that says QE has no effect on interest rates??

        Reply

        John H. Cochrane, August 6, 2014 at 11:21 AM:

        Modigliani-Miller theorem. Price = expected discounted value of payoffs.

        At: http://johnhcochrane.blogspot.com/2014/08/qe-and-interest-rates.html

        Miller-Modigliani applied to QE looks like a reference to Wallace’s, “A Modigliani-Miller theorem for open-market operations”.

      • Missed pasting in the last part of my reply. Here is the whole thing:

        In 2014 Bernanke said, “Well, the problem with QE is it works in practice, but it doesn’t work in theory”, at:

        http://homepage.ntu.edu.tw/~yitingli/file/Money%20and%20Banking/williamson_2014.pdf

        Roger Farmer wrote, also in 2014, “A wealth of evidence shows not just that quantitative easing matters, but also that qualitative easing matters. (see for example Krishnamurthy and Vissing-Jorgensen, Hamilton and Wu, Gagnon et al). In other words, QE works in practice but not in theory. Perhaps its time to jettison the theory.”, at:

        http://rogerfarmerblog.blogspot.com/2014/08/why-death-matters-for-central-bank.html

        So, if Ben Bernanke and Roger Farmer are saying this, then it’s a good bet that a substantial group of prominent economists are implying that QE can’t work due to the results of theory. And they seem confident of the evidence that it works in practice. And then there’s the question, What if you just kept increasing the size and/or unconventionality of the QE? Is the empirical evidence mixed that it would never work, no matter how big and/or unconventional?

        Who says Wallace is a big part of the theory saying QE can’t work? I first got it from Stephen Williamson. He was making these kinds of claims, which I found amazing. So I asked, What paper can I read to see the reasoning why? And he said Wallace. So, I got it, and have been spending my five minutes of free time here and there periodically trying to understand it, the intuition behind the math, why in this paper monetary operations can have no effect on any asset prices. And, the big question, does this apply to the real world or just the model.

        But Miles Kimball also makes a big deal about Wallace, and coined the term Wallace neutrality. And here is John Cochrane in an exchange with Noah Smith:

        Noah SmithAugust 6, 2014 at 11:17 AM:

        What is the “standard theory” that says QE has no effect on interest rates??


        John H. Cochrane, August 6, 2014 at 11:21 AM:

        Modigliani-Miller theorem. Price = expected discounted value of payoffs.

        At: http://johnhcochrane.blogspot.com/2014/08/qe-and-interest-rates.html

        Modigliani-Miller applied to QE looks like a reference to Wallace’s “A Modigliani-Miller theorem for open-market operations”.

        Now, I am citing blog posts and statements in the press, not academic journals. But I’m not paid lots of money to work in this area; I have an economics background, and have learned the Wallace model well. By analogy: I have no knowledge of climatology, but what prominent climate scientists say in the press and in surveys about man-made global warming, and the profound threat it implies, is still strong evidence.

        If Bernanke and Farmer and Williamson and Cochrane are saying these things, then there probably is a substantial group of economists claiming that QE can’t work due to theory, and Wallace seems to be an important part of that.
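        The Modigliani-Miller intuition referenced above can be illustrated with a toy sketch (my own construction, not Wallace’s model, and the numbers are made up): if every asset is priced as its expected discounted payoff, then an open-market swap of fairly priced assets leaves private-sector wealth unchanged, so the purchase by itself moves nothing.

```python
# Toy illustration of the "price = expected discounted payoff" logic.
# Not Wallace's model -- just the pricing rule Cochrane cites, applied
# to a hypothetical open-market swap of a bond for reserves.

def price(payoffs, probs, discount):
    """Price = expected discounted value of payoffs."""
    return discount * sum(p * x for p, x in zip(probs, payoffs))

discount = 0.95

# A risky bond and riskless reserves with the same expected payoff (100).
bond_price = price([90.0, 110.0], [0.5, 0.5], discount)
reserve_price = price([100.0], [1.0], discount)

# A household holds one bond; the central bank buys it with reserves.
wealth_before = bond_price
wealth_after = reserve_price          # swap executed at fair prices
assert wealth_before == wealth_after  # the swap changes private wealth by zero
```

        Under this pricing rule, the composition of the private sector’s portfolio changes but its value does not, which is the sense in which the theory says the operation is neutral.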

  10. Also:

    “But if we have increasing returns to scale, why don’t we see growth rates accelerating over time? We should be well on our way to infinite output by now with IRS.”

    I’m a bit rushed here, but I don’t see this.

    An idea is non-rival, and, say, provided free to the public by the government on the internet 24/7/365 to any scientist or businessperson anywhere in the world, at zero transactions cost.

    The more people and firms that use it the more benefit, or production, it creates.

    But it just doesn’t create infinite benefit (over a finite period of time) because there is not an infinite number of people or firms, so it hits a limit.

    You double the number of ideas and inputs and you get more than double the output, but this doesn’t mean infinite output is possible, because you’re still limited in your inputs, which you eventually run out of.

    I don’t see the problem. IRTS means that per-unit cost drops as you produce more, as you put in more inputs, so there’s the question: why haven’t costs per unit gone to zero? Easy to explain: either (1) you run out of inputs before cost per unit hits zero, and/or (2) the IRTS only exists over a certain range of output, and then returns become constant or decreasing. I don’t really see the problem here.

    Also, more ideas increase production, but not necessarily the rate of growth in production, or not forever. Again, I don’t see the problem.
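    The scale-effects point being debated here can be made concrete with a small sketch (my own illustration, with a hypothetical Cobb-Douglas parameterization): output is constant-returns in the rival inputs alone, so doubling capital and labor doubles output, but doubling the non-rival idea stock as well more than doubles it.

```python
# Sketch of the scale-effects argument: Y = A * K^alpha * L^(1-alpha),
# where A is a stock of non-rival ideas and K, L are rival inputs.
# Constant returns in (K, L) alone; increasing returns in (A, K, L).

def output(A, K, L, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

Y = output(A=1.0, K=100.0, L=100.0)
Y_rival_doubled = output(A=1.0, K=200.0, L=200.0)  # doubling only K and L
Y_all_doubled = output(A=2.0, K=200.0, L=200.0)    # doubling ideas as well

print(Y_rival_doubled / Y)  # ~2: constant returns in rival inputs
print(Y_all_doubled / Y)    # ~4: increasing returns once ideas are included
```

    The commenter’s point is that the second ratio being above 2 does not by itself deliver infinite output, because the rival inputs K and L cannot be doubled indefinitely.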

  11. Coming from a physics / engineering background, and having analyzed the Solow (1957) paper data in enough detail to be able to show some ugly problems,

    I think the real problem is that most economists forget that the formulas for how output depends on inputs like labor, capital, human capital, and technology, and how it scales,

    are just simple approximations of the “true”, much more complicated dependencies, which are also not the same everywhere, and not strictly constant over time.

    I believe I could explain all those effects with the example of a hairdresser salon (simple and familiar to most of us).

    You need a certain minimum of capital just to start; after that, output rises very rapidly with more capital, then peters out, and becomes negative once depreciation is taken into account, increasing returns to scale reverse, etc.

    Would somebody here be interested in such an example being worked out?

    • There are plenty of examples of models that do these kinds of things. Romer’s complaint isn’t with mathematical complexity, it is with complexity that does not have any economic meaning. One of his examples involved the authors talking about “locations” of economic activity, but this was not a model of spatial patterns, or trade over distance, or anything like that. It was just a mathematical assumption to generate some frictions, and then the authors labelled it “locations” to make it sound like something real.
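    The salon-style production function described in the comment above can be sketched as follows (a hypothetical parameterization of my own, not a calibrated model): zero output below a startup capital threshold, rapid concave growth just above it, and net output that eventually turns negative once depreciation is subtracted.

```python
# Hypothetical "hairdresser salon" production function: a startup capital
# threshold, then concave returns to capital, with net output falling
# once depreciation outpaces the marginal product of capital.

def salon_output(K, K_min=1.0, scale=10.0):
    """Gross output: zero below the startup threshold, then concave in K."""
    if K < K_min:
        return 0.0
    return scale * (K - K_min) ** 0.5

def net_output(K, depreciation=0.5):
    """Gross output minus capital depreciation."""
    return salon_output(K) - depreciation * K

for K in [0.5, 2.0, 10.0, 100.0, 500.0]:
    print(K, round(salon_output(K), 1), round(net_output(K), 1))
# Net output is zero below the threshold, rises with K for a while,
# and turns negative for large K -- returns are not the same everywhere.
```

    This is the sense in which a single constant-elasticity formula is only a local approximation: the same business exhibits different returns to scale over different ranges of capital.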

  12. Pingback: Links 5/25/15 | Mike the Mad Biologist

  13. Pingback: Matematicidad en el trabajo empírico — Foco Económico

  14. Pingback: More on Mathiness | The Growth Economics Blog

  15. Pingback: Empiricness — Foco Económico

  16. Pingback: Mathiness versus Science in Growth Economics | just pillows
