What is the Neo-classical Growth Model good for?


This is a long post. It is partly a response to some questions from first-year grad students, and so it can be considered notes for a lecture I might give them at some point. For all that, it isn’t pitched at a high mathematical level. I think you could understand this without having ever done any kind of dynamic optimization work before.

Do savings matter for growth? This is an ill-formed question for several reasons. First, implicitly hiding in that question is the assumption that savings = investment. That doesn’t necessarily have to be true, and the events of the financial crisis in 2008 provide some evidence of this. So let us be more careful and say “Do investment rates matter for growth?”.

What do we mean by growth? You could really be referring to several things. “Do investment rates matter for the level of output per capita?” or “Does the investment rate today matter for the growth rate of output per capita today?” or “Does the investment rate – on average – matter for the trend growth rate of output per capita?”.

Let’s start with the simple cross-sectional relationship of investment rates and the level of GDP per capita. The following figure is from 2010, using Penn World Table 8.1 data. What you can see is that there is a (noisy) positive relationship between the two. Places with higher investment rates tend to be richer. If you used average investment rates over the last two decades or some other smoothing method, you’d get a similar picture. So we have some evidence that more investment is associated with a country being richer.

[Figure: Investment and GDP per capita]

Hsieh and Klenow (2007), following others, noted that investment rates measured in PPP prices varied a lot more than investment rates measured in domestic prices. More simply, the share of domestic income that is spent on investment goods is similar across countries. However, the amount of real investment goods that this spending can buy tends to be very low in poor countries. This doesn’t mean that higher investment rates do not generate higher income per capita, it just means that there is a lot less variation in investment rates than we might think. This implies that the ability of differences in investment rates to explain cross-country differences in income per capita is limited. Hsieh and Klenow go on to show how it probably has more to do with differences in productivity in the investment good sector.

[Figure: Domestic Investment and GDP per capita]

What about the effect of investment rates on growth rates, rather than the level of GDP per capita? Here we dip into the whole world of cross-country growth regressions. See Barro (1991), Mankiw, Romer, and Weil (1992), or any of a few thousand papers from the 1990s. The simplest form is to regress the average growth rate of GDP per capita over some period (say 1960-2000) against initial GDP per capita (in 1960) and the investment rate (averaged over 1960-2000 as well). If you run this regression, then yes, you’ll get a positive coefficient on the investment rate. Places with higher investment rates grow faster – conditional on initial GDP per capita.
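
If it helps to see the mechanics, here is a minimal sketch of that regression in Python. The data frame and column names (`df`, `gdp_pc_1960`, `gdp_pc_2000`, `inv_rate`) are hypothetical stand-ins – any panel like the Penn World Table would do:

```python
# A minimal sketch of the cross-country growth regression described above.
# Hypothetical DataFrame `df`: one row per country, with columns
#   gdp_pc_1960, gdp_pc_2000 (GDP per capita) and inv_rate (average
#   investment share of GDP over 1960-2000).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def growth_regression(df):
    # Average annual growth rate of GDP per capita over 1960-2000
    growth = (np.log(df["gdp_pc_2000"]) - np.log(df["gdp_pc_1960"])) / 40
    # Regressors: initial (log) income and the average investment rate
    X = pd.DataFrame({
        "log_gdp_1960": np.log(df["gdp_pc_1960"]),
        "inv_rate": df["inv_rate"],
    })
    return sm.OLS(growth, sm.add_constant(X)).fit()

# The stylized result: a negative coefficient on log_gdp_1960
# (conditional convergence) and a positive coefficient on inv_rate.
```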

That conditional is crucial. Those regressions are not saying that higher investment rates raise the long-run growth rate of an economy. They are only saying that the level of GDP per capita will be higher along the balanced growth path of places with higher investment rates. The long-run growth rate along that balanced growth path is probably similar across all those countries. If you remember the post I did about convergence and the analogy with people driving down the highway, these regressions tell us that investment rates allow you to move farther up the line of cars, but ultimately you still have to settle in behind the sheriff and go 65.

Why is there a relationship between investment rates and growth? The “Duh” answer to this is that capital matters for producing output. More investment, more capital, more output. What we really want to ask is “Can we describe the dynamics of investment and growth?”.

The workhorse model for these dynamics is often called the Neo-classical growth model. It is often the first dynamic model you learn in graduate school, and it sits at the heart of all your favorite (or despised) DSGE models. The Neo-classical growth model goes by a lot of names. I tend to call it the Ramsey model, and will do so throughout this post. It’s also sometimes referred to as the Ramsey-Cass-Koopmans model, or more bluntly as the Solow-model-with-endogenous-savings.

The Ramsey model is a model of investment and growth. It is not a model of the effect of investment on growth. The effect of investment on growth is baked into the Ramsey (and the Solow, for that matter) by the accumulation equation for capital

\displaystyle \Delta K_{t+1} = I_t - \delta K_t \qquad (1)

and the assumption that output depends on capital, {K_t}. If you raise {I_t}, then you accumulate more capital, and if you accumulate more capital output goes up. This is purely mechanical.

In the Solow model, {I_t} is a fixed fraction of output. In the Ramsey model, the choice of how much of output to invest – how big {I_t} will be – is determined by a forward-looking optimization problem, typically with a representative agent. Hence investment and growth are jointly determined in the Ramsey model, and both are dictated by the current state of the economy, which is captured by the current capital stock, {K_t}.
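
To see the mechanics of equation (1) under the Solow rule, here is a toy simulation. The parameter values are purely illustrative:

```python
# Toy Solow accumulation: equation (1) with I_t a fixed fraction s of output.
alpha, delta, s = 0.35, 0.05, 0.25  # illustrative capital share, depreciation, investment rate
K = 1.0                             # start well below the steady state
for t in range(200):
    Y = K ** alpha                  # output depends on capital
    I = s * Y                       # Solow: investment is a fixed share of output
    K = K + I - delta * K           # equation (1)
# In the long run I = delta*K, so K converges to (s/delta)**(1/(1-alpha))
print(K, (s / delta) ** (1 / (1 - alpha)))
```

The Ramsey version replaces the fixed {s} with a sequence of {I_t} chosen by the optimization problem, but equation (1) itself is unchanged.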

Does the Ramsey provide a good description of the world? More precisely, does it provide a good description of how investment rates are related to GDP per capita or the growth of GDP per capita?

For explaining the cross-country relationship between investment and levels of output per capita, the Ramsey model is no better than the Solow model. The Solow says that countries with higher exogenous investment rates (typically denoted {s}) will be richer. The Ramsey says that countries with more patient preferences (a higher exogenous discount factor {\beta}, or equivalently a lower discount rate {\rho}) will be richer. This is a distinction without a difference. The Solow might be superior here, in that it leaves open any possible reason for investment rates to differ, while the Ramsey forces you into thinking about patience parameters.

For explaining the time-series relationship between investment and output per capita, or investment and growth, the Ramsey model can actually offer us something. The Solow model’s investment rate is fixed by assumption, and hence there is no way (aside from a series of remarkable exogenous coincidences) for the investment rate to change as a country becomes richer or poorer. In contrast, the Ramsey model’s whole raison d’être is to describe how investment rates change as output per worker changes.

What does the Ramsey model predict? That depends on the specific parameter values you choose. You can get the Ramsey model to predict investment rates that rise with output per capita, or fall with it, if you tweak things just right. If you discipline the parameters in the typical way done in macro – a capital share of about 0.3-0.4, an inter-temporal elasticity of substitution of about 1/3 – then you get that savings should fall as output per worker rises. In other words, when a country is below its balanced growth path, it is predicted to save a big fraction of output, and this fraction is predicted to decline as it approaches its balanced growth path. Basically, the “high” elasticity of substitution means people are willing to invest a lot today (i.e. consume little) in return for lower investment rates (i.e. higher consumption) in the future. A country below its balanced growth path that invests a lot will grow quickly, and hence the Ramsey predicts that investment rates and growth rates should track each other.
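
If you want to see that prediction come out of the model, here is a rough sketch of the standard shooting method for a continuous-time Ramsey model. Everything here is illustrative – a toy parameterization with {\theta = 3} (an inter-temporal elasticity of substitution of 1/3), not a calibration to any country:

```python
# Sketch: the Ramsey saddle path via bisection ("shooting") on initial consumption.
# Utility c^(1-theta)/(1-theta), production y = k^alpha, no population or tech growth.
alpha, delta, rho, theta = 0.35, 0.05, 0.02, 3.0
k_star = (alpha / (rho + delta)) ** (1 / (1 - alpha))  # steady-state capital

def simulate(c0, k0, T=2000, dt=0.05):
    """Euler-integrate k' = k^a - c - d*k and c'/c = (a*k^(a-1) - d - rho)/theta."""
    k, c, path = k0, c0, []
    for _ in range(T):
        path.append((k, c))
        dk = k ** alpha - c - delta * k
        dc = c * (alpha * k ** (alpha - 1) - delta - rho) / theta
        k, c = k + dk * dt, c + dc * dt
        if k <= 0:
            return path, "too_high"   # consumption ate the capital stock
        if k >= k_star:
            return path, "too_low"    # under-consuming: k overshoots k*
    return path, "ok"

k0 = 0.2 * k_star              # start well below the balanced growth path
lo, hi = 0.0, k0 ** alpha      # bracket initial consumption between 0 and output
for _ in range(60):            # bisect toward the saddle path
    c0 = 0.5 * (lo + hi)
    _, flag = simulate(c0, k0)
    if flag == "too_high":
        hi = c0
    else:
        lo = c0

path, _ = simulate(0.5 * (lo + hi), k0)
for k, c in path[:: max(1, len(path) // 8)]:
    print(f"k/k* = {k / k_star:5.2f}   saving rate = {1 - c / k ** alpha:5.2f}")
```

With these parameters the printed saving rate should start well above its steady-state value and decline as {k} approaches {k^*}, which is the transition behavior described above.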

Does this prediction hold up? In some cases, it looks great. Here’s a plot of the investment rate in Germany from just after WWII to the present, along with the forward-looking 10-year average growth rate of output per capita. This looks qualitatively like exactly what the Ramsey model predicts.

[Figure: Germany]

So everything is great, right? Not quite. There are a number of other examples where the pattern looks nothing like what the Ramsey predicts. Here’s the same plot for Japan: from 1950 to 1980 it works exactly opposite to what the Ramsey says, though perhaps it works from 1980 forward?

[Figure: Japan]

Or consider Korea, where the spike in growth rates precedes the spike in investment rates, and then once growth starts to slow down after 1980 as Korea approaches the balanced growth path, the investment rate stays level. Or India, where there’s been a spike in investment rates recently, well after growth started to accelerate.

[Figure: Korea]

[Figure: India]

This doesn’t necessarily mean the Ramsey is wrong, but to explain these patterns we need to start delving into remarkable exogenous coincidences again. Productivity shocks that happen at just the right time, or unexplained shifts in patience parameters at exactly the right moment.

There’s another issue, though, which is that even the Germany figure doesn’t match the prediction of the Ramsey model. Under the typical parameters, the Ramsey says that both investment and the growth rate should have declined much faster than they actually did. Another way of saying this is that convergence speeds are predicted to be incredibly high in the Ramsey model. For Germany, given the starting point in 1950, the growth rate should have already dropped to about 2.5% and the investment rate to 20% by about 1965.

I’ve mentioned before that, empirically, the convergence rate is about 2% per year, meaning that 2% of the gap between actual GDP and the balanced growth path closes each year. We find that estimate coming out of all sorts of settings. The Ramsey model predicts convergence rates of up to 30-40% per year for countries like Germany in 1950. That’s not even in the realm of the empirically plausible.
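
Just to put those two rates in comparable units, convert them to half-lives (this is my arithmetic, not a figure from any of the papers). If the log income gap decays at rate {\lambda}, then

\displaystyle \text{gap}_t = \text{gap}_0 \, e^{-\lambda t} \quad \Rightarrow \quad t_{1/2} = \frac{\ln 2}{\lambda}.

At the empirical {\lambda = 0.02}, half the gap closes in {\ln 2 / 0.02 \approx 35} years. At a Ramsey-predicted {\lambda = 0.35}, half the gap closes in about 2 years, so Germany’s convergence would have been essentially finished within a decade of 1950.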

A more thorough examination of the correlation between investment rates and growth can be found in the Attanasio, Picci, and Scorcu (2000) paper, which builds on Carroll and Weil (1994). Both find that, if anything, higher investment rates Granger-cause lower growth rates, and that higher growth rates Granger-cause higher investment rates. In other words, a shock to growth is likely to be followed by higher investment rates in the future – which is backwards from what is baked into the Ramsey model. Second, shocks to investment rates are likely to be followed by lower growth rates – you could perhaps interpret this as higher investment causing convergence, and hence a falling growth rate, but it is hard to reconcile with what we’d expect to see in the Ramsey model.

So… what exactly is the Ramsey model good for? Off the shelf, the Ramsey is a poor descriptor of the time-series evidence, and is needlessly complex for explaining cross-sectional relationships. But in seeing exactly how it fails, you can understand what might need to be added to match the time-series data.

To “slow down” the convergence speed in the Ramsey model, for instance, you can drop the assumption that all capital is perfectly substitutable across firms. I did this with Sebnem Kalemli-Ozcan and Indrit Hoxha, and once you lower the elasticity of substitution to 3 or 4, then predicted convergence speeds in the Ramsey become reasonable.

To match the Granger-causation of growth to investment rates, you can break the assumption that preferences are time-separable. This is what Carroll, Overland, and Weil (2000) do by adding habit formation into the Ramsey. Basically, your marginal utility depends on how much you consume today and how much you consumed yesterday. Because of this, your response to shocks is relatively slow, and so when growth ramps up (due to a productivity shock, for example) the savings rate doesn’t instantly jump up, but takes a while to respond.
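
To be concrete, here is one simple form habit formation can take (my notation, in the spirit of that literature rather than their exact specification). Utility depends on consumption relative to a slow-moving habit stock {h_t}:

\displaystyle u(c_t, h_t) = \frac{(c_t/h_t^{\gamma})^{1-\theta}}{1-\theta}, \qquad h_{t+1} = (1-\lambda)h_t + \lambda c_t.

With {\gamma = 0} you are back in the standard time-separable Ramsey. With {\gamma > 0}, marginal utility depends on past consumption through {h_t}, which only catches up to current consumption gradually – and that is what makes the response of savings to a growth shock sluggish rather than instantaneous.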

To capture the time-series experiences of places like India or Japan, you could appeal to some kind of exogenous coincidence. Just because coincidences are implausible doesn’t mean they can’t occur. S*** happens, and so the Ramsey could be totally right, just masked by a series of crazy shocks to productivity, preferences, institutions, etc. The take-away from this could be that you should stop worrying about trying to explain growth as something that has a common process in all places, and focus on explaining the growth of a specific place in detail.

You can also remember that the aggregate data we are looking at is exactly that: aggregate. It is the summation over millions of little individual decisions, and so the representative agent in the Ramsey model is probably not a good approximation. But saying that you should allow for heterogeneity in the model is a lot like saying that you should study each country individually, as the heterogeneity is going to be unique to each one.

Study Slavery to Study the Impact of Robots on Workers


I started writing a post that was collecting several recent pieces about robots/technology and their impact on workers. Let me quickly post those:

As I started trying to sketch out what to write, I realized that I was just barfing up a big pile of confirmation bias. I read the above articles and was able to convince myself that they supported my general “robo-optimist” outlook. That outlook is coincident with my opinion that our fetish for manufacturing jobs and output is just that, a fetish.

There are, of course, lots of “robo-pessimists” out there who feel that technology is going to be very bad for labor. Low wages, or even mass unemployment, are possible consequences. I’ve gone back and forth a little in the past with Richard Serlin, who generally falls into the robo-pessimist camp.

Rather than writing a post that says, essentially, “Those posts agree with my priors”, let me try to switch gears. What would constitute a good argument for robo-pessimism? In other words, what kind of argument would make me change my mind about this?

A common analogy used by robo-pessimists is the horse. As engines, and particularly internal combustion engines, came into use, horses were made obsolete. The absolute population of horses has declined dramatically over the last 100 years because they became costly relative to using an engine to drive your cart around. When robots can do what humans do, the analogy goes, humans will become costly relative to robots, and so humans will become completely unemployable.

I don’t think this is a particularly good argument, mainly because horses have no ability to innovate for themselves. No horse ever looked around and said, “You know, I feel like there is more I could do.” Horses didn’t offer to become drivers of the new horseless carriages, nor did they think to learn how to repair or build engines so that they had something to do besides pull wagons around.

But people can innovate and invent entirely new jobs for themselves. If you tell me that people won’t possibly be able to innovate new jobs when the robots arrive, then I think you have a ridiculously low opinion of people. And it isn’t necessary that everyone innovates, just a few who invent new jobs and professions that we cannot possibly think of today. If we could, we would have invented them already.

In place of the horse analogy, let me suggest a line of reasoning that, to me, has a better chance of producing a convincing argument for robo-pessimism: slavery. You want examples of free people being put out of work by slaves. I’m not talking about the effect of being enslaved, which is clearly negative. I want to know the outcomes of those who remained free (small-time white farmers in the South) when workers were introduced who could exactly mimic the skills of the free workers (slaves). These are free humans being replaced by the equivalent of a living robot, while those free workers can still innovate new work for themselves. The story thus doesn’t run into the problem that the horse analogy has.

In addition, the slavery comparison doesn’t require us to think about capital-labor complementarity, and it doesn’t rely on the introduction of a new technology that both eliminated some tasks (hand weaving) but created others (monitoring weaving machines). Slaves effectively are robots, for the purposes of the economic discussion here. They can perfectly replace free labor, but do not necessarily create any kind of other work for free labor in their wake.

I haven’t read it in a long time, but I think Gavin Wright’s Old South, New South is where I’d start. Slavery ensured that wages were kept extremely low for free workers in the South. This may have created conditions that encouraged new businesses or industries to locate there, and this is in fact what Wright suggests happened in the early 20th century after slavery was abolished. But while slavery was in place, industry did not develop in the low wage South.

One reason for this is that slave-owners were “labor-lords”, not “land-lords” (Wright’s terms). They had no incentive to build up the value of land, as they could simply relocate further west with their major asset and start over. Thus the improvements that helped make the North more prosperous for free workers – railroads, schools, dams, ports, etc. – were not built. Without those improvements, industry could not or would not relocate to the South.

So to use slaves as an analogy for robots and their effect on free workers, the cause for robo-pessimism is not that robots leave free workers with nothing to do, but that they free the robot-owners from any incentive to invest. Since robot-owners can move their capital to new locations, what incentive do they have to pay for the equivalent of railroads or ports or other infrastructure?

In addition, despite having decades to come up with something else to do, the free workers of the South never created a new set of jobs or activities that allowed them to keep up with the North. They remained poor farmers, and did not (could not?) coordinate to build the infrastructure themselves. They were left to essentially scrape out a living as subsistence farmers without many connections to the broader economy. What you’d want to do is present me with evidence that free white living standards were pushed down by the introduction of slavery. Perhaps wages in areas of the South prior to the arrival of cotton versus after?

Those last few paragraphs are the result of maybe thirty minutes of thinking about robots from this perspective. If you wanted to make a really compelling argument for robo-pessimism, I think taking this analogy and running with it would be the way to go. It’s one avenue by which I think I could be convinced to switch from vaguely robo-optimistic to robo-pessimistic.

So consider this post a bleg. Are there good historical examples of the introduction of slaves into economies where we can observe the effect on the non-slave population?

As a last aside, I think this post is instantly #1 for “Titles that tell you everything important about a post”.

Why Information Industrial Classification Diversity Grows


I read Cesar Hidalgo’s Why Information Grows. Going into it, I really wanted to like it. I really wanted it to give me some insight into one of those fundamental growth questions: what drives the speed of knowledge acquisition?

This is not that book. The beginning is fun for describing basic information theory and its relationship to entropy. It has some neat examples of how we end up “saving” information from entropy by encoding it in solids like cars, houses, or even the organized binary digits on my computer. But when it comes to translating this into an explanation for why economies grow, there is a breathtaking amount of hand-waving. I could feel the breeze whistling out of my Kindle as I read it.

In the end, Hidalgo says some places are rich because they have complex production structures, meaning they produce goods or services that require a large number of people or firms to interact in some kind of network. These networks embody the “knowledge and knowhow” of the economy. I haven’t quite decided whether this is tautological, but it’s close.

He attempts to offer evidence in favor of his claims by appealing to the data he built with Ricardo Hausmann. This uses detailed export data to build up a measure of how complex (read: diverse) a country’s set of exports is.

There are a few issues with trying to use this complexity data to support any explanation of economic growth, much less one rooted in information theory.

1. The measures of complexity are built on export data. That’s because you can get data on exports that is very fine-grained in terms of products, “6-digit” for those in the business. 6-digit classification means you’ve got things like 312120 – Breweries, or 424810 – Beer and Ale Wholesalers. Export data is also great because you can get it bilaterally for a lot of countries. You have data on how much beer Belgium exports to the US, and how much beer the US exports to Belgium.

Export data is available at this level of detail because the transactions get funneled through customs procedures, usually at a limited number of geographic points (i.e. ports), that let you track them closely. You cannot get similar data for an entire economy because there is no equivalent of customs houses tracking the minutiae of all your day-to-day purchases. Yes, conceptually that data is out there in Target’s or Whole Foods’ computers, but we do not track domestic transactions at that level centrally. Which leads to the first issue: just because you don’t export a diverse set of products doesn’t mean you don’t have a complex economy. The vast, vast, vast majority of economic transactions are domestic-to-domestic, even in countries with large export sectors. So while I buy that an index of complexity built on export data is highly correlated with actual complexity, it doesn’t necessarily measure total complexity.

2. What is more of a problem is that the measure of complexity is built on the given NAICS system of coding products. As I’ve mentioned before, these kinds of industrial classifications are skewed towards tracking manufactured goods, and have not caught up to the complexity of services and the like. The 6-digit code 541511 is “Custom Computer Programming Services”. That is essentially all types of software work: web design, sys admins, app designers, legacy COBOL programmers, etc.

In comparison, code 311111 is “Dog and Cat Food Manufacturing”, and 311119 is “Other Animal Food Manufacturing”, like rabbit, bird, and fish food. So we are careful to track the difference in economic activity based on whether processed lumps of food goo are served to dogs as opposed to bunnies. But we do not distinguish someone designing Flappy Bird from someone doing back-end server maintenance.

This means that your level of complexity depends simply on how detailed NAICS gets. Take two towns. In one, they have a single factory that produces both dog and rabbit food, and they export both. This town looks complex because it exports in two separate NAICS categories. In a second town, they have several firms that do outsourcing for major companies, with different firms doing web design, server maintenance, custom C++ programming, and say three or four other activities. Because all those programming activities fall under a single NAICS category, this second town appears to have a less complex economy. The “knowledge and knowhow” in the second town is likely larger, but NAICS cannot capture this.
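
You can make the two-town example concrete with a toy calculation. Here I use Shannon entropy as a stand-in diversity measure – the actual complexity index is fancier, but it inherits the same dependence on the classification scheme. The fine-grained software labels at the end are hypothetical, which is exactly the point: NAICS doesn’t have them:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of the distribution of classification labels."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum(c / n * log2(n / c) for c in counts.values())

# Town 1: one factory, two NAICS codes (dog/cat food vs. other animal food)
town1 = ["311111", "311119"]
# Town 2: six distinct software activities, all coded as 541511
town2 = ["541511"] * 6

print(entropy(town1))  # 1.0 bit -> looks "diverse"
print(entropy(town2))  # 0.0 bits -> looks like a one-product town

# Relabel town 2 with hypothetical finer categories, one per activity:
town2_fine = ["web", "servers", "c++", "apps", "qa", "embedded"]
print(entropy(town2_fine))  # log2(6) = 2.58 bits -> now the "complex" one
```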

This is like saying that bacteria are less genetically diverse than eukaryotes because bacteria are all in one kingdom, while we happen to classify eukaryotes into five: protozoa, algae, plants, fungi, and animals. But bacteria are known to be more genetically diverse across species than eukaryotes. If you focus on the arbitrary divisions, things can look more or less diverse based solely on your choice of those divisions.

3. Leave all the complaints about the measure of complexity aside. Hidalgo tries to show how important complexity is for explaining economic growth by… running a growth regression. He doesn’t call it that. He plots GDP per capita against economic complexity in 1985, and there is a positive relationship. He then says that countries with GDP per capita below the level expected given their complexity in 1985 grew faster from 1985 to 2000, and that this justifies his theory. But that is just a growth regression, only without any explicit coefficient estimate or standard error.

Several issues here. First, he doesn’t bother to mention whether this relationship is statistically significant or not. Second, we’ve spent twenty years in growth economics complaining about exactly these kinds of regressions because they are completely unidentified. He doesn’t even bother to try and control for any of the obvious omitted variables, like savings rates or population growth rates. Most likely, complexity is just another entry on the long list of things that are correlated with high incomes – institutions, savings, a lack of corruption, etc. – without our having any idea whether they are causal or not.

Somewhere in there, perhaps invisible behind the blur of waving hands, is some kind of insight into how information expands and builds upon itself. That would have been an interesting contribution to our thinking on growth. But the book, as it is, fails to provide it.

Hip-Hop History of Macro


Do you find yourself a little lost trying to keep up with the history of macro posts that Romer (here, here, here), DeLong (here, here), and others have been posting? What did Lucas do, or not do, to change macro? Was it that big of a change? What is this saltwater versus freshwater stuff?

I’m here to help. The history of macro closely parallels the history of hip-hop, even down to the importance of the late 1970’s and early 80’s. Let me help you keep track of what is going on.

  • Solow and Tobin are Otis Redding and Sam Cooke.
  • Milton Friedman is James Brown.
  • Lucas and Sargent are Slick Rick and Doug E. Fresh. Their 1978 paper is the “La Di Da Di” of macro papers. Everyone samples from it.
  • Ed Prescott is Public Enemy’s Chuck D, which makes Finn Kydland Flavor Flav. Their 1982 “Time to Build” is the It Takes a Nation of Millions to Hold Us Back of macro papers.
  • Robert Mundell (Run?) and Stan Fischer (Daryl?) are Run DMC.
  • Minneapolis is South Central LA.
  • The collection of economists at the Minneapolis Fed and U. of Minn. are N.W.A. (Play at home! Try to link specific economists to Dre, Eazy-E, Ice Cube, and MC Ren.)
  • New Keynesians are East Coast rap. Mike Woodford is Nas, Larry Summers is the Notorious B.I.G., and Blanchard, Mankiw, and Romer are all in the Wu-Tang Clan.

I spent way too much time thinking about this during a long car drive. But once you start, it’s hard to stop. There are so many unanswered questions. Who are the Eric B. and Rakim of real business cycles? What is the academic equivalent of the Tupac/Biggie feud, and who is the Suge Knight of macroeconomists? Is Bob Hall like Michael Jackson (not the weird stuff, the massively talented stuff) relative to hip-hop?

I had to work hard to stop myself from trying to do this with other fields. But in case you were wondering, Paul Romer is Kurt Cobain and Aghion and Howitt are Pearl Jam. I may have also convinced myself that Acemoglu is Beyonce and Raj Chetty is Taylor Swift.

Dumb Luck in Historical Development


I took advantage of a week of vacation to read through some books that had been piling up (queuing up? I’m not sure of the right idiom for a Kindle). One was Philip Hoffman’s Why Did Europe Conquer the World? This, on its face, is another entry in a long line of global history books arguing that Western European economic and colonial dominance is, at its heart, due to some rather specific characteristic: disease tolerance, or cows, or a knobbly coastline, etc. etc. But Hoffman’s work is different from these in a crucial respect that I’ll try to explain.

Hoffman’s entry attributes Europe’s dominance to gunpowder technology, and the ability to use it very efficiently. I don’t know that there is anything terribly controversial about saying European nations had an advantage in firepower by 1600, a distinct advantage by 1700, and a huge advantage by 1800. You could probably quibble with exact dates, or with the right statistics to use to measure firepower, but I’m not interested in that kind of argument (and I would certainly lose that argument to Philip Hoffman).

I’m more interested in what Hoffman does with this historical set of facts. In the book, he develops a model (summarized in words in the text, mathematically derived in an appendix) that attempts to explain why Europe got such a lead. It is a model of learning-by-doing in gunpowder technology, but where learning-by-doing only occurs if you actually fight. Hence, in Hoffman’s model there are four conditions for rapid development of gunpowder technology: frequent war, lots of resources expended on those wars, use of gunpowder specifically in those wars, and few barriers to the adoption of new technology. The model is fine as it is. I’m not sure you need all the math, as the general ideas are clearly explained, and it isn’t like he’s after some kind of strict numerical simulation.

But the model is general, in that it applies at all times and in all places, and there is no deep attribute that differs for Europe. Hoffman instead explains that Europe happened to meet the four conditions because of contingent historical events. In other words, Europe randomly found itself with a political setting that encouraged many high-stakes wars that involved gunpowder. Its lead was not due to some unique European characteristic, but rather was the luck of the draw.

An acknowledgement like this, of the contingency or luck involved in historical development, is very rare in explanations of historical development. It is what Hoffman does very differently than most. The mistake that other global history books often make is to assume that because Europe was uniquely able to dominate the world economically and colonially, this must have a unique, causal explanation. And that is not true. It could all be a series of coincidences.

The right null hypothesis for this kind of work on historical development has to be “it was all pure dumb luck”. That doesn’t mean it was pure dumb luck, just that this should be the benchmark against which you evaluate the historical evidence. Hoffman, without saying so explicitly, does this kind of hypothesis test.

Here’s what I mean. Let’s pretend it is 1492, and we put 50 world leaders (Henry VII of England, Isabella, Charles VIII of France, the current Ming emperor, the Mamluk Sultan, etc.) in a room, each with a coin. Heads means their gunpowder technology gets better. Tails means it stays the same. They start flipping the coins, and after say 200 flips (years?) we see who has the most heads, and hence the most powerful gunpowder technology.

Yes, the expected value of heads is 100 for everyone, and yes, the average value across the 50 rulers is going to be about 100. But someone is going to have the most, and someone is going to have the least. I ran this a few times on the computer, and you always end up with a leader having about 114 heads, and a loser having about 86. Pure chance predicts that there will be a “gunpowder gap” (to paraphrase Dr. Strangelove). That’s the null hypothesis at work.
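
The coin-flip experiment is a few lines of Python, if you want to replicate it at home (your exact numbers will wobble from run to run, which is rather the point):

```python
import numpy as np

rng = np.random.default_rng()
rulers, flips, trials = 50, 200, 10_000

# Each of 50 rulers flips a fair coin 200 times; heads = gunpowder improves.
heads = rng.binomial(flips, 0.5, size=(trials, rulers))

print("average best ruler: ", heads.max(axis=1).mean())  # about 114-116 heads
print("average worst ruler:", heads.min(axis=1).mean())  # about 84-86 heads
```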

Hoffman essentially says that this is what happened. Europe and the rest of the world were playing by the same rules, with the same underlying characteristics, but Europe came up heads a few times more often than anyone else. If we could repeat world history over and over, Europe would end up being colonized as much as becoming the colonizer.

If you want to argue for some kind of unique European characteristic that systematically led to their lead in firepower, then you have to first argue that Europe’s lead in firepower was larger than we could expect to arise by pure chance. You have to first reject the null. That is, you would need to convince us that some European countries had hit heads 140 or 150 times. The odds of this are so preposterous (around 0.000000000000137) that we can reject the null, and hence there must be some systematic advantage for Europe. Only then should you start speculating about what the systematic advantage for Europe was.
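
And if you want to check that tail probability yourself, the exact binomial tail is a one-liner. I haven’t re-derived the precise 0.000000000000137 figure, so take the code as the method rather than a verification of that exact number:

```python
from scipy.stats import binom

# P(at least 140 heads) and P(at least 150 heads) out of 200 fair flips
print(binom.sf(139, 200, 0.5))  # vanishingly small
print(binom.sf(149, 200, 0.5))  # smaller still
```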

Most global history books or theories jump right to the “speculating about systematic advantages” part, ignoring the need to reject the null first. So I give Hoffman credit here. He saw a correlation between European states and higher firepower, but did not immediately assume that this was a statistically significant correlation. He was willing to accept that this correlation – while meaningful in giving Europe an advantage – did not necessarily imply some kind of deep structural advantage for Europe.

Are there any deep structural advantages that Europe had? Maybe. But my guess is that a good portion (over 50%?) of the reason Europe advanced ahead of other areas was dumb luck, a series of fortunate accidents and coincidences. We are generally trained to look for systematic explanations, so being at peace with this randomness is difficult, but probably something we should get used to. A tip of the hat to Hoffman for his effort in that direction.

I feel like there is a Nick Crafts article from the mid-80s(?) that has a similar argument about the British Industrial Revolution. That is, just because England had a particular characteristic does not mean that characteristic was crucial to the IR, or even that it mattered for the IR at all. Can’t seem to place it, though. Help?