The Skeptics Guide to Institutions – Part 2


This is the second of a series of posts on the empirical institutions literature that I am covering in my graduate growth and development course. In Part 1, I looked at how the 1st generation of this literature misused cross-country measures of institutions in poorly identified regressions.

The second generation of empirical institutions work attempted to deal with the endogeneity problem in the standard “regress income per capita on institutions” regression of the 1st generation.

The dividing line between 1st-generation and 2nd-generation studies isn’t that bright. I used Mauro (1995) as an example of 1st-generation institutions work, but that paper uses ethnolinguistic fractionalization as an instrument for corruption. Hall and Jones (1999) look at measures of institutional quality instrumented with latitude and the percent of the population that speaks Western European languages. These instrumental variable (IV) strategies are generally dismissed, because few people believe ethnolinguistic fractionalization, latitude, or European language speaking have effects on income per capita *only* through institutions. In other words, these papers seem to fail the second requirement of an IV, the exclusion restriction: the instrument can have no separate correlation with the dependent variable.

The big event in the 2nd generation of the literature was the arrival of Acemoglu, Johnson, and Robinson’s (2001) paper using “settler mortality” as an instrument for institutional quality. They propose that the quality of institutions in a colony was a function of how deadly that colony was for European settlers. The idea is that in places where Europeans died quickly (Sub-Saharan Africa, Central America), they did not want to stay, and therefore installed extractive institutions to pull as many resources out of the colony as possible before they caught some deadly disease. In places like the US or New Zealand, where they did not die, Europeans stayed, and therefore installed good, inclusive institutions.

The heart of the argument here is that institutions in colonies were exogenously determined by Europeans, and thus we have a clean empirical “natural experiment” that will yield a good estimate of the effect of institutions on economic development. AJR is widely cited, and the settler mortality instrument has been used in any number of other papers since it came out (I’ve refereed at least 5 or 6 of them myself in the last 10 years).

But there are significant problems with the whole empirical strategy. Four issues with their estimates stand out to me:

1. They are still using an arbitrary measure of institutions as a continuous variable. The measure of institutions in AJR (2001) is “expropriation risk”, and every country is coded from 0 (high risk) to 10 (no risk). See the prior post for why indexes of institutions like this are useless. In short, the numbers have no meaning, but AJR treat them as if they do. A US score of 10 does not mean that a US citizen is half as likely to be expropriated as a Bangladeshi (who scores 5.14). Going from Honduras (5.32) to Tunisia (6.45) is not necessarily the same thing as going from Mexico (7.50) to India (8.27). Their measure of institutions doesn’t measure “institutions”.
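To make the ordinality problem concrete, here is a toy check using the four index values quoted above (the cubing transformation is my own hypothetical example, not anything AJR do). Any rank-preserving relabeling of an ordinal scale is equally "valid", yet it can reverse which jump between countries looks bigger:

```python
# The four index values from the post; cubing is a hypothetical
# rank-preserving relabeling, equally defensible for an ordinal scale.
scores = {"Honduras": 5.32, "Tunisia": 6.45, "Mexico": 7.50, "India": 8.27}

gap_ht = scores["Tunisia"] - scores["Honduras"]   # 1.13
gap_mi = scores["India"] - scores["Mexico"]       # 0.77

cubed = {k: v ** 3 for k, v in scores.items()}    # same ranking, new spacing
gap_ht_c = cubed["Tunisia"] - cubed["Honduras"]   # ~117.8
gap_mi_c = cubed["India"] - cubed["Mexico"]       # ~143.7

print(gap_ht > gap_mi)      # True: Honduras->Tunisia looks like the bigger jump
print(gap_ht_c > gap_mi_c)  # False: after relabeling, Mexico->India is bigger
```

Since nothing pins down the spacing of the index, any comparison of gaps, and hence any regression coefficient on the index, is an artifact of the coding.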

2. It is nearly impossible to believe that their instrument (settler mortality) has no separate correlation with the dependent variable (income per capita). Settler mortality arises from putting Europeans unadapted to different climates into those climates. Since the Europeans all came from a pretty similar climate zone, settler mortality is essentially picking up the intensity of the tropical disease environment. While the Africans, Asians, or Americans they colonized may have been adapted to those diseases in the sense that they were no longer deadly, it doesn’t mean those diseases had no effect on those populations. Places where Europeans died are also places that tend to have incredibly poor agricultural conditions – lack of frost, overly heavy rains, and poor soils. Europeans dying at alarming rates is simply a proxy for bad geographic conditions. And no, the fact that AJR control for latitude, temperature, and humidity is not the same thing as controlling for agricultural conditions. You can hold those three things constant and get wildly different outcomes depending on soil, altitude, wind patterns, rainfall patterns, and so on.

3. The estimated effect of institutions doesn’t make sense. Their IV results show a coefficient for institutions that is twice as large as the OLS coefficient. This is problematic. The whole reason we want IV estimates is because we think there is some kind of endogeneity between income per capita and institutions – specifically, that higher income leads to better institutions. This implies that the basic correlation of institutions and income per capita is biased *upwards*, or the OLS results are too big. But when they run IV, they get even bigger effects for institutions. This implies that income per capita has a *negative* effect on institutions, and that is hard to believe.

What about measurement error? We know that if institutions are measured with noise, then the OLS coefficient will be attenuated, or biased towards zero. But classic measurement error, as this would be, implies that there is some true “expropriation risk” out there in the world, and what we have is the true value plus some random error. But you can’t have this kind of measurement error when the numbers for expropriation risk are absolutely arbitrary. There is no *real* number to measure. The “expropriation risk” is precisely measured in the sense that it precisely measures the arbitrary index established by the Political Risk Services. So I don’t buy the measurement error argument.
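For what it's worth, the classical story is easy to simulate with made-up data (nothing here is from AJR): if a true value existed and were measured with purely random noise, the OLS slope would shrink by the textbook attenuation factor. The point above is that an arbitrary index gives this story no true value to anchor to.

```python
# Hypothetical simulation: classical measurement error attenuates OLS.
# All numbers are invented for illustration; this is not AJR's data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 1.0

inst_true = rng.normal(size=n)                 # the (fictional) true regressor
income = beta * inst_true + rng.normal(size=n)
inst_noisy = inst_true + rng.normal(size=n)    # true value plus random error

b_true = np.polyfit(inst_true, income, 1)[0]
b_noisy = np.polyfit(inst_noisy, income, 1)[0]

# Attenuation factor = Var(true) / (Var(true) + Var(error)) = 1/2 here
print(f"OLS on true regressor:  {b_true:.2f}")   # ~1.00
print(f"OLS on noisy regressor: {b_noisy:.2f}")  # ~0.50
```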

In the end, the simplest explanation for why their IV results are larger than the OLS is that there is a correlation of their instrument with the error term. We know settler mortality is negatively related to expropriation risk. If settler mortality is independently and negatively related to income per capita, then the IV results are going to be larger than the OLS [for the math-inclined, plim beta(IV) = beta + Cov(error, mort)/Cov(inst, mort), and that ratio of covariances is positive because both covariances are negative].
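That bracketed algebra can be checked with a toy simulation (all numbers invented; "mort", "inst", and "geo" are hypothetical stand-ins, not AJR's data): give the instrument its own negative channel to income, and the IV estimate overshoots both OLS and the true effect.

```python
# Hypothetical simulation: an instrument that violates the exclusion
# restriction inflates the IV estimate. Made-up numbers, not AJR's data.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 1.0                                # true effect of institutions

mort = rng.normal(size=n)                 # "settler mortality" (standardized)
inst = -0.8 * mort + rng.normal(size=n)   # deadlier places -> worse institutions
geo = -0.5 * mort + rng.normal(size=n)    # mortality also proxies bad geography
income = beta * inst + geo                # geography sits in the error term

b_ols = np.polyfit(inst, income, 1)[0]
b_iv = np.cov(income, mort)[0, 1] / np.cov(inst, mort)[0, 1]

# plim beta(IV) = beta + Cov(error, mort)/Cov(inst, mort) = 1 + (-0.5)/(-0.8)
print(f"True effect: {beta:.2f}, OLS: {b_ols:.2f}, IV: {b_iv:.2f}")
```

Here the analytic IV limit is 1 + (-0.5)/(-0.8) = 1.625, above the true effect of 1 and above the OLS slope of roughly 1.24, matching the story in the text: both covariances in the bias term are negative, so the ratio is positive.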

4. The data are probably wrong. David Albouy’s paper is the central reference here. Let me review the main issues. First, of the 64 observations, AJR do not have settler mortality data for 36 of them. For those 36, they infer a value from some other country. This inference could be plausible, but in many cases is not. For example, they use mortality data from Mali to infer values of mortality for Cameroon, Uganda, Gabon, and Angola. Gabon is mostly rainforest, and sits about 2,300 miles from Mali, which is desert and steppe.

Second, the sources vary in the type of individuals used to make mortality estimates. Most relevantly, in some countries the mortality rates of soldiers on campaign are used, and in others the mortality rates of laborers on work projects. In both cases, these mortality rates are outliers relative to what settlers would have experienced. Crucially, the use of the higher mortality rates from campaigning soldiers or laborers is correlated with poor institutions. That is, AJR use artificially high mortality rates precisely for the places with currently bad institutions. Hence their results are baked in before they ever run a regression.

Albouy’s paper shows that making any of a number of equally plausible assumptions about how to code the data will eliminate the overall results. Both the first stage – the relationship of mortality to institutions – and the second stage – the relationship of institutions to income per capita – become insignificant under any number of reasonable alterations of the AJR data.

So in the end the settler mortality evidence that institutions matter just does not stack up. It certainly does not have the kind of robust, replicable features we would like in order to establish the importance of something like institutions for development. If you want to argue that institutions matter, then by all means do so, but the AJR evidence is not something you should cite to support your case.

Next up I’ll talk about why 3rd generation empirical studies of specific institutions aren’t actually about institutions, but about poverty traps.


18 thoughts on “The Skeptics Guide to Institutions – Part 2”

  1. “The whole reason we want IV estimates is because we think there is some kind of endogeneity… This implies that the basic correlation of institutions and income per capita is biased *upwards*, or the OLS results are too big. But when they run IV, they get even bigger effects for institutions. This implies that income per capita has a *negative* effect on institutions”

    I don’t think this is 100% true, especially the part about GDP per capita’s effect on institutions (the direction of the bias doesn’t say anything about that!). If there are several omitted variables, it is very difficult to predict the direction of the bias. Check Bond’s slides (27-31).

    • Fair point, as a general case. But the OLS and IV estimates are very stable across different specifications with alternative control variables. So it is unlikely that there is some missing control variable that works *just so* as to make the OLS lower than the IV. Impossible? No. But unlikely.

  2. “This implies that income per capita has a *negative* effect on institutions, and that is hard to believe.”

    Stupid question, but if you already have high income, why would you want better institutions? That’s like asking for directions to the ice cream parlor when you already have ice cream. I would expect a negative effect for that reason in equilibrium, because I’d guess that the value of institutions is higher when you have low income.


  5. I wish you had talked about the AJR “Reversal of fortune” paper, which they also use to argue the importance of differences in the institutions Europeans set up around the world. The negative correlation between population density in 1500 and GDP per capita in the present is largely driven by the fact that a disproportionate number of higher-income countries today are found in the Americas and Australasia (compared with Africa and Asia outside East Asia). Europe in 1500 was also less dense than much of Asia.

    But the paper hangs on population density in 1500 being a good proxy for high output per capita — as opposed to technological endowment reflected in land productivity just leading to more population rather than higher living standards. So with a more conventional interpretation of Malthusian-era population density, there’s no “reversal of fortune” at all. The more interesting question then becomes, how to explain the global variations in subsistence equilibrium incomes in the Malthusian era. Incomes in early colonial frontiers (North America, Australia, Argentina) were high, often much higher than in Western Europe, which is consistent with high land per capita in the simplest Malthusian framework. And there’s your paper on labour intensity in different kinds of crop production which probably best explains variations in Old World incomes & pop densities. Then one can talk about higher subsistence wages being a precondition for the early stages of industrialisation, in the Allen factor price sense, with high wage economies creating incentives to substitute capital for labour. Well, even the Allen approach can’t explain the persistence and prevalence of innovations and therefore sustained growth, but at least his approach is consistent with conventional interpretations of Malthusian dynamics.

    • Thanks – at this point I didn’t want to dive off into “Reversal”, just so I could focus on the strict institutions literature. But on your point on population density: yes, yes, and more yes. Population density has no necessary correlation with development levels, and in fact may be negatively related to them. Which would lead, as you say, to an Allen-like possibility of earlier take-off in low density places.

      But “The Skeptics Guide to Reversal of Fortune” is a different post for a different day!

  6. As Michael Clemens wrote in his devastating critique of the use and abuse of instrumental variables, biases are only part of the problem. Good posts!!!

  7. Thanks, Dietz. Do you think subsequent research has strengthened the institutional hypothesis? For instance, Dell’s work on the mita has more robust theory and empirics. It is, of course, a case of “poverty traps,” but traps induced by institutions. That is, mines are geography, but humans choose how to organize production.

    • Anton – take a look at the latest post on just those issues. TL;DR version is that “institutions mattered” is different from “institutions matter”.


  9. All fair points, many of which I am very sympathetic to. I would add some nuances to your arguments, however: (1) while it is true that the instrumented variable (the contemporary institutions measure) is more ordinal than cardinal, identification ultimately comes from the instrument, so that (on the assumption that the instrument is valid; more on that later) we can comfortably infer causality independently of interpreting the magnitude of the causal effect. What I’m trying to get at is that the critique of whether “institutions matter” is distinct from “whether AJR’s estimates are accurate” (if, for instance, the settler mortality instrument does in fact satisfy the exclusion restriction, then we can make inferences about whether the effect is causal without worrying about measurement error in the instrumented variable); (2) given the stability of the IV estimates in different specifications, the possibility that there is *only one* omitted variable that consistently works to upward bias the estimate is actually less likely, since including more controls would raise the likelihood of correlation between the single omitted variable and the additional controls (which in turn would significantly alter the estimate). All that said, I do agree with you (and Albouy) that the biggest problem is the coding of the instrument in a fashion that would favor their result, which is compounded by the small size of the instrument set (by imputing the same values for Angola, Cameroon, etc. we essentially lose the benefit of true variation in the instrument set).

    • Fair points, both. On (2), it doesn’t necessarily have to be an omitted variable. Reverse causality could show up as consistently wrong estimates, even across different specifications. But you’re right that the stability of the AJR results suggests that there is *not* some phantom omitted variable that is lurking out there.

