Shit Happens: When Equilibrium Theories Meet Non-Equilibrium Outcomes

By Cullen Hendrix for Denver Dialogues.

Alan Greenspan, former chairman of the Board of Governors of the Federal Reserve System. Photo via Brookings Institution.

Note: This post was filed last Thursday, before the attempted coup in Turkey unfolded. The event referred to in the introduction is not, and was not, related to the coup.

Recently, I was asked by a group of policymakers to envision a scenario in which something bad would happen. This bad thing is (thankfully) rare in terms of historical precedent but always extremely consequential – hence the interest in foreseeing it. The specifics of the outcome are unimportant. What is important is how the process of envisioning this outcome – and other rare but consequential outcomes – challenges our essentially equilibrium-based theories of political behavior and human interactions.

Social scientists tend to theorize in probabilistic terms. We know the world is an inherently strange and contingent place, but within all that strangeness and contingency exists a set of general tendencies, i.e., responses by actors to particular circumstances and actions of others that are more or less likely to occur. As we see it, our job is to uncover and explain these tendencies, washing away the nuance of the specific instance in favor of mining large numbers of observations in order to uncover the general relationship. These tendencies inform our thinking about the world and allow us to make statements like “regimes facing large youth bulges are more repressive” or “regimes facing dissidents using guerrilla tactics are much more likely to engage in mass killings of civilians.”

Within conflict studies, this approach is often wedded to the rationalist paradigm. In order to understand why things happen, we must identify the relevant actors, identify their interests, and then figure out how their interactions are structured. Then, we assume actors pursue strategies that maximize the likelihood that their preferred outcome will come to pass.

This way of thinking about the world has proven pretty useful. But it has its limits. In particular, it confronts a huge weakness when trying to inform thinking about rare, hard-to-foresee events. Erik Gartzke’s “War Is in the Error Term” lays out this problem as it pertains to interstate war. Interstate war is incredibly rare. The Correlates of War project identifies roughly 95 traditional interstate wars – like the Spanish-American War or Falkland Islands War – in the entire international system between 1816 and 2007. Most of those were relatively minor: only 29 of them resulted in more than 20,000 combatant deaths. Most IR scholars would be hard-pressed to name more than 50 of them. And while I’m not going to crunch the numbers, I’d be happy to bet that more has been written about two of them – World Wars I and II – than the rest combined. By an order of magnitude. Or two.
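Just to make that rarity concrete, here is a quick back-of-the-envelope calculation using only the figures cited above; the code and the rounding are mine, purely for illustration:

```python
# Back-of-the-envelope rarity calculation using the figures cited in the text.
years = 2007 - 1816 + 1    # observation window, 1816-2007 inclusive
interstate_wars = 95       # traditional interstate wars identified by Correlates of War
major_wars = 29            # wars with more than 20,000 combatant deaths

onsets_per_year = interstate_wars / years
major_share = major_wars / interstate_wars

print(f"~{onsets_per_year:.2f} interstate war onsets per year")  # ~0.49
print(f"~{major_share:.0%} of those wars were 'major'")          # ~31%
```

Roughly one war onset every two years, worldwide, across every pair of states in the system – and only a third of those rise to the level most of us would call a major war.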

In “War Is in the Error Term,” Gartzke is reflecting on the rationalist tradition in conflict studies, which holds that because war is costly, it is ex-post inefficient. There is always some bargain that the parties would have preferred to war: a leader takes a golden parachute into exile, countries X and Y agree to a different demarcation of their border, etc. Hence, to explain war one must uncover why the parties failed to achieve the bargain. That is, war is not an equilibrium outcome – something has to go wrong for us to observe actual conflict.
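For readers who want the mechanics, here is a minimal sketch of that bargaining logic in the spirit of the standard rationalist bargaining model of war; the notation (p, c_A, c_B, x) is mine, not Gartzke's:

```latex
% Sketch of the canonical bargaining range (notation is mine, not Gartzke's).
% State A prevails with probability p; fighting costs the sides c_A, c_B > 0;
% the disputed prize is normalized to 1, so A's expected war payoff is p - c_A
% and B's is (1 - p) - c_B.
\[
  \underbrace{p - c_A}_{\text{the least A will accept rather than fight}}
  \;\le\; x \;\le\;
  \underbrace{p + c_B}_{\text{the most B will concede rather than fight}}
\]
% Because c_A + c_B > 0, this interval is always nonempty: some peaceful division
% x exists that both sides prefer to war. That is why war is ex post inefficient,
% and why explaining it means explaining what kept the parties off that bargain.
```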

Because Erik is a clear writer, I quote rather than paraphrase:

We cannot predict in individual cases whether states will go to war, because war is typically the consequence of variables that are unobservable ex ante, both to us as researchers and to the participants. Thinking probabilistically continues to offer the opportunity to assess international conflict empirically. However, the realization that uncertainty is necessary theoretically to motivate war is much different from recognizing that the empirical world contains a stochastic element. Accepting uncertainty as a necessary condition of war implies that all other variables—however detailed the explanation—serve to eliminate gradations of irrelevant alternatives. We can progressively refine our ability to distinguish states that may use force from those that are likely to remain at peace, but anticipating wars from a pool of states that appear willing to fight will remain problematic.

When outcomes are mutually undesirable, and all parties know those outcomes are mutually undesirable, we generally expect they will not occur. And most of the time, they don’t. But it’s the instances in which they do occur over which we obsess, and it is precisely these instances in which our rationalist, probabilistic theories tend to break down.
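To see why, as Gartzke puts it, "anticipating wars from a pool of states that appear willing to fight will remain problematic," consider a hypothetical early-warning exercise. Every number below is invented purely to illustrate the base-rate arithmetic, not drawn from any actual forecasting model:

```python
# Hypothetical illustration of the base-rate problem in rare-event forecasting.
# None of these numbers come from the post; they are made up for the example.
base_rate = 0.01             # assume 1% of "at-risk" cases actually end in war
sensitivity = 0.80           # the model flags 80% of the true war cases
false_positive_rate = 0.10   # and wrongly flags 10% of the peaceful cases

# Probability that a flagged case actually goes to war (positive predictive value).
flagged_and_war = sensitivity * base_rate
flagged_and_peace = false_positive_rate * (1 - base_rate)
ppv = flagged_and_war / (flagged_and_war + flagged_and_peace)

print(f"Share of flagged cases that actually fight: {ppv:.1%}")  # roughly 7-8%
```

Even a forecaster this good flags more than ten peaceful cases for every war it catches. Narrowing the pool is real progress, but the war itself still sits in the error term.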

Shit happens. An unsuccessful would-be assassin gets a shot at redemption when his target’s driver randomly turns onto the street where the assassin is standing. A discarded cigar wrapper helps turn the tide of the US Civil War. One sounds like a moron standing up in a room of highly educated, extremely well-informed people and saying “For outcome X to occur, something weird has to happen” – but that may well be the right answer.

Before 2008, economic analysts could have given you a million reasons why the financial crisis would not have occurred; no one apart from a few double bass-drumming weirdos stood to benefit. But after the fact, even as fundamental a believer in rationalism as former Fed chair Alan Greenspan was forced to concede, “I have come to see that an important part of the answers to those questions is a very old idea: ‘animal spirits,’ the term Keynes famously coined in 1936 to refer to ‘a spontaneous urge to action rather than inaction.’” “Animal spirits” is not a term traditionally associated with sober, rational analysis. By it, Greenspan and Keynes before him refer to irrational exuberance and discounting of the potential pitfalls of risky behavior.

In the quoted piece, Greenspan then goes on to suggest these animal spirits can be measured and incorporated into economic forecasting. But this misses the point. Even if Greenspan’s thoughts inform future forecasts, there will be some black swan event that catches us by surprise. Ex-post, we will all scratch our heads and wonder why we did not see it coming, and change our theoretical and predictive models to incorporate the insights from the latest crisis. And then the next crisis will occur, and the process will repeat ad infinitum.

I am a firm believer in rationalist explanations for most phenomena, and ceteris paribus statements are likely to remain the rationalist social scientists’ stock-in-trade for the foreseeable future. Competing theories of conflict, economic crises, and other rare-but-consequential events have their problems as well – to wit, the “ancient hatreds” explanation for ethnic and religious conflict massively over-predicts violence, and false positives may be as big a problem as false negatives. Moreover, that we must continuously refine our predictions about the future in light of new information is a positive thing: it suggests we are learning more about the world around us. But we need to be circumspect about how well equilibrium theories can be expected to perform when predicting non-equilibrium outcomes.
