More on Microfoundations

By Joe Young

NASA image by Bill Ingalls, via Wikimedia.

Thomas Zeitzoff’s recent post on microfoundations and conflict research and Jay Ulfelder’s response are both thought-provoking pieces on how best to draw causal inferences in situations where we can’t perform the gold standard of science: a controlled experiment. Zeitzoff is correct that field experiments hold much promise for validating a mechanism under certain conditions, and Ulfelder is right that with the rise of microlevel data we can likely get closer to measuring individual behavior in conflict zones. Three things, I think, are missing from this discussion:

  1. There are lots of research methods, but none is perfect. I’m sure someone has a catchy phrase for this that I am missing, but I’d call this conundrum the fundamental problem of social science. We’d like to make sure that we measure what we claim we are measuring (validity), that our cases generalize to other cases (generalizability), and that our measurement strategy yields the same results each time we use it (reliability). What we know for sure is that if we strengthen one of these, we likely weaken another (see the Mundell-Fleming conditions in International Political Economy for a similar trade-off). Field experiments are fantastic for internal validity. Assuming we randomly assign people to groups, we can be quite confident that the stimulus caused whatever change we observe (if there is one) between the treatment and control groups (the toy simulation after this list illustrates why). As Zeitzoff notes, these experiments do less well at generalizing to other cases (though they do better than lab experiments). Rich data, collected by automated means and capturing micro-level behavior, can likely be gathered reliably and generalized across time and space, but will be less valid than the field experiment because of non-random assignment to treatment (although matching methods and regression discontinuity designs can help approximate it). A deep case study, done by a native speaker, about the dynamics of conflict in a particular space may be the most valid account of a particular town, group, or region, but it does not generalize well (even within the country or over time) and has issues with reliability. The take-home point from this quick discussion: we can’t have it all.
  2. Field experiments aren’t the only way to get at microfoundations. Field experiments are hot, for good reason, but this movement runs the risk of reifying one particular form of research. Computational models, game theory, counterfactual reasoning, matching methods, deep case analysis, and comparative case studies are all alternative ways of looking at microfoundations (some develop models of these micromotivations, some test them). Each has its strengths and weaknesses, and each is best suited for addressing certain questions.
  3. There isn’t a single best unit of observation. Even as someone interested in micromotivations, I am not always focused on the individual. Sometimes I want to know how networks function, how dyads interact, and how groups behave. Looking at the individual is useful in many cases; in others it is not. Depending on the question, the phenomenon, and the time period, we may choose other spatial or temporal units of observation.
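
To make the logic of random assignment in the first point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical (the covariate, the effect size, the sample size); the point is only that randomization balances background characteristics across groups in expectation, so a simple difference in means recovers the true effect on average:

```python
# Toy simulation (hypothetical data) of why random assignment supports
# internal validity: randomization balances covariates across groups, so a
# simple difference in means recovers the true treatment effect on average.
import random

random.seed(42)
TRUE_EFFECT = 2.0  # hypothetical effect of the stimulus

# Simulate 1,000 subjects with a background covariate (e.g., prior grievance).
subjects = [{"grievance": random.gauss(0, 1)} for _ in range(1000)]

for s in subjects:
    # Random assignment: each subject has a 50% chance of treatment,
    # regardless of the covariate.
    s["treated"] = random.random() < 0.5
    # Outcome depends on the covariate, the treatment effect, and noise.
    s["outcome"] = (s["grievance"]
                    + (TRUE_EFFECT if s["treated"] else 0.0)
                    + random.gauss(0, 1))

treated = [s["outcome"] for s in subjects if s["treated"]]
control = [s["outcome"] for s in subjects if not s["treated"]]

# Because randomization makes the groups comparable on grievance (and on
# everything else we didn't simulate), the difference in means is an
# unbiased estimate of TRUE_EFFECT.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

With observational data, by contrast, assignment to "treatment" would itself depend on covariates like the simulated grievance term, and the raw difference in means would be biased; that is the gap matching methods and regression discontinuity designs try to close.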

As Zeitzoff rightly notes, part of the growing animus toward cross-national statistical work is due to its proliferation (likely driven by the availability of the data he cites). He may be right: we may have too many people doing this kind of work and need more people going into the field, running experiments, learning computational modeling, and so on.

Will Moore wrote a paper that takes some of the Correlates of War (COW) research to task for focusing too much on one particular way of doing research. Moore suggests that even if a small portion of the COW folks moved into an alternative research domain, a good deal of progress could be made. This, I think, is where it gets tricky. My soft thesis is: all of these methods have a place in conflict research. The better follow-up question is: how often should each method be used? Or, what percentage of conflict scholars’ total effort should be devoted to each method? I’ll leave that question open and look forward to your thoughts.

1 comment
  1. Well said, Joe, and I hope it’s clear from my post that I agree wholeheartedly with your first point. The “digital traces” I referenced admittedly have many gaps and flaws, too. I’m just more optimistic about the opportunities there than I am about field experiments, at least for research on violent conflict. But, as you say, the best of all possible worlds will involve continuing to learn from as wide an array of methods and data as we can throw at the questions that interest us.
