Academics Are Policy Troubleshooters

President Lyndon Johnson meets with the “wise men,” a group of informal advisors that included Undersecretary of State George Ball, a key devil’s advocate in policy debates among the “Best and the Brightest” surrounding the Vietnam War. Source: National Archives of the United States.

By Oliver Kaplan for Denver Dialogues

These days, academics seem increasingly driven to produce “policy-relevant” research. A side effect of this trend is the anxiety that one’s research won’t be relevant enough for policymakers to pay attention. Relevant research can be a very good thing, but it isn’t the only way to “engage” with broader audiences. Social scientists can also contribute simply by being in the room where policy discussions are held and acting as sounding boards: as expert policy troubleshooters.

This kind of contribution is underscored in Micah Zenko’s new book, “Red Team: How to Succeed By Thinking Like the Enemy.” The book is a hagiography of red cells, the prime example being the CIA red cell formed after 9/11. [1] This team of analysts was encouraged to examine core assumptions critically and to introduce “fearless skeptics” and “devil’s advocates” into the intelligence process in order to combat “groupthink.” The red cell’s “simulations, vulnerability probes, and alternative analyses” help institutions in competitive environments “identify weaknesses, challenge assumptions, and anticipate potential threats.”

Red-teaming is not exactly new. If academics are good for at least one thing, it is applying these kinds of intellectual skills to think outside the box. Sure, policymakers may think of academics in terms of traditional stereotypes: too abstract, absent-minded, and aloof (am I becoming more so by the day? I digress…). While academics don’t necessarily need to “think like the enemy,” who knew having one’s head in the clouds could be such a good thing?

The (dreaded!) academic seminar epitomizes collective troubleshooting with the goal of improving research and arriving at sound findings. This can involve the elaborate parading of methodological chops, with phrases such as “endogeneity,” “epiphenomenality,” “error terms,” and “instruments” flying freely. Go ahead: try sharing an idea (or even starting a sentence) at one of these sessions and you’ll see how the full glory of academic critical thinking can make you question your assumptions (warning: when one’s own research is the object of such critique, it can result in temporary emotional crippling!).

More precisely, academics are good troubleshooters because they do at least three core things well:

  1. Exploring counterfactuals. When it comes to examining whether a key causal variable or policy has a particular effect, academics are trained in constructing various counterfactuals to consider whether the same result would occur absent the key variable or policy. [2] This can be a crucial exercise for checking the assumptions of an argument through “thought experiments,” especially when it is hard to collect empirical evidence.
  2. Probing alternative explanations. When critiquing research, academics focus their attention on possible gaps and uncertainty in causal logic and think about whether competing causal “mechanisms” (omitted variables) may be at play. This can include consideration of alternative behavioral models of the enemy, as forcefully argued by Alex George, one of the original scholars to push for “bridging the gap.” Academics often talk about putting on the “hats” of different types of scholars to explore how different theories might account for a given set of facts.
  3. Weighing indicators and evidence. Academics are pros at examining the evidence base for causal claims. They are also trained to consider the observable implications of such claims and to think about how to measure outcomes later to know whether one’s causal model is accurate.

Beyond these skills, academics tend to know history well, are good at applying existing literature and past cases to new problems, and are aware of where existing knowledge falls short (indeed, this is a good share of what PV@G does). Academics are pushed to be flexible thinkers and to import theories for new problems from unexpected places and from across disciplines. Finally, the transparency of the academic (scientific) process, with its emphasis on peer review and getting multiple eyes on a problem, helps us check our (confirmation) biases at the door. And far from being deeply arcane tools, these methods are familiar to any first-year grad student.

This doesn’t mean academics promise perfect answers for tough real-world decisions, or that they are necessarily great forecasters. That may be asking too much of anyone (for instance, the complexity of the current situation in Syria has made events difficult to predict). But academic thinking can at least help assess the pros and cons of different policy options. It may then be up to the practitioner or analyst to decide how to weigh the (political) risks of competing options, or whether more certainty or information is needed before recommending a decision.

For academics armed with troubleshooting skills, it can be frustrating to hear policymakers debate options with little or erroneous supporting evidence. Yet it is incumbent on academics to resist the temptation to turn a policy discussion into an academic seminar. Showing off methodological chops in front of policymakers is a terrible idea, as they may not be interested in, or open to, research written in academic-ese or with few applied results. Instead, academics should policy-troubleshoot in ways that are accessible to non-academics and non-experts, since practitioners are often pressed to make decisions with incomplete information and little time. This could mean limiting comments on policy options to only the most central or most resolvable issues.

It should also be in policymakers’ interest to come halfway, as Celestino Perez, Jr. has argued on this blog. I’m not saying hug an academic, but maybe consider keeping a few in the room. [3] So, next time you want to know all the reasons why your policy analysis might be wrong (and what options might work better), don’t worry. We’ve got you covered from here till next Tuesday.

[1] To be clear, I am not arguing that additional troubleshooting can directly alter the course of events or could have prevented the 9/11 attacks. It can, however, help yield more informed decisions and a more careful weighing of risks and alternatives.

[2] That is, for a causal claim of the form “factor a causes result b,” the counterfactual to probe is “had a been absent, b would not have occurred.” For example, if a peace agreement is credited with ending a conflict, the thought experiment asks whether the conflict would have ended anyway without the agreement.

[3] Academics may have much to contribute by advising policymaking processes, though I do not mean to gloss over the real professional and ethical tensions that “being in the room” may entail. Such tensions surfaced, for example, in some anthropologists’ resistance to cooperating with the U.S. military’s Human Terrain System program.
