The Problem with the Problem of Bridging the Gap

Professor Erica Chenoweth discusses the challenges of civilian peacekeeping with Mel Duncan, founding Executive Director of Nonviolent Peaceforce, as part of the Carnegie Corporation of New York-funded “Bridging the Academic-Policy Gap” project at the Sié Chéou-Kang Center for International Security and Diplomacy at the Josef Korbel School of International Studies. Photo by University of Denver.

By Cullen Hendrix for Denver Dialogues

Adam Elkus’ “The Problem of Bridging the Gap” has generated some heat among political scientists wrestling with what it means to bridge the gap between academic discourse and practical policy relevance. Dan Nexon, lead editor of the International Studies Association’s flagship journal, effectively re-blogged it at Duck of Minerva. Witty and concise, it amounts to a “medium-length polemic” (Nexon’s words, not mine) against the concept of policy relevance. While an entertaining read, the critique misses the mark. If we accept it on its own terms, the logic is persuasive. If we question its most fundamental assumption, the critique unravels.

Elkus’ critique consists of four points (if you have ten minutes, a sense of humor, and care about this discussion, please go read it):

  1. “It judges the value of academic inquiry from the perspective of whether or not it concords with the values, aims, preferences, and policy concerns and goals of a few powerful elites.”
  2. “It demands that academic inquiry ought to be formulated around the whims and desires of the people being studied.”
  3. “It makes no demands on the policymakers themselves.”
  4. “It allows questions and projects to be assigned from above rather than discovered, and substitutes political efficiency for scientific contribution as a review criteria.”

I could take issue with points 2-4, though each contains a grain of truth. But Elkus’ first point is the most important, so here it is in its entirety:

It judges the value of academic inquiry from the perspective of whether or not it concords with the values, aims, preferences, and policy concerns and goals of a few powerful elites. Why, if anything, do we judge “policy relevance” by whether or not it helps government policy elites? Surely governmental elites, politicians, think-tankers, etc aren’t the only people who care about policy! The “policy relevance” model is simply a normatively unjustified statement that political scientists and social scientists in general ought to cater to the desires and whims of elite governmental policymakers.

This definition of policy relevance is not the revealed word of the Almighty(ies). Policy relevance is a contested concept that can be defined in multiple ways. Elkus’ definition of policy relevance—seeking to inform the opinions of elite policymakers, politicians, and think-tankers in order to affect government policy—is far from the only plausible one. If we define policy relevance in broad terms, such as “a desire to engage with the world we study, rather than merely to observe it,” the type of work that fits under the policy-relevant banner expands considerably.

For instance, the entire field of development economics is highly applied, i.e., oriented toward affecting outcomes, even if that is not universally viewed as a good thing. Development economists often engage directly with the communities they seek to help, with or without engaging policymaking elites. Even within the field of security studies, where Elkus’ critique would seem to have the most validity, engagement need not mean just high-level meetings at the Pentagon or on the Hill.

The Sié Center for International Security and Diplomacy’s Carnegie Corporation-sponsored “Bridging the Academic-Policy Gap” project brings together academics, civil society leaders, policymakers, and ordinary citizens to discuss the ways nonviolent actors and strategies can be agents of peace in highly violent contexts. Our work ranges from dialogue and exchange with nonviolent organizers to collaborating with fisheries scientists tasked with managing critical natural resources. In these cases, our work seeks to influence and inform policy at the grassroots and/or implementation level. These projects have a clear bearing on security outcomes, provided we accept a broader definition of what constitutes security studies. And we’re not alone: AidData-affiliated researchers are engaged in partnerships with USAID field officers—hardly Beltway insiders—on a wide range of development assistance-related research. I’m sure there are more examples out there, and I hope the commenters flag them.

These types of collaborations occur at the operational level and often entail thankless hours spent poring over data and discussing practical implementation issues. Because these activities occur far from the op-ed pages of national newspapers and don’t make for great sound bites on news channels, it’s unsurprising they are not the first thing that comes to mind when Elkus thinks of policy engagement. Indeed, Elkus’ critique seems geared toward a particular brand of policy relevance that emphasizes media engagement. The merits of that type of engagement notwithstanding, it remains but one of many ways to bridge the gap. Cumulatively, though, the type of direct engagement discussed here, conducted outside the limelight, may be more influential.

Elkus is right to remind us that we should be critical of attempts to forward specific agendas under the banner of policy relevance. But in doing so, he advances an overly narrow definition of policy relevance and then critiques the concept of policy relevance for being overly narrow. I am sure there are reasonable facsimiles of Elkus’ straw man—the credulous academic armed with a regression table and no understanding of actual politics—wandering around Washington. I’m just not sure they are typical of social scientists’ engagement with policymakers.

  1. Not being a political scientist, I’ll just say that the OP does seem to me to be a straw man.

    At the same time, the situation he describes, in which the needs of a client/patron industry shape the direction of academic research, is just about universal. In this case, the client/patron industry is the policy-advocacy industry.

    Similar things happen, for example, in the way research into molecular biology, let’s say, is shaped by its client/patron industry, which is big pharma.

    Mechanisms are typically put in place by actors in the client/patron industry to empower researchers to fight for their independence against managers engaged in corporate power politics. This stems from lessons learned in the US auto industry, where the belief took hold that a lack of objective integrity was one of the reasons US automakers were losing to foreign competition. The philosophy is quite common across US corporate management. Although it is also commonly violated in practice (Volkswagen?), it is at least a value that is accepted in principle.

    On the industry side of the fence, in most industries, it is programs like “six sigma,” which formalize quasi-scientific-method justification for corporate decisions, under the banner of Quality (with a capital Q) and the belief that Quality positively correlates with profit. On the academic side, it is peer review, and a tradition of objective truth as the primary value, scientific method, etc., etc.

    So, in summary: moderating mechanisms arise on the client/patron industry side because the researchers serving that industry lack the power to enforce any kind of policy on integrity themselves. Typically, when academics clash with their client/patron partners, they resort to the vicious tactic of doing exactly what they are told. Eventually moderating mechanisms result, and then it is perfectly OK that the client/patron industry directs the areas of research.

    In the corporate case, there is of course the problem of externalities, so you have prominent cases where this system breaks down, like research “proving” tobacco smoking is safe. But that doesn’t invalidate the concept.

    How this applies to the foreign-policy-advocacy industry, which, unlike corporations, operates on very long cycles of feedback, or possibly no feedback at all, is an interesting question. A Darwinian process may be the answer.
