Responsible Policy Engagement: Some Challenges

US Capitol Building. Photo courtesy of John Brighenti.

By Cullen S. Hendrix for Denver Dialogues

Last month, the team at the Sié Center introduced our program on Rigor, Relevance, and Responsibility: Promoting Ethical Approaches to Policy Engagement. Through this work, we hope to help scholars navigate the sometimes murky waters of policy engagement (or “broader impacts”) in which funding agencies and universities are increasingly asking them to swim. When we take an active role in affecting policy outcomes, we take on some responsibility for those outcomes.

To be clear: I view policy engagement as a good thing. But that a thing is good does not preclude it from being complicated or difficult. And we need to address that head-on if we are going to be in the collective business of encouraging early-career academics to do it.

My experiences engaging with policy-consequential actors and processes have come primarily in four arenas: the US intelligence and national security communities, multinational corporations, multilateral agencies and international NGOs, and government scientists and natural resource managers. This engagement has taken several forms: direct consultations, including scenario development and brainstorming; desk- and/or field-based studies for eventual publication by the organizations/agencies themselves; and data sharing and trainings for collaborators and scientists.

These experiences have been overwhelmingly positive, both personally and professionally. But they have often caused me to reflect on my obligations—to myself, to the policy audiences, and to the people whose lives their policies affect—in ways that left me confused and sometimes uncomfortable. In this post, I outline three general challenges that I have worked through.

1. The Black Box Challenge

When engaging with a policy-consequential actor, one often will not be able to observe directly or fully how one’s insights are being used, or to what ends. This challenge is more extreme in some instances than in others. In the extreme, as with some interactions with members of the intelligence community, one may not even know the true identities of the individuals with whom one is interacting. Moreover, one may never know whether the information or insight one provided was acted upon, or whether it was consequential for real-world outcomes. In this sense, it’s not even a true black box: the only part the scholar (partially) observes is their own input into the system, while the outputs remain opaque. In these circumstances, how does one decide what kind of information or insight one is willing to share?*

*Note: Two colleagues suggested I supply an example here. I really can’t, because I can’t know exactly which conversations led to what. That’s sort of the point.

2. The Cherry-Picking Challenge

At the risk of stating the obvious, policy-consequential actors have preferences over outcomes and often over specific strategies, priors about how the world works, and concerns over the optics of their actions and decisions. Given the pervasiveness of confirmation bias and motivated reasoning, it is reasonable to expect that policy-consequential actors will be more naturally drawn to some pieces of evidence or insight than others, regardless of their intellectual or social-scientific merits.

This process may be conscious or subconscious, but either way it can result in expert insight being used in a highly selective manner. Again, this challenge will be more extreme in some instances than others. In my own experience and in my discussions with others, career bureaucrats seem less prone to this than office-seekers and their professional staffs. But some variant of the challenge is pervasive—and was tackled provocatively by Adam Elkus several years ago. For Adam, the issue boils down to this:

“Political scientists have this strange, naive belief that policymakers are just uninterested actors looking for the best advice they can find and if only they could be fed the political science in a form that their unique tribe understands everything would be a-ok. It’s almost as if political scientists — who study the strategic behavior of political actors — throw all of their own research out the window when naively formulating their notions of policy relevance.”

I am not sure the challenge is as insurmountable as Elkus’ post suggests. But he is absolutely right in pointing out that policy engagement requires academics to think strategically about how their work might be used. In light of this challenge, how can social scientists seek to inform debate in ways that minimize the chances their evidence will be used in bad faith or in a highly selective manner?

3. The Communicating Consensus and Uncertainty Challenge

Social scientific evidence is often highly contextual and, in some instances, still inconclusive. Individual scholars operating in these areas often have staked their professional careers and reputations on a particular position that may be at odds with the consensus or may overstate consensus where none exists. This challenge is compounded by the fact that policymakers typically would like more clarity and decisiveness than the literature warrants. Will X work? Should we do Y? “It’s complicated” may be accurate, but it is not particularly helpful.

Importantly, it is unlikely that the policy-consequential consumer of this information knows any of this: the whole reason the expert is in the room is that they know something the policy actor does not. This asymmetric information gives the scholar power: if they so choose, they can act as gatekeeper and selectively interpret scholarly evidence to bolster their own position.

What responsibility do scholars have to convey ambiguity and acknowledge when their view is a minority one, and how?

To give an example: I’ve been asked in multiple fora to offer thoughts on how environmental conditions affect armed conflict. In my own work, I’ve found a somewhat counterintuitive relationship between drought/water scarcity and conflict (that it’s pacifying in certain circumstances and at certain scales). This finding is at odds with a relatively large body of literature linking drought with conflict. In these circumstances, would it be responsible to treat my findings as representative of reality (or simply “correct”) and not note that the question is still a relatively open one—or that mine is a minority view? My personal answer to this question is “no, of course not.” But I’ve definitely seen scholars overstate the degree of agreement around controversial positions in order to elevate their own conclusions.

This list of challenges is by no means exhaustive. Over my next few posts for PV@G, I’ll be addressing others and including vignettes designed to illustrate them. But in anticipation of that, I encourage readers to comment or email me to identify others. We cannot hope to resolve all these challenges—some are so inherent to the enterprise as to be fundamentally irresolvable—but hopefully we can help the next generation of policy-engaged scholars engage with confidence and clarity.

1 comment
  1. I find your engagement on this topic not only astute but very timely, as information has become the currency of our day and its “validity” power. Coming from the intelligence world initially and more recently shifting to academia, I especially appreciate your distinction between long-haul workers in these fields versus office holders. I think the relationship between these professionals and academic experts is an important area to consider further: how to develop shared language, trust, and understanding for the greater good in the issue areas collectively being tackled? And of course, how to genuinely convey/steer office holders to the right decisions?

    To your concern for misuse of expertise, I believe in democratic societies, we might look to transparency and voice as tools by which to counter dark twists of knowledge. I have operated in classified environments, and perhaps as a result, I appreciate greatly the freedom an academic has to openly share knowledge. (Unless you are speaking to consultations in which academics enter into classified working agreements.) I argue academics must work harder to make their findings accessible to a more general public. Educate the constituencies and inform their political power. That is certainly not easy, but truly reflects what I believe most political scientists hold dear, the power of these people we study to bring about a better society for themselves.

    I would like to think a more equitable distribution of information and knowledge will reduce the power play that currently frames information. Perhaps I am naive to these entanglements from the academic point of view? I look forward to hearing more about the examples people have experienced. So glad you are opening this conversation!
