Recently, Nature Climate Change published a study demonstrating significant sampling bias in the research that informs our understanding of whether climate change will accelerate human conflict. I was a peer reviewer of “Sampling bias in climate–conflict research,” and I wrote an accompanying “News and Views” piece summarizing it. I am fascinated by the issue of sampling bias; it’s perhaps the most consequential and least recognized form of bias in the social sciences, with potentially massive consequences for what we (think we) know about a host of phenomena.
Put simply, sampling bias arises when the sample – the individuals, countries, or regions under study – deviates from the population it is intended to represent. You may have heard it called “the college sophomore problem”: psychologists run lots of experiments on college sophomores because they are convenient to study in a university setting. But college sophomores are not representative of society at large, so the inferences drawn from studying them may or may not be valid.
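The “college sophomore problem” can be made concrete with a toy simulation (all numbers here are assumed purely for illustration, not drawn from any real survey): if the quantity we care about varies with age, a convenience sample of 19- and 20-year-olds will systematically miss the population value, no matter how many sophomores we survey.

```python
import random

random.seed(0)

# Toy population (assumed numbers): ages 18-80, with some attitude
# score that declines with age.
population = []
for _ in range(50_000):
    age = random.randint(18, 80)
    score = 100 - 0.5 * age + random.gauss(0, 10)
    population.append((age, score))

def mean_score(rows):
    return sum(s for _, s in rows) / len(rows)

# Target of inference: the population mean.
true_mean = mean_score(population)

# Convenience sample: only "college sophomores" (ages 19-20).
sophomores = [(a, s) for a, s in population if a in (19, 20)]
biased_mean = mean_score(sophomores)

print(round(true_mean, 1))    # ~75.5
print(round(biased_mean, 1))  # ~90, systematically too high
```

The gap between the two estimates does not shrink as the convenience sample grows; it is a property of whom we sampled, not how many.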
The article, which was written by Courtland Adams, Tobias Ide, Jon Barnett, and Adrien Detges, generated some pretty vocal criticism. As Elizabeth Chalecki told a reporter for the Atlantic, “I can’t see what the authors are trying to accomplish with this article.” In the same story, Solomon Hsiang – the lead author of the influential meta-study that has become the go-to reference for those citing strong, pervasive climatic effects on conflict – said the article “insinuate[s] there is some kind of bias boogeyman in the research field without actually showing that anybody else who came before made an error or demonstrating how their idea could affect findings in the field.”
While the Nature Climate Change study is not perfect, and while I will be forever thankful to Hsiang for introducing the term “bias boogeyman” to my lexicon, this criticism strikes me as excessive. My News and Views piece offered a generally positive take, and here, I explore the article’s three main findings in light of these criticisms.
- Researchers in this space are not sampling on the independent variable, which is climate change exposure or vulnerability.
The authors convincingly demonstrate that research in this area is not focused on cases where climatic stress is most acute. “Dogs that don’t bark” – cases where the cause is seemingly evident but the effect is not – are very important for helping us understand the conditions under which even severe climatic stress is decoupled from violence. If preventing violence is our ultimate policy goal, understanding a country or community’s sources of resilience – social, political, and economic – is of utmost importance.
- There’s a streetlight effect in climate-conflict research.
I’ve written before about the streetlight effect (here, here, and here), and I find the evidence presented by the authors compelling. Do their results invalidate everything we think we know about climate change and conflict? Hardly; this may be how the study is being packaged, but it’s not the actual finding. While the article suggests there may be scope conditions to what we know, especially regarding the case-based research on the subject, there have been enough careful meta-analyses and cross-national, time series-based studies on climate-conflict connections to say we know something, even if the picture is still murky.
- Researchers in this space are sampling on the dependent variable (i.e., conflict), at both national and regional scales.
Scholars in this area focus more attention on countries with large numbers of battle deaths and regions—Africa and Asia—where the presumed linkages are strong.
In line with “Sutton’s Law”—which is named for an apocryphal statement by Willie Sutton that he robbed banks “because that’s where the money is”—it makes little intuitive sense to study conflict in places where it does not occur. That would be a bit like studying influenza in people who don’t have it.
But any epidemiologist will tell you that looking only at sick patients will lead to misleading inferences about causes. The point of social scientific rigor is to reduce the biases that human intuition and post-hoc reasoning inject into our understanding of the world. This finding is the least problematic for the existing literature, as the quantitative studies include observations without conflict for the same countries and regions. Regional foci may have scope conditions, but they aren’t truly sampling on the dependent variable.
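The epidemiologist’s point can be illustrated with a minimal simulation (the risk numbers are assumed for illustration only): in the full sample, comparing conflict rates across exposed and unexposed units recovers the true effect, but once we keep only the conflict cases, the outcome no longer varies, and the comparison group that identifies the effect is gone.

```python
import random

random.seed(42)

N = 100_000
# Assumed toy model: baseline conflict risk of 5%, plus 10 percentage
# points when a unit is exposed to climatic stress.
data = []
for _ in range(N):
    exposed = random.random() < 0.5
    conflict = random.random() < (0.05 + 0.10 * exposed)
    data.append((exposed, conflict))

def risk(rows, exposed_flag):
    subset = [c for e, c in rows if e == exposed_flag]
    return sum(subset) / len(subset)

# Full sample: the risk difference recovers the true effect (~0.10).
full_effect = risk(data, True) - risk(data, False)

# Sampling on the dependent variable: keep only conflict cases.
conflict_only = [(e, c) for e, c in data if c]
# The outcome is now constant (always 1), so any "effect" computed
# within this sample is zero by construction.
selected_effect = risk(conflict_only, True) - risk(conflict_only, False)

print(round(full_effect, 3))  # close to 0.10
print(selected_effect)        # exactly 0.0
```

This is why quantitative designs that retain the no-conflict observations, as the existing cross-national studies do, are less vulnerable to this particular critique.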
Overall, I find the authors’ evidence pretty compelling. Where I differ is in my approach to what it all means. To me, the biggest issue with the article is the insinuation – more evident in the media coverage than in the manuscript itself – that this is all part of some conspiracy to stigmatize and recolonize the Global South. For example, author Tobias Ide told the Atlantic, “I am not saying that everyone who focuses on climate change in Syria or Kenya is automatically promoting colonial behavior, but if there’s a connective frame it might well facilitate this kind of thinking.” Like most conspiracy theories, this claim is much more elaborate and logical than reality can support.
First, the idea that scholars studying climate change and conflict are consciously forwarding the agenda of global capital and Western hegemony is, on its face, pretty far-fetched. If anything, I’d argue the people most invested in this literature are more likely to be deeply committed to reducing the vulnerability of marginalized populations to the vicissitudes of nature than acting as agents of global capital.
Second, my own research into the streetlight effect suggests it emerges for incredibly prosaic reasons. Humans are creatures of habit and convenience. It’s not that researchers have made systematic errors, a question Hsiang raises. Rather, it’s that at the margin, it’s easier to work in places where we can speak the language, reasonably expect to do our work free from government interference or repression, and not have ourselves or our studies imperiled by political instability. And given that English is the lingua franca of scientific discourse, this tends to mean more politically open former British colonies.
These are very human desires, and they’re completely understandable without resort to a larger, more politically charged narrative. These permissive factors are, of course, tied to colonialism and histories of oppression and exploitation. But this doesn’t diminish the reality facing current researchers when they decide where to locate their analyses. One of the more powerful insights about the streetlight effect is that it imposes limits on our knowledge and the validity of entire areas of inquiry—even if the individual studies within that area of inquiry are themselves iron-clad and the researchers are operating in the best possible faith.
In sum, I think the authors have done us all a service by highlighting the nature of potential biases in climate-conflict research. We learned something new, and future assessments of the security implications of climate change will have to wrestle with this finding. Does it undermine the evidentiary basis for links between climate change and conflict? No. But it does suggest that looking where the light is may lead us to misdiagnose the conditions under which climate change may contribute to conflict—and that is not just a problem for researchers, but for the people who are most vulnerable to both climate change and conflict.
This post originally appeared on The New Security Beat, the blog of the Wilson Center’s Environmental Change and Security Program.