Predicting Government Violence to Improve Theory and Practice

A man kneels before an Orthodox priest during the Euromaidan protests in Ukraine. By Jim Forest.

Guest post by Daniel W. Hill Jr. and Zachary M. Jones

Why do some governments generally respect the fundamental human rights of their citizens while others frequently abuse them? Why do some governments arbitrarily imprison, torture, and murder their citizens while others do not? A rather large body of research in political science examines this question, usually by drawing on information from annual human rights reports issued by Amnesty International and the US State Department. Political scientists (and their more enterprising students) turn these reports into data that can be analyzed using statistical methods. These data contain information about the use of political imprisonment, torture, kidnapping (“disappearance”), and extralegal execution. Typically, scholars look for evidence of a correlation[1] between human rights abuses and various factors they believe explain such abuse, such as features of a country’s government (e.g., democratic institutions), its economy (level of development), or international political factors like openness to trade and investment, international legal commitments, or criticism from human rights advocacy groups.

In an article recently published in the American Political Science Review, we approach this question in a different way. Instead of asking whether and how various things are correlated with human rights abuse, we decided to see to what degree these things improve our ability to predict how abusive different governments are, which we believed would be more interesting to academics and other audiences and more useful to policymakers. We did this in two ways. First, we calculated the predictive accuracy of a statistical (regression) model that accounted only for a country’s level of development (as measured by GDP/capita) and population size, and compared this to models that also accounted for the various domestic and international factors that scholars think affect human rights abuses. The idea is that if these factors really matter, then accounting for them should improve the model’s ability to predict which governments are more abusive than others. Second, we used random forests, a machine learning method that allows us to make fewer assumptions about the nature of the relationship between state repression and the various factors we were examining.[2] For this analysis we assessed predictive power by calculating the decrease in prediction accuracy caused by permuting (i.e., randomly shuffling) each predictor, one at a time. The idea here is the same: if a permuted predictor is important for predicting state repression, then shuffling it will reduce the model’s ability to accurately predict levels of repression.
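As a rough illustration of these two exercises, here is a minimal sketch in Python using scikit-learn. It is not the replication code from the article: the data are simulated and the variable names (gdp_pc, judicial_independence, and so on) are stand-ins chosen for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

# Invented country-year data: y is a repression score and the columns are
# stand-ins for predictors discussed in the text (not the authors' data).
n = 2000
cols = ["gdp_pc", "population", "judicial_independence", "civil_war", "youth_bulge"]
X = rng.normal(size=(n, len(cols)))
y = 0.5 * X[:, 2] + 1.0 * X[:, 3] + rng.normal(size=n)  # illustrative relationship only

# (1) Baseline comparison: does adding a predictor to a model with only
# development and population improve cross-validated accuracy (here R^2)?
baseline = cross_val_score(LinearRegression(), X[:, :2], y, cv=5).mean()
augmented = cross_val_score(LinearRegression(), X[:, :3], y, cv=5).mean()
print(f"R^2 baseline: {baseline:.3f}   + judicial independence: {augmented:.3f}")

# (2) Permutation importance: fit a random forest, then shuffle each predictor
# in held-out data and record how much predictive accuracy drops.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(forest, X_te, y_te, n_repeats=20, random_state=0)
for name, drop in sorted(zip(cols, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>22}: accuracy drop when permuted = {drop:.3f}")
```

Predictors whose permutation causes a large accuracy drop are the ones the model relies on; predictors whose permutation changes nothing add little predictive value.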

In many cases the things that scholars view as important causes of human rights abuse did not improve the model’s accuracy, but some things predicted abuse very well. Democracy and civil war strongly predict the kind of egregious, violent abuses we examined, which is consistent with previous research. However, as we explain in the paper, these findings are not particularly useful for theory or policy. First, democracies, by definition, cannot engage in too much politically-motivated violence (such as imprisoning or killing large numbers of political opponents), or they would not be considered democracies for very long. Second, scholars classify conflicts as “civil wars” once they have resulted in a certain number of casualties. Since civilians likely make up a large share of these casualties, and the human rights abuses being examined are precisely those that involve the violent targeting of civilians, civil war also bears some relationship to these abuses by definition.

More interesting, we think, are other features of politics and society which predict abuse well, including judicial independence, constitutional provisions for a fair trial, a common law legal system, government revenue generated from natural resources, and demographic “youth bulges.” These predictors have only recently received attention in the literature, and we think that developing more fine-grained theories and collecting new data to determine exactly how each of these is related to government violence would be very useful. We also find that trade openness and the presence of INGOs predict abuse well, though in general features of domestic politics add more predictive power than international political factors. Additionally, disappearances and extralegal executions occur less frequently than political imprisonment and torture, and so are harder to predict.

Of course, after our analysis was complete, it occurred to us that we had omitted several factors that scholars have found to be correlated with rights abuse, for example economic sanctions and a domestic economy based on contractual exchange. We reanalyzed the data with these and several other omitted factors included, and the results of the new analysis can be found here. To preview the new results, measures of economic sanctions, contract-intensive economy, and an alternative measure of judicial independence perform very well, and in some cases outperform all of the variables considered in the published manuscript. The strong performance of economic sanctions is particularly notable because it is an international political factor that outperforms many domestic factors. Since sanctions involve an explicit policy choice, they can be changed much more easily than many of the other things considered in the analysis, and this result provides a straightforward policy implication.

The primary purpose of our exercise was to see which of the numerous theoretical explanations for government violence are strongly supported by the evidence available to us. The answer to this question can inform future work on the topic. For example, scholars of economics, comparative politics, and law have written much about how strong court systems and constitutions can prevent various kinds of government abuse, and our analysis indicates that research on political violence can benefit from these insights. Also, we think that the study of political violence generally will benefit from considering the predictive accuracy of the theories we create, which provides direct evidence about the usefulness of our theories. Examining predictive accuracy offers a better way to evaluate theoretical explanations than looking for a statistically significant correlation. The substantive interpretation of predictive accuracy is also far more intuitive than that for statistical significance, which will be readily appreciated by anyone who has been put in the unfortunate position of having to explain the latter concept in a social science methods course.

But besides speaking to academics, we also wanted to help generate knowledge that will be useful for policymakers. There is increasing interest in using statistical models to forecast political violence and instability, and some of this work receives considerable funding from the U.S. government. Though our analysis cannot properly be called forecasting, it does evaluate predictive accuracy and so represents a step in that general direction. Given recent public discussions of the (ir)relevance of political science research to the policy world, we urge other researchers to take the additional step of assessing their predictions.[3] The ability to accurately predict political violence will make the policy relevance of our research obvious to those who are still skeptical of the practical value of political science.
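On the earlier point about statistical significance versus predictive accuracy, a short, purely illustrative sketch (again not taken from the article, with invented data and a hypothetical “weak predictor”) shows how a variable can clear the significance bar while adding almost nothing to out-of-sample accuracy:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
gdp_pc = rng.normal(size=n)
weak_predictor = rng.normal(size=n)            # hypothetical variable of interest
y = gdp_pc + 0.05 * weak_predictor + rng.normal(size=n)

# In-sample: with enough observations, even a substantively tiny effect
# tends to be "statistically significant."
ols = sm.OLS(y, sm.add_constant(np.column_stack([gdp_pc, weak_predictor]))).fit()
print("p-value for weak predictor:", ols.pvalues[2])

# Out-of-sample: cross-validated accuracy barely moves when it is added.
base = cross_val_score(LinearRegression(), gdp_pc.reshape(-1, 1), y, cv=5).mean()
full = cross_val_score(LinearRegression(),
                       np.column_stack([gdp_pc, weak_predictor]), y, cv=5).mean()
print(f"cross-validated R^2 without: {base:.4f}   with: {full:.4f}")
```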

[1] Controlling, of course, for potentially confounding variables.

[2] For example, this method can automatically detect nonlinear relationships as well as interactions among the different variables we were interested in.

[3] The statistical analyses that scholars of political violence conduct produce predictions, whether they are evaluated or not.

Daniel W. Hill Jr. is an Assistant Professor in the Department of International Affairs at the University of Georgia. Zachary M. Jones is a Ph.D. student in Political Science at Pennsylvania State University.
