Political Violence and Humanitarian Technology

Guest post by Katja Lindskov Jacobsen

Refugees wait in line for help from UNHCR staff in South Sudan. By DFID.

Can humanitarian uses of new digital technologies always be expected to have benevolent consequences? In a recent Security Dialogue article, I investigate this question. Biometric technologies are not only used as post-9/11 counter-terror measures (in airports and by intelligence officers and military staff in Iraq and elsewhere) but are also increasingly deployed in a range of humanitarian settings. According to a central humanitarian actor, the United Nations High Commissioner for Refugees (UNHCR), biometric technologies such as fingerprinting and iris scanning can, for example, help make refugee registration faster and more accurate. Such representations of the technology have not gone unchallenged, however. Critical voices have called attention to various potential downsides of humanitarian uses of biometric technologies. Indeed, it has been argued that this humanitarian faith in the ability of new biometric technologies to ‘assist’ humanitarians in their protection practices rests on shaky assumptions and unwarranted expectations.

Buttressing these critical voices, a closer look at how humanitarian actors have deployed specific biometric technologies reveals a number of dynamics that complicate the claim that such practices deliver better and more efficient protection. These dynamics become easier to discern if we zoom in on a specific case in which UNHCR has used biometric registration technology. UNHCR’s very first application of biometric iris recognition came in 2002, when the technology was introduced as a mandatory part of a repatriation programme in the Afghan–Pakistan borderland. Every Afghan refugee who wished to return to Afghanistan from camps in Pakistan had to undergo iris recognition at designated Verification Centres in order to access the one-time assistance package UNHCR offered as part of this repatriation endeavour.

Experimenting on Refugees?

The politics of this technology application unfolds at three levels. First, this application of a new biometric technology in a certain sense resembled an experiment. Noting that this was the very first time the sensitive technology had been used under such harsh and challenging conditions, where it would be exposed to heat and dust, UNHCR itself described the deployment as ‘experimental iris testing’ (UNHCR, 2003). But deploying the technology under such conditions did not only entail a risk of technical ‘failure’ (if the technology could not perform as expected) – it also entailed a risk that technology failures could translate into humanitarian failures. If the technology produces a false match (between the live iris image of a refugee seeking assistance and the stored iris template of a refugee who has already received assistance), a refugee could be denied humanitarian assistance because he or she was mistaken for someone who had already been assisted. This aspect of contemporary humanitarian technology use is arguably best understood in light of a long history of instances in which humanitarian subjects have been considered fit for experimentation and exposed to risks that it would not be considered appropriate to impose on other, ‘more valuable’ bodies.
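To see how a false match can arise, consider a minimal, purely illustrative sketch – not UNHCR’s actual system, and with made-up codes and a made-up threshold. Iris recognition systems in the Daugman tradition encode each iris as a bit string (an ‘iris code’) and declare a match when the fraction of differing bits (the Hamming distance) falls below a decision threshold. Two different people whose codes happen to fall under that threshold are wrongly treated as one person:

```python
# Illustrative, simplified iris matching: real iris codes run to
# thousands of bits; the 16-bit strings and the 0.32 threshold here
# are hypothetical values chosen only to show the mechanism.

def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of bits that differ between two equal-length bit strings."""
    assert len(code_a) == len(code_b)
    diffs = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diffs / len(code_a)

def is_match(live_code: str, stored_code: str, threshold: float = 0.32) -> bool:
    # Below the threshold, the system treats the two irises as the same person.
    return hamming_distance(live_code, stored_code) < threshold

# Toy codes from two *different* people that nonetheless sit close together:
stored = "1011001110100101"  # template of a refugee already assisted
live   = "1011001010110101"  # live scan of a different refugee (2 of 16 bits differ)

print(hamming_distance(live, stored))  # 0.125 - under the threshold
print(is_match(live, stored))          # True: a false match; the second
                                       # refugee appears to be a duplicate
```

On such a false match, the system would flag the second refugee as having already collected assistance – exactly the scenario described above, in which a technical failure becomes a humanitarian one.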

Digitized Refugee Bodies

Second, dynamics crucial to the question of how technology-endowed humanitarian practices can offer improved protection to refugees also unfolded at another level. Biometric registration means that, once registered, a biometric template of the refugee’s iris is produced. What emerged in the context of this technology use was thus a newly digitized refugee body. And crucially, this digital refugee body is open to new forms of intervention which may imply additional insecurity for the refugee concerned – unless UNHCR, in this case, is able to protect the refugee from such interventions.

Data Sharing?

Third, in the case of UNHCR, neither refugees nor others interested in the use of biometrics in refugee contexts can find clear answers to the politically sensitive question of which actors (host states, donors, etc.) UNHCR may share this biometric refugee data with, and in what form (aggregate or individual biometric data). When UNHCR recently decided to renew its biometric system, it issued a Request for Proposals (2013) and, as uncertainties remained, agreed to answer various questions from bidders. Yet to a question on data sharing (“Are there plans to exchange biometric data with their partners [NGO, Governments, etc.]?”), UNHCR simply responded: “Biometrics will be used at UNHCR’s discretion. Whether or not UNHCR exchanges data with partners, is not relevant” (UNHCR 2013).

As such, the extent to which humanitarian uses of biometric technology – and possibly digital technologies more broadly – offer an alternative to political violence, or rather an extension of the domain of sovereign power (insofar as digital refugee bodies have become a terrain of intervention), is a crucial question that deserves more attention in current debates about humanitarian technologies. Those debates, however, are characterized by an almost unchallenged optimism rather than by critical reflection, debate and action.

Finally, such calls for more critical reflection on the potential for negative effects of various humanitarian practices – including technology uses – also call into question a more fundamental assumption that humanitarians often make: that humanitarian practices are benign and benevolent acts that simply offer protection to people for whom no other protection is available. If, for example, a state is unable to guarantee safe living conditions for people within its territory, people may flee, and humanitarians often step in to help them. Yet the risk that uncritical humanitarian uses of new technology may unintentionally give rise to new forms of insecurity is but one illustration of the continued relevance of a longstanding questioning of this assumption – more specifically, of questions about the extent to which humanitarianism necessarily represents an alternative to the different kinds of political violence experienced by people whom the current landscape of sovereign states fails to protect.

Katja Lindskov Jacobsen is an Assistant Professor in the department of Risk & Disaster Management at Metropolitan University College in Copenhagen, Denmark.

5 comments
  1. I read last week about the “Internet of Things” and its potential for breaching the confidentiality that we love so well as well as its potential for savings in the US Health Budget. How do you feel about that?

  2. Astute observations. Unfortunately the urge to surveil and control is characteristic of all state organizations, including those supposedly carrying out humanitarian work, and they are unlikely to stop or restrain themselves. The tale of using expensive high technology to positively identify Afghan refugees, possibly to their severe detriment, in the interests of preventing some double-dipping, is absolutely classical.
