Data Power Conference



Panel Session 4d): Data, Security, Citizenship, Borders (Chair: Clare Birchall)

 

Big Data, Big Borders

Btihaj Ajana, King's College London

The paper is concerned with the ways in which the adoption of big data analytics in border management is increasingly augmenting the function and intensity of borders. Recently, there has been growing interest in Big Data Science and its potential to enhance the means by which vast data can be collected and analysed to enable more advanced decision-making processes vis-à-vis borders and immigration management. In Australia, for instance, the Department of Immigration and Citizenship has recently developed the Border Risk Identification System (BRIS), which relies on big data tools to construct patterns and correlations for improving border management and targeting so-called ‘risky travellers’ (Big Data Strategy, 2013). In Europe, programmes such as EUROSUR and Frontex are examples of big data surveillance currently used to predict and monitor movements across EU borders. In this paper, I argue that with big data come ‘big borders’, through which the scope of control and monopoly over the freedom of movement can be intensified in ways that are bound to reinforce ‘the advantages of some and the disadvantages of others’ (Bigo) and contribute to the enduring inequality underpinning international circulation. Drawing on specific examples, I explore some of the ethical issues pertaining to the use of big data for border management. These issues revolve mainly around three key elements: the problem of categorisation, the projective and predictive nature of big data techniques and their approach to the future, and the implications of big data for understandings and practices of identity.

 

The Datafication of Security: Reasoning, Politics, Critique

Claudia Aradau and Tobias Blanke, King's College London

From ‘connecting the dots’ and finding ‘the needle in the haystack’ to data mining for counterinsurgency, security professionals have increasingly adopted the language and methods of computing. Digital technologies appear to offer answers to a wide array of problems of (in)security. Why have digital technologies been taken up so quickly by security professionals, and why has digital knowledge circulated so rapidly across sites, scales and spaces? While answers to these questions have focused on the role that military and security economies have played in fostering computing devices and data infrastructures, this paper explores the datafication of security as a ‘style of reasoning’ (Hacking 2002) that appeared to offer answers to problematizations of (in)security. For Hacking, a style of reasoning ‘introduces new objects, and new criteria for the truth or falsehood of statements about those objects’ (2012). As the datafication of security relies on new methods of quantification and data mining different from traditional sampling, it implies new forms of evidence, enabling technologies and methods of verification. If styles establish their own criteria of truthfulness, then a critique of the political effects of datafication cannot simply oppose one style of reasoning to another. Rather, by drawing out the criteria of truthfulness that datafication establishes, we aim to formulate a critique that addresses the very technologies of self-authentication, proof and demonstration. To this purpose, the paper draws on declassified documents and legal cases in the wake of the Snowden revelations and juxtaposes them with existing debates in computer science.

 

Jus Algoritmi: How the NSA Remade Citizenship

John Cheney-Lippold, University of Michigan

The classified National Security Agency documents released by Edward Snowden in 2013 detail a trove of controversial surveillance practices directed at both domestic and foreign populations. These forms of surveillance, decried by many as illegal under U.S. laws pertaining to privacy and protections against government intrusion, became the centerpiece of an ongoing international debate over the rights of the state versus the rights of the citizen. But what exactly is a citizen in a digital world? Who exactly can be guaranteed the privileges of citizenship when surveillance is ubiquitous, transnational, and connected to an IP address rather than an individual person?

This is precisely the problem that the NSA encountered when trying to fit its ubiquitous surveillance within the legal foundations of the U.S. Constitution. The NSA's response was to create a citizenship algorithm, using several different variables (or "selectors") to determine whether a target was a "citizen" or a "foreigner". A target with a foreignness value of 51% would have a citizenship value of 49%, enabling the state to surveil his or her communications. If, one week later, the same target had a citizenship value of 51% and a foreignness value of 49%, he or she would be afforded the right to privacy.
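
A minimal sketch in Python of the thresholding logic described above, assuming hypothetical selector names, weights, and scores (the NSA's actual selectors and weightings are classified and are not reproduced here):

# Hypothetical illustration only: selector names, weights, and scores are assumptions.
def foreignness_score(selectors, weights):
    """Combine per-selector foreignness estimates (0.0-1.0) into a weighted average."""
    total = sum(weights[name] for name in selectors)
    return sum(selectors[name] * weights[name] for name in selectors) / total

def may_surveil(selectors, weights):
    """Foreignness above 50% marks the target a 'foreigner', who may be surveilled;
    at or below 50% the target counts as a 'citizen' and is afforded privacy."""
    return foreignness_score(selectors, weights) > 0.5

# Example: a target whose selectors mostly point abroad this week.
selectors = {"ip_geolocation": 0.7, "language_of_communication": 0.4, "contacts_abroad": 0.6}
weights = {"ip_geolocation": 1.0, "language_of_communication": 1.0, "contacts_abroad": 1.0}
print(may_surveil(selectors, weights))  # True: foreignness ~57%, citizenship ~43%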

My paper will argue that the NSA's interpretation of citizenship as a statistical process marks a radical shift away from the historical citizen/foreigner dichotomy. It will also expound on the consequences of this algorithmic mode of identity production.

 

What Do Data Accomplish for Civil Society Organisations? The Case of Migration and Social Welfare in the UK

Will Allen, University of Oxford

The increasing availability of datasets to members of the public is opening new possibilities for civil society operations (Ross 2013), where civil society is conceived as lying outside the public and private business sectors (Bastow, Dunleavy, and Tinkler 2014). This promises to transform not only what civil society organisations know about their own issue areas and sectors, but also how they develop longer-term strategies. Yet in the case of UK organisations working on migration and social welfare issues—two topics on which a great deal of data is generated and made available—this process encounters challenges in practice. Ongoing interviews with civil society organisations working in these fields are revealing that target audiences, available skills, and the demands of external media or funding environments shape perceptions and uses of data in ways which at first glance seem to fall short of the transformation promised. But what if this is not the full picture? Even in organisations’ published materials and senior officials’ talk, the term ‘data’ overlaps with, or is used less often than, the terms ‘evidence’ and ‘evidence-based research’, concepts popularised in the UK by New Labour under the auspices of generating policy that was more ‘scientific’. If ‘data’ and ‘evidence’ are perceived as roughly interchangeable by organisations, this opens critical questions about the power of civil society to influence and depoliticise public debate using (re)presentations of data as ‘neutral’ evidence. It also warrants asking whether the hype, availability, and promise of varied datasets actually meet the objectives of these civil society organisations.