Data Power Conference


Panel Session 6a): Data Mining/Extraction (Chair: Bernhard Rieder)

 

Platform Specificity and the Politics of Location Data Extraction

Carlos Barreneche, Universidad Javeriana

The rise of smart phone use, and its convergence with mapping infrastructures and large search and social media corporations, has led to a commensurate rise in the importance of location. While locations are still defined by fixed long/lat coordinates, they now increasingly ‘acquire dynamic meaning as a consequence of the constantly changing location-based information that is attached to them’ (de Souza e Silva and Frith, 2012: 9), becoming ‘a near universal search string for the world’s data’ (Gordon and de Souza e Silva, 2011). As the richness of this geocoded information increases, so too does its commercial value.

This article examines the growing commercial significance of location data. Informed by recent calls for ‘medium-specific analysis’, we build on earlier work (Barreneche, 2012a; Wilken, 2013) to argue that each major social media corporation (Twitter, Facebook, Google, and Foursquare) actively extracts location data for commercial advantage in ways that are subtly yet significantly different from one another, and that these differences warrant close attention. Without due and careful attention to the specifics of data extraction strategies, political and cultural economic analyses of new media services risk eliding key differences between new media platforms and their respective software systems, patterns of consumer use, and revenue models.

In response, we develop a comparative analysis of two platforms – Google and Foursquare – and examine how each extracts and uses geocoded user data. In building this analysis, our aim is to construct, in Gerlitz and Helmond’s (2013: 2) words, ‘a platform critique that is sensitive to [Google’s and Foursquare’s] technical infrastructure whilst giving attention to the social and economic implications’ of both platforms. We are also aware that any examination of the location data extraction strategies of these two companies must also pay attention to the ‘specificity and performative efficacy of different relations and different relational configurations’ (Anderson & Harrison, 2010: 16), including the cross-platform partnerships between them and other corporations (such as between Google and Yelp, for example, and between Foursquare and the Facebook-owned Instagram and the Google-owned Vine and Waze).

From this comparative exploration of platform specificity, we aim to draw conclusions concerning marketing (economic) surveillance.

 

Incompatible Perceptions of Privacy: Implications for Data Protection Regulation

Jockum Hilden, University of Helsinki

New technologies have always challenged not only existing regulation but also existing social norms of privacy, on which future laws are based (Tene & Polonetsky, 2013). Data that used to be known only to data subjects are now stored in the databases of private companies and public authorities. This raises several legal, political and ethical questions: Is the computerised mining of keywords on an instant messaging app comparable to an actual person reading a private conversation? What is consent online? What data may be sold to third parties? These questions are hard to answer, since social networks, fitness apps and smart smoke alarms lack historical equivalents: the data they provide are significantly richer than what has previously been available (Ohm, 2010: 1725).

The European Union is presently trying to address online privacy challenges with a new General Data Protection Regulation (EC, 2012), which is yet to enter into force. The Regulation is undoubtedly a compromise between several conflicting views of privacy, but it is still unclear to what extent these different perceptions of privacy have influenced the Commission’s proposal.

This paper will explore how different interest groups reacted to the European Commission’s communication on data protection (EC, 2010), which provided the roadmap for the proposal for a General Data Protection Regulation. The empirical data is composed of 288 submissions to the Commission’s public consultation on the topic (EC, 2011). A sample of submissions that are representative of the interest groups will be chosen for closer analysis. The results will provide a clearer picture of the privacy perceptions of different interest groups and their influence on the final proposal for a regulation, which is an aspect often ignored in politics research (Klüver, 2013: 203).

European Commission (2010): Communication from the Commission to the European Parliament, The Council, the Economic and Social Committee and the Committee of the Regions: A comprehensive approach on personal data protection in the European Union (COM(2010)609). Available at http://ec.europa.eu/justice/news/consulting_public/0006/com_2010_609_en.pdf.

European Commission (2011): Consultation on the Commission's comprehensive approach to personal data protection in the European Union. Responses available at http://ec.europa.eu/justice/newsroom/data-protection/opinion/101104_en.htm.

European Commission (2012): Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), COM 2012/011. Available at http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf.

Klüver, Heike (2013): Lobbying in the European Union: Interest Groups, Lobbying Coalitions and Policy Change. Oxford: Oxford Scholarship Online. DOI: 10.1093/acprof:oso/9780199657445.001.0001.

Ohm, Paul (2010): Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA Law Review vol. 57(6): 1701-1777.

Tene, Omer and Polonetsky, Jules (2013): A Theory of Creepy: Technology, Privacy and Shifting Social Norms [September 16, 2013]. Yale Journal of Law & Technology. SSRN: http://ssrn.com/abstract=2326830.

 

Data-Mining Research and the Accelerated Disintegration of Dutch Society

Ingrid Hoofd, Utrecht University

The use of data-mining by social researchers, in which computers are called upon to handle exceptionally large data sets, has become widespread. Big data in particular, with its promise of in-depth comprehension, appears to be the new motto in cutting-edge social research. As this conference’s call for papers also attests, claims abound that big data allows us to access people’s opinions, feelings, and behaviours with ever more speed, accuracy, and efficacy. While such optimism is to some extent productive, this paper suggests we should be exceedingly apprehensive of these discourses around digital tools. This is not simply because these tools obviously play an important role in managing and sorting populations – a goal that many social scientists unwittingly serve – but especially because the ‘knowledge’ gained from data-mining coincides with a near-perfect obscuring of the central oppressive politics of technocratic capitalism, which the paper calls ‘speed-elitism’. Speed-elitism is the sublimation of ideals of social progress – to which governments and the social sciences subscribe – into the contemporary tools of acceleration. By analysing the data-claims made by social scientists around the 2012 Haren Riots in the Netherlands, the paper contends that proper social representation has given way to algorithmic functionality. It argues that this slippage is possible because acceleration and the hope for a better society have always been conjoined twins in the Western philosophical tradition. This means that the Haren Riots researchers, despite – or rather because of – their data-mining efforts, dissimulated the foundational violence of technocratic capitalism in Dutch society.

 

Erasing Discrimination in Data Mining, Who Would Object? - Is a Paradigmatic Shift from Data Protection Principles Necessary to Tackle Discrimination in Data Mining?

Laurens Naudts and Jef Ausloos, University of Leuven (ICRI/CIR - iMinds)

Data mining – a crucial step in knowledge discovery in databases – is gradually becoming a critical element in decision-making processes. Though presenting many benefits to capitalise on ever-growing data sets, data mining may also result in discrimination.

Up until now, however, the regulation of data mining has primarily been approached from a ‘data protection’ point of view, without considering anti-discrimination rules. Although the raison d’être of these two regulatory regimes fundamentally differs, the protection they offer could be considered complementary.

This article will determine whether, from a legal perspective, data protection principles can counteract the potential discriminatory effects of data mining. To do so, it will start by articulating the normative goals underlying anti-discrimination rules and the challenges that data mining presents to them. This also includes identifying the underlying normative/legal basis for data mining’s benefits, so that a balancing exercise can be drawn between the different fundamental rights and interests at stake. Subsequently, the article will investigate how data protection law can be used in this balancing exercise. More specifically, it will evaluate how the rights to erasure and to object might counter discrimination while not (disproportionately) thwarting the potential benefits of data mining.

In conclusion, the article essentially looks at the legal challenges data mining poses from an anti-discrimination perspective. It does recognise, however, the accessory role data protection law can play in neutralising these challenges.