Data Power Conference



Panel Session 5c: Algorithmic Power (Chair: Heather Ford)


Profiling as Data Power: Addressing Algorithmic Knowledge

Jake Goldenfein and Andrew Kenyon, University of Melbourne

This paper investigates profiling as an exercise of data power. Specifically, it explores the significance of exercising power over individuals on the basis of purely non-representational knowledge of those individuals. Profiling ‘knowledges’ are produced with minimal direct contact, relying instead on the aggregation, concatenation, mining and washing of data generated as a by-product of individuals’ navigation through digital space. The information used to generate profiles is thus abstracted from the information subject, and can be interpreted only through convention (algorithms) rather than through any natural or objective relation. Are existing or proposed data protection laws sufficient to control this articulation of data power?

This paper considers possible legal regimes that have been suggested to regulate profiling as an exercise of data power. Is access to the information held by data controllers and processors sufficient? The draft General Data Protection Regulation presently being negotiated in the European Parliament may impose certain limitations on the types of information that can be used by commercial and government entities to generate profiles. However, limitations on profiling that simply exclude certain types of information can be expected to have limited utility. Regulation needs to focus on profiling as a method of knowledge generation (De Hert, Hildebrandt, Gutwirth), rather than excluding particular types of ‘sensitive’ information (sexual orientation, religion, politics, etc.) from profiles.


From Words to Numbers: Redefining the Public

Misha Kavka, University of Auckland

Twenty-five years ago, Habermas’s The Structural Transformation of the Public Sphere was translated into English and the phrase ‘public sphere’ entered academic discourse. The defining image of the public sphere, as imagined by Habermas, was the 18th-century coffee house, where talk was rampant and democracy was grounded in debate and discursive deliberation. Despite criticisms of the exclusionary nature of Habermas’s normatizing concept, the word-oriented public sphere has had a tremendous impact on the way we think about sociopolitical interaction, and it continues to operate as a theoretical touchstone for considerations of online democracy, social media collectivities, citizen journalism, etc. The problem, however, is that in the era of big data and quantified subjectivity the site of meaning production is shifting from words to numbers. This paper will argue that, in the rapid turn to data, the public sphere has undergone a structural transformation toward the public-as-aggregate. If big data teaches us anything, it is that numbers are not self-explanatory but rather require interpretation through processes of aggregation. Populations, activities and even subjects as data-fields are mined for quantitative information that can be redistributed as massive multiplicities of meaning. While the public sphere retains visibility, the aggregate has become the effective site of knowledge and power production, at the expense of the individual as a discursive site of agency. This paper will seek to map the new relation between the public and subjectivity by asking: if words are now passé, where do we look for the agential remainder of the subject within the public-as-aggregate?


Deep Sight: The Rise of Algorithmic Visuality in the Age of Big Data

Jonathan Roberge, Institut National de la Recherche Scientifique, and Thomas Crosbie, University of Maryland, College Park

The rapid advance and broad adoption of computer vision algorithms across new media technologies has immense consequences for the experience of everyday life. At their simplest, computer vision algorithms are step-by-step computational procedures, embedded in software code, intended to render data meaningful to a user’s sight (see Uricchio, 2011). Much of our visual culture was shaped in the age of monitors rendering data as text blocks on a two-dimensional surface. Today, however, we have entered a new regime of algorithmic visuality, in which the dissemination and processing of increasingly automated, mobile and accurate images is powerfully supplemented by artificial intelligence and machine learning capacities. Prominent actors in the technology sector, including Google, Facebook and Amazon, are now shifting their corporate strategies to focus on bringing algorithmic visuality into mainstream consumer culture, heralding a far more immersive and ubiquitous regime of the “internet of things” and wearable computing (Featherstone, 2009; Turck, 2014). Technologies such as augmented eye-wear and drones mounted with (and guided by) 360º high-definition cameras are now being placed in dialogue with one another, creating rich, multi-tiered data streams: a ‘deep sight’ that situates actors in dynamic, meaning-laden environments. The outcome is enormously powerful data, crucially linking street-level, virtual and aerial perspectives. Yet the spread of algorithmic visuality remains an understudied sociological phenomenon, with industry understanding far outstripping social scientific inquiry, and with almost no research to date on its cultural, economic and political consequences. Our presentation introduces the theoretical framework and findings of a research project focusing on the adoption of algorithmic visuality in Canada.

Featherstone, Mike (2009). Ubiquitous media: An introduction. Theory, Culture & Society, 26(2-3): 1-22.

Turck, Matt (2014). The internet of things is reaching escape velocity. TechCrunch, 2 Dec. Accessed from: http://techcrunch.com/2014/12/02/the-internet-of-things-is-reaching-escape-velocity/

Uricchio, William (2011). The algorithmic turn: Photosynth, augmented reality and the changing implications of the image. Visual Studies, 26(1): 25-35.


Self-quantification and the Dividuation of Life: A Deleuzian Approach

Vassilis Charitsis, Karlstad University

Self-tracking and self-quantification constitute an emerging popular phenomenon that aims to promote “self-knowledge through numbers”, or, in other words, through data. Numerous tools and devices have been developed that allow users to track and quantify every aspect of their lives. In doing so, users generate huge amounts of data that firms can draw upon to develop their market offerings, while individuals are digitized and transformed into what Deleuze (1992) calls “dividuals” within vast banks of information systems (Martinez, 2011). Zwick and Denegri-Knott (2009) assert that the notion of the dividual is premised on the accumulation of dispersed consumer information and its conversion and reorganization according to specific codes; through that process, dividuation becomes an expression of capitalist accumulation that aims at breaking down life into pieces of information. According to Deleuze, this is achieved not through traditional disciplinary institutions but through mobile forms of surveillance that can monitor, measure, intervene upon and control “dividuals” in real space and time (Gane, 2012). The data accumulated through these mobile forms of surveillance treat human subjects not as agentic individuals within a population but as samples from which patterns of consumer behaviour can emerge (Palmås, 2011), i.e. as “dividuals” on which marketing strategies can be based and to which they can be directed. In that sense, what has been described as the surveillance economy (Andrejevic, 2009) relies heavily on the dividuation of consumers (Cluley and Brown, 2014). Following this analysis, this paper argues that the Deleuzian notion of the dividual has found its ultimate commercial application in the self-quantification movement, which enables and promotes the dividuation of users’ entire lives.