Thursday 3 December
Racialising Algorithms
#algorithm
#colonization
#algorithmicgovernance
#data
#ArtificialIntelligence
#surveillance
#automation
#deeplearning
11 am EST
8 am PST
4 pm GMT
(Zoom/YouTube)
Registration
Panel:
Karen Alderfer
(Duke University)
"Algorithmic Radicalization and the Prediction of Terrorism"
A common understanding in terrorism studies since the 1990s has been that, although terrorism was once based on the achievement of strategic goals, a new form of terrorism has emerged that is more amorphous in its structure and more indiscriminate in its violence. Following the 9/11 attacks in the United States and prompted by “homegrown” terrorist attacks in Madrid (2004) and London (2005), the US and the UK, in particular, began developing theories of radicalization. Crucially, as enumerated in the UK’s Prevent strategy and the New York Police Department’s report “Radicalization in the West: The Homegrown Threat” (2007), these theories opened the possibility of preemption by intervention within the process of radicalization. In the NYPD report, in particular, radicalization is broken down into four stages (pre-radicalization, self-identification, indoctrination, and jihadization), each of which is characterized by distinct behavioral and material signatures, making the terrorist a site of prediction and surveillance. Additionally, such an approach defined radicalization algorithmically, with a specified set of steps that produced a definite outcome: the terrorist attack. Yet with the growth of ISIS and the phenomenon of young people traveling to Syria as well as an increase in homegrown attacks in Europe and the US, certain scholars of terrorism studies have come to think of radicalization as less linear, more unpredictable, a process the “affective fabric” of which must be analyzed (McDonald 2018).
In this paper, I trace the understanding of the figure of the terrorist in theories of radicalization from the early 2000s to today in comparison with the changes in predictive analytics. Following Luciana Parisi’s recent work, I read new understandings of the terrorist as “incomputable,” refiguring the terrorist as central to computation. As Parisi has noted, discussions of algorithmic automation have defined such automation as based on breaking the continuous into the discrete, transforming it into a “step-by-step procedure,” an understanding that is mirrored in the NYPD’s elaboration of radicalization (Parisi 2015, 130). As opposed to this, Parisi writes that “…the increasing volume of incomputable data (or randomness) within online, distributive, and interactive computation is now revealing that infinite, patternless data are rather central to computational processing” (Parisi 2015, 131). Such a description increasingly comes to fit both the elaboration of radicalization in terrorism studies and the models suggested or implemented to predict it.
In order to more fully address these shifts, I examine a variety of recent attempts to automate the prediction of radicalization. These include Google’s Redirect Method, which redirected YouTube viewers suspected of sympathizing with ISIS, based on their viewing history, to specifically curated videos refuting ISIS’s claims, as well as the US Immigration and Customs Enforcement’s attempt (later dropped) in 2018 to contract out the development of a machine-learning technology that would predict the possibility of a visitor to the US committing a terrorist attack. While the Redirect Method positions radicalization as a process with distinct steps, which can be counteracted, the ICE technology would have attempted to predict even the possibility of future radicalization through the analysis of what seems to be “infinite, patternless data.” Finally, understanding the figure of the terrorist as incomputable allows a positioning of the shifting yet ongoing War on Terror as a process of computation, driving new forms of prediction.
Elisa Giardina Papa
(UC Berkeley)
"Cleaning Emotional Data"
While the image of the world—and with it every aspect of human life—is reborn as data, the dream of total capture and total control is mired in a dilemma: how to discern data from Data. No matter how big Big Data are, if they are unstructured, the algorithms designed to discern, correlate, predict, and preempt cannot but fail. In recent years, several scholars have justly emphasized how data are never raw nor neutral; they do not speak for themselves, but rather they echo the biases of their collectors. Yet the processes and economies through which data get cleansed often go unnoticed. AI and ML (Machine Learning) companies have discovered that the most efficient and cost-effective way to improve data quality is to offload this burden onto thousands of underpaid and precarious micro-task workers located largely in the Global South. From the Philippines to India, and from Venezuela to Kenya, “clickworkers” must incessantly label, categorize, annotate, and validate massive quantities of digital records in order for artificial intelligence to function. That is, the primary task of this offshored human infrastructure is to “cleanse” data from an incomputable excess.
In this paper, I will present a work in progress: an art project based on a three-month-long personal experience as a worker for several North American human-in-the-loop services that provide datasets to train AI algorithms to detect emotions. Among the performed tasks collected in the project are the taxonomization of human emotions, the annotation of facial expressions according to standardized affective categories, and the recording of my own image to animate three-dimensional figures. The work documents these microtasks while simultaneously tracing a history of emotions that problematizes the methods and psychological theories underpinning facial expression mapping.
A number of AI systems, which supposedly recognize and simulate human affect, base their algorithms on flawed understandings of emotions as universal, authentic, and transparent. Increasingly, tech companies and government agencies are leveraging this prescribed transparency to develop software that identifies, on the one hand, consumers’ moods and, on the other hand, potentially dangerous citizens who pose a threat to the state. The contemporary implications of this demand for emotional legibility can be traced back to 19th-century physiognomy and electrophysiology and the drive towards quantifying and stabilizing emotional authenticity. They recall a history of gender and racial stereotyping in which the ability to be in control of emotions and to “appropriately” display them has come to be seen as a characteristic of some bodies and not others.
Anna Engelhardt
(Independent Researcher)
"'Couriers Are Never Late': Racialised Algorithms of Russian Logistics"
I investigate the contemporary ecosystem of workplace algorithmic surveillance deployed by Yandex, the Russian Google-like IT monopoly. Yandex presents a unique case of an IT monopoly that is embedded in logistical networks through its multiple products, each part of a larger system: taxi, food delivery, maps, and algorithmic solutions for business logistics. Facing no competition from other monopolists in the field such as Uber, whose Russian branch Yandex bought in 2017, and far surpassing Google by all other measures, Yandex is able to build an enormous logistical network and the solutions for its surveillance. Yandex’s logistical infrastructures cannot be disentangled from algorithmic surveillance, a key technology for governing labor that operates extensively in contemporary logistical industries, as Rossiter has surveyed at length (Rossiter, 2016). The large-scale surveillance of the Russian authoritarian state tends to overshadow such workplace surveillance, leaving a gap in research on both its current operation and its genealogy (Asmolov and Kolozaridi, 2017).
Indeed, this network profits from the absence of a functional governmental policy on privacy protection, a policy the head of Social Data Hub, the Russian version of Cambridge Analytica, characterized as “luckily nonexistent” (Artur Khachuyan, 2018). These conditions enable total workplace surveillance, whose new forms of control create “poverty and stress” among employees “when there should be wealth and leisure” (Terranova, 2014). Such violence primarily targets racialized subjects, as the majority of Yandex couriers and taxi drivers come from (ex-)Russian colonies, resulting in exhaustion, starvation, deaths from overwork, and the impossibility of unionization. Gross miscalculations of the time workers need to cover a given distance make it impossible to rest or to comply with delivery requirements at all, resulting in heavy wage cuts. Advertisements for Yandex’s logistical solutions present these conditions as ensuring that couriers are never late.
My research examines how the data extracted from these workers, and the algorithms deployed for their analysis and prediction, both produce and are produced by racializing assemblages, following Dixon-Roman and his reading of Weheliye (Dixon-Roman, 2016; Weheliye, 2014).
Rather than assessing the precision of the algorithms in place, a strategy widely contested by Louise Amoore, I investigate how infrastructures of workplace algorithmic surveillance claim to acquire the truth and place the decision-making process beyond doubt (Amoore, 2018, 2019; Goriunova, 2019; Pasquinelli, 2017). I disentangle Russian colonialism, which operates through a complex entanglement with colonialism of Western origin (Tlostanova, 2011), by way of its logistical infrastructure (Engelhardt and Shestakova, 2019; Engelhardt, forthcoming). Russia’s infrastructure of colonial domination can be revealed by looking into the logistical networks of empire, which, according to Deborah Cowen, map the logic of contemporary imperialism in its spatial materialization (Cowen, 2014). Such a material approach to colonialism outlines the counterintuitive collaboration between the Russian state and Western companies and breaks with the “colonial equivocation” that hides Russian colonialism behind the Western one (Tuck and Yang, 2012). Instead, it shows how Western colonial influence enhances Russia’s own colonialism against racialized subjects.
Chair: Ethan Plaue