Wednesday 2 December
Deep Colonialisms
& Non-Linear Learning
#blackfeminism
#afrofuturism
#machineLearning
#deeplearning
#algorithmicgovernance
#robot
#algorithm
#ArtificialIntelligence
11 am EST
8 am PST
4 pm GMT
(Zoom/YouTube)
Registration
Panel:
Mercedes Bunz
(King’s College London)
"Error is no exception: towards the alien logic of machine learning"
In the interest of capital, new forms of computational logic (such as machine learning) have been claimed as an ‘artificial intelligence’, one wrongly shaped in the image of the ‘human, just better’. To debunk the myth of a more effective instrumental reason, this paper aims to expose its alien logic in two parts. Following Pasquinelli’s (2019) ‘Grammar of Error for Artificial Intelligence’, the first part will analyse a range of machine learning experiments and findings to argue that adversarial settings have wrongly been dubbed an ‘exception’. Guided by Rieder’s (2017) approach of ‘Scrutinizing an algorithmic technique’, the paper will analyse some well-known, and some less well-known, examples from computer science: from the game AlphaGo lost against Lee Sedol, to a University of Tübingen study of the classification strategy of convolutional neural networks (Geirhos et al. 2018), to recent studies in adversarial machine learning. These examples will show that the image of artificial intelligence as ‘human, just better’ covers up an alien logic that is fundamental to the functioning of machine learning.
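As an aside, a minimal sketch of the kind of adversarial setting referenced above, assuming a PyTorch-style image classifier; the fast gradient sign method stands in for adversarial machine learning in general, and model, image, and label are hypothetical placeholders:

    # Sketch of a fast-gradient-sign adversarial perturbation (assumes PyTorch).
    # `model`, `image`, and `label` are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Nudge the input in the direction that most increases the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A small step along the sign of the gradient, often imperceptible
        # to a human viewer, can be enough to flip the predicted class.
        adversarial = image + epsilon * image.grad.sign()
        return torch.clamp(adversarial, 0.0, 1.0).detach()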
To explore this alien logic of machine learning further, the second part of the paper will link these findings to the contemporary work of artists such as Memo Akten, Anna Ridler, and Holly Herndon, showing that each case exposes the alienness of contemporary machine logic with a double gesture of playful criticality. Diving into particular examples from each artist, the second part will show that contemporary art allows an alternative approach to the capitalistic domination of the machine as a slave (Simondon 2017), a domination that suppresses the machine’s own identity. Embracing machine aesthetics instead allows the introduction of a collaborative mode of production in which the machine collaborates with the artist in exploring its unique alien logic.
The research presented in this paper is part of the AHRC-funded Creative AI Lab, a collaboration between the Serpentine Gallery and the Department of Digital Humanities, King’s College London, which aims to generate new research into the role of machine learning as an artistic critical practice. The lab was co-founded by Mercedes Bunz and Eva Jäger, and the research is being done in partnership with Rhizome/New Museum and the Digital Theory Lab at NYU, New York.
Elizabeth de Freitas
(Manchester Metropolitan University)
"Philosophical probabilisms: Deep learning and the infinitude of useless hypotheses"
If probabilistic reasoning was once the poor cousin of pure mathematical deductive reason, it has now found its place in a new logic of digital decision making where, as Parisi (2017) emphasizes, “the chain of contingencies becomes the driving force for decision-making actions”. When aligned with ontological claims about the indeterminacy of matter, this new algorithmic reason can lead to a dangerous and reductive naturalizing of computation. This danger comes from failing to distinguish “the continuity of biophysical complexity from the discrete character of computational abstraction”. If an alternate speculative reading of algorithmic intelligibility destabilizes this reductive computational image, it will have to reckon with the technical processes at work in hypothesis generation and abduction, as mobilized in popular machine learning techniques like deep learning. Zalamea (2012) defines abduction, through Peirce, as an inferential process that locally glues the breaks in the Continuum of habit and expectation, by means of an arsenal of methods which select effectively the “closer” explanatory hypotheses for a given break, thereby stitching and mending the discontinuities in any new regularizing perspective.
Following Zalamea’s claim that abduction must engage with a realm populated by an “infinitude of useless hypotheses”, I explore the ways in which this approach to an explanatory continuum might be relevant to the error landscape of contemporary deep learning, where the algorithm traipses over hills and troughs, seeking absolute error minima. The paper explores this new landscape and the extent to which it links with non/philosophical probabilisms as discussed in Deleuze’s (1981) reading of Hume, in which the imagination fuels reason’s speculative stretch, affirming “all of chance” and sustaining the delirium interior to thought (Roffe, 2015).
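A minimal sketch of that error landscape, assuming nothing from the paper itself: gradient descent steps downhill across a toy loss surface and can settle in a nearby trough rather than the deepest one; the loss function, starting point, and learning rate below are arbitrary choices for illustration.

    # Toy gradient descent over a one-dimensional "error landscape".
    # The loss surface, learning rate, and starting point are illustrative only.
    def loss(w):
        return (w ** 2 - 1.0) ** 2 + 0.3 * w       # two troughs of unequal depth

    def grad(w, h=1e-5):
        return (loss(w + h) - loss(w - h)) / (2 * h)   # numerical slope

    w = 1.5                                        # starting point on the landscape
    for _ in range(200):
        w -= 0.05 * grad(w)                        # step downhill along the slope

    # The descent settles in the trough nearest its start, which need not be
    # the deepest one: a local, not an absolute, error minimum.
    print(w, loss(w))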
Hank Gerba
(Stanford University)
"Rethinking Non-Linear Aesthetics"
From 1977 to 1994, bearing witness-by-proximity to the then-fomenting “California Ideology” which would inflate the dot-com bubble, Sylvia Wynter taught at Stanford University. While her work has helped numerous scholars to launch the project of reimagining “Man” against its overrepresentation in liberal humanism, this presentation seeks to apply it to questions of algorithmic personhood and, more pressingly, subjection.
If algorithmic systems have broadly proven themselves to be machines for the production of individualized subjectivity (if only affectively; they concurrently dividualize), then a return to Wynter’s work might render purely neoliberal explanations insufficient (that term often functioning merely as a register of a political regime or a set of economic policies) and instead expose a profound mutation of essentially liberal tactics of oppression and subjection. Through archival research on her time at Stanford, I hope to show that Wynter’s rethinking of aesthetics, her desire to structurally refigure academia, her invocation of a “new science of man,” and her interest in autopoietic and non-conscious facets of personhood were fundamentally tied to her awareness of the computational, cybernetic, and post-cybernetic theoretical milieu in California. Drawing out this connection opens Wynter’s work, and its broader milieu, to questions of algorithmic subjection.
Extending these roots through contemporary media theories, which picture computational media as non-linear processes that straddle the material-rhetoric continuum, I claim that Wynter allows us to conceptualize algorithmic processes as a kind of mechanized prophylaxis between long-extant non-conscious processes of liberal oppression and the conscious manipulation of power. The inscrutability of neural networks, the manipulation of publics into recombinant and marketable populations, and the differential application of surveillance technologies based on socioeconomic class and race each play their part to "parallelize" operations of power, problematizing inherited regimes of aesthetic resistance premised on an optimism, however slight, that a representational attack on, say, racism or homophobia might consciously cause friction in the minds of its hegemonic target.
This is no longer tenable in an algorithmic world, where, increasingly, deployments of power are formally isolated from conscious thought, exercised (and regressively refined) purely functionally in the algorithm. What is proposed here, then, is a tracing of non-linearity as seen through Wynter’s aesthetics. Crucial to this tracing is Wynter’s interest in the then (and still) emerging science of complexity, of which we can see ideological and technical derivatives in the construction of neural and agent-based modeling systems, network optimizations, and myriad other reticular technologies. Her aesthetics proposes that thought itself be re-thought, that it might find itself in a continuous relationship with non-conscious systems, rather than attempting to fortify philosophy as a conscious enclave despite them. Wynter helps us to understand not only how subjection functions in extant computational environments, but also how we might alter our material and conceptual relations to technologies to enable more equitable subject-media relations.
Jessica Edwards
(Independent Researcher)
"On the Comparative Computational Fugitive Pathologies of the Black Female Slave and Artificial Intelligence"
Chair: Stamatia Portanova
Screening:
(Vimeo)
Ibaaku - Processhun (2018)
(Official Music Video, feat. Danniel Toya)
Danniel Toya - Robotboy de Kinshasa (2017)
Marynet J