The Love Bot

In 2015, Ashley Madison, the website for married people seeking affairs, was hacked. After the hack, it was discovered that the site had run 75,000 female chatbots, drawing eleven million men into intimate conversations. Ashley Madison had found itself with a disproportionately high number of male subscribers and practically no female subscribers. Each fembot had a name, age and location, as well as a standard set of chat-lines, built on basic responsive natural language processing, with which to contact users. Users did not notice the digital workforce until the breach revealed it. The artist collective !Mediengruppe Bitnik’s installation based on the hacked data embodies five of the 436 fembots active for users in London. What is it about being intimate with non-humans that bothers us? Which kinds of non-humans?

Insect Intelligence

In the novel Kill Decision, artificially intelligent military drones are programmed on the basis of weaver ants, which communicate with each other to achieve complex predatory actions and secure their territory. Termites are less violent but are similarly creative and inspirational. In his book Insect Media, Jussi Parikka shows us that media technologies and insect intelligence/behaviour mirror each other, blurring notions of ‘natural’ and ‘artificial’. The philosopher Amia Srinivasan writes: “…individual termites are not particularly intelligent, lacking memory and the ability to learn. Put a few termites into a petri dish and they wander around aimlessly; put in forty and they start stampeding around the dish’s perimeter like a herd. But put enough termites together, in the right conditions, and they will build you a cathedral.” In this example of stigmergy, distributed coordination results in self-organised behaviour without any outside or top-down programming. This ‘swarm intelligence’ is common in insects and is already being applied to the programming of robots: TERMES (video) are termite behaviour-inspired robots.
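The stigmergic logic described above can be sketched in a few lines of code. The following is a minimal, illustrative simulation in the spirit of Mitchel Resnick’s classic ‘termites’ model (all names and parameters here are invented for the sketch, not taken from TERMES or any published implementation): agents follow two purely local rules, and piles of ‘wood chips’ emerge without any top-down plan.

```python
import random

def neighbour_fraction(grid):
    """Fraction of chips with at least one chip in a 4-neighbour cell --
    a crude measure of how clustered the chips are."""
    h, w = len(grid), len(grid[0])
    chips = [(x, y) for y in range(h) for x in range(w) if grid[y][x]]
    if not chips:
        return 0.0
    near = sum(
        any(grid[(y + dy) % h][(x + dx) % w]
            for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0)))
        for x, y in chips)
    return near / len(chips)

def simulate(width=20, height=20, chips=60, termites=12, steps=5000, seed=1):
    """Two local rules, no central plan: pick up a chip you stumble on;
    drop the one you carry next to another chip."""
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]
    for x, y in rng.sample([(x, y) for x in range(width) for y in range(height)], chips):
        grid[y][x] = 1
    before = neighbour_fraction(grid)
    agents = [[rng.randrange(width), rng.randrange(height), False] for _ in range(termites)]
    for _ in range(steps):
        for a in agents:
            # random walk on a toroidal grid
            a[0] = (a[0] + rng.choice((-1, 0, 1))) % width
            a[1] = (a[1] + rng.choice((-1, 0, 1))) % height
            x, y, carrying = a
            if not carrying and grid[y][x]:
                grid[y][x] = 0              # rule 1: pick up a chip
                a[2] = True
            elif carrying and grid[y][x]:
                # rule 2: found a pile -- drop the chip on a free neighbour
                for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    nx, ny = (x + dx) % width, (y + dy) % height
                    if not grid[ny][nx]:
                        grid[ny][nx] = 1
                        a[2] = False
                        break
    on_grid = sum(cell for row in grid for cell in row)
    in_hand = sum(a[2] for a in agents)
    return before, neighbour_fraction(grid), on_grid + in_hand

before, after, total = simulate()
```

Comparing `before` and `after` over long runs shows clustering increasing even though no agent knows anything beyond the tile it stands on; the total chip count is conserved throughout.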

Crisper Futures

CRISPR-Cas9 is a genetic modification technology. In 2018, the scientist He Jiankui used it to experiment with conferring genetic resistance to HIV on twins in utero. There is no global consensus on applications of genetic modification in humans, and He’s experiment sparked global outrage. Bill Gates, however, believes this technology has potential applications for ‘good’ and ‘global development’. For example, he has proposed the use of CRISPR to skew mosquito populations to produce more males than females, thus potentially reducing the spread of malaria and the Zika virus. Similarly, he suggests editing the genes of East African cows (in an Edinburgh lab) with genes from Holstein cows so that they yield more milk for dairy-protein-poor communities in Africa. Such applications of experimental technologies are imagined to deliver solutions to control and manage wild and unstable environments. In doing so, they re-shape the future of the planet.


The visual artist Sougwen Chung has been programming robots to collaborate with her in creating visual art. In her TED Talk she says she wants to push the long tradition of “marks made by the human hand”. In practice, Chung draws together with robot arms holding a stylus; at other times, she paints around and with robots trundling over a large canvas releasing paint. In Omnia per Omnia (2018) she used a dataset of human movement flows in New York City, as captured by surveillance cameras, to program her robot collaborator ‘Douglas’ (Drawing Operations Unit: Generation_3 Live Autonomous System). In Exquisite Corpus (2019) Chung uses her own biometric data. The results are visual filigrees of human and machine responding and relating to each other. She says of her work: “I found that I missed physical gesture when working with computers—specifically the gestural instincts I’ve developed through violin and drawing. Sometimes working with software and code can feel like one is relegated to the screen. So that feeling led me to explore working with robots through the medium of performance, to re-engage with physical spaces. Robots are typically regarded as industrial tools, but I’ve always thought of them as a kind of kinetic sculpture. Being able to invent my own human/machine collaboration processes has been really empowering.”


‘Shedbird’ is the nickname that two photographers gave to a young Laysan albatross that lived near a tool-shed on a small, remote island in the North Pacific where they were stationed. The island was a sanctuary for marine life and the photographers were documenting its twenty-year history. Shedbird never made it out to sea, though; it died from a perforated stomach. There were 12 ounces of plastic inside it. The bird’s parents had foraged for food in the great Pacific garbage patch, so it had grown into adolescence carrying plastic inside it. After it died, the photographers documented its death as well as the plastic that caused it — this featured, among other things, bottle caps and lighters. This is a story about pollution and plastics but also of entanglement between humans and non-humans. The story of Shedbird is not only that we need to pollute less, which suggests a return to a kind of ‘clean’ past (which is unlikely), but that we have to think about how ecologies change and new hybrids emerge. Some of these might be us humans, whose bodies and minds are re-shaped by data tracking (pdf), open-source hormones, doping in sports, and medical or assistive technologies. Anna Tsing describes her co-authored book Arts of Living on a Damaged Planet by saying that it is nature writing of a particular kind; about, for example, “noticing sewage-filled canyons where tomatoes are growing up in the midst of old tires. This is a kind of nature in which readers feel the pressures of extinction as well as the wonders of biodiversity.” How can we live, flourish and grow together with other species amidst ruin, catastrophe and violence?

The Cyborg?

The word cyborg is a portmanteau of ‘cybernetic’ and ‘organism’. It was coined by scientists Manfred Clynes and Nathan Kline in 1960 (pdf) as a way to talk about how the human body would need to be augmented and enhanced if it were to survive in extraterrestrial space. The first cyborg was a white rat that the scientists implanted with a small osmotic pump that released chemicals intended to change the animal’s physiology.

The cyborg exists and is valorised as a synthetic, malevolent intelligence in the Hollywood Terminator franchise; but in Donna Haraway’s Cyborg Manifesto, first published in the Socialist Review in 1985, it is an ironic and mythical figure that dissolves boundaries between human and non-human. Haraway’s Manifesto can sometimes feel impenetrable, so we might be forgiven for thinking of her cyborg in terms of Lynn Randolph’s iconic image accompanying Haraway’s book, Simians, Cyborgs, and Women: The Reinvention of Nature (full pdf here). In this illustration, we see a smiling sphinx at a keyboard, “her hands poised to play with the cosmos, words, games, images, and unlimited interactions and activities,” raised up in low Cobra-pose, wearing the shamanic White Tigress headdress. Her body is a circuit board, while galaxies, theorems and a game of tic-tac-toe swirl in the vast ether behind her. Part machine, part woman, part animal, Randolph’s figure is a chimera.

But a cyborg is not necessarily a strict human-machine merger; it is an imaginary for “thinking with”. According to Sophie Lewis, the cyborg is “a luminous translation of the Marxist idea that we make history, but not under conditions of our choosing. It is a timely suggestion that political science address the fact that we are full of bubbling bacteria, inorganic prostheses, and toxic economic mythologies.” The Manifesto, together with much of Haraway’s early work, dismantles ideas of the ‘natural’ and ‘artificial’. Haraway proposes that we are already cyborg, constituted by networks and technologies that go beyond the digital. In a 1997 interview with Hari Kunzru, Haraway discusses the ‘natural’ body as imagined by sport, saying: “Before the Civil War, right and left feet weren't even differentiated in shoe manufacture. Now we have a shoe for every activity.” Winning the Olympics in the cyborg era isn't just about running fast. It's about "the interaction of medicine, diet, training practices, clothing and equipment manufacture, visualization and timekeeping.” Drugs or no drugs, training and technology make every Olympian a node in an international technocultural network — just as ‘artificial’ as any athlete at their steroidal peak.

September 2010 was the fiftieth anniversary of the coining of the term ‘cyborg’. Over the course of that month, Tim Maly assembled 50 instances of the use of the word on a dedicated Tumblr called 50 posts about cyborgs.

History of the Human

In the European Enlightenment and the period of Modernity that followed on its heels (roughly the 1700s to the early 1900s), ‘the human’ emerged both as separate from nature and as more than ‘mere’ machine. That is to say, we distinguished ourselves in two ways: by not being driven by our instincts, and by not existing as ‘brute’ machines pre-programmed to do just one thing repeatedly. The world of non-humans — insects, minerals, bacteria, machines, trees — constituted a separate, inert landscape for humans to measure and know objectively, and through this to exploit and develop. Thus an outcome of the scientific discoveries, technological innovations, governance and culture of this time period was the notion of separation: of nature from culture, emotion from reason and subject from object. This is also related to the Cartesian notion of the separation of mind from body.

Some of these ideas of separation persist in our understanding of the human; for example, we understand ‘intelligence’ as somehow ‘contained’ in a brain or something brain-like, rather than as constituted by the interplay of social, cognitive, psychological and cultural practices over time. The social, scientific and cultural values that emerged during the Enlightenment and Modernity were also inextricably linked to Transatlantic slavery, colonial plantations and European colonialism. These global events radically altered our relationships with the land, nature, knowledge, and each other. At this time, black and brown people were considered to be less-than human; less than the Cartesian, rational, propertied, white, able-bodied, cis-heteronormative man. As Alexander Weheliye discusses extensively in Habeas Viscus (pdf), we must acknowledge that our systems of knowledge are shaped by the notion of black and brown bodies as ‘non-human’ others.

Living with Nature

Living in harmony with nature and the land is not an easy place to ‘return’ to, as many might assume. Consider the insights of two ‘art+ecology’ projects, the Slug ‘o’metric and the Breast Plough’o’metric by Paul Chaney. The Plough’o’metric is “a series of digital strain gauges and a small on-board computer [that] allow[s] the operator to record the exact amount of effort needed to plough some land by human power alone.” Similarly, the Slug ‘o’metric is a set of garden-scissors-as-sculptures that kill garden slugs and quantify their deaths. According to the artist, the scissors mirror the process of outsourcing death to different technologies, thus putting distance between humans and non-humans. They also mirror how technology in agriculture mediates relationships between the human and non-human. Régine Debatty writes of these works: “Both unsettle the typical illusion that ‘living with the land’ is a pure and uncomplicated affair.”

Effluent run-off from a gold mine cleaned with ammonia before being re-integrated with nature. Quebec, Canada, 2017. Image: Maya Ganesh.


By Steph Holl-Trieu

The history of technological development (read: progress) often follows, blinkered and mouth agape, the linear arrow thrown by an incel homo oeconomicus, ivory tower genius, or Selfish Gene. Yet even the incel emerged from the womb of his mother and continues to feed on meals provided by her care. The genius sits in a tower built and maintained by workers. The Selfish Gene relies on organisms and groups to carry it forward. No unit of being survives or comes into existence on its own. It is to the detriment of our understanding of life that symbiosis has been largely ignored in evolutionary biology, and we owe much to the scholarship of Lynn Margulis, who foregrounded symbiosis as driving inherited variation — instead of genes being passed down in the cordon sanitaire of random mutations. Life, then, is owed to the intimacy of strangers.

This sits at the core of what Donna Haraway calls ‘sympoiesis’: making-with or worlding with others. Just as in the past algae ingested oxygen-producing cyanobacteria, making photosynthesis their own, futures are made by things—living or inert—that resist death by ingestion or elimination by transformation. The relations that make our world, relations of encounter, exchange, struggle, surrender, war, peace and fusion are contingent; prefigured in the environment of their interaction. Ant colonies, for example, are being studied to identify the algorithms, or rules, that connect local interactions of individual ants to the behaviour of the collective. As Deborah Gordon underlines, these studies call for new vocabularies that transgress the two (insufficient) methods of biology to explain collective behaviour: one, that of single cells working from an internal program adding up to the larger unit of organisation; and two, the whole system functioning as one entity or superorganism. Both options remain blind to the importance of the environment. Gordon’s research, instead, suggests that certain algorithms are likely to be used in specific ecological situations.

Suzanne Simard uncovers how trees communicate through networks of fungal mycelia, exchanging resources, identifying kin, as well as sending out warning signals. The forest wouldn’t be one without fine threads of hyphae crawling, branching and pulsing underground. Cells, metalloids, Cordyceps fungi, humans, algae, Amazon Alexa, NASA, OpenAI all share the same denominator: they act and reproduce in relation to the world. Life and technology result from collective behaviour — collective behaviour as the formation of patterns deeply embedded in ecology. This requires, as Ursula K. Le Guin explores in her story ‘The Author of the Acacia Seeds’, a different engagement with ecology; a becoming of therolinguists, mechanolinguists, cyberlinguists and geolinguists “who, ignoring the delicate, transient lyrics of the lichen, will read beneath it the still less communicative, still more passive, wholly atemporal, cold, volcanic poetry of the rocks: each one a word spoken, how long ago, by the earth itself, in the immense solitude, the immenser community, of space.” (You can listen to Le Guin reading from her story here.)

Image Credit: Endosymbiosis: Homage to Lynn Margulis, Shoshanah Dubiner, 2012


By Andrea Kelemen

Popular AI narratives are infused with the hubris of creation; a clear dualism between master and tool, a top-down power relationship prefiguring subversion. And yet, AI and other technological tools far exceed the realms of instrumentality, efficiency, and materiality. As Latour writes: “Consider a tiny innovation commonly found in European hotels: attaching large cumbersome weights to room keys in order to remind customers that they should leave their key at the front desk every time they leave the hotel[,] instead of taking it along on a tour of the city. An imperative statement inscribed on a sign — 'Please leave your room key at the front desk before you go out' — appears to be not enough to make customers behave according to the speaker's wishes. Our fickle customers seemingly have other concerns, and room keys disappear into thin air. But if the innovator, called to the rescue, displaces the inscription by introducing a large metal weight, the hotel manager no longer has to rely on his customers' sense of moral obligation.” Where the sign fails to convince hotel customers to act morally, the new technology — the metal weight — succeeds.

Here, the technology is not a mere tool in the hand of its maker, but something that brings about a new moral order. Thus new technologies represent bifurcations in the unfolding of time, as every new innovation brings about a slightly altered moral order that builds on the previous one. The more technologies proliferate, the more the initial order becomes opaque. In this process, technologies transform from automatic tools in the hands of their masters into autonomous moral agents.


By Chris Harris

Making a sport out of human mimicry and designating ourselves as jurors of intelligence feeds the human ego. To continue to optimise the design of AI to think like humans is to constrain the complexity and plurality of our notions of intelligence in an ontologically shallow landscape. As we ascend this local maximum of surface symbol manipulation, we derive diminishing returns of insight.


By Andrea Goetzke

Neural nets in Connectionist AI were modeled on human neural networks. Now with access to large amounts of data, these computational networks have surpassed the human capacity for fast and efficient computation. In this way, AI is modeled on the human, but achieves something better and beyond the human. For example, a self-driving car that can navigate traffic without being distracted by a text message. An image search for the term ‘artificial intelligence’ shows human brains turned into ethereal circuit boards. This confirms the human as the starting point for something cleaner, lighter, more perfect. Some people imagine and write about concepts such as the Singularity: high capacity thinking minds without human bodies; bodies that disturb progress, seamlessness and perfection. Islands of perfection that use algorithms to keep borders shut; that outsource the dirty work. Impeccable houses cleaned by cheap labour. Secure and planned life schedules that have never learned to improvise. We are a society that invests in profitable technology development with convenient narratives, shiny products and ‘objective’ predictions, rather than in messy, chaotic, time-consuming and unpredictable social and cultural deliberations.

Most of the world today is made up of precarious and vulnerable beings, both human and non-human. Can perfection exist as an island in a sea of precarity? The perfect and the other? Donna Haraway's cyborg and Anna Tsing's ‘contaminated diversity’ acknowledge that purity and paradise are no longer options (if they ever were) — certainly not after capitalism, patriarchy and colonialism. ‘Contaminated diversity’ acknowledges and embraces relations between beings; how all beings are changed through encounters with others. The indeterminacies of encounters give a different vision of the future than the controlled, perfect one that machines can calculate and predict for us.

Thinking from the position of precarity, contamination, solidarity and imperfection, intelligence might be reframed as grown out of human experience and embodied minds that make sense of encounters. And doesn't intelligence also have to do with desire? Desires that are shaped through encounters? Arturo Escobar talks about how misleading Western dualisms of emotion vs. reason are, because “even the decision to be rational is an emotional decision.”

What does AI have to do with these questions?

Photo by Paweł Czerwiński on Unsplash

Arts of Navigation

By Steph Holl-Trieu

Intelligence as understanding is prefigured in the act of navigation. To create a solution, one must first come to know what the problem is. The reverse can be found in many forms of technological “innovation”, created under the guise of solving virtual problems. In the end, these prove to be false novelties, trojan horses ushering in real problems which did not exist before. Solutions-sans-problems can quickly become insults to life itself. Even in assumed novelty we find conservatism or ignorance. Yet, life does create problems, issues, troubles — forcing new solutions. The abolition of any form of action or exploration of solutions may be equally conservative and violent. Life, or the terrain we find ourselves on, is only ever explored in fragments.

The experience of hunting down these fragments is aptly demonstrated by the act of scouting in Sid Meier’s video game Civilization. Wherever the scout follows the click of the cursor, tiles reveal themselves in all their splendour (colour, resource, terrain, etc.), but as the scout moves further away, the tiles fall back into sepia, muting the information that was once discovered. We could think about our life-in-fragments as the hexagonal tiles in the game, only our axes of exploration don’t run across a 3D-rendered flattened surface, but reach into the inner core of the earth, boil in volcanic magma, stratify soil, force themselves into cells and cells in cells, or ribonucleic acid in protein shells: an amnesiac exploration, expanding and contracting space and time. Sometimes we forget what we have discovered about our environments as quickly as we have learned about them. Countering amnesia by tilting, zooming, pivoting, mapping, orienting and navigating in a step-by-step, fragment-by-fragment manner is nothing more than (or short of) applying algorithmic methods.
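The reveal-and-fade mechanic described above can be sketched abstractly. The function and names below are illustrative inventions, not drawn from any actual game code: tiles within the scout’s sight radius are ‘visible’; tiles seen earlier but now out of range fall back to ‘sepia’; the rest stay hidden.

```python
def tile_states(width, height, scout, radius, explored):
    """Classify map tiles in a fog-of-war fashion: 'visible' tiles sit
    within the scout's sight radius; 'sepia' tiles were discovered
    earlier but have fallen out of sight; the rest remain hidden.
    `explored` is the map's memory and is updated in place."""
    all_tiles = {(x, y) for x in range(width) for y in range(height)}
    visible = {
        (x, y) for (x, y) in all_tiles
        if abs(x - scout[0]) + abs(y - scout[1]) <= radius  # Manhattan sight range
    }
    explored |= visible           # the map remembers what was once seen
    sepia = explored - visible    # discovered, but muted now
    hidden = all_tiles - explored
    return visible, sepia, hidden

# Walk a scout across a small map: freshly seen tiles are visible,
# previously seen ones fall back into sepia.
explored = set()
v1, s1, _ = tile_states(5, 5, (0, 0), 1, explored)
v2, s2, _ = tile_states(5, 5, (4, 4), 1, explored)
```

The `explored` set is the amnesia-countering step: without it, every move would erase what the scout had already learned about the terrain.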

Though often treated as novelties, or foreign bodies meriting scepticism and observance, algorithms, Matteo Pasquinelli writes, actually date back more than 3,000 years. In the ancient Hindu Agnicayana ritual, devotees symbolically recomposed the fragmented body of the god Prajapati in a falcon-shaped fire-altar following step-by-step instructions according to a geometric plan. Rather than being a new invention and sole heir to the siege of Enlightenment, territorialised within the industrial West, algorithms, Pasquinelli says, “are among the most ancient and material practices, predating many human tools and all modern machines” and “grow out of the data produced by collective intelligence.” Navigation in this sense can be understood as the calculation of space that follows a sequence in time. In other words, it is space mapping out the problem over time that gives form to ‘intelligence’. While Hindu devotees used algorithms to secure gifts from the gods, today algorithmic navigation in vast seas of data is a practice to treat complexity-induced vertigo — provided the map is drawn with sensitivity so as to not insult life itself.

Image Credit: Sid Meier’s Civilization, Screenshot by Nerd_Commando, 2018 

Emerging Environments

By Steph Holl-Trieu

Environments are elusive. Etymologically, an environment is what surrounds, encircles and encloses, inherited from the Old French environer. The Latin suffix -ment indicates that the environment-as-noun signifies the result or product of the action of the verb (environ-) — or its means or instruments. This double-edged meaning hidden in the suffix spills and slips fully into the word’s own ambiguity: Environments emerge from actions – they are results, or temporary manifestations, of actions’ endpoints. At the same time, they offer us affordances to perform new actions; actions which again transform environments, these transformed environments then affording us anew to act within them. While we are grasping for it, forming a concept, the environment is already in the process of changing.

It is not only human action that creates environments, but the constant doings and undoings in the kingdoms of animalia, fungi, plantae and bacteria. And yet, the constraint of identifying active (organic) subjects by separating them from inert (inorganic) matter is quite possibly an accelerated collapse into dead ends. Species belonging to the machinic phylum do not merely react to the form-giving actions of living beings, but hold the very properties that give form to the economic intra-actions between biology and metallurgy. In his 1964 book Understanding Media: The Extensions of Man, Marshall McLuhan coined the phrase “the medium is the message”. Far from being a catchy slogan, it was primarily intended as a serious warning: against the juicy meat of content (whatever you have been binge-watching) that anaesthetises while shifting our attention away from media nested in other media (television, online streaming, digitised mechanical moving images, packets of data shooting across fibre optic cables). While you’re caught in the narcotic loop of Netflix’s autoplay, media living in other media are creating their own environments forcefully and (to your distracted eye) invisibly.

The important question is not why X is saying Y in Z film (or why Tay became a spinshot of racist slurs – the answer is quite clear: humans are festering pools of bigotry), but how does seeing from a distance (tele-vision) actually matter, how does it manifest materially? Media emerge from a given environment, yet they inject difference into the scale, pace and pattern upon which the environment acts on itself and whomever or whatever it encompasses. The medium-as-message forces us to turn Artificial Intelligence on its head to inquire after the Intelligence of the Artificial, so as to keep up with the delays and sprints in pace, the undulation and distortion of patterns across micro- and macroscales. The artificial signifies and traces dead matter being forced down below: tectonic plates shifting, oil pressurised to squirt above the surface, metals mined from ores, circuits and packets switching, gases released into and taken from the atmosphere, diodes flashing, signals bouncing, minute actions constantly bleeping with the transmission and transformation of energy and information. However nauseating it might be, keeping up with the artificial is necessary: Environments remain elusive, not just because they are constantly changing, but because they force action and demand to be changed.

Image Credit: Integrae Naturae, Robert Fludd, 1624

AI Art?

‘AI as artist’ fuels the anxiety that AI might ‘replace’ and supersede human creativity. The French collective Obvious, whose Portrait of Edmond Belamy fetched US$432,500 at a Christie’s auction, took this idea a step further by making it appear as if ‘the AI made the art’. Philosopher Matteo Pasquinelli refers to such work as ‘statistical art’. In a similar move, Gene Kogan is developing an ‘autonomous AI artist’, Abraham, who “creates unique and original art.” These works are often compelling to viewers because they render the human form uncanny or haunting, particularly in the use of techniques such as GANs. Mario Klingemann’s work is a good example of this.

Another approach might be to view these works in the tradition of generative art, a decades-old movement of using computation and the randomness it creates to produce a particular aesthetic. Generative artworks emphasise that it is not AI or computation that is creative, but instead, as Jason Bailey writes, it is “generative artists [who] skilfully control both the magnitude and the locations of randomness introduced into the artwork.” There is also a distinctive movement of artists paying attention to the politics and political economies of AI technologies such as machine learning, computer vision and the algorithmic management of everyday life. These include The Normalizing Machine by Mushon Zer-Aviv, Anna Ridler’s Myriad (Tulips), Wesley Goatley’s Chthonia, Stephanie Dinkins’ Conversations with Bina 48, and Asunder by Tega Brain, Julian Oliver and Bengt Sjölén, among many others.

Intelligence Work

What we associate with ‘intelligence’ in machines is the work of computation; work that was first done by humans. Lorraine Daston assembles a history of calculation since the mid 1700s in Europe ( pdf), detailing how the “hard labour” of mathematical calculation in physics, nautical navigation and astronomy was performed. It was achieved by a pyramid-like hierarchy as follows: a few mathematicians on top, who determined the formulae and designed the logarithms; below them, the algebraists, who translated the logarithms into numerical form; and below them, roughly seventy or eighty “workers”, who only knew elementary arithmetic but “actually performed the millions of additions and subtractions and entered them by hand into seventeen folio volumes.” Daston notes that it was the humans toiling away mechanically that sparked the idea for Charles Babbage’s Difference Engine, the precursor to the modern computer.

Speaking of the masses of workers at the bottom of the calculation pyramid, Daston writes that Babbage was known to often say that “the fewest errors came particularly from those who had the most limited intelligence, [who had] an automatic existence, so to speak.” (17) In other words, it was possible to imagine humans as being ‘automatic’. Fast forward a few decades and the mechanical, time-consuming labour of calculation was outsourced to women, who were considerably cheaper to hire as workers and treat as machines.

Jennifer Light’s history of computers shows that until the middle of the 20th century, ‘computer’ was, quite literally, how women calculators were referred to in their place of work; it was their designation, like ‘typist’. The complex and tedious work of calculation required of these ‘computers’ was rarely, if ever, acknowledged. It is only very recently that the work of Katherine Johnson and her team of mathematicians, whose work was integral to the success of NASA’s first human spaceflight project, came to light in the book (and subsequent film) Hidden Figures. Over time, women’s work has been systematically erased from the history of computers, and women pushed out of the technology industry. Marie Hicks’ book Programmed Inequality tells a similar story: of how Britain lost its edge in computing because it excluded women engineers and mathematicians.

Increased automation generates anxieties associated with the changing role of the human. However, the mechanisation of calculation with machines in the 19th and 20th centuries did not necessarily erase the involvement of humans. If anything, a more daunting task arose: coordinating and apportioning work between humans and machines; getting them to ‘flow’ together; figuring out who (or what) was better at which kind of task. So as calculation and calculating devices permeated industries like insurance, government departments, colonial management companies and military and operations research, the algebraists became middle managers who oversaw tasks between humans and machines.

Questions of human-machine interaction remain fresh today as we negotiate, for example, the handover between human drivers and auto-pilot systems in driverless cars. Or as humans tag images, as in CAPTCHAs, so that machine learning systems learn to identify objects better. The work of intelligence thus continues as an interaction between human and machine.


What is commonly understood as ‘AI’ today is as much about business and industrial imaginaries as it is about culture and computation. These socio-technical imaginaries are part of a particular cosmology: beliefs about how our existence is architected, its histories and futures, and the relationships between parts of a universe and its whole. Cosmologies bring forth narratives, maps of the world, stories, values, societies, structures and feelings. It is difficult to acknowledge the existence of other cosmologies because our own can be so totalising: they organise our understanding of existence. As AI appears to mimic human behaviour and intelligence, it brings our cosmologies into focus — it raises questions about what it means to be human. Climate crises resulting from human activity, and living alongside threatening and resilient viruses, make us examine our place amidst non-human others.

Perspectives on the non-human world, the “world-without-us”, as Eugene Thacker puts it, can be terrifying and difficult to grasp. This is the human being subtracted from the world as we know it; the world that the horror genre, the occult, the mystical and demonism open up to us; the world in a time of large-scale climate disaster. The world-without-us is also what existed before humans showed up — and what will remain after we leave. Before that, however, we are constantly making the earth ours; the world-for-us built through politics and society.

There are other cosmologies out there, it seems, that propose a different construction of the world and the place of humans in it. Metaphorically speaking, embracing other cosmologies is like re-thinking transport: thinking beyond the concept of getting from A to B on a bus and considering telekinesis instead. This is a paradigm shift. For example, Yuk Hui’s approach to cosmotechnics is about letting go of concepts such as the ‘human’ and ‘technology’ and entering an entirely different cosmological order comprising Qi (something instrumental or concrete) and Dao (something like an approach, “the way”) and their interaction. Black quantum futurism is the disruption of space-time, envisioning a new chrono-political order: one in which history and future exist in the present, simultaneously. Indigenous epistemologies approach AI in terms of ‘making kin’ with machines through mutual respect. But different cosmologies cannot be tried on as if they were this season’s fashion; a new or different cosmology necessitates the reordering of fundamental beliefs about society, institutions and relationships, and what we think the human is.

Entangled Lineages 

By Georgia Lummert

The robot as slave, mere extension of our will, is a relentless, restless worker. These etymological roots and oceanic routes not only tell us how we shape and perceive our worlds — and what substance those worlds are made of — but also make visible the kinds of relations we establish and live in. More importantly, they tell us to whom we deny certain interrelations, who we perceive as rootless, and who we see as not having a voice and will of its own. Rather than attempting to create legitimising unambiguous human lineages, we might instead use translational histories and the act of translation itself as a chance to understand our mutual co-dependence and relatedness. 

In translation, Claudia de Lima Costa and Sonia E. Alvarez state in “Dislocating the Sign: Toward a Translocal Feminist Politics of Translation,” “there is a moral obligation to uproot ourselves, to be, even temporarily, homeless so that the other can dwell, albeit provisionally, in our home.” Translation illuminates the fact that we are not alone in this world, and that this world is not one — but many, a pluriverse. Barbara Cassin writes, “Several languages are several worlds, several ways to open oneself to the world.” The robot, too, lives in different languages and worlds.

From (one of its) birth(s) in Karel Čapek’s drama R.U.R. (Rossumovi Univerzální Roboti), where the robot, designed by humans as a cheap workforce deprived of any rights, finally revolts, through variations and translations such as Alexandr Andriyevsky’s 1930s (ultimately failed) vision of robots bringing down capitalism, to today, the robot’s multilingual, entangled origins and revolutionary potential have mostly been forgotten. Albeit not considered consciously, this history makes itself heard. It is the ‘tłum’, the ‘mass’, the ‘crowd’, when we speak of it in Polish, for example. Although official etymology has it that the word ‘tłumacz’ is of Turkish origin — ‘dilmac’, from ‘tyl’, language, and ‘mac’, the one that has it — I cannot help but also hear the ‘tłum’ in it, the crowded and messy place where my every enunciation, my every translational attempt departs from.

To have a chance at being understood, I need to stifle, suppress, muffle — ‘tłumić’ — a whole range of meanings. I need to cut some threads, violently uproot my word, make it an orphan, a robot to possess, and become seen as a person who commands the language she is speaking in. Commanding a language, władać językiem, eine Sprache beherrschen, владеть языком — all imply acts of subjugation, of conquest and appropriation, resorting to drastic, sometimes funny means. Commanding a certain language can offer access and recognition, and allow my words to become heard as reasonable language, as knowledge. Thinking of language as a mere tool, an extension of my will that I can fully control — in short, thinking of language as a ‘robot-slave’ — forgets the revolutionary potential language has; it obfuscates its uncontrollability and fleetingness, leading us to forget that it is language commanding us, giving directions to us, and thus accommodating us — rather than the other way around. The metaphor of ‘commanding a language’ manifests itself in how the interfaces of current-day neural translation programs are shaped. But they do not work on their own: the data their translation suggestions are based on represents but a fragment of our multilayered pluriverse. Their decisions are arbitrary, because every word we use shapes, tints and aligns the ones surrounding it.

Deep Down DeepL

By Georgia Lummert

It is probably not presumptuous or exaggerated to state that current popular translation interfaces render communication not only easier and faster but also, to some degree, more democratic and equal. The flipside, though, is this: they flatten our understanding of language, turning ‘translation’ into a neat, contained and unidirectional linear thing with a clearly identifiable beginning and end. The pace at which these programs show me an equivalent for the word I search for in another language obfuscates the fact that translation always starts ‘in media’s mess’, and has a long way to go — that it is far from a question of preexisting equivalencies to look up and search for, and that the work of neural machine translation is itself based on massive data processing. Translation is work, a time-consuming task that several agents contribute to, and one that, more often than not, relies on errors rather than successes. While it might dawn on most of us that the seemingly lofty business with words is not taking place in a vacuum, we still tend to imagine ‘the internet’, and thus the translation services it offers, as an immaterial thing.

In reality, as Tabita Rezaire’s video Deep Down Tidal reminds us, the sea floor with its “submarine fiber optic cable network” is the internet’s manifest material location. Translation, the ‘перевод’, leads and drives (водить) us through (пере) the water (вода), through an ocean of circulating meanings in mutual entanglement. It is an ocean we share with each other, albeit with very different histories of exile, forced migration, and conquest. Contrary to this shared history, flat translation interfaces sustain a myth of neatly separable so-called ‘source’ and ‘target’ languages by opening up two strictly separate windows for each of them. There is no connection, no exchange, prior to the consciously made effort to translate, to say ‘the same thing with other words’, as if it didn’t matter which words from which worlds I use. The world made up by these translation interfaces can be, to say the least, a pretty poor one. DeepL, the translation website, for example, presents a Eurocentric little universe in which humanity knows nine languages, all of them of ‘European descent’. Whatever else I type in there registers as noise, as unintelligible and thus unreasonable gibberish that does not earn the decency of the label ‘language’. This stems from the powerful (one) world-making project of connecting ‘the human’ to the one logos, the one language, the one reason of those in power. Instead of streamlining or fitting whatever is said into a hierarchy of scalables in a flat universe, we might use translation as a tool to make visible and produce what Anna Tsing calls “meaningful diversity”. We might use it to find our place in the world and weave our translocal and translated worlds from there: the pluriverse, the various entangled and enmeshed cosmologies and waterworlds we live in.

Mechanical Rules

By Andrea Kelemen

By re-situating AI in terms of different epistemological orders that led to its current ubiquity, it becomes apparent that our present technological predicament is one that is increasingly governed by objectifying dualisms. Central to this trend is the gradual prioritisation of mechanical rules over human judgment in various fields of science and social life. As Lorraine Daston explains, premodern rules were considered to always be context-dependent, adaptable to specific cases and unique conditions. However, from the 17th century onwards, a new kind of rule emerged: one that considered itself so universal that its particular enforcement excluded the possibility of adaptation. From bureaucratic systems to the promise of fully automated production lines all the way to computation and the intricate meshing of human-machine systems, this new type of rule is fundamentally algorithmic: unambiguous, finite and definite. Chasing after the perfect universal rule led to the emergence of technologies like machine learning-based predictive policing and automated facial recognition systems. Shall we reconsider the limitations of rigid rule-based systems? What about our attempts to fully control complex human processes with mechanical rules alone?


Commitments to Deathlessness

By Paul Wiersbinski

This hypothetical table, Commitments to Deathlessness, is a personal archive of ideas about AI. It is an attempt to categorise all the different terms and ideas that I connect to current discourses on AI from a historical perspective by finding abstract forms and headlines for them. At first view, I want this to appear very technical, but zooming in, one can find relations and pop-cultural connections everywhere.

Claiming Nature

“We recognise that separating humanity from nature, from the whole of life, leads to humankind’s own destruction and to the death of nations. Only through a re-integration of humanity into the whole of nature can our people be made stronger. That is the fundamental point of the biological tasks of our age. Humankind alone is no longer the focus of thought, but rather life as a whole…”

These words are from a 1934 text by German botanist Ernst Lehmann, cited by Peter Staudenmaier in a text co-authored with Janet Biehl about ‘eco-fascism’ in Germany’s history. Lehmann believed that National Socialism (i.e. Nazism) was ‘politically applied biology’, referring to that party’s adoption of genocide and eugenics as part of its political ideology and governance. Staudenmaier writes that some conservationists, eco-‘warriors’, and back-to-nature environmentalists who de-centre human perspectives and listen to Nature on its own terms are also proponents of a type of xenophobic nationalism. As much as these movements may seem like a posthumanist centring of the non-human world, the ‘return to nature’ narrative is a kind of enchanted fascism, a belief in ‘going back’ to a better before. But ‘before’ is usually a mythical, constructed time of local, ethnic and religious ‘purity’.

There are dragons lurking around every epistemological corner.


Artificial Physical Intelligence

Artist Anicka Yi imagines machines that have ‘intelligence’ — but not in an anthropomorphic sense. These machines have sensorial intelligence and evolve independently from humans — in a manner that is neither antagonistic nor dystopian nor utopian. She refers to them as ‘biological machines’ having Artificial Physical Intelligence, or API, as opposed to Artificial General Intelligence or AGI. So she builds complex artworks that combine diverse fields such as molecular biology, geochemistry, perfumery, machine learning and electronics to reach towards the sensorium of such machines. They can be difficult to describe without actually experiencing them; this may be part of what she intends. In Biologizing the Machine (Terra Incognita) at the 2019 Venice Biennale, Yi’s team installed mud from the city in acrylic vitrines together with bacteria that emitted a particular smell. The smell was being ‘learned’ by a machine learning system. Over time, and with changes in light, humidity and temperature, the bacteria interacted with other organisms in the soil, creating a vivid pigmentation visible in the vitrines. Thus the work captures the smell, the mud, the machine and time itself in communication with each other. Talking with Elvia Wilk, Yi says that her work on API “is not about making them [machines] similar to humans but let[ting] them evolve as their own species.” Wilk notes that we are already in a time when many technologies that we consider smart or intelligent are about machines communicating with each other; consider, for example, QR codes, two-factor authentication, the Internet of Things, and natural language processing applications.

Statistics and Machine Learning

By Andrea Kelemen

In the 18th century, rule-based systems were becoming prevalent. Notable at the time was Pierre-Simon Laplace’s pioneering of ‘scientific determinism’; he claimed that if an intellect could know all the forces that set nature in motion and could be intelligent enough to analyse this data, “for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” Laplace is also credited with developing Bayesian probability theory, which remains important in data science and modeling. This is one story about how the field of statistics was born.

In statistics, the probability of an event is inferred based on prior knowledge of conditions that might be related to the event. In other words, statistical models make use of rules to correlate seemingly disparate details of dynamic life and introduce perceived certainty to otherwise contingent systems. Both statistical inference and logical operations are essential tools of machine learning systems operating in highly contingent environments. As Ramon Amaro explains, machine learning uses statistical models to reduce complex environments by simplifying data into more manageable variables that are easier to calculate and thus require less computational power.
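This style of inference — updating a prior belief about an event given related evidence — can be sketched as Bayes’ rule. A minimal illustration in Python; the spam-filter scenario and all of its numbers are invented here purely for demonstration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# "Prior knowledge of conditions related to the event" is the prior;
# observing evidence updates it into a posterior.

def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Return the posterior probability P(H|E)."""
    return likelihood * prior / evidence

# Hypothetical numbers: prior belief that an email is spam is 20%,
# 70% of spam emails contain the word "offer", 10% of non-spam do.
prior_spam = 0.2
p_word_given_spam = 0.7
p_word_given_ham = 0.1

# Total probability of seeing the word at all:
# P(E) = P(E|spam)P(spam) + P(E|not spam)P(not spam)
p_word = p_word_given_spam * prior_spam + p_word_given_ham * (1 - prior_spam)

posterior = bayes_update(prior_spam, p_word_given_spam, p_word)
print(round(posterior, 3))  # belief the email is spam, after seeing "offer"
```

The point the essay makes is visible even in this toy: the output is only as good as the prior and the likelihoods fed in, all of which are themselves estimated from past observations.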

Machine learning is built upon a statistical framework. However, on a more basic level, statistical inference rests on logical assumptions that are questionable. According to the problem of induction, machine learning systems generalise based on a limited number of observations of particular instances, and presuppose that a sequence of events in the future will occur as it always has in the past. For this reason, the philosopher C. D. Broad writes: "induction is the glory of science and the scandal of philosophy." For David Hume, scientific knowledge is based on the probability of an observable outcome: the more instances, the more probable the predicted conclusions. The number of instances observed thus comes to stand in for truth: the more often something has occurred, the more certainly it is expected to occur again.

With AI and big data, this means we are inscribing our past, often outdated, models into the future by assuming that categories stay rigid and that events occur in the same manner over time. With the widespread use of logical operations in a growing number of fields that were previously never automated, we run into a whole array of problems: mistakes involving uncertainty or randomness, biased data sampling, poor choice of measures, mistakes involving statistical power, and a crisis of replicability.