Noise
Against the AI Arms Race
By Mindy Seu
DESIGNER AND RESEARCHER MINDY SEU—CREATOR OF THE CYBERFEMINISM INDEX—TRACKS ARTIFICIAL INTELLIGENCE ANXIETY AGAINST THE HISTORY OF TECHNOLOGICAL PROGRESS.
AI dominates headlines: “Elon Musk Sues OpenAI,” “U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI,” “A.I. Is Making the Sexual Exploitation of Girls Even Worse,” “E.U. Agrees on Landmark Artificial Intelligence Rules.” We’re living in a time of so much AI anxiety that it’s rated on the odds it could destroy humankind, otherwise known as “probability of doom,” or “p(doom).” Unlike automated industries of generations past, like the factory lines of the Industrial Revolution, the automation of intelligence also targets white-collar workers: the college-educated, writers, artists, programmers, mathematicians, and analogous specialists. Though chatter around it has swelled in recent memory, the term AI traces back seven decades to John McCarthy, who coined it, and Marvin Minsky, who popularized it, noting that “AI is the science of making machines do things that would require intelligence if done by men.” Like all tools, it was created to help humans. And like most modern tools, it was created to help us understand ourselves.
The tool itself isn’t dangerous. What matters is how the tool is used—by whom, who seeks to benefit, and who is exploited in its development. The economist David Autor argues that AI could save the middle class by narrowing the productivity gap between lower- and higher-skilled workers. The artist Sougwen Chung describes the potential for a human-machine partnership, outside of the screen and as an extension of the body. Both express the technology’s potential, posing hopeful alternatives to louder and more common fears about its future. If we map AI’s rise alongside preceding automation, the shifts in labor force and hand-wringing rhetoric reveal an uncanny echo.
Automation in the factory did in fact create sweeping change, when measured by metrics like speed, cost, competition, and surplus. “Most industrial labor today is still heavily manual, automated only in the sense of being hooked up to the speed of electronic networks,” writes Italian theorist Tiziana Terranova in her essay “Red Stack Attack!” This is the crux of automation by way of capital: Spare time is not filled with leisure, but absorbed to create new modes of work and control. This might reveal the misdirection of our anxiety—we fear the loss of jobs because we need them to survive.
Out of capitalism arose a violent binary: nature and moderns. The former contained the biosphere, Indigenous people, and women; they were considered resources that could be extracted from and exploited. The latter was a self-proclaimed elite that dominated under the guise of progress. As Kelly Pendergrast writes, “No resources are natural”—only through extraction by man does coal change from rock to fossil fuel. Who becomes disposable in the name of AI’s progress? That depends on which sorts of work are deemed mechanizable.
Almost every discussion about the true threat of AI raises the question of humanness—how much our nature matters in the workforce, whether that be in the charm of customer service or the creativity required to come up with an innovative idea. But there are instances, too, where that humanness is projected onto a technology. The Mechanical Turk of 1770—a person hidden inside a chess-playing automaton—is an early, infamous example in the lineage of humans-playing-machine. The term “Mechanical Turk” became shorthand for labor rendered invisible by high-tech tools. Amazon followed in that legacy, naming its class of micro-taskers—an outsourced, 24/7 workforce performing content moderation and data validation computers can’t do well (for now)—the Amazon Mechanical Turk.
Inordinate focus on output disregards input—the remote factories of thousands who train and retrain data. Alexandr Wang, one of the youngest CEOs to grace the cover of Forbes, founded Scale AI, whose workforce was described as the “picks and shovels” of the generative AI gold rush. In other words, humans—hundreds of thousands of them. Commodifying people like this hardly inspires hope for a people-first company, but it clearly indicates who reaps venture capital-funded success—the miners versus the mined.
AI systems are considered easily breakable when they cannot adapt to a condition outside a narrow, defined set. These outlier conditions are considered edge cases in training data. Simplifying the world for a dumb machine requires complex and tedious categorization. Usually, this is outsourced to English-speaking annotators in Kenya, Nepal, and the Philippines. Their pay fluctuates, surge pricing-style, with the number of available annotators and the requested turnaround time for data delivery. “Innovation” relies on their pursuit of unpaid training in the hope of spotty employment.
Wang’s profile was titled “Mobilizing AI’s Infantry,” so it’s apt that these workers are treated as foot soldiers—numerous and disposable—on the road to machine-learning oligopoly. It’s no surprise, then, that one of Scale AI’s biggest clients is the US military. Wang predicts that we’ll spend as much on human data as we do on computing power in the AI industry. “If we’re talking about agents that are designed and developed within commercial or state structures aimed toward surveillance, management of populations, and so forth,” writes Mashinka Firunts Hakopian in the LA Review of Books, “then our relations with them are most likely designed to be relations of asymmetrical power.”
These machines mirror their creators: motivated by capital and efficiency and progress at all costs. In Artforum, Hannah Baer describes how we humans believe ourselves to be the most intelligent species, subjugating everything we’ve deemed lesser than. We extend this analogy to artificial intelligence: If it were to become more clever, it would surely dominate us. As Baer writes, “We imagine a thinking computer that wants infinite power and is fueled by searing ambition, that seeks to conquer and control. We then feed the computer popular narratives (which mostly contain these themes and are rooted in these values) and express fear and dismay when they act competitively or jealously or seem to pursue self-enhancement.”
We’ve summoned a view of AI with metaphors of infantry, of data capture, of doom. To loosen the death grip of catastrophic future thinking, we must chart the efforts of those undermining abusive systems—finding ways to repurpose AI, or taking it outside of capitalism’s bounds. Sound artist Holly Herndon developed an artificial intelligence for her album Proto, automating live performances to humanize computer programs; along with her partner Mat Dryhurst, she’s created templates that allow creatives to opt out of AI training datasets, preventing their work from being scraped or imitated. Hyphen-Labs developed HyperFace, a wearable projector that fights against algorithmic facial recognition. And multimedia artist Stephanie Dinkins fed an AI three generations of Black cultural knowledge in an attempt to diversify the tech landscape. “We simply cannot afford to ignore or be repulsed by AI,” she told Time. “It is changing our world exponentially. At the very least, we have to acknowledge it and see what that means for our individual lives.”
The reality of these recent developments tells us one thing: AI is here to stay. In this arms race, profit-focused companies aim to be first in order to reap the majority of the industry’s rewards, prioritizing speed over safety. But when we only map AI to p(doom), we disregard its part in a holistic system that accelerates global social inequities, and its potential to be something greater. We’re at the early stages of AI’s proliferation. Considering how it can be democratized is perhaps the only way to maintain hope for the future of the technology—fighting not against the tool but against those who push its buttons. We have to think past our fear, interrogating what we can do about it, how we can voice our concerns, and how we’ll champion a more human future. After all, AI is a mirror.