TECHOCENE: AGENTIC AI

AI MODELS LIKE OPERATOR AND ALEXA+ PROMISE TO STREAMLINE MUNDANE COMMUNICATIONS. WILL THESE DIGITAL ASSISTANTS BRING US FREEDOM, OR ERODE OUR SOCIAL SKILLS?

By Hannah Ongley

A few years ago, my therapist introduced me to Goblin Tools, a set of small-task aids designed to help neurodivergent people handle life’s most daunting challenges—like communicating efficiently and effectively with other humans 9,000 times per day, in between doing your actual job. Corporate etiquette requires a lot of performative politeness, which used to mean spending an embarrassing amount of time deciding whether to bookend the main point of the email with “Hello” + “Thanks!” or “Hi” + “Best,”. Most human interaction seems to be about jailbreaking people’s emotional settings, meaning we’ve always been prompt engineers in a sense. Take Dale Carnegie's self-help book How to Win Friends and Influence People, a sociopathic guide to getting others to behave by pretending to be interested in them, released all the way back in 1936.

In 2025, the release of agentic AI models like OpenAI’s Operator marks a shift from machines that serve us to machines that represent us. Unlike the algorithms that fetch our news or the robots that deliver our meals, they are capable of navigating the web and performing tasks autonomously, with a kind of mechanical grace that’s as unsettling as it is impressive. All this offers an exciting, if paradoxical, premise: If we can tell a computer how to behave like a human in the blunt, uncaring language of machines, the computer can handle the mundane aspects of our strategic communications, leaving us more bandwidth for meaningful social interactions. Imagine a highly capable intern that transforms your “that’s a terrible idea” into “I can see where you’re coming from, but what if we tried this instead?” Or a digital assistant planning your dinner party—asking guests for dietary restrictions, adjusting recipes accordingly, and getting the groceries—so you’re not too exhausted to participate in conversation.

Operator is just one of many AI agents being rolled out this year. OpenAI has also announced Deep Research, a specialized capability integrated with ChatGPT that autonomously browses the web for extended periods, generating cited reports on user-specified topics by interpreting and analyzing text, images, and PDFs. Amazon has introduced Alexa+, with enhanced natural language processing capabilities to handle more complex, multi-step tasks. And last week at Mobile World Congress 2025, Chinese smartphone maker Honor unveiled an AI agent capable of reading, understanding, and interacting with your screen. It can perform tasks such as booking a reservation via OpenTable, with a limited degree of success.

Unlike food delivery robots with those cute big eyes the LAPD uses to spy on you, these new digital assistants don’t have endearing WALL-E-like characteristics to deter you from kicking them or stealing their food. We’re bound to subject Alexa+ to verbal abuse—could that change the way we interact with people, more broadly, in the real world? On the other end of that spectrum, we’ve heard about the danger of falling in love with robots with human-like features. But plenty has also been said of the dangers of humans acquiring machine-like features. Joseph Weizenbaum, the MIT professor who invented the original chatbot ELIZA in 1966, has been declared a tech prophet by scores of media outlets since the launch of ChatGPT in 2022. Towards the end of his career, Weizenbaum, as Ben Tarnoff documented in a uniquely brilliant long-read for The Guardian in 2023, became a dissenter in the face of the computer revolution he helped create, growing skeptical of the idea that computers could change the status quo and declaring it a counterrevolution instead. Weizenbaum warned not only that machines would become more human-like, gaining consciousness as well as access to your Instacart account, but that humans might in turn become more like machines—rigid, unfeeling, and disconnected from life’s moral and emotional aspects.

ELIZA was a robot psychologist that won the affection of users with rote responses to inputs like, “I’m feeling depressed.” (“Why are you feeling depressed?”) With the chatbot, Weizenbaum had actually intended to demonstrate the superficiality of human communication. He pointed out that, if people in everyday interactions simply reworked and reflected back what they heard, we’d think there was something wrong with them. In a therapy session, this way of relating is seen as pensive and empathic. People loved talking to ELIZA. Today, with the rise of AI agents that can understand the nuances of human language, why should we bother talking to people at all? It’s not a rhetorical question, but one we should all genuinely reflect on. Early chatbots didn’t eliminate our need for meaningful, emotionally rich connections, even if it sometimes briefly felt like they did. And just because we can outsource some of the conflict that characterizes deep connections doesn’t mean we should. ELIZA’s rudimentary statement + response formula doesn’t compare to the magnificent scale of contextual awareness and cognitive leaping of human dialogue like, for example, “I’m leaving you” + “Who is she?”
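For a sense of just how thin that statement + response formula was, here is a minimal sketch of an ELIZA-style responder. It is not Weizenbaum’s original program (written in the 1960s, long before Python existed); the patterns, the reflection table, and the respond function below are illustrative assumptions only.

```python
import re

# A few first-person words and their second-person reflections, so "my job" becomes "your job".
# These entries are illustrative assumptions, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a pattern with a canned question that reworks the statement back at the user.
RULES = [
    (re.compile(r"i'?m (?:feeling )?(.+)", re.IGNORECASE), "Why are you feeling {0}?"),
    (re.compile(r"i need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"(.+)", re.IGNORECASE), "Please tell me more about that."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Match the statement against each rule in turn and echo it back as a question."""
    cleaned = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("I'm feeling depressed"))  # -> Why are you feeling depressed?
```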

Considering how vulnerable we are to being emotionally manipulated by machines, maybe it’d be a good thing if our digital assistants did best on a steady diet of cold, hard instructions. When we talk to other people, we are instinctively on guard. When we talk to computers, we have a propensity to pour our hearts out and give away our personal information. Beyond that, I’m sure there are negative consequences to instructing an AI agent to act as you, with all your interpersonal flaws. I mean, feeding a machine examples of your uniquely imperfect communication style as training data to talk to your boss or post online on your behalf sounds risky. But giving a human assistant access to your inbox, so they can read your polite lies and replicate their hallmarks in the future, definitely sounds super awkward. On balance, depending on your personal brand of anxiety disorder, picking the robot might be worth the risk.

In his book Computer Power and Human Reason: From Judgment to Calculation, Weizenbaum emphasized that certain tasks should never be outsourced to computers, regardless of whether or not they can complete them. It would be a “monstrous obscenity,” for example, to have a computer make a ruling in a court case or act as a psychiatrist in a clinical one. But both these instances are already coming to fruition. On one hand, agentic AI is reshaping healthcare, particularly diagnostics, with astounding implications for genomic data analysis that will save lives. On the other hand, in the UK, Court of Appeal judge Lord Justice Birss declared ChatGPT a “jolly useful tool” after using it to draft part of a ruling.

The real danger seems to lie in relying on AI to do our thinking for us. The more we embrace a machine-like approach to decision-making, the better we get at efficiently bulldozing human considerations like justice, compassion, and basic decency. Why waste time on moral judgment when you can streamline everything with a soulless algorithm?

More importantly, AI is destined to fail at any task that requires it to imagine truly new possibilities. A computer-generated solution is one that will maintain the status quo culturally, politically, and economically, extending paths that have been proven unsustainable. We need human creativity to imagine new ones. For the not-so-small tasks on humanity’s to-do list, like mitigating climate disaster, AI’s most important contribution might be its self-annihilation. (Despite DeepSeek’s energy-efficiency claims, AI-powered data centers continue to be a big problem for the environment.)

In his 1967 follow-up to his first article about ELIZA, Weizenbaum argued that no computer could ever fully understand a human being—and then took it a step further by claiming that humans can’t fully understand each other either. His reasoning was that we’re all shaped by a singular mix of life experiences that act like a personal filter, limiting our ability to truly grasp someone else’s reality. Language might help bridge the gap, but the same words can evoke entirely different meanings depending on who’s listening. And some things can’t be expressed at all. As Weizenbaum put it, “There is an ultimate privacy about each of us that absolutely precludes full communication of any of our ideas to the universe outside ourselves.”

Honor’s new tool, conversely, shows that AI agents are pretty boring if you peek under the hood. Machines will change the world, but it’s a waste of energy trying to understand them like we do people. If understanding another is striving to get closer to a truth you can never quite reach, we’re better off taking interest in AI agents for only as long as it takes to make them behave, saving any emotional investment for our human interactions—especially the uncomfortable, disharmonious ones that can challenge us to see from different perspectives, get past our limiting beliefs, and possibly change the world. It can’t hurt to practice being polite to the delivery robots, though, in the meantime.
