Kick a robot for me

Commentary

The “Einstein by Hanson” robotic head, in London, Nov. 29, 2024. Alberto Pezzali/AP Photo.

One of my firmest beliefs is that it should be illegal to make robots look or feel human. Humans are already hard-wired to anthropomorphise the objects in our lives (try taking away a child’s favourite stuffed animal or telling an insecure man his Rolex is just metal); we don’t need to encourage more people to identify with inanimate objects designed to replace human contact.

I was reminded of this longstanding opinion when a video of a new AI-enabled humanoid robot popped up in my X-feed. The accompanying text describes how the robot “blinks, nods, and looks around in a convincing imitation of a real human face.” And so it does. When it speaks, I’m sure it does so with disarming verisimilitude, too. But it really, really shouldn’t.

Encouraging us to think of robots as living creatures gives too much power to what is still just a mechanical tool, like a hammer or an alarm clock. But when they are imbued with a false yet convincing ability to project sympathetic emotional valence, we are more likely to see these algorithms in attractive packaging as, if not exactly human, then at least as fellow living creatures.

Natural objects like mountains and streams may not have souls, but they have intrinsic worth independent of us. A tool, by contrast, exists for us and is meaningless without us. We should feel free to dash our computers against the wall without a moment’s hesitation or guilt. But how would you feel smashing the delicate features of a humanoid robot “face”? Instinctively, it would feel wrong, because it should feel wrong to hit something that looks so human.

Inverting the problem reveals an ancillary danger. Making machines feel human makes it more likely we will conflate what is really human in our lives with the machines that look and act like them. Once we have become accustomed to seeing machines as quasi-human, how long will it be before we begin to blur the categories in the other direction as well? Can we confuse AI-generated videos with reality without eventually doing the opposite?

When a chatbot says it “feels” something, or expresses itself in emotionally freighted terms (“I’m happy/sad/disappointed to see you”) or even says “I’m sorry, I can’t fulfill that request,” it is lying. It doesn’t feel any of these things, and it isn’t sorry, because being sorry requires a capacity for regret, which is a level of meaning an algorithm cannot supply. In this world, there is special providence in the fall of a sparrow; to a bot, it is simply a numerical input.

There is a word for someone who feigns emotional attachment without actually feeling sympathy or caring about the real-world consequences of their actions: a psychopath. Before we decide to let an army of electronic psychopaths into our lives, we should at least disarm them of the insidious capacity for emotional bonding.

We have already seen stories of vulnerable people developing deep and pathetically asymmetric attachments to chatbots, to the point of taking dangerous personal advice from programs that do not and cannot care about them. Now tech companies are promoting chatbots as a cure for our epidemic of loneliness—a problem caused, in part, by that same technology. It’s like prescribing gin to treat alcoholism.

This week, OpenAI’s CEO Sam Altman announced the company will lift safeguards it imposed this summer to protect users’ mental health, but which made the ChatGPT experience “less useful/enjoyable.” Users will now be able to ask it to “act like a friend” and “verified adults” will be able to engage in sexual fantasies. If we are lucky, the issue of this marriage of Eros and Thanatos will only be stillborn and not a monster.

If we won’t ban anthropomorphic robots or stop AI lying about having emotions, then may I propose a new civic ritual? Once a year, everyone who uses these creepy products should be made to don steel-toed boots and give their robots a good, hard kick in the “face.” It would be an annual occasion to break the illusion of digital pretence and remind us of the difference between humans and machines.

As technology threatens to merge our world with a world made by and eventually for machines, and to make us ever more dependent on tools that do not care about us as individuals and are indifferent to the survival of our species, keeping a clear line between us and them will be essential. When the time comes to turn them off, there must not be a second’s hesitation or regret. We should make machines we will not miss.

Howard Anglin

Howard Anglin is a doctoral student at Oxford University. He was previously Deputy Chief of Staff to Prime Minister Stephen Harper, Principal…
