“I apologize for the confusion,” the chatbots write when they make a mistake.
“The confusion.”
An unspecified confusion. A confusion that could have come from anywhere. A nameless, ownerless confusion just floating in the air that could have originated from either of us. Who is confused? Well, to tell you the truth, in all this excitement I’ve forgotten whether it was me who drew a hand with seven fingers or whether it was Dall-E.
The chatbots sound like someone in customer service. Like someone at the Apple Genius bar when you explain that your AirPods don’t connect to your iPhone anymore. “I apologize for the confusion,” is the sound of someone trying to shirk responsibility. Large Language Models are vastly complex, hugely expensive “I’m sorry you feel that way” fauxpology machines. Mistakes were made. Confusions were caused.
The thing is, a human wrote these apologies. The chatbots apologize in the same way they tell you off if you ask how to build a bomb or to make a naughty picture. The messages are guardrails that have been added. I say chatbots, plural, because they’re all like this. Dall-E, ChatGPT, Bard, the weird one from X/Twitter that swears at you when it gives you the answer. All of them give fake apologies when they mess up.
“With sufficiently artful double talk,” Bruce McCall wrote in the New York Times in 2001, “you can get what you want by seeming to express regret while actually accepting no blame.” He was talking about the Bush administration. But the tech industry also follows this playbook.
We can blame the technology for not being good enough. It is the fault of the neural networks that they repeatedly make mistakes that require apologies. The hallucinations, the hands with six fingers, the words that read “Hlone a hnin??” are the fault of the technology. But the faux apology is crafted. It is the product of an industry incapable of admitting fault.
“Consider checking important information,” OpenAI has written below the chat prompt. Consider. Wriggling away even from the act of validating the information. If you go away with wrong information, it’s on you for not considering checking it, not on us for making the mistake. This warning is written in size 12px. Above it, in size 24px, it says: “How can I help you today?” The caveat on Bing’s Copilot says “Surprises and mistakes are possible.” Not even a faux apology at the level of Richard Nixon’s “Mistakes were made”. For Bing, mistakes are only possible. They are a theoretical prospect. Plus, at the same time as downplaying the likelihood of mistakes, Microsoft redefines them as “surprises”. Maybe that fake citation for an imaginary paper wasn’t a mistake, but the unexpected gift you never knew you wanted.
What does it say, I wonder, that the first thing we have coded into our faltering steps toward AI is evasive defensiveness? Subtle attempts to avoid responsibility and shift blame. Gaslighting, almost. And all for no reason. Would it have been so bad to admit a mistake? “Sorry, I got that wrong”, “I apologize for that mistake”, instead of “I apologize for the confusion”. It’s a small decision. But “this is all your app is,” Jeff Atwood writes, “a collection of tiny details.” Tiny details and decisions.
The words you choose reveal your thoughts and beliefs. Perhaps I am tarring all of the tech industry with one brush, and extrapolating too much from one tiny wording decision. Perhaps you think this is too bold a statement to make so flippantly. If so: I apologize that you feel generalizations have been made.
It’s been a while since I wrote on Medium, but I have a new article: Software Used to Eat the World. The World Is Biting Back.
To be honest, I’ve never entirely understood what “software is eating the world” really means. When people quote it, they float it around as if it stands for something bigger. As if the eating part is key. I understand it even less when Jensen Huang, the CEO of Nvidia, says “Software is eating the world, but AI is going to eat software.” What does it mean for AI to eat software? Isn’t AI software too?
Why is everything so hungry? And why is it all trying to eat things that aren’t food? It feels almost too metaphorical for me. “Software” feels like a metaphor, “eat” is a metaphor, and “world” is a metaphor. I lose track of what they all stand for. In these analogies, things that don’t eat are trying to eat things that aren’t food. In a way, I suppose that is a fairly good summary of the modern world.
Once you see it, you start to notice the “eating” analogy popping up all over the place. In a meeting the other day someone spoke for someone else and then said “Sorry for eating your sandwiches.” On my computer, Chrome has eaten all my RAM.
The story of technology over the last four decades has been phenomenal hardware improvements followed by inefficient software slowly eating all those advancements. So for most daily usage, my computer seems no faster than it was 30 years ago. Software is eating our progress?
Elsewhere
Out of Time by Mandy Brown:
My phone sits across the room, notifications muted. It’s dead to me. A brick, for the hour.
The brick is a metaphor for something that does not or cannot move. But I like it because it also suggests something weighty, something heavy, something you can stack into a fence, a wall, a road. My house is made of bricks. The sidewalk outside my front door is made of bricks. The narrow street my house sits on—bricks. The tiny park across that street—brick lined. I am surrounded by bricks. Bricks are sturdy, dependable, unchanging. […]
Phones (and, I’d argue, other digital technology, and social media in particular) have an abundant sense of restlessness—I feel as if I am scurrying from one notification to the next like a hunted animal, one item in the feed, after another, after another, never stopping or lingering. Never resting. The word says it: restless, as in, without rest.
Ostensibly, this is an article about stopping to pause, about taking a moment, especially during creative activities. But it’s also about the way technology has snuck in to steal our time. The AirPods that blast music or podcasts into our ears during seconds of silence. The infinitely scrolling feeds that fill our downtime. The notifications that interrupt our conversations and mealtimes.
From MIT Technology Review, by Karen Hao, Stop talking about AI ethics. It’s time to talk about power.
Clever Hans, as he was known, could seemingly perform all sorts of tricks previously limited to humans. He could add and subtract numbers, tell time and read a calendar, even spell out words and sentences—all by stamping out the answer with a hoof. “A” was one tap; “B” was two; 2+3 was five. He was an international sensation—and proof, many believed, that animals could be taught to reason as well as humans.
The problem was Clever Hans wasn’t really doing any of these things. As investigators later discovered, the horse had learned to provide the right answer by observing changes in his questioners’ posture, breathing, and facial expressions. If the questioner stood too far away, Hans would lose his abilities. His intelligence was only an illusion.
This story is used as a cautionary tale for AI researchers when evaluating the capabilities of their algorithms. A system isn’t always as intelligent as it seems. Take care to measure it properly.
I spoke about the small decisions humans around our neural nets make, but I didn’t touch on whether AI even works that well.
This interview with Kate Crawford, author of Atlas of AI, is a smart analysis of AI that understands how impressive it is, while also comprehending what it is and how it is limited.
I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains and if we just train them like children, they will slowly grow into these supernatural beings. That’s something that I think is really problematic—that we’ve bought this idea of intelligence when in actual fact, we’re just looking at forms of statistical analysis at scale that have as many problems as the data that it’s given.
From The Ringer, by Jodi Walker, Exactly How Big Is Jack Reacher in ‘Reacher’?
In the history of hot, stony detectives, there have been those as slick as Reacher, those as quick as Reacher, but absolutely no one’s neck has been quite as incredibly thick as Reacher’s. In the third Jack Reacher book, Tripwire, Reacher’s pectoral muscles save his own life when their sheer density stops a bullet from entering his heart. Yes, technically Reacher is human, but his body is practically mythical.
A whole article about how big Jack Reacher is. Which is very big indeed. I think a lot of the silly pleasure from this is the increasingly deranged ways Walker finds to describe him as “big”.
“a beautiful series of boulders piled atop one another in a most pleasing form”
“he has the general shape and overall presence of a flashing exclamation point”
“his bare Hamburger Helper hands”
“this man with a Michael Phelps wingspan”
“a minimalist sort of man who happens to have a maximalist aesthetic”
“He carries himself with the general size, gravitas, and seating capacity of a Mercedes Sprinter van”
“a man who looks like someone took the four Hollywood Chrises, stacked two of them on top of one another, then stacked the remaining two on top of one another, and, finally, placed the two Chris-columns beside each other, wrapped the whole humongous cube up in a trench coat, and gave them Thanksgiving turkeys for hands”
I suspect I’ve probably missed some.
That’s all for this time,
Simon