Douglas Adams (of “The Hitchhiker's Guide to the Galaxy” fame) was a genuine tech enthusiast and a keen observer of people’s interactions with technology. He shared his three rules on the topic in his posthumously published collection, “The Salmon of Doubt” (2002):
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
Nearly a quarter century on, those rules hold true for many of us (depending on where we fall on the Borg Spectrum). New technology should not be immune from criticism, but we should be aware that our perspective is inevitably shaped by our own experiences.
When Adams wrote his novel “Dirk Gently’s Holistic Detective Agency” (1987), there were already plenty of researchers working on artificial intelligence. Those researchers had been through the AI Winter of the 1970s and were about to enter another one that would last until the end of the millennium: periods when interest, funding, and progress in artificial intelligence stagnated. Adams died in 2001, at only 49 years old, so we never got to read his perspective on the current cycle of AI tools, but we can glean some ideas from his work.
In the Dirk Gently book, Electric Monks are sold as a sort of robotic “belief as a service” system. You can have them do all the tedious work of believing in something while you get on with life.
In this passage, Adams discusses their physical design:
When the early models of these Monks were built, it was felt to be important that they be instantly recognizable as artificial objects. There must be no danger of them looking at all like real people. You wouldn’t want your video recorder lounging around on the sofa all day while it was watching TV. You wouldn’t want it picking its nose, drinking beer and sending out for pizzas.
Most video recorders are long since retired, having earned their couch time recording endless episodes of Buffy and The X-Files, but the same objection applies to today’s AI chatbots. Companies are still giving their bots human names, and either staying vague about whether you’re talking to a machine or outright pretending the bot is a real person. If we’re generous, that deception might be intended to make bots feel more comfortable and helpful to talk to, but it can backfire badly when they cannot respond as a real human would.
When you know you are dealing with a machine, you can set your expectations more accurately, for better or worse. A bot can share useful information, and perhaps perform some actions for you. You don’t have to worry about it being impatient with your slow typing, and it won’t be upset if you revisit the same question six times.
But if you want someone to empathise with you, to consider bending a rule or providing some unofficial advice, you want a person. Even if they aren’t perfect, even if they can’t really help, there is value in another human listening and responding thoughtfully.
Let's call a bot a bot, at least for now.
This Could Have Been an Email
This article first appeared in The Supportive Weekly, Mat's email newsletter for anyone who wants to create better customer experiences. Subscribe now...it's not boring!