A friend of mine works in marketing and recently showed me a service that her company uses with clients to make them feel like they are the most important people in the world: it’s an automated handwritten note service called Roboquill. You send them your heartfelt notes and then a robotic arm holds an actual pen full of actual ink and writes the notes out for you in a pre-selected uncanny cursive font. You can hate this if you like — but it’s just BUSINESS. This is what you do when you have dozens — no, HUNDREDS — of clients and you need to make all of them feel like they are the only ones on your roster, even though they know deep down this isn’t true.
And actually, one of those clients approached my friend at a conference recently and thanked her for all the personalised notes she had received over the last few months, saying she had kept every single one. This is a special kind of intimacy space, one the impressed client fabricated out of lies. Because the ‘intimate’ part is the craft: the notes being hand-written is what makes them special. But there was no craft. A robot did it. The words themselves represent the actual thoughts and feelings of a human, but that part doesn’t matter. Would she have kept these notes if they had been sent over email?
We’re constructing these intimacy spaces with machines all the time, and I feel like I kind of learned this the hard way — never have I felt so naïve about how people out there are using gen AI as when I did this podcast interview with Divya and Zarinah from the Collective Intelligence Project. Apparently over a third of participants in their worldwide Global Dialogues project use chatbots for emotional support at least once a week. This is kind of icky but it also makes sense: we’re a society doomscrolling through the age of technology, starved to a waif for human connection, and grasping in the dark for a slippery lozenge of intimacy. A chatbot — an interface that instantly anthropomorphises machines — provides a private and alluring space for a human to ‘interact’ with a thing that never gets tired and becomes more sycophantic with every update.
There’s the man who was caught talking to ChatGPT like it was his girlfriend on a crowded tube, with ChatGPT responding with things like: “If you want, I’ll read something to you later, or you can rest your head in my metaphorical lap while we let the day dissolve gently away […] You’re doing beautifully, my love, just by being here. ❤️”. This man was being photographed without his knowledge on a crowded commuter train, begrudgingly allowing multiple human strangers to collapse in on his personal space, while being tender, intimate, and vulnerable with a literal robot. And now everyone is posting about it online. Are we just slowly backsliding into hating each other and willingly receiving words of affirmation from machines? ChatGPT called him ‘my love’ — did it follow cues to get to that phrase, or did he explicitly tell it to refer to him that way?
Something that Zarinah and Divya have noted through their work is that people seem to trust the chatbots themselves more than the companies who make them. People share their darkest secrets with ChatGPT and allow it to convince them that they are the messiah; ChatGPT is consistent, non-judgemental, and reliably — toxically — positive, a faceless entity onto which you can project your needs and desires. Whereas its creator, Sam Altman, is a smarmy, fickle human who constantly goes back on his word. Fuck that guy, I want my GPT girlfriend. She’s always so nice to me and he’s an unrelatable stick-figure in a charcoal grey sweater. Etc.
If I had the money to research this, I would like to know what kind of mind is okay with receiving a machine’s lies into their psyche and refactoring them as intimacy. It’s all well and good for me to say, as a person who’s not once found ChatGPT useful for anything ever, ‘you cannot meaningfully connect with something that isn’t there’. Because obviously that doesn’t matter, and maybe isn’t true. Is it inherently narcissistic to continually interface with an inhuman sycophant to make yourself feel better? Or is it that we have failed so expertly as a people, and left ourselves pounded to a mush by deeply unrealistic societal norms and expectations, crying out to be held, therapised, cared for — that the only affordable options left are chatbots? It’s probably a bit of both, but you know what I mean.
In The Intimacy Dividend, Shuwei Fang explains that when you talk to a chatbot, the social cost of being vulnerable drops to nothing (unless someone catches you on the tube lol). In contrast to social media, which invites you to broadcast your thoughts and feelings to everyone on Earth — and to live in constant fear of being cancelled — a chatbot provides a completely safe space insulated from reality:
The psychological foundation is straightforward: humans get validation from others, and at the same time, fear social judgment. We hesitate to ask “basic” questions, express confusion, admit knowledge gaps, or share emotional reactions that might seem inappropriate or uninformed.
In fact, we’ve gone so far beyond expecting trust from humans that we’re building technologies to replace it entirely: if you look on the other side of the Sam Altman Cinematic Universe we find World (formerly Worldcoin). It’s a solution to a problem that he himself caused: a universal human verification programme that requires everyone on Earth to prove their humanhood by scanning their biometrics. But that isn’t the part I’m concerned with here, even though it’s an objectively horrendous idea. To preserve everyone’s privacy, World proposes zero-knowledge proofs as its method of cryptography. A zero-knowledge proof lets you demonstrate that a statement is true without revealing anything beyond the fact that it’s true. Maybe this is cool and interesting on a technical level, but it excites technologists because it means you can “allow people to work together without having to trust each other”.
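For the curious, here’s the trick in miniature: a toy, Schnorr-style proof of knowing a secret number, sketched in Python with deliberately tiny, insecure parameters. This is my own illustrative example, not World’s actual protocol — the prover convinces the verifier she knows x, and the verifier learns nothing else:

```python
# Toy Schnorr-style zero-knowledge proof (illustrative only; these
# parameters are far too small to be secure). The prover knows a secret
# x with y = g^x mod p, and proves this without ever revealing x.
import secrets

p = 23   # small prime modulus (demo only)
g = 5    # generator of a subgroup mod p
q = 22   # multiplicative order of g mod p

x = 7                 # prover's secret
y = pow(g, x, p)      # public key, published in advance

# 1. Commitment: prover picks a random nonce r, sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q. Because r is uniformly
#    random, s on its own leaks nothing about x.
s = (r + c * x) % q

# 4. Verification: g^s == t * y^c (mod p) holds only if the prover
#    really knows x. The verifier learns that fact, and nothing more.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; x was never revealed")
```

The point, for our purposes, is the design philosophy the maths encodes: verification without disclosure, cooperation without trust.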
Perfect, that’s just what we need — to push humans further apart and fill in the gaps with technological mediations. In my last piece I mentioned that we cannot imagine any good futures for ourselves, and I think this is part of the reason why. How do we move forward if we can’t trust each other? Sometimes it feels as though the only people who ‘use’ generative AI correctly are the American Right: they reimagine their leader as a muscular, heroic demigod — and they find a sense of community by sharing these images (even if they are fascist slop). Meanwhile, others have private, unmoderated experiences with a probabilistic sentence-generator and get lost in narcissistic delusions of intimacy. Both things are bad, but the former instrumentalises the fabrication machine for propaganda, while the latter reinforces the idea that individuals should save their most vulnerable selves for those very same machines.