🪞 Stop striving for authenticity
It left us a long time ago. You’re never getting it back. Sorry.
Hello Best Online Friends (BOFs). Last week I told you that I hate paywalls, and that I’d rather just keep Horrific/Terrific open for everyone — and that if you just want to donate to support my writing, that would be amazing. Then I screwed up the links in the buttons so that it didn’t take you to the right place to donate. Sorry about that. If you do want to donate to keep this newsletter High Quality™️ then please use this button below that actually works:
Just to be clear: paying will not get you more stuff. This payment will just be to help me keep this newsletter as best as it can be at the moment. Thank you!
Anyway, this week we have:
Google’s contextual search update — now you can talk to adverts like they’re your friends!
If you’re worried about losing sight of what’s real, don’t worry: nothing was ever real anyway.
I think the ‘promise’ of AI and the perpetual lies spun by capitalism are almost indistinguishable: the purveyors of AI tell us that if we use it enough, we will be fully optimised humans. The fundamental lie driving capitalism is that if you work hard enough, you will be wealthy. All of this is of course wrong. You will never be wealthy; you will never be optimised (but that’s okay).
Becoming ‘optimised’ and living a frictionless life where even the way you inhale and exhale somehow raises your IQ is comically unattainable. This week I read a piece in The Convivial Society which referenced Jacques Ellul (some old guy who wrote a book), and his concept of doing things in ‘the one best way’. This is generative AI all over — the idea that you can ask a complex question and receive one perfect answer. Or that you can utilise AI to make yourself better and more productive. A quote from The Convivial Society:
“One under-appreciated consequence of believing there is such a thing as the ‘one best way’ in every aspect of life is subsequently living with the unyielding pressure to discover it and the inevitable and perpetual frustration of failing to achieve it.”
I’m thinking about this right now because of the latest AI update to come out of Google Search: adverts which are dynamically created by AI, based on the context of your search query. It’s easier to show you than to describe it:
Here we see a demo of a guy literally having a conversation with some adverts. He asks for ‘hiking backpacks for kids’ and the AI generates some text which kind of just describes what a hiking backpack is, and then provides bullets on some factors to consider, such as how much the backpack can hold, etc. What Google is trying to do here is provide ‘the one best way’ to find ‘the one best backpack’. But here, the ‘simple and straightforward’ answer has only made things unnecessarily complicated:
A backpack isn’t a piece of specialist kit — I don’t really need a computer to tell me ‘what factors to consider’; the factors should come from my preferences, surely
Ultimately, with this mechanism, you are interfacing with an ad; will it really give you the ‘one best’ thing, or will it just try and sell you something as quickly as possible?
With something like this, I can see myself trying to figure out how the machine wants me to talk in order for me to get what I want.
Maybe that last bullet isn’t true, because the whole point of this is that it’s ‘conversational’; you can talk to it like a person and it will understand what you mean. This distinction is what seems to be driving critics insane. I agree that when talking to a robot, it should feel like you are talking to a robot, and not a human. In fact, it’s sort of patronising to be constantly reassured by corporations that the big scary machines are going to get warmer and friendlier. I do not need warm and friendly vibes from machines… I need that from humans??
However, recently these criticisms have gone way too far. Take this fear-mongering piece in The Economist from April, which basically says that AI has hacked the planet and we’re all doomed. This piece is written by someone who thinks that before the emergence of generative AI, we were all living pure authentic lives free from machine abstraction. According to him, machines are now completely indistinguishable from humans online, and they are all making fools of us.
I think that these narratives make it easy to read striving for ‘the one best way’ as striving for authenticity. People want to have real conversations with real people; people don’t want to have to adapt themselves in order to work with machines; and people certainly don’t want to be scammed into submission by very convincing bots.
The thing is, authenticity took a permanent vacation from humanity nearly a century ago — we haven’t had any of it since the invention of propaganda, advertising, and PR (which are just three different ways of saying the same thing tbh). Celebrities and politicians speak in riddles; corporations bend the truth about the quality of their products; individuals sheen themselves with success and happiness on social media. We have also been adapting ourselves to suit machines for ages now: in the mid-20th century workers learned to type, then fifty years later we all learned to type again but this time on our phones — and in a new kind of shorthand. Think about the daft non-sentences you write into Google Search — they represent a way of expressing ourselves that was once completely novel and bizarre. But no… conversational AI (where you don’t have to adapt your language) is a step too far, apparently.
The Economist piece assumes that every internet user is one Facebook post away from being radicalised into a QAnon follower — it demonstrates a lack of respect for the general public, and it also over-estimates the abilities of conspiracy-theorist groups. One of my new favourite writers on Substack says: “Isn’t the limiting factor on the spread of QAnon its inherent stupidity?” and I think we all need to remember this. I think that often, people are smart enough to know what’s real and what isn’t.
This futile struggle for authenticity reminds me of this stupid sofa making the rounds on Twitter. Someone on TikTok found a blue sofa on the street, and apparently it looks just like a certain expensive designer sofa worth thousands of pounds. People have been arguing over the sofa’s authenticity — is it a real designer sofa left on the street, or is it just a knock off? To me, this is the wrong authenticity question. Really we should be asking if the TikToker staged this whole thing for clout. Or maybe the designer did it as part of a very stupid guerrilla marketing campaign to raise awareness about their boring overpriced piece of furniture.
Of course, none of those questions are worth thinking about either. There are more important things to worry about than the authenticity of this couch. Building your life around something that will never happen, whether it’s full human optimisation or ‘authenticity’, is a very sad way to exist. May I suggest: a truly convincing simulation of something real will be impossible to tell apart from the real thing — in which case, surely you can just enjoy it??