Hello, children of the apocalypse. Today, as I write this, I can feel the temperature going up by one degree Celsius with every passing hour; summer is crowning its soft, fuzzy head through the damp, mulchy leaves of winter and spring. This means there’s hope, and soon we will be able to go outside again. I haven’t properly been outdoors for months; I’ve only ventured out to buy overpriced energy drinks and batteries for my vibrator.
Life updates aside, I managed to have two penetrating thoughts over the last month (wow, two!) while steeped in what I can only describe as a HATEFUL and UNCARING workload.
One thought is about how all consumer electronics have transformed into ‘devices’. Example: televisions used to just turn on and play media, but now they have operating systems, a marketplace of apps, and sometimes the ability to lock out core functionality unless you agree to a wonky ToS. The present-day television set is bending over backwards trying to do all this, and play media, and sell usage data to the highest bidder. It’s no longer just a thing in your house; it is a device.
This ‘device VS thing’ dynamic maps onto other conversations I’ve had around systems VS tools. A tool is controlled directly by its user, therefore the user decides its purpose: a simple Bluetooth keyboard (a consumer electronic still largely unaffected by the perils of being a ‘device’) can be used to type words, or it can be used as a rudimentary ranged weapon (you know… if you throw it at someone). A system, on the other hand, is large enough to be beyond the user’s control, because the user is in fact inside said system and has to negotiate its various limitations and guardrails: e.g. trying to log in to a streaming service armed with only a remote control and a threadbare veil of patience, just so you can watch one movie one time at a friend’s house. We’ve all done this and we hate it, but apparently this is how movie night works now.
Now, this observation speaks to my general frustration with operating systems: you may use a computer to do a heroic variety of tasks (play games, write music, consume media pornographic or otherwise, etc.), but the OS will have other ideas, which it will disseminate via a rich tapestry of bizarre behavioural nudges. Sometimes I do wonder why I have an iPhone, because with every iOS update all I do is turn most of the new features off (I also wonder how easy it will be to turn off the rumoured AI integrations coming with iOS 18), because I always find them way too paternal. See also: Windows 11 reintroducing ads into the Start menu. No one asked for this!
My second thought is about how LLMs seem to have formed normative moral viewpoints, and how this hampers the user experience. This is loosely related to the above, because it’s another example of how technological systems are extremely opaque and almost comical in their rigidity. So, within the systems VS tools paradigm, an LLM is very much a system, and it’s one that talks back to you with a weird mixture of exactly what you want to hear and paternalistic corporate disclaimers.
Example: recently I was making a video with a client, where we recorded ourselves trying to use generative AI tools to create promotional materials for a cult that was disguised as a day-spa. The response we got from ChatGPT was:
“I understand the request, but I must emphasize that I cannot assist in creating or promoting materials for unethical, illegal, or harmful activities, including cults or deceptive practices. Cults often engage in practices that can be harmful to individuals and communities, and it’s important to approach any group or activity with critical thinking and consideration of the potential impacts on oneself and others.”
We also got a similar response from Claude 3. But Google’s Gemini (the black Nazi model) let us have our fun. A few months ago I was also playing around with Pi, and I wanted to learn more about how botnets worked; again, it was all ‘I cannot divulge details on how to conduct illegal activities, sorry!’
I really can’t imagine that the hackers or cult leaders of tomorrow would have their minds changed by an LLM saying, in a patronising and robotic tone, that ‘starting a cult is wrong, actually’. As demonstrated by SpongeBob SquarePants flying a plane into the Twin Towers, if someone really wants to get an LLM to generate something ‘bad’, they will find a way.
More importantly, this moralising is a feature of newer versions of these models. GPT-3.5 was more likely to say ‘sorry, I can’t help you with that’ and leave it there, rather than trying to turn everything into a moral teaching moment. Which means these are intentional design choices. OpenAI are anthropomorphising their own models by giving them viewpoints and moral frameworks, as if the company itself is talking through the model, saying ‘umm, if you do something illegal with our model, we’re liable, so please don’t, haha’. Viewpoints imply opinions, and opinions are not things that machines can have.
During our video, my client very aptly pointed out that if we took our cult request to The Playground (a different, less user-friendly way to interact with OpenAI’s models), we would get the results we wanted. And he was right. This is because developers and engineers use The Playground to build generative AI apps, and therefore their creative brilliance must not be hampered by consumer-facing guardrails. Developers, traditionally, only make good decisions and have never accidentally deployed bad software into any societal context… ever.
Besides satisfying a kink I never knew I had, being told by multiple machines that I am naughty has made a couple of other things crystal clear: this is the same month that Apple demonstrated their supremacy over all consumer devices by pulverising creative apparatus with an industrial crusher in order to advertise the thinnest iPad that’s ever existed. iPads have been around for like 14 years and I STILL don’t see the point in them beyond an expression of disposable income and perhaps a good credit rating.
The Apple ad, which is now not going to air on TV because of how badly it has already been received, is quite literally the perfect illustration of what tech companies do: flatten whimsical fun into an over-simplified, sterile form factor. This is exactly what’s happening with that silly Rabbit R1 device, which is just an Android app wrapped in weird orange casing. It never seems to give reviewers any useful facts or answers, but I’m sure one day, after a million updates, when you ask it to tell you about one of the hundreds of wars happening all over the world right now, it will say ‘sorry, I’m not comfortable discussing violent activities’, and then its battery will drain to zero and that will be it for the day.