🤤 Why is existential risk so alluring?
We’re all doomed! God, isn’t there just something so hot about that?
Sorry, I’ve tried to ignore it, but the ‘AI is going to kill us all’ arguments have reached a comical inflection point and now I can’t stop thinking about the following things:
Why are there hype-critics out there who insist that AI is going to kill us unless we do something? I think this is fun for them tbh
I know that inventing future doom and complaining about it very loudly is a great way to ignore actual real problems that are happening right now — it’s also a great way to never imagine a good future for ourselves, like, ever
I want to look at how generative AI has created a new paradigm in user interfaces, and how this could signal that we aren’t doomed, actually
Before we start, in case some of you aren’t fully aware of what’s going on with Twitter right now…
I’m just going to summarise it really quickly because it’s so so funny:
After charging for Twitter Blue and its unattractive features, Elon decided to girlboss, gaslight, and gatekeep the site even harder by jacking up the price of the API
Result: many useful bots and disaster-tracking accounts disappear
Even more recent result: no API means that people have resorted to impolitely scraping data from the website. This makes sense, and was probably foreseen by many software engineers at Twitter who Elon routinely ignores while posting bad memes from 2013
All the activity from the scraping is obviously wildly inefficient for Twitter’s infra, so now they need to limit how many tweets you can read a day. Yes, infinite scroll has been obliterated. Social media is no more
Also, for some reason, Tweetdeck is broken, and for some other reason, they are relaunching it but only for verified customers
Oh and the other day Twitter DDoSed itself repeatedly
All of this chaos has pushed Meta to accelerate their launch of Threads (their Twitter rival) to the 6th of July, when originally it was meant to be at the end of the month
At the time of writing, BlueSky seem to be silent on this and are still operating as invite-only. They should really really open it up unless they want everyone to default to Threads, which I imagine will have the atmosphere of a critical surgery fundraiser party winding down at 9.30pm after only having raised 19% of their goal. Please don’t join Threads. Follow me on BlueSky instead: @geoiac.bsky.social
How do I control a computer? With commands, or with my intentions?
With the introduction of generative AI systems, we now have a new kind of user interface: you no longer have to give a computer a series of commands (e.g. open browser, navigate to this website, purchase overpriced stationery). You can now simply lay out your intentions (e.g. ‘I want some overpriced stationery’) and, in theory, get what you want. I think this is an important distinction, and it’s one that seems to be consistently ignored by those who fear generative AI like it’s a malevolent god with a short temper.
To demonstrate what I mean, let’s look at some of the ‘discourse’ swirling around the online toilet bowl these days. There’s a growing concern that the web is being eaten by junk websites created by AI. The researchers behind this claim say that “this is not healthy” and that there are many “prominent brands that are advertising on these sites and that are unwittingly supporting it.”
This analysis perfectly characterises the way a lot of people are talking about AI right now. It looks at a problem that already existed, and then exclaims that it’s all AI’s fault, and that it’s happening in an organic way which is beyond our control. It’s been about two decades since Google realised they could make money by measuring click-through rates. So, horrific spammy websites which exist purely for advertising purposes have already been around for quite some time — this is nothing new. Misinformation was already polluting our newsfeeds with clickbait; online scammers were already taking money from unsuspecting individuals. So, what is the complaint exactly? Is it that this stuff is happening, or that it’s just happening faster?
It’s odd to be concerned with, or surprised by, the speed at which underhanded money-making channels are invigorated by new technologies. If the invention of online advertising means you can technically make money by throwing up a few bad high-traffic websites, why would you not try and automate this if you could? Is this really such a shocking turn of events? Considering how ineffective online advertising is, and how people tend not to tolerate shitty glitchy websites, I really doubt this trend will continue.
But, there is this weird assumption that all these trends will indeed continue. Forever. Until we die. The Center for AI Safety, an organisation that most likely didn’t even exist eight months ago, has released a very short and shocking statement that has now been signed by hundreds of scientists and ‘other notable figures’. It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Notice how many CEOs of AI companies have signed this, and therefore, of course, welcome the idea that their new technology is so powerful that it can obliterate all human life.
Comparing AI to pandemics is in poor taste; we just had one of those so naturally everyone will agree that they don’t want the AI version of that any time soon. But comparing AI to nuclear war is completely unhinged: these people seem to enjoy the idea that a technology has put us on a path to rapid self-destruction, as if there’s something sexy about our own extinction. It’s odd that climate change, the thing that is actually going to kill us, doesn’t seem to concern them.
So there are a few things at play here, mixing together to make this stew of unavoidable doom. One is, as covered, the sheer speed at which this is happening. The other is the assumption that this speed, and therefore the current trajectory of AI development, will continue exactly as it is, in perpetuity. And the final thing is just how much these concerns rely on pure speculation, and sensationalist sci-fi fantasy claims. Let’s just unpack these three dynamics a bit.
When there is fear that this is all moving ‘too fast’ from those at the top — e.g. those who have a hand in designing AI technologies, or enough power to influence their path — it’s hard to take it seriously, because they have the ability to slow things down. This alleged fear of ‘moving too fast’ frames these technologies as inevitable rather than deliberate; that generative AI is ‘just happening’ to us, and it’s not that we are making it happen. Furthermore, announcing that something is ‘too fast’ sort of downplays the power of human adaptability. Society has to adapt to new innovations all the time. When trains were first a thing, Victorians constructed fantastic moral panics over how their speed had an effect on the mental health of passengers. I’m not saying that’s what’s happening here, but I am saying that we definitely got over it and we love trains now.
The second dynamic shrouding our future in doom is the assumption that current trends in AI will continue, no matter what. This one really demonstrates a lack of like… basic analytical thinking. It’s simply impossible for any ‘trend’ to continue at the same rate forever. This all reminds me of the poorly structured arguments from the 1968 book The Population Bomb which said that if we kept breeding at our current rate, the Earth would become so full of humans that our collective body heat would be able to melt iron. Saying something like this is so unhelpful; it’s kind of just a weird and pointless factoid that you grasp for at parties if the conversation runs dry — ‘did you know that if this one insane impossible thing happens, another even more insane impossible thing will happen after that?’.
This is the same as framing AI as something as dangerous as nuclear war: it doesn’t help people actually understand whatever the problem is, because all it does is imagine a future that is completely grounded in a nonsense-fantasy. There isn’t going to be a future where we are all standing shoulder to shoulder, boiling ourselves alive, so why are we even talking about that?
This leads us nicely onto the third dynamic, where any imaginings of the future are all pretty extreme, and always to the negative. If you actually want to create change that will benefit humanity in the long term, it’s not useful to narrow your focus to one possible future — especially one where we all go extinct. There seems to be an aversion to imagining good futures, and aiming for those. Isn’t it much harder to imagine a Hollywood-style extinction event and then just sort of try to avoid it? Wouldn’t it be better to aim for something solid instead of just aiming for ‘anything but extinction’?
Assuming that if given too much power, machines will deem us irrelevant and passively or actively eradicate us in order to extend their own survival is a very astringent way to anthropomorphise. Rich and influential capitalists tend to be individualistic and untrusting, and it seems that they are projecting these attributes onto potential future versions of AI, painting machines as ruthless expansionists whose ultimate goal is self-preservation. But why would that be their ultimate goal? Why do we assume that we will lose control, and that the goal of machines will be a very human one?
Help me avoid my own extinction by subscribing and donating money to my writing efforts. Thanks!
The ‘human extinction’ stance is even less convincing when you look at how AI has created a new user interface paradigm. The current paradigm is where we issue separate commands to a computer to achieve a desired result, such as my example above, where you open your browser, navigate to a website, and then buy overpriced stationery. You will not get the overpriced stationery unless you give all those commands in succession: you can navigate to the website and put the stationery in the basket, but you won’t get it until you actually click ‘buy’.
What generative AI has given us is ‘intent-based outcome specification’, which is a UI where you no longer have to issue a series of commands, but rather simply state your intention: ‘I want overpriced stationery’, and then some backend processes will take place resulting in you getting your stationery. It’s obviously nowhere near that advanced yet, because you still have to structure prompts quite delicately to get the desired output.
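If you like, the two paradigms can be sketched in a few lines of toy code. Everything here is made up for illustration — there’s no real shopping API called `Browser` or `Assistant` — but it shows the distinction: in the command-based world, nothing happens unless you issue every step yourself; in the intent-based world, you state an outcome and the steps get chosen somewhere you can’t see.

```python
class Browser:
    """Command-based UI: does nothing unless told each step explicitly."""
    def __init__(self):
        self.log = []

    def open(self):
        self.log.append("open")

    def navigate(self, url):
        self.log.append(f"navigate:{url}")

    def add_to_basket(self, item):
        self.log.append(f"basket:{item}")

    def buy(self):
        self.log.append("buy")
        return "order placed"


class Assistant:
    """Intent-based UI: the user states an outcome; the system picks the steps."""
    def fulfil(self, intent):
        # The planning happens out of the user's sight -- which is exactly
        # where the room for unpredictable machine behaviour lives.
        browser = Browser()
        browser.open()
        browser.navigate("https://overpriced-stationery.example")
        browser.add_to_basket("fountain pen")
        return browser.buy()


# Command-based: forget the final 'buy' and you get nothing.
b = Browser()
b.open()
b.navigate("https://overpriced-stationery.example")
b.add_to_basket("fountain pen")
# no b.buy() -> no stationery for you

# Intent-based: one statement of intent, steps chosen behind the scenes.
result = Assistant().fulfil("I want some overpriced stationery")
```

The interesting (and slightly unnerving) part is that the intent-based version runs the exact same steps — it’s just that nobody asked it to run those particular ones.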
But, an intent-based UI opens up a range of possibilities for us: in order to maintain control over computers and machines, we just have to be clear about what we want. In this frame, I think that this is only achievable if we design for a future that consists of many purpose-built machines, rather than a single monolithic system that can do everything. Because if you’re using machines to achieve your goals by only outlining intent, you leave a lot of room for the machine to ‘decide’ what steps to take in between — and if it’s a machine that can ‘do anything’, its process will be very difficult to predict.
There should be no space for harmless well-meaning human desire to be satisfied with harmful means. E.g. if your intent is to have ten litres of fresh water delivered to your house, it would be reasonable to expect that a machine wouldn’t steal this from someone else’s house, or siphon it away from a farm or something. The ‘fetch water’ mechanism should be, in theory, very limited in what it can do. But who’s responsible for these limits? Is it down to the user to make sure they always add in, ‘but don’t kill anyone in the process’ or should it be up to those who build and design these systems to hard-code the limits in?
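To make the ‘hard-coded limits’ option concrete, here’s a minimal sketch of what a purpose-built water-fetching machine might look like if its designers baked the limits in. The allowed sources and the 50-litre cap are invented numbers for illustration, not anyone’s real policy — the point is that the user never has to remember to say ‘but don’t steal it’, because the machine can’t.

```python
# Sources the designers have approved in advance -- the user cannot add to this.
ALLOWED_SOURCES = {"municipal supply", "own rainwater tank"}

def fetch_water(litres, source):
    """A purpose-built machine with its limits hard-coded by the designer."""
    # Limit 1: refuse any source that wasn't explicitly approved,
    # no matter how politely (or impolitely) the user asks.
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"refusing to take water from: {source}")
    # Limit 2: refuse suspiciously large requests outright.
    if litres > 50:
        raise ValueError("refusing unusually large requests")
    return f"{litres}L delivered from {source}"
```

So `fetch_water(10, "municipal supply")` just works, while `fetch_water(10, "neighbour's house")` fails regardless of the user’s intent. That’s the whole argument in miniature: the narrower the machine, the easier it is to enumerate what it must never do.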
Personally I think putting the responsibility on the user is both unfair and potentially dangerous (because humans could easily intend to inflict harm), but having safety features baked in would also be extremely challenging. It’s hard to prepare for every eventuality — and it’s also not really on the agenda. Looking back at the people who signed the shocking statement from the Center for AI Safety, we see that a lot of them are people who design AI systems, such as Sam Altman. They want to frame their technologies as powerful and unpredictable, because they don’t want to take responsibility for the outputs of their machines, or the paths that the machines took to get to those outputs. They would rather paint the inner workings of these machines as mysterious, fantastic, and beyond our reckoning.
A truly intent-based UI, with proper limitations in place, would not be sexy and alluring, because it would only result in our extinction if we literally asked for it. Any harm that came to us would be intentional — which means an intent-based UI might be unattractive to someone with bad intentions, because it would make it pretty hard for them to dodge blame. So, the reason for the increase in glitchy clickbait websites is that there are a few people out there whose intention is to make a lot of money from online advertising very quickly, and that has been made much easier to do with AI. What we’re looking at here is a closing gap between what people wish they could achieve with technology, and what they can achieve. There are systemic problems which lead to people even wanting to make money in this way, but no one seems to want to address these.
The current creators of AI systems also don’t want to limit themselves to designing purpose-built machines for specific use-cases. They want generalist systems that can be anything to anyone, because they want that gap between human desire and what is technically possible to be completely gone — and for human intention, which has the potential to be harmful, to take a back seat as an unfortunate but unavoidable feature of life with AI.