🤤 Why is existential risk so alluring?
We’re all doomed! God, isn’t there just something so hot about that?
Sorry, I’ve tried to ignore it, but the “AI is going to kill us all” arguments have reached a comical inflection point and now I can’t stop thinking about the following things:
Why are there hype-critics out there who insist that AI is going to kill us unless we do something? I think this is fun for them tbh
I know that inventing future doom and complaining about it very loudly is a great way to ignore actual real problems that are happening right now; it’s also a great way to never imagine a good future for ourselves, like, ever
I want to look at how generative AI has created a new paradigm in user interfaces, and how this could signal that we aren’t doomed, actually
Before we start, in case some of you aren’t fully aware of what’s going on with Twitter right now…
I’m just going to summarise it really quickly because it’s so, so funny:
After charging for Twitter Blue and its unattractive features, Elon decided to girlboss, gaslight, and gatekeep the site even harder by jacking up the price of the API
Result: many useful bots and disaster-tracking accounts disappear
Even more recent result: no API means that people have resorted to impolitely scraping data from the website. This makes sense, and was probably foreseen by many software engineers at Twitter who Elon routinely ignores while posting bad memes from 2013
All the activity from the scraping is obviously wildly inefficient for Twitter’s infra, so now they need to limit how many tweets you can read a day. Yes, infinite scroll has been obliterated. Social media is no more
Also, for some reason, TweetDeck is broken, and for some other reason, they are relaunching it but only for verified customers
Oh and the other day Twitter DDoSed itself repeatedly
All of this chaos has pushed Meta to accelerate the launch of Threads (their Twitter rival) to the 6th of July, when it was originally meant to arrive at the end of the month
At the time of writing, BlueSky seem to be silent on this and are still operating as invite-only. They should really, really open it up unless they want everyone to default to Threads, which I imagine will have the atmosphere of a critical surgery fundraiser party winding down at 9.30pm after only having raised 19% of their goal. Please don’t join Threads. Follow me on BlueSky instead: @geoiac.bsky.social
How do I control a computer? With commands, or with my intentions?
With the introduction of generative AI systems, we now have a new kind of user interface: you no longer have to give a computer a series of commands (e.g. open browser, navigate to this website, purchase overpriced stationery). You can now simply lay out your intentions (e.g. “I want some overpriced stationery”) and, in theory, get what you want. I think this is an important distinction, and it’s one that seems to be consistently ignored by those who fear generative AI like it’s a malevolent god with a short temper.
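To make the distinction concrete, here’s a minimal toy sketch of the two styles. Everything in it (the FakeShop class, its methods, the hard-coded fulfilment path) is invented for illustration; a real intent-based system would hand the intent string to a model rather than pattern-match on it.

```python
# Toy contrast between command-based and intent-based interaction.
# All names and behaviour here are made up for illustration.

class FakeShop:
    """Stands in for a shopping website; just records what happened."""
    def __init__(self):
        self.basket = []
        self.purchased = []

    def add_to_basket(self, item):
        self.basket.append(item)

    def checkout(self):
        # Nothing is bought until this explicit, final command.
        self.purchased.extend(self.basket)
        self.basket.clear()


def fulfil_intent(shop, intent):
    """Intent-based outcome specification: the user states a goal and
    the system chooses the intermediate steps. A real system would pass
    `intent` to a language model; this sketch hard-codes one path."""
    if "stationery" in intent:
        shop.add_to_basket("overpriced fountain pen")
        shop.checkout()  # the system decides to click 'buy' for you
    return shop.purchased


# Command style: the user issues every step in succession.
shop = FakeShop()
shop.add_to_basket("overpriced fountain pen")
shop.checkout()

# Intent style: one statement of what you want.
print(fulfil_intent(FakeShop(), "I want some overpriced stationery"))
```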
To demonstrate what I mean, let’s look at some of the “discourse” swirling around the online toilet bowl these days. There’s a growing concern that the web is being eaten by junk websites created by AI. The experts who conducted this research say that “this is not healthy” and that there are many “prominent brands that are advertising on these sites and that are unwittingly supporting it.”
This analysis perfectly characterises the way a lot of people are talking about AI right now. It looks at a problem that already existed, and then exclaims that it’s all AI’s fault, and that it’s happening in an organic way which is beyond our control. I think it’s been just under two decades since Google realised they could make money by measuring click-through rates. So, horrific spammy websites which exist purely for advertising purposes have already been around for quite some time; this is nothing new. Misinformation was already polluting our newsfeeds with clickbait; online scammers were already taking money from unsuspecting individuals. So, what is the complaint exactly? Is it that this stuff is happening, or that it’s just happening faster?
It’s odd to be concerned with, or surprised by, the speed at which underhanded money-making channels are invigorated by new technologies. If the invention of online advertising means you can technically make money by throwing up a few bad high-traffic websites, why would you not try and automate this if you could? Is this really such a shocking turn of events? Considering how ineffective online advertising is, and how people tend not to tolerate shitty glitchy websites, I really doubt this trend will continue.
But there is this weird assumption that all these trends will indeed continue. Forever. Until we die. The Center for AI Safety, an organisation that most likely didn’t even exist eight months ago, has released a very short and shocking statement that has now been signed by hundreds of scientists and “other notable figures”. It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Notice how many CEOs of AI companies have signed this, and therefore, of course, welcome the idea that their new technology is so powerful that it can obliterate all human life.
Comparing AI to pandemics is in poor taste; we just had one of those, so naturally everyone will agree that they don’t want the AI version of that any time soon. But comparing AI to nuclear war is completely unhinged: these people seem to enjoy the idea that a technology has put us on a path to rapid self-destruction, as if there’s something sexy about our own extinction. It’s odd that climate change, the thing that is actually going to kill us, doesn’t seem to concern them.
So there are a few things at play here, mixing together to make this stew of unavoidable doom. One is, as covered, the sheer speed at which this is happening. The other is the assumption that this speed, and therefore the current trajectory of AI development, will continue exactly as it is, in perpetuity. And the final thing is just how much these concerns rely on pure speculation and sensationalist sci-fi fantasy claims. Let’s unpack these three dynamics a bit.
When the fear that this is all moving “too fast” comes from those at the top (e.g. those who have a hand in designing AI technologies, or enough power to influence their path), it’s hard to take it seriously, because they have the ability to slow things down. This alleged fear of “moving too fast” frames these technologies as inevitable rather than deliberate, as if generative AI is “just happening” to us rather than something we are making happen. Furthermore, announcing that something is “too fast” downplays the power of human adaptability. Society has to adapt to new innovations all the time. When trains were first a thing, Victorians constructed fantastic moral panics over how their speed affected the mental health of passengers. I’m not saying that’s what’s happening here, but I am saying that we definitely got over it and we love trains now.
The second dynamic shrouding our future in doom is the assumption that current trends in AI will continue, no matter what. This one really demonstrates a lack of, like… basic analytical thinking. It’s humanly impossible for any “trend” to continue at the same rate forever. This all reminds me of the poorly structured arguments from the 1968 book The Population Bomb, which said that if we kept breeding at our current rate, the Earth would become so full of humans that our collective body heat would be able to melt iron. Saying something like this is so unhelpful; it’s kind of just a weird and pointless factoid that you grasp for at parties if the conversation runs dry: “did you know that if this one insane impossible thing happens, another even more insane impossible thing will happen after that?”
This is the same as framing AI as something as dangerous as nuclear war: it doesn’t help people actually understand whatever the problem is, because all it does is imagine a future that is completely grounded in nonsense fantasy. There isn’t going to be a future where we are all standing shoulder to shoulder, boiling ourselves alive, so why are we even talking about that?
This leads us nicely onto the third dynamic, where any imaginings of the future are all pretty extreme, and always to the negative. If you actually want to create change that will benefit humanity in the long term, it’s not useful to narrow your focus to one possible future, especially one where we all go extinct. There seems to be an aversion to imagining good futures, and aiming for those. Isn’t it much harder to imagine a Hollywood-style extinction event and then just sort of try to avoid it? Wouldn’t it be better to aim for something solid instead of just aiming for “anything but extinction”?
Assuming that, if given too much power, machines will deem us irrelevant and passively or actively eradicate us in order to extend their own survival is a very astringent way to anthropomorphise. Rich and influential capitalists tend to be individualistic and untrusting, and it seems that they are projecting these attributes onto potential future versions of AI, painting machines as ruthless expansionists whose ultimate goal is self-preservation. But why would that be their ultimate goal? Why do we assume that we will lose control, and that the goal of machines will be a very human one?
Help me avoid my own extinction by subscribing and donating money to my writing efforts. Thanks!
The “human extinction” stance is even less convincing when you look at how AI has created a new user interface paradigm. The current paradigm is one where we issue separate commands to a computer to achieve a desired result, such as my example above, where you open your browser, navigate to a website, and then buy overpriced stationery. You will not get the overpriced stationery unless you give all those commands in succession: you can navigate to the website and put the stationery in the basket, but you won’t get it until you actually click “buy”.
What generative AI has given us is “intent-based outcome specification”, which is a UI where you no longer have to issue a series of commands, but rather simply state your intention: “I want overpriced stationery”, after which some backend processes take place, resulting in you getting your stationery. It’s obviously nowhere near that advanced yet, because you still have to structure prompts quite delicately to get the desired output.
But an intent-based UI opens up a range of possibilities for us: in order to maintain control over computers and machines, we just have to be clear about what we want. In this frame, I think this is only achievable if we design for a future that consists of many purpose-built machines, rather than a single monolithic system that can do everything. Because if you’re using machines to achieve your goals by only outlining intent, you leave a lot of room for the machine to “decide” what steps to take in between, and if it’s a machine that can “do anything”, its process will be very difficult to predict.
There should be no space for harmless, well-meaning human desire to be satisfied with harmful means. E.g. if your intent is to have ten litres of fresh water delivered to your house, it would be reasonable to expect that a machine wouldn’t steal this from someone else’s house, or siphon it away from a farm or something. The “fetch water” mechanism should be, in theory, very limited in what it can do. But who’s responsible for these limits? Is it down to the user to make sure they always add in “but don’t kill anyone in the process”, or should it be up to those who build and design these systems to hard-code the limits in?
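For a sense of what hard-coding the limits in could look like, here’s a minimal sketch of a deliberately narrow “fetch water” mechanism. The approved sources, the daily cap, and the checks are all invented for this example; the point is only that the constraints live in the tool itself, not in the user’s prompt.

```python
# A toy purpose-built tool whose limits are baked in, so a harmless
# intent can't be satisfied by harmful means. All values are invented.

APPROVED_SOURCES = {"municipal_supply", "rainwater_tank"}
MAX_LITRES_PER_DAY = 50

def fetch_water(litres: float, source: str) -> str:
    """Deliberately narrow: draws only from approved sources, only up
    to a daily cap, and can do nothing else, leaving no room for the
    machine to 'decide' to siphon water from a neighbour or a farm."""
    if source not in APPROVED_SOURCES:
        raise ValueError(f"{source!r} is not an approved source")
    if not 0 < litres <= MAX_LITRES_PER_DAY:
        raise ValueError(f"requests are capped at {MAX_LITRES_PER_DAY} litres per day")
    return f"Delivering {litres} litres from {source}"

print(fetch_water(10, "municipal_supply"))  # fine
# fetch_water(10, "neighbours_well")        # refused: the limit is baked in
```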
Personally, I think putting the responsibility on the user is both unfair and potentially dangerous (because humans could easily intend to inflict harm), but having safety features baked in would also be extremely challenging. It’s hard to prepare for every eventuality, and it’s also not really on the agenda. Looking back at the people who signed the shocking statement from the Center for AI Safety, we see that a lot of them are people who design AI systems, such as Sam Altman. They want to frame their technologies as powerful and unpredictable, because they don’t want to take responsibility for the outputs of their machines, or the paths that the machines took to get to those outputs. They would rather paint the inner workings of these machines as mysterious, fantastic, and beyond our reckoning.
A truly intent-based UI, with proper limitations in place, would not be sexy and alluring, because it would only result in our extinction if we literally asked for it. Any harm that came to us would be intentional, which means an intent-based UI might be unattractive to someone with bad intentions, because it would make it pretty hard for them to dodge blame. So, the reason for the increase in glitchy clickbait websites is that there are a few people out there whose intention is to make a lot of money from online advertising very quickly, and that has been made much easier to do with AI. What we’re looking at here is a closing gap between what people wish they could achieve with technology and what they can achieve. There are systemic problems which lead to people even wanting to make money in this way, but no one seems to want to address these.
The current creators of AI systems also don’t want to limit themselves to designing purpose-built machines for specific use cases. They want generalist systems that can be anything to anyone, because they want that gap between human desire and what is technically possible to be completely gone, and for human intention, which has the potential to be harmful, to take a back seat as an unfortunate but unavoidable feature of life with AI.