🦹‍♂️ Everyone is President Business from the Lego Movie
Superglue yourself to the continued fulfilment of your infinite desires
Riddle me this: is Google the ‘best’ search engine, or is it just the one that pays billions of dollars every year to be the default on everyone’s device? The answer is in the question…
Anyway, on with today’s class. It’s a very short musing on a thought I had about AI recently. I mean, that’s all I write these days but yeah, glad you’re still here.
I was recently having a conversation with Eric Wycoff Rogers, a good friend of mine who runs the London Night Cafe — a great quiet spot to hang out in late at night if you want to sit neck deep in a ball pit and discuss complex topics such as the future of AI (which is exactly what we did).
Eric, whose brain is larger, wetter, and more able to retain information than mine, told me about coherent extrapolated volition: a conception of friendly AI that does not need to be told our desires explicitly, but rather anticipates them, and is able to autonomously act in the best interests of all humankind. It’s very All Watched Over by Machines of Loving Grace and it probably isn’t even slightly possible, because it would require humans to take the initial step of programming it, and humans are notorious for not being able to agree on what’s best for everyone/making software that doesn’t quite work as intended.
Imagining alternative futures is important, so let’s just say for a moment that achieving coherent extrapolated volition (CEV) could be possible at some point down the line. Still, do we really need to make things like happiness, comfort, and caring entirely replicable by machines?
Thinking about this concept brings me back to the fact that it’s really not clear what tech leaders are trying to achieve with AI systems. They deploy them without clear use-cases, and then warn us about existential risks. And then there are new regulatory shifts that alter the landscape in interesting ways. The general consensus here seems to be that we should work to stop things from getting really bad, instead of actively planning for ways in which AI could actually make things better overall. CEV — or something like it — does not appear to be the goal. The only goals I can extrapolate right now are short term financial gains and the avoidance of total annihilation.
There’s a weird kind of grisly pride in openly admitting — or insinuating — that a thing you created has the potential to bring about an extinction event. Like, okay?? Thank you, I guess??? Hope you get the Nobel Peace Prize or whatever xoxo. This is abject fear-mongering that allows the creators of AI systems to control the narrative and set us on a narrow path towards a future that is defined by them. You know, maybe I would love it if there was an AI out there that could cuddle me to sleep and make everything okay again, but those ideas don’t seem to make the headlines. Because cuddles are considered a lot less powerful and impressive than mass destruction (I disagree, but whatever).
I don’t think the overlords of our current top-down generative AI landscape are aiming for utopian cuddly outcomes OR dystopian machine-uprising outcomes — because those are way too extreme. They want to keep everything contained within a watered-down inoffensive midpoint, where we do nothing but generate mundane viral content and automated marketing workflows. Technocapitalists crave control and order; they fucking love prediction models, machine-readable data, and, I dunno… making every single process fully auditable like GitHub does with codebases. They’re all like President Business from the Lego Movie, who can’t stand the idea of people freely expressing themselves, and punishes society by demanding that everyone submit to his idea of perfection, or face being superglued into place.
This kind of explains why right now, AI art just seems kind of lame and nothingy. Max Read recently wrote about ‘Controlism’, a new form of AI art that uses a reference image to create a new image. It very much represents how some of the generative AI community have formalised a way to make extremely kitsch images that are a blurry grey approximation of what everyone likes to look at on social media.
Yes, this stuff is going viral in 2023, but I do wonder how long this can last. I think a likely progression from this is not human extinction or eternal happiness, but rather a future where the outputs you produce with generative tools will be so perfect for you personally that you won’t even bother sharing them with your networks to show off or to gain viral success, nor will you be interested in looking at anyone else’s content. You will be absolutely satisfied with what you can create yourself, because it will be 100% what you want to watch/read/listen to/generally experience. I mean, it will probably all be porn, but still…