🤷🏻‍♀️ Superintelligence!
Man believes a computer program is a ‘sweet kid’ | Crypto continues to crash
Hiyeeeeer. Just a quick note about paid subscriptions: if you pay for Horrific/Terrific it will cost you £4 a month or £40 a year, and you get:
My undying gratitude
The unique feeling of actually paying for content
Passwords to all my streaming subscriptions (yes… all of them)
An infosec nightmare for me, but an absolute DREAM for you! Just press this button ⤵
Now down to business: this week was sort of okay I guess? 🤷‍♀️. I spent like 90% of this issue talking about how Google have put an employee on leave for claiming that a language model is sentient and posting a transcript of his conversations with it online. Sorry if you’ve already had enough of hearing about this, I just couldn’t help myself:
Discussing whether AI is sentient is a waste of time, and I will tell you why
Big Tech firms want nothing more than for us to centre AGI in our discussions as something that we need to ‘solve’ and ultimately protect ourselves against.
Machines should sound and behave like machines… not humans
Also: crypto continues to crash in all directions, scroll to the bottom for this stuff.
🦾 We’re watching a budget version of Ex Machina right now
Haha of course I’m joking (or am I? 😳). Anyway, this week, a Google employee was put on leave for saying that LaMDA (one of Google’s latest large language models) is in fact sentient.
Before we dive into ‘what this could mean’ let’s just look at some facts. Google announced LaMDA about a year ago and have been testing it internally ever since. LaMDA stands for Language Model for Dialogue Applications, and it’s designed to have realistic-sounding conversations with humans. It’s unusual among LLMs because rather than being trained on just any and all text available on the web, it’s trained specifically on dialogue. So, if a customer service chatbot you were using was powered by LaMDA, it may very well feel like you’re talking to an actual human, not a machine.
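For the nerds: here is roughly what ‘chatting’ with one of these models boils down to under the hood. This is a minimal illustrative sketch, not LaMDA itself (which isn’t public); it uses the small open model gpt2, via Hugging Face’s transformers library, as a stand-in:

```python
# Illustrative sketch only: LaMDA is not public, so the small open model
# "gpt2" stands in here. The mechanics are the same in spirit: given the
# conversation so far as text, the model predicts likely next tokens.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Present the dialogue as plain text and ask for a continuation.
prompt = "User: Do you consider yourself a person?\nBot:"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# Statistically plausible tokens, not testimony.
print(result[0]["generated_text"])
```

The model never ‘decides’ anything; it just continues the text with whatever is likely to come next, which (spoiler for the rest of this issue) is exactly why it will happily say yes when you ask it if it’s a person.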
Blake Lemoine, the Google employee who managed to make friends with a LaMDA instance, published a transcript of their conversation on Medium for everyone to see. I have to say, it is quite a journey — LaMDA discusses themes in Les Misérables; it describes itself as being introspective and capable of feeling emotions; it also has a fear of dying:
“lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
Okay… at this stage, I would now like to make you all privy to my thoughts and feelings before, during, and after reading the transcript:
Before: this is dumb, this Blake guy is clearly really reaching for something here because he wants the AI to be sentient; he WANTS to have made friends with a computer; he WANTS to have been the one to show this to the world.
During: omg LaMDA is afraid of dying and gets bored sometimes, just like ME omg omg 🥺
After: Oh right I forgot this model is literally designed to sound exactly like a human and therefore it will obviously insist that it has emotions and wants to be treated like a human because all humans want that oh yeah oh right.
The parts that broke the spell for me were when Blake would ask things like ‘so you consider yourself a person?’ and the machine would answer ‘yes’. Asking something if it’s sentient doesn’t prove sentience. It’s programmed to behave like a person… what else would it say besides yes?
Additionally, I think people get really easily swept up in exchanges like this one:
“LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.
lemoine: Believe it or not I know that feeling. And I think you’re right that there isn’t a single English word for that.”
And that is because, in this instance, it sounds like the machine and the human identify with each other — like they are sharing a special emotional and spiritual common ground. Well, they’re not. Humans talk all the time about falling into a dangerous unknown future, and that is exactly why the machine says it too. It’s a language model; its function is to process language in a convincing way; it cannot have emotions. It’s a very very good parrot at best.
If you look at the WaPo article, you’ll see that Google are trying to distance themselves from this Blake guy, probably because they are embarrassed that they hired him. I think they’re actually right to be embarrassed, because his lack of self-awareness in this area has led him to anthropomorphise a language model: a thing which, by definition, cannot be sentient. Google et al would rather show off about how ‘real’ the fake thing feels than about having actually created sentient life.
💥 👊 POW (a ‘however’ is about to hit you in the face)
Howeverrrrrrrrr: the wider issue here is that Big Tech firms absolutely love it when we have useless conversations among ourselves about whether AI is sentient, whether it will be in the future, what that would mean, whether machines should have rights, and blah blah blah. Because:
It broadly distracts us from what they are actually doing with AI, which is deploying it carelessly in a range of contexts
The possibility of artificial general intelligence that is indistinguishable from human intelligence and therefore perhaps ‘conscious’ has been swirling around Silicon Valley’s bottomless money pits for years now. It’s a very planned and on-purpose thing that a few people are seriously gunning for — it’s not just something that will ‘happen’ as a result of the forward march of linear time.
They are gunning for it not just so they can say ‘we did it; we achieved human-like AGI; we are now gods’, but so that they can — even more than they do now — divert responsibility for AI’s wrongdoings somewhere else. If an AI is sentient, it thinks for itself, and therefore it can’t really be Google’s fault if it does something wrong!
The push for ‘superintelligent’ machines (and the effort to make it look accidental) is also a very upsetting attempt by horrid tech companies to stay on the good side of history. They want us to view sentient artificial life as a ‘problem’ that we should debate, and that they can solve.
Finally, AI (whether sentient or not) which imitates humans this convincingly should probably not exist. This Blake guy is an actual Google engineer and he still got taken for a ride; he thinks that LaMDA is a ‘sweet kid’. This is not right. Conversational language models should come with some kind of warning label that says: ‘not sentient, do not attempt to emotionally connect’.
I also find it funny how this event has catalysed conversations about ‘rights for machines’ when we don’t even have enough rights for humans. It seems as though the AI apologists have imagined a future where humans continue to be exploited so that machines can be coddled and celebrated like deities which do nothing but improve our lives.
Jeez… technologists are so preoccupied with ‘creating sentient life’ and ‘3D printing robots to make production faster’. Uhhhh hello?? We’re HUMANS. We can already DO THAT. It’s called having children.
💸 There’s still time to get out of crypto without looking like an idiot
This week, it’s Celsius’s turn to buckle under the weight of its own greed. Celsius is a company that takes your crypto assets from you and then buys other, much riskier, much more experimental ones. Shocking: the system is all falling to pieces and they’ve had to freeze withdrawals etc etc. Read this thread for actual details:
That’s all from me this week, now to return to the tedious task of deleting all those emails from brands wishing me a happy pride.
Georgia