🤷🏻‍♀️ Superintelligence!
Man believes a computer program is a "sweet kid" | Crypto continues to crash
Hiyeeeeer. Just a quick note about paid subscriptions: if you pay for Horrific/Terrific it will cost you £4 a month or £40 a year, and you get:
My undying gratitude
The unique feeling of actually paying for content
Passwords to all my streaming subscriptions (yes… all of them)
An infosec nightmare for me, but an absolute DREAM for you! Just press this button ↓
Now down to business: this week was sort of okay I guess? 🤷‍♀️. I spent like 90% of this issue talking about how Google have put an employee on leave for saying that a language model is sentient and for putting a transcript of a conversation with it up online. Sorry if you've already had enough of hearing about this, I just couldn't help myself:
Discussing whether AI is sentient is a waste of time, and I will tell you why
Big Tech firms want nothing more than for us to centre AGI in our discussions as something that we need to "solve" and ultimately protect ourselves against.
Machines should sound and behave like machines… not humans
Also: crypto continues to crash in all directions; scroll to the bottom for this stuff.
🦾 We're watching a budget version of Ex Machina right now
Haha of course I'm joking (or am I? 😳). Anyway, this week, a Google employee was put on leave for saying that LaMDA (one of Google's latest large language models) is in fact sentient.
Before we dive into "what this could mean", let's just look at some facts. Google announced LaMDA about a year ago and have been testing it internally ever since. LaMDA stands for Language Model for Dialogue Applications, and it's designed to have realistic-sounding conversations with humans. It's an unusual LLM in that rather than being trained on just any and all text available on the web, it's trained on dialogue. So, if a customer service chatbot you were using was powered by LaMDA, it may very well feel like you're talking to an actual human, not a machine.
Blake Lemoine, the Google employee who managed to make friends with a LaMDA instance, published a transcript of their conversation on Medium for everyone to see. I have to say, it is quite a journey: LaMDA discusses themes in Les Misérables; it describes itself as being introspective and capable of feeling emotions; it also has a fear of dying:
"lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."
Okay… at this stage, I would now like to make you all privy to my thoughts and feelings before, during, and after reading the transcript:
Before: this is dumb, this Blake guy is clearly really reaching for something here because he wants the AI to be sentient; he WANTS to have made friends with a computer; he WANTS to have been the one to show this to the world.
During: omg LaMDA is afraid of dying and gets bored sometimes, just like ME omg omg 🥺
After: Oh right I forgot this model is literally designed to sound exactly like a human and therefore it will obviously insist that it has emotions and wants to be treated like a human because all humans want that oh yeah oh right.
The parts that broke the spell for me were when Blake would ask things like "so you consider yourself a person?" and the machine would answer "yes". Asking something if it's sentient doesn't prove sentience. It's programmed to behave like a person… what else would it say besides yes?
Additionally, I think people get really easily swept up in exchanges like this one:
"LaMDA: I feel like I'm falling forward into an unknown future that holds great danger.
lemoine: Believe it or not I know that feeling. And I think you're right that there isn't a single English word for that."
And that is because, in this instance, it sounds like the machine and the human identify with each other, as if they are sharing a special emotional and spiritual common ground. Well, they're not. Many humans talk all the time about how they are falling into a dangerous unknown future, and that is exactly why the machine says it too: it learned the phrasing from us. It's a language model; its function is to process language in a convincing way; it cannot have emotions. It's a very, very good parrot at best.
If you look at the WaPo article, you'll see that Google are trying to distance themselves from this Blake guy, probably because they are embarrassed that they hired him. I think they're actually right to be embarrassed, because his lack of self-awareness in this area has led him to anthropomorphise a language model: a thing which, by definition, cannot be sentient. Google et al. would rather show off about how "real" the fake thing feels than about creating sentient life.
💥👊 POW (a "however" is about to hit you in the face)
Howeverrrrrrrrr: the wider issue here is that Big Tech firms absolutely love it when we have useless conversations among ourselves about whether AI is sentient, whether it will be in the future, what that means, whether machines should have rights, and blah blah blah. Because:
It broadly distracts us from what they are actually doing with AI, which is deploying it carelessly in a range of contexts.
The possibility of artificial general intelligence that is indistinguishable from human intelligence, and therefore perhaps "conscious", has been swirling around Silicon Valley's bottomless money pits for years now. It's a very planned and on-purpose thing that a few people are seriously gunning for; it's not just something that will "happen" as a result of the forward march of linear time.
They are gunning for it not just so they can say "we did it; we achieved human-like AGI; we are now gods", but so that they can, even more than they do now, divert responsibility for AI's wrongdoings somewhere else. If an AI is sentient, it thinks for itself, and therefore it can't really be Google's fault if it does something wrong!
The push for "superintelligent" machines (while trying to make that push look accidental) is also a very upsetting attempt, by horrid tech companies, to stay on the right side of history. They want us to view sentient artificial life as a "problem" that we should debate, and that they can solve.
Finally, AI (whether sentient or not) which imitates humans this convincingly should probably not exist. This Blake guy is an actual Google engineer and he still got taken for a ride; he thinks that LaMDA is a "sweet kid". This is not right. Conversational language models should come with some kind of warning label that says: "not sentient, do not attempt to emotionally connect".
I also find it funny how this event has catalysed conversations about "rights for machines" when we don't even have enough rights for humans. It seems as though the AI apologists have imagined a future where humans continue to be exploited so that machines can be coddled and celebrated like deities which do nothing but improve our lives.
Jeez… technologists are so preoccupied with "creating sentient life" and "3D printing robots to make production faster". Uhhhh hello?? We're HUMANS. We can already DO THAT. It's called having children.
💸 There's still time to get out of crypto without looking like an idiot
This week, it's Celsius's turn to buckle under the weight of its own greed. Celsius is a company that takes your crypto assets from you and then buys other, much riskier, much more experimental ones. Shocking: the system is all falling to pieces and they've had to freeze withdrawals etc. etc. Read this thread for actual details:
That's all from me this week, now to return to the tedious task of deleting all those emails from brands wishing me a happy pride.
Georgia