🤷 Irresponsible AI
AI Ethics teams don’t exist anymore | Blocking the TikTok ban | Newspeak House now accepting new residents
Hello! If you work at Google, good luck printing this issue and then stapling it together (my recommended way of reading Horrific/Terrific) because I guess those benefits have now been taken away, lol.
This week had the energy of a spineless centrist politician 🤷 Please try and make sense of all of this, I literally wrote it in one go on Friday morning because I didn’t have TIME dammit.
A US senator is trying to block the TikTok ban — good luck out there buddy
Big Tech firms are really leaving AI ethicists out to dry — good luck out there buddies
If you’re in London and want to try something different, Newspeak House are looking for their next cohort of fellows
🔊 There is one rational voice cutting through the crap about banning TikTok from the US
There is a senator who’s currently out there trying to block the US TikTok ban — good luck bro, I hope whatever you’re doing works. Rand Paul, the senator in question, seems to be doing this for the right reasons: banning TikTok would damage free speech, and TikTok is clearly being singled out just because it was made in China; American social media apps broadly engage in very similar practices, but no one is trying to ban those. One of his other reasons for opposing the ban is to ‘protect small business’, but that’s a very neoliberal thing to say tbh. Anything like this always makes me question someone’s motivations — does he truly care, or is he simply beholden to some very persuasive lobbyists?
I wrote about this a lot more a couple of weeks ago, but broadly, taking away a popular app from the citizens of an entire country is fucking authoritarian. The US government are essentially saying, ‘please, citizens, stop having fun — we are scared’. This really is a tale as old as time: a government refuses to actually govern, becomes terrified as it witnesses the monstrous and unchecked growth of global business, and then makes irrational decisions that will only negatively impact everyone.
Writing all this is really hard and time-consuming. Please make it better by donating money to me, thank you :)
👋 There’s no need for ‘responsible AI’ anymore
Forget about Google’s paperclips, the latest round of cutbacks in the tech industry has hit AI ethics teams the most. I’d say, for everyone besides Twitter, this is happening because tech companies want to capitalise on the generative AI hype, and AI ethics teams would only ruin that for them. Twitter are too busy ruining things for themselves to strategically fire their AI ethics teams — they got fired in November along with the other 70% (or whatever it was).
Anyway, all of this has made me think about the tempestuous timeline that ‘AI’ has been on over the last few years. Try and remember what it was like before generative AI was a thing (just… just try) — it felt like all the conversations were about the dangers of models that predict online behaviours, or racial bias, or facial recognition & law enforcement, or quantum computing, or maybe even applications in medicine.
GPT-2 existed but no one cared about it because it was nowhere near as ‘impressive’ as what we have now. The AI ethics teams that were ensconced in Big Tech firms at this stage were only there to make it look like the firms were taking these conversations seriously. It just made sense, for optics, to have a team of people working for you who all had PhDs with ‘AI’ in the title. 2-3 years ago the development and ‘progress’ of AI systems seemed quite steady (there were no products catalysing the ridiculous levels of hype we have now), so there wasn’t much space for these teams to get in the way of ‘innovation’.
Then, Timnit Gebru was fired from Google for producing research that made a strong case for slowing down the development of large language models. This work is exactly what Google were paying her to do, but large language models are a core part of their business. So suddenly, ‘the optics’ stopped making sense and she had to go.
And now that the wonders of generative AI have infected the imaginations of thousands of white men with blue checkmarks, all AI ethics teams are suddenly irrelevant. More than that, actually: they are huge roadblocks to innovation. The conversations about racial bias in algorithms have been eclipsed by a unique mixture of stuff that is completely unhinged, and stuff that is definitely worth thinking about — but that we probably should never be in a position to think about.
The unhinged stuff includes questions like ‘should machines have rights??’ and also spawns the worst ideas possible, such as Nothing, Forever, a procedurally generated TV show that runs 24 hours a day, isn’t entertaining, and occasionally spits out transphobia. The stuff that is worth thinking about, but I really wish we didn’t have to think about, is around the accuracy and authority of chatbot results, and distinguishing machine-generated text/images from anything created by humans. Oh, and which jobs or industries are most ‘exposed’ to AI. Ffs.
The firing of these AI ethics teams has the same flavour as the hiring of metaverse teams in 2021. Those metaverse teams had jobs for 9-18 months before they were chewed up and spat out by the hype cycle. ChatGPT has already been banned in Italy, and it’s possible more countries will do the same — but I wonder what that will do to the trajectory of this hype. The metaverse fizzled out because it was ultimately useless and stupid. Generative AI feels different to that 😬.
🎓 Be cool; become a Newspeak fellow
Ed Saperia, founder of Newspeak House, has asked me to let you all know that the 2023/24 residency programme is now open for applications. It’s a great place to learn new things and meet new people; I know that many of the people who read Horrific/Terrific will have overlapping interests with the space, and some of you are current/previous fellows! From the website:
We are seeking candidates for the next cohort of our residential programme, starting in October. Please share this with your communities and encourage others to apply. Expressions of interest are now open, and applying is quick and easy.
The deadline is the end of April. The course is designed to help mid-career technologists gain a deep understanding of the landscape of political, civic, non-profit and public sector organisations in the UK, in order to found groundbreaking new projects or seek strategic positions in key institutions.
Apply here. If you need extra motivation, I usually co-work there every Wednesday 😉.