Delay & Obfuscate
OpenAI's gears of exploitation | The Paris Olympics surveillance project | Scroll to the end for something sincere
Good morning! You know what? I fucking hate winter. Everyone who likes it is wrong. It's a mulchy brown wet fart of a season. Summer though? Summer is far superior; summer is where you recline next to a body of water while an attractive IG celebrity goes down on you (that's what it's like for me anyway).
And now down to business: This week was just like winter… unenjoyable and smelly. But why?
Facebook has reinstated Trump and the world continues to turn
OpenAI's attempts to "delay and obfuscate" its own inner workings have broken down a bit; turns out, they build tech just like everyone else does: with exploitation and sadness.
The Paris Olympics are a great example of how new infrastructure = surveillance
I won't spend long on this, but Trump is back on Facebook
Charlie Warzel has aptly observed that Trump and Facebook are in a state of mutual decay. And that really is the central point here: if you're worried about what the reinstatement decision will do, just have a nap bro. The layers of chaos and nonsense that have been applied to the landscape since 2021 have changed everything so drastically that Facebook doesn't have the same amount of power as it did back then.
It literally doesn't matter that Trump, an orange man who used to be president, is now back on the platform. He cannot sow chaos on a degraded social media platform in a world where chaos is currently the norm. There's more important stuff to worry about.
Shocking, Not Surprising In The Least: OpenAI exploited workers in Kenya to build ChatGPT
Over the last week and a half, this TIME article has managed to shatter-smash its way through every device I own. How DARE it try so hard to capture my precious, finite attention.
ICYMI: the article outlines how, when building ChatGPT, OpenAI used Sama (an "ethical AI company" which I've written about very recently) to label toxic, hateful, and violent language. This is so the machine, in theory, recognises hate speech in its training data, and does not replicate it.
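If you want a feel for what all that labelling labour actually feeds, here is a toy sketch of the general technique: human-labelled snippets training a toxicity classifier. The data, model choice, and scikit-learn stack are all my own illustrative assumptions; OpenAI's real pipeline is vastly bigger and not public.

```python
# Illustrative sketch only, NOT OpenAI's pipeline: human labellers mark each
# text snippet as toxic (1) or safe (0), and those labels train a simple
# classifier that can then flag toxic text automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example data. In reality this step is thousands of workers
# reading vastly nastier material for hours on end.
snippets = [
    "have a lovely day",
    "you are a worthless idiot",
    "thanks, this was really helpful",
    "I am going to hurt you",
]
labels = [0, 1, 0, 1]  # 0 = safe, 1 = toxic, as judged by a human

# TF-IDF features into logistic regression: about the simplest text
# classifier you can build.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

# The trained filter can now score unseen text before it reaches end-users.
# With four examples this is a party trick; at scale, it's the product.
print(model.predict(["you absolute idiot"]))
```

The point being: none of this works without humans reading the horrible stuff first.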
What we can glean from articles like this is that apparently, there is no limit to our outrage when we inevitably learn about the gross, sticky inner guts of an outwardly pristine piece of technology. Personally, I used all my outrage up on the Cambridge Analytica scandal. I have no outrage left. I haven't been outraged at anything for years. So no, I am not even slightly surprised to learn that OpenAI, which is about to be valued at $29bn, only used $200k of its money to "ensure" that ChatGPT wouldn't spit out hate speech.
Because of course. Of course this is how a for-profit company conducts business and produces new "innovations" at these scales. Apparently, there is just no other way to scrub your conversational AI free of toxic language than to use a very small proportion of your profits to pay someone else to do it. No no, the darling end-users of this technology must never be exposed to graphic descriptions of bestiality or child abuse; that's for other people in other countries.
ChatGPT is produced in the same way that all future-defining technologies are: less fortunate communities must bear the brunt of extractive and exploitative practices so that we, the people who send emails for a living, can distract ourselves by playing with a very expensive parrot (or maybe own an electric car!).
We should really ask ourselves if the emotional distress piled onto workers in Kenya is even worth having a piece of technology that no one even asked for. Yes, it's an interesting tool, and isn't it just great that we can sit here and have all these academic conversations about "Search", and speculate about how this may remove one more tiny speck of friction from our already frictionless lives. Meanwhile, large swathes of people we will never meet spend their waking hours reading snippets of graphic text describing sexual abuse, suicide, and gore. Seems fair and balanced, I guess…
Another major sporting event, another justification for unnecessary surveillance
There are growing concerns that the 2024 Paris Olympics will be a great excuse for security companies to really let loose and express themselves through horrid surveillance technology. I don't fully understand this despicable tranche of people who honestly believe that the only way to keep public spaces "secure" is to encroach on the privacy of individuals.
Among the proposed plans for these Olympic Games are algorithmically powered real-time camera systems which apparently spot suspicious behaviour automatically. To what end? Unclear. Will these systems actually catch people out when they are being "suspicious", or will they just annoy and upset people? Again, unclear. I guess they're just going to try it all out and see!
But of course, we've seen stuff like this play out already. Just last year a woman was denied entry to a venue in New York. According to the woman, when the security staff asked her to confirm her identity, "they told her their facial recognition system already knew who she was, and more importantly, where she worked".
She was denied entry because the law firm she worked at was in the middle of a case against Madison Square Garden Entertainment, the parent company which owns this particular venue. Excluding someone from a building on that basis alone is ridiculous, but that's not the important part: it's more that this side of the "security" industry has succumbed to a deeply noxious operating culture; the default approaches are laced with fear and paranoia, resulting in the gruesome and pernicious use of facial recognition.
This reminds me of when I was a waitress for an event-staffing company. I was set to work a very long shift in Leatherhead: some kind of bullshit lunch for the Conservative Party. Both David Cameron and George Osborne were going to be there. The event manager called while we were en route, explaining that I was no longer "cleared" to work this shift, and I was immediately let out of the car. It turns out that the client (I guess… the Conservative Party?) had run "background checks" on all the staff, including on our social media.
I was considered too left wing to put plates of food down in front of vacuous right wing politicians, even though I had done this many times before (because it was my job). This was such a violation on so many levels: it was done without my knowledge, I was denied wages because of my politics, and I had absolutely no control over it. All this stuff has the same flavour; why on earth are we supposed to just accept these levels of surveillance?
🥧 One last thing before my brain turns to jelly
This week, I had the pleasure of attending an event at The Royal Society launching their report on privacy-enhancing technologies. With me was Alice Thwaite, founder of Hattusia. For The Royal Society, Alice and I produced an essay which took a deep dive into trust, assurances, and standards in the context of privacy-enhancing tech; we really examined trust as a concept, outlining different kinds of trust and trust relationships. We also explained how and why privacy and security become conflated, and how that might inform the way in which standards develop in the field. If this stuff interests you, you can read our essay here.
Alice and I are both very proud of how much the concepts and arguments in our essay were used in the final report. In case you were unaware, when I'm not venting my frustrations in this newsletter, I'm probably working on things like this. So if you ever need me for anything like this, do not be afraid to send an email!
Have a great weekend you spicy little gremlins.