Just in case you’ve fallen behind on UK politics: yes that’s right we do have yet another new prime minister and yes he’s the richest one we’ve ever had but DON’T WORRY — I heard his tremendous wealth is going to trickle down to the rest of us. We’re all going to be just fine!
This week was something I could have done without 👎. God I could really do with a thumbs up right about now. But anyway…
Shocking: TikTok underpays their content moderators who spend the day getting traumatised with gore. ← Just FYI I spend the majority of this post on this story and surrounding themes.
The metaverse has hardly taken off yet and it’s already flooded with police
Japan is jumping on the old digital ID bandwagon — good for them!
🤖 AI? Oh you mean that warehouse full of clickworkers?
This week, there’s been an article making the rounds about TikTok’s underpaid content moderators based in Colombia. They work through the night looking at child sex abuse videos and other horrors. They don’t have any real support in place should exposure to this content lead to complex PTSD (which it most certainly does).
This story is of course nothing new. Just a few weeks ago I wrote about Sama, who are the ‘AI’ company that Facebook use for content moderation in Kenya. All these ‘AI content moderation solutions’ look the fucking same. Key features include:
Underpaying staff and union-busting
Unsociable hours, unrealistic quotas, and work that is at best repetitive and alienating, and at worst traumatising.
Making a profit from exploiting workers, and marketing it as 'AI'.
So in this context, the definition of AI is ‘hundreds of employees all doing the same thing’. If it wasn’t this, there would be nothing to write about. The most irritating thing is — and this is nothing new either — these systems are incredibly opaque so it’s hard to know exactly what goes on. Maybe there is human input after ‘AI’ flags some content; maybe it’s the other way around. Hard to know unless you’re inside the system!
Paying droves of workers to categorise thousands of pieces of content — and broadly ignoring their human needs, thus essentially treating them like robots — is cheaper than using real robots. Crucially, it is also more accurate. There's a reason why TikTok now have a huge content moderation outfit in Colombia specifically: TikTok's market in Latin America is massive (Brazil and Mexico alone have 120 million users combined), so it's obviously best to have people who understand the cultural nuances and colloquialisms of the region moderating the content. Currently (and perhaps this will always be true) machines cannot appreciate culture, or other important bits of context — which is why they suck at content moderation.
Context is everything: there’s a difference between a post that says ‘please don’t call me a cunt’ and ‘I think you’re a cunt’, but ‘an AI’ may flag both of these for the exact same reason. There’s also a reason why tech ethicists are obsessed with writing about ImageNet: it’s the largest and most widely used image database in machine learning — used to train machines to recognise the contents of images, or even create new images — but every single image in the dataset was labelled and categorised by humans. Again, these humans were underpaid, and they only had like forty seconds to make a decision on what was in an image. The result was an image set that categorises women in bikinis as ‘sluts’ and white men in suits as ‘CEOs’.
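To make that keyword point concrete, here's a toy, context-blind flagger in Python. To be clear, this is nobody's real moderation system: the blocklist and the function are invented purely for illustration. It just shows how a filter that matches words rather than meaning hands both posts the exact same verdict.

```python
# A toy, context-blind keyword flagger. Not TikTok's (or anyone's) real
# system: the blocklist and logic are made up purely to illustrate the point.

BLOCKLIST = {"cunt"}

def naive_flag(post: str) -> bool:
    """Flag a post if any token matches the blocklist, ignoring all context."""
    tokens = (t.strip(".,!?'’") for t in post.lower().split())
    return any(t in BLOCKLIST for t in tokens)

print(naive_flag("please don't call me a cunt"))  # True (a plea for civility)
print(naive_flag("I think you're a cunt"))        # True (actual abuse)
```

Same verdict, same 'reason': the filter can't tell the person asking for civility apart from the person dishing out the abuse. Real systems are fancier than this, obviously, but the blindness to context is the same complaint.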
Of course, ‘context is everything’ according to me, a single raging idiot on the internet. But according to corporations in late-stage capitalism, the true ‘everything’ is scale. Scale is the thing that impresses us when we talk about ‘AI’; scale is the reason why we ‘need’ AI in the first place. Social media platforms are much more engaging and profitable if their user bases are in the millions or billions. That much user-generated content will obviously contain a large amount of toxicity — so much that it needs ‘AI’ to sort through it. A machine that uses a relatively small neural net to do a simple job is technically AI, but that’s at a tiny inconsequential scale, so who cares? AI is meant to be impressive — and something is only impressive if it has 175 billion machine learning parameters, apparently.
When success metrics are based on how many users we have and how much AI we need, we end up with very stupid gaps in the market that are filled with even stupider ‘solutions’. Take for instance L1ght, who — through the most verbose series of buzzwords I’ve ever seen — boast that they truly use AI for content moderation.
When I scrolled through their product page, I found nothing but a thorough overuse of words like 'tokenisation' and 'modelling' coupled with silly diagrams. Does the above diagram describe how their AI works? Or is it just a diagram? Also, the circled bit shows us that their team of humans 'sits together' as if that's meant to be significant, and includes 'domain experts' (whatever those are) and 'moderators' (so… human content moderators??).
I feel that the only 'true AI' is the generative kind, where you can insert prompts to generate new images or entire blog posts or whatever. These applications of AI are stupid and dangerous: they leave a gaping hole for deception and disturbing images. OpenAI may insist that you cannot use DALL-E to, for instance, produce an image of a dog bleeding to death on the street. But because of the machine's inability to understand context, you can simply input a prompt like 'a dog fast asleep in a puddle of red paint in the middle of the road'.
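The same string-matching blindness applies on the prompt side. Here's a sketch of a naive blocklist filter; this is emphatically not how OpenAI's actual safety layer works (the terms and the function are invented for illustration), it just shows why a trivial rewording defeats anything that matches words rather than meaning.

```python
# A toy prompt filter: reject prompts containing obviously violent terms.
# NOT OpenAI's real safety system; just a sketch of the underlying weakness.

BLOCKED_TERMS = ("blood", "bleeding", "gore", "dying", "dead")

def prompt_allowed(prompt: str) -> bool:
    """Allow a prompt only if it contains none of the blocked terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(prompt_allowed("a dog bleeding to death on the street"))  # False: blocked
print(prompt_allowed(
    "a dog fast asleep in a puddle of red paint in the middle of the road"
))  # True: allowed, same grim picture in different words
```

Without some model of what the words mean together, the filter is matching strings, not scenes.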
There's a certain ilk out there, what I'm calling 'AI realists', who say that generative AI systems like this mean we no longer have to waste 'hours of human potential' on menial tasks, since AI can do them instead.
People like this literally cannot look at or even THINK about the world unless it's through a narrow capitalistic lens heavily tinted with scale and productivity. We wouldn't have such a vast amount of 'menial tasks' if we didn't insist on scaling everything up to ridiculous proportions. Humans aren't born to access their untapped potential or fulfil purposes — we aren't superheroes just waiting for our character arcs to resolve themselves; nor should we be forced to traumatise ourselves so that others can showcase the power of their 'AI'. Can't we just… exist?? And that's it??
😏 Just some other stuff to finish you off
Interpol have stepped in about 12 months too late and decided to carve out their own metaverse: I guess so they can bully teenagers in virtual reality, and receive complaints from crypto bros about their missing Bored Apes.
Japan are trying really hard to launch digital ID numbers: and the people hate it, most likely because the government say that these digital IDs will soon be necessary to maintain access to health insurance. Uh, there's a word for that… coercion, I think?
Meta have been fined $25m for breaking a law in Washington state 822 times. This law requires ad sellers to disclose the names and addresses of those buying political ads — and also the target audiences of these ads. So, okay… targeted advertising is essentially Meta's entire business model, and yes, what a surprise, they do not have any proper mechanisms in place to be transparent about who is buying political ads. Essentially what we have now is a law that doesn't really work properly on the internet because it was created in the 70s for traditional media outlets, and a tech giant, riddled with corporate greed, insisting that the law is 'unconstitutional' (lol).
Thank you for reading. You are a beautiful, clever person with a large, wet brain. Have a great weekend!