Free speech 🔜 🗑️
What if one law in Texas changed the face of the internet? Haha no that’s nonsense. But what if???
🐫 Here’s a two-fold thought that’s been bothering me lately (yes, two whole folds):
no matter how much the most powerful social media platforms insist they have content moderation under control, they do not. It’s too hard, even for them.
Literally no one, not even the people who make the laws, understands what it means to protect free speech.
Why are these thoughts in my brain? Well, there’s lots of stuff — LOADS, actually. Cloudflare’s very stupid response to Kiwi Farms just the other week; the sheer indestructibility of Alex Jones; Facebook’s inability to prevent hate speech and misinformation in the lead-up to general elections in Kenya; how a few months ago, multiple platforms struggled to contain a video of a white supremacist carrying out a mass shooting in a supermarket.
But the main thing that’s set me off is the new Texas anti-censorship law: it’s a serious smack in the face for free speech, the very thing it says it’s trying to protect. Here’s what it does:
It restricts social media from doing any content moderation on posts based on the viewpoints of the user
In other words: if someone posts something hateful or offensive, and the platform takes it down, it was wrong to do so if any user in Texas happens to share the viewpoint of the OP
It’s so stupid, it’s just so so stupid
In practice, this means that social media platforms are powerless to moderate content based on their own community guidelines — which itself limits their freedom of expression. So yes, a law meant to protect free speech actually violates free speech. The law was in fact initially blocked by the Supreme Court for this reason. But now some other court has decided that this law is just fine, which means it will go ahead. I will never understand US lawmaking. Ever.
While this law only ‘applies in Texas’, we surely all understand that the internet transcends silly location-based jurisdictions. Users all over the world are now collectively clenching their buttocks in anticipation of a precedent being set. So glad that ‘being connected’ means that a small group of idiot lawmakers from Texas can have this power over us.
☝️ Just so we’re all super clear on this, enforcement of this law would roughly look like this: a group of people on Twitter are targeting an individual with hateful, bigoted language. This is in violation of Twitter’s rules, so they remove the posts. Someone in Texas sees this, is in agreement with the hate group, and orders Twitter to put the posts back — otherwise they’ll sue. Twitter’s rules kind of become a bit pointless, and disgusting hate speech remains on the platform.
The only thing really protecting us from enforcement of this law is the blind trust that others won’t speak up if they, or anyone they happen to agree with, are ‘censored’. This really is an unapologetic, mangled interpretation of censorship. Those who insist that platforms are biased and suppress right-wing voices are just confused: actually, it’s the hateful and violent voices which are suppressed — perhaps it just happens to be that most of those voices come from the right?
🔎 Okay enough of that, let’s look into some details
When this law got passed the other day, the attorney general of Texas said “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”
One: he is literally saying that some people should get free speech and some shouldn’t. Two: this demonstrates a fundamental misunderstanding of the role of online platforms, and what censorship is.
Social media platforms are not neutral spaces. They have rules about what you can and cannot do. When Facebook takes down a post, it is not engaging in censorship, it is simply moderating content in line with its own rules.
🐈 Look at it this way: I could start my own social media open to everyone, just like Facebook, but the only rule is that you can only post photos of cats. Anything that wasn’t a cat would be taken down. It’s not censorship, it’s just THE RULES.
I think that often we struggle to make this connection in our minds because traditional social media is sold to us as an open sprawling landscape where you can meet anyone and experience anything and ‘make connections’ and whatever else. Regardless of whether any of these attributes match reality, the fact is, there is no rule out there that says ‘say whatever you want, it’s chill!’ This law gives equal credence to extremist right-wing propaganda and a photo of your frozen margarita.
What we’re looking at here is a group of angry, unpleasant people who not only don’t like the rules, but refuse to even admit that they exist. I feel like if you’re engaging with a platform whose rules just don’t work for you, the best thing to do would be to just find somewhere else — but of course it’s never that simple.
🤡 Big social media is too big to do anything well
So my next point is: maybe they should be neutral. Two reasons:
Staying on top of content moderation in any meaningful way is basically impossible at their scale. We’ve seen this manifest in so many ways; one way being Mudge’s report on Twitter’s bot problem
Private companies should not be in charge of curating THIS MUCH information. I think having a network of people and AI who are actually equipped to make content moderation decisions within all the cultures that a big social media touches would be a whole other ball game, and not one that a company like Facebook wants to play.
These two observations are rooted in the insatiable desire for growth that these companies have. We didn’t need Frances Haugen to tell us that Facebook prioritises engagement and user acquisition over everything else — that’s pretty fucking obvious.
If you’re the biggest social media in the world, any mistakes made by your content moderation systems are going to have larger, much more damaging network effects. Repeatedly exposing a hundred people to white supremacist hate speech is not the same as exposing it to a hundred thousand people.
‘Mistakes’ are of course not always technical. In 2019, Facebook finally banned content relating to white nationalism and separatism, after letting it fester on the platform for years "because we were thinking about broader concepts of nationalism and separatism – things like American pride.” This is clearly a shit take and only proves my point: they are not equipped to make decisions on this — nor should they be allowed to — given their sheer size.
Big platforms have taken to turning to their users to help them shape policy and weigh in on misleading content. Twitter has Birdwatch, which was launched a couple of years ago and has just been expanded. It enables a select group of users to fact-check and write notes on posts that are potentially misleading; stuff like ‘if climate change is real why is it cold outside right now? Think about it’. Facebook recently ran an experiment with 250 users, who were invited to share ideas on how to address content like this too. I don’t disagree with this approach, but it’s definitely too little too late. A big corporation will never really do user participation in the same way as a platform which starts out with those principles from the very beginning. It feels like they’re only doing this now because they’re all out of ideas.
Content moderation systems are already struggling to keep up, and this gruesome little law in Texas only has the potential to make things worse. People think they want a platform where you can say or do anything you like, but then those same people faint in disgust when they realise that all they’ve done is made the water safe for every dimension of adult content to thrive. If you want the internet to be a full-on wild west, you need to have thicker skin than that, I’m afraid.