Wanna know what I've been doing this week? The answer is: vibing.
Despite the vibes, the week has been something I could have done without 👎. Why's that then?
- I had a chat with Lucie Kaffee about the most recent winner of the Turing award: just like large language models, he is racist.
- I don't like it when Big Tech firms further consolidate power via military contracts, I just DON'T okay?
- The Crypto Twins are at it again
🤖 ARmy time!
ARmy = Augmented Reality Army, obviously.
If you somehow still thought that capitalism was democratic then THINK AGAIN. Microsoft are going to build over 120k modified HoloLens headsets, specially for the Pentagon (FYI the 'Pentagon' is the building which is shaped like a Wiccan symbol, where the Americans make all their decisions about what should explode next).
Here are some bullets for your brain (the non-lethal kind):
- Microsoft employees are of course upset because they just wanted to make a cool AR headset, not military weapons — but they have no say because, as mentioned, capitalism is not democratic.
- Using the headset in war will make it cheaper for consumers, eventually. So yes I'm all for war, as long as it keeps the price of heavy, cumbersome, stupid, face-mounted computers down...
- If you, a consumer, had been wondering up until now what a HoloLens could possibly be good for, you've got your answer: you are not the target audience; the point is to train soldiers more effectively, OBVIOUSLY
How many steps away from transhumanism is this? One? Two? None...? 😳
🏆 Business as usual: a racist, white old man won an award
Jeffrey Ullman is one of the most recent winners of the Turing Award, which is undoubtedly the highest recognition a computer scientist can get. This was in part for his foundational work on algorithms, which has maintained influence in the field for decades.
Of course, algorithmic bias is as real as human bias — which Ullman himself suffers from, as demonstrated by his explicit refusal to support Iranian grad students at Stanford. I spoke to Lucie Kaffee, a computer scientist who I have great respect for, about all of this. Here's a quick round-up of our conversation:
GI: Don't you think it's ironic that a racist man won an award for his seminal work in algorithms?
LK: Yes, but it also doesn't surprise me. If you look at the list of Turing Award winners, it's extremely alienating. The Turing Award is the most prestigious one that computer scientists can get, so it should incentivise people to do good work. But we keep rewarding the same kind of people over and over. It's like a panel of judges from Oxbridge only ever giving awards to people who are also from Oxbridge. We need to diversify who we honor in computer science if we want to diversify the space, especially with things like AI.
GI: Recently I haven't been able to shake the idea that we are not ready to teach machines until we, as humans, become 'better'. Therefore, we should stop using AI for so many things — or stop using it entirely. What do you think about THAT, Lucie?
LK: There's no need to stop, just put more effort into making under-represented groups visible. We keep rewarding people who already have a lot of resources to do research. There is not enough recognition for those working outside of European or American universities. These communities do amazing research with limited resources — e.g. without gargantuan models and a million GPUs at their disposal. This research is then much more accessible for other groups, because the lower spec makes it easier to replicate.
GI: I understand that removing bias from machines would mean we would have to remove our own. How do computer scientists mitigate that bias?
LK: Badly. Computer scientists are trained to program things, not to decide on 'how to fix representation in NLP'. I am not educated to make decisions like this. If you ask me to program an AI to 'show you a scientist every day', my first thought might be to tell it to make sure it throws out a woman 50% of the time, and a man 50% of the time. But that doesn't represent reality, because we're still suffering from generations of oppression, resulting in fewer scientists who are women. So what do I do about that? The point is I don't know, and I need a moral philosopher to help me because my area of expertise is computer science. Yet, we assume that computer scientists know everything.
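For the programmers among you, here's a toy sketch of Lucie's dilemma (Python, with numbers I made up entirely for illustration — don't go quoting them):

```python
import random

# Option 1: the naive "fair" sampler -- a 50/50 coin flip.
# Fair on paper, but it pretends generations of oppression never happened.
def naive_fair_scientist():
    return random.choice(["woman", "man"])

# Option 2: mirror the actual (skewed) share of women among scientists.
# The 0.3 is a placeholder, NOT a real statistic.
def mirror_reality_scientist(share_of_women=0.3):
    return "woman" if random.random() < share_of_women else "man"
```

Neither option is obviously right — which is exactly Lucie's point: the hard part isn't the code, it's deciding which of these two lines to write.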
GI: Finally, I feel as though we are in the tech version of 'who can build the tallest building', except it's 'whose language model has the most parameters'. GPT-3 has 175 billion of them, and yet when you play with something like DALL-E the output really is just... an image. Is the power it takes to train these models worth it? I'm asking this because of the environmental risks (something outlined in Timnit Gebru's recent research paper).
LK: I think that having one huge model that can basically do anything means that we will not 'go back' and scale down, but rather continue to scale up. Honestly, if I had the option of training a small model on my own machine myself, or using an API which gives me access to a central, large model which has 99% accuracy... I'll pick the API. But, once again, I am not trained to make consequential decisions about these things — just as we need ethics boards to help us through this work, we also need 'resource committees' to make sure we're not just wasting energy all the time.
End of interview. Lucie had to run off to a real meeting. If you need professional insight on ML or AI, may I suggest you pay for some of Lucie's time — she's very clever. Contact her here.
Lucie is a PhD student at the School of Electronics and Computer Science, University of Southampton, focusing on how to support lower-resourced language communities on Wikipedia and Wikidata. She also makes art.
🚣‍♂️ No more Winkleloss for the Winklevoss
This week, Forbes published this huge and unacceptable ego-stroke for the Twin Capitalists. A double hand job would have been less embarrassing, and more productive. Why do Forbes think the Winklevoss twins are suddenly relevant again? Ah yes, because of that thing I love to hate, the NFT.
The Winklevoss twins are of course famous for being outwitted by Mark Zuckerberg, who basically stole Facebook right out from under them, and have been desperately overcompensating ever since. They own Nifty Gateway, a trading platform for NFTs which recently had its vulnerable security infrastructure exploited, resulting in a lot of NFTs being stolen, to my delight.
The twins are obsessed with the idea that blockchain will save us all — and therefore they should both die in a fiery pit — and say that what we're looking at now is the start of the 'metaverse' where everything is a digital asset. Two things:
- stop trying to make everything digital
- stop trying to make everything... an asset
- Bonus third thing: the fuck is a metaverse, shut up
This Forbes article says the twins now 'find themselves at the center of an antiestablishment movement'. God, I think I just dry-heaved a little. Pro-tip: losing to Mark Zuckerberg does not make you antiestablishment. Rather, it makes you just like everyone else. If Mark did not succeed, and the Winklevoss twins were deemed the true founders of Facebook, you can bet your tight, sexy ass that they would NOT be harping on about decentralised networks and the disruption of real-world economies. No, they would be cashing in on their amazing, billion-dollar idea. Because in case you weren't aware, making billions of dollars is their primary goal.
I leave you with the cringiest, most Silicon Valley part of the article:
“We actually call our employees astronauts,” Cameron says. “We’re all astronauts building on the frontier of money and the frontier of art and the frontier of finance.” Accustomed to finishing his brother’s thoughts, Tyler chimes in: “We feel like we’re on a spaceship, exploring a new frontier.” Michael Del Castillo, Forbes.