🤷 Uncanny Moments
Google announce new technology to scare and annoy | Tesla and Bitcoin need to take a break | Thank you for those search results... how about a chat?
Hey there gut sacks and welcome to your weekly tech-themed gossip column. I had trouble concentrating this week because it was my birthday, so I was getting drunk a lot. Luckily this content is still all free so I don't have a duty to make it good.
This week was: sort of okay I guess 🤷. Because:
Google I/O was this week — prepare yourself for the future
Google has also released a very interesting research paper — prepare for people to be fired
Tesla and Bitcoin are having a conscious uncoupling — prepare to roll your eyes
📣 Google I/NOOOOOOO
This week was Google's developer conference, so naturally they announced a bunch of new pieces of tech that are either boring, pointless, or creepy. Let's focus on the creepy. Here's a fake, uncanny series of moving images of someone's child:
This new feature of Google Photos is called Cinematic Moments, and luckily does not have a launch date yet. I say luckily because it is quite clearly born out of our collective waking nightmares. As you may have guessed, these are just photos stitched together, with a machine filling in the gaps, to make an almost believable-looking — but mostly upsetting — 'video'.
Wow, what a weird and unnecessary flex of their AI capabilities. What disturbs me about this is that this use case is utterly trivial to Google. This is just something they do to entertain and patronise users to justify all those juicy military contracts, obviously. Lest we forget that the only way technology 'gets better' is by using it to kill people more efficiently (otherwise how will you know it's working?)
But it's okay because the head of AI ethics at Google just announced that she will be doubling the size of her team. You know what they say, the best way to start doubling a team is by firing people. I assume this time they'll just hire Keir Starmer's shadow cabinet — they are polite, quiet, and don't seem to have any strong opinions on anything.
🔎 It's time for a chit chat about your closest Starbucks
Some researchers at Google want to change the way Search works. They say we should ditch the ranking method and just use one single large language model (like GPT-3, or the next round of BERT). I say: good luck with that.
How it works now: by constantly crawling the web for new pages and content, and then using an algorithm called PageRank, which scores pages by the number and quality of the links pointing at them. Apparently this hasn't changed much since the very beginning of Search.
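For the curious: the core idea behind PageRank fits in a few lines. Below is a toy sketch of the textbook power-iteration version — a page's score depends on the scores of the pages linking to it — and emphatically not Google's actual production system, which layers hundreds of other signals on top.

```python
# Toy sketch of the PageRank idea: a page's score depends on the scores
# of the pages that link to it. Textbook power iteration, nothing more.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        # every page keeps a small baseline score (the 'random surfer')
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # dangling page: spread its score evenly over everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # otherwise, split its score among the pages it links to
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Three pages: 'a' and 'c' both link to 'b', so 'b' ends up ranked highest.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["b"]})
print(max(ranks, key=ranks.get))
```

Run it and 'b' wins, because two pages vouch for it — which is the whole pitch: links are votes, and votes from important pages count for more.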
How it could work: ask a question, and get an answer. The language model should 'understand' the query and then construct an answer based on content within relevant pages. This already sort of happens with basic stuff — you ask Google how tall Beyonce is and it will just tell you without making you trawl through an article for the answer.
Sounds fine if you want to smooth over every search into one tiny interaction that might not even be accurate. Also, these other issues come to mind:
We all know that language models such as GPT-3 sound smart, but are actually really dumb. If used like this in Search, will it just parrot misinformation on existing websites? Or get the wording wrong on complex concepts? Or some other thing we cannot foresee?
The web is full of both great things and hateful things. GPT-3 is trained on both, so it ends up emulating the hate and the bias along with everything else. The researchers say the goal is to eventually train future models that keep a record of where their words come from.
This one might just be me, but the less of my time spent having fruitless conversations with dumb robots the better. I am an ADULT and should be treated as such. Eats Peanut M&Ms for breakfast.
A quick note on the rise of conversational AI: I think my last point is especially key because Google have also announced LaMDA, a new language model that is trained on dialogue, and not on general web text. This way it can talk to you like a human, instead of regurgitating information. Maybe I am old fashioned, but I think machines should stay behaving like machines. The more a machine can emulate a human, the more capitalism (or just bad actors in general) will leverage this... and the less humans get paid.
💸 Well waddya know, Bitcoin is down
I'm SICK of trying to predict what the crypto market is doing — can someone just explain it to me? Oh wait, I think it was Elon again...
In case you are completely unaware of how hypocritical this is, here is a rundown of events that happened prior to this tweet:
Tesla bought $1.5bn worth of Bitcoin in February, and announced that they will be accepting Bitcoin as payment going forward.
Elon Musk proceeded to talk about how hard Bitcoin makes him over various social media platforms.
Bitcoin then went up — very up.
Tesla sold 10% of their giant lump of Bitcoin, making a profit of $101m.
📅 That leads us to the present day: Tesla will no longer accept Bitcoin payments because it's bad for the environment. So basically, Bitcoin's environmental impact just so happens to start mattering now that they no longer care about Bitcoin — having made a huge stonking wedge of profit from it.
What we can take away from this: the benefits of Bitcoin (getting rich) are reserved for Elon Musk. The downsides of Bitcoin are reserved for... the rest of the planet. Atmosphere? What atmosphere?