πŸ€·πŸ»β€β™€οΈ MS Health

Prioritise AI, forget everything else | A responsible ML initiative | A very wealthy robot

πŸ€·πŸ»β€β™€οΈ MS Health

Sorry I'm late, I went to BRIGHTON yesterday and DRANK BEER. Am I stressfully catching up with everything today? Yes, but at least I'm free.

This week was sort of okay I guess 🀷. It's one of those situations where the good and bad just balance each other out:

  • Twitter have actually realised that their actions have consequences
  • Microsoft are tightening their grip on privatised healthcare
  • Something something a robot is better at NFTs than you are

🩺 Healthcare: important. AI: not fit for purpose

And yet, here we are. The more I read and write about AI, the more I think it's not ready for most of its current applications. But those with power have other ideas. I guess they're the ones in charge β€” what the hell do I know?

I'm complaining about this today because this week, Microsoft announced that it will acquire Nuance, who specialise in conversational AI. So that's stuff like natural language understanding, speech recognition, text-to-speech, etc. For some reason, their technology is used a lot in healthcare. They call it 'cloud-based ambient clinical intelligence for healthcare providers'. If you're not sure what that means, then you and I have at least one thing in common.

Here's what the CEO of Microsoft has to say about it:

"Nuance provides the AI layer at the healthcare point of delivery and is a pioneer in the real-world application of enterprise AI. AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud for Healthcare and Nuance." Satya Nadella, CEO, Microsoft.

Hmm okay and here's my steaming pile of hot takes:

  • Healthcare does not need an AI layer. What it does need is a 'free for everyone no matter what' layer.
  • AI is only technology's most important priority because people like Satya Nadella say it is. Why can't we prioritise thinking about how to use AI without upsetting everyone, instead of just throwing AI at everything and seeing what sticks??
  • Privatised healthcare is trash and belongs in the bin with all the other trash. Microsoft are helping maintain trash. And they're GOOD at it.

This is what happens when you let evil caricatures make all the decisions.

😬 Twitter are... trying. At least they're trying.

When the other social media giant, Facebook, 'try', all they really do is make the playground safer for advertisers while the rest of us sit here wishing we hadn't spent our early 20s training their AI by tagging our friends in all those photos.

Okay but Twitter: this week they announced a Responsible Machine Learning Initiative. With this, they want to reduce the potential harms of algorithmic decisions, and take more responsibility for them. Taken straight from the article I just linked, they will analyse the following, and make it public:

  • A gender and racial bias analysis of our image cropping (saliency) algorithm
  • A fairness assessment of our Home timeline recommendations across racial subgroups
  • An analysis of content recommendations for different political ideologies across seven countries
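
If, like me, you're wondering what a 'fairness assessment across racial subgroups' actually involves, here's a toy sketch of the general idea: check whether the algorithm behaves the same way for every group. Everything below (the groups, the numbers, the keeps_face labels) is invented for illustration; Twitter's real analysis will be far more involved.

```python
# Toy sketch: does a saliency-based image cropper keep people's faces in the
# crop at the same rate across demographic groups? All data here is made up.
from collections import defaultdict

def crop_rates_by_group(records):
    """For each group, the fraction of images where the crop kept the face."""
    kept, total = defaultdict(int), defaultdict(int)
    for group, keeps_face in records:
        total[group] += 1
        kept[group] += keeps_face  # True counts as 1, False as 0
    return {group: kept[group] / total[group] for group in total}

# (group, did the crop keep this person's face?) -- hypothetical labels
records = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]

print(crop_rates_by_group(records))
# roughly {'group_a': 0.67, 'group_b': 0.33} -- that gap is the bias being measured
```

A 'fair' cropper would produce roughly equal rates across groups; the interesting (and hard) part is everything this sketch skips, like where the labels come from and what counts as an acceptable gap.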

Agree: you most certainly should have a rigorous process behind how you use algorithms in your products, especially if you have as much power as Twitter.

Disagree: just like when they banned Trump, this is too little, too late. ML is not new, and surely the people at Twitter were smart enough to realise the potential outcomes of making algorithmic decisions at this scale.

Disagree STRONGLY: I imagine that the outputs of this initiative will be wordy, very academic-sounding reports that hardly anyone will read. If the people who are affected most by bad algorithmic choices don't even get to learn the results of this analysis, then this is not true transparency, this is just more fucking around to look good.

MY SOLUTION: hire me to write the reports, I'm very good at being clear, friendly, and accessible.

MY MORE SERIOUS SOLUTION: if they really, really cared about the potential harms and impacts of ML in their product, they would stop using machines to power recommendations entirely. They are attempting to shine a light into a black box when really they should just be throwing the black box out of the fucking window.


πŸ€Ήβ€β™€οΈHere's some more stuff that happened/was inflicted upon us this week

On-device deepfakes: do you know what's annoying about deepfakes? It's that you cannot just make and store them on your phone. That's the problem with deepfakes. That and that alone. A company called Avatarify have the solution. This year so far, 140 million deepfaked videos have been created with Avatarify (dumb name), and they've ONLY JUST realised that they should be watermarking these so that viewers know they are fake.

Peak horrific: the humanoid robot called Sophia has been selling NFTs and is now going to launch a music career, while the human content creators (who need to eat to survive) bend over backwards just trying to get noticed. Hilarious!

How to show you care: this issue has contained a lot of chat about AI Ethics. MIT Technology Review have written a little guide for organisations who want to demonstrate that they know what they're doing with AI, and will not use it to unintentionally dismantle civilisation as we know it.

Thank you for reading. If you're in the UK, I hope you spend the weekend getting utterly shit-faced in a beer garden somewhere. You deserve it.

Love from Georgia


πŸ“£ If you have a cool or funny story to put in Horrific/Terrific please tweet me, or even send a DM