Shortcuts / 07 June 2023

Artificial intelligence

Artificial intelligence, or AI, has been getting quite a bit of really bad press lately, with experts warning it could ultimately spell the end of the human race. So in this Squiz Shortcut, we take a look at what AI is, why some of the biggest names in tech are so worried, and what governments are going to do about it.

Remind me what AI is again…
It’s generally defined as anything you can get a machine to do that would have previously required human intelligence. Scientists have been working for decades to try to replicate some human attributes – like the way we reason, communicate, and problem-solve – in computers.

What are some other examples of AI?
So a self-driving car is one, and by now we’re all probably familiar with voice-activated assistants like Apple’s Siri or Amazon’s Alexa.

What about ChatGPT?
Yep, that’s another good example of AI. Along with its main chatbot rival, Bard, ChatGPT has been in the news constantly in recent months and blowing people’s minds with the things it can do. It’s considered a huge development in the AI world.

Why’s that?
Because it’s incredibly good at adapting and improving its intelligence with training. So we’ve seen ChatGPT is capable of writing everything from uni essays to computer code, and even songs and poems.

What do the tech titans have to say about these developments?
So a couple of weeks ago they put out a pretty extraordinary statement. Some of the people involved in this include Sam Altman – the guy behind ChatGPT – as well as top executives from Google and Microsoft and hundreds of top academics.

What did it say?
It was just a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Woah…
It sounds a bit overcooked – and not everyone agrees with that warning. AI expert Meredith Whittaker mocked the statement, saying tech leaders were really overpromising their product if they thought it could destroy the world.

But there’s something there, right?
There is – because Altman and others who signed the statement have previously called for the regulation of AI. This time they’ve grabbed the world’s attention by raising this Armageddon scenario.

Take me through the doom and gloom point of view…
So just a month ago British computer scientist Geoffrey Hinton – who’s considered the ‘Godfather of AI’ and also signed the statement – made headlines after he quit Google and expressed regret over parts of his life’s work. He fears the uninhibited growth of AI could lead to killer robots that are smarter than humans.

That sounds pretty bad…
It’s not great, and he’s repeatedly warned the UK’s Defence Ministry about these potentially lethal killing machines – he says it’s “hard to see how you can prevent the bad actors from using it for bad things”.

And is that a legit concern?
Well, there was a recent story about a drone that attacked its operator – but it turned out to be false.

What was the deal?
So a US Air Force colonel was giving a speech at a big aeronautical conference, and he described an experiment in which an AI-enabled drone was given a mission to destroy a missile-launch site. The drone’s operator kept stopping it from doing that, so after repeated attempts, the drone attacked the control tower it was getting those orders from so it could complete the mission.

Except that never happened?
Nope. It sent the internet into a bit of a meltdown but the colonel came out to say he misspoke – he wasn’t talking about an experiment that had actually happened, just a scenario they’d been workshopping. But he also said it was a “plausible outcome”.

Umm, that still doesn’t sound great…
But the good news is that that’s the worst-case, robots-take-over-the-planet type of scenario. Most of the concerns that drove all these tech people and scientists to sign the statement are a lot closer to home.

Like what?
So Microsoft President Brad Smith says deepfakes are a big concern for him because they could be used to deliberately mislead people.

You mean like the Pope in the puffer jacket?
Exactly… That’s a great example of a deepfake image – and while most people reckon that was just a bit of fun, there’s another recent example that’s more sinister.

Do tell…
There was an AI-generated image purporting to show an explosion near the Pentagon that went viral on Twitter. It appeared to show plumes of black smoke near a building that bore only a passing resemblance to the Pentagon. It was all done with an online tool that can create those types of images.

And people didn’t realise it was fake?
Not straight away. Within minutes of the original tweet being published, it was picked up by some big accounts and even retweeted by Russian state media. Wall Street dipped briefly before reputable sources stepped in to say it wasn’t true.

Wowsers…
Yep, so you can see how just one AI-generated image can fuel fear, affect the stock market and be used as propaganda by a bad actor like Russia.

What are some other concerns about AI?
Experts are worried about what kind of impact it could have on society as it wipes out some jobs, as well as the possibility of AI chatbots being used to groom children to commit terrorist acts. Then you’ve got some doctors fearing an over-reliance on AI for diagnosis and treatment.

So what should we do about it?
That’s the huge question. Our federal Minister for Industry and Science, Ed Husic, released a report on all this last week in which he called for public and industry feedback on how AI should be regulated.

Right. But I’m guessing Australia’s not really a big player in the AI game?
No – most of the big AI developments are coming out of the US, China and the EU, so whatever happens here can’t happen in isolation. It’s definitely a field where the experts would prefer like-minded countries to move together.

So what’s happening overseas?
One of the big areas being looked at in the UK is medicine, including new rules requiring medical devices that use AI to go through a full system of trials, similar to how drugs are regulated to ensure they’re safe.

Sounds fair…
And you also hear a lot of people talking about ‘guardrails’ around AI. So for example in medicine, that might be defining where assistance from AI should begin and end and when a human medical professional needs to get involved.

Any other discussions about AI regulation to note?
The US Congress has been debating how to crack down on deepfakes for a few years now – some think legislation should be used to force software companies to watermark all their images – but others reckon that isn’t feasible.

It all sounds quite tricky…
It is – but a few months ago, Google did stop people from using one of its data tools that can generate deepfakes. And Meta, TikTok and YouTube are banning deepfakes that are intentionally misleading on their platforms.

So they’re making the final judgement call…
Yep, which is a whole other point of concern for some. But it really shows just how worried experts are about deepfakes – testing has shown that even people who study this stuff will fail to pick a fake image about one-fifth of the time.

That’s not a good sign…
Nope, and it’s why a lot of people in the know are saying we should put the brakes on, agree on a few core principles and get a lot tougher with approval processes before new AI technologies leave the lab – because it’s hard to rein them in once they’re out in the ether.

Squiz recommends:

A short clip of 60 Minutes reporter Tom Steinfort talking to an AI robot

How to Spot an AI-Generated Image Like the ‘Balenciaga Pope’ – TIME Magazine
