
Anatomy of ‘AI’ stupidity

As regular readers will hopefully recall, yesterday’s editorial described emerging issues with Google’s use of so-called “artificial intelligence” [AI] in its search function.

For those who came in late, here’s a brief recap: the world’s largest internet search company has added an option called AI Overviews that provides an AI-generated summary of search results for those of us who don’t wish to go to the enormous effort of clicking on a few links.

The problem is, along with that all-important convenience comes the possibility that said summary will be complete bulldust.

A search query about the issue of cheese not sticking to pizza, for example, has resulted in the ‘helpful’ suggestion of adding “about 1/8 cup of non-toxic glue to the sauce to give it more tackiness” [at least it advised using an adhesive that isn’t actively poisonous but, still, please don’t be tempted to try this – it tastes awful].

A question about smoking while pregnant received the response “Doctors recommend smoking 2-3 cigarettes per day during pregnancy” [in the unlikely event anyone reading this takes that seriously, let’s be clear – absolutely NO cigarettes should be consumed while pregnant, and ideally not at any other time either].

How concerned one is about the above examples probably dovetails nicely with one’s view about how much capacity human beings have for being really, really stupid. But there were other results that were more worrying because they weren’t so obviously moronic – like the hint about cleaning your washing machine with two liquids that, when combined, create the chlorine gas that took such a horrific toll on soldiers during World War One.

Google is trying to fix the issue but, unsettlingly, this bug is apparently also an “inherent feature” – in the words of Google chief executive Sundar Pichai – of the large language models [LLMs] that drive AI Overviews. That’s because they work by predicting what words would best answer the questions asked of them, based on the data [eg, large swathes of internet content, much of it of debatable factual merit] they’ve been trained on.

What’s more, if they can’t find the information they’re looking for, LLMs are prone to just making stuff up – a phenomenon that’s been dubbed “hallucination”.

As such, all Google can currently do is take what many tech publications are derisively describing as a “whack-a-mole” approach to dealing with the problem – manually removing the nutty AI summaries it is aware of. Given Google handles billions of searches every day, this appears to be a task of Sisyphean proportions.

It can only be hoped that this episode inspires a rethink about our frequently blind faith in the output of amazing machines that are beyond the comprehension of the vast majority of us. Take computer modelling, for example [please!]. Although it’s one of the lowest forms of scientific evidence, computer modelling has in recent years been elevated to something like a secular religion and used to justify policy approaches to a number of pressing problems – including the Covid-19 pandemic and climate change.

It bears remembering that the Achilles’ heel of both AI and computer modelling is that – even with their incredible computational power – they’re entirely reliant on the input provided by all-too-fallible humans.

Put garbage in, and you’ll get garbage out.

Like AI Overviews’ response to the query ‘fruits that end with um’: “According to Quora, the following fruits end with ‘um’ – Applum, Bananum, Strawberrum, Tomatum, and Coconut.”
