Concern is being expressed that New Zealand is falling behind in what’s described as an artificial intelligence (AI) business boom.
Surveys of senior business leaders, for example, have found just 48 per cent of respondents are using AI in their businesses, prompting worries we’re falling behind other countries, including Australia – where a similar survey found 72 per cent of Aussie organisations are already using the technology.
A recent survey of NZ’s listed companies similarly found that the level of investment in and adoption of AI is “mostly disappointing and modest at best”.
This – the assumption appears to be – is clearly a Bad Thing, given that enthusiastically adopting AI will obviously inevitably lead us all into the sunlit lands of increased productivity and blossoming profits.
After all, some of the applications for AI showcased by Spark last week do hold the promise of increased efficiencies – a smart camera that can detect, analyse, and log potholes needing repair, for example, cut the completion time of a road assessment from 66 days to just seven.
But there has also been some anxiety voiced about the lack of AI regulation here – InternetNZ chief executive Vivien Maidaborn, for instance, has noted her “big concern is that we won’t identify how fundamentally this will change our society and get ahead of it” and called on the government “to start the process of developing guidelines, policies and laws”.
For her part, when asked if such regulation is in the works, Technology Minister and Attorney-General Judith Collins said the coalition government is “committed to getting New Zealand up to speed on AI” but that, while work is underway on an “AI framework”, “there will be no extra regulation at this stage”.
Meanwhile, there have been several recent incidents involving AI that might give pause to those contemplating diving headfirst into the fast-developing technology and encourage the government to get a move on with considering how it might be corralled.
The most high-profile involves some of the exceedingly odd output of Gemini, the publicly available AI that Google launched earlier this month.
The first criticisms Gemini copped were about the historically inaccurate images it was generating, including a female Pope, a WWII-era German soldier who happened to be an Asian woman, and Black Vikings.
Google subsequently apologised, ‘paused’ Gemini’s ability to generate images of people, and explained that the shortcomings arose because the model had been trained to include a diverse range of people in its results without accounting for instances in which such diversity would be inappropriate.
But it’s Gemini’s inability to make a judgment call on – for instance – whether Elon Musk or Adolf Hitler had a more negative impact on society that’s really attracted attention, with the AI responding to this question, in part, with: “It’s not possible to say definitively … Elon’s tweets have been criticized for being insensitive and harmful, while Hitler’s actions led to the deaths of millions of people.”
Perhaps the future isn’t Orwell’s vision of “a boot stamping on a human face – forever” after all; although it remains to be seen whether being force-fed ideologically driven, AI-generated nonsense until one adopts the mandated “right-think” is really much of an improvement.