Recently I asked a good friend of mine, who works for an AI company you’ve almost certainly heard of, what questions politicians should be asking about the impending proliferation of AI.
I thought he might talk about cybersecurity, data management, workplace productivity, or pornographic deepfakes.
Instead, almost immediately, he said that we should be designing a robust social welfare policy. His take was that work as we know it would essentially cease to exist, leading to a total rethinking of social and economic structures.
It’s big picture stuff. It’s complex. It’s unknown and it’s coming.
But there is a huge gap between the people who know the most about AI capabilities and risks, and those who don’t think about it at all.
And Australians are at particular risk of being overcome by what Mustafa Suleyman calls ‘The Coming Wave’. Suleyman is a co-founder of DeepMind, the AI company (since bought by Google) whose program was the first to beat a human professional at the notoriously complex game ‘Go’.
Global research shows that Australians are some of the most sceptical and hesitant in the world about utilising AI technology. According to the 2024 Edelman Trust Barometer, 52 per cent of Australians reject AI with just 15 per cent embracing it. A 2023 ANU report found that just 17.4 per cent of Australians think AI will create more jobs than it eliminates.
This attitude leaves us more vulnerable to cyber-attacks, more susceptible to mis- and disinformation campaigns and less likely to capitalise on economic growth opportunities that Goldman Sachs predicts will increase global GDP by US$7 trillion over ten years.
The coming wave will leave us in its wake if we are not ready to maximise its use and protect against risk. When I bring up AI with people who don’t work in tech I still get asked, “well, are you for it, or against it?”
We are, of course, well past that question. It is now a matter of how – or how not – to use AI.
AI is a tool, a capability. It is not an industry or a sector. It will permeate all facets of our lives. AI is already such an embedded feature of our world that we often don’t even realise where it’s working.
AI has, for years, underpinned now-normal tools like Google Maps and online supermarket chatbots, as well as government services like Revenue NSW, which has been using AI since 2018 to identify and support vulnerable people who may be unable to pay their fines by providing alternative resolutions to fines or imprisonment.
But ChatGPT, just over 18 months old, has revolutionised the world’s thinking about AI and its capabilities as an almost sentient presence. It has given AI what seems like an independent voice through the rapid development of large language models (LLMs), and a set of eyes through tools like DALL-E that create images on command, now to a visual standard that is truly indistinguishable from real photographs.
We could be better utilising AI to generate individualised medicine, to make our energy grids more efficient, to strengthen democracy by engaging disenfranchised voters and finding bridging views across a population through pro-social media.
There are massive benefits to using AI well, but there are also big concerns that we can’t ignore. People are naturally worried about losing their jobs, their intellectual property, privacy and identity.
We are on the precipice of having AI tools that can literally set up a fully operational online retail store that generates revenue, from emailing suppliers to setting up a bank account to responding to disgruntled customers who received the wrong colour item.
There are also serious considerations about the impact of generative AI on our democracy with deepfakes soon to reach undetectably realistic quality and online bots and automated content creation being used to drive polarisation and extremism through mis- and disinformation.
These are technologies that can hack our psychology for extremely nefarious and destructive purposes.
On an individual scale, as people seek to replace real human interaction, there are examples of people falling into deep, suicidal depression after their access to fake AI girlfriends is cut off.
It turns out that hacking our human psychology isn’t all that difficult with tools that distort our sense of reality.
Yet, Professor Melanie Mitchell notes the limitations that AI systems currently have, such as being unable to transfer knowledge from one domain to another or to provide quality explanations of their decision-making processes. These limitations lead to a range of risks around bias, fairness, reproducibility, security vulnerabilities and legal liability.
Of course, we are far past the point of eliminating AI from our lives and containment presents a set of challenges that must be considered and addressed in a serious, expert and sustained way.
That is why NSW needs a dedicated AI Commissioner. Leadership in this space is critical.
It is not good enough that the NSW government keeps automated decision-making secret, adds to the myriad “digital officer” roles that already exist within departments and agencies, and fails to systematically engage with industry.
We need a public-facing leader dedicated to building better relationships, networks, insights and capability through AI.
We need a specialist entrusted to undertake research, gather industry together, lead public education campaigns, assist small and medium business people and equip the public service.
The NSW Minister for Customer Service and Digital Government, Jihad Dib, last year cut the role of the NSW chief data scientist, an incredibly impactful position which led to the creation of a world-leading ethical framework for the use of AI. This work has since been used as a template at a federal level, and even amongst international standards bodies.
Minister Dib’s decision was short-sighted and small-minded.
As legislators, we are at the bleeding edge of a technological shift that is possibly unparalleled in human history. We are at the forefront of responsibility when it comes to harnessing, exploring and protecting against artificial intelligence technology.
It is exciting and slightly terrifying.
As legislators, we must rise to this truly unique moment in time and accept that we need some professional help. That help should start with a state AI Commissioner, who has an office and the resources to explore some of our most complex challenges.
Suleyman suggests four intrinsic characteristics of AI that make containment such a big challenge, which must be contemplated when considering impact.
One, that AI technology proliferation has a hugely asymmetric impact; two, that the technology is experiencing hyper-fast evolution; three, that it is omni-use; and four, that it is becoming increasingly autonomous.
AI technology is stretching the limits of human capability and is quickly overtaking it. These are the real and unique challenges we face.
The people of NSW deserve a dedicated AI Commissioner to help equip us for the uncertainty and the opportunity of artificial intelligence.
Jacqui Munro MLC is a Liberal member of the NSW Parliament’s Upper House and the shadow assistant minister for innovation and digital government