The federal government’s recently released interim response to regulating artificial intelligence is (understandably) short on detail, but appears to say all the right things in an attempt to balance the need for innovation with the desire to maintain a semblance of control over a scary new technology. It’s a tough gig, and regulating disruption is easy to get wrong.
The challenge for policy makers lies in understanding the world as it might become, and when it comes to technology, governments can find it difficult to articulate that long-term view.
This is particularly the case when trying to regulate tools which transcend their departments and portfolios, and which cut across culture, industry and the broader economy.
Who can forget that the first automobiles were regulated by having a man walk in front of each vehicle holding a red flag – to warn other road users, and to limit the speed at which the automobile could travel.
That regulatory approach in the late 19th century reflected the world view of the time, embedding known understandings of the urban environment and seeking to protect its occupants.
Subsequent motor vehicle regulation continued in that vein, focusing on individual vehicles and their impact, particularly around human safety.
What was missed in that approach was an appreciation of how the motor vehicle would have a much broader and more dramatic impact on social and economic ways of being.
It’s unlikely that the bureaucrats who created the red flag rule could have imagined the world a century later, largely shaped by the possibilities of mechanised transport.
Which brings us to AI. The world’s governments are torn between the promised economic upsides and the potential for great harm. In that context, the government’s risk-based approach seems, on the surface, appropriately sensible. But genuine questions remain around how those risks are articulated and best addressed.
Inherent in the response is an understandable privileging of human beings. In many instances, the issue raised is not attributable to AI per se, but to us. We know that AI can be biased and discriminate against certain sectors of society, but humans are very good at that all by themselves, and surely the ‘unacceptable risk’ of AI social scoring should be more about the social scoring than the AI.
There is little point in only regulating AI when the issue is a human one as well.
Returning to our automobile analogy, the government is proposing a regulatory framework for automated vehicles. But rather than being only concerned with AI driving, the approach should focus on regulating for safe driving.
Our experience of autopilots in passenger aircraft suggests there is real potential for self-driving cars to be safer than the status quo, so our policies should apply the same standards of ability and judgement whether drivers are human or machine.
There is also a reluctance to accept AI’s imminent widespread adoption, when the reality is that every piece of software seems to have AI capabilities built into its roadmap, from office productivity tools to smartphone operating systems.
Very soon, AI will be a normal part of our everyday lives – it will be in our work and our homes, our schools, offices and factories. The government response plays this down, seemingly denying this reality and suggesting that we will be able to pick and choose when and where we use AI.
This response reminds me of 1990s attempts to regulate the internet, misreading its growth and broad, deep, global impact.
For example, the report discusses the newly released Australian Framework for Generative AI in Schools, which has been developed to “guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society”.
This is the wrong emphasis. Whilst responsible and ethical AI usage is necessary, there is a real need to focus on repurposing our educational institutions to ensure that our students are ready to live and work in a brave new world.
And what’s missing is that broader understanding of what a brave new world might look like, a vision for a world where we are living and working alongside intelligent machines. We need a conversation about what would enable us to be comfortable and build the trust we need in such a world.
Such a vision currently exists only in science fiction, and I hesitate to put the onus on government alone to answer these philosophical questions, but a truly principles-based policy needs to present a vision for the long term – one that extends beyond the next election cycle.
Right now, we probably do need the metaphorical man with the red flag. But we also need to imagine our possible futures, and be happy with where we are heading.
Professor Sherman Young is the Deputy Vice-Chancellor Education and Vice-President at RMIT University. His research is focused on the impact of new technologies.