Assistant Treasurer Stephen Jones has thrown his support behind an AI regulation model that is directed at activity and harms rather than the technology itself, as the government starts exploring “bespoke” guardrails this year.
Earlier this month, Industry and Science Minister Ed Husic revealed that a yet-to-be-appointed panel of experts will explore new regulatory options for “high risk” AI in Australia in response to calls for tighter rules.
No timeline has been set for the panel, but Mr Husic said he wants it operating as soon as possible, with sweeping regulations being finalised in other jurisdictions.
In an address to lawyers, regulators and technology experts at the UTS Shaping our Future Symposium on Wednesday, Mr Jones said AI is less likely than previous technological breakthroughs to share its benefits widely, and more likely to concentrate risks.
He said the government must mitigate the risks, but overly prescriptive intervention would limit the genuine benefits of AI.
“We cannot foresee everything. And in a sense, you don’t want to. You want to create the environment where the market, private individuals can innovate, explore capacities and opportunities,” Mr Jones said.
He said regulation should apply to activity, not technology.
“If the technology is being used in a way that is harmful, focus on the harms that are being done, not on the technology… That is the way we approach this issue in other areas of the economy.”
Speaking at the same conference, ASIC Chair Joe Longo stressed that many existing laws already apply to the use of AI, and that existing obligations around good governance and the provision of financial services don’t change with new technology.
However, the technology’s opacity and complexity mean there may be gaps in preventing the harms of AI and in offering recourse to victims.
“The point is, there’s a need for transparency and oversight to prevent unfair practices – accidental or intended,” Mr Longo said.
“But can our current regulatory framework ensure that happens? I’m not so sure.”
As the government continues to mull its regulatory options, the European Union has already reached provisional agreement on its AI Act, which will usher in risk-based regulation of AI systems across member states.
The European approach will outright ban uses of AI deemed to pose unacceptable risk, such as social credit scoring and certain forms of biometric surveillance.
Australia’s eventual response will apply existing frameworks like privacy law wherever possible, but is likely to include “bespoke rules” for high-risk applications as well, Mr Jones said.
“It’s pretty hefty work. Even the task of defining what’s high risk and what is not will be a matter of conjecture, but that’s the stuff of government. It’s what we do every day of the week.”