Australia proposes mandatory guardrails for AI

The Albanese government has proposed ten mandatory guardrails for artificial intelligence in high-risk settings as it moves to curb dangers posed by the nascent technology.

A standalone Artificial Intelligence Act similar to regulation already introduced in the European Union is one of three approaches being considered after more than 12 months of consultation.

Industry and Science Minister Ed Husic will release a paper proposing the mandatory guardrails on Thursday morning, rounding out a seven-month development process led by a group of 12 AI experts.

But another round of consultations will now take place, meaning any regulation won’t come into effect before 2025 – more than two years after OpenAI’s ChatGPT burst onto the scene.

A voluntary AI safety standard, also to be released on Thursday, will act as a stopgap, allowing organisations to begin adopting best practice while the government considers its preferred legislative approach.

Consistent with Canada and the EU, the proposed guardrails will require organisations developing or deploying high-risk AI systems to take steps to ensure products reduce the likelihood of harms.

The risk-based approach that has been developed emphasises testing, transparency and accountability, including the labelling of AI systems and the testing of products before and after release. The full list is:

  1. Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulator compliance
  2. Establish and implement a risk management process to identify and mitigate risks
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
  7. Establish processes for people impacted by AI systems to challenge use or outcomes
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
  9. Keep and maintain records to allow third parties to assess compliance with guardrails
  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails

Three legislative options have been proposed, including reforms to existing laws in areas like privacy, copyright and online safety, new framework legislation or a new cross-economy Artificial Intelligence Act.

Submissions to last year’s consultation found that at least ten legislative frameworks may require amendments to respond to applications of AI, according to the government’s interim response.

The government has already sought to address AI risk by adapting existing laws, most recently with the passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill last month.

The proposed definition of high-risk AI uses takes into account any adverse impacts on individual and collective rights, as recognised under Australian human rights law, as well as on people’s physical and mental health.

Announcing the proposed guardrails and voluntary standard, Mr Husic said AI was “one of the most complex policy challenges facing governments the world over”.

“For a long time, there was a pretty permissive view about the technology and the fact that it could be developed without government intervention, but those days are well and truly over,” he said.

The Australian Information Industry Association and the Tech Council of Australia, two of three observers of the AI Expert Group, have welcomed the proposed guardrails and voluntary AI safety standard.

AIIA chief executive Simon Bush said that mandatory guardrails based on testing, transparency and accountability would ensure that Australia’s approach aligns with international best practice.

But with few remaining parliamentary sitting days this year, he has urged the government to move fast so that the guardrails and standards can be in place as soon as possible.

Even if the government introduced legislation immediately after the next consultation round, it is only likely to pass early next year – just ahead of the federal election before May 2025.

“Now that these proposals have been released, we urge the government to move as quickly as possible to enact the mandatory guardrails, providing the industry with confidence and certainty,” Mr Bush said.

Mr Bush added that “it is crucial to establish these guardrails promptly to facilitate AI adoption” in Australia, which is already considered a laggard compared with other developed nations.

Mr Husic would not be drawn on the government’s timeline when asked on Thursday, saying only that the government would “take some time to implement” the outcome of the four-week consultation now underway.

But he said businesses “don’t need to wait for this work to occur”, with the AI safety standard taking effect immediately.

“Businesses can get cracking right away on safely and responsibly using AI. This gives them the time to prepare for the… guardrails, and it’ll give Australians peace of mind that protections are being put in place.”

Mr Husic cited new research from the National AI Centre showing that around 80 per cent of businesses think they’re doing the right thing, but only 30 per cent are following best practice.

Australians are the most nervous about AI of any country, with just under 70 per cent of respondents worried about the implications of the technology, according to an Ipsos survey last year.

“What the Australian government wants to do is create that bridge between best intention and best practice, so a lot of what we’re releasing today is designed to do just that,” Mr Husic said.

Amid an extended productivity slowdown, Treasurer Jim Chalmers in his 2024 Curtin Oration in Melbourne last week stressed the importance of AI to Australia’s future growth.

“By 2030, AI could contribute up to $4.4 trillion to the global economy – more than the current output of the United Kingdom,” he said in the address to the John Curtin Research Centre.

The AI Expert Group and its industry observers began canvassing options and thresholds for regulating AI shortly after the government released its interim response to consultations on AI safety in January.

Among the experts are UNSW’s AI Institute chief scientist Toby Walsh, the CSIRO’s former chief scientist Bronwyn Fox and Australian Institute for Machine Learning director Simon Lucey.

The group, which had its term extended to the end of September, remains temporary, although the government has committed to establish a permanent AI advisory body.

DISR’s Science and Technology Group deputy secretary Helen Wilson told a Senate committee last month that the “permanent body will have a much broader remit and have different skills on it”.

Updated at 11:00 am to include comments from Mr Husic and immediate reaction from industry

Do you know more? Contact James Riley via Email.