Australia will impose mandatory safeguards on high-risk uses of artificial intelligence, such as autonomous vehicles and healthcare, while making little to no intervention in low-risk uses like spam email filtering, under a new risk-based regulatory approach to be announced by the federal government on Wednesday.
The commitment to mandatory regulation follows crackdowns in the EU and Canada and is expected to require organisations developing and deploying high-risk AI in Australia to have it independently tested before release, disclose to end users when AI is in use, and designate responsibility for AI safety to specific individuals.
An interim advisory panel will be appointed to explore options for the mandatory guardrails, including a potential dedicated AI legislative framework and reforms to existing laws covering privacy, copyright and online safety.