There is now a “once-in-a-generation” challenge to ensure technology is developed and rolled out fairly and equitably, so that harmful programs like robodebt are avoided in the future, Human Rights Commissioner Ed Santow says.
The Australian Human Rights Commission’s report on human rights and technology was tabled in federal Parliament this week, with 38 recommendations to government on how to ensure new technologies are used by governments and the private sector in a fair, inclusive and accountable way.
The recommendations included the establishment of an AI Safety Commissioner, a moratorium on the use of facial recognition in high-stakes situations until adequate regulation is in place, and legislation to ensure accountability around AI-informed decision-making.
A key driving force behind the report is ensuring there are no repeats of the federal government’s disastrous robodebt program, in which automated and AI-driven decision-making was used to issue debt notices to welfare recipients.
“It’s about learning the lessons from robodebt. The Commonwealth Ombudsman and a number of parliamentary inquiries have raked over what went wrong, and what we need to do now is make sure that when the government uses AI and automation in important decision-making, it is fair, inclusive and accountable,” Mr Santow told InnovationAus.
“There are ways to lean into new technologies that will help drive better decisions, and more data-driven decisions. We need to make sure that basic requirements like anti-discrimination law are adhered to and we don’t have opaque forms of decision-making. If something goes wrong you need to be able to get to the bottom of what went wrong and have the decision reviewed.”
There is now a “once-in-a-generation” challenge and opportunity to develop proper regulation of emerging technologies, mitigate their risks and ensure they benefit all members of the community, Mr Santow said.
Three key factors are coming to a head this year, presenting opportunities for governments to act, he said.
“The first is regulation. We’ve seen overseas in particular that countries are getting really serious about making sure citizens are protected against harm,” Mr Santow told InnovationAus.
“There has been a bit of a regulatory lag over the last 15 years or so as AI has been developed, but that’s changing. That means that companies and governments will really need to take this more seriously.”
The second is the increased reputational risk for companies that use AI and other technologies unfairly, while the third is the commercial imperative for change.
“The bank that uses an algorithm to make loan decisions and, because of the system, incorrectly identifies women as on the whole worse customers – we care because it’s unfair and discriminatory, but you’re also losing a good customer,” he said.
“If companies and government agencies don’t get their heads around that then they’re putting themselves at commercial risk. This is the year when those three phenomena come to a head and it makes it very timely to address these issues.”
A key element of the report’s recommendations is the need for transparency and accountability in the use of emerging technologies, rather than simply a halt to their use.
“We need government to use technology in a smart way and in a way that is going to demonstrably protect people’s human rights. I think in turn that means being really open with the public about showing that the government understands what some of the risks are when you use automation or AI,” Mr Santow said.
“It’s not about suggesting there are no risks, it’s about explaining why you’re using automation and AI, what the benefits are, and what you’re going to do to address those risks. That’s a much better way to build community trust than saying there’s nothing to see, no risks at all.”
In the report, the Commission urges the government to establish an AI Safety Commissioner as an independent statutory body, with a key focus on ensuring existing regulators are able to properly deal with emerging technologies.
“It’s potentially a game-changer. On the whole, AI doesn’t allow us to do wholly new things. Instead it’s allowing us to do the same things we’ve always done but in new ways. All existing regulators need to update their understanding of technologies so they can properly protect people as the companies and agencies they regulate are changing what they do,” Mr Santow said.
“If a bank these days is using algorithms to make decisions about bank loans, then the regulator really needs to understand how to make sure people are treated fairly. Banks have always made loan decisions, but if that’s now done in a new way then the regulator needs to ensure they’re adhering to the law.”
“One of the biggest positive changes would be to have those existing regulators be absolutely at the forefront of making sure protections in human rights and consumer rights that we all rely on are effectively enforced when it comes to the use of new technologies.”
This Commissioner would also provide independent advice to government on the risks surrounding new technologies, such as the use of AI.