Businesses and governments need to urgently address bias in their artificial intelligence systems and put in place mitigation strategies or stop using the technology entirely, according to Australian Human Rights Commissioner Edward Santow.
The Australian Human Rights Commission, in partnership with the Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61, has released a new technical paper on algorithmic bias, and what can be done to address the risks it poses.
The study kicked off nearly two years ago, when the Commission became concerned about the prevalence of artificial intelligence tools deployed with little consideration of their potential to produce unfair outcomes for individuals.
“There is limitless enthusiasm to use AI in decision-making, but there is an ever-growing number of scandals where that has gone very badly,” Mr Santow told InnovationAus.
“There’s some good work that’s been done around the diagnosis part of the problem, but there’s less work that’s been done on the more rigorous approaches to addressing the problem.”
“Some companies feel like there’s a binary choice of either using AI and running the gauntlet of being unfair to your customers or going back to a much more conventional decision-making approach,” he said.
“What we’re saying is there are legitimate ways of using data-driven decision-making, but you need to understand the risks and need to be effective at addressing those risks.”
If these risks, primarily algorithmic bias, can’t be addressed, then this technology should not be used at all, Mr Santow said.
“AI isn’t magic, you need to understand what it’s good at and what its limitations are. Not every company would launch a satellite – lots of companies might want to, but they know there’s some real complexity involved in undertaking a task like that.
“We would not suggest that this is necessarily as complicated as that but we would say you can’t just go in there as an enthusiastic amateur because you run the risk of real harm,” he said.
“They need to go in there in a rigorous way. We’re fortunate that the vast majority of companies want to do the right thing, and that includes when it comes to more innovative ways of using technology. They need to be able to say that sometimes they can’t address the problem, so it’s not yet safe to use the AI decision-making system.”
As part of the study, a simulation was conducted to demonstrate how algorithmic bias can creep into AI systems. The simulation was based around a hypothetical electricity retailer, using an AI tool to determine what contracts to offer to customers based on their predicted profitability.
The study identified five scenarios in which algorithmic bias arose during the simulation, stemming from the data fed to the system, the way the technology was used, or existing societal inequality.
These included the AI system reflecting the societal inequality between Indigenous and non-Indigenous Australians.
“If a protected group is predicted to be less profitable, due to lower incomes and endemic financial stress, then even if we simulate or collect accurate data and train an accurate model in a way that does not introduce additional bias, the AI system will still perpetuate existing disadvantage by making decisions that mimic an imbalance in society,” the report said.
To mitigate this bias, interventions can be made at the system-design level, such as applying a lower acceptance threshold for Indigenous Australians to counterbalance the bias, the report found.
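The report does not publish code for this intervention, but the idea can be sketched in a few lines. The snippet below is a hypothetical illustration only – the function name, group labels and thresholds are invented, not drawn from the report – showing how a per-group acceptance threshold might be applied to a model’s predicted profitability scores.

```python
import numpy as np

def offer_decisions(profit_scores, groups, default_threshold=0.5,
                    group_thresholds=None):
    """Decide which customers receive the standard contract offer.

    profit_scores     : model-predicted profitability per customer (0-1)
    groups            : group label per customer
    default_threshold : score needed to receive the offer
    group_thresholds  : optional per-group overrides, e.g. a lower threshold
                        for a disadvantaged group to counterbalance bias
    """
    group_thresholds = group_thresholds or {}
    thresholds = np.array(
        [group_thresholds.get(g, default_threshold) for g in groups])
    return profit_scores >= thresholds

# Hypothetical usage: a lower threshold for the disadvantaged group.
scores = np.array([0.42, 0.55, 0.38, 0.61])
groups = ["indigenous", "other", "indigenous", "other"]
print(offer_decisions(scores, groups,
                      group_thresholds={"indigenous": 0.40}))
# [ True  True False  True]
```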
The system was also found to produce unfair outcomes when it was trained on outdated historical data that led it to treat women as less likely than men to be profitable customers.
The simulation demonstrated that unfair and potentially unlawful outcomes can flow from the use of AI when it is not implemented carefully, and showed an “urgent” need for businesses to proactively address these risks, Mr Santow said.
The study includes five mitigation strategies when an AI system is being implemented.
These include acquiring more appropriate data, which requires a thorough understanding of the current data and its limitations. Businesses or governments should also pre-process data to mask or remove certain attributes before they are fed to the algorithm. For example, an individual’s gender can be hidden so the system cannot discriminate on that basis, although this could also reduce the accuracy of the system, the report found.
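As a rough illustration of that pre-processing step – not the report’s own implementation, and with invented data and column names – a protected attribute such as gender can simply be dropped before the model is trained. Note that this does not remove proxy features correlated with the attribute and, as the report notes, may cost some accuracy.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented customer records for illustration only.
customers = pd.DataFrame({
    "income":     [38_000, 72_000, 55_000, 61_000, 45_000, 80_000],
    "arrears":    [1, 0, 0, 1, 1, 0],
    "gender":     ["F", "M", "F", "M", "F", "M"],
    "profitable": [0, 1, 1, 1, 0, 1],
})

# Pre-processing: remove the protected attribute so the model cannot
# use gender directly when predicting profitability.
features = customers.drop(columns=["gender", "profitable"])
labels = customers["profitable"]

model = LogisticRegression().fit(features, labels)
print(model.predict(features))
```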
The report also listed increasing the complexity of the system, modifying the model directly to account for inequality and inaccuracies, and changing the target variable itself as ways of reducing algorithmic bias.
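One way to read “modifying the model directly to account for inequality” – and it is only one possible reading, not the report’s stated method – is to re-weight the training data so an under-represented group is not drowned out during fitting. The sketch below, again with invented data, shows that idea using per-group sample weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: scaled income and an arrears flag.
X = np.array([[0.3, 1], [0.9, 0], [0.5, 0], [0.7, 1], [0.4, 1], [0.8, 0]])
y = np.array([0, 1, 1, 1, 0, 1])
group = np.array(["a", "b", "b", "b", "a", "b"])  # group "a" is under-represented

# Give each group equal total weight so the minority group's examples
# carry the same overall influence on the fitted model.
counts = {g: np.sum(group == g) for g in np.unique(group)}
weights = np.array([1.0 / counts[g] for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
print(model.predict(X))
```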
If these strategies aren’t adopted, businesses using AI run the risk of breaching existing Australian anti-discrimination laws, Mr Santow said.
“There is already legislation that makes it unlawful to discriminate against anyone based on age, race, sex or disability. Those laws apply whether you’re making a decision using an abacus, a conventional system or AI,” he said.
“The legal risk of getting this wrong is real. The risk is not that a new law will be passed, the risk is that a company will be already breaching the law.”
The Human Rights Commission is currently putting the final touches on its report from its long-running inquiry into the intersection of human rights and technology and will soon hand this to the federal government.
Mr Santow said the report would provide a more comprehensive look at human rights issues stemming from the use of technology, along with a number of policy reform recommendations.
The report is expected to be released publicly early next year.