Australians need privacy and procedural fairness rights enshrined in federal law to protect against authorities’ increasing use of technology that invades privacy and automates decision making, the Australian Human Rights Commission is warning.
In a new report setting out a model for a federal Human Rights Act, the Australian Human Rights Commission (AHRC) warned that authorities are increasingly using higher-risk tools like artificial intelligence in decision making that impacts people’s rights.
“It is important that the same procedural fairness principles and rights consideration apply to all decisions made by public authorities, regardless of how the decision is made. This should be explicitly clarified in the Human Rights Act,” the report said.
It comes as governments continue digital identity plans underpinned by facial recognition, and with only limited guidelines on the use of technologies like artificial intelligence by industry and government agencies.
The report, launched Tuesday night, said the increasing use of digital and data tools in public administration, law enforcement and by national security agencies is occurring in Australia without a national Human Rights Act.
Australia is the only liberal democracy that does not have an act or charter of rights at the national level, although some laws exist at the state and territory level.
The use of AI tools in particular creates several serious risks because these systems lack transparency and ‘explainability’, and introduce automated decision making with inherent bias.
“Machines are made by people, and can exhibit bias inherent in their programming and in the data being processed,” the report said.
“In particular, AI decision making relies on input data, which, if flawed or unrepresentative, may affect algorithmic ‘learning’ processes. AI learning relies heavily on correlation and can manifest discriminatory outcomes that are not necessarily obvious to human users.”
One example is the rise of policing based on algorithms that have been found to associate risk of crime with postcode areas – often areas that have high minority populations, the report said.
“As a result, the AI then ‘learns’ to associate certain racial characteristics with risk of crime. When AI systems are used and relied upon across particular government departments or agencies, this kind of bias in decision making may occur on a systemic level.
“When AI technology is used to make public decisions, a range of concerns arise that are associated with the provision of procedural fairness and key rights.”
The AHRC recommended a new Australian Human Rights Act that would explicitly require fairness principles and rights considerations to be applied to all decisions made by public authorities, regardless of how the decision is made.
The new rights protections, which would also require earlier consideration of human rights in public administration, could have helped stop the illegal robodebt debt-raising scheme, the report suggested.
The scheme led to a $1.8 billion settlement, demonstrating that “dealing with human rights issues early has obvious economic benefits”.
A landmark Human Rights and Technology report made similar recommendations in 2021, but never received a government response.
The new report said an Australian Human Rights Act would enshrine several rights in law, including an individual’s right to privacy. This right to privacy should apply to the collection, processing or retention of personal data through all forms of technology and include state surveillance measures, the AHRC recommended.
The report also calls for the Commission to gain new powers to conciliate human rights complaints, as was envisioned when it was established in 1986. Courts would also be required to interpret legislation in line with the Human Rights Act where possible, under the proposal.
“The beauty of a Human Rights Act, and other measures that frontload rights-mindedness, is that they are expressed in the positive – and they are embedded in decision making and ahead of any dispute,” AHRC president Emeritus Professor Rosalind Croucher said.
“A Human Rights Act names rights; it provides an obligation to consider them and a process by which to do it – together supporting a cultural shift towards rights-mindedness, becoming part of the national psyche, not just an afterthought.
“The purpose of such an Act is to change the culture of decision making and embed transparent, human rights-based decisions as part of public culture. The outcome needs to be that laws, policies and decisions are made through a human rights lens and it is the upstream aspect that is so crucial to change.”