NSW Police have not ruled out using facial recognition technology to identify thousands of protestors from a Sydney anti-lockdown rally on Saturday, despite calls from experts to pause its use. But the force won’t confirm the use of the technology either.
Its potential use at a public protest has raised alarm among experts, who say the police need to be more transparent in how they deploy the controversial technology, which remains largely unregulated in Australia.
Following the protests in Sydney’s CBD on Saturday, police established a taskforce to identify individuals at the rally. At least 22 detectives have been tasked with identifying the crowd of thousands who participated in the event in defiance of public health orders.
Police Minister David Elliott said the taskforce would “forensically investigate all CCTV and social media footage collected over the course of the afternoon’s protest”.
NSW Police declined to answer questions about what technology they were using to identify protestors and which databases the images would be matched against, citing “operational reasons”.
However, The Daily Telegraph has reported police would use facial recognition technology, Uber records and Opal ticket data to track down anyone they believe attended the protest.
A NSW Police spokesperson queried that report, but told InnovationAus police would use “all resources available to them” to identify protestors.
“I do entirely expect that [NSW Police] will be making use of their facial recognition capabilities in order to identify protestors,” said Deakin University senior lecturer Dr Monique Mann.
There is nothing legally stopping the police from deploying the technology, Dr Mann said, but doing so raises serious questions because in Australia facial recognition technologies are being operated “outside an appropriate legal framework”.
Australian Human Rights Commissioner Ed Santow said law enforcement should not be using facial recognition because effective legal protections are not yet in place to prevent its misuse or overuse.
“‘One-to-many’ facial recognition, which is used to identify people in a crowd, is prone to high rates of error,” Mr Santow told InnovationAus.
“Those errors are more likely to affect people of colour, women and people with a physical disability. When police make an error using facial recognition, this can result in significant human rights violations, including unlawful arrest and detention.”
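Mr Santow’s point about error rates is partly a matter of arithmetic: in a one-to-many search, even a small per-comparison false match rate is multiplied across every face in the gallery, so each probe image can surface many wrong candidates. A minimal sketch of that calculation, using hypothetical numbers (the function, gallery size and error rate below are illustrative assumptions, not figures from any real NSW Police system):

```python
# Illustrative base-rate arithmetic for one-to-many facial recognition.
# All numbers are hypothetical, chosen only to show how errors scale.

def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of wrong gallery entries flagged per probe image,
    assuming independent comparisons against each enrolled face."""
    return gallery_size * false_match_rate

gallery = 100_000   # hypothetical database of previously arrested people
fmr = 0.001         # hypothetical 0.1% false match rate per comparison

# A single crowd photo searched against this gallery would be expected
# to return roughly 100 incorrect candidate matches.
print(expected_false_matches(gallery, fmr))  # → 100.0
```

The same arithmetic explains why errors concentrate on over-represented or poorly modelled groups: if the per-comparison error rate is higher for some demographics, the expected number of false candidates for those groups grows in direct proportion.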
The NSW Police set up a covert Facial Recognition Unit three years ago in anticipation of a national biometrics database which would have given them access to passport and licence photos from around the country.
A joint security committee rejected the federal government’s underlying legislation for the scheme in 2019 because it lacked adequate privacy and security safeguards.
But the NSW Police unit has continued operating, including trialling the national database without legislation, and has morphed into an intelligence tool for police.
In a rare media interview this month, police from the unit said they now mainly match images of suspects against a database of people who have already been arrested in the state. Police flagged plans to expand the unit and promised transparency to get the public on board.
But this week NSW Police declined to confirm or deny if the technology will be used to identify anti-lockdown protestors, who breached public health orders by gathering in the Sydney CBD on Saturday.
“The police have an important role in keeping the community safe, especially during the current pandemic,” Mr Santow said.
“It’s vital, therefore, that the technology used in law enforcement is safe and reliable, with effective legal protections.”
Experts this month backed the call by Australia’s Human Rights Commission for a moratorium on the use of facial recognition and algorithms in important decision-making until adequate protections are in place.
Dr Mann said there are some limited scenarios where the use of facial recognition is proportionate and warranted. But the potential use by NSW Police during a protest, without an effective legal framework or even transparency about how it is being used, is alarming.
“When you’re just rolling it out across the city — in terms of live identification in public spaces, or even after the fact through identification of people with CCTV camera or images on social media — with no protections, with no oversight, with no transparency, then that is really concerning,” she said.
“We need to really think about the sort of society we want to live in.”
Do you know more? Contact James Riley via Email.
I just want to look at the issue of bias in facial recognition algorithms. Yes, it’s a horrendous problem (see https://lockstep.com.au/man-made-software-in-his-own-image) but is it reason enough to ban law enforcement use of this technology? Eyewitnesses too are biased (in fact that innate bias is probably what leads to badly designed and badly tested algorithms). I am not a lawyer, but it seems to me that the judicial system tries to deal with bias, firstly by having rules of evidence, and then by having courtroom processes that allow evidence to be challenged. I would not ban facial recognition in policing, but I would make sure that the style of legal checks and balances we have for regular identification by people is applied to the machines as well. And I would focus on weak policing, where cops might — unwittingly or not — rely too much on face recognition, which I know (and have argued for years) is far from perfect.
I think there’s a profound policy issue here, which is not easily fixed by a ban, which is the exaggerated faith so many people have in biometric technology. We need to reset expectations of how well it works, and use it (and criticise it) carefully.