Vic child protection agency ordered to block GenAI tools


Victoria’s privacy watchdog has ordered the state’s child protection agency to block access to generative AI tools on its network after a caseworker entered sensitive personal information into ChatGPT to draft a report for the courts.

An investigation found the Department of Families, Fairness and Housing (DFFH) breached the state’s Information Privacy Principles when the popular OpenAI tool was used in a case involving a young child at risk of harm.

The caseworker, who is no longer employed by the agency, used ChatGPT to describe the risks to a child if they continued living at home with their parents, who had been charged with sexual offences.

It is one of 100 cases uncovered during the investigation in which ChatGPT may have been used to draft child protection-related documents in a single year, underscoring the urgency of mandatory guardrails for AI in high-risk settings.

In its report, the Office of the Victorian Information Commissioner (OVIC) said a “significant amount of personal and delicate information” was entered into ChatGPT to produce the report submitted to the Children’s Court.

Indicators included language that was “not commensurate with employee training and Child Protection guidelines”, as well as incorrect personal information, which OVIC said was “of particular concern”, according to the report released on Tuesday.

“… The report described a child’s doll – which was reported to Child Protection as having been used by the child’s father for sexual purposes – as a notable strength of the parents’ efforts to support the child’s development needs with ‘age-appropriate toys’.”

“The use of ChatGPT therefore had the effect of downplaying the severity of the actual or potential harm to the child, with the potential to impact decisions about the child’s care.”

While the “deficiencies” of the GenAI-assisted report did not change the decision-making of either Child Protection or the court, OVIC said the “unauthorised disclosure” alone was “serious”. It said OpenAI was now able to “determine any further uses or disclosures” of the information.

At the time of the incident, the DFFH relied on general policies, procedures and training materials covering topics like privacy, security and human rights, and did not use technical controls to restrict access to GenAI tools.

OVIC found that these policies were “far from sufficient to mitigate the privacy risks associated with the use of ChatGPT in child protection matters”, and that the department had made no attempt to educate staff about GenAI tools.

“It could not be expected that staff would gain an understanding of how to appropriately use novel GenAI tools like ChatGPT from these general guidance materials,” the report said.

OVIC has issued DFFH with a compliance notice requiring it to ban and block GenAI tools like ChatGPT, Google Gemini and Microsoft Copilot for child protection workers from September 24, describing the privacy risks as “simply too great” to ignore.

DFFH accepted that, by using ChatGPT to develop a Protection Application Report for court, the child protection worker had breached the Information Privacy Principles relating to data quality and data security.

But the department described the incident as “isolated” and inexplicably contended that the report “did not find that any staff had used GenAI to generate content for sensitive work matters”.

The use of ChatGPT to draft reports that inform protection arrangements, such as whether a child is placed in out-of-home care, highlights the risks posed by GenAI tools in high-risk settings.

The federal government is currently consulting on ten mandatory guardrails for high-risk AI uses, with a European Union-style standalone AI Act among the approaches being considered.

Any regulation arising from the consultation – and the more than 12 months of work before it by the federal Industry department – is not expected to arrive before the end of the year.

Do you know more? Contact James Riley via email.
