eSafety ups pressure on tech giants over child abuse content


Apple, Meta and other tech giants have been ordered to report twice a year on the steps they are taking to address child sexual abuse material on their platforms, in an escalation of Australia’s online safety compliance regime.

eSafety Commissioner Julie Inman Grant issued legal notices to eight companies on Wednesday, requiring them to report their efforts to tackle child sexual abuse material in Australia every six months for the next two years.

Apple, Google, Meta (and WhatsApp) and Microsoft (and Skype), as well as the owners of chat platforms Discord and Snapchat, have been targeted by the new reporting regime, partly in response to their answers to a previous round of legal notices.


Ms Inman Grant said the “alarming but not surprising” answers confirmed what the online safety regulator had long suspected: that there were “significant gaps and differences across services’ practices”.

“In our subsequent conversations with these companies, we still haven’t seen meaningful changes or improvements to these identified safety shortcomings,” she said in a statement on Wednesday.

Citing the first Basic Online Safety Expectations (BOSE) transparency report in December 2022, Ms Inman Grant said there had been no attempt by Apple and Microsoft to proactively detect child abuse material in their iCloud and OneDrive platforms.

While eSafety has since introduced mandatory standards, operators of cloud and messaging services will not be required to detect and remove known child abuse material before December.

Ms Inman Grant also harbours concerns that Skype, Microsoft Teams, FaceTime and Discord are still not using technology to detect live-streaming of child sexual abuse in video chats.

A lack of information sharing between Meta’s services is another worry, with offenders banned on one service, such as Instagram, in some cases continuing to perpetrate abuse on the company’s other platforms, WhatsApp and Facebook.

The legal notices ask the companies to explain how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and deepfake child abuse material created using generative artificial intelligence.

On Tuesday, Ms Inman Grant said she was in discussions with Delia Rickard, who is reviewing the Online Safety Act, on the need to fill “gaps” in existing legislation and codes that currently extend only to abhorrent content and pornography.

“There’s a definitive gap there and now is the time to be thinking about the kinds of powers that we might need to make us more effective at a systemic level,” Ms Inman Grant said.

Another concern is the speed with which companies respond to user reports of child sexual exploitation, with Microsoft taking an average of two days to respond in 2022.

Ms Inman Grant said the new expectations would force the companies to “lift their game” and show that they are “making improvements”, with the regulator to report regularly on the findings of the notices.

The mechanism is one of three means available under BOSE to help ‘lift the hood’ on the online safety initiatives being pursued by social media, messaging, and gaming service providers.

“These notices will let us know if these companies have made any improvements in online safety since 2022/3 and make sure these companies remain accountable for harm still being perpetrated against children on their services,” Ms Inman Grant said.

“We know that some of these companies have been making improvements in some areas – this is the opportunity to show us progress across the board.”

All of the companies will need to provide their first round of responses by February 15 or face penalties of up to $782,500 a day.

Do you know more? Contact James Riley via email.

