A Bretton Woods for content takedowns? eSafety thinks so


Brandon How

A Bretton Woods-like system for overseeing the takedown of harmful online content could fill the gap if the United States fails to crack down on the social media giants.

eSafety Commissioner Julie Inman Grant, who has struggled to get Elon Musk’s X to pay fines after it closed its Australian office, raised the model at King & Wood Mallesons’ Digital Summit 2024 this week.

Ms Inman Grant said, “perhaps we actually need something akin to Bretton Woods for digital takedown, because it’s unlikely that the US is going to legislate to really hold these American-based companies to account”.

The Bretton Woods system was a set of global monetary rules agreed by 44 countries in 1944 that led to the creation of the International Monetary Fund.

eSafety Commissioner Julie Inman Grant at the World Economic Forum.

She argued that “vexatious litigation is also being used” to inhibit efforts to promote online safety reform, citing X/Twitter’s case against the Center for Countering Digital Hate.

“If you criticise a particular company, then they will try and silence you through litigation, and some of the… laws in California have been used to kind of undermine that.”

Ms Inman Grant said she has “had lots of discussions with the White House, lots of discussions with members of Congress”, and found the US debate around online safety to be “much more political, much more polarised”.

While the US has focused on concerns about “the censorship of conservative versus progressive voices”, in Australia it has “been a very bipartisan issue of saying, of course, we want to promote freedom of expression, but we want to draw a line”.

“And when online discourse goes into the realm of suppressing other voices through targeted online harassment, we do want an independent regulator to draw the line,” she said.

Meta president for global affairs Nick Clegg has previously called for “a Bretton Woods for the digital age” that would “underpin the principles of openness, transparency, accountability, and privacy”.

In terms of local enforcement powers, Ms Inman Grant said the ongoing statutory review of Australia’s Online Safety Act should address the low value of the fines available to the eSafety Commissioner compared with her counterparts in the United Kingdom and the European Union.

She also argued that requiring social media companies to be more transparent on actions taken against online harms would spur safety improvements.

“We’ve also found over time that it isn’t just regulation that motivates a company to do better and do more, it’s really the reputation and the potential hit to revenue that will make them make meaningful changes,” Ms Inman Grant said.

On July 22, eSafety issued the first of its periodic transparency notices requiring Apple, Discord, Google, Meta, Microsoft, Snap, Skype and WhatsApp to report on compliance with the reformed Basic Online Safety Expectations.

The first notices focus on child sexual exploitation and abuse material and activity, as well as sexual extortion. Transparency notices will be issued every six months for the next two years.

New offences preventing the non-consensual sharing of sexual material, including material created using deepfakes or other AI-powered tools, were added to the Criminal Code Act with the passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill on Wednesday.

The bill was passed unamended, with the Greens joining Labor to vote down amendments that would have clarified the definition of “material created or altered using digital technology” and required a statutory review of the changes after two years.

Do you know more? Contact James Riley via Email.
