Tech giants might get a friendlier defamation regime in Australia, with the options canvassed by a significant review including a safe harbour scheme and blanket immunity similar to that offered in the US.
The NSW Attorney-General has been leading the Defamation Working Party, which is working to modernise Australia’s defamation laws, particularly around the internet and the liability of big tech companies such as Facebook and Google.
Last year the Council of Attorneys-General agreed to a first stage of defamation reforms and that the second stage would focus on online defamation law.
“When the uniform defamation laws were drafted more than 15 years ago, social media was in its infancy and trolls were confined to children’s books. This review acknowledges times have changed and asks whether internet giants like Google, Facebook and Twitter should be responsible for content posted by platform users,” NSW Attorney-General Mark Speakman said.
“Australia needs contemporary laws that protect reputations in an era when anyone can publish almost anything to the world at large with just the click of a button. However, getting the balance right is crucial to avoid online commentary being blocked unnecessarily.”
The group released its second discussion paper this week, which proposes to separate the “internet intermediaries” into three categories with varying responsibility for defamatory content posted. The paper outlines four options for reform of the law.
These options include doing nothing, introducing a safe harbour regime where tech companies would be free from legal ramifications if they comply with a complaints process, or implementing complete immunity akin to the scheme in the US, which former US President Donald Trump last year tried to have repealed.
The discussion paper will likely make for good reading for big tech companies, with most of the options on the table painting a better picture for them than the current situation. It could also offer something of a respite for the likes of Facebook and Google following sustained reforms from the federal government targeted at curbing their power.
In contrast to many other jurisdictions, Australian courts have previously found Google liable as a publisher of reviews and snippets of information on its search engine.
The discussion paper focuses on internet intermediaries and how liable they should be for defamatory content posted on their platforms and websites. The group has proposed to separate these intermediaries into three categories: basic internet services, digital platforms and forum hosts.
Basic internet services such as ISPs are "mere conduits", in line with telephone or postal services; they are passive and content neutral, and would not be liable at all under the reforms.
Digital platforms would include social media firms and news aggregators, while forum hosts would include a small community group hosting a Facebook page or the administrator of an instant messaging thread. The group has proposed that "any of these forum administrators is potentially liable for defamatory comments made by third parties".
The first option the working group is seeking feedback on is to simply leave things as is and not make any major changes to Australian defamation law. But this status quo approach means that the laws will remain “unclear and inconsistent”, the report said.
The next major reform avenue would be the introduction of a safe harbour scheme, where internet intermediaries would have a legal defence if they comply with a complaint about defamatory material on their platform.
"This defence has the potential to provide a fast and simple path for complainants to achieve a solution when their reputation has been harmed online – particularly where their primary goal is to have the content modified or removed," the report said.
This scheme, which is in place in the UK, would see tech companies given protection until they are put on notice by a complaint about offending content, with this protection then removed unless they follow the prescribed complaints process.
The final option is to introduce immunity for tech companies for user-generated content, similar to the laws in place in the United States. This would apply even after a company is notified of potentially defamatory content.
This would recognise that these companies are not the creators of content, and “may also be seen as removing a barrier to innovation online”, the defamation working group said.
But the discussion paper lists a number of downsides of this approach.
“This wide immunity would be at odds with the approach to traditional secondary publishers such as booksellers, newsagents and librarians,” it said.
“Granting a broad immunity also fails to recognise that many internet intermediaries have the ability to encourage, but also mitigate, the risk of harm to reputation online.
“Often their business models, which leverage the network effect to attract users to their platforms for longer periods of time, can lead to heightened risk of harm to reputations, while generating profits for these platforms in doing so. Arguably, this should attract a level of responsibility which this option would fail to deliver.”
Submissions can be made on the discussion paper until 19 May, with plans to commence the defamation reforms by July this year.
Do you know more? Contact James Riley via Email.