

Approaches to regulating the moderation of online content have shifted from ‘hands off’ to much more interventionist, with proliferating efforts over the last few years by governments, regulators and courts across the globe.

In Australia, the government has convened Parliamentary hearings, regulators have updated their strategic priorities and taken enforcement action, and new laws have been proposed or enacted.

The intensity of this regulatory activity is increasing each year, and we anticipate 2022 will be no different, especially given next year’s wide-ranging Parliamentary inquiry into the impacts of online harms on Australians and the introduction of a new law to unmask anonymous online trolls making harmful defamatory comments.

Though there has been an effort to harmonise some of the existing legislation in this space, particularly through the Online Safety Act, the legal and regulatory framework remains fragmented. This, along with the potentially conflicting values driving policymaking (such as safety, speech and privacy), makes it challenging for companies to adopt content moderation practices and procedures that will withstand government, regulatory and public scrutiny.

To prepare for such scrutiny, key questions companies should be asking themselves about content moderation are:

  1. What content moderation issues could arise for us? For example, illegal content, defamatory content, age-inappropriate content, misleading content or personal conduct that is harmful, such as cyberbullying.
  2. Do we comply with the highest regulatory standard that applies to us globally? Companies should identify commonalities in international regulatory approaches to content moderation issues and comply with the highest regulatory standard to ensure operational efficiency.
  3. Is there a team responsible for user safety? The team would be responsible for developing, updating and enforcing community guidelines, responding to complaints and building relationships with regulators.
  4. Are we communicating our community guidelines or acceptable use policy clearly with the public and our users? This will help everyone understand what behaviours are expected of them and how seriously your business takes user safety.
  5. Are our systems set up to receive, record and appropriately respond to complaints? As well as ensuring your safety team can rapidly respond to complaints, this data could allow for trends analysis to improve your approach and performance.
  6. Are we monitoring and enforcing compliance with our community guidelines or acceptable use policy? There is growing regulatory scrutiny of whether, and to what extent, companies enforce their content moderation policies.  
  7. Are we thinking about the intersection of content moderation and other legal obligations, such as copyright? There will be both challenges (friction with privacy) and opportunities (integrating content moderation with monitoring and enforcing copyright).

The fragmented landscape

Misinformation and disinformation

As set out in the Australian Code of Practice on Disinformation and Misinformation, this type of content includes verifiably false, misleading or deceptive content that is propagated on digital platforms and is reasonably likely to cause harm to democratic political processes and public goods, such as public health.

Online harms

Online harms are principally regulated by the Online Safety Act 2021, which comes into effect on 23 January 2022. Under the Act, harmful online content includes cyberbullying and abuse material, non-consensual intimate images, restricted material and material depicting abhorrent violent conduct.

Misleading online advertising

This type of content includes representations made online, such as through search or display advertising, that mislead or deceive, or are likely to mislead or deceive (intentionally or not), reasonable members of a certain class of the public.

Misinformation and disinformation

When it comes to regulating misinformation and disinformation, the Australian government has been relatively hands off in its approach. In part, this may be because of the vexed issue of responsibility: does the government step in, define the content considered misinformation and disinformation, prescribe its removal and therefore face inevitable criticisms of censorship as well as legal challenges? Or should it leave moderation to platforms, and therefore leave the regulation of issues of democratic importance, such as freedom of speech, to private companies with broad reach?

To date, the government’s approach has been to call on digital technology companies to self-regulate under the industry code, the Australian Code of Practice on Disinformation and Misinformation. The Code takes a harms-based, flexible and proportionate approach to content moderation. It focuses upon ensuring signatories are transparent in how they achieve the Code’s core objective of safeguarding Australian users against harms caused by misinformation and disinformation.

In doing so, the Code supports the range of actions signatories take to address these harms, including:

  • promoting high-quality and authoritative content;
  • partnering with independent trusted third parties to fact check content or provide additional context;
  • reducing the spread or visibility of content;
  • incentivising users to create and share high-quality content; and
  • providing users with tools that give them more control over the content they see.

The Code, and the government’s approach, have not been without criticism, including from members of the government itself, who have questioned whether the Code goes far enough. Senior members, such as the Minister for Communications, have asserted that the government may regulate directly if it considers the Code to be ineffective, potentially following the European Union, which has moved from a voluntary to a more mandatory co-regulatory model for its Code of Practice on Disinformation.

Online harms

In contrast to its approach to misinformation and disinformation, the Australian government has taken the legislative pathway for other harmful online conduct and content, such as cyberbullying and abuse material, the sharing of non-consensual intimate images, refused and restricted classified materials and materials depicting abhorrent violent conduct.

This year, it passed the Online Safety Act, which updates Australia’s online safety framework by amending, or repealing and replacing, previous laws, such as the Enhancing Online Safety Act. The Act empowers the eSafety Commissioner to take a range of actions to address online harms, and to do so against a range of internet-related companies, including social media platforms, messaging companies, internet service providers and providers of app stores, web browsers and web hosting services.

Among other provisions, the Act establishes a takedown regime, requiring companies to remove content that has been the subject of a user complaint. If they do not comply within 48 hours of receiving the complaint, the Commissioner can issue a notice requiring its removal within 24 hours.

Whilst social media platforms may be accustomed to such notices, other companies, such as hosting companies or app store providers, may not be. Furthermore, even social media platforms may be less familiar with other powers given to the Commissioner, including strengthened information-gathering and investigatory powers.

The government is also consulting on whether the Act should establish a more proactive requirement for service providers to take reasonable steps to ensure safe use and to minimise unlawful or harmful content or conduct. Some of these steps would already be taken by many providers, including having processes to detect, moderate, report and remove content or conduct, expecting employees to promote online safety and assessing the safety risks of products and services from design through to post-deployment. Other steps, however, are more novel and potentially technically difficult, such as detecting harmful content or conduct on encrypted services.

Misleading online advertising

Turning to enforcement activity, both ASIC and the ACCC have focused upon imposing higher standards in online advertising through court action:

  • ASIC established that companies in the Mayfair 101 group misled consumers when they advertised debenture products as having a similar risk profile to bank deposits. Mayfair did so in several ways, including through paid search advertising. Following the Federal Court’s decision, ASIC’s Deputy Chair commented that ASIC “would continue to focus upon potentially false, misleading and deceptive conduct in online advertising, including domain names, meta-title tags and search.”
  • On appeal to the Full Federal Court, the ACCC succeeded in establishing that Employsure used paid search advertising to give the misleading impression that it was a government agency or affiliated with the government. Following the decision, the regulator warned it would continue to take enforcement action against online advertisers that use search engine advertising to mislead consumers.

Both regulators are also dealing with online scam activity. ASIC is dealing with an increase in ‘pump and dump’ campaigns coordinated and promoted on social media. It has expanded its supervision of social media and messaging services, including meeting with moderators of Facebook and Reddit groups to discuss how they monitor and moderate content. It has also tried to disrupt campaigns by entering Telegram chats to warn traders that coordinated pump activity is illegal and that the regulator has access to trader identities.

The ACCC is dealing with an increase in scam online advertisements, such as fake celebrity endorsements of products that feature as online advertisements or promotional stories on social media. Though there is legal precedent providing internet intermediaries like digital platforms with protection from liability for misleading advertisements on their platforms, companies operating in this space should take care not to endorse or adopt misleading representations made by users. This could be achieved by having systems in place for receiving and responding to complaints about misleading content or conduct, as well as having appropriate exclusions in terms of service or related documents about potentially misleading statements made by users or other third parties.

What’s next?

Despite the raft of new legislation, we are unlikely to see a slowdown in efforts to police the internet. In the near term, the Australian government has flagged changes to defamation law, including through a new law unmasking anonymous online trolls, as well as the expansion of the Online Safety Act through the adoption of the Basic Online Safety Expectations. We also expect the line between national security concerns and content moderation practices to become increasingly blurred, particularly regarding encrypted messaging services.

The breadth and significance of this reform agenda means industry must continue to engage with the government and regulators to ensure any proposed reform is proportionate and effective.

Key contacts

Christine Wong
Partner, Sydney

Julian Lincoln
Partner, Head of TMT & Digital Australia, Melbourne

Kwok Tang
Partner, Sydney