
We assess the patchwork of regulation targeting harmful content as regulators play whac-a-mole with the tech industry

The decentralised, global internet has democratised many facets of everyday life, allowing everyday citizens, extremists and everyone in between to share their views. The resulting proliferation of harmful online content meant some form of regulation was inevitable. However, this issue does not appear capable of being countered through a single tool or approach. Instead, a holistic, cross-jurisdictional approach is required to regulate the internet effectively while maintaining its benefit for society.

In the meantime, a patchwork of regulations has either been proposed to apply to, or imposed on, digital platform companies to filter the content they distribute. This reactive approach will leave regulators playing whac-a-mole as they enforce new, and differing, standards globally, with minimal consideration for their impacts.



This article was authored by Tony Cooke and Anna Jaffe.

Harms caused by regulating online content

In the short history of the internet, digital platforms have come to be viewed as digital ‘town squares’: forums for democratic, open dialogue operating within the rules set by user terms. However, this view is increasingly challenged, both by the recognition that user terms impose their own form of content control and by a corresponding global trend of governments regulating the content available to their citizens. These new laws regulating online content are often debated in the immediate aftermath of terrible events. Even so, careful attention should be paid to their capacity to restrict civil liberties, alter the nature of the internet and disrupt businesses that rely upon the free flow of information.

Lack of centralised control

The internet was born in an era of ‘permissionless innovation’, where the idea of giving a single body centralised control over a global, nascent technology was viewed as inherently problematic. Accordingly, regulation developed in a fragmented manner (both by jurisdiction and subject matter) and no single body has been able to effectively control content posted outside of their jurisdiction.

This meant that digital platform companies set their own standards for content, in a manner often plagued by technical and philosophical challenges. In response to the online harms issue, the solution in many jurisdictions is to formalise these standards into laws and to hold digital platform companies accountable for enforcing them. This is evident in new laws targeting illegal, harmful or otherwise disreputable content, such as bullying and harassment, hate speech, fake news and terrorism.

Types of online harms

In some respects, the proposed frameworks merely formalise the role digital platform providers had already assumed in the absence of regulation. However, formally allocating the regulatory burden to the private sector does not come without societal costs. Many citizens and businesses today have unparalleled access to information and markets through the internet, and the resulting awareness, access and increased competition has benefited society in a number of ways. Any regulatory approach that could stifle the free flow of information therefore needs to be considered carefully, to avoid the internet becoming a vehicle for particular political ideologies or entrenching those with existing market dominance.

Balancing regulation and freedom of expression

Balancing freedom of expression against the regulation of potentially harmful content is fraught. Concepts of ‘unacceptable’ or ‘harmful’ content are not static and, unlike ‘illegal’ content, have no fixed legal meaning, making prohibitions against them difficult to understand and apply. For example, the UK Online Harms White Paper does not settle on a single definition of the content it intends to curtail. This increases the risk of lawful content being removed, without clear reason, in the name of the public good. Ultimately, this will reduce the internet’s effectiveness as a platform for communication and as an instrument for change.

There is also a risk of a ‘race to the bottom’ in content regulation. Regulating individual digital platforms or jurisdictions inconsistently will only drive those wishing to post the content to other platforms or jurisdictions with lower standards. Concentrating users in these alternative platforms could reinforce 'echo chambers' of radicalism, whereby one’s pre-existing beliefs are continuously confirmed by the consumption of content from, and interaction with, people sharing those beliefs.

Finding a way forward

Relying on those with existing influence to ‘know it when they see it’ fails to adequately address the online harms problem. Freedom of speech and the right to be protected from harmful content are inherently conflicting principles. Because neither right is absolute, their application requires a balancing exercise by those impartial to the outcome. Although similar arguments can be levelled at individual governments, digital platform companies cannot be considered impartial alternative regulators. Many of these companies will be anxious to reduce their compliance costs and risk exposure, and will accordingly take an overly cautious approach to interpreting online harms and removing content.

Instead, we see a more collaborative framework as essential to preventing extremist or illegal content from spreading online. As the internet operates globally but governments rule locally, an approach similar to the global human rights frameworks is required: an agreed set of rules shared between nations, clarifying what is abhorrent without a particular set of customs, culture or traditions dictating what is harmful and what is not.


Key takeaways

  • It is crucial for digital platforms and other companies operating on the internet to be aware of regulations on the horizon that may apply to content on their platforms. Where changes are likely to impact them, monitoring developing regulations will also enable such companies to share their views during the consultation process.
  • It is equally important to take a snapshot of which regulations apply now, so that companies can ensure the compliance measures they implement are appropriate. Over-compliance may be detrimental to end users and reduce the value of the online platform as a whole.
  • To future-proof, we recommend paying attention to user terms and to a company’s ability to adapt quickly to new regulatory frameworks. Hastily building an all-purpose response system after each change in regulation is more likely to result in overly restrictive take-down practices and a decline in users.


Key contacts

Julian Lincoln
Partner, Head of TMT & Digital Australia, Melbourne

Hayley Brady
Partner, Head of Media and Digital, UK, London

Alexandra Neri
Partner, Paris