
Online Harms: What is the global state of play?

09 December 2021 | Insight
– By Christine Wong and Tomas Kemmery

2021 saw governments again rushing to tighten oversight of the internet. We assess the key trends and developments worldwide.

This year brought another round of intense government, regulatory and public scrutiny of digital technology companies, particularly around online harms and internet safety.

Since around 2017, digital platforms have faced an increasing number of laws aiming to regulate online content. This year continued that trend. Over the course of the year, we have seen landmark legislation further detailed and new legislation proposed or enacted.

The fast-moving legal landscape raises operational and compliance challenges for tech companies that operate globally. To help companies navigate these challenges, our interactive map below has been updated to highlight the latest legal developments across the globe.

Though jurisdictions differ significantly in how they regulate online content, some consistent themes emerge in their regulatory approaches:

AN EMPHASIS UPON GLOBAL ALIGNMENT

Governments and regulators have emphasised the importance of having globally aligned regulatory frameworks for online content. In 2021, we saw this in particular at the G7 Summit, which resulted in a Joint Ministerial Declaration by the UK, Canadian, French, German, Italian, Japanese, American and EU governments on internet safety principles to guide G7 approaches to improving online safety. At a later event for the G7’s Safety Tech Summit, Australia’s eSafety Commissioner presented on how to improve the overall digital ecosystem through global systemic changes.

A BROAD RANGE OF TECH COMPANIES CAPTURED

Unlike proposed competition law reform, which has focused on the largest tech companies, regulatory efforts in this space seek to capture a broad range of tech companies. Though there are differences between jurisdictions, laws from Australia, the UK and the EU, among others, are intended to apply to internet service providers (such as those that enable user-generated content), other online platforms (such as online marketplaces and app stores) and hosting service providers. Obligations differ depending upon the type of company, but nonetheless represent an intent to regulate all parts of the internet tech stack.

THE NEED FOR PROACTIVE COMPLIANCE BY TECH COMPANIES

A significant change flowing from legislation is the need for tech companies to be proactive and constantly assess their compliance. In other words, the obligations are not box-ticking exercises. Under consideration in Australia, for example, is a requirement that tech companies take proactive steps to minimise harm. The UK’s Online Safety Bill imposes a similar requirement by introducing a legal ‘duty of care’ for companies to take more responsibility for the safety of their users (with significant penalties for non-compliance of up to £18 million or 10% of annual worldwide revenue, whichever is higher). One-hour limits for the removal of certain content, such as under EU legislation, will also likely require significantly greater monitoring and proactive measures by companies.

STRICTER OBLIGATIONS AND STRONGER PENALTIES

In 2019, Australia introduced criminal offences for tech companies and executives who fail to ‘expeditiously’ remove ‘abhorrent violent material’ from their services or fail to refer such material to the Australian Federal Police ‘within a reasonable time’. Criticisms of being heavy-handed have not stopped other countries from considering similar measures, particularly the UK, which is contemplating a criminal sanctions regime for senior managers who fail to respond ‘fully, accurately and in a timely manner’ to information requests from Ofcom. Employees of tech companies in Hong Kong may also face criminal sanctions for failing to assist with investigations by the Privacy Commissioner into doxxing.

A number of legislative frameworks also impose stricter obligations to protect vulnerable groups, especially children. Two of this year’s biggest tech controversies involved children (Apple’s rollout of CSAM detection and Facebook whistleblower Frances Haugen’s warnings over the company’s impact on teenage girls), and governments and regulators in Australia, China, the UK and the EU, as well as some senators in the US, have justified increased regulation in this space partly by the need to protect children when they go online.

ILLEGAL CONTENT VERSUS HARMFUL BUT LAWFUL CONTENT

A continued point of contention between industry and government in a number of countries is the differentiation between illegal content and harmful but lawful content. Countries like Australia and the UK seek to address both types of content. Others only impose measures to remove, or encourage the removal of, illegal content, such as the EU’s Digital Services Act or Germany’s Network Enforcement Act (which differentiates between ‘manifestly unlawful’ content and other unlawful content).

CONFLICT AND CONVERGENCE WITH OTHER LAWS AND POLICY OBJECTIVES

It is well known that regulating online content can come at the cost of freedom of speech on the internet. Recently, however, friction with other laws and policy objectives has also grown. Concerns have been raised, for example, that online safety legislation in Australia and the UK will undermine privacy and encryption. The UK bill may also create friction with EU legislation, such as the GDPR and Article 15 of the E-Commerce Directive.

For other laws and policy objectives, regulating online content is seen as a complement and an additional tool. In particular, a number of jurisdictions have framed strengthening national security or political stability as part of improving internet safety. Australia, the EU and the US are all considering regulating political advertising, such as by increasing disclosure of who paid for an advertisement. Australia held a parliamentary inquiry into foreign interference through social media, whilst Singapore went one step further and introduced legislation to counter foreign interference, including through social media regulation.

Race to regulate: online harms jurisdictions map
