While regulation of the internet in the UK has (up until recently) been light-touch, the last decade has exposed countless examples of the harmful impact of online content and has highlighted the need for greater regulation in this area.

In a bid to make the UK 'one of the safest places to be on the internet', the UK Government has drawn up a sweeping new piece of legislation, known as the Online Safety Bill, to confront the question of what one can and cannot say online and the role that internet service providers will play as arbiters of that debate.

How far internet service providers can (or even should) be required to go to improve online safety remains one of the greatest challenges and uncertainties arising from this legislation, and it will be one of the key areas of focus as the Bill moves through the legislative process.

The Bill seeks to provide a robust framework for regulating harmful online content (such as hate speech, cyber-bullying, misinformation, targeted advertising and the use of algorithms and automated decision-making) and is expected to set a global benchmark for online regulation. Since its introduction, the Bill has attracted a great deal of publicity and has been scrutinised by a Joint Select Committee, which has taken oral evidence from key industry players such as Facebook, Twitter, Google and TikTok, as well as from the Facebook whistleblower Frances Haugen.

Regulating lawful but harmful content

Among the more controversial aspects of the Bill are the measures targeted at content that is deemed to be harmful but is not actually illegal. The Bill proposes a range of new duties of care for regulated service providers (see the FAQs for more information on who the Bill applies to), including duties for providers of services likely to be accessed by children and so-called "Category 1 services" (a category expected to cover the largest, most popular social media sites):

  • to carry out risk assessments to identify the presence of harmful content on the service;
  • in the case of services likely to be accessed by children, to:
    • take proportionate steps to mitigate and manage the impact of harmful content on children;
    • ensure that the terms of service specify how children will be protected from harmful content which they may encounter;
  • in the case of Category 1 services, to ensure that the terms of service specify how the service will deal with harmful content.

The Bill includes more onerous obligations in relation to higher-risk content which the Secretary of State may in the future designate (following consultation with Ofcom) as "primary priority content" or "priority content" (e.g. a requirement to use proportionate systems and processes to prevent children encountering primary priority content).

Scope of the definition

According to the explanatory notes to the Bill, harmful content could range from online bullying and abuse, to advocacy of self-harm, to spreading disinformation and misinformation. In addition to the designated categories of higher-risk content as mentioned above, the Bill contains a legislative test as part of the definition of harmful content. A regulated service provider would need to comply with the duties set out above in relation to all content in respect of which the provider has reasonable grounds to believe any of the following: 

The Bill further clarifies that in applying the first test in relation to content that can reasonably be assumed to particularly affect people with a certain characteristic (or combination of characteristics), or to particularly affect a certain group of people, the provider must assume that the hypothetical adult or child possesses that characteristic (or combination of characteristics), or is a member of that group (as the case may be).

The biggest criticism of the Bill's definition of harmful content is that the tests set out above are too vague for social media companies to apply in practice. For example, in its evidence before the Joint Committee, Google noted that although YouTube already prohibits hate speech, detecting it at a global scale is challenging because so much depends on the context in which it is spoken. Twitter's Head of Public Policy in the UK, Katy Minshall, also noted to the press that the Bill does not go far enough in defining this category.

In her evidence, the Culture Secretary, Nadine Dorries, welcomed recommendations from the Joint Committee that might help to make the definition more watertight and noted that the Department for Digital, Culture, Media and Sport (DCMS) was considering the Law Commission's recommendations on defining new harm-based communication offences in its Report on Communication Offences. The Law Commission has explained that the proposed recommendations are intended to ensure that speech that is genuinely harmful does not escape criminal sanction merely because it does not fit within one of the proscribed categories under existing communications legislation. At the same time, the recommendations will ensure a context-based assessment of speech, so that communication that is not harmful is not restricted simply because it could be described as falling into one of the existing categories, such as being grossly indecent or offensive.

Stakeholders are understandably concerned about the risks posed to freedom of speech and expression by harmful content rules which may be perceived as overly vague or broad. Adding to this concern, Category 1 service providers will also be required to balance their obligations in relation to harmful content with a separate obligation to protect information of democratic importance and journalistic content. Without additional clarity on what is and is not harmful content, this may prove to be a difficult balancing act and one which is likely to place a much greater burden on online service providers as the moderators of online content than they may have anticipated or be equipped to accommodate.

The use of AI in content moderation

It is clear that regulated service providers will rely heavily on AI to facilitate the monitoring and takedown of problematic content in order to comply with the Bill. However, several stakeholders have questioned whether algorithmic moderation can capture the nuance and subtlety needed to identify harmful content effectively without encroaching on freedom of speech and expression. For instance, an algorithm may flag speech that is satirical or that is intended to raise positive awareness of an issue such as suicide prevention. TikTok made this point in its evidence to the Joint Committee: "when it comes to harm… getting rid of most of the stuff is straightforward; it is the bit where nuance and context are required and how you do that at massive scale, which is difficult".

The DCMS explained in its evidence to the Joint Committee that the Bill's focus is entirely on systems and processes rather than on reviewing individual pieces of content. According to the DCMS, this will ensure that the regime stays relevant to the growing number of harms and does not overburden companies in a way that leads to over-removal of content. In its evidence to the Joint Committee, Twitter endorsed this "safety by design" approach while flagging that the regulations should remain mindful of the technological barriers to deploying safety tools. It noted that content moderation technologies continue to exist in silos and are used most effectively by big players operating at large scale. Twitter added that ensuring that a range of service providers can access these technologies (as well as the underpinning data, through robust information-sharing channels) will be crucial to addressing harmful content effectively.

Notably, the Bill creates a complaints mechanism for users who feel that their content has been unfairly removed. In her evidence to the Joint Committee, the Information Commissioner, Elizabeth Denham, cautioned that regulated service providers could face an overwhelming number of individual complaints if they do not put adequate accountability safeguards in place, especially with respect to algorithmic or AI-based moderation. If a regulated service provider implements a system that ultimately leads to under-flagging or over-flagging of content, it is unclear how Ofcom will treat this from a compliance perspective when it comes to enforcement of the Bill.

Key Takeaways

Given the issues set out above, the Joint Committee is likely to carry out a robust review of the harmful content regime and it would not be surprising to see substantive revisions to this regime both before the Bill is formally introduced to Parliament and as the Bill makes its journey through the legislative process. Even after the Bill comes into force, the full picture (particularly when it comes to the practical and operational realities of this regime) will only become clear as and when secondary legislation and guidance are made available.

Whilst the draft legislation is still very much in a state of flux, there are some practical steps that regulated service providers can take to prepare. For example, service providers within the scope of the Bill may wish to:

  1. carry out a review of their current policies in respect of harmful and illegal content and identify areas that will require updating to reflect the new legislation once finalised;
  2. carry out initial risk assessments in respect of their platforms to ascertain the extent of harmful and illegal content on their platforms, the ease of dissemination of such content and also the likelihood of users (and in particular children) encountering such content when using the platform (as these will likely be the key factors in determining the service provider's approach to compliance);
  3. consider what technical aspects of their platforms may require changes to ensure compliance with the new laws – this will likely involve a review of existing content monitoring tools and procedures, and consideration of what additional tools and procedures it would be proportionate to implement given the size and nature of the platform and its user base (as well as the risk factors mentioned in (2) above).

Given that the EU's own online safety regime (in the form of the proposed Digital Services Act and Digital Markets Act) is being developed in parallel, companies that fall within the scope of both the UK and EU legislation may be able to reduce their compliance burden by responding to both initiatives in lockstep.

Online Safety Bill - FAQs

  1. When was the Bill published?
    1. 12 May 2021.
  2. What online services does the Bill cover?
    1. The Bill applies to: (i) internet services that allow users to share user-generated content (e.g. social media platforms); and (ii) providers of search engines (collectively referred to in this article as 'regulated service providers').
  3. Who will oversee/enforce the new legislation?
    1. The Bill appoints Ofcom as the regulator responsible for enforcing the provisions of the Bill.
  4. What are the potential sanctions for breach of the legislation (as currently drafted)?
    1. The Bill authorises Ofcom to impose fines of up to £18 million or 10% of annual worldwide turnover (whichever is greater).
    2. The Bill also contains a deferred mechanism to impose criminal sanctions on senior managers of regulated service providers in certain circumstances.
  5. What happens next?
    1. The Joint Committee appointed to consider the draft Bill is expected to publish its report by 10 December 2021.
    2. Following publication of the Joint Committee report, the UK Government is expected to prepare an amended Bill (taking into account the Joint Committee's findings) which will then be formally introduced to Parliament to begin its journey into law.

Key contacts

Hayley Brady

Partner, Head of Media and Digital, UK, London

James Balfour

Senior Associate, London