While the General Election has thrown some doubt on the light-touch approach to regulation of AI in the UK, Labour would also be likely to start off relying significantly on regulators to regulate AI in their particular area. We consider the various strategies from regulators so far, and practical issues for businesses to consider.

In late spring a wide range of regulators published their responses to the 2023 white paper on AI, which set out the Conservative government's approach to the regulation of the new technology. This pro-innovation, light-touch approach proposed that sectoral regulators would be primarily responsible for regulating the use of AI in their area, with no new specialist regulator or specific legislation planned at this stage, although the area was to be kept under review. Instead, regulation of AI would be guided by five overarching principles:

  • safety, security and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

Since then, we have of course seen the current government call a General Election. While this has thrown some doubt on future regulation of AI, early indications suggest that Labour would also start off relying significantly on regulators to regulate AI in their particular area. Labour does, however, envisage creating a new body to assist the regulators and ensure co-ordination.

We therefore think that the regulators' responses to the white paper, which offer some useful insights into how that approach would be implemented by the different regulators, remain relevant whoever wins the election. In this post we consider the various approaches, and a co-operative hub launched by four regulators, which may assist businesses with upcoming AI products or projects that fall within the remit of multiple regulators.

Regulators' approaches

Regulators including the FCA, PRA, ONR and Ofgem emphasised in their responses that they are technology-agnostic. The FCA will prioritise understanding deployment strategies within the firms it already regulates, and acknowledges that the complexity of AI models may require a greater focus on the testing, validation and explainability of those models, as well as strong accountability principles.

Ofgem has undertaken a mapping analysis of the extent to which existing law and regulation covers the government's five principles set out above. The only principle for which it found neither direct nor indirect mapping was explainability. Safety, security and robustness, and accountability and governance, were mapped only indirectly, with accountability and governance also raising novel issues around responsibility, auditability and liability.

Some of the regulators detail regulatory sandboxes in their responses, and one sandbox for which more information is available is the Medicines and Healthcare products Regulatory Agency's ("MHRA") AI Airlock. This is a regulatory sandbox for AI as a Medical Device which is designed to help the MHRA identify challenges for regulating in this area and assist in the safe development and deployment of AI as a Medical Device. The findings from it will influence future UK and international guidance.

Clearly the approach of the regulator will depend on the nature of their sector. One regulator in particular, Ofqual, is applying what it describes as a "precautionary" approach to AI, focusing on safety and appropriateness in high-stakes processes, and prioritising stability while remaining open to compliant innovations in AI. Ofqual's response focuses on, among other principles, ensuring fairness for students and maintaining public confidence. This is perhaps understandable for a regulator that oversees what are, on the whole, tests designed to measure human performance, where the value of those tests rests largely on whether society views and trusts them as true measures of performance.


As much as regulators can say that they are outcomes-focused and technology-agnostic, new technology has the potential to alter outcomes significantly, and often means that new principles for regulation are needed. To put it simply: the speed limit needed when people travel between cities by horse and cart differs from the speed limit needed when they travel by car, and so do the underlying principles of the regulation. The new technology opened up new possibilities for faster, more efficient travel, but it also created new risks to road users, and new principles for balancing efficiency and safety were needed.

Regulators are recognising this to an extent. Ofgem understands that deployment of AI in the energy sector is likely to pose novel questions, particularly around supply chain duties, explainability, and governance and accountability. The same will no doubt be true in other sectors.

Mapping exercises such as that conducted by Ofgem are clearly useful. For regulators, however, it may be the Rumsfeldian unknown unknowns which present the greatest risk. This is why engagement both with equivalent regulators internationally and with regulated firms themselves will be key. Regulators need to build up sufficient institutional knowledge and understanding of how AI is likely to be deployed in their area, how it works and what the potential drawbacks are, so as to make the scope for unknown unknowns as narrow as possible. As well as reducing the scope for unknown unknowns, such engagement is likely to minimise regulatory uncertainty for regulated entities and encourage safe AI exploration, as the ONR notes in its response.

Practical issues for businesses as a result of these approaches to AI

The first practical issue for a business is to consider the tone of the relevant regulator's (or regulators') approach to AI, and whether the firm's own approach to AI is sufficiently aligned. If it is not aligned, organisations should consider what further work is needed to bring products and projects into alignment; if it is, how to demonstrate that alignment. All regulators have so far been influenced by the government's "pro-innovation" language, but to a greater or lesser extent and, as discussed above, the approach will very much depend on the nature of the sector and the particular risks that arise in that area.

Where regulatory sandboxes aimed at AI are available, businesses should consider whether they have suitable projects or products to introduce in the sandbox. Together with the cooperation forum detailed below, they offer potentially useful tools to flush out issues with AI products and projects before deployment.


The Digital Regulation Cooperation Forum ("DRCF") is a joint forum of the CMA, Ofcom, the ICO and the FCA. It has launched the AI and Digital Hub pilot, an informal advice service for innovative products or projects. The target response time is 8 weeks, and users receive a response incorporating input from the two (or more) regulators relevant to their query. The service will respond to follow-up queries to clarify the indications given. Advice given does not bind the relevant regulators, and if users disclose something non-compliant in their query, they will be informed of the potential non-compliance, and the query may be forwarded on to other services within the relevant regulators (for example, enforcement).

The service may be most useful for businesses seeking an initial view during the development of a major project or product. The utility of the DRCF hub depends on the quality of the responses. If users receive a response that helps to elucidate how the regulators would assess a major planned project or product, that would be useful. If they receive a response that simply identifies the broad regulatory provisions engaged, without considering how they apply to the particular facts, then the service is less useful. Crafting queries so that the relevant question is easily understood, with enough background, will be important for users and their advisers.

While it is understandable that resources for such services are not endless, in a fast-moving area 8 weeks for a response is a relatively long time. It means that the service is most likely to be useful for advisers working on the relatively longer-term development of projects and products.

All change please?

Of course, since the white paper and the publication of the various regulators' strategies, a General Election has been called and we now face a possible change of approach. The Labour party manifesto contains the following interesting extract:

"Regulators are currently ill-equipped to deal with the dramatic development of new technologies, which often cut across traditional industries and sectors. Labour will create a new Regulatory Innovation Office, bringing together existing functions across government. This office will help regulators update regulation, speed up approval timelines, and co-ordinate issues that span existing boundaries. Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models".

While the creation of the new Regulatory Innovation Office departs from current policy, it is clear that existing regulators would still play a central role under Labour. It will be interesting to see whether the regulators' own language changes if a new government emphasises safety and regulation over the light-touch approach.

We will be discussing these issues further and unpacking the Labour position in an upcoming podcast.

Key contacts

Andrew Lidbetter, Consultant, London
Nusrat Zar, Partner, London
James Wood, Partner, London
Jasveer Randhawa, Professional Support Consultant, London
Daniel de Lisle, Associate, London