
How will lawmakers, regulators and firms respond to the potential risks of AI use in financial services?

In a nutshell:

  • Big data and AI are transforming how financial services and products are delivered, monitored and regulated, and even how evidence is identified and presented in enforcement cases
  • But concerns remain about the quality, fairness and transparency of automated decision-making, and about how the outputs of learning systems can be effectively monitored and challenged
  • Firms need to engage early with regulatory and government policy-making to ensure that legislation or guidance is fit for purpose and helps firms to leverage the benefits and minimise the risks of AI

AI use in financial services – benefit or threat?

Artificial intelligence (AI) underpins more and more aspects of our daily lives. It can control our cars, our pacemakers, our household appliances, and our interactions with suppliers and public authorities. Increasingly, search engines and algorithms deployed within social media direct us to what they determine we might read, watch, buy or invest in, and to that extent govern our cognitive inputs.

The increasing availability of big data, and improvements in regulated firms' ability to analyse and process it, are transforming the way financial services are delivered: firms are deploying artificial intelligence in all areas of their business, at front, middle and back office level. AI is being used to transform customer relationships, target marketing and offer personalised services, support KYC and due diligence processes, assist in credit assessments, automate trading systems, detect patterns and make future value predictions, monitor risks, regulatory compliance and employees, cast light on dark data for underwriting and collection purposes, and to report data to regulators.

As firms' professional advisers, we are also using technological solutions to ensure forensically sound document collection and to facilitate review. Beyond traditional keyword searching, newer solutions identify key communications and custodians before data review begins, highlight key concepts, group conceptually similar documents together for easy identification or exclusion, reduce duplicated effort, flag potentially missing datasets or key dates prior to review, and auto-redact documents. Through continuous active learning they enable more targeted initial review and quality assurance, while machine translation and language identification enable better workflow planning.
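By way of illustration, the sketch below shows the core of a continuous active learning loop of the kind used in technology-assisted review: a classifier is retrained after each batch of human coding decisions, and the highest-scoring unreviewed documents are queued for review next. The corpus, the seed labels and the simulated reviewer are all invented for the example; production eDiscovery platforms are considerably more sophisticated.

```python
# Minimal sketch of a continuous active learning (CAL) loop for review
# prioritisation. Everything below (corpus, seed labels, the stand-in
# "reviewer") is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "price fixing discussion with competitor",
    "quarterly canteen menu update",
    "agree rates before the tender closes",
    "office closed for public holiday",
    "do not put the rate agreement in writing",
    "new starters induction schedule",
]

def simulated_reviewer(text: str) -> int:
    """Stand-in for a human reviewer's relevance call (1 = relevant)."""
    return int(any(w in text for w in ("rate", "price", "tender")))

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

reviewed = {0: 1, 1: 0}  # seed set: two documents already coded by humans
BATCH = 2
while len(reviewed) < len(documents):
    idx = list(reviewed)
    model = LogisticRegression().fit(X[idx], [reviewed[i] for i in idx])
    unreviewed = [i for i in range(len(documents)) if i not in reviewed]
    scores = model.predict_proba(X[unreviewed])[:, 1]
    # Queue the most-likely-relevant unreviewed documents for coding next,
    # then fold the new decisions back into the training set.
    for _, i in sorted(zip(scores, unreviewed), reverse=True)[:BATCH]:
        reviewed[i] = simulated_reviewer(documents[i])

print({documents[i]: label for i, label in reviewed.items()})
```

The point of the loop is that human effort is concentrated on the documents most likely to matter, with each review batch improving the model's next ranking.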

On their side, regulators, too, are leveraging technological advances in data engineering, advanced analytics, network analysis, natural language processing and visualisation techniques to process the proliferation of datasets made available to them by firms, venues and market participants, to identify patterns and risks, provide insights, facilitate triage and casework, and enable proactive and early intervention to prevent harm. In investigations and enforcement, such technology is being used to sift through the often vast quantities of data provided in response to information requests, to model information flows, and to present evidence in a visual and more accessible way, making it easier to demonstrate wrongdoing in court proceedings. Those who find themselves subject to such proceedings will wish to test the integrity of the data inputs, and to be in a position to understand, and potentially challenge, the regulators' modelling and visualisation outputs.
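To make the information-flow modelling point concrete, the sketch below turns communications metadata into a directed graph and uses a centrality measure to flag likely conduits of information. The message data and the choice of betweenness centrality are assumptions made for illustration; they do not describe any regulator's actual tooling.

```python
# Minimal sketch: model information flow as a directed graph built from
# (sender, recipient) metadata, then rank participants by betweenness
# centrality. All data below is invented for illustration.
import networkx as nx

messages = [
    ("trader_a", "broker_x"), ("broker_x", "trader_b"),
    ("trader_a", "broker_x"), ("broker_x", "trader_c"),
    ("analyst_d", "trader_b"),
]

G = nx.DiGraph()
for sender, recipient in messages:
    # Weight each edge by the volume of messages between the pair.
    if G.has_edge(sender, recipient):
        G[sender][recipient]["weight"] += 1
    else:
        G.add_edge(sender, recipient, weight=1)

# Betweenness centrality highlights nodes sitting on many shortest paths:
# candidates for having relayed information between otherwise
# unconnected participants.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

A firm responding to such analysis would want to probe exactly these choices: which communications were included, how edges were weighted, and why a particular centrality measure was treated as probative.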

As we all seek to harness the potential opportunities and benefits that AI and machine learning can create for consumers, firms themselves and the wider economy, we must also grapple with the attendant risks and challenges. Many of these are well-rehearsed – they include concerns about data quality, accuracy, privacy, protection and bias, the potential for discrimination or other forms of unfairness, the explainability and transparency of models and outcomes, and scarcity of expertise, resource and talent.

The widespread adoption of algorithms in trading, and the associated risks, unsurprisingly prompted the imposition of requirements for systems and controls, a human in the loop, monitoring of trading and client dealings, regulation of the provision of liquidity by market makers, and greater regulatory scrutiny of activities perceived as presenting greater market or systemic risks, such as high-frequency algorithmic techniques.

Financial services regulators have to date largely (and deliberately) sought to ensure that policy remains technology neutral. Regulatory ambitions to encourage responsible innovation in the interests of consumers and business have generally sought to do so in a controlled fashion, using regulatory sandboxes, and through the development of high-level standards, such as the FEAT (fairness, ethics, accountability and transparency) principles for the use of AI in Singapore's financial sector, or the IOSCO guidance for regulators on the use of AI and machine learning by market intermediaries and asset managers. Such approaches focus on governance, risk management, data quality and bias, staff capability, outsourcing, continuous monitoring and testing of algorithms and outcomes, ethics, transparency and accountability.

In the US, the National Artificial Intelligence Initiative Act of 2020 directs the newly formed National AI Research Resource Task Force to make recommendations for establishing and sustaining the National AI Research Resource, including technical capabilities, governance, administration, and assessment, as well as requirements for security, privacy, civil rights, and civil liberties. The Task Force will submit two reports to Congress that together will set out a comprehensive strategy and implementation plan: an interim report in May 2022 and a final report in November 2022.

But Europe is taking a dramatically different approach. Hoping to secure a leading role for Europe in setting global gold standards, the European Commission has proposed the 'first ever legal framework' to address the risks of AI. The approach taken in the proposed Regulation is risk-based and cross-sectoral, and is intended to be future proof. Clear requirements and obligations will be created for developers, deployers and users of AI:

  • AI systems posing unacceptable risks (clear threats to the safety, livelihoods and rights of people) will be banned;
  • those posing high risks, including AI used in essential private and public services (including for example remote biometric identification systems and credit scoring), will be subject to strict requirements, including registration, conformity assessments, and reporting of serious incidents or malfunctions;
  • AI systems posing limited risk will be subject to limited transparency obligations (eg to flag the use of AI to humans);
  • the vast majority of AI applications currently in use in the EU (for example video games or spam filters), which are considered to represent minimal or no risk, will be free to use.

The very expansive territorial scope of the proposed Regulation would capture third country (non-EU) providers and users of AI systems where the output produced by the system is used in the EU. Breach of the provisions will expose firms to fines. The proposed Regulation could enter into force as early as H2 2022, but there will then be a two-year transitional period before it comes fully into application.

Whether or not other countries choose to adopt a similar approach, firms are likely to be confronted by incompatibilities between the priorities of sectoral and national regulators, both domestically and across jurisdictions, and between legislative and other requirements, whether newly enacted or drafted long before digital interaction at this level (or at all) was contemplated.

A range of regulators in various jurisdictions, including the FCA and the Information Commissioner in the UK, and ASIC in Australia through its regtech liaison forum, are engaging with the industry through public-private partnerships to help navigate some of these challenges, as is the SEC in the US through its recent request for information and comment. It is important that firms engage proactively in helping to identify areas of incompatibility. This work should also help firms to articulate clear goals for their AI and machine learning use cases, and to ensure that these remain aligned with society's own common goals, however those may be defined, so that we are not unwittingly manipulated by our increasingly autonomous and increasingly intelligent machines.

Key contacts

Jenny Stainsby
Global Head – Financial Services Regulatory, London

Andrew Procter
Consultant, London

Karen Anderson
Consultant, London