AI policy and regulation – a busy agenda for 2024

In this blog, we summarise key regulatory developments over the past year impacting the use of artificial intelligence (AI) in financial services and look at how they will develop over the coming year.

The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.

Context: Why talk about AI policy right now?  

  • Following recent developments – including the public launch of ChatGPT and an intense private-sector race to develop generative AI – there has been a resurgence of interest in AI, both in the financial services sector and across the wider economy.  

  • These developments have accelerated the public sector’s consideration of AI too. The Department for Science, Innovation and Technology (DSIT) published its flagship AI whitepaper in the second quarter of the year (building on its two years of work on AI policy), and the UK showed global leadership by hosting the successful AI Safety Summit at the start of November 2023.  

  • AI can be broadly categorised into predictive AI and generative AI. Within financial services, the former – traditional machine learning models – has been widely adopted, particularly in risk management and economic crime functions such as detecting fraudulent transactions and money laundering. Generative AI (which sparked the resurgence of interest in AI) has a wide range of potential uses, such as summarising articles, writing code, generating images, or generating text for purposes as diverse as marketing copy, job descriptions or simply emails. And indeed, wider applications are likely yet to emerge. 

  • A recent report from UK Finance and Oliver Wyman, based on a survey of UK Finance members, highlights that – although predictive AI has been widely adopted – generative AI’s use is still limited to pilots and proofs-of-concept and has not yet been expanded to use at scale. Our report also predicts that these applications are likely to spill over to other business functions to drive efficiency and productivity as AI systems mature over the coming 18 months and beyond.  

  • Despite the excitement generated by the innovation and diffusion of AI technology, firms and public stakeholders recognise the risks of AI and the need to ensure effective regulation of the technology. Concerns include the risk of biased outputs, the opacity of AI systems, ‘hallucination’ risks, and the risk of AI being used for malicious purposes, such as ‘deepfakes’ or fraud. 

How will AI regulation develop in the UK in 2024?  

  • The Department for Science, Innovation and Technology (DSIT) will set up the AI Safety Institute, absorbing the Frontier AI Taskforce. The Institute will focus on the most advanced current and future AI capabilities and drive foundational AI safety research by facilitating international information exchange.  

  • The final UK AI regulatory model is likely to be published in Q1 2024, following up on the AI whitepaper. DSIT has also announced the pilot of an advisory service in 2024 to help businesses launch AI and digital innovations safely and responsibly.  

  • The Bank of England (BoE) and Financial Conduct Authority (FCA) will set out next steps following the AI discussion paper they published at the end of 2022. 2024 will also likely see the launch of the FCA’s AI sandbox.  

  • The Competition and Markets Authority’s (CMA) ongoing work on foundation models may involve new consultation papers in 2024.  

  • The Digital Regulation Cooperation Forum (DRCF) is still developing its workplan for next year. Nonetheless, this will likely include a section on AI, given its engagement with DSIT’s AI whitepaper.  

  • The Information Commissioner’s Office (ICO) will need to issue guidance on automated decision-making once the Data Protection and Digital Information Bill is finalised, and has said it will work on providing an update to its AI toolkit.  

  • The Equality and Human Rights Commission (EHRC) has said it will publish guidance on AI and discrimination in 2024 or 2025.  

  • The Lord Mayor has shown a keen interest in AI and is developing skills-based training for senior staff to build greater awareness and knowledge of AI ethics.  

  • The Centre for Data Ethics and Innovation (CDEI) is continuing its AI work, including research on data ethics and public attitudes towards AI, as well as AI assurance techniques.  

So what? Final thoughts: 

  • 2024 will see a surge in work on AI regulation among the key stakeholders that UK Finance works closely with.  

  • UK Finance will work with members to develop our thinking on this important topic. We will collaborate with these public sector stakeholders to share industry insights and help achieve a UK regulatory framework that works efficiently and effectively. This, in turn, will help ensure that the momentum generated by recent AI developments delivers value for customers and businesses in a fair, ethical and transparent way.