UK Finance response to AI Whitepaper: a pro-innovation approach to AI regulation

The Department for Science, Innovation and Technology published a policy whitepaper titled ‘A pro-innovation approach to AI regulation’ on 29 March 2023, setting out three primary objectives in relation to AI (artificial intelligence):

  • To drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty
  • To increase public trust in AI by addressing risks and protecting our fundamental human values
  • To strengthen the UK’s position as a global leader by using AI technology to address global challenges (while ensuring that new regulatory approaches guide responsible innovation)


With these three objectives at its centre, the policy paper aims to position the UK as a global leader in AI regulation by enabling innovation through a proportionate, adaptable and context-sensitive approach.

Recognising the importance of these broader objectives for diffusing AI responsibly within the financial services industry, UK Finance submitted a response to the whitepaper.

Key messages from our response include:

First, we support the sectoral, risk-based approach built on regulatory guidance. Such an approach provides flexibility and can account for the peculiarities of each sector, each use case and the existing applicable rules more readily than a horizontal AI law. It is also better able to adapt to technological developments than primary legislation.

Second, the UK government and the central function can play a key role in convening and coordinating regulators and facilitating the sharing of information, but the central function will need careful design to preserve regulatory independence. It should help manage cross-sectoral issues to promote coherence and avoid duplication, fragmentation or contradiction. This will help maintain a level playing field between sectors and promote innovation. The development of sandboxes (environments where firms can test new products safely) run jointly by multiple regulators can also help deliver business certainty and reveal areas of tension between regulators’ expectations.

Third, it is important to make clear that regulators can rely on their existing, technology-neutral guidance and rules, with targeted AI-specific supplements where required. They should not be expected to produce a full ‘AI overlay’ when existing regulation addresses risks adequately. AI frameworks should consider existing regulation and carefully evaluate gaps to avoid regulatory duplication, while ensuring high-risk use cases are not overlooked. Publicly available multi-purpose AI tools may need particular attention.

Fourth, a mix of principles-based and prescriptive approaches is emerging globally. This will make international interoperability challenging, but the government should seek opportunities to drive alignment, or at least compatibility, wherever possible.

Lastly, we note some specific policy challenges that government and industry will need to work through further. These include how firms deploying AI can obtain assurance from AI vendors on key compliance questions while respecting vendor IP, and how to address not only the risks posed by AI mishaps at legitimate firms but also the risk of bad actors using AI for malicious purposes such as fraud.

For a deeper discussion of issues raised by the adoption of AI in financial services, join our Digital Technology workstream at the UK Finance Digital Innovation Summit, on 31 October 2023 at 155 Bishopsgate, London. Our other workstreams include Digital Payments, New Digital Money and Data.