Managing machines in an evolving legal landscape: The rise of AI in financial services

The financial services sector is a cornerstone of modern economies.

In 2021, it contributed £173.6 billion to the UK economy, representing 8.3% of total economic output. The use and exploration of artificial intelligence in the financial sector has accelerated rapidly in recent years, with the explosion of generative AI bringing a new wave of digital change and pushing the conversation around AI into the mainstream.

AI presents unique ethical and practical dilemmas that have left law and regulation struggling to keep up. The deployment of AI in financial services in particular is raising complex legal and regulatory issues. Certain areas of law, such as data protection and competition law, have responded in their own ways, and AI is receiving significant attention from financial regulators across the globe.

We know that AI is a big topic of discussion at the moment amongst our financial services clients and the financial sector more generally. AI is a boardroom issue, fuelled by developments such as the UK’s upcoming AI Safety Summit and the EU’s impending agreement on its first-of-its-kind AI Act.

Interest from global regulators

Many governments see great potential for AI to drive economic development and to solve societal challenges. They want to provide the legal framework needed to encourage innovation, attract investment and enable growth. At the same time, they recognise there is a need to protect their citizens and to address the ethical, legal, social and economic issues associated with AI.

At the time of writing, according to the OECD, governments in 69 countries and territories, plus the EU, have published national AI strategies and, between them, over 800 AI policy initiatives. Many involve consulting experts and industry, proposing ethics- and principles-based guidelines, and identifying the changes to existing law and regulation needed to enable the use of AI. There is an increasing recognition that international cooperation is needed to establish the global standards required to support the safe adoption of AI in all sectors.

How does the UK compare?

The UK government has announced that it intends to adopt a light-touch, industry-led approach, meaning that there won’t be specific legislation like the EU’s AI Act at this stage. Instead, it will empower existing regulators to develop tailored approaches that suit the way AI is actually being used in their sectors, guided by five overarching principles. This aligns with the approaches currently being taken by, for example, Singapore and Hong Kong SAR, which aim to foster responsible AI use and innovation through high-level principles and best practice guidelines.

On the other hand, the EU is developing an extensive regulatory and liability regime, with the EU AI Act set to be the first AI-specific regulation in the world. Due to be agreed by the end of 2023 and in force by 2025, the EU AI Act is anticipated to have extra-territorial reach. It focuses on transparency, accountability, and human oversight, and categorises AI systems by risk level – unacceptable, high, limited, and minimal – each with specific requirements; it may also include specific rules for AI foundation models.

The Cyberspace Administration of China has also launched rules and restrictive measures for companies developing generative AI products like ChatGPT. These measures cover AI algorithms as well as the ‘models and rules’ used to generate content. A fuller draft AI law is expected to be released for public consultation by the end of 2023 or early 2024.

Meanwhile, the US is taking more of a middle-ground approach. Although there is increasing pressure to regulate AI at a federal level, no specific measures are gaining traction. The regulatory approach is currently also sector-specific, with some AI guidance in areas like healthcare, financial services and transportation. However, some individual states have enacted AI-specific laws within their jurisdictions, and the National Institute of Standards and Technology has produced the most comprehensive and holistic AI Risk Management Framework to date.

Managing AI-related risk

We are seeing the financial services regulatory process accelerate in response to the rapid development of generative AI. What will be interesting to see is how developments at national, regional and international levels play out, and what level of harmonisation across sectors, jurisdictions and regions can be achieved when competing political objectives are in play. For example, the divergence in regulatory approach between the EU and the UK complicates the compliance challenge for firms operating in both regions.

In financial services, given that many regulators take a technology-neutral approach to enforcing their rulebooks, firms need to continue to map their AI projects against existing law and regulation.

However, with AI-specific regulation emerging in major markets like the EU, the US and Mainland China, and comprehensive AI risk management guidance now available, there will be limited tolerance for firms that do not manage AI effectively. Horizon-scanning for legal and regulatory change has therefore moved from a nice-to-have to an essential.

Getting the legal and regulatory structuring right upfront can make all the difference in avoiding not only legal consequences but also serious reputational and financial damage.

For further details, Linklaters’ latest AI in financial services report provides an overview of some of the key legal challenges, and some practical guidance for businesses on managing legal risks when deploying this revolutionary technology within financial services.