What can the financial services sector do to prepare for future AI regulation?

While it may be unclear what form future artificial intelligence (AI) regulation will take, it is clear that AI regulation is on the way. Financial services providers should consider what they can do in the meantime to prepare for the new AI landscape.

Regulatory / industry standards, principles and guidance - Regulators and industry bodies have already taken steps to assist financial services providers, whether through existing standards and guidance or through new AI-specific guidance. The Financial Conduct Authority (FCA) Principles for Businesses continue to apply to the financial services sector, irrespective of whether services are provided in the traditional sense or using AI. These Principles provide useful guidance to keep firms on track when faced with key AI-related issues. For example, providers must ensure that they are transparent and able to explain AI decision-making, and must monitor their use of AI to ensure fairness to customers and to avoid breaching Principle 6 (customers' interests) and Principle 7 (communications with clients) of the FCA's Principles for Businesses.

The FCA and the Bank of England have also published a useful report on the use of AI and machine learning in the financial services sector. This includes details on approaches taken within the sector to performance monitoring of deployed models, validation of models to ensure systems are being used as intended, and processes used by firms to mitigate risks (such as human-in-the-loop oversight, back-up systems, guardrails and kill switches).

Existing legislation - While AI-specific regulation is currently limited, existing legislation such as consumer, data protection and competition law continues to be relevant and applies in an AI context. The General Data Protection Regulation (EU) 2016/679 and the Data Protection Act 2018 impose requirements in respect of automated processing, including the requirement to provide individuals with meaningful information about the logic involved and the consequences of the processing, and the right not to be subject to decisions based solely on automated processing. Data protection law also requires fair and transparent processing more generally. Rules on unfair contract terms continue to apply, and AI must not be used for anti-competitive purposes. Existing liability frameworks can also apply where AI gives rise to unintended consequences or where a provider faces claims for breach of contract. Financial services providers should therefore assess whether their current terms and conditions need to be updated to ensure they remain fit for purpose.

Ethics - Any future EU regulation is expected to build on the ethical principles set out by the European Commission over the last few years. Financial services providers should therefore have due regard to the Commission's ethics principles and associated guidance.

Internal controls and measures - Regularly reviewing internal processes and measures can help identify gaps and assess whether current processes remain fit for purpose in relation to AI use. Risk assessments, monitoring of data sets, governance processes, and clear complaints and dispute procedures are all measures that financial services providers should consider implementing to ensure readiness for AI regulation.

We expect regulatory expectations on the use of AI by regulated entities to become more clearly defined over the coming months and years as legislation and guidance comes into force in both the UK and the EU. Financial services providers should therefore assess how well equipped they are to meet the best practice standards already available in the market, so that they are ready to comply with more AI-specific regulatory requirements in the future.