The future of AI regulation in Europe

European regulation of artificial intelligence (AI) is on its way, but when and what will this look like?

Following its commitment in 2018, the European Commission has been laying the roadmap for the ethical and trustworthy use and development of AI in Europe. Towards the end of last year, the European Parliament adopted three proposals on the ethical and legal use of AI, with the aim of improving innovation, ethical standards and trust in the technology. This was one of a number of efforts on the trajectory towards an appropriate legal framework for AI. While the European Commission (EC) aims to publish its legal framework in the first quarter of 2021, it is unclear what this will look like. Below we discuss the possibilities for future regulation in light of a number of recent developments in this space.

What will the legal landscape look like?

The Commission has focused in recent years on the concept of ethical and trustworthy AI, so any potential regulation is likely to be underpinned by these values. Work by the High-Level Expert Group on AI (HLEG), in its Ethics Guidelines for Trustworthy AI and accompanying assessment list, can be expected to influence future regulation, as will the principles outlined in the White Paper on AI published in early 2020. The White Paper's influence, for example, can be seen in the most recent proposals adopted by MEPs in the European Parliament late last year.

The adoption by MEPs in October 2020 of ethical and legal proposals relating to intellectual property, civil liability and ethics indicates the direction of future legislation. The draft proposals focused on the issues to be addressed, including the need for human-centric AI controls, transparency, safeguards against bias and discrimination, privacy and data protection, liability and intellectual property.

The European Commission's Inception Impact Assessment (IIA) on a proposal for a legal act of the European Parliament and the Council laying down requirements for artificial intelligence also provides useful guidance when anticipating the regulation of AI. The Commission launched the IIA on 23 July 2020 with the overall objective 'to ensure the development and uptake of lawful and trustworthy AI across the single market through the creation of an ecosystem of trust'. The IIA offers potential options for regulation, including a soft-law approach, mandatory requirements for all or certain types of AI, and a voluntary labelling scheme.

In addition to the anticipated adoption of AI regulation by the European Commission in early 2021, it is possible that we may also see action being taken by UK regulators in the form of UK-specific AI regulation.

Regulation of AI will have a significant impact on the financial services sector. Consequently, any requirements, whether mandatory or voluntary, should be implemented following consultation with financial services providers and regulators, in order to strike the appropriate balance between regulating the use of AI within the sector and allowing continued development and innovation of the technology. Regulators should also ensure that such regulation does not have a detrimental impact on a provider's ability to provide services effectively, and that consumer rights are upheld.

Part two of this blog, 'What can the financial services sector do to prepare for future regulation?', can be found here.