AI and the EU - a proposal for regulatory reform

In February the EU Commission announced its strategy for shaping the digital future of the bloc. This included the publication of its long-awaited white paper on the future of artificial intelligence, with proposals for introducing a regulatory framework to govern the adoption and application of AI in both the commercial and public realms.

The reforms come in response to growing concerns amongst the public and in the media about the potential harms that may be caused by autonomous machines, and follow on from work undertaken by the Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG).

An ecosystem of trust

Developing AI that is considered trustworthy by EU citizens is the foundation for the regulatory proposals, with the objective being to build an 'ecosystem of trust'. As digital technology becomes an ever more central aspect of people's lives, the Commission argues that it is fundamental that citizens are able to trust it and that this is a prerequisite to its uptake. The proposed solution is to develop a proportionate and consistently applied regulatory framework that is fit for adoption across Europe.

The white paper identifies three primary categories of risk that its regulatory framework needs to address in order to prevent both material and immaterial harm being caused to individuals:

  • protecting the fundamental rights of individuals as laid down in the EU Charter (e.g. privacy and non-discrimination);
  • ensuring the safety of AI applications; and
  • addressing issues surrounding the allocation of liability and division of responsibility for the effects and consequences of autonomous machines.

Regulatory proposals

Proposing new regulation in this field presents various challenges. One is how existing laws that already govern areas such as data protection, product liability and anti-discrimination will align with any new regulatory framework.

The Commission proposes to address this issue by taking a two-pronged approach, with existing EU laws being subject to review and potential modification in order to address issues specific to AI and then further supplemented by a new dedicated law.

Perhaps surprisingly, the new regulation that has been proposed is relatively limited in scope, particularly compared with the more expansive ambitions put forward by the AI HLEG in a paper published in April 2019. The white paper advocates a risk-based approach, whereby each AI use-case is assessed individually to determine the potential risks it poses to individuals and society. Only those applications deemed 'high risk', taking into account the potential safety implications and threats to individuals' fundamental rights, will become subject to the new regulation.

The six requirements for high-risk AI applications

Where the new law is deemed to apply, the Commission has proposed that the additional mandatory requirements be split into six fields:

  • Training data - data sets that are used to train machine learning algorithms will need to be sufficiently broad to ensure that all relevant scenarios that the application may come across in a live environment are addressed. Reasonable measures will need to be taken to ensure that AI systems do not lead to outcomes which result in bias or discrimination, while the privacy and personal data of individuals whose data is being used for training will need to be adequately protected.
  • Record-keeping & data - organisations will be expected to be able to demonstrate their compliance with the law in practice. This would include needing to retain records of how AI algorithms are developed and integrating traceability into such algorithms which allows problematic decisions and determinations to be challenged and verified.
  • Information provision - individuals need to be provided with clear information about an AI system's capabilities and limitations (including the expected level of accuracy).
  • Robustness & accuracy - AI applications will be expected to behave reliably and as intended, with an appropriate level of statistical accuracy. Systems will need to be designed so they are resilient against attacks and attempts to manipulate the underlying data and algorithms.
  • Human oversight - in order to avoid undermining human autonomy, the AI will need to be subject to appropriate levels of human oversight, taking into account the circumstances.
  • Specific requirements for certain applications - for particular applications, such as facial recognition technologies, additional obligations or restrictions may be introduced.

What next?

The white paper is now subject to open consultation until 19 May 2020, following which it is likely that revised proposals will be put forward.

https://www.hoganlovells.com/en/whitehead-dan