Navigating the EU AI Act: a strategic approach for financial services

As we approach the first anniversary of the EU AI Act's entry into force in August 2025, it is an opportune moment to reflect on its impact and on the strategic steps financial services institutions must take to ensure compliance and to leverage the Act for innovation.

The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.

The EU AI Act, the European Union's landmark AI regulation, is reshaping the governance of artificial intelligence, with an emphasis on compliance, transparency, and ethical practice. For the financial services industry, the Act demands significant change and a robust response that supports both adherence and innovation. It follows a phased implementation timetable, with full application by 2027.

Impact of the EU AI Act

The EU AI Act takes a risk-based approach, distinguishing between three main risk levels: unacceptable risk (prohibited AI practices), high risk, and limited or minimal risk.

Table 1: Three main risk categories under the EU AI Act

AI systems that pose unacceptable risk are banned outright, including for Financial Services Institutions (FSIs). Under Annexes I and III of the AI Act, FSIs must pay particular attention to high-risk applications such as credit scoring or risk assessment and pricing for life and health insurance, as these could discriminate against particular groups. Such systems face stringent requirements to ensure transparency, accountability, and bias mitigation.
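The bias-mitigation requirement can be made concrete with a simple statistical check. The sketch below computes a demographic parity gap over toy credit-approval decisions; the choice of metric, the group labels, and the 0.2 tolerance are illustrative assumptions on our part, not requirements of the Act.

```python
# Hedged sketch: one simple fairness check an FSI might run on a
# credit-scoring model's decisions. Groups and threshold are illustrative.

def demographic_parity_difference(decisions, groups):
    """Gap in approval rates between the most- and least-approved
    groups (0.0 = parity). `decisions` are 1 (approved) / 0 (declined)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy data: group A approved 3/4, group B approved 1/4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
if gap > 0.2:  # illustrative tolerance, not a regulatory figure
    print("flag model for bias review")
```

In practice an FSI would apply such checks across protected characteristics and document the results as part of the high-risk system's conformity evidence.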

Figure 1: Key requirements for high-risk systems

Lower-risk applications should still undergo proper assessment, particularly when they involve direct interaction between AI systems and end users. In such cases, it is essential to clearly inform users that they are engaging with an AI system. 

While not specifically mandated by the AI Act, strengthening user and customer trust in lower-risk AI systems is nonetheless good practice. One effective approach is for personalised financial advice tools to provide clear, easily understandable explanations for their recommendations, helping to prevent confusion or misinterpretation. Similarly, AI systems used in investment management should rely on unbiased data and present insights in a transparent and accessible manner, enabling users to make well-informed decisions.
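One lightweight way to combine the AI-disclosure obligation with the explanation practice described above is to wrap every recommendation in a standard notice plus plain-language reason codes. A minimal sketch; the function name and message format are our own assumptions, not anything prescribed by the Act:

```python
# Hedged sketch: attach an AI disclosure and plain-language reasons
# to an automated recommendation before it reaches the end user.

def advice_with_disclosure(recommendation: str, reasons: list) -> str:
    """Return a user-facing message that discloses AI involvement
    and explains the recommendation in plain language."""
    lines = [
        "You are interacting with an automated (AI) advice tool.",
        f"Suggestion: {recommendation}",
        "Why: " + "; ".join(reasons),
    ]
    return "\n".join(lines)

print(advice_with_disclosure(
    "increase monthly savings by 5%",
    ["income currently exceeds spending", "no emergency fund on record"],
))
```

The point of the pattern is that the disclosure and the rationale travel together with the recommendation, so no downstream channel can surface one without the other.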

Challenges in complying with the EU AI Act

Compliance with the EU AI Act presents several challenges for financial services institutions:

  • Data quality and bias: Ensuring that AI systems are free from biases and operate on high-quality data. Poor data quality leads to inaccurate outcomes and biases, which the EU AI Act aims to mitigate.
  • Transparency and explainability: The act requires that AI systems be transparent and explainable, so users understand how decisions are made. This is particularly challenging for complex AI models.
  • Continuous monitoring: The need for ongoing monitoring and periodic reviews to ensure systems remain compliant is demanding. It requires dedicated resources and advanced monitoring tools.
  • Integration with legacy systems: FSIs often operate with legacy systems that might not be easily compatible with new AI regulations. Integrating compliance measures can be technically difficult and resource-intensive.
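The continuous-monitoring challenge above often begins with drift detection on model inputs or scores. A minimal sketch, assuming pre-binned score distributions and the common rule of thumb (an industry convention, not a regulatory threshold) that a population stability index above 0.2 signals material drift:

```python
import math

def population_stability_index(expected, actual):
    """PSI over two binned probability distributions.
    Higher values mean the live population has drifted further
    from the baseline the model was validated on."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0  # skip empty bins to avoid log(0)
    )

# Baseline score distribution at validation vs. current production traffic.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.40, 0.30, 0.20, 0.10]

psi = population_stability_index(baseline, current)
print(f"PSI: {psi:.3f}")
if psi > 0.2:  # common rule of thumb, not mandated by the Act
    print("significant drift: trigger model review")
```

A scheduled job running a check like this feeds the periodic reviews the Act's monitoring obligations anticipate, and gives the feedback loop a concrete trigger.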

Strategic recommendations

To successfully navigate the EU AI Act, FSIs should consider the following:

  • Conduct comprehensive audits: Conduct thorough audits of existing AI systems to assess readiness and compliance. Categorise AI applications by risk level and document these audits meticulously.
  • Develop robust governance frameworks: Implement a strong governance framework that includes risk management, data governance, and compliance accountability. This framework should continuously evolve based on new information and risks.
  • Ensure transparency and explainability: Maintain detailed documentation of AI models and clearly communicate AI interactions to users. Implement tools that enhance the explainability of AI decisions.
  • Engage in continuous monitoring: Establish mechanisms for real-time monitoring and periodic reviews of AI systems. Develop feedback channels for users to report issues and refine AI systems accordingly.
  • Provide training and education: Invest in training programmes that cover AI compliance, ethical practices, and technical skills. Ensure that all employees understand the EU AI Act's requirements and their roles in maintaining compliance.
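As a sketch of what the audit-and-categorise step might produce, the following builds a toy AI-system inventory and assigns each entry an indicative risk tier. The use-case-to-tier mapping is a deliberately crude simplification of Annexes I and III for illustration only; it is not legal advice, and a real audit would record far more detail per system.

```python
from dataclasses import dataclass

# Illustrative shortlist of use cases an FSI might treat as high risk.
# A real mapping must be derived from the Act's annexes with legal input.
HIGH_RISK_USES = {"credit scoring", "life/health insurance pricing"}

@dataclass
class AISystem:
    name: str
    use_case: str
    user_facing: bool  # direct interaction with end users?

def risk_tier(system: AISystem) -> str:
    """Assign an indicative EU AI Act risk tier to one inventory entry."""
    if system.use_case in HIGH_RISK_USES:
        return "high"
    # User-facing systems carry transparency (disclosure) duties.
    return "limited" if system.user_facing else "minimal"

inventory = [
    AISystem("loan-scorer", "credit scoring", user_facing=False),
    AISystem("advice-bot", "personalised financial advice", user_facing=True),
    AISystem("doc-classifier", "internal document routing", user_facing=False),
]

for s in inventory:
    print(f"{s.name}: {s.use_case} -> {risk_tier(s)} risk")
```

Keeping the inventory in a structured form like this makes the "document these audits meticulously" recommendation actionable: each entry can later carry its compliance evidence, review dates, and monitoring status.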

Conclusion

The EU AI Act presents both challenges and opportunities for the financial services industry. By understanding and adhering to its requirements, FSIs can use the Act as a catalyst for innovation and ethical AI deployment. FSIs must act now to align their AI strategies with regulatory demands: begin with thorough audits of your AI systems, establish stringent governance frameworks, and invest in continuous monitoring and staff training. Proactive measures today will ensure compliance and pave the way for ethical and transparent AI implementation.

Discover how our expertise in Banking and Capital Markets, combined with our innovative solutions in Data & AI, can transform your business.