Risk and Reward: The double-edged sword of AI in finance & insurance

With the Financial Services and Insurance (FSI) sector managing vast volumes of data and critical daily transactions, artificial intelligence (AI) promises to streamline processes, enhance customer experiences, and enable innovative products.

Yet these advancements are not without emerging risks that every stakeholder in the sector must consider. Regulators and policy makers are working to account for the risks AI presents to the economy, and there are a number of actions organisations can take to adopt AI effectively and safely in this context.

Data security and privacy top the list of risks in AI adoption. Given that AI models feed on large data sets for accurate decision-making, any breach could compromise sensitive customer data or commercially sensitive information, resulting in significant financial and reputational damage. The intricacy of AI algorithms also renders them vulnerable to adversarial attacks: malicious actors can subtly manipulate input data to deceive AI systems, leading the models to make erroneous predictions or decisions. The black-box nature of some AI models can make it challenging to identify, understand, and rectify potential vulnerabilities, creating a blind spot for financial institutions.
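
To make the adversarial risk concrete, the sketch below shows how an evasion attack works in principle: a small, targeted change to each input feature of a purely hypothetical linear fraud-scoring model is enough to push a flagged transaction below the decision threshold. The model, weights, feature values, and threshold are illustrative assumptions, not a real system or methodology.

```python
# Illustrative only: a hypothetical linear fraud-scoring model and a
# gradient-sign-style evasion against it. All numbers are made up.
import numpy as np

w = np.array([1.5, -2.0, 1.0, 2.5])   # "learned" weights (illustrative)
b = -4.0

def score(x):
    """Probability that a transaction is fraudulent."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, -0.3, 0.8, 1.2])         # a transaction the model flags
print(f"original score:  {score(x):.2f}")   # ~0.87, above a 0.5 threshold

# Evasion: nudge each feature slightly in the direction that lowers the
# score. For a linear model that direction is simply -sign(w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
print(f"perturbed score: {score(x_adv):.2f}")  # ~0.45, now below the threshold
```

Production models are far more complex, but the principle is the same: without adversarial testing and input validation, small perturbations can change high-stakes outputs.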

Additionally, AI's inherent complexity could inadvertently introduce biases, making decisions unfair or discriminatory in crucial areas like lending or claims processing. This opacity also makes it harder to justify decisions to regulators and customers. While AI can predict market trends, detect fraud, automate underwriting, and offer tailored financial advice, it is crucial to remember that adversaries will seek to manipulate and exploit AI for personal gain or to deliberately disrupt business operations. As AI becomes increasingly central to the operations of financial entities, addressing these security issues becomes paramount to maintaining trust and ensuring the sector's resilience.
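
One practical way to surface the bias risk described above is to monitor decision rates across customer groups. The snippet below is a minimal, hypothetical screen using the common "four-fifths" heuristic; the groups, decisions, and threshold are assumptions for illustration, not a compliance test.

```python
# Hypothetical fairness screen on model decisions (synthetic data).
# 1 = application approved, 0 = declined.
approvals = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: sum(d) / len(d) for group, d in approvals.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                          # approval rate per group
print(f"impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:                # "four-fifths" screening heuristic
    print("Possible adverse impact: review the features driving the gap.")
```

A check like this does not prove or disprove discrimination, but it flags where a deeper review of the model's features and training data is warranted.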

Regulatory compliance is non-negotiable for FSI firms in this AI-dominated landscape. It is not merely about data protection but extends to how AI-driven decisions are made. As the UK Government's upcoming policies and AI Whitepaper indicate, there is a global movement towards ensuring fairness, transparency, and accountability in AI practices.

Generally, the approach of many jurisdictions aligns with the Organisation for Economic Co-operation and Development's (OECD) AI guidelines, emphasising fairness, transparency, accountability, sustainable development, and robustness in security and safety. However, governments diverge on whether adherence to these principles should be a regulatory requirement or remain voluntary. Some nations prioritise flexibility and innovation in their policies, while others adopt a more cautious stance. For example, the European Union's (EU) AI Act seeks to prohibit outright AI systems that pose an "unacceptable risk," such as societal scoring systems. By contrast, the UK champions a "pro-innovation" strategy, refraining from outright bans and instead advocating for regulatory sandboxes and experimental zones. Notably, regulations are more pronounced in sectors such as financial services, where the stakes are higher.

An AI-integrated FSI sector calls for a transformative organisational approach. Essential training shouldn't be confined to merely using AI tools; understanding their outputs and inherent risks is equally critical. For example, insurance adjusters must recognise AI's evaluation nuances, while underwriters should comprehend the potential biases that AI might bring to risk assessments.

Mitigating these security challenges requires a multi-faceted approach. Robust data management protocols should be in place to protect data integrity and confidentiality, and regular audits of data access and usage can help detect and prevent unauthorised access and breaches. Financial institutions should consider investing in research and tools for adversarial defence, which can detect and counteract attempts to deceive AI models. Furthermore, to address transparency issues, there is a growing emphasis on explainable AI (XAI) techniques: these tools can demystify AI decision-making processes, ensuring that they align with regulatory standards and ethical considerations.
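
As a sketch of what an explainability check can look like in practice, the example below uses permutation importance: shuffling one input at a time and measuring how much model accuracy degrades, which indicates how heavily the model relies on that feature. The model, data, and feature names are synthetic assumptions, and dedicated XAI tooling (for example SHAP or LIME) offers much richer explanations.

```python
# Minimal permutation-importance sketch on a synthetic "claims" model.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the label depends mostly on the first feature.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Stand-in for a trained model: a fixed linear classifier.
weights = np.array([1.0, 0.2, 0.0])
predict = lambda data: (data @ weights > 0).astype(int)

baseline_accuracy = (predict(X) == y).mean()
feature_names = ["claim_amount", "customer_tenure", "postcode_risk"]  # hypothetical

for i, name in enumerate(feature_names):
    X_shuffled = X.copy()
    # Break the link between feature i and the label, then re-score.
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])
    drop = baseline_accuracy - (predict(X_shuffled) == y).mean()
    print(f"{name:16s} importance ~ {drop:.3f}")
```

The larger the accuracy drop, the more the model depends on that feature; outputs like these give risk and compliance teams a starting point for explaining and challenging model behaviour.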

There is no one-size-fits-all solution concerning AI use and adoption. Organisations ought to:

  • Recognise the threats they face, handle the associated risks, and exploit the advantages that AI brings to enhance their cyber security stance.
  • Embrace an adversarial perspective by performing penetration tests and security audits against AI platforms, operating under the assumption that adversaries are also using AI to refine their attack strategies.
  • Ensure that privacy, information security and ethics, as well as evolving regulatory requirements, are considered when developing or using AI within business applications.

Cyber security leaders in FSI must communicate with their boards, executive teams and employees to explain the risks and opportunities posed by AI and how they plan to manage them.

While AI is revolutionising the FSI sector, offering unparalleled operational enhancements, it also introduces new security concerns. However, with suitable safeguards, strategies, and continuous vigilance, these institutions can harness the full potential of AI while ensuring that their operations remain secure and trustworthy.

NCC Group is at the forefront of AI security research and AI risk advisory across FSI. We've just published a new research paper that explores the security implications of AI in greater detail. Download your free copy.