The financial services sector is hyper-competitive, but financial institutions are also highly interdependent and routinely support each other in several essential ways: by sharing cyber threat intelligence and best practice for countering cyber-attacks and fraud, and by participating in joint exercises to prepare for severe but plausible scenarios.
One such exercise was recently held in person by FS-ISAC on Artificial Intelligence (AI). Security practitioners from many EMEA-based financial institutions gathered to learn from each other and from experts about AI's ability to exacerbate incidents in the current threat landscape, and to discuss mitigation tactics to reduce its effects. The realistic scenario asked participants to defend their institutions against a deepfake attack: synthetic media generated by adversaries to spread falsehoods or incite market panic, a tactic known as MDM (mis-, dis-, and mal-information).
AI exercise objectives
Financial institutions exercise incident scenarios together because a crisis impacting one can quickly cascade and become systemic. Last year a large US bank collapsed because of a bank run. What if an AI-based MDM campaign incited a similar trajectory? Community collaboration can tackle such scenarios before they start and speed up the response.
In the case of AI, exercises help practitioners understand the implications of the technology for existing threats and how to mitigate its effects. During this exercise, teams worked through the deepfake scenario together.
Outcomes and recommendations
AI amplifies certain threats by lowering the entry threshold for malicious actors to enact sophisticated social engineering attacks. MDM campaigns in particular can quickly erode trust in the proper functioning of markets and could lead to bank runs or undue market volatility.
Deepfake detection tools are lagging behind the rapidly evolving threat of synthetic media. To counter that threat, financial institutions should therefore consider a combination of mitigation techniques rather than relying on detection alone.
Collaboration is critical
During an MDM campaign, as in many other attack scenarios, a peer-to-peer early warning system accelerates each institution's individual response. Additionally, a coordinated sector response can calm the market and sustain consumer trust in the proper functioning of the economy.
Both approaches rely on collaboration. Such mutual support helps each firm stay competitive in its market, where business resilience is key to an individual firm’s ability to succeed. Likewise, the sector as a whole benefits when the security posture of each organisation is improved.
In fact, collaboration is a form of operational resilience. Considering the new AI-enhanced tools exploited by malicious actors and the increasing regulatory requirements around information sharing and resilience, the business case for mutual support gets ever stronger.
Against the backdrop of manifold threats and attack vectors such as AI, hyperconnectivity, and the speed of technological and geopolitical developments, firms must take a proactive and collective approach to building robust systems and processes to withstand adverse events. Sharing information and real-world exercises are crucial to that approach.
15.07.24
Moona E. Ederveen-Schneider, Executive Director, EMEA, FS-ISAC