Details on the AI Safety Summit emerge

How relevant will it be to financial services?

On 25 September the Department for Science, Innovation and Technology (DSIT) published an “introduction” to its AI Safety Summit.

Limited to around 100 participants, the Summit will take place on 1 and 2 November 2023 at Bletchley Park in Buckinghamshire. It will focus on managing the risks posed by the most recent advances in artificial intelligence.

Focus of the Summit

The Summit will focus on two specific types of AI system: 1) ‘Frontier AI’ and 2) ‘Narrow AI’ with “dangerous capabilities”.

‘Frontier AI’ refers to “highly capable general-purpose AI models that can perform a wide variety of tasks, and match or exceed capabilities present in today’s most advanced models”. The latest Large Language Models (LLMs) – such as the one powering ChatGPT – are one example. The government states that such AI technologies can be unpredictable and could be used by many actors for many purposes.

‘Narrow AI’ refers to AI tools built to perform a much more specific task, rather than being ‘general purpose’. Only Narrow AI with “dangerous capabilities” – for example, AI tools that could be used for bioengineering – will be covered by the Summit.

These two types of AI system are the Summit’s focus because they are seen as posing the highest misuse risk and the greatest potential for unpredictability and loss of control. Other types of AI system are acknowledged to bring risks – such as misinformation, bias, or workforce impacts – but the intention is for these to be left to existing domestic and international policy processes, including the UK’s AI Whitepaper process. (See the UK Finance response here.)

Goals of the Summit

According to the latest information, the Summit’s objectives are:

  • “a shared understanding of the risks posed by frontier AI and the need for action
  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  • appropriate measures which individual organisations should take to increase frontier AI safety
  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  • showcase how ensuring the safe development of AI will enable AI to be used for good globally”

What does this mean for the financial sector?

Although we understand that financial services is an area of government interest in relation to AI, the Summit’s focus seems to lie elsewhere. The classic AI risks – bias, explainability, accountability, etc. – apply to financial sector use cases, but they probably do not amount to ‘dangerous capabilities’.

In terms of ‘Frontier AI’, the main developers will likely be in the tech sector, given the scale of data required. That said, where financial services firms are users of generative AI or similar technologies, they will need to ensure they have controls in place that account for any novel or amplified risks.

As such, the Summit will certainly be an event for the sector to follow, but most of its outputs may not turn out to be directly applicable; the more relevant next steps are likely to come out of the separate AI Whitepaper process.

Nonetheless, there will be a series of wider engagement workshops that could be of more direct interest to financial services firms. These will run between 11 and 25 October.

You might also be interested in our Associate Member blog on AI use in chatbots – an important use case, but probably not the focus of the AI Safety Summit.