Human in the loop

Partnership, not replacement.

The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.

Artificial Intelligence (AI) increasingly permeates every aspect of business operations. As a result, the surveillance industry stands at a critical crossroads: it must establish whether AI is a useful tool, a compliance risk, or perhaps both. Attitudes towards AI appear to have shifted far beyond initial approaches that sought to ban it from financial services. For some firms, AI and Large Language Models (LLMs) now form the foundation of compliance processes. A recent report by technology vendor Global Relay found that 31 per cent of surveillance teams are now using AI in surveillance. Global Relay’s State of AI in Surveillance Report 2025 offers crucial insights into how financial services view this technological revolution.

Current industry sentiment

The report reveals cautious yet evolving attitudes: 31 per cent of teams have no immediate plans to implement AI, while a further 38 per cent are monitoring developments. This suggests that most compliance professionals recognise AI’s transformative potential while maintaining a healthy scepticism.

A Financial Conduct Authority (FCA) survey at the end of 2024 revealed that 75 per cent of firms are using AI. Despite this, only 34 per cent feel confident in their understanding of how AI works. 

Data overload to meaningful insights

The most transformative use of AI in workflows comes from LLMs, which often deliver significant efficiency gains by virtue of their accuracy. LLMs produce fewer false positives because they can read through business communications, analyse the text, and detect genuine risk far faster than the human eye. As with lexicon-based surveillance, flagged items still require human review, but with LLMs the review queue is significantly smaller and contains real risk. LLMs and generative AI models represent a fundamental shift, not by eliminating human intervention, but by making it more impactful: humans are needed to review and validate AI output to ensure it does not cause operational problems.

MIT Sloan’s 2023 study found that AI can improve worker performance by nearly 40 per cent when paired with humans, compared with workers who do not use it. Rather than replacing compliance officers, AI helps them focus on what truly matters: attention is diverted away from sifting through countless false positives and towards understanding real risks. This elevates their role from data processors to strategic analysts who run complex investigations.

The trust factor

Despite its potential, complete trust in AI remains a challenge. Global Relay’s report reveals that professionals rate their trust in AI at 4.92 on a scale of 1 (minimal trust) to 10 (complete trust), showing a continued reliance on human oversight.

The FCA’s 2024 survey showed that 55 per cent of AI use cases involved some degree of automated decision-making, with 24 per cent being semi-autonomous: they can make decisions on their own, but are designed to involve human oversight for critical or ambiguous decisions. Unsurprisingly, only 2 per cent of AI use cases used fully autonomous decision-making.

Elise Kindig, Deputy Chief Compliance Officer at FCM Compliance, echoes this caution: “I know the adoption of AI is new in the industry, so the biggest barrier for us as a compliance team is the unknown. How can we trust AI and what is it reviewing? Is it able to review communications and properly flag them?” Although AI may fast-track surveillance, it is still approached with caution, and firms still want human eyes over systematically identified risk.

AI transformation

As organisations navigate the adoption of AI into surveillance, success will come not from diminishing human involvement but from redefining it: shifting from manual data processing to strategic analysis, interpretation, and decision-making. Gartner predicts that by 2026, enterprises that have built adaptive AI systems will outperform their peers by 25 per cent in the number of AI models they operationalise and the time it takes to do so. This research may signal an increase in AI adoption, and with it the need for human-in-the-loop AI systems. The future is not technology alone, but humans in the loop, working with that technology. Perhaps this partnership will ease fears around explainability and ensure transparency.

Read Global Relay’s latest report for more insights into how AI is transforming workflows in the compliant communications field.
