The AI and automation revolution in financial services

In May, UK Finance's Jonathan Middleton led a Confirmation-sponsored webinar on AI and automation. The panel comprised Shearin Cao, executive director at Standard Chartered; Daniella Tsar, senior data scientist at Thomson Reuters Innovation Labs; and Caroline Winch, Confirmation's commercial director.

We began by considering what the future holds for financial AI and automation. Firms are taking stock of the impact of Covid-19, and many look set to drive up investment in these technologies.

Daniella introduced statistics from Confirmation's forthcoming survey, Letting Go of Legacy IT Systems in 2021, showing that 70 per cent of firms reported the pandemic had increased their reliance on tech solutions, while 40 per cent of firms expect an increase in machine learning investment because of Covid-19.

So what are the most promising areas for machine learning and other AI techniques? From her experience at TR Labs, Daniella said that AI works best when the question is specific and depends on "a focused piece of reasoning". She also pointed to the need to ensure the right problems are solved, because AI is not always the answer.

According to Daniella, questions such as "what should my strategy be?" are too open-ended. The AI specialists at the Innovation Labs work with subject-matter experts in multidisciplinary teams. She added: "We found that when human experts do not agree, it's very likely that the AI system isn't going to give a good result."

Well-understood cognitive tasks are a more fruitful domain for AI: for example, generating concise summaries of longer pieces of content where the viewpoint is objective, as with Reuters news stories or legal texts. The Innovation Labs teams also have to consider who they are designing the system for and what the current pain points are.

The panel then discussed key business sectors for AI applications. Jonathan's own experience is that UK financial firms have focused on areas such as quality monitoring and fraud detection. Meanwhile Shearin said she was seeing "a massive wave of compliance or regulatory technology in the space, which covers anything from surveillance to financial crime to capturing fraud, as well as better and more sophisticated modelling".

I mentioned that managing risk is a high priority, as a recent Bank of England survey confirmed, adding that, above all, clients require solutions: they are less interested in the technology itself; they just want results.

I then discussed a specific auditing use case: the process of getting the annual audit signed off. This process includes a number of repetitive, time-consuming tasks that lend themselves to advanced automation, such as that offered by Confirmation's API.

Automating these manual steps reduces risk and frees staff to concentrate on the work that genuinely requires human judgment.
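To make the idea concrete, here is a minimal sketch of what automating a bank confirmation request might look like against a generic REST API. The endpoint, field names and statuses below are illustrative assumptions for this post only; they are not Confirmation's actual API, and any real integration should follow the official documentation.

    import time
    import requests

    # Hypothetical endpoint and field names, for illustration only.
    BASE_URL = "https://api.example.com/v1"
    API_KEY = "YOUR_API_KEY"  # placeholder credential

    def request_confirmation(client_id: str, account_number: str, period_end: str) -> str:
        """Submit a single audit confirmation request and return its tracking id."""
        resp = requests.post(
            f"{BASE_URL}/confirmations",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "client_id": client_id,
                "account_number": account_number,
                "period_end": period_end,
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["confirmation_id"]

    def wait_for_response(confirmation_id: str, poll_seconds: int = 60) -> dict:
        """Poll until the confirming bank has responded, then return the result."""
        while True:
            resp = requests.get(
                f"{BASE_URL}/confirmations/{confirmation_id}",
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=30,
            )
            resp.raise_for_status()
            body = resp.json()
            if body["status"] in ("responded", "rejected"):
                return body
            time.sleep(poll_seconds)

The same submit-and-poll pattern can be wrapped around every outstanding confirmation in an audit, so staff only step in when a response actually needs their judgment.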

The panel then responded to a question from the audience: "How do you ensure that your training sets don't have human bias in them, so that the AI doesn't learn the same bias?" Shearin said that it is important "to think holistically on AI ethics and bias", adding that the financial services sector is looking to "establish a common set of goals and high-level principles in this space".

Shearin recently participated in a UK Finance working group, which monitored and developed emergent thinking on AI ethics: how to enhance AI's value to wider society and encourage greater financial inclusion. The group established five high-level principles for AI, which included "an alignment to human rights and the human empowerment element".

This is the balance that the finance sector is looking for: increasing operational efficiency, while maintaining a human-centric approach towards AI and automation.

If you missed the webinar, you can view the full session here. If you would like to discuss the key takeaways in more detail, feel free to get in touch with me at caroline.winch@thomsonreuters.com.