Data can be used to help both business and society – but get the ethics right

Data seems to be discussed everywhere. It has perhaps become a truism that firms stand to benefit from better leveraging the data they hold, for example to better understand their customers, enhance their products and improve efficiency.

At the same time, firms, the authorities and the public are becoming increasingly aware that artificial intelligence (AI) and advanced algorithms also bring some new risks while accentuating others.

The financial services industry is already heavily regulated, but firms will need to make sure that their approach to the GDPR, the ePrivacy Regulation, the Financial Conduct Authority (FCA) rulebook and so on is calibrated to fit these new technologies. For example, how do requirements to be transparent with customers and treat them fairly map onto decisions made using machine learning? It may not be clear how existing regulation applies to new technologies.

Regulators are certainly getting interested too. The Information Commissioner's Office (ICO), the FCA and the Bank of England (BoE) are all planning work and guidance on data ethics and AI, and the new Centre for Data Ethics and Innovation has been set up to advise government and business on a wide range of potential challenges.

Our joint KPMG-UK Finance report on data ethics suggested a set of principles and some next steps to help firms ensure they look at risks in the round and embed a data ethics approach throughout the business.

Questions around data ethics take on a new angle when data is used not for business purposes but to meet public policy objectives and regulatory requirements.

Take, for example, the growing focus on delivering fairer outcomes and better support for vulnerable customers: the FCA published draft guidance on this in July 2019, the same month that the Domestic Abuse Bill was introduced in Parliament.

However, vulnerability is not always easily recognised.

Banks could in theory analyse the data they hold to identify when a customer's financial situation may be suffering due to a transient state of vulnerability. This data is already used in pre-emptive arrears and fraud detection, but it could also be put to wider public interest purposes. Applying AI or algorithms to these data sets might allow financial services organisations to better identify customers experiencing financial difficulty and offer them additional support, particularly if firms also have access to a wider variety of data through Open Banking. For example, sudden changes or evolving trends in spending could provide helpful insights.
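To make the idea concrete, here is a minimal, hypothetical sketch of the kind of signal such analysis might start from: flagging a customer whose latest monthly spend deviates sharply from their own recent history. The data, thresholds and function names are illustrative assumptions, not a description of any firm's actual vulnerability model.

```python
# Hypothetical sketch: flag accounts whose latest monthly spend is a
# statistical outlier versus that customer's own trailing history.
# Window size and z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_spending_change(monthly_spend: list[float],
                         window: int = 6,
                         z_threshold: float = 2.5) -> bool:
    """Return True if the latest month's spend deviates sharply
    from the trailing `window` months for this customer."""
    if len(monthly_spend) < window + 1:
        return False  # not enough history to judge
    history = monthly_spend[-(window + 1):-1]  # the preceding `window` months
    latest = monthly_spend[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # perfectly flat history; z-score is undefined
    return abs(latest - mu) / sigma > z_threshold

# Example: a sudden spike after months of stable spending
print(flag_spending_change([800, 820, 790, 810, 805, 815, 1600]))  # True
```

In practice a crude trigger like this would only ever be one input among many, and, as discussed below, would more plausibly feed a human review step than automated customer contact.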

Of course, any new use case for processing customers' personal data would need to meet regulatory requirements, which firms would need to think through carefully. There is also a wider ethical challenge: some people might welcome their bank keeping an eye on their spending patterns and making contact if they raise concerns, but for others the idea of a bank monitoring their data could feel uncomfortably 'Big Brother-esque'.

As a first control, financial services firms could consider inserting a human decision point into any unsupervised algorithm or automated process. This would give the firm added assurance until it has had time either to determine that existing processes are suitable or to implement new customer journeys and processes.
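A sketch of what that control could look like, under the assumption that an algorithmic flag is parked in a queue for a trained member of staff rather than triggering customer contact directly; the queue and function names here are hypothetical:

```python
# Hypothetical sketch of a "human decision point": algorithmic flags are
# routed to a review queue for a person to assess in context, instead of
# driving automated customer contact.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds algorithmically flagged cases awaiting human review."""
    pending: list[dict] = field(default_factory=list)

    def submit(self, case: dict) -> None:
        self.pending.append(case)

def handle_flag(customer_id: str, signal: str, queue: ReviewQueue) -> None:
    # Park the case for a trained member of staff rather than acting on it.
    queue.submit({"customer": customer_id, "signal": signal})

queue = ReviewQueue()
handle_flag("cust-123", "sudden spending change", queue)
print(len(queue.pending))  # 1 case awaiting human review
```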

Dr Leanne Allen from KPMG will be one of the speakers on the Difficult Decisions with Data panel at the Digital Innovation Summit on 15 October 2019, where she and other panellists will discuss the ethical use of customer data. To find out more and book a place, click here.