
The importance of explainable AI and AI-based process transparency in financial services

Authors
Roger Spichiger

Partner, Responsible AI Leader in Financial Services | EY Switzerland

Supports financial services clients on their AI journey with deep finance, risk and regulatory transformation expertise. Roger loves the water, mountains and all kinds of sports.

Jean-Noël Ardouin

Partner, Consulting, Risk & Actuarial in Financial Services | EY Switzerland

Committed to delivering exceptional client service. Passionate about teaming and coaching. Husband, father and avid trail runner.

8 minute read 6 Sep 2021

As AI evolves in financial services, banks are embracing techniques to enhance explainability and process transparency.

In brief
  • For all the promise of AI, there is often a lack of transparency that goes against stakeholder and regulatory expectations
  • Various techniques enable financial institutions to understand how AI-based models work – from black-box to white-box models
  • As businesses get more complex and data volumes grow, process mining combines powerfully with AI to deliver real-time insights into business processes

Insurers and banks have long recognized that being analytics-driven is a matter of survival. Artificial intelligence (AI) is already embedded in many aspects of daily life, and the trend will only accelerate. Financial services are no exception, and AI is increasingly being used by financial institutions to reduce risks and increase efficiency in the areas of anti-money laundering (AML) and counter-terrorism financing (CTF), sanctions, market abuse, fraud and financial crime. Parallel to this development, there is broad consensus among regulators and legislators that interpretability and explainability are prerequisites for the use of AI models in the financial sector.

Growing expectations around explainable AI (XAI) are reflected in the Swiss Financial Market Supervisory Authority's (FINMA's) requirement that "When applying new technologies […], the focus should therefore be on preventing inappropriate discrimination, abusive unequal treatment, data protection and potential market shifts". The European General Data Protection Regulation (GDPR), which includes a "right to explanation", stipulates that all individuals are entitled to obtain "meaningful explanations of the logic involved". XAI can be summed up as the demand for AI to be fair, transparent and accountable. Further drivers of AI model interpretability include growing stakeholder demand for trust and acceptance, ethics, knowledge gains (discovering unknown relationships), safety, transferability and, finally, debugging and improvement of the model.


Chapter 1

Turning black-box models white

A look at methods to enhance explainability in AI

Machine learning models have the reputation of being a "black box", and many AI models use sophisticated techniques to arrive at answers in a way that is not transparent. As AI moves into all aspects of our personal and business environment, the transparency conversation is growing in importance. Decisions derived using AI models will need to be traceable, open to challenge and free of bias. In other words, organizations need to embrace XAI and ensure that model output can be explained appropriately to all stakeholders.

When discussing XAI, it’s important to understand the difference between:

  • Global explainability
  • Local explainability

Global explainability explains the overall model behavior, which helps users understand the key features driving that behavior. Techniques for global explainability include feature importance and partial dependence plots.

For instance, feature importance shows which features have the highest impact on the model (but not the direction of that impact), as illustrated in the chart above with regard to survival probability for passengers on the Titanic.
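To make this concrete, the following is a minimal sketch of both global techniques using scikit-learn. The model choice, the synthetic stand-in data and the Titanic-style feature names are illustrative assumptions only, not a description of any production setup.

```python
# Minimal sketch of global explainability: permutation feature importance
# and partial dependence plots. Uses a synthetic stand-in dataset; in practice
# X would hold real model features (e.g., passenger age, fare, class).
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table
X_arr, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X = pd.DataFrame(X_arr, columns=["age", "fare", "pclass", "sibsp", "parch"])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view 1: which features drive the model (magnitude, not direction)
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>6}: {score:.3f}")

# Global view 2: how the average prediction changes as one feature varies
PartialDependenceDisplay.from_estimator(model, X_test, features=["age", "fare"])
plt.show()
```

The feature importance ranking tells the reviewer which inputs matter most overall, while the partial dependence curves show the shape of the model's average response to each of them.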

Local explainability is about a specific prediction. Techniques include local surrogate models or Shapley values. The former explains the prediction for a single instance by approximating the complex model with an intrinsically interpretable (e.g., linear) model.
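The following is a simplified sketch of the local-surrogate idea (the approach popularized by tools such as LIME, shown here as an illustration rather than a production implementation): perturb the instance of interest, weight the perturbations by proximity, and fit a linear model to the black-box predictions. It reuses the illustrative model and data from the sketch above.

```python
# Minimal sketch of a local surrogate: explain one prediction by fitting a
# proximity-weighted linear model to the black-box model's outputs on
# perturbed copies of the instance of interest.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def local_surrogate(model, x, feature_names, n_samples=5000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise
    Z = pd.DataFrame(x + rng.normal(scale=width, size=(n_samples, len(x))),
                     columns=feature_names)
    preds = model.predict_proba(Z)[:, 1]               # black-box predictions
    # Closer perturbations get more weight
    weights = np.exp(-((Z - x) ** 2).sum(axis=1) / (2 * width ** 2))
    # Intrinsically interpretable (linear) local approximation
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return dict(zip(feature_names, surrogate.coef_))

# Usage with the model and data from the previous sketch
x0 = X_test.iloc[0].to_numpy()
print(local_surrogate(model, x0, list(X_test.columns)))
```

The resulting coefficients describe how each feature pushes this particular prediction up or down in the neighborhood of the instance.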

Shapley values explain an individual prediction as the sum of the overall average prediction and the "marginal contributions" of the individual feature values.
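In other words, prediction ≈ average prediction + per-feature contributions. The open-source shap package, used here purely as an illustration (the article does not prescribe a specific tool), computes this decomposition; a minimal sketch continuing the example above:

```python
# Minimal sketch of Shapley-value explanations using the shap package
# (illustrative choice; the article names the concept, not the tool).
import shap

explainer = shap.TreeExplainer(model)        # model from the earlier sketch
sv = explainer(X_test.iloc[:1])              # explain a single prediction

# The prediction decomposes into the average prediction (base value) plus the
# marginal contribution of each feature value. For tree ensembles these are
# typically expressed in the model's raw (log-odds) output.
print("base value:", sv.base_values[0])
for name, contrib in zip(X_test.columns, sv.values[0]):
    print(f"{name:>6}: {contrib:+.3f}")
```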

These and many other techniques are powerful tools for turning a black-box AI model light gray.


Chapter 2

From model design to validation

How the validation unit works

In this chapter, we discuss an AML model as an illustration of how techniques like those described above can be used.

A bank’s internal validation unit needs to understand how the AML model behaves. So, among other analyses, it will need to know which features drive the model’s behavior. Relevant questions in this case include:

  • Are there any highly influential features that are not plausible?
  • Are any features missing that a subject matter expert would expect to be relevant?
  • How does the model respond when the value of features varies across their range?
  • Are there any spikes in the model’s response, or is the model strictly monotonic with respect to a certain feature?

To answer the first two questions, a validator might use feature importance, while partial dependence plots can help with the latter two. These techniques enable the model to be properly challenged, which increases the chances of the model being accepted by users and regulators.
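As a hedged sketch of what such a check might look like in practice (reusing the illustrative model from Chapter 1 rather than a real AML model), a validator could compute the partial dependence of the model on one feature and test the curve programmatically for spikes or strict monotonicity:

```python
# Minimal sketch: use partial dependence values to check whether the model's
# response to a feature is strictly monotonic or shows unexpected spikes.
# Illustrative only; an AML model would be checked on its own features.
import numpy as np
from sklearn.inspection import partial_dependence

pd_result = partial_dependence(model, X_test, features=["age"], grid_resolution=50)
curve = pd_result["average"][0]      # average model response over the feature grid

diffs = np.diff(curve)
monotonic = bool(np.all(diffs >= 0) or np.all(diffs <= 0))
largest_jump = float(np.abs(diffs).max())

print(f"strictly monotonic response to 'age': {monotonic}")
print(f"largest single-step change (possible spike): {largest_jump:.4f}")
```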

Accountability is a paramount issue in AI, and we must start moving away from black-box models and toward XAI. Machine learning models are more complex than classical statistical models, but there are techniques available to significantly increase transparency and transform these models into light-gray or white boxes. These techniques support the interpretation of model behavior – useful for validation, model approval boards and regulators. At the same time, they can generate additional information that supports model users in their daily work.


Chapter 3

Spotlight on process transparency

The combined power of AI and process mining

The issue of transparency is also relevant in the day-to-day business of financial institutions, and in the digital transformation process. Before key business processes can be analyzed, transformed or automated, they must first be understood – easier said than done in today’s complex setups. Traditionally, a company’s business processes are analyzed and modeled based on interviews and workshops with key stakeholders (management, business, operations, IT etc.). While these methods provide interesting insights, they take a long time, cost a lot of money, and rarely provide the full picture of what is happening right now. There is also a strong bias in these interviews towards the official process (usually captured in Business Process Modeling (BPM) documentation) and a neglect of existing reality, as the interviewees want to be seen as compliant and, in many cases, designed the processes in the first place.

As most financial institutions already possess huge volumes of untapped data on their processes, it’s time to adopt a data-driven approach to understanding, analyzing, transforming and managing processes. This can be achieved by combining AI with process mining. Process mining starts with an analysis of operational data that originates from event log files. Much like the records of an airplane black box, these log files capture all process steps executed on a computer system, usually an ERP system. This data can then be used to understand process flows during process discovery. A seamless view of a process from beginning to end can be generated, allowing the financial institution to monitor the process as it operates. This way, bottlenecks, inefficiencies, errors, non-compliance and even fraud can be identified. By quickly extracting and reading these event logs, process mining software builds an instant visual model displaying the flows of events in what is known as a process graph.
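The article does not tie itself to a particular tool, so the following is a minimal sketch of the core idea behind process discovery: building a directly-follows graph (the simplest form of process graph) from an event log with case ID, activity and timestamp columns. The toy log and column names are hypothetical.

```python
# Minimal sketch of process discovery: build a directly-follows graph
# (which activity follows which, and how often) from an event log.
# Column names and the toy log are hypothetical; real logs come from
# the ERP or workflow system.
from collections import Counter
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B", "B", "B"],
    "activity": ["Receive invoice", "Approve", "Pay",
                 "Receive invoice", "Clarify", "Approve", "Pay"],
    "timestamp": pd.to_datetime([
        "2021-01-04 09:00", "2021-01-05 10:00", "2021-01-06 12:00",
        "2021-01-04 11:00", "2021-01-07 09:30", "2021-01-08 14:00",
        "2021-01-12 16:00"]),
})

edges = Counter()
for _, case in log.sort_values("timestamp").groupby("case_id"):
    activities = case["activity"].tolist()
    # Count each directly-follows transition within the case
    edges.update(zip(activities, activities[1:]))

for (src, dst), freq in edges.most_common():
    print(f"{src} -> {dst}: {freq}")
```

The resulting edge counts are exactly what process mining software renders visually as a process graph, enriched with timing and resource information.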

AI thrives on big data and can uncover patterns hidden to the human eye, providing transparent, timely, and detailed information about the performance of processes. 

In a next step, process discovery can be further enhanced. Rules-based models are heavily dependent on knowledge gained from past experience and have limited capability to identify “hidden” patterns that AI-based models are capable of uncovering. AI algorithms applied to the discovered processes are a powerful tool to identify patterns and outliers related to potential inefficiencies in speed and workload, errors, compliance issues and risks. Prediction algorithms using AI models applied to the as-is discovered process can then be employed to better manage resources and reduce risks.
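As one possible illustration (the algorithm and the case-level features are assumptions, not a prescription from the article), anomalous cases in a discovered process can be flagged by deriving per-case features such as throughput time and number of steps and running an outlier detector over them:

```python
# Minimal sketch: flag anomalous cases in a discovered process using
# case-level features (number of steps, throughput time) and an
# IsolationForest. Continues the event-log sketch above; the toy log has
# only two cases, whereas real logs contain thousands.
from sklearn.ensemble import IsolationForest

cases = log.groupby("case_id").agg(
    n_steps=("activity", "size"),
    throughput_h=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 3600),
)

# The contamination rate is a guess for illustration purposes
detector = IsolationForest(contamination=0.1, random_state=0)
cases["outlier"] = detector.fit_predict(cases[["n_steps", "throughput_h"]]) == -1
print(cases)
```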

  • Use case – invoicing process

    A Swiss bank was looking for a holistic understanding of its global invoicing process. A process mining analysis was run on the purchase-to-pay (P2P) process, which had been rolled out globally with each location using its own IT system to capture invoices.

     

    One major challenge of the analysis was to capture the invoicing process across countries due to the heterogeneity of IT systems. System logs collected from the invoicing systems across the countries were analyzed and harmonized. This made it possible to compare the invoicing process across countries and provided tangible input for standardizing and streamlining best business practices within the organization.

    Gaining an understanding of these processes allows ongoing assessment of the effectiveness of controls and efficiency of resource allocation. The bank is now able to identify bottlenecks or performance issues and detect deviations from defined process flows carried out across the organization. Going forward, the analysis could also be extended to incorporate predictions for invoice ageing and detection of outliers to further increase the efficiency and effectiveness of the process.

Summary

Even though technology adoption in decision making is happening at a fast pace within the financial services industry, organizations should be aware of the increasing transparency and explainability requirements of existing as well as upcoming regulations. It is crucial that organizations understand the importance and the consequences of explainable AI when considering the implementation of AI-based methods and models.

Many thanks to Karl Ruloff and Madhumita Jha for their valuable contribution to this article. 

 
