
Leading the AI revolution: tangible opportunities in risk management

We provide research-based insights into AI applications in financial risk management and offer a regulatory outlook.


In brief

  • Fast-paced developments in AI are already leading to tangible applications in the Swiss banking and asset management space. 
  • AI-driven financial risk management emerges as an opportunity, with first movers expected to elevate the quality and efficiency of their risk models. 
  • FINMA also shows a proactive approach towards AI, with initial supervisory expectations to be discussed on an application-specific basis later in 2023.

Not so long ago, the widespread diffusion of artificial intelligence (AI) in finance appeared to be more of a futuristic projection than an impending reality. Yet rapid advancements in the technology have now made AI not only tangible, but increasingly prevalent in the banking sector. 

In 2022, the Swiss Financial Market Supervisory Authority (FINMA) established a specialist unit to monitor the increasing use of AI in the Swiss financial market. As part of its focus, a survey was carried out to map the implementation of AI within the banking and asset management sectors, revealing that around 50% of the participants were either using AI or specifically planning to deploy it.

Figure: Fields of application for AI in Swiss banking and asset management

While many of the surveyed institutions employed in-house solutions, this was far from an exclusive approach, with several also turning to third-party providers. Interestingly, we note that FINMA itself is exploring AI applications for its supervisory roles, such that: “findings can now be made that were previously impossible in terms of precision, scope, reactivity or anticipation”.1

Figure: Forrester’s Global AI Software Forecast, 2022 – AI software spend forecast to roughly double from $33 billion in 2021 to $64 billion in 2025

While spending on AI software is set to grow rapidly – 18% per year according to Forrester’s 2022 Global AI Software Forecast – banks will also need to invest in understanding the risks and opportunities of this evolving technology. In this article, we explore AI’s potential for risk measurement, its applications in risk analysis and regulatory developments against this shifting technological landscape.


Chapter 1

Unlocking AI’s potential for risk measurement: GAN-based VaR & ES

The use of generative adversarial networks can lead to a more accurate estimation of tail risk.

EY notes the ample growth potential for the deployment of AI in financial risk management, a field where the complexity of models inevitably creates opportunities to unleash this technology’s computational power. Indeed, while its current use in the calculation of value at risk (VaR) and expected shortfall (ES) is limited, we see strong indications suggesting that banks could improve risk measurement by harnessing AI’s power to handle large data sets and identify complex patterns. This is especially true if we consider the increasing importance of adequate modelling of systemic dependencies between risk factors, an example being the Fundamental Review of the Trading Book (FRTB) text’s prescription of a stressed ES accounting for a “joint assessment across all relevant risk factors, which will capture stressed correlation measures”3.
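Before turning to AI-based approaches, it helps to fix the baseline they are compared against. The sketch below shows a standard historical-simulation estimate of VaR and ES at the 97.5% confidence level used by FRTB for ES; the toy return series and the heavy-tailed distribution generating it are illustrative assumptions, not real market data.

```python
import numpy as np

def hist_var_es(returns, alpha=0.975):
    """Historical-simulation VaR and ES at confidence level alpha.

    VaR is the loss threshold exceeded with probability 1 - alpha;
    ES is the average loss beyond that threshold.
    """
    losses = -np.asarray(returns, dtype=float)  # losses are negated returns
    var = np.quantile(losses, alpha)            # e.g. 97.5% loss quantile
    es = losses[losses >= var].mean()           # mean loss in the tail
    return var, es

# Toy example: 500 daily returns drawn from a heavy-tailed distribution
# (a Student-t stands in for observed P&L history -- an assumption).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=500) * 0.01
var, es = hist_var_es(returns, alpha=0.975)
```

Note that this method assumes future returns resemble those in the lookback window, which is precisely the limitation the GAN-based approach discussed below aims to relax.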

An example of AI’s potential is found in the much-researched use of generative adversarial networks (GANs) to simulate financial time series (e.g., Wiese et al., 20204), which can then be converted to returns for estimating VaR and ES. GANs belong to the broader category of generative AI, a term that encompasses various techniques used to generate new content using algorithms. While reinforcement learning (RL) focuses on learning optimal decision-making policies in interactive environments with feedback, GANs specialize in generating realistic synthetic data. Unlike traditional VaR models that require simplifying assumptions, GANs enable the simulation of hypothetical, yet plausible, scenarios that are based on complex interdependencies learned from the training data. Indeed, research5 has demonstrated that a GAN-based VaR/ES model can provide “accurate tail risk estimates, and is able to capture certain stylized features observed in financial time series, such as heavy tails, and complex temporal and cross-asset dependence patterns” (Cont et al., 2023). This eliminates the need to either assume a distribution (e.g., as in Monte Carlo simulation) or to assume that future returns will be identical to those observed in the lookback window (e.g., as in historical simulation). Indeed, while the synthetic data is statistically similar to the training data, it maintains an element of variability due to the GAN generator starting with a random seed (noise). GANs may therefore provide estimates of tail risk that traditional methods would struggle to achieve, especially when faced with data limitations.
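The pipeline described above can be sketched in a few lines: draw latent noise, map it through a trained generator to synthetic return scenarios, then read VaR and ES off the empirical distribution. The `sample_generator` function below is a placeholder for a trained GAN generator G(z) – training a real GAN is out of scope here, so a fixed heavy-tailed map stands in for the learned network (an assumption, clearly not a fitted model).

```python
import numpy as np

def sample_generator(latent_noise):
    """Placeholder for a trained GAN generator G(z): maps latent noise to
    synthetic one-day return scenarios. A fixed linear map plus a
    heavy-tailed shock stands in for the learned network (assumption)."""
    rng = np.random.default_rng(42)
    return 0.01 * latent_noise + 0.002 * rng.standard_t(df=3, size=latent_noise.shape)

# Draw many latent seeds, generate synthetic returns, then estimate VaR/ES
# empirically from the simulated distribution: no parametric distribution
# is assumed, and scenarios are not limited to the historical window.
rng = np.random.default_rng(7)
z = rng.standard_normal(100_000)
synthetic_returns = sample_generator(z)

losses = -synthetic_returns
var_975 = np.quantile(losses, 0.975)
es_975 = losses[losses >= var_975].mean()
```

Because the generator starts from fresh random noise, each run yields new scenarios that are statistically similar to, but not copies of, the training data, which is the property the text highlights.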


Chapter 2

Unlocking AI’s potential for risk analysis: dynamic stress tests

AI-driven stress testing models can yield more dynamic, realistic simulations of stress scenarios.

Another application of AI in risk management is found in stress testing, a critical tool used by banks to evaluate their potential vulnerability to adverse events (often supplementing VaR and ES). Stress testing involves running simulations to evaluate how adverse scenarios would affect the bank’s balance sheet, capital adequacy, liquidity and overall financial health. Traditional methodologies typically involve a limited set of predetermined scenarios, relying heavily on human judgment both for scenario calibration and analysis of results. 
 

Research7 has shown that AI can significantly transform stress testing by modelling the intercorrelation between PnL drivers more effectively, enhancing the dynamism and reliability of simulated scenarios. Stress tests are known to be constrained by computational limitations, and the techniques currently employed often fail to adequately model non-linear relationships between risk factors. Stress models are also often static, meaning, for example, that they inadequately capture the propagation of stress shocks between risk drivers, while also ignoring the effects of sequential managerial responses as a stress scenario unfolds.
 

AI techniques such as GAN promise a more expansive and plausible spectrum of scenarios, enabling the identification of complex dependencies that may otherwise be overlooked. Further, machine learning models can improve the accuracy with which key risk parameters (such as default probabilities) are estimated, and are also capable of modelling the path-dependent effects of actions put in place by other economic participants (e.g., regulators, industry competitors, etc). AI can thus be leveraged to improve the quality of stress modelling, while also streamlining the often-laborious processes needed to recalibrate the scenario narratives.
 

In addition, we note that regulatory expectations increasingly tend towards a higher granularity of stress tests, exemplified by the FRTB requirement of “a rigorous and comprehensive stress testing programme both at the trading desk level and at the bank-wide level.”3 This is in line with FRTB’s broader change in paradigm, whereby supervisory approval of the internal models approach (IMA) will be granted at the level of banks’ individual trading desks. If it continues, this tendency may increase the computational burden on banks, paving the way for AI deployment. For example, banks could consider using RL algorithms to dynamically optimize stress scenarios, in order to expose the specific vulnerabilities of any given trading desk. By tailoring stress shocks to the desk’s evolving risk profile, they could monitor both systematic and residual risks more effectively, thus preventing spillovers to other areas of the business.
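To make the idea of scenario optimization concrete: a full RL setup (state, policy, reward) is beyond a short sketch, so the code below uses a simple greedy random search as a stand-in for an RL policy that adapts shocks to a desk’s exposures. The desk PnL function, the sensitivity numbers, and the plausibility budget are all hypothetical and for illustration only.

```python
import numpy as np

def desk_loss(shock, sensitivities):
    """Loss of a desk under a factor shock: linear term plus a convexity
    term (illustrative -- real desk PnL would come from full repricing)."""
    linear = shock @ sensitivities
    convexity = -0.5 * 2e6 * shock[0] ** 2   # e.g. a short-gamma position
    return -(linear + convexity)

def adversarial_scenario(sensitivities, budget=0.05, steps=200, lr=0.01, seed=0):
    """Greedy random search for the worst shock within a plausibility
    budget (||shock|| <= budget) -- a stand-in for an RL policy that
    tailors scenarios to a desk's evolving risk profile."""
    rng = np.random.default_rng(seed)
    shock = np.zeros_like(sensitivities, dtype=float)
    best = desk_loss(shock, sensitivities)
    for _ in range(steps):
        candidate = shock + lr * rng.standard_normal(shock.shape)
        norm = np.linalg.norm(candidate)
        if norm > budget:                    # project back onto the budget
            candidate = candidate * budget / norm
        loss = desk_loss(candidate, sensitivities)
        if loss > best:                      # keep moves that worsen PnL
            best, shock = loss, candidate
    return shock, best

sens = np.array([1.5e6, -8.0e5])
shock, worst_loss = adversarial_scenario(sens)
```

An RL agent would differ in that it learns a policy over time and can account for sequential responses (e.g. management actions between shocks), but the search-within-a-plausibility-constraint structure is the same.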


Chapter 3

A preliminary regulatory perspective

As regulators strive to keep pace with AI developments, we provide a concise preview of what can be expected.

Though regulators have yet to specifically address the treatment of risk applications outlined in Chapters 1 and 2, as early as 2020 FINMA had highlighted some of the risks that AI entails.8

As part of its 2022 annual report1, FINMA announced that it has formulated initial supervisory expectations concerning AI, with the aim to discuss them on an application-specific basis in 2023. Furthermore, it highlighted the key risk areas currently being targeted:

Figure: Four key focus areas for risks associated with AI

These come as no surprise, since AI models are often referred to as “black boxes”. For example, it may be difficult to interpret and explain which factors influenced a VaR estimate obtained using GANs. To this end, it is worth noting that version 1.0 of the “Artificial Intelligence Risk Management Framework”9, published by the US National Institute of Standards and Technology (NIST) in January 2023, provides distinct definitions for “transparency”, “explainability” and “interpretability” (see also “Four Principles of Explainable Artificial Intelligence” 10, also published by NIST).

Importantly, for an all-round perspective of the AI regulatory landscape, the effects of two upcoming regulations should be closely monitored: the European Union Artificial Intelligence Act (AI Act) and the revised Swiss Federal Act on Data Protection (revFADP). The former is currently being discussed in trilogue negotiations by EU co-legislators and, once adopted, a 24-month transition period will follow allowing organizations to implement the respective measures and obligations. The AI Act will likely have an extraterritorial impact on Swiss organizations that provide or use AI systems, even if they do not have a legal presence in the EU. Irrespective of the level of risk associated with specific types of AI software, which could be subject to legal interpretation, banks planning to leverage AI for financial risk management should ensure that their models are transparent and have adequate governance in place. In addition, banks must make sure that AI training is performed in compliance with the data protection requirements set out in the revFADP (coming into force on 1 September 2023) and, where relevant, the EU General Data Protection Regulation (GDPR).


Summary

Leveraging AI unlocks new opportunities for better risk management and greater operational efficiency. While banks must navigate challenges related to reliability, computational complexity and transparency, striking a balance between traditional methods and AI-based approaches will be key to retain competitiveness as the AI revolution unfolds.

Even though the initial investment may be substantial, the implied quality and efficiency gains are such that EY considers this a classic scenario in which first movers will reap the highest rewards. 

Acknowledgements

We kindly thank Vadym Sheiko and Giovanni Facchini for their valuable contribution to this article.


Related articles

Hot topic: Legal, Regulatory & Compliance Considerations about ChatGPT

The ChatGPT hype is huge given its impressive capabilities; however, its widespread use has sparked debates concerning legal, regulatory and compliance aspects.

The EU AI Act: What it means for your business

The EU regulation for artificial intelligence is coming. What does it mean for you and your business in Switzerland?