
Risks and benefits of generative AI in the financial sector


FINMA highlights AI risks and expects financial industry compliance across governance, reliability, transparency and non-discrimination.


In brief

  • Switzerland regulates AI through existing laws and monitors international trends, with a focus on the financial sector and a review due by the end of 2024.
  • FINMA’s 2023 risk report highlights AI risks in governance, reliability, transparency and non-discrimination.
  • AI can revolutionize financial services by enhancing data analysis, decision-making and customer service, but requires robust governance to manage risks.

In Switzerland, there is currently no legislation dedicated to artificial intelligence (AI) in general or generative AI (GenAI) specifically. However, this does not mean that Switzerland has no legal framework to govern this area. Indeed, AI is regulated by a combination of existing laws and regulations addressing the subject (e.g., data protection law1, civil law, intellectual property law2).

In addition, Switzerland has been closely monitoring international and European developments such as the EU AI Act3 to identify regulatory trends that are likely to have a long-term impact, notably on the Swiss financial sector. In this context, the Swiss Federal Council has mandated the responsible federal department to examine suitable regulatory approaches to AI by the end of 2024.

In its 2023 Risk Monitor report4 published in November 2023, the Swiss Financial Market Supervisory Authority (FINMA) outlined several challenges and risks arising from AI while underlining the increasing importance of this technology in many areas of life, which makes it a long-term trend. FINMA takes a technology-neutral and principle-based approach. This ensures flexibility and adaptability in the regulatory framework. In this context, FINMA expects the financial industry to manage the risks of AI in the areas outlined below:

1.  Governance and responsibility

As AI becomes more integral to decision-making – with some AI systems even acting autonomously – understanding and overseeing these decisions can be a challenge. The potential lack of clarity can lead to mistakes being overlooked and accountability being obscured, particularly in complex organizational settings with limited AI expertise. For instance, ChatGPT can produce convincing yet potentially inaccurate responses (as it generates the statistically most likely continuation rather than verified facts), making it hard for users to verify the information. To mitigate these risks, FINMA asserts that it is vital to define and enforce explicit roles, responsibilities and AI risk management frameworks. Decision-making accountability should remain with humans and not be transferred to AI or third parties. In addition, all stakeholders within financial institutions must possess a solid understanding of AI.

2.  Robustness and reliability

AI’s learning process relies on extensive data, which can present challenges when the data is substandard or not fully representative. This can lead to AI systems self-optimizing in undesirable ways, a problem referred to as “drift.” Furthermore, the surge in GenAI use cases, coupled with increased outsourcing and cloud service reliance, escalates IT security vulnerabilities. A common issue is “hallucinations,” where machine learning models generate false or distorted information, often as a result of overfitting to training data or encountering scenarios they were not trained on. This can lead to misleading outcomes or erroneous interpretations, particularly in fields like natural language processing or image recognition.

FINMA underlines that institutions must maintain a critical evaluation of the data and models, and their outcome throughout AI development, training and operational phases. Furthermore, reliance on large data sets for AI applications increases the risk of data breaches and misuse. Unauthorized access to sensitive information can lead to privacy violations and exploitation.

Ensuring robust cybersecurity measures and ethical data handling practices is essential to protect against such vulnerabilities. Additional key factors are data quality and AI governance, as the accuracy and reliability of the output heavily depend on the input data’s integrity. Establishing strong data governance protocols ensures that data is not only high-quality and relevant but also managed in a way that is compliant with regulations and ethical standards.
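The drift described above can be monitored with simple statistical checks on model inputs. As an illustration, the sketch below computes a population stability index (PSI), a metric commonly used in credit-risk model monitoring, comparing a feature's distribution in training data against live inputs. This is a minimal sketch in plain Python; the `psi` helper, bin count and the rule-of-thumb thresholds in the docstring are illustrative assumptions, not FINMA requirements.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample ('expected',
    e.g. training data) and a live sample ('actual') of one numeric feature.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0          # guard against a constant feature

    def bucket(x):
        if x <= lo:
            return 0
        return min(int((x - lo) / width), bins - 1)

    exp_counts = Counter(bucket(x) for x in expected)
    act_counts = Counter(bucket(x) for x in actual)
    eps = 1e-6                               # avoid log(0) for empty buckets
    score = 0.0
    for b in range(bins):
        pe = exp_counts.get(b, 0) / len(expected) + eps
        pa = act_counts.get(b, 0) / len(actual) + eps
        score += (pa - pe) * math.log(pa / pe)
    return score

# Identical distributions: no drift signal.
baseline = [i / 100 for i in range(100)]
print(round(psi(baseline, baseline), 4))                          # 0.0
# Live inputs shifted upward after deployment: clear drift signal.
print(psi(baseline, [0.5 + i / 200 for i in range(100)]) > 0.25)  # True
```

A check like this only flags distribution shift in the inputs; it does not by itself establish that the model's outputs have degraded, which is why FINMA's call for ongoing critical evaluation of data, models and outcomes goes further.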

3.  Transparency and explainability

The intricate nature of AI applications, with their multitude of parameters and complex algorithms, often makes it difficult to identify how specific inputs affect the final outcomes. This opacity can lead to AI-driven decisions that are difficult to validate or explain, posing challenges for reviews by the entity using AI, as well as by auditors or regulatory agencies.

Moreover, when customers are not informed about the deployment of AI, they cannot properly evaluate the potential risks. In the view of FINMA, it is essential for financial institutions to ensure that the workings and usage of AI are clear and comprehensible, aligning with the expectations and understanding of the intended audience, the significance of the application and its integration into the workflow.

4.  Non-discrimination

GenAI technology frequently processes personal information to personalize risk assessments and services. Insufficient data for specific demographics can lead to biased or erroneous analyses, which may inadvertently result in discriminatory practices. More specifically, algorithmic discrimination occurs when AI systems perpetuate biases present in their training data, leading to unfair treatment of certain groups. This is particularly relevant in the financial services sector, where AI-driven decisions might disadvantage individuals based on race, gender or other characteristics. Addressing this requires proactive measures in data-set curation and algorithm design to ensure fair outcomes. Discrimination carries both legal implications and the risk of damaging the company’s image and reputation. Therefore, it is crucial for companies to actively ensure that their AI use does not lead to unwarranted bias.
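A first-pass check for the discrimination risk described above is to compare outcome rates across demographic groups. The sketch below computes a demographic parity gap, the largest difference in approval rates between groups, on hypothetical lending decisions. The function name, the data and the interpretation are illustrative assumptions, and demographic parity is only one of several fairness criteria an institution might apply.

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rates across demographic groups.
    'decisions' holds 1 (approved) / 0 (declined); 'groups' holds the group
    label of each applicant. A gap near 0 suggests similar treatment across
    groups; a large gap flags the model's output for human review."""
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: group A approved 75%, group B only 25%.
decisions = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A large gap is a signal to investigate, not proof of unlawful discrimination: legitimate risk factors can differ between groups, which is why such metrics belong inside a broader governance and review process.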

Embrace AI and elevate your business while managing the risks

Particularly in the financial sector, AI stands as a transformative force, capable of revolutionizing how companies approach data analysis, decision-making and customer service. By leveraging AI, financial institutions can unlock powerful insights from vast datasets, streamline operations and offer more competitive, personalized products. GenAI capabilities, including predictive power, enable better risk management and fraud detection, providing a significant edge in a highly regulated industry. Moreover, automating routine tasks frees up valuable resources, enabling firms to focus on strategic growth and innovation. AI may also raise the performance of less experienced staff to par or above, helping to mitigate the ongoing “war for talent” in many areas.


Despite the complexities and challenges associated with AI, the potential for enhanced efficiency, improved customer satisfaction and a stronger bottom line makes it an indispensable tool for financial institutions aiming to thrive in today’s digital economy. A crucial factor, however, will be how businesses manage associated risks within an appropriate AI governance framework that supports the idea of “trusted AI.”


Summary

While Switzerland lacks specific AI legislation, it effectively regulates AI through existing laws and closely monitors international trends. FINMA's 2023 risk report underscores the importance of managing AI-related risks in governance, reliability, transparency and non-discrimination. The financial sector stands to benefit significantly from AI, but must implement robust governance frameworks to mitigate risks and ensure ethical practices. By doing so, financial institutions can harness AI’s transformative potential while maintaining trust and compliance.

Acknowledgement

We thank Marwa Eid for her valuable contribution to this article.

