In Switzerland, there is currently no legislation dedicated to artificial intelligence (AI) in general or to generative AI (GenAI) specifically. However, this does not mean that Switzerland has no legal framework governing this area. Rather, AI is regulated by a combination of existing laws and regulations addressing the subject (e.g., data protection law [1], civil law, intellectual property law [2]).
In addition, Switzerland has been closely monitoring international and European developments such as the EU AI Act [3] to identify regulatory trends that are likely to have a long-term impact, notably on the Swiss financial sector. In this context, the Swiss Federal Council has mandated the responsible federal department to examine suitable regulatory approaches to AI by the end of 2024.
In its Risk Monitor 2023 report [4], published in November 2023, the Swiss Financial Market Supervisory Authority (FINMA) outlined several challenges and risks arising from AI, while underlining the technology's increasing importance in many areas of life, which makes it a long-term trend. FINMA takes a technology-neutral and principle-based approach, which ensures flexibility and adaptability in the regulatory framework. Against this background, FINMA expects the financial industry to manage the risks of AI in the areas outlined below:
1. Governance and responsibility
As AI becomes more integral to decision-making – with some AI systems even acting autonomously – understanding and overseeing these decisions can be a challenge. The resulting lack of clarity can lead to mistakes being overlooked and accountability being obscured, particularly in complex organizational settings with limited AI expertise. For instance, ChatGPT can produce convincing yet potentially inaccurate responses, since it generates the statistically most probable answer rather than a verified one, making it hard for users to assess the reliability of the information. To mitigate these risks, FINMA asserts that it is vital to define and enforce explicit roles, responsibilities and AI risk management frameworks. Decision-making accountability should remain with humans and not be transferred to AI or third parties. In addition, all stakeholders within financial institutions must possess a solid understanding of AI.
2. Robustness and reliability
AI’s learning process relies on extensive data, which can present challenges when the data is substandard or not fully representative. This can lead to AI systems self-optimizing in undesirable ways, a problem referred to as “drift.” Furthermore, the surge in GenAI use cases, coupled with increased outsourcing and reliance on cloud services, escalates IT security vulnerabilities. A common issue is “hallucination,” where machine learning models generate false or distorted information, often as a result of overfitting to the training data or encountering scenarios they were not trained on. This can lead to misleading outcomes or erroneous interpretations, particularly in fields like natural language processing or image recognition.
FINMA underlines that institutions must critically evaluate the data, the models and their outcomes throughout the AI development, training and operational phases. Furthermore, reliance on large data sets for AI applications increases the risk of data breaches and misuse. Unauthorized access to sensitive information can lead to privacy violations and exploitation.
Ensuring robust cybersecurity measures and ethical data handling practices is essential to protect against such vulnerabilities. Additional key factors are data quality and AI governance, as the accuracy and reliability of the output heavily depend on the input data’s integrity. Establishing strong data governance protocols ensures that data is not only high-quality and relevant but also managed in a way that is compliant with regulations and ethical standards.
3. Transparency and explainability
The intricate nature of AI applications, with their multitude of parameters and complex algorithms, often precludes identifying how specific elements affect the final outcomes. This obscurity can lead to AI-driven decisions that are difficult to validate or explain, posing challenges for reviews by the entity using AI, as well as by auditors or regulatory agencies.
Moreover, when customers are not informed about the deployment of AI, they cannot properly evaluate the potential risks. In the view of FINMA, it is essential for financial institutions to ensure that the workings and usage of AI are clear and comprehensible, aligning with the expectations and understanding of the intended audience, the significance of the application and its integration into the workflow.
4. Non-discrimination
GenAI technology frequently processes personal information to personalize risk assessments and services. Insufficient data for specific demographics can produce biased or erroneous analyses, which may inadvertently result in discriminatory practices. More specifically, algorithmic discrimination occurs when AI systems inadvertently perpetuate biases present in their training data, leading to unfair treatment of certain groups. This is a particular concern in the financial services sector, where AI-driven decisions might disadvantage individuals based on race, gender or other characteristics. Addressing it requires proactive measures in data-set curation and algorithm design to ensure fair outcomes. Such discrimination carries legal implications as well as the risk of damaging the company’s image and reputation. It is therefore crucial for companies to actively ensure that their use of AI does not lead to unwarranted bias.