How EY can help
The EU AI Act will shortly be adopted and will have far-reaching extraterritorial impact. Because ensuring compliance is often more costly and complex once AI systems are already in operation than during development, we recommend that firms start preparing now with a professional AI Act readiness assessment and early adaptation.
Legal boundaries of AI
AI can only operate within legal constraints. The EU AI Act, a significant regulatory step by the European Parliament and the Council, was provisionally agreed on 9 December 2023 and unanimously approved by the EU member states on 2 February 2024. It marks a pivotal moment for AI regulation in Europe and will potentially influence other jurisdictions.
In Switzerland, the Federal Council announced on 22 November 2023 that it had commissioned the Federal Department of the Environment, Transport, Energy and Communications (DETEC), together with all relevant federal offices, to identify possible approaches to the regulation of AI by the end of 2024. This came after the Federal Administration had already adopted guidelines for the handling of AI in November 2020.
The draft EU AI regulation was created amidst AI’s rapid development and its increasing integration into various aspects of our lives. It aims to leverage the opportunities of AI, while at the same time minimizing risks and abuse. The European Parliament deems it essential for a smoothly functioning internal market. Key points of the regulation include:
Definitions: The definition of AI was highly debated. The text focuses on systems that process data, recognize patterns and make decisions based on this information, a formulation intended to remain adaptable to emerging AI technologies.
Promotion of innovation: Innovation is strengthened through the introduction of a “regulatory sandbox”. This allows companies to develop and test innovative AI systems without immediately having to meet strict regulatory requirements. This significantly lowers initial development and testing costs and encourages both start-ups and established companies to invest in new AI technologies.
Risk-based approach: A risk-based approach is central to the regulation: AI systems in sensitive areas such as biometrics are subject to increased supervision to uphold security and data protection, and defined risk categories help companies assess which obligations apply to their systems (see the illustrative sketch below).
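To illustrate how such a risk-based triage might look in practice, the following Python sketch maps a handful of hypothetical use cases to the Act's risk tiers (unacceptable, high, limited, minimal). The tier names reflect the Act's categories, but the specific mappings and the triage helper are assumptions for demonstration only, not a legal assessment.

```python
# Illustrative sketch only: a simplified triage of AI use cases into the
# EU AI Act's risk tiers (unacceptable, high, limited, minimal).
# The example mappings below are assumptions for demonstration purposes
# and are not a substitute for a professional readiness assessment.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk: conformity assessment and ongoing supervision"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no additional obligations"


# Hypothetical mapping of use cases to tiers, chosen for illustration.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "biometric identification in public spaces": RiskTier.HIGH,
    "credit scoring of loan applicants": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def triage(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    # Print the illustrative classification for each example use case.
    for case in EXAMPLE_TIERS:
        print(f"{case}: {triage(case).value}")
```

In practice, an inventory of AI systems would be assessed against the Act's actual criteria rather than a keyword lookup, but the same tiered logic determines which supervisory and documentation obligations apply.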