How EY can help
The EU AI Act will be adopted shortly and will have far-reaching extraterritorial impact. As it is often more costly and complex to ensure compliance once AI systems are in operation than during development, we recommend that firms start preparing now with a professional AI Act readiness assessment and early adaptation.
The Act lays out examples of systems posing an unacceptable risk; systems falling into this category are prohibited. Examples include the use of real-time remote biometric identification in public spaces, social scoring systems, and subliminal influencing techniques that exploit the vulnerabilities of specific groups.
High-risk systems are permitted but must comply with multiple requirements and undergo a conformity assessment, which must be completed before the system is placed on the market. Such systems must also be registered in an EU database to be set up for this purpose. Operating high-risk AI systems requires an appropriate AI risk management system, logging capabilities, and human oversight with clear ownership. Proper data governance must be applied to the data used for training, testing and validation, along with controls ensuring the cybersecurity, robustness and fairness of the system.
Examples of high-risk systems include those related to the operation of critical infrastructure, systems used in hiring processes or employee ratings, credit scoring systems, and automated insurance claims processing or the setting of risk premiums for customers.
The remaining systems are considered limited or minimal risk. For those, transparency is required: users must be informed that they are interacting with an AI system or that content has been generated by AI. Examples include chatbots and deepfakes that are not considered high risk but for which users must be made aware that AI is behind them.
For all operators of AI systems, the implementation of a code of conduct around ethical AI is recommended. Notably, general-purpose AI (GPAI) models, including foundation models and generative AI systems, follow a separate classification framework: the AI Act adopts a tiered approach to compliance obligations, differentiating between high-impact GPAI models posing systemic risk and other GPAI models.
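To make the tiered logic above concrete, the sketch below shows how an organization might encode a first-pass triage of its AI inventory. It is a minimal illustration, not legal advice: the use-case labels, the tier sets and the classify_risk function are hypothetical, and any real mapping must follow the Act's annexes and a legal review.

# Hypothetical first-pass triage of an AI system inventory against the
# AI Act's risk tiers. Labels and mappings are illustrative only.

PROHIBITED = {            # unacceptable risk: banned outright
    "social_scoring",
    "realtime_remote_biometric_id_public",
    "subliminal_manipulation",
}

HIGH_RISK = {             # permitted, but conformity assessment required
    "critical_infrastructure_operation",
    "hiring_or_employee_rating",
    "credit_scoring",
    "insurance_claims_or_premium_setting",
}

TRANSPARENCY_ONLY = {     # limited risk: users must know AI is involved
    "chatbot",
    "deepfake_generation",
}

def classify_risk(use_case: str) -> str:
    """Return the (illustrative) AI Act risk tier for a use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable - prohibited"
    if use_case in HIGH_RISK:
        return "high - conformity assessment, registration, risk management"
    if use_case in TRANSPARENCY_ONLY:
        return "limited - transparency obligations"
    return "minimal - voluntary code of conduct recommended"

for system in ("credit_scoring", "chatbot", "spam_filter"):
    print(f"{system}: {classify_risk(system)}")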
Step 3: Prepare and get ready
If you are a provider, deployer, importer, distributor or affected person of AI systems, you need to ensure that your AI practices are in line with this new artificial intelligence regulation. To start the process of fully complying with the AI Act, you should initiate the following steps: (1) assess the risks associated with your AI systems, (2) raise awareness, (3) design ethical systems, (4) assign responsibility, (5) stay up to date, and (6) establish formal governance. By taking proactive steps now, you can avoid potentially significant sanctions for your organization once the Act comes into force.
The AI Act is set to come into force in Q2-Q3 2024, following publication in the Official Journal of the European Union. Transition periods will then apply: companies will have 6 months to meet the requirements on prohibited AI systems, 12 months for certain general-purpose AI requirements, and 24 months to achieve full compliance.
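As a quick way to track these windows, the snippet below derives indicative compliance deadlines from an assumed entry-into-force date. The date is a placeholder, and the month offsets simply mirror the transition periods mentioned above.

from datetime import date

# Placeholder entry-into-force date; the actual date depends on
# publication in the Official Journal of the European Union.
ENTRY_INTO_FORCE = date(2024, 7, 1)

# Transition periods from the text above (in months).
TRANSITION_MONTHS = {
    "prohibited AI systems": 6,
    "certain GPAI requirements": 12,
    "full compliance": 24,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (clamped to day 1 for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

for obligation, months in TRANSITION_MONTHS.items():
    deadline = add_months(ENTRY_INTO_FORCE, months)
    print(f"{obligation}: comply by ~{deadline.isoformat()}")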
What are the penalties in case of non-compliance?
The penalties for non-compliance with the AI Act are significant and can have a severe impact on a provider's or deployer's business. Fines range from €7.5 million to €35 million, or from 1% to 7% of global annual turnover, depending on the severity of the infringement. It is therefore essential for stakeholders to understand the AI Act fully and comply with its provisions.
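To illustrate the scale, the sketch below computes the maximum possible fine for a given infringement tier, assuming the Act's rule that the ceiling is the fixed amount or the turnover share, whichever is higher. The tier labels are illustrative shorthand, not the Act's own terminology, and the Act's relief for SMEs is not modeled.

# Illustrative maximum-fine calculation under the AI Act's penalty ceilings.
# The ceiling is a fixed amount or a share of global annual turnover,
# whichever is higher; tier labels here are shorthand, not legal categories.

FINE_CEILINGS = {
    # tier: (fixed cap in EUR, share of global annual turnover)
    "most severe (e.g., prohibited practices)": (35_000_000, 0.07),
    "least severe (e.g., supplying incorrect information)": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the ceiling: fixed cap or turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_CEILINGS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a firm with EUR 2 billion global annual turnover.
turnover = 2_000_000_000
for tier in FINE_CEILINGS:
    print(f"{tier}: up to EUR {max_fine(tier, turnover):,.0f}")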
How is the financial services sector impacted by the Act?
Financial services have been identified as one of the sectors where AI could have the most significant impact. The EU AI Act contains a three-tier risk classification model that categorizes AI systems based on the level of risk they pose to fundamental rights and user safety. The financial sector uses a multitude of models and data-driven processes that will come to rely more on AI in the future. Processes and AI systems used for creditworthiness assessments, or for the evaluation of risks and setting of premiums for customers, fall into the high-risk category under the AI Act. Additionally, AI systems used in operating and maintaining financial infrastructure considered critical also fall under the scope of high-risk AI systems, as do AI systems used for biometric identification and categorization of natural persons or for employment and employee management.