Five regulatory trends in Artificial Intelligence
Although each jurisdiction has taken a different regulatory approach, in line with its own cultural norms and legislative context, five areas of cohesion unite them under the broad principle of mitigating the potential harms of AI while enabling its use for the economic and social benefit of citizens. These areas of unity provide strong fundamentals on which detailed regulations can be built.
- Core principles: The AI regulation and guidance under consideration is consistent with the core principles for AI as defined by the OECD and endorsed by the G20. These include respect for human rights, sustainability, transparency and strong risk management.
- Risk-based approach: These jurisdictions are tailoring their AI regulations to the perceived risks AI poses to core values such as privacy, non-discrimination, transparency and security. This tailoring follows the principle that compliance obligations should be proportionate to the level of risk: low risk means few or no obligations, high risk means significant and strict ones (see the sketch after this list).
- Sector-agnostic and sector-specific: Because AI use cases vary widely across industries, some jurisdictions are pursuing sector-specific rules in addition to sector-agnostic regulation.
- Policy alignment: Jurisdictions are undertaking AI-related rulemaking within the context of other digital policy priorities such as cybersecurity, data privacy and intellectual property protection – with the EU taking the most comprehensive approach.
- Private-sector collaboration: Many of these jurisdictions are using regulatory sandboxes as a tool for the private sector to collaborate with policymakers, both to develop rules that meet the core objective of promoting safe and ethical AI and to consider the implications of higher-risk AI innovation where closer oversight may be appropriate.
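To make the proportionality principle behind the risk-based approach concrete, here is a minimal sketch in Python. The tier names are loosely modeled on the EU AI Act's unacceptable/high/limited/minimal categories, and the specific obligations listed are illustrative assumptions, not a statement of any jurisdiction's actual requirements.

```python
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers, loosely modeled on the EU AI Act's categories.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping: compliance obligations scale with assessed risk.
# Low risk carries few or no obligations; high risk carries strict ones.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskTier.HIGH: [
        "pre-deployment conformity assessment",
        "documented risk management system",
        "human oversight",
        "logging and traceability",
    ],
    RiskTier.LIMITED: ["transparency notice to affected users"],
    RiskTier.MINIMAL: [],  # no specific obligations
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations proportionate to a system's assessed risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        duties = obligations_for(tier)
        print(f"{tier.value}: {duties if duties else 'no specific obligations'}")
```

The design point is simply that the obligation set is a function of the risk tier, so a system's compliance burden is determined by where it falls in the tiering, not by blanket rules applied uniformly to all AI.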
Further considerations on AI for policymakers
Other factors to consider in AI policy development include:
- Ensuring regulators have access to sufficient subject matter expertise to successfully implement, monitor and enforce these policies
- Ensuring policy clarity as to whether the intent of rulemaking is to regulate risks arising from the technology itself (e.g., capabilities such as natural language processing or facial recognition), from how the AI technology is used (e.g., the application of AI in hiring processes), or both
- Examining the extent to which risk management policies and procedures, as well as the responsibility for compliance, should apply to third-party vendors supplying AI-related products and services
In addition, policymakers should, to the extent possible, engage in multilateral processes to make AI rules interoperable and comparable across jurisdictions, in order to minimize the risks of regulatory arbitrage, which are particularly significant for a transnational technology like AI.