The accelerating capabilities of Generative Artificial Intelligence (GenAI), including large language models (LLMs), as well as systems using real-time geolocation data, facial recognition and advanced cognitive processing, have pushed AI regulation to the top of policymakers’ inboxes.
It isn’t simple. In Europe, for example, while some member states wanted to liberalize the use of facial recognition by their police forces, the EU Parliament wanted to impose tight restrictions as part of the AI Act, resulting in marathon negotiations before a compromise agreement could be reached.1 In another debate on AI legislation, the Indian Ministry of Electronics and IT published a strong statement in April 2023 opting against AI regulation, stating that India “is implementing necessary policies and infrastructure measures to cultivate a robust AI sector, but does not intend to introduce legislation to regulate its growth.”2 Yet in May 2023, the IT minister announced that India is planning to regulate AI platforms like ChatGPT and is “considering a regulatory framework for AI, which includes areas related to bias of algorithms and copyrights.”3 Similarly, while the US is unlikely to pass new federal AI legislation any time soon, the Executive Order issued by the Biden administration in October 2023 emphasized safety, security and civil rights as key considerations in the federal procurement of AI, and regulators such as the Federal Trade Commission (FTC) have responded to public concerns about the impact of GenAI by opening expansive investigations into some AI platforms.4
AI is transforming a diverse range of industries, from finance and manufacturing to agriculture and healthcare, enhancing their operations and reshaping the nature of work. It is enabling smarter fleet management and logistics, optimizing energy forecasting, making more efficient use of hospital beds through patient-data analysis and predictive modeling, improving quality control in advanced manufacturing, and creating personalized consumer experiences. It is also being adopted by governments, which see its ability to deliver better service to citizens at lower cost to taxpayers. Global private-sector investment in AI is now 18 times higher than in 2013.5 AI is potentially a powerful driver of economic growth and a key enabler of public services.
However, the risks and unintended consequences of GenAI are also real. A text-generation engine that can convincingly imitate a range of registers is open to misuse; voice-imitation software can mimic an individual’s speech patterns well enough to deceive a bank, a workplace or a friend. Chatbots can be used to cheat on tests. AI platforms can reinforce and perpetuate historical human biases (e.g., based on gender, race or sexual orientation), undermine personal rights, compromise data security, produce misinformation and disinformation, destabilize the financial system and cause other forms of disruption globally. The stakes are high.
Legislators, regulators and standard-setters are starting to develop frameworks to maximize AI’s benefits to society while mitigating its risks. These frameworks need to be resilient, transparent and equitable. To provide a snapshot of the evolving regulatory landscape, the EY organization (EY) has analyzed the regulatory approaches of eight jurisdictions: Canada, China, the European Union (EU), Japan, Korea, Singapore, the United Kingdom (UK) and the United States (US). The rules and policy initiatives were sourced from the Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory6 and are listed in the appendix to the full report.