Navigating governance challenges
The increased adoption of Generative AI has prompted widespread regulatory responses globally. Notable examples include the National Institute of Standards and Technology (NIST), which has introduced an AI Risk Management Framework, and the European Parliament, which has proposed the EU Artificial Intelligence Act, while the European Union Agency for Cybersecurity (ENISA) has been at the forefront of discussions on cybersecurity for AI. Additionally, HITRUST has released the latest version of its Common Security Framework (CSF v11.2.0), which now includes areas specifically addressing AI risk management.
These guidelines and frameworks provide valuable assistance to enterprises and foster innovation, but they fall short of fully addressing the ethical and legal implications, and the regulatory compliance obligations, associated with the use of Generative AI.
The US has taken a notable step by issuing a comprehensive executive order on AI, aiming to promote the “safe, secure and trustworthy development and use of artificial intelligence.” A White House fact sheet outlining the order has been released. The Executive Order is a significant contribution to the discussion of accountability in how organizations develop and deploy AI.
Further, at the international AI Safety Summit, like-minded governments and AI companies announced an agreement to test new AI models prior to their release and adoption. The UK will also establish the world's first AI safety institute, responsible for testing new AI models for a range of risks.
These developments underscore the seriousness and commitment of governments and regulators worldwide in managing risks and governance in Generative AI.