The past 12 months have brought an explosion of interest in AI technologies, driven by the mass availability of new GenAI tools. These tools, built on large language models (LLMs) and related generative models, are easy to use and can produce text, image, video and musical content from suitable prompts.
Many businesses were already using general purpose AI tools to automate processes and detect patterns and trends in data. Now they are exploring how they can deploy GenAI at scale to operate more efficiently and deliver innovative products and services. Bloomberg Intelligence has predicted that the GenAI market will grow to $1.3 trillion by 2032, up from $40 billion a decade previously.1
To embrace the opportunities presented by AI technologies, organizations will need to ensure they have the right technology skills and an inquisitive, open-minded culture. AI systems will only operate effectively if they have access to high-quality data, are integrated with existing systems, and are used ethically by trained individuals. To generate real value for the business, it is essential that AI systems are applied to appropriate use cases as part of an overall cultural commitment to innovation.
While boards should understand the opportunities associated with AI technologies, they must also be aware of the risks. These risks include bias, breach of copyright, privacy threats and hallucination (where an AI system presents false or misleading information as fact). The rise of GenAI is also fueling a surge in increasingly sophisticated cyberattacks, with bad actors using GenAI tools to craft personalized phishing emails, create “deepfake” videos and gain unauthorized access to personal devices.
Additionally, companies need to monitor key regulatory and legal developments relating to the use of AI systems in the jurisdictions where they operate. In the EU, for example, the AI Act will soon regulate the use of AI, requiring AI systems deemed “high risk” to be registered in an EU database.