The key AI/ML implementation focus areas for bank risk management teams are credit risk management and fraud detection. With generative AI, use cases are also being explored in these areas, as well as for broader regulatory compliance and policy frameworks. Generative AI has the potential to bring significant advances and to transform business functions.
However, early adopters of AI/ML face increased risks, such as lawsuits arising from the use of web-based copyrighted material in AI outputs, concerns about bias, a lack of traceability due to the “black box” nature of AI applications, and threats to data privacy and cybersecurity. As a result, many financial institutions are opting for a cautious approach to AI/ML, initially implementing applications in non-customer-facing processes, or to aid customer-facing employees, where the primary goals are improving operational efficiency and augmenting employee intelligence through insights, recommendations and decision-making support.
A lack of clear regulatory direction complicates board oversight. Regulators have expressed concerns about AI use in the business, including the embedding of bias in algorithms used for credit decisions and the sharing of inaccurate information by chatbots. Data privacy and security, as well as the transparency of the underlying models, are also on authorities’ radars. Generative AI has amplified these concerns.
With AI usage increasingly democratized, robust, agile governance has become an urgent board priority. Even where companies have yet to define formal AI controls, boards must be diligent in ensuring that they take a holistic and strategic approach to overseeing AI usage in risk management and overall business operations.
Four things for boards to consider
1. AI and machine learning are central to digital transformation, and CROs expect risks to increase as a result.
AI/ML is crucial to accelerating digital transformation in financial services over the next three years, alongside modernized platforms, automated processes and cloud technologies. Improvements in generative AI over the last year have only increased this urgency. Directors should be aware that technology risk and project risk are interconnected and can reinforce each other. There is a danger that AI risks could be overshadowed by project risks as banks strive to modernize core functions and migrate to the cloud.