Just as the benefits and pitfalls of AI for society and the economy remain to be seen, so too do the rules and frameworks that will govern its adoption. The recent example of digital asset regulation suggests that regulators often take a wait-and-see approach to nascent technology, with guidance trailing innovation by three to five years. While it is impossible to predict the shape of the regulatory overlay, a few themes inherent to AI stand out as ones financial regulators will have to grapple with.
AI bias
AI models are only as good as the data they are trained on, including the human feedback used to improve their performance. Natural language models in particular are often refined through reinforcement learning from human feedback, a technique in which human "labelers" rate or rank model outputs to steer the model toward preferred answers. Without guardrails, this human feedback can, however unintentionally, introduce bias into the model, with downstream impacts on AI decisioning. Regulators in the future will need to ensure that reliance on AI for business support does not result in an unequal distribution of its benefits.
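One way a compliance team might begin to quantify this kind of bias is to compare outcome rates across groups in a model's decisions. The sketch below computes a simple demographic parity gap; the group labels and decisions are hypothetical, and real reviews would use richer fairness metrics and real decision logs.

```python
# Minimal sketch: one simple fairness signal (demographic parity) over a
# model's decisions. All data here is hypothetical illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical decision log: (protected group, model approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(decisions)
print(rates)              # {'A': 0.75, 'B': 0.25}
print(parity_gap(rates))  # 0.5 -- a large gap that would warrant review
```

A large gap does not by itself prove unlawful bias, but tracking metrics like this over time gives compliance personnel an auditable signal to investigate.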
Explainability
Financial institutions operating within a regulatory environment are often called upon by regulatory authorities to substantiate their risk decisions. Decision-making enabled by complex AI tools governed by thousands of underlying indicators may accelerate certain processes, but institutions must be mindful of their (or their vendors') ability to produce rationales for these decisions in a format that nontechnical personnel can interpret, such as relevant citations for a fact-based search or the key data attributes and values influencing a predictive model's output. Regulators will look to establish minimum standards for risk decisioning, and it will be incumbent on compliance personnel to understand those requirements and maintain sufficient line of sight into the information sources and logic AI models use to offer recommendations or support decision-making.
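For a simple scoring model, such a rationale can be as direct as listing the features that contributed most to the score. The sketch below does this for a hypothetical linear risk score with made-up feature names and weights; for complex models with thousands of indicators, institutions typically rely on approximation techniques (such as Shapley-value methods) to produce comparable summaries.

```python
# Minimal sketch (hypothetical features and weights): for a linear risk
# score, per-feature contributions double as a human-readable rationale.

def explain_score(weights, features, top_k=3):
    """Return (score, top_k (feature, contribution) pairs driving it)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    score = sum(contributions.values())
    return score, ranked[:top_k]

# Hypothetical model weights and one customer's feature values.
weights = {"txn_volume": 0.6, "new_account": 1.2,
           "high_risk_country": 2.0, "tenure_years": -0.3}
features = {"txn_volume": 4.0, "new_account": 1.0,
            "high_risk_country": 1.0, "tenure_years": 2.0}

score, reasons = explain_score(weights, features)
print(f"risk score: {score:.1f}")      # risk score: 5.0
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.1f}")
```

The point is not the arithmetic but the output format: a ranked list of named drivers that a compliance reviewer, or a regulator, can read without inspecting the model itself.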
Data management
One immediate advantage of leveraging AI tools will be enhanced customer insights, combining inputs from open and closed sources to create advanced profiles of risk, behavior and profitability. These profiles, however, are predicated upon the availability, quality and security of customer data. In an AI economy, personal data will come at a premium, becoming even more valuable than it is today. In response to the commercialization of personal data by large technology firms in recent years, regulatory bodies around the world have already moved to implement data privacy and security laws designed to return control to the individuals the data pertains to. Efforts to secure, protect and govern access to data will only accelerate as AI is deployed commercially and demand for personal data grows.
Cyber risks
As the economy becomes ever more dependent on technology and data, the potential for hacks and data breaches will increase. Furthermore, as advanced technologies become more accessible to the general public, use of these technologies to execute scams and other fraudulent activities is likely to become more common (e.g., using AI to alter pictures or create fake videos to influence the behavior of unsuspecting consumers). Financial institutions today struggle to protect against cybercrime, and regulators have implemented cybersecurity laws to govern the strength and durability of these controls. As AI is introduced into the economy at scale, the potential for illicit actors to access and manipulate AI models, along with their underlying data, will become an even greater concern. Regulators will be tasked with revamping cybersecurity frameworks to account for these incremental risks and combat the unauthorized use of AI tools for personal gain; compliance organizations will need to monitor and adapt accordingly.