In the seventh episode of our Generative AI Unplugged series, we dive into the transformative impact of artificial intelligence (AI) on the financial services industry with our senior partners at EY India: Pratik Shah, Financial Services Consulting Leader, and Kumar Abhishek, Technology Consulting Partner. Together, they explore the strategic framework that can help businesses implement GenAI in financial services, discuss key building blocks, and shed light on the applications and scenarios where such solutions are making a significant impact. They also discuss the risks associated with implementing GenAI models across these businesses.
Financial services firms should prioritize their GenAI objectives, establish operating models accordingly, and focus on sustainable tech and data architecture.
They should choose between pre-trained models and in-house LLMs, balancing speed-to-market and data privacy considerations for GenAI success.
GenAI can help financial services transform customer service, underwriting, marketing, and more, aiming for improved experience, innovation, cost reduction, and revenue growth.
GenAI strategy requires prioritizing use cases based on metrics, establishing a solid operating model, and ensuring sustainable tech architecture.
For your convenience, a full-text transcript of this podcast is also available.
Tarannum: Hello and welcome to the EY India Insights Podcast. I am your host, Tarannum Khan, and in our latest episode of the Generative AI Unplugged series, we will explore the widespread adoption of artificial intelligence (AI) in the financial services (FS) industry. From cost reduction to revenue growth, AI is reshaping the financial services industry at large.
In this episode, we delve into crucial elements of AI implementation, the importance of having a strategic framework in place, and managing potential risks for a resilient future. To facilitate our discussion further, we are joined by Pratik Shah, Financial Services Consulting Leader at EY India and Kumar Abhishek, Technology Consulting Partner at EY India, who spearheads intelligent automation and digital transformation for EY's FS clients.
Thank you, Pratik and Abhishek, for joining us in this episode.
Pratik: Thanks Tarannum.
Tarannum: GenAI has captured the interest of business leaders, investors, and consumers alike. Across all industries, we are seeing organizations either implementing or exploring specific use cases. How would you recommend organizations formulate a robust strategy, and what are some of the key building blocks, particularly in the FS sector?
Pratik: Financial services firms are currently in the early stages of shaping their GenAI strategy. It is similar to when digital transformation happened a few years ago. In a recent global survey we did of financial services companies on GenAI, 78% of the respondents had either implemented or were in the early stages of exploring GenAI use cases.
When a business leader thinks about implementing GenAI at an enterprise level, there are five key considerations to keep in mind:
What is the strategic objective with which you (the firm) prioritize a use case for GenAI? Will it be driven by customer experience or efficiency? What are the metrics and success criteria based on which you would prioritize the use case?
What would be your operating model around implementing GenAI? Will you build a centralized team that sets consistent standards, or will you allow different departments to bring their own innovation to GenAI use cases? In short, what sort of operating model will you adopt for building GenAI and implementing it at an organization level?
Data and technology architecture are also key, because whatever you build on GenAI needs to be sustainable and implemented consistently across the organization. What are your standards for the data that goes into building and training the models, and how does the technology support them consistently? Therefore, tech and data architecture are very important.
Then comes people, training and change management. Like digital, GenAI is going to fundamentally change how we acquire, service and manage customers, and therefore we need to make sure our people are ready to adopt it and operate in a new world. Just like the transition that happened when organizations moved from physical to digital, GenAI is another big lever where change management, people and training are concerned.
Last is risk management. GenAI brings with it a multitude of risks, from privacy to security to cyber, as models and data will move seamlessly from internal systems to various training models that may be hosted in any part of the world. So, making sure you keep these five things in mind as you implement GenAI at an enterprise level is critical.
Tarannum: Those are some really interesting insights, Pratik. Thank you for setting the context for our listeners. When it comes to actual implementation, can you throw some light on how FS firms should approach it?
Pratik: When it comes to implementation and actual usage of models, organizations have two options: one, to procure and integrate commercially available pre-trained models like ChatGPT, which work through an Application Programming Interface (API); and the other, to construct vertical Large Language Models (LLMs) from the ground up using proprietary data. Each has its own benefits, but they should be looked at from a cost as well as an effort perspective. Using a commercially available pre-trained model helps improve your speed-to-market, but it still has to be fine-tuned for the organization and its own purposes. These models are integrated via API, and because you are using your own data to fine-tune them, if you are not building on a private cloud there is an element of data privacy risk, since a lot of that data might not necessarily reside within the country.
Given the strong focus on data privacy in financial services, some of these considerations need to be borne in mind when using commercially available models. On the other hand, building a vertical LLM in-house is a very labor-intensive process and, more importantly, requires the right skills, which you need to attract in order to build it. So, there is an execution risk: if you do not have the right skills, you may not necessarily be able to build your own LLM.
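To make the first option a little more concrete, here is a minimal sketch of what integrating a hosted, pre-trained model through an API could look like. The endpoint, model identifier and response schema below are illustrative assumptions rather than any specific vendor's interface, and the API key is read from the environment rather than hard-coded.

```python
# A minimal sketch of the first option: integrating a hosted, pre-trained model
# over an API. The endpoint, model name and payload shape are illustrative
# assumptions, not any specific vendor's interface.
import os
import requests

API_URL = "https://api.example-llm-provider.com/v1/chat"  # hypothetical endpoint
API_KEY = os.environ.get("LLM_API_KEY", "")               # keep credentials out of source code

def ask_model(question: str) -> str:
    """Send a customer-service question to the hosted model and return its reply."""
    payload = {
        "model": "general-purpose-llm",  # placeholder model identifier
        "messages": [
            {"role": "system",
             "content": "You are a virtual assistant for a retail bank. "
                        "Answer only from the provided policy context."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # keep answers conservative for a regulated domain
    }
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape; a real provider's schema may differ.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_model("What documents do I need to open a savings account?"))
```

In practice, a call like this would usually sit behind the firm's own gateway so that data residency, logging and access controls can be enforced centrally.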
Pratik: I believe a strategic implementation approach would be to construct in-house models exclusively for domain-specific use cases, thereby building your own proprietary solutions to meet your organization's unique needs. At the same time, you could rely on pre-trained foundation models for use cases that are not necessarily novel, which helps from a speed-to-market as well as a cost perspective.

Tarannum: Thanks, Pratik. That is an interesting take. With the rapidly evolving landscape of the financial services industry, we see a multitude of use cases emerging. Specifically to you, Abhishek, can you share with our listeners some of the key applications and scenarios where such solutions are making a significant impact?

Abhishek: If you look at banks and other financial services clients, they have been using AI in functions like credit risk and fraud detection for years. But with GenAI, there is a significant shift from these previous approaches. In a recent survey we did, we found that customer service and experience, product and service design, underwriting and onboarding, marketing and sales, and processes around collections and recovery were the top use cases for GenAI within the financial services space. They (FS firms) were targeting significant improvement in customer experience, product innovation, cost reduction and revenue acceleration.
Let me take an example. We are all very much aware of the use of virtual agents within banks for answering customer queries. The earlier versions were mostly Q&A based, with fixed answers and no personalization. But now with GenAI, there is a very strong case for intelligent virtual agents powered by large language models, which are personalized and can understand and respond with humanlike language and context. Imagine getting a personalized response from a virtual agent about your own EMIs, loan eligibility, charges and new product offers, right on a call or in a message. Internally, clients are also working on areas like knowledge management, agent training, call classification and summarization. A lot of work is also happening on the engineering side, with use cases like business requirement documentation, user story creation, application development, code migration and testing.
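As a rough illustration of the kind of personalization described here, the sketch below composes the messages an LLM-powered virtual agent could be given, grounded in a fictitious customer record, so that the reply concerns that customer's own EMI and offers rather than a generic answer. The record fields and the `build_agent_messages` helper are hypothetical.

```python
# A rough sketch of grounding a virtual agent's reply in a specific customer's
# data. The customer record and helper are hypothetical; the resulting messages
# would be sent to whichever LLM the organization has chosen.

customer = {  # fictitious record fetched from core banking / CRM systems
    "name": "Asha",
    "active_loan": "home loan",
    "emi_amount_inr": 42500,
    "next_emi_date": "2024-07-05",
    "preapproved_offer": "top-up loan of INR 300,000",
}

def build_agent_messages(customer: dict, question: str) -> list[dict]:
    """Compose system/user messages that ground the agent in this customer's data."""
    context = (
        f"Customer name: {customer['name']}. "
        f"Active product: {customer['active_loan']}. "
        f"EMI: INR {customer['emi_amount_inr']} due on {customer['next_emi_date']}. "
        f"Pre-approved offer: {customer['preapproved_offer']}."
    )
    return [
        {"role": "system",
         "content": "You are a bank's virtual agent. Answer only from the customer "
                    "context provided; if the answer is not in it, offer to connect "
                    "the customer to a human agent."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ]

if __name__ == "__main__":
    for message in build_agent_messages(customer, "When is my next EMI due, and do I have any offers?"):
        print(f"{message['role']}: {message['content']}")
```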
Abhishek: And as Pratik mentioned earlier, it is important to note that despite all of these advancements, GenAI is still in its early stages, and its effective utilization relies on an ecosystem where it collaborates with and augments humans, leveraging the best of both.

Tarannum: Thanks, Abhishek. You would agree that the discussion on GenAI remains unfinished without talking about the accompanying risks. And these risks are more pronounced for financial institutions, considering the sensitive nature of the data that they deal with. By now, the risks of GenAI are well known, but the key question is: how can leaders manage these risks and be better prepared for the future?

Abhishek: That is very important, especially within the financial services space, as these firms face significant regulatory scrutiny given the nature of their businesses. Some of the key risk considerations when thinking of implementing GenAI within enterprises would be:
Risk related to models: Whether it is a proprietary model, an open-source model or a hybrid setup, there are greater chances of biases, hallucination and toxicity showing up in the model responses. LLMs can sometimes generate plausible-sounding outputs that lack factual accuracy or logical coherence. We need to think about how we can manage all of this and improve model outputs.
Risk related to data: LLMs are only as good as the data they are trained on. If the training data is biased, incomplete, toxic, inaccurate, or simply not relevant to the use case, it can cause hallucinations in the model outputs. It is important that we follow best practices with LLMs, such as data minimization, encryption and access control, to manage some of these risks (a minimal data-minimization sketch follows this list).
Regulatory and compliance related risks: Since the regulatory landscape is still evolving in India and globally, it is important for organizations to keep an eye on regulatory developments across regions to proactively prepare for the regulations that could come into play for GenAI. Enterprises must also consider risks around compliance, conduct and data that may potentially violate the laws and regulations of a state or jurisdiction.
Other risks: There are also a few other important risks to consider, for example, risks linked to technology, third-party risks and IPR-related risks, which need to be thought through when thinking about GenAI within enterprises.
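As a simple illustration of the data-minimization point under the data-related risks above, the sketch below masks obvious personal identifiers before any text is sent to a model. The patterns are deliberately basic and purely illustrative; a production system would use dedicated PII-detection tooling and policy controls rather than a handful of regular expressions.

```python
# A minimal, illustrative data-minimization step: mask obvious identifiers
# before any text is sent to an external LLM. The regexes are deliberately
# simple; real deployments would use dedicated PII-detection tooling.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b\d{10,16}\b"), "<ACCOUNT_OR_PHONE>"),  # long digit runs
    (re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"), "<PAN>"),      # PAN-style identifiers
]

def minimize(text: str) -> str:
    """Return a copy of the text with likely personal identifiers masked."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    raw = ("Customer ramesh.k@example.com (account 123456789012, PAN ABCDE1234F) "
           "asked about foreclosure charges on his loan.")
    print(minimize(raw))
    # -> Customer <EMAIL> (account <ACCOUNT_OR_PHONE>, PAN <PAN>) asked about ...
```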
Tarannum: Great to hear your thoughts on that, Abhishek. Thank you and Pratik for taking the time to speak with us on this pertinent topic. All our listeners definitely have a lot to take back from this episode.
Pratik and Abhishek: Thank you for the opportunity.
Tarannum: Thank you to all our listeners for joining this insightful discussion. Your feedback and questions are invaluable to us. Feel free to share them on our website or email us at markets.eyindia@in.ey.com. From all of us at EY India, thank you for tuning in.