EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
This episode of the Think Ecosystem podcast, hosted by Anindo Dutta, EY-ServiceNow Global Alliance Leader, centers on the integration of artificial intelligence (AI). Featuring Kevin Barnard, Deputy Chief Innovation Officer, ServiceNow, Pascal Bornet, an award-winning expert on AI, and Amy Gennarini, EY Global and Americas Risk Technology Leader, our guests discuss the challenges of adopting and scaling AI in business and the ethical ramifications around it, while presenting incisive case studies illustrating the technology's transformative potential.
Listen to the episode to understand how AI offers immense opportunities yet must be approached with a well-defined governance framework, transparent communication and a focus on human-centric design.
Key takeaways:
AI is projected to hit a $190 billion market value by 2025, highlighting the vast scope of the opportunity and the need for an ethical and bias-free approach to adoption.
In an era of misinformation and fraud, building confidence and trust with technology is crucial. AI governance and responsible usage can help, both internally and with customers.
AI projects succeed when they are built with the needs of users in mind. Adoption improves when AI is intuitive and solves real human problems.
For your convenience, a full text transcript of this podcast is also available.
Announcer
Welcome to the EY Think Ecosystem podcast, a series exploring the intersection of technology, collaboration and innovation. In each episode, we orchestrate insights, stories and perspectives from across the EY partner ecosystem, our client base and leadership team, to address the important issues and challenges of today.
Anindo Dutta
Hello. Welcome to the EY Think Ecosystem podcast. I'm your host, Anindo Dutta. I'm the global EY ServiceNow Alliance Leader on the EY side, and I'm going to introduce my panelists and guests here. But I just want to give a little bit of color on myself and why I'm so excited to host this.
I run the relationship between EY and ServiceNow globally, working with our service lines, our stakeholders, and we're looking to build a very strong partnership across the dimensions of risk, cyber, supply chain, and you'll hear from some of our experts on the podcast today. In this episode, we will discuss the timely and critical topic of integrating AI (artificial intelligence). There's a lot of buzz around AI, and now certainly with GenAI (generative artificial intelligence), we're going to tackle AI broadly as a topic, implications, the ethical ramifications around it, some of the anxieties, some of the challenges that we think clients face in the marketplace.
So, there's a variety of different topics we'll talk about, all around AI. Before we dive in, please remember that conversations during EY podcasts should not be relied upon as accounting, tax, legal or investment professional advice. Listeners must consult their own advisors.
Okay, great. With that, let's jump in and introduce our guests. Joining us from ServiceNow, we have Kevin Barnard, Deputy Chief Innovation Officer. Kevin is a leading voice in the AI space and has been instrumental in helping organizations implement innovative technologies to drive efficiency and transformation. Welcome, Kevin.
Kevin Barnard
Thank you very much. It's a pleasure to be here, Anindo. Thank you for the introduction. As we were talking on our pre-brief call yesterday, I would love to have you introduce me every single time I'm in front of customers. Thank you, sir.
Dutta
You got it. Okay, great. Also with us is Amy Gennarini, EY Global and Americas Risk Technology Leader. Amy has been a key partner in crime with me and the team. And Amy has been a driving force in developing EY's AI governance and compliance solutions, ensuring that AI is used responsibly and effectively across various industries. Amy, welcome to the podcast.
Amy Gennarini
Anindo, thanks for having me. I'm really excited for this conversation, and it's a pleasure to be here.
Dutta
Finally, we're pleased to welcome Pascal Bornet, an AI expert who will provide us insights on the responsible use of AI. He's the author of various books, very active on social media. Pascal, you have a lot of accolades, and we can go on and on, but welcome.
Pascal Bornet
Thank you, Anindo. Pleasure to be with you. Very excited.
Dutta
Great. Excellent. Let's start with some of the foundational topics. Obviously, there's a lot of buzz around AI, a lot of the possibilities. But the key question that comes up in our heads quite a bit is why is responsible AI governance crucial in today's business environment? We want to understand the benefits of it, the excitement around it, but responsible use of AI and the governance of it, I think, is a key area that folks are trying to figure out. Maybe we'll start off with you, Amy. Could you help by maybe starting by defining what responsible AI governance entails and why it's important for businesses?
Gennarini
Yeah, Anindo, it's interesting. I see this as one of the major pieces of even getting to adoption. How you govern it is going to help us with adoption. AI is predicted to grow to $190 billion in market value by 2025. And so business leaders continue to grapple with how to harness the power of AI while managing the associated risks and responsibilities. I really see one of the trickiest parts of channeling AI as the human bias factor. AI absorbs everything it receives without judgment. So individual choices, however innocent and inadvertent, can easily affect AI outcomes and, unfortunately, become a permanent part of the AI universe. With that, responsible AI governance is the combination of defining a framework and its related practices for managing the development and deployment of AI in a way that ensures ethical standards, transparency, accountability, and fairness. Listening to me talk, I'm sure you can tell that's a mouthful. But there's more to it. It involves creating policies and guidelines to address issues like bias, privacy, security, and the broader societal impacts of AI. You also need to consider setting up oversight mechanisms, and clear ethical principles need to be defined. Really, it also comes down to making sure that someone is accountable for the decisions and actions of the AI systems themselves.
Getting these types of things set up in an organization is really complicated. I'm sure there's going to be a lot of questions coming forward with how you go about doing that. We see organizations set aside about a third of their budget to manage AI risks. This is going to be huge. If you think about just the overarching budget of AI, one third of that budget being invested to make sure that we can risk manage and govern AI is just astounding to me.
Dutta
That is really staggering, which is this is an important component. As we work with clients, I'm sure this is a key component that they're also asking us for advice. Kevin, again, switching gears a little bit, from a ServiceNow perspective, I would love to get your views on what the driving forces are behind the need for responsible AI today?
Barnard
It's a great question. In my role, I meet with hundreds of customers a year. I am on the front lines. I have innovation in my title, right? It's one of those things that we're always trying to think about: not only where are we today, but where are we going? I was also a customer of ServiceNow before I joined the organization six years ago. I still try to bring that customer lens to the topic of innovation writ large, but especially when it comes to AI. I think there are three key factors that ServiceNow is thinking about as we look at the ethical considerations, the market demands, and so forth. The first, of course, is trust. Amy touched on this earlier, and I'm sure we're going to hear this throughout the conversation today. But it's essential: as organizations rely on AI to power their operations, bias is going to be an issue. Hallucinations are going to be an issue. We've designed our own LLMs (large language models) to emphasize transparency and fairness and security as we deploy these solutions across the entire platform because we know that as we partner with organizations in their own transformation, that's going to be crucial to maintaining not only our customers' loyalty, but our customers' customers' loyalty.
So that's a huge part of this. Regulatory compliance and risk is a second area that plays a significant role. We know that regulations are coming, and we know that they're going to quickly evolve. So we're proactively integrating responsible AI practices to mitigate any of these potential risks and really trying to stay ahead of the regulatory changes. We're trying to lead the way. We have a research team up in Montreal, the former Element AI organization that's been thinking about these things, really focused on how do we make sure that our AI solutions are compliant and secure across the entire ecosystem. Last but not least, of course, is sustainable innovation. We believe that that's a competitive advantage, and we are on a regular release cadence to not only meet the current needs, but also to innovate. We partner with customers, with partners, to ensure the long-term value differentiation in the marketplace. It is really something that we are trying to focus on day in and day out to make sure that we're meeting the needs of the market.
Dutta
Great. Thanks, Kevin. I like the catchphrase you mentioned, customers' customers' loyalty.
Barnard
Very important.
Dutta
Maybe Pascal, over to you. You engage with a wide audience on AI topics in different settings. How do you see organizations struggling to balance AI innovation and responsible usage?
Bornet
Yes, Anindo, that's a very important question. As I'm helping a lot of companies through their AI transformation, I've identified a lot of obstacles and challenges they are running into. But here are the few that come to mind now. The first one is, I've never seen an AI project succeed without putting the people in the center. I mean, successful AI is not primarily about the technology, but really about the humans. I like to say that AI has been built by people, and it is to be used by people. People are in the center, and I'm sure we'll talk about it later, but it's a lot about information, education and avoiding unnecessary fears around it and so on. I really liked what Amy and Kevin said around trust, because trust, I think, is really the central point here. In an age of misinformation, deepfakes, cybersecurity threats, scams and all those scandals, trust is the most important asset of a company. I sometimes struggle to have CEOs understand this. I think it should even be on the balance sheet of a company, really as an asset that we can value.
What I mean here by the importance of trust: trust is not a commodity that can be bought. It's really earned through consistent, transparent, honest interactions with customers and with partners. And there are three key pillars of trust, in my view, and that's very close to what Amy and Kevin said before. The first is fairness and bias mitigation. The key critical success factor here is to begin by integrating bias identification right at the data collection stage. So don't wait for your model to be ready to look at that, because otherwise you have to redo everything. Transparency is the second one. So making AI systems transparent so that we allow the users to understand how AI models make decisions, and they can then understand how to make their own decisions. I like to say it's about transforming the AI black box into a glass box. So it's really about this transparency here. Finally, the third one is about a privacy-first culture, which is about developing a culture that is centered around privacy. I want the companies I work with to always ask themselves three questions regarding data privacy. The first one is, am I entitled to keep this data? The second, where can I store it? And the third, how do I secure access to this sensitive information? Those are the key critical success factors that come to my mind.
Dutta
I love the phrase AI black box to glass box. That is absolutely about trying to create more transparency in that environment. We'll switch gears to another topic that's been very topical in boards and the C-suite, ESG, and now its linkage with AI. ESG is an important topic. It's top of mind for a lot of folks. So maybe there, Amy, we'll start with you in terms of the whole ESG aspect around AI. It's somewhat open-ended, but I would love to get your thoughts on how you look at the hooks there; there's a pretty close connection, and I'd like to hear your thoughts on that.
Gennarini
Look, I think the availability of AI is transforming the world at rapid speed. I talked about the size of it earlier, too. Used responsibly, AI can add exponential speed, scale and impact to the efforts to solve humanity's existential problems while also building a more inclusive and equitable future. I think that tying this together is going to be something really profound as we as a world walk together on this.
Dutta
Kevin, I wanted to get your perspective, specifically on some of the things we're doing together in the ESG space around social responsibility and the EY-ServiceNow partnerships in this space. There's a lot of good things that have happened. I would love to get your thoughts on your perspectives and how we continue to expand that and leverage it.
Barnard
It's a great question. ESG is a subject, as Amy really dove into, that's top of mind for executives around the world. I talk to hundreds of customers a year. ESG initiatives are a board-level discussion, and if we can attach our solutions, and I mean the collective "our," between all of our organizations here on the call and in the audience, to environmental, social, and governance drivers, that is going to ensure not only executive alignment, but investment, which for many of us is hard to come by sometimes. So first and foremost, ServiceNow, just for folks who may not be aware, we run our own data centers. We have our own cloud. So we have data centers around the world, and trying to continuously improve on the optimization of our systems at a deep, deep level is top of mind to our engineers around the world. And so we're always trying to reduce that energy footprint.
And that's simply on the consumption side of things. Also, on the social good side of things, we have launched within the last year, I think it is now, jeez, it goes by so fast, ServiceNow.org. It's a part of ServiceNow that is completely focused on the nonprofit sector, and it is something that is top of mind to our board. And we really want to make sure that we are showing that we are serious about this. In 2024 alone, we committed to $2 million in grants to nonprofits to really help them invest in using technology in impactful ways that drive greater efficiencies and do social good for their own communities. These are things where we're trying to put our money where our mouth is, but also to lead the way, and these things do require a partnership of a number of different organizations to accomplish goals that we all know in our hearts are the right things to do.
Dutta
Kevin, again, maybe a different view, but from a ServiceNow perspective, how do you integrate governance principles into the AI approach? Do you leverage specific capabilities or frameworks to ensure compliance? Would love your thoughts.
Barnard
There's a famous CEO out there who says that trust is the ultimate human currency. For the listeners of the podcast, I encourage you to go look up who that individual is. But yeah, we are partnering with a number of organizations as well as bringing that thought leadership in-house. This is an ecosystem play, truly. This is the beginning, I think, of the technology finally allowing for true enterprise service management to be realized. And so, we have to think about that across the entire ecosystem. And the enterprise is not just the four walls of a business. It is the partners, it is the customers, it is the entire supply chain that's involved in delivering those services. We're thinking about that across the board and really trying to help our customers meet their regulatory obligations, regardless of where in the world they reside and what industry they're in.
Dutta
I think those are pretty interesting perspectives, and ServiceNow does work with clients in a variety of different industries, sectors, et cetera. So thanks for sharing that. Pascal, maybe we'll switch over to you. It's maybe a slightly different spin on the same question, but in terms of common pitfalls that you see organizations face when implementing AI responsibly, how can clients and teams avoid these? It'd be great to get your perspectives.
Bornet
Yes, Anindo, that's a very important topic. As I said before, I really believe that people are at the center of any AI transformation. When we implement AI in a company, we have a critical role in communicating and educating in order to avoid AI-related anxieties among employees. I like to implement a four-stage approach at the companies I help. The first stage is about informing, informing the employees, the people, on what is going to happen to them with this transformation. AI is coming. What are the benefits they will get from it? What will be the journey to get there? And reassuring them about their future. We will need them. We will need them to be augmented by this technology. It's important to inform them because I've seen, in a few companies, key talents leaving just because those people thought that because AI was coming, they would be made redundant. It's extremely important to communicate. We never communicate enough on those things. So the first pillar is information. The second one is about education. When people know what's going to happen, they now need to understand what the systems are and how to use them. It's about educating them on how to use them, how to grab the benefits from those systems.
The third pillar is about empowering. Empowering those people to have access to those systems and be able to change their day-to-day work, and to see the benefits of those technologies in their day-to-day work, so that they become the best change managers in the company because they really believe in what the technology can do for them. They've been able to see the evidence, so they are the best change managers and the best communicators and evangelists of the transformation. The fourth pillar, which is critical for the long-term, sustainable aspect of the transformation, is about incentivization. It's about making sure that the KPIs, the key performance indicators of people, are aligned with the need for the company to transform with AI. We need to make sure that people use those systems, and that they use them well and responsibly, and that's necessary to manage the right behaviors in the company.
Dutta
No, it's great. I like the way you succinctly positioned that, the four steps you mentioned: informing, education, empowering, and then incentivization. I like the way you packaged that up so that it is well understood and it's a process that you advise clients on. You also touched on something which actually was a perfect lead into my next question around the human elements of AI projects, things like the AI anxiety you touched on. AI anxiety is a real issue. Again, some of us probably want to understand that a little better. Amy, we'll start with you. How can businesses address the concerns and anxieties of their employee populations around AI?
Gennarini
Anindo, I find it really interesting that AI is moving at the speed that it is, yet there's so much concern and anxiety around it. But I think there are ways that we can really combat that. Pascal talked a little bit about that, and that's around the communication, making sure that there's transparent communication, both on a micro and macro basis. It's things like communicating the goals and impacts of AI and the overarching implementation, how it's going to be integrated into the work processes and various roles and responsibilities. I think that's very, very important. Without giving the whole view from a comms perspective, it's hard for employees to work with it. Other pieces are around education and training. I think Pascal did also touch upon that, but it's making sure that there's that ability for the employee to understand the AI technologies, and there are certainly various ones to be understood, and the different types of models, how they're used, how they work, the benefits, and then how they're going to be used in the workplace. Super important. That usage is highly related to the organization's strategy. I think giving that holistic view on the education and training really helps an employee get a better understanding of it.
I think getting them involved in the AI implementation is another way to combat some of that anxiety. What better way than getting them to be part of that implementation so they can feel it and understand it and the work around it? I think it's also about making sure folks understand that this is not a replacement, it's an augmentation. Emphasizing that AI is intended to augment the work, and making sure that there's a human in the loop; there are not machines that are going to fully take over. It's that complement and augmentation that's important. The final point is making sure that folks understand how it's used ethically and fairly within the organization. I've addressed a lot of this already, but that ethical usage and those fair practices are so important; it's not just a machine working willy-nilly, right?
Dutta
Right. No, thanks, Amy. I think to your point, this has to get reinforced, I guess, over and over again, because these things probably don't go away immediately. It's interesting, obviously, Amy does a lot of work from an EY perspective with many clients, ServiceNow. Kevin, yourself and the team do a lot of work with many other clients as well. Sometimes we work a lot with a common set of clients. But from a ServiceNow lens, how do you ensure that the AI implementation is transparent, it involves all levels of an organization, to Amy's buy-in point as well. I'd love to get your perspectives.
Barnard
I think the first piece is recognizing that this is not an IT conversation. This is a business-wide conversation. This is business transformation. As such, I think that probably one of the most critical success factors is executive sponsorship, really for the workers across the entire organization to understand, as Amy rightly mentioned, that this is an augmentation conversation, not a workforce reduction opportunity. And those of us who have been around for a while, we're a little cynical, and we might not believe it, but it is true that this really does allow us to finally, once and for all, eliminate the mundane work and focus on what matters. As an extension, though, we have to keep in mind that if we're going to do something different tomorrow than we did yesterday, that is going to require new governance models, new operating models, a more inclusive posture when it comes to the development and release of these solutions, because now this is a truly end-to-end process conversation. It is no longer the case that the work that happens in finance stays in finance, or that the work that happens in HR stays in HR, or on the factory floor, out in the field, or what have you.
If we zoom out enough, we understand that this is about service delivery. This is about service management, and this is about optimization. And what the technology affords us is transparency by design, by default, really. We see the workflow; we see who the work is assigned to. We see when things have happened. The machine learning and process mining capabilities can help us optimize in ways that we never thought possible in the past. And at the end of the day, what that does, for one, as we've been talking about on this call already, is bake in, by design, governance and compliance needs being met across the board. So that gets the Chief Risk Officer really excited. It gets the CISO really excited. It shows optimization opportunities and faster time to value. That gets the Chief Operating Officer more excited about things, the Chief Product Officer as well. But then, as I mentioned earlier, there's the individual, the human being in the loop that both Pascal and Amy mentioned. They are going to see the value in this, and that is going to drive adoption. They're going to adopt these things because they're getting value from them, not because they were forced to use them.
They're going to have positive experiences. They're going to want to use it more. Give them the opportunity to experiment, of course, make mistakes and learn things. But I think this really gives us an opportunity to focus on self-service in a way that was never possible before because we never got the results that we really wanted. But now that has changed. It is the right answer for the right question at the right time. So, this is really exciting, and it allows us to, again, digitize things today that in the past had not been possible and to link things that in the past had not been possible. And that is a game-changing proposition for many, many organizations.
Dutta
Kevin, I like that, right answer for the right question at the right time. I think that's very appropriate for the times we're in right now.
We're going to talk about scale up. We're going to talk about client use cases, what each of us are seeing, because we're seeing a lot of pilots, a lot of POCs out there. But let's try to see how it becomes real. How do you scale up? What's the strategy behind it? And then more importantly, what are the use cases that folks are using it for? So, Amy, first, maybe on the scale up question over to you in terms of what should organizations focus on in order to scale up these initiatives responsibly? What have you seen work for clients, et cetera?
Gennarini
When we talk about scale, I think the most important thing is to make sure there's both the stand-up and the scale: what do you need to do to stand up the responsible AI organization, practices and framework, and then scale [inaudible]. On the stand-up side, it's highly related to the organization's AI strategy and establishing a commensurate governance framework around that. I spoke about that a bit earlier. But on the scale side, there are probably four or five things to consider to really get grounded and put in place across the organization. I think all three of us at this point, Pascal, Kevin and myself, have mentioned the ethical piece of the practices and making sure that AI is used fairly, transparently and without bias. I think if that's not there, you simply just can't scale. Another piece is around data privacy and security, making sure that you're implementing strong data protection measures to secure sensitive information. Then Pascal did reference this a fair amount: ensuring compliance with your regulatory requirements and data privacy regulations. Without having that program in place and the related controls to support it, you simply just can't scale.
The third piece around scaling is bias mitigation. Having processes in place to continuously monitor and address potential biases by using diverse data sets and implementing fairness audits is super important, to make sure that you're not going to have any biases at setup or as the models and the systems continue. The fourth element around scaling is transparency and explainability. Make sure that you document and understand how decisions are made in the AI systems, fostering that accountability. I referenced that much earlier in our discussion. Then there's developing scalable infrastructure. You want to make sure your infrastructure can handle increasing data and processing requirements, and invest in robust cloud services and compute resources. I think it's going to be next to impossible without getting your scalable infrastructure set up. Then finally, it's important to collaborate across the industry and with partners such as ServiceNow, for example. That collaboration piece across the industry is super important, whether that be with industry experts, academic institutions, or regulatory bodies, to stay ahead of best practices and emerging trends.
Dutta
Amy, thanks for sharing that. We're going to touch on the case study piece of it. Kevin, we'll maybe go over to you and certainly want to understand where clients are using these successfully. What have you seen in the case studies that are successful? I will say maybe to start us off, EY has done a ton of work around AI. And just recently, we've done some major deployments with the Now Assist platform. So certainly, the whole concept of drinking our own champagne together with our partners, in this case, ServiceNow, and then taking it out to help clients get on this journey. So we're really excited about being able to take our own learnings and working with clients. Kevin, maybe some other case studies that you've seen out in the market would be great to share.
Barnard
Do we really need any other case studies? I think you just said it for us. That was pretty good. But no, it's a great partnership. We're big advocates of joint development with our customers and partners for new and emerging technologies, and the relationship with EY is indicative of that. For folks in the audience who might not be familiar, what we call Now Assist is our platform-wide virtual agent capabilities. I'm sure we're all old enough to remember when decision trees were really what was masquerading as bots and virtual agents. We have now on the platform brought in true AI capabilities to do things like playbook generation, catalogue ordering, case resolution, summarization, knowledge article creation. We love making knowledge articles, don't we? Wouldn't it be great if we could have the AI summarize things for us based on things that actually happened in our enterprises? Search Q&A, and one of my favorites, of course, is code generation, because that is really something that if you want to talk about clawing back time or getting a jumpstart on writing code, that is something that is key to a lot of development shops within the organizations. When we think about use cases, we actually believe in being customer zero of our own platform.
And so we call it Now on Now. So ServiceNow using ServiceNow. And again, just to give some more IT-centric examples, when you think about incident case management: an 89% improvement in time to resolve incidents by leveraging our AI capabilities. 99% of our cloud operations changes have been automated. We've reduced the cost of our DevOps operations by six million. And in fact, developers can be a little bit cynical. They didn't actually believe that the AI was going to be as productive as it actually has been. And it has dramatically increased the amount of work that we can do with the number of FTEs that we have. Hugely beneficial. And again, focusing on the work that matters to the organization. But then also it's not just in IT. In finance, for example, case productivity is up 70%, legal has been able to decrease their outside counsel spend by 10%, and in sales processing, procurement and marketing, all across the entire enterprise we see examples where we're having success in deploying, even as Amy was mentioning, AI capabilities that are, quite frankly, in their infancy, both in capability and adoption. That's a really important one that we've been trying to do.
We want to enable people to answer their own questions without having to call or talk to somebody, but then also to very quickly route the right cases to the human beings who can address their needs as quickly as possible. So we're seeing things like a 55% reduction in time to resolve an issue. Case summarization, handing things off from one person to the next, this wonderful world of not having to reenter our information more than once. These are the types of things that, while they sound very simple and basic, have quite frankly been difficult to resolve in the past, but we are finally here and realizing it. And the beauty of it is that it's customers doing it for themselves with their own data, their own information and their own resources, empowering those human beings who have been struggling for a long time, trying to keep track of all the stuff that we have in our enterprises and trying to figure out exactly what may have happened. The AI augments their ability to perform, and that in and of itself is the secret sauce to success in any of these deployments.
Dutta
I love the hard metrics you mentioned, whether it's hard dollar savings or call deflection in the call center scenario. So, I think those are some tremendous use cases that clients can use in their organizations as well. Okay, Pascal, just to round out this topic and this theme around scaling up use cases: in terms of putting a well-thought-out strategy together, I'm just wondering if you can touch on the key components of that strategy. You had some great thoughts earlier. It'd be great to get a quick recap.
Bornet
That's really what's top of mind for all companies: 'I've started my AI transformation. Now, how can I scale, and how can I get the impact that I'm looking for?' We've seen large top-down implementations. I've always seen that in my 20-plus years of experience helping companies implement digital transformations. Those top-down implementations might not be enough anymore. Just to illustrate my point, I'm going to give you three stats. The first is that 90% of companies are still only experimenting with AI on a small scale, meaning only 10% have been able to implement at scale. On the other side, 75% of workers, basically the employees of those companies, already use AI at work. And 78% of those people are using AI tools that are not sanctioned by their company, so basically not officially allowed by their company. What can we learn from that? People seem to be adopting AI faster than their companies. This is a risk for companies if we talk about unsanctioned systems. But again, here, as we said with Amy and Kevin earlier, we need to educate and be clear with users about what those systems are and what risks are involved.
But most importantly, there is an opportunity here, a huge opportunity for companies to scale their transformation. It's about really accelerating those transformations at scale by leveraging what their people are doing. They are currently using AI at work to improve their day-to-day work. Here is what I recommend companies do. It's a three-step approach. First, start by deploying AI in everyday tasks to enhance productivity by 10% to 20%, empowering everyone in the company to use AI the right way in their day-to-day work and making sure that they don't use unsanctioned programs. Second, focus on critical functions such as marketing and product management, basically those use cases that can deliver the highest impact, reshaping them for a 30% to 50% boost in efficiency. Third, explore new business models with AI for long-term competitive advantage, where you will create new revenue streams and boost your business for the long term. This is an easy-to-remember framework for helping companies scale, and it has proven very successful.
Dutta
All right, team. We're getting towards the tail end of our discussion, but this is a very important topic: operational structures. What can organizations do to put the right operations behind AI to make it successful? Amy, in the past, we've talked about some of the key elements that you're advising clients on. It'd be great to get a quick recap of the key elements that you've seen organizations needing to establish in their journey with AI.
Gennarini
Operational structures for something like this are super complicated. I'm glad you asked about it. It requires the enterprise to work together in a way that they're generally not used to. There's a good handful of areas that need to be operationalized. The first one is the AI Governance Board or committee. That's the group that helps lay out the overarching AI strategy and related governance components, including policies and other compliance matters across leadership, IT, legal, ethics, and risk and compliance. The second component of that structure is ethics and compliance in the first line, really in the business. That's the dedicated team focusing on the ethical considerations, bias mitigation and regulatory compliance that we spoke about earlier. Another component of the structure is the data management and privacy office, the group that oversees the collection, usage and protection of data, all those factors that Pascal, Kevin and I have already spoken about. Then we get to the technology department, which handles the actual AI development and its related operations. This includes things like the data scientists, the engineers and the operational staff to keep it running.
Then we've got risk management and audit. This could require a bit of upskilling, because these are certainly new risks. But it's that oversight, the auditing and the review of the particular controls in place to ensure that all the potential risks and issues are being addressed. Then there's the common stuff around the training and education programs that I spoke about, and the communications. Then you've got to make sure that you have a fair amount of performance monitoring and evaluation, and a structure to do just that. Then there's incident response and the actual incident management for things that do come up. And finally, you need regulatory and legal affairs. That's about nine or 10 different areas that I just referenced. Having all these structures work together across the enterprise is certainly a daunting task, but it's something that's needed for standing up and scaling AI and the responsible usage of it.
Dutta
Pascal, maybe just one last one, with a slightly different spin, as we wrap things up: the significance of human-centered design. A lot is being said about it, but in the context of AI projects, how does it impact user adoption and satisfaction? Maybe some quick thoughts from you.
Bornet
Yes, Anindo. I think that's critical, because any AI program is made to be used by users. If we don't know what's important to those users and how to build the right trust-based relationship with them, then we lose. Just a few sentences to illustrate the point. When we talk about transparency through explainability, for example, no one is better placed than the users themselves to determine the level of technicality or the level of detail that is needed to trust and effectively interact with an AI system. When we talk about establishing a privacy-first culture, no one is better placed than users to indicate their personal data boundaries and their privacy expectations. When we talk about fairness and bias mitigation, no one is better placed than users to identify and articulate the subtle biases that a development team has probably overlooked. And here, a very important point: the more diverse the team of users reviewing a system's fairness and potential bias, the better, because that diversity helps to mitigate those issues. Those are just a few examples off the top of my mind, but user-centric design is critical.
Dutta
Great. Thank you, Kevin, Amy, Pascal. I really appreciate your thoughts and insights. I certainly learned a lot, and I'm sure the audience did as well. It's clear that while AI offers immense potential opportunities, it must be approached with a well-defined governance framework, transparent communication and a focus on human-centric design. I think this was hopefully educational for all of us. I appreciate all of you sharing your thoughts and perspectives. One last note before we go, a quick blurb from the attorneys: the views of third parties set out in this podcast are not necessarily the views of the global EY organization or its member firms. Moreover, they should be seen in the context of the time they were made. I'm Anindo Dutta. I hope you enjoyed the show and hope that you will join us soon for the next edition of the EY Think Ecosystem podcast. Thank you all.