EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
Generative artificial intelligence (gen AI) is a disruptive technology. By harnessing the capabilities of AI-driven creativity and innovation, it can transform industries, enhance productivity, and redefine the role of humans in the workplace. As we navigate this transformative era, understanding and adapting to the potential of gen AI is key to staying competitive and thriving in the evolving world of work.
Topics discussed include:
Gen AI 101 and myth busting: Gain insights into gen AI, dispel common misconceptions, and understand its potential.
HR applications: Explore how gen AI is revolutionizing human resources and reshaping the workforce landscape.
Ethical considerations: Delve into the ethical challenges and responsibilities associated with gen AI adoption.
Placing humans at the centre of gen AI: Learn how to take a human-centric approach to your gen AI journey to drive and sustain positive results.
Simon Goupil: [00:00:07] Welcome everyone. We're just going to get started; there are a few hundred people joining the session today. I'm Simon Goupil, a partner in EY People Advisory Services here in Montreal, Canada. You're joining a webinar in the Thinking Ahead series, where today we'll be discussing generative AI and, more specifically, the impact of generative AI on the future of work. Before I introduce our panel, I wanted to let you know that this session is being recorded and will be shared with all participants after the webcast. So, without further ado, let me introduce our three panelists. We'll start with Marie Le Clech. Marie brings 20 years of experience in HR, including 13 years specializing in HR transformation consulting. Her extensive experience spans multifaceted areas including change management, business process optimization, large-scale transformations, and HR system implementations. She is currently an Associate Partner leading the digital workforce transformation practice at EY in Montreal. Marie is passionate about helping organizations navigate their future challenges while harnessing the power of technology and driving impactful digital transformation in the HR space. Bienvenue, Marie. Lindsay Falkov is a seasoned professional with over two decades of experience in education, skills, labour market policy research, and organizational development. With a background in economics and a career spanning both the public and private sectors, Lindsay has built a strong personal brand as a thought leader and innovator, particularly in workforce management and people performance. His experience extends to a wide range of industries, including financial services, retail and mining, making him a valuable asset to our clients and a key contributor to the PAS Workforce Advisory leadership team. Welcome, Lindsay. And Sherif Barrad.
So, Sherif is a dedicated strategy and innovation leader who delivers supply chain and transformation programs to help organizations achieve impactful goals. He leads a team of engineers and scientists developing breakthrough technologies to address top business challenges using artificial intelligence, machine learning, deep learning, natural language processing, and quantum computing. Welcome, Sherif. Alright, let's go to the next slide. On the agenda today, we'll first cover a brief introduction to Gen AI, followed by an examination of how the success of Gen AI hinges on prioritizing the central role of humans, which we call humans at the center here at EY. Then we'll move into our panel discussion, where we'll explore insights with Sherif, Marie and Lindsay. As with all our webcasts, we want to keep things as engaging as possible, so please send us your questions. Use the Q&A box in the bottom right of your screen, and don't feel you have to hold them until the Q&A at the end; we'll be keeping an eye on your questions throughout. Feel free to pose them in the chat, and we'll try to answer as many as we can towards the end of the webinar. Alright, let's get started. Sherif.
[00:03:56] Thank you, Simon. It's a pleasure to be here. We thought the way we would start the presentation is to discuss the similarities and differences across the AI landscape. What we've learned through our conversations with clients is that there's a lot of ambiguity, or fuzziness, when it comes time to differentiate between the key technologies, whether it's machine learning or deep learning: what are the similarities and what are the differences? So, our objective over the next couple of minutes is to explain the landscape very specifically: the similarities, the differences, and how these technologies interact with one another. I'll get started with artificial intelligence. At its most fundamental, it goes back to a group of scientists who got together in the 1940s with the objective of figuring out whether computers could mimic human behaviour, or perform particular operations that were once done by humans. Initially, it started with decision trees. Think about a decision tree where you're faced with specific situations and have to make a decision. In its simplest form: if you're faced with situation X, deploy action plan A; if you're faced with situation Y, deploy plan B. It's a series of operations, and it all depends on the kind of situation you're facing. Over time it became more complex and turned into more sophisticated instructions, which we call algorithms, including not only decisions but also computations like mathematical operations. That's artificial intelligence at its basic core. If there is one key sentence we'd like you to remember about artificial intelligence at its fundamental state, it's that computers are explicitly programmed by humans.
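The explicit, rules-based branching Sherif describes can be sketched in a few lines of Python; this is a hypothetical illustration, with the situation labels and action plans invented:

```python
# Rules-based "AI" in its earliest form: every branch is explicitly
# programmed by a human, and there is no learning or self-correction.

def choose_action(situation):
    """Return the action plan a human hard-coded for this situation."""
    if situation == "X":
        return "deploy action plan A"
    if situation == "Y":
        return "deploy action plan B"
    return "no rule defined for this situation"

print(choose_action("X"))  # deploy action plan A
```

The key property is the last branch: faced with anything outside its hand-written rules, the program has no way to improve itself.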
So, there's no room for self-learning; it's very explicit. If you're faced with a situation, perform this particular operation. Moving on to machine learning: one similarity with artificial intelligence is that both attempt to mimic human behaviour. The difference is that machine learning usually refers to a set of techniques, which are sometimes supervised and sometimes unsupervised. When you unpack the world of machine learning, there are situations where a human supervises the computer and helps it make decisions, and other situations where the human does not supervise the algorithm as it performs specific operations. The fastest and easiest way to understand this is the image classification example. If I had 10,000 images to classify in a supervised setting, I would feed the system 100 dog pictures and label them as dog pictures. The algorithm starts analyzing the pixels and learns, say, that dogs have round ears. Then I would move on to labeling and feeding it cat pictures, and as the algorithm analyzes the pixels level by level, it starts realizing that cats have pointy ears. So, what do I do with the remaining 9,800 pictures? I just feed them in, and the algorithm now has an idea of how to classify them. You can do the same for customer profiles, segmenting your customers using classification, and there are many other use cases as well. If there is one key takeaway we'd like to leave you with on machine learning, it's first that the human is in the loop, supervising the algorithm, and second that these techniques are self-correcting: they improve over time, adjusting based on the feedback they receive.
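As a toy illustration of the supervised idea, here is a minimal sketch in Python. The feature numbers standing in for "pixel analysis" are made up, and a real classifier would learn from thousands of labeled images:

```python
# Supervised learning in miniature: a handful of human-labeled examples,
# then a 1-nearest-neighbour rule that copies the label of the closest one.
# Each feature pair is an invented (ear pointiness, snout size) score.

labeled_examples = [
    ((0.2, 0.9), "dog"),   # rounder ears, larger snout
    ((0.3, 0.8), "dog"),
    ((0.9, 0.2), "cat"),   # pointier ears, smaller snout
    ((0.8, 0.3), "cat"),
]

def classify(features):
    """Label an unseen example by copying its nearest labeled neighbour."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_examples,
                  key=lambda example: squared_distance(example[0], features))
    return nearest[1]

print(classify((0.85, 0.25)))  # cat
print(classify((0.25, 0.85)))  # dog
```

The human stays in the loop by providing the labels; everything after that is the algorithm generalizing from them.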
They will try different iterations until they get to a better outcome. Then we move on to deep learning, a topic we'll be talking about for the rest of this presentation. In terms of similarities, it also attempts to mimic human behaviour. The difference is that it is a more sophisticated approach. Why? Deep learning algorithms allow you to usher in new and unconventional layers of intelligence. What that means is that instead of just looking at structured data from ERP systems or documents, you can bring in audio and video, giving the algorithm a multidimensional problem: it takes all of these different inputs and tries to get to a certain outcome. They are very powerful algorithms, and very often, if you don't get to the right answer, because the model is multi-layered you can go back into any layer and understand where the error came from. If that doesn't make sense, we'll animate it for you on the next slide. But before we go there: generative AI. Generative AI is itself a form of deep learning, just an even more sophisticated form, through the introduction of a very specific algorithm called the transformer. I'll have the pleasure of discussing what that algorithm is, how it works, and why generative AI has become so popular. So, we'll stop here and move to the next slide, where we'll show you exactly how a neural network works by demonstrating how a large language model like ChatGPT typically operates. A neural network, if we click on it, has three different layers.
Okay. It's architected much like the human brain. As humans, when we're faced with information, we have different layers of information coming in and we have to make a decision. Depending on the decision we're trying to make or the objective we're trying to attain, some information may be more important than other information. That's what you see here: on the left-hand side you have input layers. If we click on the input layers, or input factors, these are some of the inputs that feed a large language model. It will scrape over 400 million web pages; it will look at books and literature, scientific and academic papers, to be able to give you an answer when you ask it a question. If you click on weights: every single one of these layers of intelligence has a certain level of importance, and that level of importance is called a weight. When you think about the 70%, what the model did is take all of the elements of information, understand the question you're asking, and associate a certain level of importance to the different feeds to try to get you the right answer. In some cases, the answer may not be accurate the first time. That's where a technique called back propagation comes in; you can think of it as an iteration. What we're seeing in the animation is the model going back and forth in an iterative approach, just as we discussed with machine learning, until it gets to a more precise outcome. That's typically how it works. As an organization, you have different layers of intelligence: people conversations, documents, policies. Think about having a central repository where you store all of that information. Then you ask questions, and the computation understands what you have in terms of information.
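To make the weights and back propagation concrete, here is a deliberately tiny sketch in Python: a single "neuron" with one weight, iteratively corrected toward the target outputs. Real networks have millions of weights across many layers; the training data and learning rate here are invented for illustration:

```python
# One weight, repeatedly corrected: the iterative back-and-forth described
# above, reduced to its simplest form (learning the rule y = 2x).

weight = 0.0            # the "level of importance" starts uninformed
learning_rate = 0.1     # how large each correction step is

training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target)

for _ in range(200):                         # iterate toward a precise outcome
    for x, target in training_data:
        prediction = weight * x              # forward pass through the "network"
        error = prediction - target          # how far off were we?
        weight -= learning_rate * error * x  # back propagation: adjust the weight

print(round(weight, 3))  # converges to roughly 2.0
```

Each pass nudges the weight in the direction that shrinks the error, which is exactly the self-correcting behaviour the animation shows.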
It understands the question, and it does all the computation in the middle to get you to the right answer. That's the most technical part of today's conversation. We'll move to the next slide, and I'd like to take a couple of minutes to talk quickly about GPT. There's a small animation here, but we'll skip it and move straight ahead. Some of you have already been introduced to ChatGPT; it hit the market really strongly, I would say at the end of December and beginning of January. "Chat" stands for the conversational part, and GPT is the algorithm powering the platform. A lot of people got great use out of this platform because you could just go into it, ask questions, and get answers. So, what is GPT and how does it really work? Every single letter stands for something. The first element that's important to understand is the G: generative. Generative means that every single time you ask it a question, it actually understands the question through something called contextual understanding. You write a sentence; it looks at the words in that sentence, makes associations, and understands what you're trying to ask. What's interesting about the generative part of GPT is that it will generate an answer in the form you want to see it delivered. We've tried many different experiments where we ask the system to do some analysis and show the result as bullet points, a table, a pie chart, etcetera.
What's really important to note here is that it takes the information, understands the question, and generates an answer in the specific form you want it delivered. These models are trained. ChatGPT has different versions: 3.5 was trained with 175 billion parameters, and 4.0, the latest version, has been trained with around 1 trillion parameters. The models are becoming more sophisticated and more detailed, and the more you ping systems like these, the more elaborate and complete the answers you get. And finally, T: transformer. That is the name of the algorithm published back in 2017 by Google Research and Google Brain. A group of scientists realized that when you read a sentence, certain words in that sentence are all you need to understand it. Going back to the neural network animation from a few minutes ago, the weights associated with certain words in a sentence are enough to understand what the sentence means. The transformer is a very powerful and effective algorithm that lets you process long sequences of data much faster and generate a response to whatever question you ask more effectively. So, how can you use this technology? Moving to the next page: for those who have used ChatGPT, that's a perfect example of what you see on the top right from OpenAI, and there are a lot of other players in the ecosystem as well. At a high level, you can use it to generate text, speech, audio, images, and video. When it comes to coding, you can generate code just by using simple language. Say you want to create a piece of code.
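The idea that weighting a few key words is enough can be shown with a toy attention-style calculation in Python. The word vectors here are invented; real transformers learn them from data and compute these weights across many heads and layers:

```python
import math

# Toy "attention weights": score each word's relevance to a query vector,
# then normalise with a softmax so the weights sum to 1. The heavy weights
# land on the words that carry the meaning of the sentence.

word_vectors = {
    "the":      [0.1, 0.0],
    "mortgage": [0.9, 0.8],
    "rate":     [0.7, 0.8],
    "is":       [0.1, 0.1],
}

def attention_weights(query):
    """Softmax-normalised dot-product scores of each word against the query."""
    scores = {word: sum(q * v for q, v in zip(query, vec))
              for word, vec in word_vectors.items()}
    total = sum(math.exp(s) for s in scores.values())
    return {word: math.exp(s) / total for word, s in scores.items()}

weights = attention_weights([1.0, 1.0])
print(max(weights, key=weights.get))  # "mortgage" carries the most weight
```

Filler words like "the" and "is" end up with tiny weights, which is the intuition behind attending only to the words that matter.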
You want to develop an application, but you don't need to be a coder: the system can understand what you're trying to build and prepare a piece of code for you. The flip side is also true. If there's a piece of code that's already written but you don't understand it, you can highlight it and the system will explain exactly what the function is attempting to do from a business process standpoint. There are a lot of different use cases, but the most popular at the moment are certainly text, images and video, audio, and auto-coding. We'll move on to the next slide. These are additional use cases, and I'll give you a very quick example of one we just deployed, covering the contact center and knowledge management. Think about a financial institution with agents across all of its branches. Clients come in with very specific questions regarding, say, a mortgage application. Sometimes the agents in the branches don't understand the process end to end and don't have the answers to some of the more complex customer questions. So, there's a call center made up of experts with a lot of experience in lending and different types of situations. The issue is that when agents at the branches contacted the experts at the call center, it would take time to get through, and by the time you explained what you were trying to do and got an answer, another 7 to 8 minutes had passed. The whole process could take around 15 to 20 minutes.
So, what we did was work with this client to take all of their procedures, rate tables, and client profiles, and create a multi-dimensional large language model where the agents can go into a Teams channel and ask a question directly in Teams. Teams will not only give them the answer, it will also reference the documents it used to produce that answer. So, instead of waiting eight minutes to get through the line and another seven minutes to get an answer, agents are now getting answers in less than 30 seconds. That's one very efficient way of deploying a large language model to harness all of the information an organization may be sitting on. Okay, so, AI in the working world. If we move on to the next page: every couple of years there are new technologies; think quantum computing, or machine learning and deep learning a few years ago. One thing we can say, as we look across the market and watch clients adopt various technologies, is that this is the one we're seeing accelerate at a speed like never before. Why? Because there are significant efficiency gains associated with it. Think about an HR team that can now generate a job description in a matter of an hour instead of a few hours or days. Think about a procurement team that used to take 3 to 6 days to analyze vendor responses; now, with a large language model, they have a bot that summarizes the offers and prepares negotiation strategies for the procurement agent. Everything is being compressed in terms of time, effort, and manual tasks. That's really the benefit.
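A minimal sketch of the retrieval step behind a use case like this, in Python. The document names and contents are invented, and a production system would use embedding-based search plus an LLM rather than raw word overlap:

```python
# Answer-with-references in miniature: score each internal document against
# the agent's question and return the best match together with its source,
# so the response can cite the document it drew on.

documents = {
    "mortgage_renewal_procedure.pdf":
        "steps to renew a mortgage and update the client rate",
    "rate_table_2023.xlsx":
        "the current posted mortgage rate table by term",
    "vacation_policy.docx":
        "employee vacation entitlement and approval process",
}

def retrieve(question):
    """Return the (source document, overlap score) of the best match."""
    question_words = set(question.lower().split())
    def overlap(text):
        return len(question_words & set(text.lower().split()))
    best = max(documents, key=lambda name: overlap(documents[name]))
    return best, overlap(documents[best])

source, score = retrieve("what is the current mortgage rate")
print(source)  # rate_table_2023.xlsx
```

Returning the source alongside the answer is what lets agents verify the response against the underlying procedure or rate table.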
And this is why we're seeing a very, very fast growth in the adoption of generative AI. So, with that said, I guess I'll pass it over to Marie to talk a little bit about humans at the center.
Simon Goupil: [00:19:19] Actually, I think just before we get there, we're going to transition to Lindsay, who's going to talk to us a bit about the concept of humans at the center, now that we've touched on the technology portion. Lindsay.
Lindsay Falkov: [00:19:31] Thank you, and great to be here with everyone today. I'm going to shift a little bit and think through the organizational and workforce implications of deploying and adopting Gen AI. What is clear to us is that it will require every organization to think through the implications for their strategy, organizational structure, operating model, and workforce, and for their approaches to talent management and leadership as well. Everything we're going to be talking about from here on reflects our view that however you decide to leverage AI, its success is really going to depend upon placing humans at the center. To touch briefly on the different components of the evaluation organizations will need to go through as they look to apply Gen AI: I think it starts with thoughtful application and use case selection embedded in your organizational strategy, really understanding how Gen AI will support new opportunities for value creation within your business, while alongside that giving appropriate attention and consideration to the risks that accompany implementation of Gen AI. Cultural considerations are absolutely key: what values, behaviours, and leadership styles will need to shift in order to create an environment that's receptive to adoption of Gen AI? Old ways of working will need to be replaced by more innovative and experimental approaches, where teams deliver through fail-fast, learn, and improve cycles that get you to understand what the technology can and can't do and where it's strongest, and where trust and transparency are really going to need special attention.
Ways of leading, as I said, will also need to evolve to manage risks effectively while building more collaborative work environments and removing silos, so that the technology can achieve the kind of results it's capable of. Operating models and structures need to be evaluated to support the immediate as well as the long-term evolution of Gen AI applications. We need to understand how Gen AI is impacting core processes, optimize our org structure, and in particular get really clear on how roles and responsibilities are going to change so we can guide our people. Governance will need to evolve as well: we'll need the right oversight and expertise to manage the risks and threats posed by Gen AI. In addition, we're going to see, I think, shifts in how we measure performance, as well as, for example, how we recruit for roles that now work alongside Gen AI. Last but not least, and I think critically, there are strategic and practical implications of providing people with the skills to succeed, and that's going to be foundational. Recent research shows that up to 63% of US employment will be complemented by Gen AI, requiring significant upskilling and reskilling. A lot of this may sound familiar to anyone who has gone through a big transformation. What makes Gen AI unique, particularly from a people perspective, are the risks associated with it, the potential productivity gains, and certainly the breadth of impact on people. Where to begin? Experiment in safe ways, considering all the risks. Conduct a change impact analysis early to understand the different implications across the organization, and start upskilling and reskilling your people so you prepare them at the very outset. With that, I'll hand back over to you, Simon.
Simon Goupil: [00:23:48] Thank you, Lindsay. We can go to the next slide. Alright, we have now reached the panel discussion portion of this webinar. Sherif gave us a foundational, though detailed, understanding of generative AI, and Lindsay explained the central role that people play in this transformation. Thank you both. We'll now start the first panel question with Marie. Marie, in what ways is Gen AI transforming human resources functions and practices?
Marie Le Clech: [00:24:29] Thank you, Simon. Let me start by taking a 10,000-foot view. Before diving into the practical ways Gen AI is transforming HR functions and practices, I believe it's important to consider this question from a mindset perspective. For 20-plus years, ever since we saw the shift from paper to HR transactions and processes being handled by technology, we've been talking about people, process, system: people initiate a process, supported by the available data, leveraging an HR information system. With Gen AI, we're seeing the shift from people-process-system to system-data-process-people. The Gen AI technology, powered by data, executes processes that are consumed by people. In other words, Gen AI can now be at the forefront of some key transactions; it can do all the heavy lifting, initiate processes, and provide insights and valuable outputs. But this doesn't mean that HR will be replaced by AI. It means that AI can enhance the HR function. Really, this transformation of HR functions and practices is just a continuation of a movement we've observed ever since HR information systems were introduced. Since the late '90s and early 2000s, we've seen the burden of many HR admin tasks removed from HR teams so they can focus on more value-driven initiatives. This is when we saw manager and employee self-service introduced, alongside automated workflow approvals and automated notifications. With Gen AI, we're going even further in this logic: tasks and processes that require time and effort from the HR team can now be done by Gen AI. Again, this doesn't mean the end of HR. It means that HR will be able to focus more on people matters, put humans at the center, be more productive in completing certain tasks, and get more insights to make better decisions.
Simon Goupil: [00:27:18] Okay. I see we're reaching about the middle of this webinar, so let's dive a little deeper. Marie, if we go to the next slide, could you describe potential use cases for HR that we're foreseeing in the future? Oh, actually, we can come back. Okay.
Marie Le Clech: [00:27:38] Yeah, absolutely. We can see a lot of use cases where Gen AI can be leveraged throughout the employee lifecycle. I personally think that Gen AI will be the next Excel for HR operations and will be leveraged across every HR function and sub-function. Let's start with talent acquisition and onboarding. Gen AI can be leveraged to optimize the process of creating job descriptions; Sherif talked about that. Through tools such as ChatGPT, you can draft a job description in a few seconds. By no means should this be seen as your end product, but it will fast-track the creation of your first draft. Gen AI can also be leveraged to screen resumes and build interview questions, and for onboarding, new hires can get a personalized experience from Gen AI. Next, learning and development. Gen AI can help you identify the skills and learning an employee needs to develop based on their current and potential new roles in the organization. It can create a personalized learning plan and even create personalized learning content. And Gen AI can not only assist in learning and development activities, it can also provide insights on the knowledge, skills and abilities you will need in the future. And you know, Simon, the list goes on. In performance management, Gen AI can enhance the traditional performance process by analyzing data on achievements, goals and feedback, providing real-time performance insights, identifying skill gaps, and suggesting development and learning opportunities. Again, what's important is that Gen AI can enhance internal processes by automating and bringing efficiencies, but also by providing insights. The best example that comes to mind in terms of insights is how Gen AI can be leveraged for engagement surveys.
I know we have a lot of HR people in this webinar, and we all know that exploring the results of an engagement survey and drawing recommendations is very time-consuming. With Gen AI, you can analyze engagement survey results, identify trends and issues, and suggest actions to enhance, for example, employee satisfaction and well-being, and you can do that in no time. Again, it's a draft, but a draft that will allow you to save a significant amount of time. So, not only can Gen AI increase efficiency and productivity, but what's interesting is the predictive and probabilistic aspect of the recommendations stemming from the analysis.
Simon Goupil: [00:30:47] Okay. So that's what we're seeing a little bit everywhere in different industries with our customers, but can you talk about the use cases that we're seeing here at EY.
Marie Le Clech: [00:30:57] As you may know, it's been all over the news: EY has made a large investment in Gen AI with the objective of transforming the organization, helping transform our clients, and contributing to the transformation of society. For HR specifically, we are leveraging our own in-house ChatGPT. It's a really cool tool called EYQ, and it can be seen as an HR assistant or an AI co-pilot; it can actually be used by any employee who may have questions. The HR teams are using it for writing job descriptions, or for any prompt where support can be beneficial. That's one use case. Another use case is performance management. The way performance management works at EY revolves mainly around collecting feedback. So, we are currently setting the foundation to leverage Gen AI to collect all the feedback an employee has received and provide insight on their overall performance, along with personalized recommendations on development activities. We know feedback can sometimes be worded in a way that is not constructive, so Gen AI can help every counselor put forward what a pathway to success should look like. A third use case is learning. We're leveraging Gen AI for personalized learning content, but also from a learning management perspective. For example, we are currently developing automated scheduling transactions: if some of our consultants are not assigned to projects and are on the bench, Gen AI will automatically book them in Outlook for training activities that correspond to their level and the skills they need to develop. You can see that as a scheduling assistant. But it doesn't mean the traditional LMS, the learning management system, will go away; it means it will be enhanced by Gen AI.
And, you know, the final Gen AI use case I can see in HR at EY really resides in people and organization transformation. We believe that transforming the organization, and transforming oneself, is extremely important. So, a lot of effort is currently being put into skilling up EY employees on Gen AI and ensuring we are prepared for the future. As consultants, it's important that we walk the talk and experience Gen AI firsthand to be able to support our clients in their transformations.
Simon Goupil: [00:33:48] Alright. Well, thank you, Marie, for covering the impacts of Gen AI, more specifically on the HR function. And I know Halloween has just passed, right? But let's talk about something a bit scarier. So, Sherif, if we go to the next slide: what are the risks associated with adopting Gen AI, from your perspective?
Sherif Barrad: [00:34:09] Sure. There are many risks, but in the interest of time, I'd like to talk about two, based on EY deploying the solution at many of our clients. The first is change management, training, and education, and underestimating the importance of that. The second is around solution architecture. So, let me start with change management, training, and education. When we first started working with clients last year, we decided we were going to double down on change management and communication. What we did is really take the time to create awareness about what the technology is, how it can be used, but most importantly, what you should not expect out of that technology. That was very important, because there's a lot of confusion about what this does. No matter how well the LLM you deploy works for the organization, if people don't understand how to use it, and what it can and cannot do, that's a huge risk, especially from a deployment success standpoint. The second thing we learned is that it's very important, as you deploy these models, to have some sort of multifaceted approach to collecting feedback on the model. Why? Because clients want results fast, and they want good-quality results from these LLM models. So, we have a very specific playbook and very specific templates, and we look at the input/output sequences of the responses as the pilot group is using the LLM. These are all things that are very important to do to make sure that, one, people adopt the technology, and two, the span of time from the first time they try it until it's actually good to be used in business is very short. This is a risk.
And if you don't address those two elements, it's going to be very difficult to scale enterprise-wide. So, that's the first part. The second part is around solution architecture, and it's all about responsible AI. We have conversations and panels where we spend an hour just talking about responsible AI. But what's really important: there are three studies that were published just in the last quarter, one even from Microsoft, suggesting that 70% of employees are using LLMs without letting their employers know. What matters here is that sometimes the answers you get from these models are answers you are using to inform your customers or end users, and that becomes a very significant risk for the organization's reputation. So, I think those two elements are really important. And again, there are many others, like fairness, bias, scalability, quality control, but those first two, change management with training and education, and making sure that you have a solid solution architecture, are the two areas you definitely want to invest in to make sure you have a successful deployment. Back to you, Simon.
Simon Goupil: [00:37:18] Merci, Sherif. I think we now better understand some of the risks, right? There are several, and we've seen how to mitigate them. We're actually tracking pretty well; there are about 20 minutes left in the webinar. What I'd like to do is go back to people. So, Lindsay, if we go to the next question: how can businesses strike a balance between automation through Gen AI and maintaining a human-centric workplace culture?
Lindsay Falkov: [00:37:49] Yeah, I think, you know, the large potential productivity gains from Gen AI, together with the ability to mitigate risks, depend, and it sounds a bit trite, but it's true, on people being able to work alongside the Gen AI. So, balancing human and machine in this context is a necessity; it's certainly not a nice-to-have. I think it's worth recapping a couple of examples of the ways in which humans will actually work with Gen AI. My colleagues have already pointed to some of these, but just to recap: Gen AI will generate data-driven insights and recommendations, and that enables more informed and accurate decision-making by people. The risks Sherif has just talked about, surrounding privacy or hidden biases, need people who can mitigate them. And Gen AI will take process automation to a new level, which will allow people to do more valuable work and increase time for people to collaborate and innovate. So, to fully benefit from these opportunities and from the technology, the workforce needs to be empowered to embrace the new capabilities. And to do this, we need to make sure we bring people along on the Gen AI implementation journey from day one. This is borne out by research that we've recently done with Saïd Business School, University of Oxford, which showed that organizations that placed humans at the center of their transformation were over two and a half times more likely to be successful than those that don't put their people first through that process. So, to enable these human-machine interactions, I want to highlight three core cultural traits that really help us create that more human-centric culture. The first is innovation and experimentation.
Another piece of research we've done recently shows that organizations that have achieved what we call a high level of digital maturity were focused on building, firstly, an experimental mindset; secondly, organizational agility; and thirdly, data-driven decision-making. And these organizations were 14 times more receptive to adopting new technologies than less digitally mature organizations. The second trait is a growth mindset: nurturing an environment in which learning from mistakes and improving is seen as core both to individual and to business success. And then transparency is the third one: allowing people to share their ideas and concerns openly, and building transparency into the Gen AI so people can trust the information they're getting from it. I'm going to end by talking about one further key issue in creating that appropriate culture, and that's a few points on how leaders will need to show up a bit differently, and why it will be so important to clarify the roles that leaders will play in enabling Gen AI. Leaders are going to need to, firstly, adopt a learning mindset: they're going to need to learn with their teams. They're going to need to be adept at managing change. They're going to have to answer the tough questions. They're going to need to think in possibilities, looking at the possibilities in front of them while at the same time helping their people adjust to the new environment and to their new roles, and concentrating on driving improved outcomes from the combination of their people and the Gen AI. Leaders in this context cannot underestimate, or we cannot overstate, just how important trust and empathy are for better outcomes. Shifting from a me mindset to a co-created we mindset really is the basis for unlocking productivity and connection through teaming. And then, early on in the Gen AI journey, leaders play a critical role in educating employees on what Gen AI is and what it's not.
And then two-way communication between leaders and their people is important, as employees ask questions, raise concerns, and, I think most importantly, provide critical feedback on the Gen AI applications in the business. So, these are some of the key activities that really place humans at the very center of this disruptive technology and its deployment and adoption.
Simon Goupil: [00:43:00] Alright. Thank you, Lindsay. I saw you started with technology and the human, then pivoted to culture a little bit more, and then to leadership and other skills. But I'd like to build on the skill side specifically and go to Marie. In your experience, Marie, what skills do employees need to thrive in a workplace where generative AI is prevalent?
Marie Le Clech: [00:43:29] So, Lindsay mentioned a few cultural traits already, but if we think skills, we think practical skills. I think one fundamental skill is technological literacy: understanding the different technologies, understanding the AI concepts, the ability to work with AI-driven tools and platforms. This is critical. In that sense, knowing how to use prompting, which we talked about at the start of this webinar, with Sherif talking about context and asking the right questions, is a key skill that needs to be developed. A prompt is a specific instruction, an input given to an AI system to generate a response, and it serves as the starting point for the Gen AI model to generate an output. Really, the quality and specificity of the prompt are what will influence the relevance and coherence of the AI-generated response. This means that in order to gather the right information, one needs to develop the ability to ask the right question in the right way. So, that's the first skill. Second skill: curiosity, adaptability, flexibility. These are also key. Things are moving very fast, and it's important to be open to learning new technologies and processes, and to have a mindset geared towards continuous learning and upskilling. But this goes with critical thinking and ethical, responsible behaviour, right? Gen AI does not replace humans. It can enhance, facilitate, and optimize some tasks, but users should be aware of potential bias. They should use Gen AI as a foundation, a draft, a starting point, and adhere to ethical guidelines when working with AI systems. And finally, I think it's important, and Lindsay mentioned this, to be open to change and creativity. A change-ready outlook is essential: employees should be open to revisiting the way they work, and be creative and ready to embrace new ways of doing things. The change that we will experience with Gen AI is inevitable.
And the change is not coming, it's already here. We have to face it, we have to embrace it, and we all have to learn how to be ahead of the curve.
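Marie's point that the quality and specificity of a prompt drive the quality of the response can be illustrated with a toy comparison. The two prompts and the scoring heuristic below are invented purely for illustration; they are not a real measure of prompt quality.

```python
# A vague prompt leaves the model guessing about audience, scope, and format.
vague = "Write a job description."

# A specific prompt supplies context, constraints, and the expected output format.
specific = (
    "Write a job description for a senior HR analyst in Montreal. "
    "Include 5 bullet-point responsibilities, 3 required skills, "
    "and keep it under 200 words."
)

def specificity_score(prompt: str) -> int:
    """Toy heuristic: count the concrete cues (role, place, format, limits)
    present in the prompt. Real prompt evaluation is far more involved."""
    cues = ["senior", "Montreal", "bullet", "skills", "words"]
    return sum(cue in prompt for cue in cues)
```

Here `specificity_score(specific)` counts all five cues while the vague prompt contains none, mirroring the difference in how much a model has to guess.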
Simon Goupil: [00:46:24] Merci, Marie, for these insights on the skills required. There are about 12 minutes left in the session, and before we open up the floor, we have received a few questions. I'd like to get your perspective, Lindsay, on the next slide: how can organizations support employees in upskilling and adapting to the changes brought by Gen AI, based on the skills that Marie was just mentioning?
Lindsay Falkov: [00:46:55] Yeah. Maybe just to begin by sketching a quick picture of the scale of Gen AI's impact on roles and skills. It's early days and we've still got a lot to learn, but I think the current research is instructive. The adoption of Gen AI capabilities, research is showing, could result in about 30% of hours worked today, globally, being automated by 2030, and could expose the equivalent of 300 million full-time jobs to automation globally. It's estimated that about 7% of current US employment will be substituted by AI, and 63% complemented. AI will mostly complement human labour and enhance productivity across sectors, but there will be some displacement in roles like e-commerce, admin roles, food, customer service, and production-related roles. On the other hand, more stable roles are envisaged in health care, management, transport, and the STEM occupations. Business functions are going to be impacted significantly. Gen AI will probably impact every single business function, but it's envisaged to have the greatest impact, in terms of cost savings, on customer operations, marketing and sales, software engineering, and R&D. These areas, the research is showing, could account for around 75%, it's estimated, of the annual value from use cases. And then workers in low-wage jobs and without college degrees are much more likely to have to change occupations by 2030. So, I think this underlines just how important the investment in workforce training and empowerment is to the adoption of AI capabilities, but also to helping our employees adjust effectively to this new reality. So, what are some of the things we can do more effectively to help this process?
The first is we've really got to understand how roles and skills are evolving with the new technology, applications, and use cases, and we need to be able to clarify the new tasks, the performance metrics, and the skill sets required by our employees to work alongside the Gen AI. The next point is that we need to know what skills our people actually have and, most importantly, where the big skills gaps are, so we can repurpose learning budgets and focus those investments on our most important and urgent skills to be developed. And alongside that, we need to make sure we develop learning pathways and content focused on building the skills related to where those gaps are: the skills we need, not the skills we don't. A couple of final points. Leaders and managers will need to pay special attention to coaching and supporting roles, helping employees to build the capabilities and confidence to work in new ways and get the right results from Gen AI. And then, as Marie has spoken about, I think finally we'll see a shift from managing processes to focusing on writing good questions, prompt engineering, but also shaping better-quality outputs, advice, and decisions using the analytical and creative results of Gen AI. This transition is not going to be easy for many employees and many managers, and hence, I think, why we are starting to see an increase in the appetite for investing in learning associated with Gen AI and other new-generation technology. With that, I'll hand it back to you, Simon.
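The skills-gap step Lindsay mentions, knowing what skills people have and where the gaps are, reduces at its simplest to a set difference between required and current skills. The sketch below is a minimal illustration; the role names and skills are hypothetical.

```python
def skill_gaps(
    required: dict[str, set[str]],
    current: dict[str, set[str]],
) -> dict[str, set[str]]:
    """For each role, return the required skills its holders don't yet have."""
    return {role: needs - current.get(role, set()) for role, needs in required.items()}

# Hypothetical role and skill inventories.
required = {"HR analyst": {"prompting", "data literacy", "change management"}}
current = {"HR analyst": {"change management"}}

gaps = skill_gaps(required, current)
# gaps["HR analyst"] is the set of skills to target with learning investment.
```

In practice the inputs would come from a skills taxonomy and an assessment of the workforce, but the principle, directing learning budgets at the difference between "required" and "current", is the same.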
Simon Goupil: [00:51:07] Thank you, Lindsay. And at this point we'll definitely open the floor, right? We have received a few questions, but if you have more, please go to the Q&A box and ask. We'll try to answer as many as we can in the seven or eight minutes left in this webinar. So, we'll start with the first one we received, and maybe I'll direct this to Sherif. First question: do you foresee a future where the majority of online content is simply regurgitated ChatGPT content, as the program continuously scrapes itself and as people pass its output off as their own work, more and more, for sites and blogs? So, essentially, because people are reusing and posting the same content, would that content stop evolving?
Sherif Barrad: [00:52:02] Yeah, it's a really good question. So yes, it's definitely going to be a risk. But there are some things you can consider doing to get the best out of these LLM models. When Marie was talking about skills, the first thing that is very important to note is that back then it was all about getting all the data, and then it went into doing all the analytics. Today, with the introduction of generative AI, the shift is towards creativity. And I know Marie and Lindsay both talked about creativity, especially in the prompt engineering phase. Prompt engineering is the way you ask the question. So, there are a couple of things people are doing today to get accurate answers to their questions, so that they don't get the regurgitated answers you typically get. There's definitely some work that needs to be done from a prompt engineering standpoint. One of the things we see a lot, and that we've used with our clients, is something called personas via system messages. Before you even ask your question, you set up the system: you say, as an LLM model, you are a customer service agent who is going to be answering an insurance question. So, you set the LLM in a specific context, you give it a specific persona, and then you launch the question. We experimented with that, and we saw that when you give very specific instructions, the answers you get are much better. The sequence in which you organize your sentence also has an impact on the results, and the completeness of your question has an impact on the quality of the information you're going to get. The second element, also to avoid this regurgitation, is leveraging different AI models. Today we talked about, you know, OpenAI and ChatGPT.
But if you were to study the ecosystem, you'll notice that there are over 50 LLM models. There are open-source models, vendor-hosted models, and industry-specific models, like, for example, BloombergGPT by Bloomberg, which is focused on financial and market data. You've got one for health care from Epic. So, there are a lot of different models, and some of these models work better on certain questions than others. Some work better on scientific articles, some on business documents. So, you have to explore the models and try different ones to get the right answer, in combination with prompt engineering, just like my colleagues Marie and Lindsay talked about. Back over to you, Simon.
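The personas-via-system-messages pattern Sherif describes can be sketched as a chat-style message payload. This is a minimal illustration: the helper name is an assumption, the persona text is invented, and the actual network call to a model is omitted.

```python
def build_messages(persona: str, question: str) -> list[dict]:
    """Frame a question with a system-message persona before sending it to an LLM.

    The returned list follows the `messages` shape used by chat-style LLM APIs
    (a system message setting context, then the user's question).
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "You are a customer service agent answering insurance questions. "
    "Cite the relevant policy clause and keep answers under 100 words.",
    "Does my home policy cover water damage from a burst pipe?",
)
# `messages` would be passed to a chat completion endpoint
# (e.g., OpenAI's chat API accepts a payload of this shape);
# the call itself is omitted here.
```

Setting the persona first constrains the model's context before the question is ever asked, which is exactly the sequencing Sherif recommends.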
Simon Goupil: [00:54:34] Thanks. We'll go to another question we got from participants, which is another one on the spooky track, right? How can we cultivate a robust trust ecosystem between users and Gen AI systems, particularly when these systems themselves caution us about the potential for inaccuracies in their output?
Sherif Barrad: [00:55:00] Sure. Is this one for me, Simon?
Simon Goupil: [00:55:03] Yes, please.
Sherif Barrad: [00:55:04] Okay, sure. Let me give you a concrete example. Like Simon was saying at the beginning, we work with engineers and scientists who not only deploy generative AI but are also doing a lot of research and development. For one very particular client, we had to prevent exactly that from happening. They're pinging over 6,000 different documents, and if the answer is not in the documents, typically the LLM will try to fabricate an answer. There are some very advanced techniques, from a programming standpoint, that allow you to make sure that if the answer cannot be referenced to an existing policy of the organization, the system is prevented from giving you a specific answer. So, there are things you can do internally as an organization. That's step one. Step two is that we've set up reference architectures for when you're pinging the outside world. In the second example, you're pinging the outside world and you're also pinging your corporate documents. There are architectures where you have an interface before the question goes out to the outside world, with a whole bunch of vectorization, embedding, and anonymization-type techniques. And when the answer comes back in, there's some vetting of that answer before it is given back to the user. So, there are reference architectures, and there are solution architecture components you can implement in your overall architecture, to make sure that whatever goes out has been anonymized and is safe, and whatever comes in makes sense before you actually use it with clients. Back to you, Simon.
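The grounding check Sherif outlines, refusing to answer when nothing in the referenced documents supports the response, could be approximated with a crude term-overlap heuristic. This is a toy sketch with an invented policy document; production systems would use embeddings, retrieval scores, and the vetting layers he describes rather than keyword matching.

```python
def grounded_answer(documents: list[str], answer: str) -> str:
    """Return the model's answer only if enough of its key terms appear in
    the retrieved documents; otherwise refuse, rather than letting a
    fabricated response through."""
    corpus = " ".join(documents).lower()
    # Ignore short filler words; keep only substantive terms.
    terms = [w for w in answer.lower().split() if len(w) > 4]
    supported = sum(term in corpus for term in terms)
    if not terms or supported / len(terms) < 0.5:
        return "No supporting policy found; please consult a specialist."
    return answer

# Hypothetical corporate policy document.
docs = ["Policy 12.3: remote employees receive a 500 CAD equipment stipend."]

ok = grounded_answer(docs, "Remote employees receive a 500 CAD equipment stipend.")
refused = grounded_answer(docs, "Employees receive a guaranteed annual performance bonus.")
```

The first answer passes because its key terms all appear in the policy text; the second is refused because most of its terms have no support, which is the behaviour the "prevent the system from giving a specific answer" guardrail is after.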
Simon Goupil: [00:56:40] Thank you. Actually, I'm afraid we're running out of time; there's just a minute or so left. So, before we go, I want to sincerely thank our panelists today, Lindsay, Sherif, and Marie. Thank you so much for sharing those rich insights with us. And thank you to everybody for joining us today. I know there was quite a bit of interest in this webinar, so look for an email from us in the next few days with the recording of the session and the contact details of our speakers. Please reach out to them, or to your local EY advisor, or anybody you connect with at EY, with any of your questions. There's going to be a short survey popping up momentarily; we would appreciate it if you take a moment to complete it. And a reminder that the next webinar in the Thinking Ahead series will be in December, where we'll continue to explore the trends and insights into the topics that are influencing and shaping the people agenda. So, thank you everyone, and have a great day.