This episode of the EY Tax and Law in Focus podcast, hosted by Susannah Streeter, explores the transformative impact of generative AI (GenAI) on the legal sector. Our panel includes Jeff Soar, EY Global Law Leader; Saskia Vermeer-de Jongh, Partner in HVG Law and AI and Digital Law Leader; and Heather Deane, former EY Americas Law Managing Director, now Director at The Clean Fight, an accelerator for climate tech start-ups.
The discussion highlights the strategic implications for legal departments and broader business operations as the use of GenAI grows. Key topics include the various roles legal departments will play in GenAI governance and adoption, from advising businesses on compliance with rapidly changing laws and regulations to using GenAI tools to enhance efficiency and free up time for higher-value tasks. The panelists stress the importance of proactive risk management and the establishment of robust GenAI governance frameworks to ensure ethical and legal compliance.
The episode delves into the differing regulatory approaches, with the EU focusing on human rights and product safety, the UK leveraging existing legislation with a principle-based approach and the US emphasizing corporate responsibility and self-regulation.
Additionally, the conversation addresses the disillusionment some feel about GenAI and the necessity for a cultural shift within legal departments to embrace what it has to offer, along with the importance of ongoing training and support from data scientists and engineers for legal teams.
Key takeaways:
Understand how GenAI can revolutionize legal departments by automating routine tasks, enhancing efficiency and improving compliance and risk management.
Learn the strategic role of legal departments in GenAI governance, emphasizing proactive risk management, ethical considerations and regulatory compliance.
Appreciate the need for a cultural shift within legal teams, highlighting the importance of ongoing training and collaboration with business teams to leverage GenAI's full potential.
Gain insights into the differing global regulatory approaches to GenAI and the practical steps for implementing it in legal practices, including developing clear governance frameworks and integrating data-driven strategies.
For your convenience, a full-text transcript of this podcast is also available.
Susannah Streeter
Hello and welcome to the EY Tax and Law in Focus podcast. I'm Susannah Streeter, and in this edition, we're going to be looking at the wave of innovation and change that GenAI is bringing to the law department. It's not surprising there is so much excitement, and a little apprehension, right now about the potential of this technology and the benefits and challenges it brings. A report by Goldman Sachs suggests that a quarter of work tasks in the US and Europe could be automated by AI, and legal is one of the sectors where the biggest effect is expected to be felt, with 44% of jobs exposed to potential automation. But GenAI's ramifications will go much wider. There will be strategic implications for the legal department itself and the wider business. According to a survey in the EY CEO Imperative Series, most CEOs - 95% - are planning to maintain or accelerate their transformational change this year. Nearly half of CEOs report that adopting AI technologies to drive efficiencies and business performance will be a priority over the next year. It will mean that Chief Legal Officers (CLOs) will have to be prepared to provide the necessary guardrails that both protect the business and enable it to lead the competition and enhance customer value.
Providing strategic business leadership to harness the benefits of AI will be crucial. So, in this podcast, we're going to be discussing the different hats that law departments will be wearing when it comes to AI. These varied roles include advising the business on compliance in an environment where there's often piecemeal guidance and helping organizations as they apply GenAI to data to enhance products and services. Legal departments will also be adopting the technology in their roles as practitioners, with AI becoming a useful assistant, freeing up time for tasks that add greater value. So I'm delighted to say we have a panel of people with deep knowledge of the subject who are uniquely placed to offer insights into the opportunities which are there to be seized and why they could make such a difference to the legal function. But before I introduce them, please do remember that conversations during this podcast should not be relied on as accounting, legal, investment or other professional advice. Listeners must, of course, consult their own advisors.
Now, I'm really delighted to welcome Jeff Soar, EY Global Law Leader. Jeff, where are you talking to us from today?
Jeff Soar
Hi Susannah, I am based in the UK, so I'm in a very sunny Hampshire at the moment.
Streeter
Good to hear. And also please welcome Saskia Vermeer-de Jongh, who is Partner in HVG Law, which is part of the Global EY Law Network, and AI and Digital Law Leader. Saskia, great to have you with us on the podcast as well. Where are you today?
Saskia Vermeer-de Jongh
Hello, Susannah. I'm dialing in from a very sunny Amsterdam.
Streeter
And please welcome Heather Deane, former EY Americas Law Managing Director, now Director at The Clean Fight, an accelerator for climate tech start-ups. Great to have you with us. Heather, where are you today?
Heather Deane
Hi, Susannah. I am joining from New York City, and the sun is not shining here, sadly.
Streeter
Shame. I'm sorry to hear that. But let's try and illuminate this subject that we're talking about here today. So much to chat about, and first of all, I want to take a wider view. So, Jeff, let's start with you. Do you think that many businesses are still yet to understand AI's potential?
Soar
I think it's a good question, but does anyone truly understand the full potential? I suspect today we're only imagining a really small percentage of what could be possible, but even that small percentage is incredibly interesting. I think our appreciation will grow with our understanding. But it is going to take some time. I remember when I first started working, the Internet and email were really new, something we used on the side now and then, but now they're ubiquitous. And I think we're on a similar sort of journey with AI. Whether you understand it or not, almost everyone is investing in it and trying to understand it. Our EY CEO Survey said 99% of people are planning on investing - I kind of wonder what the other 1% are doing. Over $150 billion a year is predicted to be invested in AI by 2027, and that number's growing fast. We ourselves have invested almost $1.4 billion already, so that $150 billion number seems to me to be quite conservative. Our AI pipeline is growing faster than any other part of our business, so I suspect there's a long way to go to see the full potential. But it's clear to me that almost everyone is on the journey and trying.
Streeter
Thing is, though, Jeff, the headlines have been full of the risks. So what do you think the mitigation actions may be?
Soar
I don't think risk is a bad thing; unmanaged or uncontrolled risks are the problem. When you do a risk assessment, you don't just stop when you identify the risk. You then consider your risk appetite, think about whether the risk falls within that appetite, and then think about how you can mitigate it. But all of that presupposes you understand the risk in the first place, so that has to be the starting point. I think one of the biggest mitigating factors will be rules and regulations, though. And as AI is in its infancy, so too are AI regulation and controls. So, organizations have to build their own guardrails and frameworks. These need to align with the organization's values, consider the ethics of AI use and work within existing legal frameworks, things like data protection and privacy. And I think this applies both to the legal function itself and to the wider business. The legal function is likely to play a significant role in designing and embedding those frameworks in the wider organization.
Streeter
And we're going to speak more about the principles that AI should be governed by a bit later in the podcast. But Jeff, just one quick thought. I mean, do you think we're at the peak of AI expectations? What do you think is going to happen next? Will we see a kind of trough of disillusionment before adoption on a wider scale?
Soar
There's certainly a lot of hype. Every conversation you have seems to link back to AI and how it will work in the future, and I think that hype is driving huge investment. As we've mentioned, the combination of hype and investment is probably driving expectations beyond where they should be. We're seeing use cases develop and go through a process of trial and error, but it is trial and error, not trial and success, and therefore there's a bit of a mismatch between expectations and outcomes. My own first use of AI didn't quite meet my expectations. I thought I would get more out of it than I did. But like anything, you need to keep trying and learn from those experiences. My second attempt was a lot better, my third better still, and so on. I think that gap between expectations and outcomes may well lead to a trough of disillusionment, as you mentioned, but I think it's important to stick with it. Managing expectations is key. Don't think AI will revolutionize something overnight, but it will lead to improvements. At last year's EMEIA Tax Leader summit, Professor Hannah Fry was our keynote speaker, and she said you should think about AI a bit like a forklift truck: it makes the impossible, or the very difficult, possible, but it still needs a driver, it still needs someone. So we won't get to a revolution on day one, but we will see progress. And as organizations go through trial and error, they'll see movement. Those pilot proofs of concept will lead to bigger trials, more change will happen, and so on. I suspect that's when results will catch up with expectations. I think you just have to stick with it.
Streeter
So try, try and try again. So, with that in mind, let's drill down to the huge implications for legal departments. I mean, the potential is vast. So how, Heather, are leaders thinking about where the benefits will be?
Deane
I think, as Jeff has indicated, the transformative impact on how legal work is performed is going to be profound, and we're at the very beginning of that journey. It's hard for anyone to say exactly what it will look like a few years from now, but there is going to be a lot of change, and it's going to happen in a very compressed timeframe for practitioners. Many legal departments are already on a digital transformation journey and have been over the past several years, whether that's been driven by efficiency initiatives, cost takeout or other drivers like the C-suite's desire to see more data-driven decisions coming out of legal departments. Generative AI's capabilities with unstructured data are really just going to supercharge that and accelerate the journey departments are already on. Where I'm seeing this manifest in the corporate legal departments I'm working with is as efficiency initiatives. The first main driver is: how can we help our people get lift from this technology? I do think there's going to be increased pressure coming from the C-suite on CLOs to demonstrate that efficiency, to show the hours saved and to show what you're going to do with your resources. So I think that pressure is really going to continue pretty intensely.
Streeter
So, Heather, in what way should we be flipping the equation here and focusing on just how much time in-house teams spend looking for information, and how this could change?
Deane
We're seeing this in projects that we're doing now, where attorneys spend a fair amount of time looking for the correct documentation, looking for position papers, looking at the latest regulation and comparing it. There's a lot of looking and analyzing that they do relative to the time they can spend advising. There's tremendous potential here for the quality of life of lawyers, because a lot of those tasks are repetitive. That's not really what they went to law school for; they went to law school to be advisors. And so we're really trying to flip the equation on the percentage of time spent hunting, pecking and getting to that first draft versus figuring out what actually needs to be done and actioning it.
Streeter
Thanks, Heather. Let me bring Saskia in. I mean, Saskia, right now there is this increasing business imperative to embrace evolving GenAI capabilities and really drive efficiencies, as Heather's been pointing out. So how important is it, in your view, that the legal function becomes a leader in cross-functional GenAI governance? And if so, what does best practice look like?
Vermeer-de Jongh
Yeah, jumping in on what Heather just mentioned around the use of AI by the legal department, and, Susannah, what you mentioned in your introduction, I think it's key that law departments have a dual role here. The use and implementation of AI by the law department, as Heather explained, will not only drive efficiencies, which is very important, but will also generate many insights and lessons learned - Jeff just mentioned the trial-and-error phase - which they can use when they're advising the business. The results of both those angles will be very beneficial for understanding how compliance actions can be implemented within the entire organization, so more the advisory function they have. If we dive into the road to compliance, then the regulatory side, of course, plays an important role, but the ethical and business considerations should also be taken into account. Legal departments are key in the AI compliance journey, and therefore they are perfectly positioned to play a very important role. Some elements come together here: they are aware of all the other legislation - Jeff mentioned the legislative framework already, privacy, data acts, data protection - and they know the key vendors and, of course, the corresponding contracts and their terms and requirements. They have the experience, based on other legislation, to advise on the ethical aspects as well. But as said, and I think we're going to say it many more times in this podcast, AI is new. Everyone needs to find their new role in the GenAI governance framework, and the GCO and the legal function are no exception. And, of course, it will be difficult, especially when you want to be a front-runner in this field, and I do believe it requires a change from the more traditional mindset. But if, as a legal department, you're able to do that, then it allows you to take a more forward-looking approach and, I think this is really important here, one of the leading roles in this digital transformation.
Streeter
And as you say, Saskia, AI is new, and it's also evolving very quickly. So, how should the approach towards GenAI be managed to comply with the rules of not just today, but tomorrow as well? And what should the key considerations be?
Vermeer-de Jongh
AI entails all types of data and it touches many elements of the business. So collaboration and mutual understanding between different stakeholders within an organization were always important, but they have become more important than ever: on the one hand, to embrace and implement a digital transformation - I think we can all agree that the opportunities are endless - and, on the other hand, to control and manage the compliance journey. And we all have to admit that's not an easy job. The road to compliance is very difficult with GenAI technology. It will evolve and it will be hard, and I dare to say almost impossible, to already set up an entire future-proof compliance program. But in the meantime, and if we talk about the rules of today, doing nothing is also not an option.
It is important to take control within your internal organization, for example, over how employees are allowed to use AI. If you do not set any boundaries right now, a culture will develop that is difficult to change later on. However, GenAI also offers a lot of opportunities, right? You don't want to miss out on the advantages it can bring. So I recommend the approach of a no-regrets policy: even if you're not using AI now, have the basics in place and be flexible, so you can easily adopt changes following from your AI strategy. Therefore, my advice would be to just start, and start small. Discuss, for example, among the most important stakeholders what your GenAI strategy is, and - I think that's very important for the legal department - link an immediate compliance action to that. For example, an important risk of GenAI is the bias that could exist in the AI model. Organizations should therefore already start to define general rules to prevent this from happening. So employees should ensure the quality of the input data, and the model should provide explainable output. This, in turn, helps employees to verify the output and see whether they would come to the same conclusion as the system. In the end, just start experimenting - and, emphasizing what Jeff said, do it in a controlled way.
Streeter
And doing nothing is not an option. Those small steps are really needed. Thanks very much, Saskia. So we have an eye on the present, we have an eye on the future. But Heather, I want to ask you whether you think there's a disconnect between what lawyers want AI to help them with and what is really possible.
Deane
Well, I think Jeff said it well early on: the expectations for what AI can do may be running a little bit ahead of where the reality is today. And so I'm seeing early excitement among many corporate legal departments being tempered by experience on the ground with the AI. I see two main challenges emerging here. The first is that even really good generative AI outputs are first drafts - I'm going to come back to this theme a few times. Think in terms of first drafts; they are not going to be perfect outputs. And so part of the change management that organizations really need to focus on is making sure they're helping attorneys understand how to evaluate the results, how the technology works and how to look at what they're getting in that first draft. The analogy that I think works really well is that we're seeing outputs that are roughly on par with what you would expect from a first- or second-year associate. Right, which is very powerful. But keep in mind, you would be giving that associate very clear instructions. You would never take their draft and hand it over to a client or file it with the court without reviewing it. You would expect to do another pass at it, and you might issue additional instructions before you get to the final version of what you're actually going to act on. The second challenge is that the technology itself is evolving so rapidly that there really is no finish line with this technology, as we've seen with other emerging technologies; this is just moving at an exponential rate. That means the solutions and tooling we're using today could look very different six months from now. And that's a lot for people to manage. Generative AI really introduces a permanent change to the way of working for the profession, and practitioners will need support to get through that so that they are not exhausted by the change.
Streeter
So, yes, Heather, it certainly can be daunting. We're talking about permanent changes here. But where do you see the biggest possibilities for the legal function's core work?
Deane
Yeah, and we're seeing this within EY as well with the services that we provide to clients. Legal research is a very rich area of opportunity. Regulatory horizon scanning and regulatory governance are another leading set of use cases that we're seeing emerge with clients, as is governance compliance - the ability to accelerate the rate at which you can get advice to the business. Contract drafting, analysis and negotiation guidance is another area where there's a lot of experimentation going on. And finally, I think knowledge management is another very compelling area for legal departments to be exploring. For example, many corporations have invested a lot of money over the years in getting opinion letters from their law firms. So imagine a world in which the in-house lawyers and/or the business people could essentially converse with that advice as a first pass before getting work done. FAQs and chatbots are other leading areas. One of the main challenges we hear articulated to us is that lawyers in legal departments are asked the same questions over and over again, right? So many questions that are being posed have already been asked and answered. So, creating knowledge bases of FAQs that the business can essentially self-serve from is another, I think, very powerful area for departments to be exploring.
Streeter
So the potential clearly is huge. But I want to bring our chat back to data, because, Saskia, earlier you talked about the risks of bias. It is clear that data is king, but often it can be kept in silos. So what should the legal team be doing to ensure data management is up to scratch and, importantly, compliant with security, regulatory and ethical requirements?
Vermeer-de Jongh
Yeah, the legal function is one of the stakeholders - and, after this podcast, one of the key stakeholders, of course - in the digital transformation, and I think it can take a more proactive role in the shift towards a more data-driven organization. I often use the example of the legal function moving from being a data blocker - "no, you can't, unless..." - to being a data enabler: "yes, you can, with these considerations." And it's about that shift of mindset, which we also talked about earlier. The beauty of data is that there are so many opportunities deriving from it, but the challenge is that it doesn't rest in a separate entity within a company. Instead, it is used, flows and is exchanged across all layers of the business, and data management and compliance should be treated according to those data flows. One of the starting points there, instead of looking separately at all the legislation that covers data - and there is already a lot of data-driven legislation: AI, for example, but also cyber, privacy and the new data acts initiated by the European Union - is what I would recommend: a more holistic data approach. Don't monitor each piece of new legislation separately, but monitor common elements such as transparency, security, notifications, breaches, purposes and so on. In this way, you are also able to plug new legislation on top and be flexible in your risk-based approach. Most importantly, you can also combine controls, and those controls are not only derived from the legislation - that's one pillar - but are also in line with the business controls and the ethical framework a company has in place, or will have in place in the future.
Streeter
And so, as you've highlighted, AI brings risks as well as benefits, and that mindset shift really is key, as is that holistic approach you were talking about, Saskia. But how should teams formulate risk management strategies regarding this? What actions should they take? And should establishing a governance framework be a first step, for example?
Vermeer-de Jongh
I wish it would be that simple. The governance framework is, of course, a very important step, but I wouldn't consider it the first step, because initially I think it's important that you look at the AI definition, as this is where things often go wrong, right? What is AI, and how do you define AI for your company? Because it's still a risk-based approach. Once you have the AI definition clear, then look at the strategy corresponding with that definition and the inventory of your AI systems, because based on the inventory, you can determine the possible impact on your organization. Then I think it's also important to verify which risk frameworks you already have within your company, determining which risks are already covered and where there's a gap or a potential gap. Certain systems are already being tested and monitored from the perspective of model risk management. Key here is: try not to reinvent the wheel, so you can use these existing frameworks and processes as a starting point for your AI compliance. Then you have to work on the governance framework.
Streeter
As Saskia points out, Heather, there are so many opinions swirling around about AI, and given the myriad use cases out there for GenAI, it can be very hard to navigate. So to what extent might it help if the use cases considered first are those that would benefit the broader business?
Deane
Yeah, I would say that the industry is still in the brainstorming phase, if you will, for use case ideation, prioritization and development. But you'll always want to start by considering what the business is trying to accomplish and how that is going to manifest in the legal support model to help drive those objectives. So then the question is, where can GenAI help? When you look at those areas of the legal support model, where do you think generative AI can help? And the rules of thumb to keep in mind are, number one, where is there volume? Where is there a significant amount of volume coming in around those legal requests? The second is, is the data that you need to actually build these generative AI solutions accessible? Can you get your hands on it? Is there a workable data model? There's work to be done underneath that, which we'll talk more about. But really, can you get your hands on that data? The third is, can you document the business process? People often think that technology can be a real easy button, but you actually have to be able to document the business process.
Deane
You can't automate something that has not been very well defined end to end, because you need to give workable instructions to the technology. And the fourth, I would say, is to identify users who are change champions. Where are there leaders and people in the frontline ranks who are excited to embrace this technology and maybe on the leading edge of wanting to experiment? Make them part of your tiger team on these initiatives, because, as Saskia and Jeff have both mentioned, you really have to test, validate and refine before you expand. And you've got to get the word out to the people in the department about where it's working well and get that case for change out there.
Streeter
Jeff, let me bring you back in. Heather talked about change champions there and the key role that they can play. And obviously, there are risks with any new technology, but do you think companies could be too risk-averse? What role should the legal function play in shining a light through the fog? Could they be change champions?
Soar
I think absolutely they can. Risk is unique to every company and every set of circumstances, and everyone will have a different appetite. So you have to really understand how much risk your organization is prepared, or able, to take on, and that will depend on so many different factors. I think it's important to understand why you land where you do when you're assessing it. Some of the influences on that risk appetite might be financial, might be commercial, but there will definitely be some regulatory or legal factors in there. We've talked about guardrails, guides and frameworks, and I think it's really important to build those, because they allow you to stay within your risk appetite. It's a sort of safety net, so you don't act in an uncontrolled way, and people will take confidence from that safety net and therefore be prepared to give things a go. I think the legal function will play a huge role in helping frame, embed, manage and iterate that framework, and it may very well own it as well. But it's key that the legal function does that hand in hand with the wider business, because that framework needs to be relevant and make sense.
Soar
But you shouldn't stop doing something just because it exceeds the risk appetite; you should think about whether you can reduce that risk through mitigations. If you can, carry on, and if you can't, then perhaps stop. And so, we talk about change champions, and you asked, can the legal function be the change champions? I think they are. That legal framework, those guardrails, that safety net I talked about, which the legal function will be incredibly important in driving, can help the business understand whether those mitigations can be effective and can help facilitate further progress. I would say the legal function therefore has an enabling role. It has a change champion role; it has a role in helping the iteration and development continue by supporting the business in taking some risks, managing those risks and progressing its plans.
Streeter
And given we're talking about risks, what implications, Heather, does the advent of GenAI have for companies' legal technology strategies?
Deane
I think the advent of generative AI is a really profound, golden opportunity for legal departments. Study after study in our industry has shown that legal departments are traditionally underinvested in, relative to other functions, when it comes to technology. So the push for AI adoption that is coming from CEOs, plus the legal department's very crucial role in governance, as has been discussed earlier, means that they should have a seat at the table on an enterprise technology strategy that fully contemplates what the legal function itself needs in order to capitalize on the AI opportunity. So I think the advent of generative AI really offers the general counsel the opportunity to connect and align with the CIO's or CTO's technology agenda and, possibly even more importantly, with those budgets, which are generally substantially different from what legal departments have seen in the past. The IT group, the CIO's office, can be great allies to the legal department. As has been discussed, there is a massive technology and data component to getting the best outputs out of generative AI, and legal departments are generally not staffed with data scientists and data architects; these are not generally the kinds of services they have procured.
Deane
So the ability to get aligned and really collaborate effectively with the CIO or the CTO, I think, really opens up tremendous opportunity for the legal department to get the investment, the build and the support, whether it is from the company's internal resources or by connecting with third parties that the CIO's office would generally have relationships with, who can come in and actually help do the data work, do the application build and support the deployment of these capabilities. What this should ultimately mean for the legal technology stack is that it should be simplified. Typically, the legal department would have quite siloed data, and there are considerations - Saskia has spoken to this and I think will elaborate a little more; you're going to need governance frameworks around that - but typically you have to get access into those siloed data stores in order to get solutions that the lawyers themselves and the business are actually going to get value out of.
Streeter
And Heather, Jeff and Saskia have talked about the need for clear frameworks. I mean, ultimately, what should be the guiding principles for the adoption of GenAI for the legal function? What should be in this overall playbook?
Deane
From an operational standpoint, to really bring this to life, you first have to understand and identify the needs and objectives of the business. Where can AI help the business? That's got to be the North Star. You need to understand your current processes, and you need to assemble the right team. This is an intensely cross-functional effort to get GenAI outputs that are going to be valuable to this stakeholder base. You are going to need people with technology backgrounds, with legal domain expertise, with business process expertise. So get the right team together and identify whether you've got those resources in-house or whether you're going to need external support. Budget with the big picture in mind; don't fall into the trap of thinking that the budget is really just the licensing fees for a particular technology. There is going to be implementation, training and ongoing maintenance, so keep that in mind when you're building your budgets. You're going to need to unify your data ecosystem. The way I translate this into plain English: people talk about data models, which is a little abstract for lawyers. What that really means is thinking about what questions you need your data to answer for you, and in what format you need that information. That is what is going to drive the establishment of the data architecture. You're going to need to train your users, and you need to communicate continually with your stakeholders, share wins, solicit feedback and, again, test, iterate, refine, monitor and improve the way those solutions perform. As I alluded to earlier, the tech is changing at an exponential rate, so you're really going to have to stay on top of this almost from a programmatic standpoint - to monitor, improve and, of course, stay on top of the ethical and legal obligations that are going to govern the use of this technology.
Streeter
And of course, we're facing this uneven regulatory landscape as well, aren't we? So Saskia, what should be the guiding principles, in your view, for the adoption of GenAI, given that it is pretty uneven out there? What areas should law departments prioritize in the medium term, particularly from the EU perspective?
Vermeer-de Jongh
Well, back to the basics, I would say, because the EU AI Act and other AI legislation being developed across jurisdictions are based on the OECD principles. These principles were already defined some years ago, and they focus, among others, on transparency, explainability, robustness, security, safety and accountability. If we then take a specific look at the EU AI Act, it's focused on human rights - so preventing discrimination by recognizing and removing bias - and it also takes a product safety approach. That means different actors, with different roles in the AI lifecycle, also have different responsibilities.
Altogether, it should lead to the safe and ethical development of AI. If you really look at the short term, organizations should focus on developing the AI inventory I mentioned before and on phasing out prohibited AI systems. One element of GenAI is that the training data typically includes many copyright-protected works, and the lawmakers in the EU therefore decided to include specific rules for GenAI and copyright. Although most of the GenAI-specific obligations are focused on the developers, and some on the providers, it's still really relevant for organizations that buy and use AI, as you of course want to monitor whether that specific developer also meets their obligations. So it's important from a legal perspective to review the terms and conditions of the contract with the provider of the system - for example, indemnities and warranties regarding the use of copyright-protected works.
On the medium term I mentioned before, I also recommend organizations take privacy into account when using and developing GenAI. Input and training data could, of course, both contain personal data, and input can also be further used to train the models. I therefore recommend seeing which existing privacy controls within the organization can be leveraged, or of course amended, to fit these new circumstances. And then it all comes down again to that multidisciplinary framework, right? It's also part of the governance framework that law departments could tackle this together with the DPO, for example.
Streeter
And Jeff, let me bring you in. How does what Saskia has been explaining about the approach of the EU differ from the UK? Can you highlight any major changes?
Soar
So, I mean, unlike the EU, the UK is not proposing to implement any AI-specific laws but rather to follow what is described in its white paper as a pro-innovation approach. What does that really mean? It means following a standards-led and principles-based approach, one that leverages as much of the existing regulation as possible, all with a focus on supporting consumers and encouraging innovation. Obviously, companies in the UK don't exist in a vacuum, so they'll still have to comply with rules and regulations imposed elsewhere where they trade, and the UK sees cross-sector collaboration between UK regulators and overseas counterparts as a real priority. I think the UK is confident that its existing legislation can accommodate and even encourage advancements in technology, including AI. And to do that, it has developed five principles intended to guide and inform responsible development in the UK. Actually, they're very similar to the OECD principles Saskia just mentioned. The five are - number one: safety, security and robustness. Number two: appropriate transparency and explainability. Three: fairness. Accountability and governance is number four. And lastly, contestability and redress. So there are some similarities in the principles and their underlying nature, but the UK is taking a very different approach in allowing existing laws to carry on rather than bringing in anything separate.
Streeter
So let me bring in Heather. I mean, you have a specific US lens. What's your take on this?
Deane
Well, I think it will not surprise listeners to hear that the US is taking a rather different approach to AI regulation than, say, the EU; we're a little closer to the UK approach that Jeff has described. We currently do not have any laws in the US governing AI. We do have the Biden administration's executive order, which lays out a set of principles and guidelines for AI development, but that really only applies to how the federal government is going to engage with companies doing this work; it is not a law that is going to be applied outside of that. So really, the onus here is on companies to implement AI in an ethical manner, and their motivation is to not lose the trust of their customers and other stakeholders. There are many responsible AI frameworks. The companies developing the major large language models - these are primarily US companies - all have responsible AI frameworks that they follow and promulgate. So really, I think the approach we're taking here is that companies need to act responsibly, provide transparency and follow their own guidelines and implementations. And again, as Jeff mentioned, we don't operate in isolation. These are multinationals, and they are and will be taking steps to comply with the laws in the jurisdictions where they operate. So, depending on your philosophy about innovation and the best ways to encourage it, the US is either ahead or way behind.
Streeter
But how important is it that firms also ensure that they have the right talent strategy in place to be the backbone of the AI revolution? What kind of upskilling will be needed?
Deane
Oh, I think this is really critically important, and legal departments and companies need to be thinking about talent strategy now. The legal department of tomorrow that I feel we've all been talking about in the industry for years is going to arrive a lot faster than anybody would have thought twelve or 18 months ago. And I think practitioners are really going to need a lot of ongoing training to understand generative AI's evolving capabilities and limitations. On the people note, I want to really emphasize how important I think it is for general counsel and other leaders in legal departments to set the tone by articulating a positive vision for what this technology means for the profession and how it will change, for the better, the way the legal department engages with the business and their external service providers. Their teams are going to be experiencing a tremendous amount of change, and they really need to see the destination while they're on the journey. We haven't really discussed this too much here, but I do think there is some fear that goes along with this technology. Susannah, you mentioned in the opening that 44% of jobs, and maybe a quarter to half of the tasks performed in legal, are going to be exposed to automation.
Deane
So I think people really are concerned about their jobs and what this is going to look like, and it is incumbent on leadership to paint a positive vision for what this is going to mean and how it's going to uplevel the practice. I also think that practitioners are going to need training on, I would say, the basics - really understanding the principles of how this technology works, the concepts and what is behind how they get some of the outputs. So I think it's prompt engineering training, and ongoing training on what the technology is and its features and functionalities, which needs to be offered to professionals at all levels in the legal organization so that they can experiment a little more competently with the technology. You're also going to need business process training for your legal professionals to help the department get the most out of AI. They've really got to understand how to map the process; as I said before, you cannot automate what isn't documented. Add training on how to guide the business on responsible AI use - Saskia has spoken at some length about that - and really invest in program and change management, because you're going to need, again, that village mindset and approach - technical, operational, legal and business process - to do this work. So I think what the legal department could look like, and what leaders should keep in mind, is this: if you think about the pie chart of the composition of the talent within a legal department, it's probably 80% lawyers today, 85% with legal professionals, paralegals and others. You may see that start to shift, with legal departments actually having data scientists, data architects and more engineering talent in support of the lawyers in the department.
Streeter
So it's absolutely clear that upskilling really will be crucial to propel the AI revolution. Jeff, what do you think will be the catalyst for change and more widespread adoption across business and wider society?
Soar
I think change is constant, isn't it? And it'll happen for any number of reasons: more data, increased computing power, better disruptive tech. I think even more openness towards change will have an impact. But I think there are some big themes that in this case will be a real catalyst. The need to be competitive will undoubtedly be a driver. Even if you can't lead the pack, you probably want to be in it; you definitely don't want to fall behind. And 70% of the CEOs we surveyed spoke about how this competitiveness was really at the front of their mind in their investment in AI. They don't want to fall behind. I think as we continue to go through that trial and error we spoke about, technological advances will be made that give real, tangible benefits, and I think that will start to snowball. AI has the potential to take on routine, repetitive tasks, freeing up employees to do more value-added things. Heather mentioned earlier the level of the first draft, if you like - how do you add the value thereafter? I think that's really important. And the EY Work Reimagined survey found that almost half of employees thought that AI would improve their work flexibility and their work experience.
Soar
Regulation will have an impact, and that will continue to drive progress. There are mixed feelings now. Heather just mentioned fear, and actually the Edelman Trust Barometer shows a pretty even split between those who are resistant to AI, those who are supportive of AI and those who are undecided, and that will continue to move. As regulations continue to develop, trust will be built, and that trust will drive adoption. And finally, education. We just heard about training, and I think it's really important. We're all on a learning curve and need to understand more; we need to close that expectation gap. I mentioned my first experience earlier, and it wasn't great, but we were trained in how to do that kind of prompt engineering and how to have a better conversation with the AI. The more you play with it, the more you experience it, and the more you develop the skills to interact with it to give you the best experience, the better that experience will be and the more adoption there will be. And it will start to snowball from there.
Streeter
Thank you. Heather, what's really stood out for you in terms of helping the client journey?
Deane
You know, I think when we have been able to get cross-functional groups of clients together - in a workshop environment, or a lab, as we sometimes refer to it - again with those technologists and legal domain and business process experts, we have been able to automate entire legal workflows end to end. So that's really exciting: to be able to go further than "summarize the content of this document" or "summarize the content of this regulation" and actually capture from the practitioners the steps they go through in order to advise the business on the impact of a new regulation in a jurisdiction - first I do this, then I do this, then I pull up this form - and then automate that entire thing to the first draft. Really tremendous impact and excitement. And it's been wonderful to see that come to life with that collaborative approach and with that group, and then to take that initial proof of concept back to a more general population who had been trying to get there on their own with manual prompting, and hear them say, wow, this is incredibly helpful, what a great start. I think when you get the right group together, you really can make a profound impact.
Streeter
Certainly sounds like it. Okay, Heather, thank you so much. Now, we are nearing the end of the podcast, but before we go, if you could each give one nugget of advice to legal teams who may feel a bit overwhelmed by the task in hand when it comes to GenAI adoption, what would it be? Jeff, can I start with you?
Soar
I think it was mentioned earlier, but I'd say collaboration and teaming. Get connected with the wider project teams and the business, understand their perspectives, concerns and hopes, and bring the legal knowledge and function to those discussions. As I said, the legal function has a big role to play in giving organizations comfort, but it has to be done hand in hand with the business.
Deane
I would say just start. Identify the use cases that you believe can provide meaningful lift to your practitioners. Leverage the principles that we've discussed today, including getting the right team of experts to help you, and start experimenting and measuring impact.
Vermeer-de Jongh
Embrace the changes that it brings and continue educating yourself and do not be afraid, right? Change that mindset. Yes, AI will bring new challenges, but it will also bring a whole new world of opportunities.
Streeter
Do not be afraid. I'm going to take that with me. Thank you so much to all three of you for your insight.
Deane
Thanks, Susannah. It's been a great conversation.
Soar
Thanks. It's been a really enjoyable podcast. I've really learned a lot.
Vermeer-de Jongh
Thank you very much, Susannah.
Streeter
It's clear that legal teams have a great opportunity ahead to lead GenAI compliance and adoption. For more information, you can visit ey.com. And a quick note from the legal team: the views of third parties set out in this podcast aren't necessarily the views of the global EY organization or its member firms. Moreover, they should be seen in the context of the time in which they were made. I'm Susannah Streeter. I hope you'll join me again for the next edition of Tax and Law in Focus, brought to you by EY. Building a better working world.