Still, it’s difficult to realize the full potential of these tools without a clear purpose. Adopting technology just for technology’s sake carries inherent risks: new technology plus an outdated process will only equal an expensive old process, creating a subpar experience for employees and customers. The more sustainable value of technology adoption comes not from what the technology does, but from what the user can do with it.
With this in mind, we can group the potential impact of generative AI into two powerful categories of use cases: AI collaboration, and talent and governance strategy.
AI collaboration
Organizations will have to assess how AI could influence both back-office functions and customer-facing work. For example, AI might first be thought of as an added digital assistant taking on bulk analytical or technical tasks. Employees could use their AI tools for mundane or repetitive work. Gartner’s Digital Worker Experience Survey found that 47% of digital workers struggle to find the information or data needed to perform their jobs effectively, presenting an opportunity for AI-powered tools and collaborative technologies to increase efficiency.
Generative AI’s influence reaches even further for internal use, across business functions such as HR and payroll. AI tools can perform rolling analysis of employees’ performance indicators and recommend training and upskilling opportunities. Through individual credentials and authentication, AI tools can surface different levels of information appropriate to the seniority and job title of the requestor. These digital tasks can run independently, around the clock, allowing reports or red flags to be delivered continuously. This is more robust than traditional automation because these newer tools bring deeper context and generative capabilities.
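To make this concrete, the sketch below (in Python, with invented role names, metric names and a stubbed generate_summary function standing in for whatever generative AI service an organization actually deploys) shows how performance indicators might be filtered by the requestor’s role before a draft summary is produced.

```python
# Hypothetical sketch: role-based filtering of performance indicators before a
# generative AI summary is drafted. Roles, metrics and generate_summary are
# illustrative stand-ins, not a real EY or vendor API.

ROLE_VISIBILITY = {
    "analyst": {"task_completion_rate", "training_hours"},
    "manager": {"task_completion_rate", "training_hours", "utilization"},
    "hr_lead": {"task_completion_rate", "training_hours", "utilization",
                "attrition_risk_score"},
}

def filter_metrics(requestor_role: str, metrics: dict) -> dict:
    """Return only the indicators the requestor's role is cleared to see."""
    allowed = ROLE_VISIBILITY.get(requestor_role, set())
    return {name: value for name, value in metrics.items() if name in allowed}

def generate_summary(metrics: dict) -> str:
    """Placeholder for a call to a generative AI service; here it simply
    formats the visible metrics into a plain-text 'first draft' report."""
    lines = [f"- {name}: {value}" for name, value in sorted(metrics.items())]
    return "Draft performance summary:\n" + "\n".join(lines)

if __name__ == "__main__":
    raw_metrics = {
        "task_completion_rate": 0.87,
        "training_hours": 12,
        "utilization": 0.74,
        "attrition_risk_score": 0.31,
    }
    visible = filter_metrics("manager", raw_metrics)
    print(generate_summary(visible))  # a manager never sees attrition_risk_score
```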
Talent and governance
Generative AI’s ability to identify training opportunities for employees can also contribute to an overall assessment of the skills organizations have now and will need in the future. The ability to customize these technologies can enable new ways to create a career development track attuned to employee experience and business need.
Importantly, the enterprise-level implementation of generative AI with large data sets and confidential materials can help ease compliance exercises, while also creating areas of potential risk. Purpose-built AI tools can automate the gathering, cleansing and interpretation of data, while summarizing findings and offering recommendations based on them. These tasks need to be done within the context of an organization’s ethical AI governance framework, maintaining guardrails appropriate to its particular needs. Generative AI outputs should be seen as first drafts, with people making adjustments and the final assessment of the work.
But appropriate guidelines can make that first draft even better.
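As a minimal sketch of that first-draft-and-review pattern, the illustration below uses simplified stand-in functions for gathering, cleansing and drafting, with a named human reviewer making the final call; it is not a prescription for any particular toolchain.

```python
# Illustrative human-in-the-loop pipeline: gather -> cleanse -> draft -> review.
# Every function is a simplified stand-in for an organization's actual tooling.

from dataclasses import dataclass, field

@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer_notes: list = field(default_factory=list)

def gather() -> list:
    # Stand-in for pulling records from source systems.
    return [{"metric": "compliance_checks_passed", "value": 48},
            {"metric": "compliance_checks_failed", "value": 2}]

def cleanse(records: list) -> list:
    # Drop incomplete rows; real cleansing rules would live here.
    return [r for r in records if r.get("value") is not None]

def draft_summary(records: list) -> Draft:
    # Placeholder for a generative AI call producing a first draft.
    body = "; ".join(f"{r['metric']}={r['value']}" for r in records)
    return Draft(content=f"First draft: {body}")

def human_review(draft: Draft, reviewer: str) -> Draft:
    # The person adjusts the draft and makes the final assessment of the work.
    draft.reviewer_notes.append(f"Reviewed by {reviewer}")
    draft.approved = True
    return draft

if __name__ == "__main__":
    final = human_review(draft_summary(cleanse(gather())), reviewer="compliance lead")
    print(final.content, final.approved, final.reviewer_notes)
```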
For business functions that must retain certain types of data in bulk for regulatory compliance, or that require more stringent privacy controls, generative AI tools need to follow clearly defined, transparent access and retention policies, with processes in place to monitor for potential legal risks around the use of intellectual property.
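One hedged way to picture such policies in practice is the gate below, where invented retention periods, intellectual-property tags and document fields are checked before a document may be used, and every allow-or-block decision is logged for later review.

```python
# Hypothetical policy gate: before a document is used by a generative AI tool,
# check it against declared retention and intellectual-property rules and log
# the decision. Field names and thresholds are illustrative only.

import logging
from datetime import date, timedelta

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_policy_gate")

POLICY = {
    "max_retention_days": 365 * 7,  # keep no longer than the declared retention period
    "blocked_ip_tags": {"third_party_licensed", "client_confidential"},
}

def may_use(document: dict, today: date = None) -> bool:
    """Return True only if the document is within retention and carries no blocked IP tag."""
    today = today or date.today()
    age = today - document["created"]
    if age > timedelta(days=POLICY["max_retention_days"]):
        log.info("blocked %s: past retention period", document["id"])
        return False
    if POLICY["blocked_ip_tags"] & set(document.get("ip_tags", [])):
        log.info("blocked %s: restricted intellectual property", document["id"])
        return False
    log.info("allowed %s", document["id"])
    return True

if __name__ == "__main__":
    doc = {"id": "doc-17", "created": date(2019, 5, 1), "ip_tags": ["internal"]}
    print(may_use(doc))
```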
This need extends to AI-enabled HR tools used to assess opportunities for improving internal communication, changing processes or investing in technology, or for surfacing otherwise unseen challenges. By mixing quantitative and qualitative assessments of the workforce, both with AI tools and apart from them, leaders can better understand the state of organizational culture and capabilities. Industrializing the analysis of multiple data sources may help identify bright spots or hotspots related to employee retention, and enable leaders to respond to, and anticipate, opportunities quickly, deeply and consistently.
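A simplified illustration of blending those quantitative and qualitative signals, using invented team names, attrition rates, survey scores and thresholds, might label retention bright spots and hotspots like this:

```python
# Illustrative hotspot detection across two data sources: quantitative attrition
# rates and qualitative survey sentiment. Teams, figures and thresholds are invented.

attrition_rate = {"finance_ops": 0.22, "field_sales": 0.09, "it_support": 0.18}
survey_sentiment = {"finance_ops": 0.41, "field_sales": 0.78, "it_support": 0.55}  # 0..1, higher is better

def classify(team: str) -> str:
    """Combine both signals into a simple bright spot / hotspot / watch label."""
    high_attrition = attrition_rate[team] > 0.15
    low_sentiment = survey_sentiment[team] < 0.5
    if high_attrition and low_sentiment:
        return "hotspot"
    if not high_attrition and not low_sentiment:
        return "bright spot"
    return "watch"

for team in attrition_rate:
    print(team, classify(team))
# finance_ops -> hotspot, field_sales -> bright spot, it_support -> watch
```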
A similar assessment can also flag potential compliance or regulatory risks in real time. But the effectiveness and reliability of this activity rely on an appropriate ethical generative AI framework and governance model, including measures to counter potential bias carried in public training data and to avoid recycling toxic content picked up from certain inputs.
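As one deliberately simplified guardrail sketch, the check below screens a generated draft against a placeholder blocklist and routes anything flagged to human review; a production framework would rely on far richer bias and toxicity detection than a keyword match.

```python
# Minimal guardrail sketch: screen generated text before release and route flagged
# outputs to human review. The blocklist is a deliberate simplification of real
# bias and toxicity detection.

BLOCKED_TERMS = {"placeholder_slur", "placeholder_insult"}  # stand-ins for a real lexicon

def screen_output(text: str) -> dict:
    """Flag blocked terms and decide whether the draft can be released as-is."""
    hits = [term for term in BLOCKED_TERMS if term in text.lower()]
    return {
        "text": text,
        "release": not hits,
        "route_to_human_review": bool(hits),
        "flags": hits,
    }

if __name__ == "__main__":
    result = screen_output("Quarterly summary: attrition stable, no issues detected.")
    print(result["release"], result["flags"])  # True []
```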
Even with the need for regular management and maintenance of generative AI tools, these systems can provide better visibility into how an organization is operating and organizing itself for the “next normal” of work.