Five generative AI initiatives leaders should pursue now

It’s time to move beyond quick efficiency gains to a cohesive AI strategy that is actionable and provides options in a fast-changing space.


In brief

  • Generative artificial intelligence brings huge potential — but many are stymied by significant uncertainty and organizational constraints.
  • Prioritize a small number of cross-cutting initiatives to bridge gaps and move up the AI maturity curve.
  • Within each initiative, determine where to act now versus decide later — while identifying the criteria and thresholds used to trigger future activities.

Generative artificial intelligence (GenAI) poses a dilemma. On the one hand, its transformative potential and rapid acceleration are creating an imperative for business leaders to act — and move quickly. On the other hand, significant uncertainty and organizational constraints are slowing uptake and dissuading many from launching major initiatives.

While companies are investing in AI — 43% of CEOs have already begun, with another 45% planning to do so in the next year — many are pursuing quick efficiency gains rather than more fundamental changes to maximize AI’s growth potential. Ninety percent of organizations are still in the earliest stages of AI maturity — running proofs-of-concept or developing capabilities in pockets. In this environment, how do you ensure your actions today are aligned with building an AI-ready enterprise for the future? How do you chart a course amid so much uncertainty?

EY teams have developed a process for creating an actionable, focused and adaptive strategy tailored to this environment of uncertainties and constraints. This approach identifies the most impactful strategic initiatives, distinguishes near-term priorities from longer-term issues, and provides optionality in a fast-changing space.

Set goals and identify challenges 

Start by setting overarching goals, aligned to your organizational values and purpose. We believe an AI strategy should be guided, at minimum, by certain core objectives. AI’s unprecedented ability to enter the most human of domains — intelligence and creativity — makes augmenting human capabilities a key strategic focus. Growing concerns about the risks raised by AI mean that building confidence in your AI systems needs to be a fundamental principle. Finally, to drive exponential value, your strategy cannot be piecemeal or siloed — it needs an end-to-end approach.

To achieve these goals, you need to identify and address your biggest gaps. Think of this in two ways. First, what is the gap between your current state and your desired future state? To measure this, you need a maturity model, such as the EY.ai Maturity Model, to benchmark your current AI implementation relative to a mature, enterprise-wide deployment of AI.

Second, focus on the gaps — the uncertainties and organizational constraints — that are limiting your ability to quickly move up the maturity curve. Companies across sectors typically face multiple uncertainties and constraints. These include being inundated by large numbers of unprioritized use cases, while lacking an overall vision on business transformation and value creation; uncertainty about AI regulation and the risks raised by new use cases; and talent and information technology (IT) infrastructure gaps.

Companies face critical challenges in developing and implementing AI



Launch strategic AI initiatives

A chasm separates these goals and challenges. Bridging it requires prioritizing a small number of strategic initiatives that are both cross-cutting and aligned. This means addressing multiple uncertainties or constraints simultaneously while working together to achieve the core objectives listed above, further your company’s purpose and accomplish a shared vision.

Based on these criteria, as well as a series of interviews and workshops with EY AI and strategy specialists, we have identified five strategic initiatives addressing the gaps commonly faced by companies across sectors. Within each initiative, leaders should decide where to act now and what to decide later — while identifying the specific criteria and thresholds that will trigger those future activities.

Initiative 1

Establish an AI "control tower"

To reduce risk and align resources, direction must come from the top.


To develop a strategic vision and ensure alignment with it, your AI strategy needs a control tower. Unlike the “centers of excellence” that many companies are creating to centralize technical capabilities for use case execution, the control tower is the business unit charged with defining your organization’s strategy and ensuring that your resources and the other four initiatives are aligned to this vision. It needs to be led by someone in the C-suite or by someone with a direct line to the C-suite. It should be empowered to allocate capital and command sufficient resources to work across business functions.

The benefits of this approach are exemplified by an Australian water utility that EY teams have worked with. The utility was concerned that its uncoordinated use of AI in business processes scattered across the organization was creating significant risk. The utility assessed its AI maturity and developed a clear roadmap for achieving its strategic ambition. A key component of the new strategy was establishing a control tower AI office, which in turn enabled a systematic prioritization of use cases, the establishment of company-wide best practices and governance, as well as the upskilling of talent and tech capabilities. The result was not just reduced risk, but more value capture from its AI investments. 

Where to act now

Appoint a leader with strong experience leading digital transformation. Empower them to build a team with the right size, seniority, budget and skills to coordinate across your organization. Establish relationships with the board and key committees around AI risk and governance. Begin identifying the metrics you’ll later use to measure progress and return on investment.

What to decide later

  • Decide which use cases, business models and alliances to wind down, consolidate, or scale up. Do this on an ongoing basis, using the metrics established earlier, and in coordination with the initiatives responsible for business models and functions and ecosystem alliances.
  • Determine how the control tower should evolve over time. Decide, for example, whether to become a dedicated function to maintain strong central governance or to transition to a federated model with authority delegated across functions to increase flexibility and speed of innovation.

Initiative 2

Reimagine your future business model and functions

AI is an opportunity to transform from the ground up.

Readying your organization for the era of AI means anticipating and preparing for the wide-ranging disruptions it is likely to unleash. So far, businesses are mostly thinking incrementally: “How could GenAI make existing processes more efficient?” rather than “How could AI transform business functions and business models from the ground up?” According to EY research, 91% of organizations are using AI primarily to optimize operations, develop self-service tools like chatbots, or automate processes; only 8% are driving innovation, such as new or improved offerings.

 

Where to act now

 

In the near term, continue applying GenAI to specific use cases with the goal of improving efficiency and productivity. Prioritize use cases using two criteria.

 

First, focus on the greatest value creation opportunities by assessing how AI can drive impact to the bottom line of the organization. Use all tools available, such as the EY.ai Value Accelerator, to help identify and implement AI initiatives and solutions based on their contribution to metrics such as revenue, cost and EBITDA.

 

As EY teams have seen in recent months while helping several clients assess and/or implement such opportunities, value acceleration can be found in actions ranging from using generative content and automated workflows to boost the conversion rate of sales representatives (at a business information services company — a $100 million opportunity) to automating processes across engineering, customer services, knowledge management and other functions (at a telecommunications and media conglomerate — a $1-1.5 billion opportunity).
 

Second, in this early and evolving risk environment, focus on lower-risk use cases. For instance, some internal functions are lower risk than many public-facing ones that could invite consumer backlash and brand damage.
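
Purely as an illustration of how these two criteria might be operationalized (the scoring formula, the risk scale and the example figures below are assumptions for the sketch, not an EY methodology), a simple Python ranking could weigh estimated value against how far each use case exceeds your current risk tolerance:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    annual_value_musd: float  # estimated bottom-line impact, $m per year (assumed input)
    risk: float               # 0.0 = low-risk internal use, 1.0 = high-risk public-facing use

def priority_score(use_case: UseCase, risk_tolerance: float = 0.5) -> float:
    """Illustrative score: discount the estimated value by how far the use case
    exceeds the organization's current risk tolerance."""
    penalty = max(0.0, use_case.risk - risk_tolerance)
    return use_case.annual_value_musd * (1.0 - penalty)

# Hypothetical candidates, loosely echoing the examples in the text.
candidates = [
    UseCase("Sales content generation", 100.0, 0.3),
    UseCase("Customer-facing support chatbot", 60.0, 0.8),
    UseCase("Engineering process automation", 1200.0, 0.4),
]

for uc in sorted(candidates, key=priority_score, reverse=True):
    print(f"{uc.name}: priority score {priority_score(uc):.0f}")
```

Even a crude score like this forces every candidate use case to come with an explicit value estimate and an explicit risk judgment, which is the real point of the exercise.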

 

At the same time, move beyond use cases by laying the groundwork for a long-term vision and direction. If taking on the entire business model proves too challenging, given the uncertainties about AI’s evolution, consider instead edging toward the business model from both ends: a top-down and a bottom-up approach.

In the top-down approach, develop one or more scenarios envisioning how your sector might be reinvented in the future and how your value proposition would need to change to remain competitive. Identify metrics to track which scenarios are becoming more plausible and thresholds for when your organization needs to take additional action.

In the bottom-up approach, start by revisiting roles and processes where you anticipate AI will play a significant role. As AI takes over a portion of the work, what new roles will your workforce play? Use your growing understanding of how roles will change to build out a vision for corresponding business functions.

What to decide later

  • As AI becomes more prevalent in certain parts of the enterprise, reinvent these business functions based on the increasing capabilities of AI and the changing roles of people.
  • As questions are resolved (e.g., around the evolution of particular scenarios, new market offerings or entrants), embark on a fuller exploration of business model disruption. Ask yourself: in this changing environment, how will you create, deliver and capture value in new ways?

Initiative 3

Ensure confidence in AI

Robust governance frameworks are needed to address a broad range of risks.


As the use of AI increases across the enterprise, so will the risks and stakeholder expectations. These go well beyond legacy issues such as privacy and cybersecurity — or even widely known AI risks such as biased training data or “hallucinations” that produce fictitious information. The next wave of risks and expectations will include use case-specific issues, from the explainability of loan application denials to the accuracy of medical diagnoses and people’s ability to control autonomous vehicles.

It will also include broader risks, such as intellectual property issues related to Large Language Model (LLM) training data and implications for third-party users of these models; the risk that hallucinations prove harder to fix than many are assuming; or the possibility that AI fails to deliver on its potential in the immediate future.

Regulators are responding to these risks with new legislation, the most prominent of which is the EU’s proposed AI Act (for more, see our recent study). But AI is a fast-moving space, while legislating is, by design, consultative and slow.

“Despite the growing need for robust AI regulation, it’s going to be extremely hard to achieve,” says Gordon M. Goldstein, Adjunct Senior Fellow at the Council on Foreign Relations. “Television took five years to regulate, airlines took 20 years to regulate, and most estimates for AI think it will take a decade to regulate this technology.”

Therefore, much will depend on robust governance frameworks developed proactively by companies to build confidence in their AI applications.

Unfortunately, such approaches are not yet the norm. While a recent EY survey found 77% of executives agree GenAI will require significant changes to their governance to manage issues of accuracy, ethics and privacy, a 2022 EY study found that only 35% of organizations even have an enterprise-wide governance strategy for AI.


A robust governance approach should aim to build confidence in AI across a wide set of stakeholders — not just consumers and regulators, but also employees, C-suites and boards. To pull this off, it should cover the entire tech stack — data, model, process and output.

Critically, it must account for a unique characteristic of GenAI. “LLMs are probabilistic, not deterministic,” says Nicola Morini Bianzino, EY Global Chief Technology Officer and Co-Leader of EY.ai. “Unlike prior IT platforms, giving an LLM a particular input does not lead to the same output every time. GenAI models instead produce a range of outputs with an underlying probability distribution — and any approach to measuring confidence needs to similarly adopt a probabilistic approach.”
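
As a concrete illustration of that probabilistic mindset, the short Python sketch below samples the same prompt repeatedly and treats the agreement rate among the answers as a rough confidence score. The `generate_response()` function is a hypothetical placeholder for whatever model API you use, and exact string matching is a deliberate simplification that a real system would replace with semantic or task-specific checks.

```python
from collections import Counter

def generate_response(prompt: str) -> str:
    """Hypothetical placeholder for a call to your GenAI model or vendor API."""
    raise NotImplementedError("Replace with your model call")

def sampled_confidence(prompt: str, n_samples: int = 20) -> tuple[str, float]:
    """Sample the same prompt n times and return the most common answer
    together with how often it recurred, as a rough empirical confidence score."""
    answers = [generate_response(prompt).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples

# Hypothetical usage: flag low-confidence answers for human review.
# answer, confidence = sampled_confidence("Does this claim meet policy X?")
# if confidence < 0.8:
#     route_to_human_review(answer)  # hypothetical downstream step
```

The specific check matters less than the principle: measure a distribution of outputs rather than trusting any single response.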

Regulatory compliance has long been a box-ticking exercise. With AI, governance becomes strategic — a driver of growth and competitive advantage. Do a better job of building confidence in your AI, and you will gain greater market penetration and a stronger competitive position.

Where to act now

Establish bodies to oversee your AI governance such as an AI council or AI ethics committee. Consider establishing ethical principles for your AI, similar to those adopted by many non-governmental organizations and big tech companies. Use these principles to guide policies and procedures.

Ensure that any new use cases at a minimum comply with existing regulations (e.g., GDPR) with respect to issues such as privacy and data residency. At the same time, work with the initiative responsible for business models and functions to map the emerging risks created by new use cases, and begin defining controls to address them.

Track evolving government regulations across the markets in which you operate. Include these potential regulations when envisioning how AI might disrupt your industry long-term and ask potential ecosystem partners about their preparedness for these regulations.

What to decide later

  • Based on use case prioritization and timing of deployment, implement controls for risks associated with new use cases as they are rolled out.
  • Implement a probabilistic approach to test the robustness of these controls and estimate the degree of confidence across the tech stack. Continue to monitor confidence over time to ensure it does not decline with the addition of new data or the release of new model versions.
  • Prepare for newly passed legislation by understanding the changes your enterprise will need to implement for compliance. As new regulations are rolled out, implement updates to controls, policies and internal reporting systems.

Initiative 4

Address talent and technology gaps

Almost two-thirds of companies are hampered by skills gaps and legacy IT.


Companies face their biggest gaps in two functions: Talent and IT. Almost two-thirds (62%) of companies agree that their ability to maximize the value of GenAI is hampered by their data structures, legacy technology, or key skill gaps — a challenge that is consistent across sectors.

These gaps include existing capabilities — machine learning engineering, for example — that companies possess and need to scale up but that may be in short supply. The bigger challenge, though, is not capabilities that need to be scaled up so much as entirely new capabilities that need to be sourced or developed. Integrating LLMs, for instance, will require capabilities such as knowledge graphs and retrieval-augmented generation (RAG) systems, with which most companies are not familiar.
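
For readers unfamiliar with the pattern, here is a minimal sketch of the retrieval-augmented generation flow in Python. The `embed()` and `generate()` functions are hypothetical placeholders for whichever embedding model and LLM your organization adopts; a real deployment would add a vector database, access controls and output evaluation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical placeholder: return a vector embedding of the text."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Hypothetical placeholder: call your chosen LLM with the prompt."""
    raise NotImplementedError

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank internal documents by cosine similarity to the query embedding."""
    q = embed(query)
    scores = []
    for doc in documents:
        d = embed(doc)
        scores.append(float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))))
    ranked = sorted(zip(scores, documents), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved enterprise content."""
    context = "\n\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

A knowledge graph plays a complementary role: where retrieval surfaces relevant passages, the graph encodes the entities and relationships that let the model reason over how those passages connect.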

With respect to skill gaps, GenAI itself can provide part of the answer. Already, GitHub’s Copilot is accelerating code writing; one study showed developers using it were 55% faster on a specific coding task. This not only helps alleviate skills shortages, but also has other benefits — 74% of GitHub Copilot users say it allows them to focus on more satisfying work, while 60% report feeling more fulfilled in their jobs.

Indeed, AI could have a profound impact on human fulfilment and potential. “This could be the greatest democratizing force of our time,” says Beatriz Sanz Sáiz, EY Global Consulting Data and Analytics Leader and Co-Leader of EY.ai. “AI will augment work, create new jobs and increase human potential. It could expand access to education for millions, while allowing lower-skill workers to take on higher-paying opportunities.”


But realizing AI’s human potential requires human acceptance and adoption. Unfortunately, the EY Work Reimagined 2023 Survey highlights a different kind of talent gap: an emerging expectations disparity between leaders and employees. While both expect GenAI to improve work, leaders have significantly higher expectations than employees. Exposing employees to GenAI may help, since sectors with higher adoption also perceive more benefits from the technology. Yet, leaders rank training on GenAI ninth out of 11 possible employee development priorities.

In the longer term, the opportunity is to fill a different kind of gap: between today’s Talent and IT functions and the AI-empowered Talent and IT functions of the future. While AI will reshape functions across the enterprise, some of the biggest opportunities for a fundamentally different approach are in these two functions, which are simultaneously at the frontlines of deploying AI and most directly impacted by it.

Where to act now

Engage with GenAI platform providers. Develop or source the computational power, data fabric and algorithm requirements for your enterprise’s GenAI objectives. Develop or source the capabilities required for integrating enterprise models, such as RAG and knowledge graphs, or evaluate the feasibility of leveraging open-source customized models. Similarly, focus on preparing your proprietary data for use in integrating GenAI models by ensuring it is properly vetted, cleansed, secured and processed.
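
As an illustration of what “vetted, cleansed, secured and processed” can mean in practice, the sketch below shows a minimal pre-indexing pipeline in Python: deduplicate documents, redact obvious personally identifiable information and split text into retrieval-sized chunks. The regular expressions and chunk size are illustrative assumptions, not a substitute for a proper data-governance program.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Mask obvious emails and phone numbers before the text is indexed.
    A real program would use a vetted PII-detection service, not two regexes."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

def deduplicate(docs: list[str]) -> list[str]:
    """Drop exact duplicate documents while preserving order."""
    seen, unique = set(), []
    for doc in docs:
        key = doc.strip()
        if key and key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

def chunk(text: str, max_words: int = 300) -> list[str]:
    """Split a document into retrieval-sized chunks (assumed ~300 words)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def prepare_corpus(raw_docs: list[str]) -> list[str]:
    """Cleanse, deduplicate and chunk proprietary documents before indexing."""
    return [c for doc in deduplicate(raw_docs) for c in chunk(redact_pii(doc))]
```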

Fill key skills gaps. Use GenAI to augment or streamline repetitive tasks and elevate workers. Upskill workers to prepare them for future roles. Launch AI pilots for workers in selected roles to build proficiency in using GenAI, as well as to learn and refine your approach for the broader rollout. Coordinate with the ecosystem partnering initiative as appropriate to fill talent gaps.

 

Address the gap in employee buy-in with consistent messaging highlighting that AI is not here to take away jobs so much as to empower your people and free up time for more fulfilling work. Leverage case studies from early successes and employee testimonials to make the case.

What to decide later

  • As technologies and offerings mature, decide which capabilities and infrastructure you need to develop in-house versus source from external vendors or partners, based on an ongoing assessment of which parts of the tech stack are becoming commoditized and which remain critical for creating and capturing value.
  • Track the progress of GenAI models to assess the pros and cons of open-source vs. proprietary models and determine where in the organization you should deploy each type of model.
  • Based on the timing of rollout across the enterprise, retrain your broader workforce with the skills required for working alongside AI — from prompt engineering to interpretation and filtering of outputs.

Initiative 5

Develop an ecosystem of alliances

Alliances will be essential in this rapidly evolving space.


Ecosystems of external alliances unlock tremendous value. They can drive double-digit revenue growth and cost efficiencies, while increasing access to a wider pool of talent and capabilities. Unfortunately, in a 2021 EY study of 300 CEOs from the Forbes 2000, only 29% had a strategy that included an ecosystem of external alliances — meaning many companies are relatively inexperienced with this approach.

GenAI’s ability to work with unstructured data could overcome a key obstacle to external partnering: data interoperability. In a world of structured data, partnering with external entities often required data to be cleaned and reformatted to make it interoperable — a slow, labor-intensive task. With GenAI, the interoperability challenge is diminished and, as companies build out knowledge graphs to capture their best practices and business processes, it will become increasingly easy to seamlessly combine not just data, but knowledge and processes across organizations — driving new offerings and business models.
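
A minimal sketch of what this could look like, assuming a hypothetical `generate()` call to whichever LLM the partners agree on: instead of hand-building field mappings for every feed, each party asks the model to translate its unstructured records into a shared target schema. The schema and field names below are invented for illustration.

```python
import json

# Illustrative shared schema agreed between partners (assumed, not prescriptive).
TARGET_SCHEMA = {
    "customer_name": "string",
    "contract_value_usd": "number",
    "renewal_date": "ISO 8601 date or null",
}

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to the jointly agreed LLM."""
    raise NotImplementedError

def to_shared_schema(partner_record: str) -> dict:
    """Map a partner's unstructured record into the shared schema via the LLM,
    instead of hand-coding brittle field mappings for every data source."""
    prompt = (
        "Extract the following fields from the record and return valid JSON "
        f"matching this schema: {json.dumps(TARGET_SCHEMA)}\n\n"
        f"Record:\n{partner_record}"
    )
    raw = generate(prompt)
    return json.loads(raw)  # in production, validate the result against the schema
```

Outputs still require validation and the governance controls described under Initiative 3; the point is that the mapping logic no longer has to be hand-written for each partner’s data format.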

All of this should open the floodgates to a world of faster and easier multiparty alliances. That’s good news because alliances will be essential in this rapidly evolving space. Developing an LLM is such a massive undertaking that partnering to integrate existing platforms will be vital. Similarly, alliances with GenAI solution providers will be useful to close talent and tech gaps and reengineer business functions.

The expanded use of ecosystem partnering, however, also increases risk and governance challenges. Combining data across organizations raises the specter of collective liability: you are as vulnerable as your weakest link. Based on our experience conducting AI strategic assessments for a multinational oil and gas firm and other clients, partnering with AI providers across multiple business functions makes third-party risk management a key component of strategy and governance. Given the growing landscape of AI vendors, companies need to ensure that ecosystem partnering is closely aligned with the strategic initiative responsible for ensuring confidence in AI.

Where to act now

If you’re new to ecosystems, get started — both because GenAI has lowered the barrier to entry, and because companies orchestrating ecosystems capture greater revenue share than those that just participate. Identify the strengths that make you an attractive partner, such as proprietary data, deep sector knowledge and robust cybersecurity. At the same time, define what you’re looking for in partners, including the ability to fill gaps and complement your proprietary data. Establish pilots with multiple entities. Coordinate with the AI control tower to regularly review the performance of these alliances.

What to decide later

  • Decide which alliances to prioritize for further investment based on initial success and the evolving partner landscape. Winnow unsuccessful pilots and scale up successful ones.
  • Identify new partners as new gaps and needs emerge.
  • Move from a series of alliances to multiparty ecosystems in which various entities contribute unique competencies to achieve shared objectives.

Summary

If AI delivers on its potential, it could be every bit as transformative as the personal computer has been over the last five decades, supercharging productivity, unleashing innovation, and spawning new business models — while disrupting those that don’t adapt quickly enough.

The uncertainty and resource constraints confronting many companies are real, but don’t let them become an excuse for inaction and delay. The five initiatives described here provide a path through these challenges. It’s not too early to start transitioning from tactical to strategic, and to begin developing a long-term vision for your company.

That vision, and the strategy it informs, can be adjusted as uncertainty gets resolved. There’s much you can decide later.

And there’s much on which you can act now.

