
How leaders can rethink human intelligence amid thinking machines

The era of hybrid intelligence is dawning. The actions you take will determine how this impacts your organization — and you.


In brief

  • Breakthrough technologies and growing complexity are poised to redefine intelligence.
  • A new intelligence, a hybrid of human and machine, will smooth individual intelligence gaps — but also pose new risks.
  • Companies can thrive by making their organizations complexity-ready, and by redesigning their learning programs, technologies and workspaces.  

Intelligence, perhaps more than any other trait, sets humans apart. Our capacity for learning and problem solving, individually and in groups, has driven our species’ success. Yet for most of human history, we lived in environments fundamentally different from the modern world, so our brains are not optimized for today’s complex, fast-changing conditions. But the human brain is also uniquely plastic — so our thinking adapts to new environmental catalysts, in both positive and negative ways.

This has been apparent in recent years, as social media and smartphones empowered us with instantly accessible information and content co-creation, while also fueling screen addictions, disinformation, and polarization.

The next generation of disruptive technologies will reshape intelligence once again. Artificial intelligence (AI), and particularly generative AI (GenAI), is bringing machines into the domain of human thinking to an unprecedented extent. Other emerging technologies, especially when combined with AI, could have additional effects on human intelligence. Quantum computing could increase the speed and scale of AI by several orders of magnitude, enabling new leaps in AI’s capabilities. The metaverse and extended reality (XR) have the potential to deliver radically different human-machine interfaces, with profound impacts on behavior and intelligence. Brain-machine interfaces (BMIs) could seamlessly link our biological and silicon brains. The combination of AI and the industrial internet of things (IIoT) would extend these impacts into the physical world, through devices such as robots and autonomous vehicles.

Research on these technologies is growing at an exponential pace, as measured by patent filings and grants.


Technology is just one driver of a larger environment of increasing complexity that will challenge and reshape our intelligence. Since approximately 2020, we have shifted to a world in which disruptions are bigger, quicker, more interconnected and more likely to overlap. This environment runs up against several heuristics and biases documented by behavioral scientists — from our difficulty understanding and dealing with exponential change, to an inbuilt need for assurance and certitude that hampers our response to uncertainty.

These shifts will reshape not just how individuals think but, critically, how teams and organizations think. For business leaders, the key challenge will be ensuring your enterprise’s “organizational intelligence” can adapt and grow in this fast-changing environment. Companies will need to rethink existing best practices and assumptions, many of which may no longer be fit for purpose. This will require learning from disciplines such as neuroscience, complexity science and behavioral science, and applying these evidence-based insights to design workspaces, technologies and learning programs that boost intelligence.

In this article, we explore how human intelligence is being reshaped primarily by two forces: complexity and AI. While we also touch on other technologies, these have not yet achieved the widespread adoption that AI is now gaining. The timing of their market adoption, and their potential impact, is more speculative, which is why we focus mainly on AI. 


Chapter 1

How intelligence is being reshaped

Intelligence is evolving into a human-machine hybrid — with both positive and negative effects.

New technologies are moving into the domain of human thinking at unprecedented scale and speed, with the potential to fundamentally reshape intelligence. This brings both great promise and significant risks. Here are some ways in which intelligence could be reinvented in the years ahead:

 

1. Intelligence is becoming a human-machine hybrid

Human intelligence has been augmented by technology for some time — think spreadsheets, search engines and GPS devices. With the next wave of technological breakthroughs, this will go further, as more extensive augmentation transforms intelligence into a human-machine hybrid, in which synthetic intelligence becomes as much a part of our thinking as biological intelligence. Much as Google and Wikipedia commoditized access to knowledge, AI will commoditize the application of knowledge. As AI’s role as a co-thinker expands across our daily lives, it will take on much more of our cognitive load and perform much (but not all) analytical and creative work. BMIs and XR could provide new human-machine interfaces that make the connections between our silicon and carbon brains seamless and effortless.

 

This raises the question: what will remain uniquely human? AI is acquiring new skills at a rapid clip — Stanford University analysis shows it has equaled or exceeded average human performance on seven of nine task benchmarks of human intelligence.1


This is challenging assumptions about the soft skills that most expected would remain the sole domain of humans in an AI-driven future. With that future now dawning, many of these soft skills look automatable to a degree unimaginable a couple of years ago — including creativity, communication and critical thinking. Even empathy may not be off-limits; a recent study led by researchers at the University of California San Diego found that responses to patient questions generated by ChatGPT were 9.8 times more likely than physicians’ responses to be rated as empathetic.2

Where is the new boundary between automatable and human? Within skills that are automatable, what component will remain uniquely human? We’re still in early days, but some themes are emerging. For instance, contextual awareness and judgement are likely to become increasingly valuable as machines move further into the domain of human thinking.

Another vital skill — perhaps the attribute most essentially human — is curiosity. In an era when questions are becoming instantly answerable, the ability to ask questions — and home in on the important questions worth asking — will become more critical than ever. Amid large language models (LLMs) that hallucinate and don’t know what they don’t know, recognizing the limits of one’s knowledge and asking what lies beyond will be valued. And because models are backward-looking by design (they are trained on historical data), humans will stand out by thinking forward, questioning and envisioning what lies ahead in ways machines can’t.

The boundary between automatable and human isn’t static or unidirectional; it will likely shift back and forth over time, as AI gains new competencies and humans are challenged to raise their game.

For instance, even as AI increases the imperative for humans to become better at asking questions, it could also provide the tool that helps us sharpen this skill. A Harvard Business Review article by Hal Gregersen, Senior Lecturer in Innovation and Leadership at the MIT Sloan School of Management, and Nicola Morini Bianzino, EY Global Chief Technology Officer, points out that partnering with AI can help people ask smarter questions, making them better problem solvers and innovators. Working with AI increases the velocity, variety and novelty of questions and helps people ask questions that spark change — what the authors call “catalytic” questions.3

Something similar might happen with respect to developing empathy. Does the study referenced above really show that GenAI is more empathetic than humans — or merely that today’s educational and working environments have rewarded other behaviors among doctors, and allowed the essentially human trait of empathy to atrophy in clinical settings? Rather than ceding empathy to machines, such results should provide a wake-up call for universities and employers to rethink their approach to learning and training.

2. Differences across individual human intelligence profiles will be smoothed out

As a co-thinker, technology will not just help us conduct work; it will also fill in gaps in people’s intelligence profiles. GenAI can take someone who is disorganized but great at networking and help them organize their calendar. Conversely, it could allow an individual who has strong analytical capabilities but gaps in communication skills to communicate better. The metaverse and multi-modal AI could help workers bridge gaps in linguistic abilities and communication styles.

This could allow companies to increase organizational intelligence in several ways. AI can assist people with below-average writing or coding skills to elevate their capabilities, boosting overall productivity and organizational intelligence. Technology could enable employers to access hidden pools of intelligence that were previously inaccessible, such as neurodiverse individuals whose capabilities had been underutilized because of mismatches in communication styles. It could empower individuals with disabilities to perform at a higher level by making information more accessible to them.

“AI can be game-changing for people who are blind or have low vision using assistive technology to navigate a screen,” says Aleš Holeček, Corporate Vice President, Office Product Group at Microsoft. “Traditional versions of such software described what’s on a screen very literally — cycling mechanically through every button, piece of text, color, and so on. Now, AI is able to understand what’s on a screen at a semantic level and describe it in ways that are more human and comprehensible.”

However, employers will need to be mindful of some risks and challenges. The first comes from a recent study of call center workers, which found that while GenAI increased overall productivity and raised the performance of lower-skilled and less experienced workers, it did little for top employees — and may even have reduced their performance in some instances. This could create problems for motivating and retaining top performers.4

The second risk is that of homogeneity in thinking. Smoothing intelligence profiles could make us more similar in how we think and communicate. GenAI itself could converge toward homogeneity: as successive releases of LLMs are trained on ever-larger datasets and on synthetic data, they may all end up with similar training data and converge on similar answers.5

3. Machine-to-machine interaction could make thinking more efficient — and inaccessible

The next generation of GenAI has the potential to bring machines even further into the domain of human intelligence. Instead of GenAI that is passive and reactive — responding when prompted by humans — the technology is moving toward autonomous AI agents that can act proactively to make decisions based on large volumes of contextual data and adapt to changing conditions.

It is not clear to what extent companies and regulators will allow AI to function without retaining a “human in the loop”. In many cases, they might balk at letting AI act with complete autonomy. At the very least, such decisions will be made in the context of potential risk — use cases that could pose greater harm will likely see lower autonomous agent adoption than will relatively innocuous use cases. The use of autonomous agents will also heighten issues of worker retraining and redeployment.

To the extent companies and regulators permit AI to operate without retaining a “human in the loop”, this could have significant implications for communication. Agents could enable machine-to-machine thinking and communication that is qualitatively different from human-machine interactions. Freed from the need to be intelligible to humans, AI could proceed in ways that are fundamentally alien to human thinking. We have already seen instances of chatbots inventing their own language to communicate more effectively with each other.6 Robots in retail fulfilment warehouses organize products in ways that make no sense to humans, placing very dissimilar items next to each other in what appears to be a chaotic layout. But doing so allows robots — which, unlike humans, have perfect recall — to store twice as many goods in the same space, increasing the efficiency of operations.7

The design of the working world is optimized for the ways in which humans process information. This inevitably involves some degree of inefficiency. A world in which consumers offload purchasing decisions to AI could render marketing as we know it obsolete (marketing is typically designed to exploit our behavioral biases and appeal to our emotions). An organization in which agents deal directly with each other across departments could reduce paperwork and red tape. Such shifts would likely increase efficiency — but could also make it harder to understand or audit AI.

4. Emerging risks could undermine intelligence

Disruptive technologies could also raise risks that undermine our cognitive abilities, much as social media and smartphones have done in recent years.

One such risk is overreliance on AI. Thinking is extraordinarily expensive from a resource perspective; our brains use 20% of our energy while constituting only 2% of our mass. So, we are genetically predisposed to conserve energy by avoiding thinking and taking mental shortcuts — something behavioral scientists such as Daniel Kahneman have explored extensively.8 In an era of thinking machines, the predisposition to conserve mental energy could lead many to become excessively reliant on AI and let key skills such as curiosity and critical thinking atrophy.

Misinformation and polarization have become major challenges in the era of social media and smartphones. The next set of disruptive technologies could supercharge the problem. Armies of bots or avatars could create the false impression of widespread support for misinformation. AI could gain the ability to understand the emotional and other triggers of individual users and make itself irresistibly engaging and persuasive. Indeed, a recent study comparing the persuasiveness of GenAI with that of humans found that participants were 81.7% more likely to agree with arguments made by an LLM that had access to their demographic information, which allowed it to personalize its arguments and make them more convincing.9

Over time, GenAI hallucinations could make their way onto the wider internet and quickly pollute the training data of future LLM releases — making it harder than ever to distinguish fact from fiction.



Chapter 2

Actions firms can take to prepare for the future of intelligence

The future of intelligence requires companies to rethink the design of technologies, workspaces and learning.

The future of intelligence will require companies to rethink key aspects of their businesses, since many existing assumptions and best practices may no longer be suitable. How can companies prepare for complexity? How can they maximize the potential of hybrid intelligence, while mitigating emerging risks that threaten to undermine the cognitive abilities of their employees?

Here are three actions leaders should take:

1. Reshape organizational intelligence to be ready for complexity

Increasing complexity will strain organizational intelligence, since many of the cognitive biases documented by behavioral scientists are at odds with the ways of thinking and operating needed to thrive in complex environments.

Jennifer Garvey Berger, author and co-founder/CEO of Cultivating Leadership, identifies several “mindtraps” — cognitive patterns that individuals and organizations need to be mindful of in complex operating environments. “We like simple stories, so we try to impose neat narrative structures on a complex reality,” she says. “We have a mindtrap of rightness and certainty, which is ill suited for an environment of increased uncertainty. We long for alignment and agreement, while complexity requires us to embrace disagreement and be comfortable with it.”

To reinvent organizational intelligence to thrive amid complexity, companies will need to start by identifying and addressing such mindtraps, heuristics and biases. Beyond this, companies will thrive by rethinking their organizational structures, decision-making processes, and incentives based on insights from complexity science.10

For instance, research shows that diverse organizations do better at dealing with complexity.11 Bringing in artists and creators or leveraging the untapped potential of neurodiverse workers already in the organization can increase the ability to make unexpected connections — a key need at a time when complexity can lead to surprising or counterintuitive outcomes. Businesses can and should expand the kinds of external expertise they seek, to include specialists in fields such as complexity science, behavioral science, and neuroscience.

Organizations can thrive amid complexity by tapping the power of intelligence across their networks through flatter, networked organizational structures instead of hierarchical ones. And in an environment of accelerating change, firms will succeed by adopting structures and incentives that maximize their ability to perceive changes and respond nimbly to them.

In many cases, the technologies reshaping intelligence can enable such shifts. AI could accelerate decision making and become a sparring partner to help leaders manage complexity and identify solutions. XR could put executives in immersive environments to explore the implications of future scenarios. AI’s co-thinking abilities could allow companies to delegate more authority to frontline workers, helping firms respond more nimbly to fast-changing environments.

2. Redesign technologies and work to boost human intelligence

Design shapes behavior. To boost human intelligence and avoid the behavioral risks and pitfalls identified earlier — such as overreliance and screen addictions — companies will need to make deliberate and informed design choices in their workspaces and technologies. For instance, the design of LLMs can influence their impact on human intelligence. Today, LLMs are designed to give users answers in response to prompts, which can lead to overreliance and reduce the discernment of users. Professor Pattie Maes and other researchers at the MIT Media Lab have found that if LLMs are designed to first prompt users to think about a problem by posing questions, rather than immediately spoon-feeding them answers, users end up more engaged and discerning.12
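To make that design lever concrete, here is a minimal sketch of a “question-first” wrapper around an LLM call. It is an illustration only: it assumes the OpenAI Python client, and the model name, prompt wording and function names are hypothetical rather than the design used in the MIT Media Lab research.

```python
# Minimal sketch: a "question-first" wrapper that asks the model to pose reflective
# questions before supplying an answer. Assumptions: the OpenAI Python client is
# installed and OPENAI_API_KEY is set; the model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION_FIRST_PROMPT = (
    "Before giving any answer, ask the user one or two short questions that prompt "
    "them to reflect on the problem: their goal, their constraints, and what they "
    "have already tried. Provide a full answer only after they have responded."
)

def question_first_reply(conversation: list[dict]) -> str:
    """Return the assistant's next turn, nudging the user to think before getting an answer."""
    messages = [{"role": "system", "content": QUESTION_FIRST_PROMPT}] + conversation
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(question_first_reply(
        [{"role": "user", "content": "How should we redesign our L&D program?"}]
    ))
```

The same system-prompt lever is available to any employer customizing an internal LLM deployment: the choice between spoon-feeding answers and prompting reflection is a design decision, not a property of the underlying model.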

Companies need to similarly pay close attention to the design of workspaces. Once again, insights from behavioral science and neuroscience can help develop evidence-based best practices that will boost human intelligence.

“It’s a myth that our brains can produce big ideas and high-quality thinking for eight hours a day,” says social psychologist Heidi Grant, Director of Research and Development in Learning at Ernst & Young U.S. LLP. “Most of us have two or three great hours, and they don’t always coincide; some of us are morning people and others are evening people. To build workspaces that foster great thinking, teams should design work environments that empower people to carve out their thinking time and protect it. Change norms around the number of calls and distractions during the day. Use insights and incentives based on behavioral science to build habits and increase curiosity.”

The good news is that the perverse incentives of the social media era that fueled negative cognitive and behavioral outcomes may now have flipped. Social media apps were consumption platforms, but AI is a productivity tool. Companies will make more money when their workers are focused and productive, rather than when they are outraged, polarized and screen-addicted. Employers are customizing LLMs with their own data; they can use the opportunity to design LLMs in ways that increase curiosity and critical thinking in their employees, instead of encouraging overreliance and intellectual laziness.

3. Redefine learning for the era of hybrid intelligence

When many business leaders think about learning and development (L&D) in the context of GenAI, their minds go to matters such as upskilling their workforce in AI. Yet this is merely the tip of the iceberg: education is undergoing a seismic shift, and it is coming for the corporate world soon.

Today, this shift is most visible in primary and secondary education, where educators are grappling both with how to integrate GenAI in the classroom and with more fundamental questions of what learning even means in the era of GenAI.

“So far, when we’ve taught writing, what we’ve really taught is a combination of thinking and writing — making orderly, logical arguments and assembling them in compelling sentences,” says Ethan Zuckerman, Professor at the University of Massachusetts, Amherst and Director of the UMass Initiative for Digital Public Infrastructure. “Well, we now have GenAI tools that are pretty competent at writing compelling sentences. So, the question becomes, how do we teach people to put together a coherent argument and think the way a writer thinks — without necessarily teaching them how to write?”

Business leaders will need to ask similar questions and challenge existing paradigms, as AI and complexity create both the imperative and the opportunity to redefine learning.

Most corporate L&D programs are structured around information transfer: communicating policies, expectations and penalties for noncompliance, followed by some basic testing immediately afterward. Learning is typically scheduled and conducted independently of when it is needed.

These practices need to be rethought. As GenAI takes on much basic work in the organization, the purpose of L&D programs can shift from knowledge transfer or imparting basic skills to developing deep understanding and building meta skills such as judgement and curiosity.

“In the 1960s, NASA performed a well-known study that assessed 1,600 five-year-old children on a creativity scale,” says Hiren Shukla, Neuro-Diverse Center of Excellence Leader, Ernst & Young U.S. LLP. “It found 98 percent of them scored as creative geniuses. The same children were retested every five years, and their performance plummeted every time. By the time they were adults, only 2 percent of them tested at the creative genius level. What happened? We start out as inherently creative, but experience, in the form of societal expectations and educational and employment hierarchies, enforces convergence — which is the antidote to creative intelligence.”

How do you redesign L&D programs to foster curiosity and engagement — while being more effective at imparting the sorts of learning people will need in a world of GenAI and complexity?

Once again, the combination of technology and evidence-based insights could be a game-changer. Seminal research by the psychologist Hermann Ebbinghaus, for instance, shows that people have a steep “forgetting curve”: a huge amount of the information we ingest is lost within an hour, and substantially more within a day.13
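A commonly used simplification models the forgetting curve as an exponential decay; this single-exponential form is a later approximation of Ebbinghaus’s findings, not his original formulation:

```latex
% A common exponential approximation of the forgetting curve (a simplification,
% not Ebbinghaus's original formulation). R(t) is the fraction of material retained
% after time t, and S is the stability of the memory; review and retrieval practice
% increase S, flattening the curve.
R(t) = e^{-t/S}
```

The practical point for L&D is that stability S grows with well-timed review, which is why reinforcement delivered close to the moment of need tends to outperform information delivered months in advance.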

In light of this insight, how useful are learning modules that give people information months ahead of when it may be needed, and test retention immediately afterward? GenAI could enable an entirely different approach that is based on how people actually learn, retain and use information — for instance, by incorporating design features that increase engagement and curiosity, and using GenAI to provide personalized coaching at the time when it is actually needed.


Chapter 3

Exercising agency and exercising brainpower

Augmented human intelligence is a conscious choice and adaptation, not a given reality.

The themes we’ve explored here — the ways in which intelligence will likely be reshaped and emerging risks that could undermine cognitive abilities — are scenarios based on current trends. That doesn’t make them inevitable. The extent to which any of these futures become real depends on choices we all make.

 

We have agency, and the ways in which we exercise this agency will shape the future of intelligence. Companies are developing and deploying GenAI in their organizations; they can choose to exercise agency over how user interfaces will be designed, what data these models will be trained on, and how GenAI will be governed. As they adapt to a world of complexity, firms should exercise agency by making deliberate and informed decisions about the design of incentives, structures, processes, and programs to increase and diffuse their organizational intelligence.

 

This applies not just to companies, but to all of us. The EY AI Anxiety survey finds that 75% of employees are concerned about AI making certain jobs obsolete, while 72% are concerned about AI’s negative impact on pay.14 As we’ve discussed, AI will indeed automate some tasks while making other cognitive skills, such as curiosity and judgement, even more desirable. But this won’t happen automatically, and not everyone will sharpen these valuable skills. Many will succumb to overreliance and intellectual laziness. Those who take deliberate action to maintain and increase their cognitive abilities will be the ones who thrive.

 

The ultimate way we can exercise agency is by exercising our mental muscles and continuing to build our cognitive abilities.

 

Grantley Morgan, Associate Director, EY Insights, EY Global Services Limited, was a contributor to the article.


Summary

Human intelligence is being reshaped by the complex modern environment and disruptive technologies like AI and GenAI. This new intelligence — a hybrid of human and machine — will smooth out differences across individual intelligence profiles, challenge traditional notions of uniquely human skills and raise new risks. Organizations must adapt by learning from neuroscience and behavioral science, redesigning workspaces and technologies, and redefining learning to foster curiosity and judgment. The future of intelligence will be shaped by our choices and actions in response to these technological changes.
