
Putting a premium on human intelligence


In the AI adoption stage, it’s critical for companies to be honest with their customers.


In brief:

  • GenAI boosts content marketing efficiency by enhancing personalization through data integration and automation.
  • But as large language models (LLMs) become more ubiquitous, the expectation of tagging content as AI-generated may fade.
  • For now, disclosing AI involvement mitigates the risk of customer backlash, allowing companies to control the narrative and maintain trust with their customers.

Imagine a world where we consume content created by artificial intelligence (AI) every day, but rarely interact with any human-created content.

That might sound extreme, but it’s not far off from the potential future emerging before us. As AI-generated and edited content becomes increasingly commonplace and companies continue to adopt AI capabilities, original content created by humans will become a premium service. Some consumer demographics are already showing a willingness to shift their preferences toward the assurance of human involvement, elevating the "human tag" to a high-value proposition.

But as large language models (LLMs) become more ubiquitous, the expectation of tagging content as AI-generated may fade. In its early years, the use of Google for academic or professional purposes came with a frown and a warning label, but now Google is an accepted tool for information gathering.

For now, disclosing AI involvement mitigates the risk of customer backlash should customers discover its use on their own, allowing companies to control the narrative and maintain trust while still leveraging transformative technologies.


Guide to leading generative AI deployment for marketing

Learn from GenAI first movers and dig deeper into the three primary challenges that hinder CMOs in deploying GenAI in marketing. A new report from the EY-Adobe Alliance.


How AI does its (or our) work

Large language models (LLMs) are the powerhouse behind all generative AI (GenAI) applications, designed to process and understand language in a way that mirrors human communication. Understanding how LLMs work gives users a glimpse behind the magic curtain of AI software.

 

The process begins by feeding the LLM unimaginable volumes of training data, which allow the model to recognize intricate patterns. The LLM then generates numerous predictions about what should logically follow or how it should respond to a prompt. These predictions are ranked based on the likelihood of their accuracy, and the LLM selects the highest ranked prediction as its response.
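The predict-rank-select loop described above can be sketched in miniature. This is a toy illustration, not a real model: the candidate words and scores are invented for the example, and a production LLM scores tens of thousands of tokens using a trained neural network rather than a hand-written list.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(candidates, scores):
    """Rank candidate continuations by likelihood and return the top one."""
    probs = softmax(scores)
    ranked = sorted(zip(candidates, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[0][0], ranked

# Hypothetical scores a model might assign to continuations of "The cat sat on the"
candidates = ["mat", "moon", "laptop"]
scores = [4.0, 1.0, 2.5]

best, ranked = next_token(candidates, scores)
print(best)  # "mat" — the highest-ranked prediction becomes the response
```

Real systems often sample from the ranked distribution rather than always taking the top prediction, which is one reason the same prompt can yield different answers.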

 

Here’s an example.

 

Human to AI: What’s an LLM?

AI to human:

An LLM, or large language model, is a type of artificial intelligence model that is designed to understand and generate human-like text. These models are trained on vast amounts of text data and use deep learning techniques, particularly a type of neural network architecture known as a transformer, to predict the next word in a sentence given all the previous words. Large language models are capable of performing a wide range of language-related tasks, such as translation, summarization, question answering and conversation. They can also be fine-tuned for specific applications or industries to provide more specialized responses. The model you are currently interacting with is an example of an LLM, designed to understand your questions and provide informative and contextually relevant answers.

 

Due to their reliance on training data, these models are becoming increasingly adept at mimicking human behavior across various scenarios. The advantage of this is that AI models are likely to provide answers that a human might give, creating an experience for the end user that is comparable, if not superior, to human interaction. However, there’s a potential downside: without a continuous influx of fresh content and ideas, AI models may become stale and repetitive, particularly in content creation. That points to a future where human-created content is valued as a premium offering.


How can GenAI reveal the humanity behind your data?

Deliver on GenAI’s full potential in customer experience.


Why transparency matters

Disclosure and tagging of AI-generated content is critical, particularly when dealing with sensitive topics such as politics, social issues and customer interactions. Legacy media and entertainment companies must establish enterprise-wide guidelines to determine how AI should handle or restrict discussions around these subjects. Responsible use of AI also means being clear about what these systems can and cannot do.

By being transparent about the capabilities and constraints of AI, companies can mitigate risks and foster a more informed and trusting relationship with their users.

How companies are disclosing AI usage

While the methods of disclosure differ, the most important consideration is that disclosure should be clear and in real time, meaning the user should not have to search to learn about your company’s AI usage. A range of disclosure methodologies is emerging across leading companies.

Finding the new comfort zone

As AI continues to evolve, companies should proactively plan for AI disclosure, and blockchain technology is positioned to play a pivotal role as a security solution. Blockchain's ability to enhance verification of human identities through advanced security methods such as digital identification, biometric verification and decentralized authorization will be instrumental in ensuring the authenticity and integrity of interactions.

And as blockchain technology expands, it can increase the comfort level of late-adopter consumers with artificial intelligence and other emerging technologies. This enhanced trust could accelerate the adoption and acceptance of AI across various sectors.

To prepare for this shift, companies need to build a repository of assets that includes AI-generated metadata, ensuring that the use of AI is tracked and tagged throughout the creation and distribution process. Those who neglect to tag their content now may face higher risks and slower technological adoption in the future. Establishing a clear system for AI disclosure early on will be crucial for risk management and maintaining a competitive edge.
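A minimal sketch of such a tagged asset record follows. The record structure, field names and the `disclosure_label` helper are illustrative assumptions, not a standard schema; real implementations might align with an industry provenance specification such as C2PA content credentials.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AssetRecord:
    """One entry in a content repository, tagged with AI-provenance metadata.

    All field names here are hypothetical, for illustration only."""
    asset_id: str
    title: str
    ai_generated: bool
    ai_tools: list = field(default_factory=list)   # e.g. model or product names
    human_reviewed: bool = False
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def disclosure_label(record: AssetRecord) -> str:
    """Produce the real-time disclosure text shown alongside the asset."""
    if not record.ai_generated:
        return "Human-created content"
    tools = ", ".join(record.ai_tools) or "AI tools"
    suffix = " (human reviewed)" if record.human_reviewed else ""
    return f"Created with {tools}{suffix}"

record = AssetRecord("a-001", "Holiday campaign banner",
                     ai_generated=True, ai_tools=["image model"],
                     human_reviewed=True)
print(disclosure_label(record))  # "Created with image model (human reviewed)"
```

Tagging at creation time, rather than retrofitting labels later, is what makes the disclosure trail auditable as the asset moves through distribution.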

It is also essential for large enterprises to develop comprehensive guidelines and guardrails for AI usage. This includes aligning on the implementation of AI-enabled capabilities, management of data, and embedding company values into AI models. Such alignment ensures that AI applications are consistent with both company ethics and societal expectations, which is particularly vital for media and entertainment companies that significantly influence public content and values.

As AI changes the world around us, it is becoming increasingly important for companies to determine their role in this shift. While much of the future around AI is uncertain, it is clear that remaining transparent with employees and consumers will be vital.

And if you’ve forgotten exactly what blockchain technology is, you can always ask AI.

Special thanks to Hunter G Brickey, Senior, Consulting, Ernst & Young, for the writing, research and contributions to this article.

Summary

As AI-generated and edited content becomes increasingly commonplace and companies continue to adopt AI capabilities, original content created by humans will become a premium service, and AI disclosure is an increasing expectation.

About this article

Related articles

Are your customers content with your marketing content?

Use GenAI to optimize your marketing content strategy and drive growth. Reveal the humanity behind the data.

How can GenAI turn personal into personalized?

Smarter marketing and sales tools are helping the manufacturing industry reach valued customers.

How can GenAI reveal the humanity behind the data?

Together, EY and Adobe transform CX. Our business and technology experience combined with Adobe GenAI can help you reach every customer where they are.