EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients.
How EY can help
-
We provide consulting for digital transformations that improve government efficiency and ease of use for residents.
Read more
AI literacy: challenges and risks to adoption
Despite the promise of GenAI, its implementation is not without obstacles. The primary concerns revolve around data security, privacy and the integrity of the models themselves. Risks include:
- Sensitive information disclosure: The use of GenAI tools, especially those that are open source or web based, can lead to inadvertent exposure of sensitive information. User prompts may become part of the training data, potentially leaking proprietary or confidential data into the public domain. LLMs may inadvertently reveal confidential data in responses, leading to unauthorized data access, privacy violations and security breaches.
- Prompt manipulation: Malicious actors could exploit the responsiveness of LLMs to prompts, leading to the generation of false or harmful content, or execution of exploitative activities.
- Training data poisoning: This type of cyber attack involves manipulating data or fine-tuning processes to introduce vulnerabilities, backdoors or biases that could compromise the model’s security, effectiveness or ethical behavior. Inherent biases in training data can be perpetuated and amplified by LLMs, leading to skewed outputs that may affect decision-making processes.
The advent of GenAI brings with it a proliferation of social engineering threats. As GenAI technologies become more sophisticated and accessible, they provide powerful tools that can be used to craft highly convincing and manipulative content. This capability significantly lowers the barrier for malicious actors to conduct social engineering attacks, which are designed to exploit human psychology rather than technical vulnerabilities.
The pervasiveness of GenAI has made it easier for threat actors to generate phishing emails, fake news and other forms of deceptive content at scale. These materials can be tailored to target specific individuals or organizations, making them more difficult to detect and resist. The challenge is further compounded by the speed at which GenAI can produce such content, outpacing traditional security measures that rely on manual detection and response.
The complexity and potential risks associated with LLMs might instinctively lead some chief information security officers (CISOs) to block these tools until they have a better handle on the risk, and mitigation techniques. However, this approach presents its own challenges. Large agencies have implemented various rules over the years, limiting access to certain applications and social media platforms. Employees, often unaware of the nuanced reasons behind these decisions, might equate their inability to access LLMs to concerns about productivity rather than sensitivity toward data protection. Often, when applications are blocked on official networks, employees access them through personal devices. According to the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn1 , at least three in four employees are using GenAI at work, and over half of users are reluctant to admit to it. Cyberhaven Labs recently analyzed ChatGPT usage for 1.6m workers across various industries and detected thousands of attempts to paste corporate data into ChatGPT (and employees copied data out of ChatGPT even more, at a nearly 2:1 ratio). We’re beginning to understand the impact of tools such as ChatGPT on organizations — and the enterprise risks it creates.
Strategic recommendations for AI literacy
In this environment, the workforce at large must be empowered to become the first line of defense against these emerging threats. By integrating AI capabilities into both mission-focused and support functions, employees gain firsthand experience with leading-edge technology, fostering a deeper understanding of its potential and limitations. This familiarity is crucial for developing security literacy, as it allows individuals to recognize the nuances of AI-generated content, discern patterns that may indicate manipulation and appreciate the sophistication of the technology that adversaries might employ.
When the workforce is well-versed in how GenAI operates and what its outputs entail, they are better equipped to spot inconsistencies or anomalies that could signal malicious attempts to exploit AI tools for disinformation or deception. In essence, the more knowledgeable individuals are about the inner workings and outputs of GenAI, the less susceptible they become to the social engineering tactics that increasingly sophisticated threat actors might use to craft persuasive and manipulative narratives.
To harness the benefits of GenAI while mitigating the associated risks, government agencies should consider the following recommendations:
- Education and training: Develop systematic programs to educate the workforce on the nuances and risks of GenAI technology. Understanding the capabilities and limitations of AI is crucial for recognizing potential threats and reducing susceptibility to social engineering and unintentional sensitive data leaks. Encourage a culture of security literacy where employees can discern AI-generated content and identify patterns indicative of manipulation.
- AI guardrails and frameworks: Implement AI governance frameworks that provide both reactive and proactive measures to guide the use and security of GenAI tools. This will promote self-directed compliance and minimize the risk of errors.
- Risk management: Adopt novel approaches to risk management that extend beyond traditional cyber risk frameworks, addressing the inherent vulnerabilities of LLMs attributable to threat surfaces associated with training data, model bias and prompts.
- Security as a service and AI application development blueprints: Provide a low barrier to entry for security and LLM risk detection services, inclusive of GitHub-style shared tools that can be deployed across multiple environments for consistent management.