
Eight AI-related US policy issues for boards and management to consider


As the use of AI evolves, boards and the C-suite should consider these key AI-related issues attracting US policymaker attention.


In brief

  • US policymakers, alongside the C-suite and board, are considering what artificial intelligence (AI) could mean for capital markets, the economy and society.
  • Dynamics in Washington and the increasing complexity surrounding AI could lead to a patchwork of regulations for companies to navigate.
  • Moving forward, policymakers are looking to create a stable regulatory scheme that addresses concerns and remains relevant as AI continues to evolve.

Artificial intelligence (AI) has seized the attention of US policymakers in recent months. The launch of new AI tools and the rapid adoption of AI have sparked a dialogue about how best to foster innovation and opportunity while addressing associated risks.

Perspectives on AI include predictions that the technology will lead to promising scientific breakthroughs and an explosion of innovation and efficiencies, as well as serious concerns that AI could threaten national security, replace workers, result in discriminatory decision-making, introduce a host of privacy and copyright infringement risks, and promote deepfake content.



As the US public policy debate around AI evolves, several themes have emerged. This publication explores eight key AI-related issues attracting US policymaker attention, as well as related developments at the federal and state levels and considerations for C-suite leaders and boards of directors engaging on the issue.

1. Many lawmakers are concerned with the implications of AI for national security

Many lawmakers are concerned with the implications of AI for national security, including the pace of adoption by the US defense and intelligence communities and how AI is being used by geopolitical adversaries. For example, congressional hearings¹ have examined² barriers to the Department of Defense (DoD) adopting AI technologies and considered risks from adversarial AI. There have also been calls for guidelines to govern the responsible use of AI in military operations, including weapons systems, to avoid unintended actions when AI is used.³

Establishing and maintaining a competitive advantage on the global stage is a top priority of many lawmakers. Launching a bipartisan initiative to develop AI regulation, Senate Majority Leader Chuck Schumer (D-NY) expressed⁴ the need for the “U.S. to stay ahead of China and shape and leverage this powerful technology.”

2. Policymakers have raised concerns about AI’s potential impact on jobs

Many policymakers have raised concerns about AI’s potential impact on jobs, particularly in areas where workers could eventually be replaced, and about who should bear the costs of displacing and retraining workers. In a new world powered by AI, there are also questions about how to train a workforce to adjust to the rapidly evolving technology and whether AI-reliant companies should be regulated and taxed differently than companies staffed by humans. While concerns about the impact of technology on workers are not new, the pace at which companies are adopting AI is unparalleled, creating additional challenges and pressure.

3. Policymakers are focused on the risk AI technologies carry in making discriminatory decisions


Bias issues have been examined in several congressional hearings on AI and will continue to be a key concern as regulatory approaches are considered. AI technologies carry the risk of making discriminatory decisions — just as human decision-makers do — and are only as effective as the data sets and algorithms they are built upon and the large language models that underpin them. In congressional hearings⁵, policymakers have expressed concerns about the potential for AI to discriminate and have heard testimony about the misidentification of individuals, particularly those in minority groups, by facial recognition software.

A report⁶ from the National Institute of Standards and Technology (NIST) provides an “initial socio-technical framing for AI bias” that focuses on mitigation through appropriate representation in AI data sets; testing, evaluation, validation, and verification of AI systems; and the impacts of human factors (including societal and historical biases).
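One way to make NIST’s emphasis on testing, evaluation, validation and verification concrete is through routine statistical checks on model outputs. The Python sketch below computes a simple demographic parity gap, the difference in favorable-outcome rates across demographic groups. The metric choice, data and group labels are illustrative assumptions, not part of the NIST framework itself.

```python
# Minimal sketch of a bias test of the kind a testing-and-evaluation
# process might run: compare the rate of favorable model outcomes across
# demographic groups. The predictions and group labels are hypothetical.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in favorable-outcome rates between any two groups
    (0.0 means all groups receive favorable outcomes at the same rate)."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical binary predictions (1 = favorable outcome) and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Prints 0.20: group A receives favorable outcomes at a 20-point higher rate.
```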

4. Some policymakers are focused on the need for consumers to understand how and why AI technologies work

Some policymakers are focused on the need for consumers to understand how and why AI technologies work, to help promote acceptance of the technologies and create trust in the results AI produces.


In its Four Principles of Explainable Artificial Intelligence report⁷, NIST identifies key qualities of an explainable AI system: “We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users; provide explanations that correctly reflect the system’s process for generating the output; and that a system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output.”


These factors are aimed at addressing the so-called “black box problem”: consumers might see what data is put into an AI system and the result it produces, but they don’t understand how that result is reached.
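As one illustration of how evidence and reasons can accompany an opaque model’s outputs, the Python sketch below uses permutation importance, a common model-agnostic technique that shuffles each input feature and measures the resulting drop in accuracy. This is one approach among many, chosen purely for illustration; it is not a method prescribed by NIST, and the model and data are synthetic.

```python
# Minimal sketch of a model-agnostic explanation: permutation importance.
# Shuffling a feature that the model relies on degrades its accuracy, so
# the size of the drop indicates which inputs drive the model's outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision system's inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; record the average drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drop when shuffled = {drop:.3f}")
```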


Transparency, viewed as critical to building trust, is also part of the policymaking debate. AI typically works behind the scenes, which means consumers often are unaware that they are engaging with an AI system that is making recommendations, calculations and decisions based on an algorithm. To address transparency concerns, some policymakers have called for new rules requiring disclosure to consumers when they are communicating with AI software so they can make an informed decision about the use of the technology.

5. Policymakers are concerned that consumers may not be aware of how personally identifiable information is being collected

AI systems often collect, analyze and use large sets of data, including individuals’ personally identifiable information. Policymakers are concerned that consumers may not be aware that such information is being collected, how long it is retained or for what purposes it is used. At a May 2023 hearing⁸ of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, senators on both sides of the aisle voiced concerns about data privacy, including calls for greater awareness of how consumer data is being used in AI applications. There is also growing discussion in Washington about whether consumer data protection measures are needed to specifically address the use of AI; for example, the Federal Trade Commission reportedly has launched an investigation into OpenAI’s use of consumer data in its ChatGPT system.⁹

6. Modern AI technologies have the potential to push disinformation and inaccuracies to a new level

Recent congressional hearings also have highlighted that while disinformation and inaccuracies are rampant on the internet, modern AI technologies have the potential to push those concerns to a new level. AI can fabricate videos of individuals, generate lifelike photographs of fictitious people and create social media profiles for nonexistent people. During a hearing earlier this year, Sen. Richard Blumenthal (D-CT) played an AI-generated recording impersonating his own voice to demonstrate to committee members the risks of deepfakes.

As deepfakes proliferate, it will become increasingly difficult for consumers to trust the content they encounter even from seemingly trusted sources.¹⁰,¹¹ Proposals to address the threat include requirements to “watermark” AI-generated content¹² and outright bans¹³ of certain deepfake content. Most recently, the Federal Election Commission in August 2023 advanced a petition¹⁴ that calls for banning political campaigns from disseminating deepfake content that may fraudulently deceive voters about candidates.

7. Some policymakers have suggested governance requirements for the development and deployment of AI

Some policymakers have suggested governance requirements for the development and deployment of AI to address concerns about bias and potential unintended consequences. The Algorithmic Accountability Act¹⁵ is one response being considered. The bill seeks to “bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives” by requiring assessments of algorithms and public disclosures about their use.

The US Equal Employment Opportunity Commission (EEOC) is also exploring¹⁶ the potential benefits and harms of AI in employment decisions through hearings and the efforts of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative¹⁷.


In addition, policymakers could look to some of the accountability mechanisms contemplated in the NIST AI Risk Management Framework to address their concerns. The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) delved specifically into the issue of AI assurance in an April 13, 2023, request for information, which observed¹⁸ that: “Real accountability can only be achieved when entities are held responsible for their decisions. A range of AI accountability processes and tools (e.g., assessments and audits, governance policies, documentation and reporting, and testing and evaluation) can support this process by proving that an AI system is legal, effective, ethical, safe, and otherwise trustworthy — a function also known as providing AI assurance.”
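As a concrete illustration of the “documentation and reporting” mechanisms the NTIA cites, the Python sketch below defines a minimal, hypothetical system record of the kind an internal AI assurance program might maintain. The field names and values are illustrative assumptions, not a prescribed regulatory format.

```python
# Minimal sketch of an internal documentation record supporting AI assurance.
# All field names and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    bias_tests_performed: list[str]
    accountable_owner: str
    last_reviewed: str  # ISO date of the most recent governance review

record = AISystemRecord(
    name="loan_screening_model_v2",
    intended_use="First-pass triage of consumer loan applications",
    training_data_sources=["internal_applications_2018_2022"],
    known_limitations=["Sparse training data for applicants under age 21"],
    bias_tests_performed=["demographic parity gap", "equal opportunity difference"],
    accountable_owner="Model Risk Management",
    last_reviewed="2023-09-15",
)
print(record)
```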


On the subject of accountability, regulators and others are also looking at outcomes based on AI technologies. For example, Securities and Exchange Commission (SEC) Chair Gary Gensler recently remarked in an interview that investment advisers who use AI remain responsible for their recommendations: “Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients. And whether you’re using an algorithm, you have that same duty of care.”¹⁹

8. Policymakers are also raising questions about the rights and ownership of content created by AI

Policymakers are also raising questions about the rights and ownership of content created by AI. During recent congressional²⁰ hearings²¹, members have considered whether AI-generated content is protected by patents, trademarks and copyright like other intellectual property, and have raised questions about who owns AI-generated content and the data sets used to train AI systems.²² These and other questions have already been the subject of litigation and will continue to be debated as the AI regulation discussion evolves.


Summary 

It is unlikely that Congress will pass comprehensive legislation regulating AI in the highly polarized political environment leading up to the 2024 US elections. In the absence of congressional action, state legislatures may fill the policy void, which could lead to a patchwork of laws. We also expect the Biden administration to continue working with leading AI companies to enact change on a voluntary basis, and federal agencies to continue using enforcement actions to police AI use. Differing national approaches to AI regulation may complicate the regulatory landscape for multinational companies using the technology.
