Overview
In our comment letter, we respond to select questions on artificial intelligence (AI) and accountability mechanisms in the request for comment from the National Telecommunications and Information Administration (NTIA), the federal agency responsible for advising the President on telecommunications and information policy issues.
While there is currently no generally accepted, standardized accountability mechanism or supporting information disclosure regime for AI systems, industry, government and other stakeholders are exploring several measures. Each measure carries benefits and risks that must be managed effectively to promote successful implementation of, and public trust in, AI systems.
This letter describes the purpose and value of AI accountability mechanisms in establishing trust and confidence in AI systems among both internal and external stakeholders. We provide examples of AI accountability mechanisms in use today across sectors such as technology, health care and financial services. We also describe verification schemes as part of an accountability system and list key factors to consider in establishing them. These factors include:
- Whether the mechanism is intended for internal or external accountability,
- Which objectives the mechanism is intended to measure achievement against, and
- The amount of evidence to be collected as part of a verification system.
Our letter also describes the types of records and documentation necessary to support AI accountability, responds to questions about how to communicate accountability results, and discusses the importance of consistent requirements across jurisdictions, citing several key considerations.