Apr 12 -- The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem. Written comments must be received on or before June 12, 2023.

To advance trustworthy AI, the White House Office of Science and Technology Policy produced a Blueprint for an AI Bill of Rights (“Blueprint”), providing guidance on “building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy.” The National Institute of Standards and Technology (NIST) produced an AI Risk Management Framework, which provides a voluntary process for managing a wide range of potential AI risks. Both of these initiatives contemplate mechanisms to advance the trustworthiness of algorithmic technologies in particular contexts and practices.

Mechanisms such as measurements of AI system risks, impact assessments, and audits of AI system implementation against valid benchmarks and legal requirements can build trust. They do so by helping to hold entities accountable for developing, using, and continuously improving the quality of AI products, thereby realizing the benefits of AI and reducing harms. These mechanisms can also incentivize organizations to invest in AI system governance and responsible AI products. Assurance that AI systems are trustworthy can assist with compliance efforts and help create marks of quality in the marketplace.

The term “trustworthy AI” is intended to encapsulate a broad set of technical and socio-technical attributes of AI systems such as safety, efficacy, fairness, privacy, notice and explanation, and availability of human alternatives. According to NIST, “trustworthy AI” systems are, among other things, “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.” 

Along the same lines, the Blueprint identifies a set of five principles and associated practices to help guide the design, use, and deployment of AI and other automated systems. These are: (1) safety and effectiveness, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback. These principles align with the trustworthy AI principles propounded by the Organisation for Economic Co-operation and Development (OECD) in 2019, which 46 countries have now adopted. Other formulations of principles for responsible or trustworthy AI containing all or some of the above-stated characteristics are found in industry codes, academic writing, civil society codes, guidance and frameworks from standards bodies, and other governmental instruments. AI assurance is the practical implementation of these principles in applied settings, with adequate internal or external enforcement to provide for accountability.

Many entities already engage in accountability practices around cybersecurity, privacy, and other risks related to digital technologies. The selection of AI and other automated systems for particular scrutiny is warranted because of their unique features and fast-growing importance in American life and commerce. As NIST notes, these systems are “trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks—and benefits—can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.”

The objective of this engagement is to solicit input from stakeholders in the policy, legal, business, academic, technical, and advocacy arenas on how to develop a productive AI accountability ecosystem. Specifically, NTIA hopes to identify the state of play, gaps, and barriers to creating adequate accountability for AI systems, any trustworthy AI goals that might not be amenable to requirements or standards, how supposed accountability measures might mask or minimize AI risks, the value of accountability mechanisms to compliance efforts, and ways governmental and non-governmental actions might support and enforce AI accountability practices. . . .

FRN: https://www.federalregister.gov/d/2023-07776
