August 24 -- NIST extends the comment period to September 15, 2021. https://www.federalregister.gov/documents/2021/08/24/2021-18108/artificial-intelligence-risk-management-framework
July 29 -- The National Institute of Standards and Technology (NIST) is developing a framework that can be used to improve the management of risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. This notice requests information to help inform, refine, and guide the development of the AI RMF. The Framework will be developed through a consensus-driven, open, and collaborative process that will include public workshops and other opportunities for stakeholders to provide input. Comments in response to this notice must be received by 5:00 p.m. Eastern time on August 19, 2021.
Congress has directed NIST to collaborate with the private and public sectors to develop a voluntary AI RMF. The Framework is intended to help designers, developers, users, and evaluators of AI systems better manage risks across the AI lifecycle. For purposes of this RFI, "managing" means identifying, assessing, responding to, and communicating AI risks. "Responding" to AI risks means avoiding, mitigating, sharing, transferring, or accepting risk. "Communicating" AI risk means disclosing and negotiating risk and sharing it with connected systems and actors in the domain of design, deployment, and use. "Design, development, use, and evaluation" of AI systems includes procurement, monitoring, or sustainment of AI components and systems.

The Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as mitigation of harmful uses. The Framework should consider and encompass principles such as transparency, fairness, and accountability during design, deployment, use, and evaluation of AI technologies and systems. Given the broad and complex uses of AI, the Framework should consider risks from unintentional, unanticipated, or harmful outcomes that arise from intended uses, secondary uses, and misuses of the AI. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies and systems, products, and services. NIST is interested in whether stakeholders define or use other characteristics and principles.

Among other purposes, the AI RMF is intended to be a tool that would complement and assist with broader aspects of enterprise risk management which could affect individuals, groups, organizations, or society.

NIST is soliciting input from all interested stakeholders, seeking to understand how individuals, groups, and organizations involved with designing, developing, using, or evaluating AI systems might be better able to address the full scope of AI risk, and how a framework for managing AI risks might be constructed. Stakeholders include but are not limited to industry, civil society groups, academic institutions, federal agencies; state, local, territorial, tribal, and foreign governments; standards developing organizations; and researchers.

NIST intends the Framework to provide a prioritized, flexible, risk-based, outcome-focused, and cost-effective approach that is useful to the community of AI designers, developers, users, evaluators, and other decision makers and is likely to be widely adopted. The Framework's development process will involve several iterations to encourage robust and continuing engagement and collaboration with interested stakeholders. This will include open, public workshops, along with other forms of outreach and feedback. This RFI is an important part of that process.
Artificial Intelligence Risk Management Framework RFI: https://www.federalregister.gov/documents/2021/07/29/2021-16176/artificial-intelligence-risk-management-framework
