1) Nov 3 FRN -- The Office of Management and Budget (OMB) is seeking public comment on a draft memorandum titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (AI). As proposed, the memorandum would establish new agency requirements in areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public. The full text of the draft memorandum is available for review at https://www.ai.gov/input and https://www.regulations.gov. Written comments must be received on or before December 5, 2023.

Through this Request for Comment, OMB hopes to gather information on the questions posed below. However, this list is not intended to limit the scope of topics that may be addressed. Commenters are invited to provide feedback on any topic believed to have implications for the content or implementation of the proposed memorandum.

OMB is requesting feedback related to the following:

1. The composition of Federal agencies varies significantly in ways that will shape the way they approach governance. An overarching Federal policy must account for differences in an agency's size, organization, budget, mission, organic AI talent, and more. Are the roles, responsibilities, seniority, position, and reporting structures outlined for Chief AI Officers sufficiently flexible and achievable for the breadth of covered agencies?

2. What types of coordination mechanisms, either in the public or private sector, would be particularly effective for agencies to model in their establishment of an AI Governance Body? What are the benefits or drawbacks of having agencies establish a new body to perform AI governance versus updating the scope of an existing group (for example, agency bodies focused on privacy, IT, or data)?

3. How can OMB best advance responsible AI innovation?

4. With adequate safeguards in place, how should agencies take advantage of generative AI to improve agency missions or business operations?

5. Are there use cases for presumed safety-impacting and rights-impacting AI (Section 5 (b)) that should be included, removed, or revised? If so, why?

6. Do the minimum practices identified for safety-impacting and rights-impacting AI set an appropriate baseline that is applicable across all agencies and all such uses of AI? How can the minimum practices be improved, recognizing that agencies will need to apply context-specific risk mitigations in addition to what is listed?

7. What types of materials or resources would be most valuable to help agencies, as appropriate, incorporate the requirements and recommendations of this memorandum into relevant contracts?

8. What kind of information should be made public about agencies' use of AI in their annual use case inventory?

FRN: https://www.federalregister.gov/d/2023-24269

2) Nov 1 [press release] -- OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence

This week, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the United States takes action to realize the tremendous promise of AI while managing its risks, the federal government will lead by example and provide a model for the responsible use of the technology. As part of this commitment, today, ahead of the UK Safety Summit, Vice President Harris will announce that the Office of Management and Budget (OMB) is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI. [OMB welcomes public comment on or before December 5, 2023.]

Every day, the federal government makes decisions and takes actions that have profound impacts on the lives of Americans. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society. The proposed guidance builds on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework by mandating a set of minimum evaluation, monitoring, and risk mitigation practices derived from these frameworks and tailoring them to the context of the federal government. By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public—safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation—the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation.

o Strengthening AI Governance

To improve coordination, oversight, and leadership for AI, the draft guidance would direct federal departments and agencies to:

-- Designate Chief AI Officers, who would have the responsibility to advise agency leadership on AI, coordinate and track the agency’s AI activities, advance the use of AI in the agency’s mission, and oversee the management of AI risks.
-- Establish internal mechanisms for coordinating the efforts of the many existing officials responsible for issues related to AI. As part of this, large agencies would be required to establish AI Governance Boards, chaired by the Deputy Secretary or equivalent and vice-chaired by the Chief AI Officer.
-- Expand reporting on the ways agencies use AI, including providing additional detail on AI systems’ risks and how the agency is managing those risks.
-- Publish plans for the agency’s compliance with the guidance.

o Advancing Responsible AI Innovation

To expand and improve the responsible application of AI to the agency’s mission, the draft guidance would direct federal agencies to:

-- Develop an agency AI strategy, covering areas for future investment as well as plans to improve the agency’s enterprise AI infrastructure, its AI workforce, its capacity to successfully develop and use AI, and its ability to govern AI and manage its risks.
-- Remove unnecessary barriers to the responsible use of AI, including those related to insufficient information technology infrastructure, inadequate data and sharing of data, gaps in the agency’s AI workforce and workforce practices, and cybersecurity approval processes that are poorly suited to AI systems.
-- Explore the use of generative AI in the agency, with adequate safeguards and oversight mechanisms.

o Managing Risks from the Use of AI

To ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the draft guidance would:

-- Mandate the implementation of specific safeguards for uses of AI that impact the rights and safety of the public. These safeguards include conducting AI impact assessments and independent evaluations; testing the AI in a real-world context; identifying and mitigating factors contributing to algorithmic discrimination and disparate impacts; monitoring deployed AI; sufficiently training AI operators; ensuring that AI advances equity, dignity, and fairness; consulting with affected groups and incorporating their feedback; notifying and consulting with the public about the use of AI and agencies' plans to achieve consistency with the proposed policy; notifying individuals potentially harmed by a use of AI and offering avenues for remedy; and more.
-- Define uses of AI that are presumed to impact rights and safety, including many uses involved in health, education, employment, housing, federal benefits, law enforcement, immigration, child welfare, transportation, critical infrastructure, and safety and environmental controls.
-- Provide recommendations for managing risk in federal procurement of AI. After finalization of the proposed guidance, OMB will also develop a means to ensure that federal contracts align with its recommendations, as required by the Advancing American AI Act and President Biden’s AI Executive Order of October 30, 2023.

AI is already helping the government better serve the American people, including by improving health outcomes, addressing climate change, and protecting federal agencies from cyber threats. In 2023, federal agencies identified over 700 ways they use AI to advance their missions, and this number is only likely to grow. When AI is used in agency functions, the public deserves assurance that the government will respect their rights and protect their safety.

Draft policy: https://ai.gov/input/ and https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf
OMB press release: https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/
