1) FACT SHEET: Biden-Harris Administration Announces Key Actions to Advance Tech Accountability and Protect the Rights of the American Public
Today, the Biden-Harris Administration’s Office of Science and Technology Policy released a Blueprint for a “Bill of Rights” to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public. President Biden is standing up to special interests and has long said it is time to hold big technology companies accountable for the harms they cause and to ensure the American public is protected in an increasingly automated world. The framework builds on the Biden-Harris Administration’s work to hold big technology accountable, protect the civil rights of Americans, and ensure technology is working for the American people.
Automated technologies are increasingly used to make everyday decisions affecting people’s rights, opportunities, and access in everything from hiring and housing to healthcare, education, and financial services. While these technologies can drive great innovations, like enabling early cancer detection or helping farmers grow food more efficiently, studies have shown how AI can distribute opportunities unequally or embed bias and discrimination in decision-making processes. As a result, automated systems can replicate or deepen the inequalities that ordinary people already face, underscoring the need for greater transparency, accountability, and privacy.
The Blueprint for an AI Bill of Rights addresses these urgent challenges by laying out five core protections to which everyone in America should be entitled:
-- Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
-- Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
-- Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
-- Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
-- Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Developed through extensive consultation with the American public, stakeholders, and U.S. government agencies, the Blueprint also includes concrete steps that governments, companies, communities, and others can take to build these key protections into policy, practice, or technological design and to ensure automated systems work for the American people. . . .
2) OSTP: Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age
The White House Office of Science and Technology Policy (OSTP) is today releasing the Blueprint for an AI Bill of Rights to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public.
These technologies can drive great innovations, like enabling early cancer detection or helping farmers grow food more efficiently. But in the United States and abroad, people are increasingly being surveilled or ranked by automated systems in their workplaces and in their schools, in housing and banking, in healthcare and the legal system, and beyond. Algorithms used across many sectors are plagued by bias and discrimination, and too often developed without regard to their real-world consequences and without the input of the people who will have to live with their results.
These problems, which have expanded dramatically over the past decade, are threatening the rights of millions and hurting people in historically marginalized communities.
That’s why today we’re laying out five common sense protections to which everyone in America should be entitled . . .
We have been guided in this effort by a set of pressing questions: What could it look like for industry developers and academic researchers to think about equity at the start of a design process, and not only after issues of discrimination emerge downstream? What kind of society could we have if all innovation began with ethical forethought? How do we ensure that the guardrails to which we are entitled in our day-to-day lives carry over into our digital lives?
The Blueprint for an AI Bill of Rights begins to answer these questions. It offers a vision for a society where protections are embedded from the beginning, where marginalized communities have a voice in the development process, and where designers work hard to ensure the benefits of technology reach all people. . . .
3) DOL -- What the Blueprint for an AI Bill of Rights Means for Workers
The growth of Artificial Intelligence, including the use of algorithms and automated management systems, presents unique opportunities and challenges in the workplace. It also carries the risk of increasing inequities and prejudicial outcomes for workers, from hiring decisions and scheduling to disparities in pay, promotions, demotions, and termination.
Some of these systems pose particular risks for women and workers of color, both because they can embed systemic biases and because these workers are more likely to be working in sectors utilizing this technology. Workers are also affected by AI systems when accessing benefits or services, such as unemployment insurance. As AI grows in the employment and benefits space, it should capitalize on opportunities to enhance decision-making and efficiency while also protecting workers’ privacy, safety, and rights.
Workers are coping with unprecedented and enhanced forms of electronic monitoring and productivity tracking, according to Labor Department stakeholders. These technologies, they tell us, are negatively affecting their workplace conditions and health and safety, including their mental health. For instance, call center agents, who are often electronically monitored and held to similarly intensive productivity standards as warehouse workers, report high levels of stress, difficulties sleeping, and repetitive stress injuries. In addition, constant monitoring may discourage workers from engaging in legally protected activities, including taking action with other co-workers to improve working conditions, organizing and collectively bargaining, or filing complaints about violations of labor and employment rights laws with government agencies.
Even when workers know data about them is being collected and used to monitor their performance and provide valuable information to their employer, they don’t control or own this data. In addition, a lack of human oversight in automated systems may mean an inability to correct or appeal adverse employment decisions or benefits determinations.
Monitoring practices have become more common with increased remote work. This is particularly true for occupations that require large amounts of computer work. For instance, some companies require workers to install facial or eye recognition systems that scan workers’ faces at regular intervals to verify their identity and ensure that they are in front of their computer and on task. If workers look away from their screens for too long, the system registers them as no longer at work.
The administration’s Blueprint for an AI Bill of Rights, which followed extensive collaboration with stakeholders and across the federal government, acknowledges these potential harms and notes steps employers can take to mitigate them. The Blueprint shares five aspirational and interrelated principles for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy . . .
The Blueprint for an AI Bill of Rights also includes a technical companion with concrete steps that can be taken now to integrate these principles into the use of AI. Many of these are topics the Department of Labor is already exploring. For example, the Office of Federal Contract Compliance Programs and the Equal Employment Opportunity Commission launched a multiyear collaborative effort to reimagine hiring and recruitment practices, including in the use of automated systems. In a recent roundtable, speakers explored potential barriers automated technologies present to diversity, equity, inclusion, and accessibility.
The Office of Labor Management Standards is ramping up enforcement of required surveillance reporting to protect worker organizing. The Partnership on Employment & Accessible Technology (PEAT), funded by the Office of Disability Employment Policy at the Department of Labor, has released the AI & Disability Inclusion Toolkit and the Equitable AI Playbook.
Businesses will increasingly look to adopt AI tools that automate decision-making and promote productivity, efficiency, customer satisfaction, and worker safety, among other goals. This Blueprint provides the framework to ensure these tools are safe and effective, do not have unintended consequences, and are not used to threaten workers’ access to a healthy and safe workplace, collective action and labor representation, and a workplace free from discrimination. Ensuring worker input and voice are included in the design and deployment of such AI is critical to enhancing its value in the workplace.