Biden sets out AI Bill of Rights to protect citizens from threats posed by automated systems

US president Joe Biden has set out plans to create an AI Bill of Rights that is intended to protect citizens from automated systems “that threaten the rights of the American public”.
The blueprint is intended to provide a guide to the development of artificial intelligence across the US, as part of an effort to address concerns that AI systems can embed or exacerbate existing societal biases. According to the White House, “well documented” problems with AI include systems intended to support patient care that have proven unsafe, ineffective or biased, and algorithms used in hiring and credit decisions that have been found to reflect and reproduce existing unwanted inequities or to embed new harmful bias and discrimination. In addition, unchecked social media data collection undermines people’s privacy, often without their knowledge or consent.
The White House said that these “deeply harmful” outcomes are not inevitable, and added that automated systems and algorithms had also brought benefits, such as helping farmers to grow food more efficiently, predicting storm paths and identifying diseases in patients.
However, to ensure positive outcomes, the White House said “civil rights or democratic values” needed to be affirmed, in line with Biden’s pledge to use the power of the federal government to root out inequity, embed fairness in decision-making processes, and advance civil rights, equal opportunity, and racial justice.
Launching the plan, Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy (OSTP), said the Biden administration is “really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the centre and civil rights at the centre of the ways that we make and use and govern technologies”, adding: “We can and should expect better and demand better from our technologies.”
The Bill of Rights is intended to reflect these priorities in guidance for the development of AI in both the private and public sectors. The OSTP has identified five principles that should guide the design, use and deployment of automated systems to protect the American public: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
These five principles are intended to form “an overlapping set of backstops against potential harms”. In addition to the high-level blueprint for the use of AI, the OSTP has developed a handbook for anyone seeking to incorporate these protections into policy and practice.
The framework provides guidance on automated systems, with the aim of ensuring that rights, opportunities and access to critical resources are equally available and fully protected.
Those involved in the development of AI systems are urged to consider the impact of automated systems on individuals’ or communities’ rights, opportunities or access; on civil rights, civil liberties and privacy; on equal opportunities; and on access to critical resources or services, such as healthcare, financial services and social services.