The Blueprint for an AI Bill of Rights and Its Impact

By Carolyn Wimbly Martin and Sara Etemad-Moghadam

After a yearlong process of gathering input from communities, policymakers and experts across fields and sectors, the White House Office of Science and Technology Policy (“Office”) released a “Blueprint for an AI Bill of Rights” (“Blueprint”) in fall 2022 to guide the responsible design, use and deployment of automated systems and artificial intelligence. The Office identified five principles that should be considered in developing policy and practice: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; and (5) Human Alternatives, Consideration and Fallback. While the Blueprint itself is non-binding and does not constitute government policy, it is intended to support the development of policies and practices for the use of automated systems that align with democratic values and civil rights.

The Blueprint uses a two-part test to determine what systems fall within this framework, i.e., “(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.” The Blueprint defines an “automated system” as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” Examples of such automated systems include real-time facial recognition systems, social media monitoring, systems that use or collect health-related data, ad-targeting systems, admissions algorithms, hiring or termination algorithms and loan allocation algorithms. The framework’s purpose is to outline protections that should be applied to all automated systems against potential harms to rights, opportunities or access, including civil rights, civil liberties, privacy, equal opportunities and access to critical resources or services. Throughout the Blueprint, accessibility and transparency are recurring themes, highlighting the importance of using clear, understandable language, rather than technical jargon, to inform the public’s understanding of this fast-changing technology. 

Principle 1: Safe and Effective Systems

The first principle of the Blueprint is that systems should be developed with input from and in consultation with diverse communities, stakeholders and domain experts to identify, evaluate and protect against risks and potential harm. This includes pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrates the system is safe and effective for its intended use. The Blueprint recognizes that these technologies often do not work as intended or as promised, causing substantial harm. To ensure that an automated system is safe and effective, the system should implement safeguards to proactively protect individuals and communities from ongoing harm; avoid using inappropriate, outdated or irrelevant data, including reuse of data that could cause compounded harm; and demonstrate the safety and effectiveness of the automated system.

The first principle has already been implemented in laws, policies and practical approaches, including Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. Executive Order 13960 requires covered federal agencies to adhere to nine principles: (a) legality and respect for the nation’s values, (b) purposefulness and a performance-driven approach, (c) accuracy, reliability and effectiveness, (d) safety, security and resilience, (e) comprehensibility, (f) responsibility and traceability, (g) regular monitoring, (h) transparency and (i) accountability.

Federal agencies, including the Department of Energy (“DOE”), have already released inventories of their AI use cases and are implementing plans for compliant AI systems. Multiple National Science Foundation (“NSF”) programs, including the National AI Research Institutes, the Cyber Physical Systems program, the Formal Methods in the Field program and the Designing Accountable Software Systems program, support research into appropriate automated systems.

Principle 2: Algorithmic Discrimination Protections

The second principle is that systems and algorithms should not discriminate and should be used and designed in a consistent, systematic, fair, just and impartial way. Unfortunately, there is significant evidence that automated systems can produce inequitable outcomes when they use data that fails to account for existing systemic biases based on race, color, ethnicity, sex, religion, age, disability or other classifications protected by law. Examples of such automated systems include, in the words of the Blueprint, “facial recognition technology that can contribute to wrongful and discriminatory arrests, hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount the severity of certain diseases in Black Americans.”

Companies, nonprofit organizations and federal government agencies have already taken steps to prevent algorithmic discrimination, including instituting bias testing in their product quality assessment and launch procedures, developing standards and guidance to prevent bias and implementing audit and impact assessments to identify and mitigate potential algorithmic discrimination. The impact of these additional safeguards is yet to be determined. 
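
To make the idea of bias testing concrete, the following minimal Python sketch compares selection rates across demographic groups in an algorithm's output. It is entirely hypothetical, not drawn from the Blueprint or from any company's actual procedures; the 0.8 threshold is the "four-fifths rule" commonly used as a rule of thumb in U.S. employment guidance.

    from collections import defaultdict

    def selection_rates(decisions):
        """Compute the positive-outcome rate for each demographic group.

        decisions: iterable of (group, selected) pairs, where selected is a bool.
        """
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, selected in decisions:
            totals[group] += 1
            if selected:
                positives[group] += 1
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Ratio of the lowest group selection rate to the highest; a value
        below 0.8 (the "four-fifths rule") is a common red flag."""
        return min(rates.values()) / max(rates.values())

    # Hypothetical audit of a hiring algorithm's decisions.
    decisions = [("A", True), ("A", True), ("A", False), ("A", True),
                 ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = selection_rates(decisions)
    print(rates)  # {'A': 0.75, 'B': 0.25}
    print(disparate_impact_ratio(rates) >= 0.8)  # False -- potential adverse impact

Real audits are far more involved, examining error rates, feature proxies and intersectional effects, but comparing outcomes across protected groups is the common starting point.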

Attributes that are highly correlated with demographic features, known as proxies, can also lead to algorithmic discrimination. A proxy is a variable used in place of a variable of interest when the variable of interest cannot be directly measured; for example, per capita GDP is commonly used as a proxy for the standard of living. Critically, the U.S. health care system relies on commercial algorithms to guide crucial health decisions, and some of these algorithms use health costs as a proxy for health needs. Because less money is generally spent on Black patients with the same level of need as White patients, such an algorithm incorrectly concludes that Black patients are healthier than equally sick White patients.
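
The following short Python sketch, with entirely synthetic numbers chosen only to illustrate the mechanism described above, shows how ranking patients by spending rather than by underlying need systematically under-prioritizes a group whose care costs less at the same level of need.

    # Synthetic patients: (group, true_need, annual_spending). Need is
    # identical across groups, but group B's care historically costs less
    # at the same level of need -- the pattern documented in U.S. health data.
    patients = [
        ("A", 9, 9000), ("A", 7, 7000), ("A", 5, 5000),
        ("B", 9, 6300), ("B", 7, 4900), ("B", 5, 3500),
    ]

    # A cost-as-proxy algorithm enrolls the three highest spenders in a
    # care-management program, intending to reach the sickest patients.
    by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
    print([p[0] for p in by_spending[:3]])  # ['A', 'A', 'B'] -- skewed toward group A

    # Ranking by the true variable of interest instead captures both
    # highest-need patients, one from each group.
    by_need = sorted(patients, key=lambda p: p[1], reverse=True)
    print([p[0] for p in by_need[:3]])  # ['A', 'B', 'A']

The proxy is not chosen maliciously; it simply encodes a historical disparity, which is why the Blueprint calls for testing proxies before deployment.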

Additionally, system designers and non-governmental organizations should ensure that automated systems are accessible to people with disabilities and adhere to applicable accessibility compliance guidelines.

Principle 3: Data Privacy

The Blueprint’s third principle is data privacy. Users should be protected from abusive data practices via built-in protections and should have control over how their data is used. Data collection should conform to reasonable expectations, and data should be used strictly for the specific purpose for which it is collected. System developers should seek consent and respect users’ decisions regarding the collection, use, access, transfer and deletion of data. Entities should establish clear timelines for data retention and promptly delete data once it is no longer necessary for the purpose for which it was collected.

Federal law has not yet addressed the increasing prevalence of private data collection, nor the ability and means of acquiring and using this data. However, existing federal laws, including the Health Insurance Portability and Accountability Act (“HIPAA”), the Americans with Disabilities Act (“ADA”), the Fair Credit Reporting Act (“FCRA”) and the Fair and Accurate Credit Transactions Act (“FACT Act”), do protect certain personally identifiable information (“PII”). The Blueprint highlights certain “sensitive” domains, like criminal justice and personal finance, which deserve enhanced data protections. The public should also be free from unchecked surveillance, and any surveillance technologies should be subject to oversight and assessment, whether by an ethics committee or another compliance body.

Principle 4: Notice and Explanation

The Blueprint’s fourth principle is notice and explanation: individuals should know when an automated system is being used and understand how and why it contributes to outcomes that affect them. System designers should provide notices that clearly describe the function, role and outcomes of the automated system, and system users should be notified of significant changes in use or key functionality. For example, an applicant might believe that a person rejected their resume or credit application, or a defendant in a courtroom might think that a judge used their own judgment to deny bail. If such individuals do not know that an algorithm or automated system made these decisions, they will not be able to correct possible errors or appropriately contest the results. Notices should identify the entity responsible for designing each component of the system and the entity using the automated system. Notices and explanations should be accessible to people with differing educational backgrounds and levels of language proficiency, as well as to users with disabilities, as appropriate to the target audience. The National Institute of Standards and Technology (“NIST”) has conducted research on how to explain AI, and the NSF’s program on Fairness in Artificial Intelligence also supports research into explainable AI.
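
As a purely illustrative sketch, the Python snippet below assembles the elements described above, the designing entity, the deploying entity and the system's role in the outcome, into a plain-language notice. The field names and entities are invented for this example; the Blueprint prescribes no particular format.

    from dataclasses import dataclass

    @dataclass
    class AutomatedDecisionNotice:
        """Elements of a plain-language notice, per the Blueprint's themes."""
        designer: str         # entity responsible for designing the system
        deployer: str         # entity using the automated system
        function: str         # what the system does, in plain language
        role_in_outcome: str  # how the system contributed to this decision

        def render(self) -> str:
            return (
                f"This decision was informed by an automated system that "
                f"{self.function}. The system was designed by {self.designer} "
                f"and is used by {self.deployer}. In your case, "
                f"{self.role_in_outcome}."
            )

    notice = AutomatedDecisionNotice(
        designer="Acme Analytics (hypothetical)",
        deployer="Example Lending Co. (hypothetical)",
        function="scores credit applications",
        role_in_outcome="its score was the primary factor in the denial",
    )
    print(notice.render())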

Principle 5: Human Alternatives, Consideration and Fallback

The last principle of the Blueprint is the requirement for an opt-out provision for automated systems, allowing for a human alternative when appropriate. Appropriateness is determined on a case-by-case basis. Human consideration should “be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.” This is especially important for automated systems used in sensitive domains.

In 2022, for example, the Internal Revenue Service (“IRS”) announced that it would not require facial biometric data to identify taxpayers. Instead of requiring taxpayers to take a selfie for ID.me, a commercial online identity verification service, the IRS now gives taxpayers the option of verifying their identity during a live, virtual interview with an agent.

2023 Initiatives Following the Blueprint Release

In January 2023, NIST released version 1.0 of its AI Risk Management Framework, a voluntary framework intended to foster the development of AI products, services and systems that are accurate, explainable, reliable, safe, secure and free from harmful bias. On June 22, 2023, U.S. Secretary of Commerce Gina Raimondo announced that NIST is forming a Public Working Group on Generative AI under the AI Risk Management Framework. The working group will draw upon technical experts from both the private and public sectors to gather input, support NIST’s work on testing, evaluation and measurement and explore productive applications of generative AI.

In May 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met to discuss “Oversight of AI: Rules for Artificial Intelligence,” hearing from experts including Sam Altman, the CEO of OpenAI. Altman expressed support for regulation of AI and called for policymakers “…to facilitate regulation that balances incentivizing safety, while ensuring that people are able to access the technology’s benefits.” He also proposed that Congress create a new government agency responsible for overseeing automated systems.

In May 2023, Vice President Kamala Harris met with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI to discuss whether their respective technologies align with the Blueprint and with public and private interests. Those companies also agreed to allow their AI models to be publicly evaluated at DEF CON 31, a hacker convention held in August 2023.

Conclusion

While the Blueprint addresses many concerns related to the use of automated systems, the fast-evolving technology makes it difficult to address all potential ramifications of these systems. Illustrative of the pace of change, European Parliament committees approved the draft AI Act, legislation to regulate artificial intelligence, in a vote on May 11, 2023, and lawmakers are already working to change the definition of AI in anticipation of future technologies. The original draft of the AI Act did not cover general-purpose systems, but the success of ChatGPT and other large language models has forced EU lawmakers to reconsider how to regulate this type of AI. The principles laid out in the Blueprint should provide guidance for Congressional action in the near term, with the flexibility to address the rapidly changing AI landscape.

The AI landscape is evolving, and Lutzker & Lutzker will continue to monitor and provide updates on the technological, regulatory and legal developments in the U.S. and globally.