
The White House, on October 4, 2022, unveiled its “Blueprint for an AI Bill of Rights,” outlining non-binding recommendations for the design, use, and deployment of artificial intelligence (AI) and automated systems when such tools are used in ways that affect individuals’ rights, opportunities, or access to critical resources or services.

Employers are increasingly using AI and automated decision-making systems, which refer to software, algorithms, or processes that “make decisions” or provide recommendations, including through the use of data analysis and machine learning. Employers use this technology in many ways, including to screen job candidates, provide employee self-service tools, and evaluate and assess employee job performance. This technology can also increase efficiency, lower the costs of products and services, improve quality, and reduce errors, all of which the White House’s Office of Science and Technology Policy (OSTP) recognized in the Blueprint.

The Blueprint’s primary focus, however, is on broad guidelines to mitigate or address potential negative effects of the emerging uses of these technologies. The White House stated that, for instance, algorithms used for hiring decisions may reflect existing biases and discrimination. In that regard, the Blueprint identified “five principles” to guide entities designing, developing, and deploying AI and automated systems:

1. “Safe and Effective Systems”

According to the Blueprint, AI and automated systems should be developed in consultation with a diverse set of communities, stakeholders, and experts to identify potential risks and impacts from the systems. The Blueprint also recommends that AI and automated systems be designed to allow for independent evaluation, including by “researchers, journalists, ethics review boards, inspectors general, and third-party auditors.” The Blueprint further recommends that the “entities responsible for the development or use” of such a system provide “regularly-updated reports” on how the system is functioning and on the results of independent evaluations.

2. “Algorithmic Discrimination Protections”

According to the Blueprint, individuals should not face discrimination through the use of AI and automated systems based on their race, color, ethnicity, sex (including pregnancy and childbirth, gender identity, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other protected class. The guidelines highlight various “proactive and continuous measures” designers, developers, and deployers should undertake to avoid discrimination, including equitable design, proactive equity assessments, and plain-language reporting of algorithmic impact assessments.

3. “Data Privacy”

According to the Blueprint, AI and automated systems should include protections against the overcollection and abuse of personal data.

4. “Notice and Explanation”

The Blueprint states that designers, developers, and deployers of AI and automated systems should provide clear explanations on how an overall system functions and provide notice that such systems are in use.

5. “Human Alternatives, Consideration, and Fallback”

According to the Blueprint, AI and automated systems should include access to an individual who can review and remedy an error or issue and, where appropriate, the ability to opt out of an automated system and deal with a human without being disadvantaged.

Key Takeaways

While the Blueprint does not create any specific regulations or legal obligations, the document outlines how regulators may view the use of AI and automated systems and may forecast key concepts (like notice, auditing, and human alternatives) to be expected in forthcoming federal or state regulation or legislation. The White House separately announced that several federal agencies will be taking action to advance the guidelines in the Blueprint.

The U.S. Department of Health and Human Services (HHS), in July 2022, issued a proposed rule that includes a prohibition on discrimination by algorithms used for health care decision-making by covered health programs. In May 2022, the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) issued new technical assistance advising employers that the use of AI and algorithmic decision-making processes may result in unlawful discrimination against individuals with disabilities.

Additionally, a growing number of jurisdictions, including Illinois, Maryland, and New York City, have passed laws regulating the use of certain types of AI and automated systems to make employment decisions. Others, like California, are currently considering specific regulations concerning the use of these tools.

Employers using AI and automated systems may want to continue to watch federal and state developments regarding these tools. When designing or implementing new systems, employers may further want to take into consideration how to establish compliance guardrails to mitigate against legal risks.

Ogletree Deakins will continue to monitor and post updates to the firm’s Technology blog on the evolving regulatory landscape and potential compliance issues related to the use of this emerging technology in the workplace. Important information for employers is also available via the firm’s webinar and podcast programs.
