On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued the latest federal guidance on employer use of artificial intelligence (AI) and automated decision-making tools. The new guidance reinforces the EEOC’s ongoing focus on the use of AI in the workplace and serves as an important reminder to employers of potential legal compliance issues associated with the use of such tools. Importantly, the new guidance is nonbinding, does not have the force of law, and does not announce “new policy,” but it does provide additional insight into the EEOC’s views on how existing rules or policies apply to employer use of AI-based tools.

Quick Hits

  • The EEOC issued new guidance on the potential disparate impact of employers’ use of algorithmic, automated, or AI decision-making tools to make job applicant and employee selection decisions.
  • The guidance follows similar warnings the EEOC and the DOJ issued in 2022 regarding the impact of such tools on employees and job applicants with disabilities under the ADA.

Level Setting

In May 2022, the EEOC and the U.S. Department of Justice (DOJ) issued technical assistance guidance (the “ADA AI Guidance”) outlining their enforcement position and recommendations concerning employer use of various advanced technologies and compliance with the Americans with Disabilities Act (ADA). The ADA AI Guidance did not include an extended discussion of theories of disparate treatment (intentional discrimination) or disparate impact (a facially neutral policy or practice that disproportionately affects a protected group). Instead, it focused, in large part, on the potential for new workplace decision-making technologies to “screen out” or exclude individuals with disabilities from employment opportunities. The EEOC also identified “promising practices” to mitigate the screen-out risk.

As with the ADA AI Guidance, the new guidance from the EEOC on automated decision-making tools is narrow in focus, addressing only:

  1. Title VII of the Civil Rights Act;
  2. AI tools as employee selection procedures, including hiring, promotion, and firing; and
  3. Potential disparate or adverse impact.

Also similar to the ADA AI Guidance, the EEOC began its latest AI guidance by defining three key technology terms that are important for employers to understand:

  • “Software.” Defined by the EEOC to refer to the “information technology programs or procedures that provide instructions to a computer on how to perform a given task or function,” including applications or apps.
  • “Algorithm.” Defined by the EEOC as a “set of instructions that can be followed by a computer to accomplish some end.” The agency notes, for example, that human resources software uses algorithms to “process data to evaluate, rate, and make other decisions about job applicants and employees.”
  • “Artificial Intelligence (AI).” The EEOC noted that the definition of AI is “evolving” but referred to a definition by the U.S. Congress in the National Artificial Intelligence Initiative Act of 2020 that defined AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The agency stated that employers and software vendors often “use AI when developing algorithms that help employers evaluate, rate, and make other decisions about job applicants and employees.”

These definitions are nearly identical to those in the ADA AI Guidance. In each, the EEOC highlighted common uses of technology tools that the agency is increasingly scrutinizing, including resume scanners that prioritize applicants, employee monitoring software that counts keystrokes or tracks other factors, “virtual assistants” or “chatbots” that interact with job candidates, video interview software that examines candidates’ facial expressions and speech patterns, and employment testing software that generates “job fit” scores.

In a departure from the ADA AI Guidance, the EEOC in its latest guidance included an extended discussion of disparate impact theory in the context of algorithmic decision-making tools. Specifically, the EEOC reinforced for employers that, under disparate impact theory, if an employer uses an employment practice that has a disproportionate impact based on race, color, religion, sex, or national origin, the employer must show that the procedure is job-related and consistent with business necessity. If the employer makes that showing, the analysis turns to whether a less discriminatory alternative is available. The EEOC referred to and described its 1978 Uniform Guidelines on Employee Selection Procedures, which provide information for employers on how to assess whether selection procedures are lawful under a disparate impact analysis.

Key Guidance

In the latest guidance, the EEOC emphasized a number of important points:

  • Algorithmic decision-making tools can be selection procedures subject to the 1978 Selection Guidelines. Any algorithmic decision-making tool that functions as a “measure, combination of measures or procedure” that is used as a basis for an employment decision—i.e., “used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees”—may be subject to the 1978 Selection Guidelines. Such tools can be reviewed under the 1978 Selection Guidelines by assessing whether “use of the procedure causes a selection rate for individuals in the protected group that is ‘substantially’ less than the selection rate for individuals in another group.”
  • The “four-fifths rule” is not always the standard that will be relied upon to assess disparate impact. As noted, to assess disparate impact, the selection rate for one group must be calculated and compared to the selection rate for another group or groups. The most common test of whether selection rates “substantially” differ is the “four-fifths rule,” which states that “one rate is substantially different than another if their ratio is less than four-fifths” (or 80 percent). In its most recent guidance, the EEOC cautioned employers that the four-fifths rule is not always an appropriate test and that smaller differences in selection rates may indicate adverse impact. The EEOC gave employers the practical tip that when evaluating algorithmic decision-making tools, employers may “want to ask the vendor specifically whether it relied on the four-fifths rule … or whether it relied on a standard such as statistical significance,” suggesting that employers may want to carefully conduct and document due diligence efforts with vendors of such tools. (For a simple numerical illustration of both standards, see the sketch following this list.)
  • Employers may be held responsible under Title VII for use of algorithmic decision-making tools, even if the tools are designed or administered by a third party (such as a software vendor). Consistent with its technical assistance guidance under the ADA, the EEOC underscored that employers may have liability even if they neither design nor implement an algorithmic decision-making tool. As noted, the EEOC suggested employers may wish to ask any vendor that develops or administers a tool about the steps that have been taken to assess disparate impact. Importantly, the EEOC stated that even if the vendor is incorrect about its disparate impact assessment, an employer could still be liable under Title VII. The EEOC suggested that employers may want to consider conducting their own assessments.
  • Employers may want to consider ongoing assessments. While the EEOC’s latest guidance focuses principally on adverse impact assessments before an employer implements an algorithmic decision-making tool, the EEOC also encouraged employers to conduct self-analysis on an “ongoing basis” and, if appropriate, make proactive changes based on the results of such assessments. Employers may want to consider whether and how to conduct such assessments and whether they wish to do so under attorney-client privilege.
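To make the arithmetic behind the two standards referenced above concrete, the following is a minimal Python sketch of a selection-rate comparison. The applicant counts, function names, and thresholds are hypothetical illustrations, not part of the EEOC’s guidance, and the normal-approximation z-test shown is only one of several statistical approaches a vendor might use.

```python
from math import erf, sqrt

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def two_proportion_p_value(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in selection rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 48 of 80 Group A applicants and 12 of 40
# Group B applicants pass an algorithmic screen.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30

# Four-fifths rule: a ratio below 0.8 suggests a "substantially"
# different selection rate, i.e., possible adverse impact.
ratio = four_fifths_ratio(rate_a, rate_b)  # 0.50
print(f"Four-fifths ratio: {ratio:.2f} -> flag: {ratio < 0.8}")

# The EEOC cautions that smaller rate differences may still indicate
# adverse impact, so a statistical significance test may also be run.
p = two_proportion_p_value(48, 80, 12, 40)
print(f"Two-proportion z-test p-value: {p:.4f}")
```

Real-world adverse impact analyses are considerably more involved than this sketch; as the guidance suggests, employers may wish to ask vendors which standard they applied rather than rely on any single calculation.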

Next Steps

The EEOC and other federal enforcement agencies have repeatedly made clear that they are scrutinizing the potential discriminatory impact of employers’ growing use of new technologies. We expect further guidance to be forthcoming. In the meantime, employers may want to review their processes to determine the extent to which they or their vendors may be using such tools.

Ogletree Deakins will continue to monitor developments with respect to the use of automated decision-making tools and AI and will provide updates on the Technology, Cybersecurity and Privacy, and Employment Law blogs as additional information becomes available. Important information for employers is also available via the firm’s webinar and podcast programs.
