[US] EEOC and DOJ guidance on the use of AI tools for employment decision making

20 May 2022

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) issued long-awaited guidance (the “Guidance”) on the use of artificial intelligence (“AI”) tools to automate employment decision-making, such as software that reviews resumes and “chatbots” that interview and screen job applicants. The Guidance examines how employers can seek to prevent AI-related disability discrimination. Law and the Workplace takes a closer look.

The EEOC Guidance identifies a number of ways in which the employment-related use of AI can, even unintentionally, violate the Americans with Disabilities Act (“ADA”), including if:

  • (i) “[t]he employer does not provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately by” the AI;
  • (ii) “[t]he employer relies on an algorithmic decision-making tool that intentionally or unintentionally ‘screens out’ an individual with a disability, even though that individual is able to do the job with a reasonable accommodation”; or
  • (iii) “[t]he employer adopts an [AI] tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.”

The Guidance further states that “[i]n many cases” employers are liable under the ADA for use of AI even if the tools are designed and administered by a separate vendor, noting that “employers may be held responsible for the actions of their agents . . . if the employer has given them authority to act on [its] behalf.”

The Guidance also identifies various best practices for employers, including:

  • Announcing generally that employees and applicants subject to an AI tool may request reasonable accommodations, and providing instructions for how to do so.
  • Providing information about the AI tool, how it works, and what it is used for to the employees and applicants subjected to it. For example, an employer that uses keystroke-monitoring software may choose to disclose this software as part of new employees’ onboarding and explain that it is intended to measure employee productivity.
  • If the software was developed by a third party, asking the vendor whether: (i) the AI software was developed to accommodate people with disabilities, and if so, how; (ii) there are alternative formats available for disabled individuals; and (iii) the AI software asks questions likely to elicit medical or disability-related information.
  • If an employer is developing its own software, engaging experts to analyze the algorithm for potential biases at different steps of the development process, such as a psychologist if the tool is intended to test cognitive traits.
  • Using only AI tools that directly measure traits that are actually necessary for performing the job’s duties.
  • Training staff, especially supervisors and managers, to recognize requests for reasonable accommodation and to respond to them promptly and effectively. If the AI tool is administered by a third party on the employer’s behalf, that third party’s staff should likewise be trained to recognize accommodation requests and forward them promptly to the employer.

Finally, also on May 12, the U.S. Department of Justice (“DOJ”) released its own guidance on AI tools’ potential for inadvertent disability discrimination in the employment context. The DOJ guidance is largely in accord with the EEOC Guidance.

Employers using AI tools should carefully audit them to ensure that the technology is not producing discriminatory outcomes. Employers should likewise stay current on new developments from the EEOC and from federal, state, and local legislatures and agencies as the trend toward regulation continues.
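For illustration only, the sketch below shows one rudimentary form such an audit might take: comparing a tool’s selection rates across groups against the “four-fifths” rule drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures. Note that the four-fifths rule originates in Title VII disparate-impact analysis, while the ADA’s “screen out” analysis is individualized, so a rate comparison like this is at most a first-pass screen, not a compliance test. The data, group labels, and function names are hypothetical.

```python
# Hypothetical illustration: a minimal adverse-impact screen for an AI
# hiring tool's outcomes, using the four-fifths (80%) rule from the
# EEOC's Uniform Guidelines as a rough first-pass heuristic.
# Group labels, data, and function names are invented for this example.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, where selected is a bool."""
    applied = Counter(group for group, _ in outcomes)
    advanced = Counter(group for group, selected in outcomes if selected)
    return {g: advanced[g] / applied[g] for g in applied}

def four_fifths_flags(rates):
    """Return groups whose selection rate is below 80% of the highest rate."""
    top = max(rates.values())  # assumes at least one group had selections
    return {g: rate / top for g, rate in rates.items() if rate / top < 0.8}

if __name__ == "__main__":
    # Invented sample data: (group, was_advanced_by_tool)
    sample = ([("A", True)] * 48 + [("A", False)] * 52
              + [("B", True)] * 30 + [("B", False)] * 70)
    rates = selection_rates(sample)
    print("Selection rates by group:", rates)
    print("Flagged under four-fifths rule:", four_fifths_flags(rates))
```

On the invented data above, group B’s selection rate (30%) is only 62.5% of group A’s (48%), so group B would be flagged for closer, individualized review.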


Source: Law and the Workplace

