EEOC Issues New Guidance On ADA-Compliant Use of AI For Employment Decision-Making

Kollman & Saucier
05/13/2022

For the first time, the EEOC has issued guidance on how to comply with the Americans with Disabilities Act (ADA) when using artificial intelligence (AI) for employment decision-making. Overall, the Q&A document reiterates familiar principles of ADA law: (1) provide reasonable accommodations for an applicant/employee with a disability; (2) don’t use testing tools that intentionally or unintentionally screen out disabled individuals; and (3) don’t seek out information about an individual’s disability status or physical or mental impairments.

Because these concepts are not always easy to apply, even without the added complication of AI, the guidance is helpful in that it provides examples of potentially unlawful conduct and describes best practices for both employers and employees. The following is a summary of the new EEOC guidance, with references to the applicable Q&A sections:

  1. Does the ADA require an employer to provide reasonable accommodations when using AI to assess applicants and employees? (See Qs 4-7.) 

If an applicant or employee informs an employer that a medical condition may interfere with their ability to take a test or may make the results less accurate, the employer may request supporting documentation to confirm the disability (if it is not obvious or already known) and must provide a reasonable accommodation, absent undue hardship. Examples of reasonable accommodations include extended test time; an alternative testing means (e.g., oral instead of typed); and an alternative testing format (e.g., one compatible with accessible technology).

  2. When is an individual unlawfully “screened out” because of a disability? (See Qs 8-12.)

Certain testing tools may “screen out” applicants, meaning “a disability prevents a job applicant or employee from meeting – or lowers their performance on – a selection criterion, and the applicant or employee loses a job as a result.” If the applicant can perform the essential functions of the job with a reasonable accommodation, the “screen out” is unlawful.

To reduce the risk of unlawful screen out, the EEOC recommends that an employer take steps to inquire about the development of its AI tools. For example, an employer might ask a vendor whether it made the tool’s interface accessible to as many individuals as possible, or whether it attempted to determine whether use of the algorithm disadvantages individuals with disabilities. An employer should also, on its own, clearly indicate that reasonable accommodations are available; provide clear instructions for making requests; and explain in advance as much information about the AI tool as possible, including the skills measured and the testing methods used.

  3. How might AI tools violate ADA restrictions on disability-related inquiries and medical examinations? (See Q13.)

If administered prior to a conditional offer of employment, AI tools that either (1) ask applicants/employees questions likely to elicit information about a disability, or (2) seek out information about physical or mental impairments or health, may violate the ADA.

The EEOC cautions that not all AI tools asking about health-related information constitute unlawful disability-related inquiries or medical examinations. For example, a personality test could permissibly ask whether an individual is described by others as “generally optimistic,” even if the question might somehow be related to certain mental health diagnoses. It is possible, however, that the same question could constitute unlawful screen out depending upon the circumstances. This is why upfront analysis is so important.

See Q14 for more information about best practices related to AI in the workplace.
