
While artificial intelligence (AI) and other automated systems offer new opportunities for employers and employees, they also have the potential to discriminate. The federal Equal Employment Opportunity Commission (EEOC) has set up an Artificial Intelligence and Algorithmic Fairness Initiative to ensure that software, including AI, machine learning, algorithms, and other emerging technologies, used in hiring and other employment decisions complies with federal civil rights laws.

As part of that initiative, in 2022 the EEOC sued an online tutoring company, claiming that it used software programmed to reject female applicants over the age of 55 and male applicants over the age of 60, thereby denying employment to more than 200 qualified tutors because of their age. The case was settled in August 2023 when the company agreed to pay $365,000, adopt anti-discrimination policies, conduct anti-discrimination training, and reconsider all of the applicants who were purportedly rejected because of their age. EEOC v. iTutorGroup, Inc., No. 1:22-cv-2565 (E.D.N.Y.).

This is the EEOC's first known settlement involving AI-driven hiring discrimination, but it certainly won't be the last. According to the Society for Human Resource Management, more than half of employers now use some form of AI in recruiting and hiring, and that number will likely increase, as will legislative and regulatory controls. These laws and regulations are intended to prevent AI tools from making discriminatory employment decisions and to hold employers liable for discrimination caused by these tools.

As of July 5, 2023, New York City law requires employers to audit their automated employment decision tools (AEDTs) for bias and publish the results. The law also requires employers and employment agencies to notify employees and job candidates, at least 10 business days before using the tool, that an AEDT will be used and which job qualifications or characteristics it will assess. Alternatively, notice to job seekers can be posted on the employment section of an organization's website, and notice to employees can be included in a written policy. New Jersey and California are considering similar legislation, as is Washington, D.C.
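For readers curious what a bias audit actually measures, audits of this kind typically compare how often a tool selects candidates from different demographic groups. The sketch below is a simplified, hypothetical illustration of an impact-ratio calculation in Python; the group labels, data, and method are invented for illustration and are not the methodology prescribed by the New York City rules.

```python
# Hypothetical sketch of a bias audit: compare selection rates across
# demographic groups and compute impact ratios. All data is invented.
from collections import defaultdict

# Each record: (group, selected) where selected is True if the tool
# advanced the candidate to the next stage.
candidates = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in candidates:
    totals[group] += 1
    if was_selected:
        selected[group] += 1

# Selection rate per group, then each rate divided by the highest rate.
selection_rates = {g: selected[g] / totals[g] for g in totals}
highest_rate = max(selection_rates.values())
impact_ratios = {g: rate / highest_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}")
```

A low impact ratio for a group is the kind of red flag an audit is meant to surface before the tool is used on real applicants.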

What this means to you:

Employers should be transparent with workers about their use of AI and make it easy for employees to voice concerns. Here's an example from recent EEOC guidance:

An employer decides to use an algorithm to evaluate employees' productivity based on each employee's average number of keystrokes per minute. A blind worker who uses voice recognition software instead of a keyboard will be rated poorly and might be fired as a result. But if the employer tells its employees that they will be assessed partly on the basis of keyboard usage, the blind worker would know to ask for an alternative means of measuring productivity, perhaps one that considers the use of voice recognition software rather than keystrokes, as a reasonable accommodation.
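To make the problem concrete, here is a minimal, hypothetical Python sketch of the kind of scoring logic the guidance describes, alongside an alternative metric that does not depend on how text is entered. The thresholds, worker data, and alternative metric are invented for illustration; they are not taken from the EEOC guidance.

```python
# Hypothetical illustration of a keystroke-based productivity metric and
# an output-based alternative. All numbers and cutoffs are invented.

MIN_KEYSTROKES_PER_MINUTE = 40  # assumed cutoff, for illustration only
MIN_WORDS_PER_MINUTE = 10       # assumed cutoff, for illustration only

workers = [
    # (name, avg keystrokes/min, avg words produced/min by any input method)
    ("typist", 55, 14),
    ("voice_recognition_user", 0, 16),  # dictates instead of typing
]

print("Keystroke-based score (penalizes voice recognition users):")
for name, keystrokes_per_min, _ in workers:
    rating = "acceptable" if keystrokes_per_min >= MIN_KEYSTROKES_PER_MINUTE else "poor"
    print(f"  {name}: {rating}")

print("Output-based score (one possible reasonable accommodation):")
for name, _, words_per_min in workers:
    rating = "acceptable" if words_per_min >= MIN_WORDS_PER_MINUTE else "poor"
    print(f"  {name}: {rating}")
```

Under the first metric the worker who dictates is rated "poor" despite producing more output; under the second, both are rated "acceptable", which is the point of asking for an alternative measure.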

Managers and supervisors must be aware of worker rights and be prepared to discuss reasonable accommodations. Our Managing Within the Law courses, newly revised to include expanded coverage of leaves of absence and accommodations, provide employment law training for managers and executives, both in person and online.

To find out more about our 2023-2024 training programs or to book a workshop, please call 800-458-2778 or email us.

Updated 09-11-2023

Information here is correct at the time it is posted. Case decisions cited here may be reversed. Please do not rely on this information without consulting an attorney first.