Dive Brief:
- Financial software company Intuit allegedly used AI-backed hiring assessment software provided by third-party vendor HireVue that discriminated against deaf and non-White individuals, the ACLU of Colorado asserted in a March 19 complaint to the Colorado Civil Rights Division and the U.S. Equal Employment Opportunity Commission.
- The tool at issue, an AI-backed video interview platform, performs worse when evaluating non-White and deaf or hard of hearing speakers, according to the complaint. A deaf and Indigenous Intuit customer service employee was required to use the platform when she applied for a promotion, the complaint claimed.
- The employee allegedly requested human-generated captioning as an accommodation so she could access interview instructions and questions. Intuit allegedly denied the request and, after the interview, rejected her for the promotion due to her communication style, according to the complaint. The ACLU alleged that Intuit’s and HireVue’s actions violated the Americans with Disabilities Act, Title VII of the Civil Rights Act of 1964 and the Colorado Anti-Discrimination Act.
Dive Insight:
In an email statement to HR Dive, HireVue CEO Jeremy Friedman said the complaint “is entirely without merit and is based on an inaccurate assumption about the technology used in the interview. Intuit did not use a HireVue AI-based assessment.”
Friedman added that, “although HireVue’s AI was not used in this instance, our industrial-organizational psychologists and data scientists are continuously engaged in research, including sensitive and important work around race, ability, and other protected statuses.”
A spokesperson for Intuit told HR Dive in an email that, “The allegations in the complaint are entirely without merit. We provide reasonable accommodations to all candidates.”
HR managers are familiar with the advantages AI-backed hiring tools can provide, including automating tasks like candidate screening and predicting a candidate’s success, Stradley Ronon partner Melanie Ronen noted in an October 2024 op-ed for HR Dive.
But HR professionals should also be aware of, and work to prevent, algorithmic bias — systematic and repeatable errors in AI-assisted processes that can lead to unintentional discrimination, Ronen stressed.
The AI-automated video interview system allegedly used in the Intuit case relies on automated speech recognition systems that may fail to accurately recognize and assess the speech of deaf applicants or the English dialects of Indigenous applicants, the ACLU said.
Here, the system allegedly failed to accurately reflect the employee’s knowledge, skills and abilities and screened her out, or tended to screen her out, for promotion because of her race and/or disability, the complaint asserted.
Employers looking to the EEOC’s website for guidance won’t find it. The agency rescinded its documents on AI and workplace discrimination following President Donald Trump’s executive order scrubbing his predecessor’s mandate to protect civil rights in AI use and his order rolling back government oversight of AI.
However, federal anti-discrimination laws still apply, the Husch Blackwell law firm emphasized in a February blog.
In particular, employers remain liable for disparate impact discrimination under Title VII, which can happen when AI disproportionately excludes protected groups, and for disability discrimination under the ADA due to AI systems screening out candidates because of disability-related characteristics, the blog explained.
Additionally, “employers can still be held responsible for AI-related discrimination, even if the tool was developed and implemented by a third-party vendor,” the law firm said.
Employers should also keep up to date with state and local laws, including those in Colorado and Illinois, that are set to take effect as the federal government pulls back, Husch Blackwell noted.
New York City’s law, which took effect in 2023, requires employers to audit automated decision tools for hiring or promotion and notify employees and candidates before using them.
To help prevent AI-related unlawful discrimination, HR leaders can take proactive steps, such as performing adverse impact assessments to confirm AI is not operating to favor or exclude groups, Ronen suggested. She also recommended reviewing contracts with vendors to make sure they keep up with the latest AI-related standards.
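The adverse impact assessment Ronen recommends is often approximated with the EEOC's longstanding "four-fifths rule" heuristic: a group's selection rate below 80% of the highest group's rate is a common red flag for disparate impact. A minimal sketch of that check, using hypothetical group names and selection counts (none of the data below comes from the complaint):

```python
# Illustrative adverse impact check using the four-fifths rule heuristic.
# All group names and numbers here are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Compare each group's selection rate to the highest group's rate.

    Returns True for any group whose ratio falls below 0.8, the
    common threshold suggesting potential adverse impact.
    """
    highest = max(rates.values())
    return {group: (rate / highest) < 0.8 for group, rate in rates.items()}

# Hypothetical promotion outcomes from an AI-screened candidate pool
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}

flags = four_fifths_check(rates)
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
print(flags)
```

A flag from a check like this is not itself a legal finding; it signals that the tool's outcomes warrant closer review, ideally with counsel and the vendor.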