New York City’s law restricting the use of artificial intelligence tools in the hiring process goes into effect at the beginning of next year. While the law is seen as a bellwether for protecting job applicants against bias, little is known to date about how employers or vendors need to comply, and that has raised concerns about whether the law is the right path forward for addressing bias in hiring algorithms.
The law comes with two main requirements: Employers must audit any automated decision tools used in hiring or promoting employees before using them, and they must notify job candidates or employees at least 10 business days before such a tool is used. The penalty is $500 for the first violation and $1,500 for each additional violation.
While Illinois has regulated the use of AI analysis of video interviews since 2020, New York City’s law is the first in the country to apply to the hiring process as a whole. It aims to address concerns from the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice that “blind reliance” on AI tools in the hiring process could cause companies to violate the Americans with Disabilities Act.
“New York City is looking holistically at how the practice of hiring has changed with automated decision systems,” Julia Stoyanovich, Ph.D., a professor of computer science at New York University and member of the city’s automated decision systems task force, told HR Dive. “This is about the context in which we are making sure that people have equitable access to economic opportunity. What if they can’t get a job, but they don’t know the reason why?”
Looking beyond the ‘model group’
AI recruiting tools are designed to support HR teams throughout the hiring process, from placing ads on job boards to filtering resumes from applicants to determining the right compensation package to offer. The goal, of course, is to help companies find someone with the right background and skills for the job.
Unfortunately, each step of this process can be prone to bias. That’s especially true if an employer’s “model group” of potential job candidates is judged against an existing employee roster. Notably, Amazon had to scrap a recruiting tool — trained to assess applicants based on resumes submitted over the course of a decade — because the algorithm taught itself to penalize resumes that included the term “women’s.”
“You’re trying to identify someone who you predict will succeed. You’re using the past as a prologue to the present,” said David J. Walton, a partner with law firm Fisher & Phillips LLP. “When you look back and use the data, if the model group is mostly white and male and under 40, by definition that’s what the algorithm will look for. How do you rework the model group so the output isn’t biased?”
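A toy sketch makes that dynamic concrete. In the Python example below, the data and the keyword-overlap scorer are invented stand-ins for a trained model: applicants are ranked by how closely they resemble past hires, and because the historical roster skews toward one group, so does the output, even though the scorer never sees the demographic attribute.

```python
# Hypothetical illustration: ranking applicants by similarity to past hires
# reproduces whatever skew exists in the historical "model group".
# All data and the scoring rule are invented for demonstration.

from collections import Counter

# Historical hires: resumes reduced to keyword sets (invented data).
past_hires = [
    {"keywords": {"rugby", "java", "sales"}},
    {"keywords": {"golf", "java", "sales"}},
    {"keywords": {"rugby", "python", "finance"}},
    {"keywords": {"golf", "python", "finance"}},
]

# New applicants, with a demographic attribute tracked only for the audit.
applicants = [
    {"name": "A", "group": "men",   "keywords": {"rugby", "java", "sales"}},
    {"name": "B", "group": "men",   "keywords": {"golf", "python", "finance"}},
    {"name": "C", "group": "women", "keywords": {"netball", "java", "sales"}},
    {"name": "D", "group": "women", "keywords": {"softball", "python", "finance"}},
]

def similarity_to_past(applicant):
    """Average keyword overlap with past hires: a crude stand-in for any model
    trained to predict 'looks like the people we hired before'."""
    overlaps = [len(applicant["keywords"] & hire["keywords"]) for hire in past_hires]
    return sum(overlaps) / len(overlaps)

# Select the top half of applicants by similarity score.
ranked = sorted(applicants, key=similarity_to_past, reverse=True)
selected = ranked[: len(ranked) // 2]

# Selection rate by group: the skew in past hires shows up in the output,
# even though the scorer never looks at the group attribute directly.
totals = Counter(a["group"] for a in applicants)
picked = Counter(a["group"] for a in selected)
for group in totals:
    print(group, picked[group] / totals[group])
```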
AI tools used to assess candidates in interviews or tests may also pose problems. Measuring speech patterns in a video interview may screen out candidates with a speech impediment, while tracking keyboard inputs may eliminate candidates with arthritis or other conditions that limit dexterity.
“Many workers have disabilities that would put them at a disadvantage the way these tools evaluate them,” said Matt Scherer, senior policy counsel for worker privacy at the Center for Democracy and Technology. “A lot of these tools operate by making assumptions about people.”
Walton said these tools are akin to the “chin-up test” often given to candidates for firefighting roles: “It doesn’t discriminate on its face, but it could have a disparate impact on a protected category” of applicants as defined by the ADA.
There’s also a category of AI tools that aim to help identify candidates with the right personality for the job. These tools are also problematic, said Stoyanovich, who recently published an audit of two commonly used tools.
The issue is partly technical: the tools generated different scores for the same resume depending on whether it was submitted as raw text or as a PDF file. It is also philosophical. “What is a ‘team player?’” she said. “AI isn’t magic. If you don’t tell it what to look for, and you don’t validate it using the scientific method, then the predictions are no better than a random guess.”
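One way to check for that kind of technical inconsistency is simply to score the same resume through each ingestion path and compare. The sketch below is hypothetical: `score_resume` stands in for whatever scoring interface a vendor exposes, and the tolerance is arbitrary.

```python
# Hypothetical consistency check in the spirit of the audit Stoyanovich
# describes: the same resume, submitted through two ingestion paths, should
# receive (near-)identical scores. `score_resume` is a placeholder for a
# vendor's scoring function; it is not a real API.

def check_format_consistency(score_resume, resume_txt_path, resume_pdf_path,
                             tolerance=0.01):
    """Return True if raw-text and PDF submissions score within `tolerance`."""
    score_txt = score_resume(resume_txt_path)
    score_pdf = score_resume(resume_pdf_path)
    gap = abs(score_txt - score_pdf)
    print(f"text={score_txt:.3f}  pdf={score_pdf:.3f}  gap={gap:.3f}")
    return gap <= tolerance

# Example with a dummy scorer; a real audit would call the vendor's tool on
# both files and repeat the comparison across many resumes.
dummy = lambda path: 0.80 if path.endswith(".pdf") else 0.62
print(check_format_consistency(dummy, "resume.txt", "resume.pdf"))  # False
```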
Legislation — or stronger regulation?
New York City’s law is part of a larger trend at the state and federal level. Similar provisions have been included in the federal American Data Privacy and Protection Act, introduced earlier this year, while the Algorithmic Accountability Act would require “impact assessments” of automated decision systems with various use cases, including employment. In addition, California is aiming to add liability related to the use of AI recruiting tools to the state’s anti-discrimination laws.
However, there’s some concern that legislation isn’t the right way to address AI in hiring. “The New York City law doesn’t impose anything new,” according to Scherer. “The disclosure requirement isn’t very meaningful, and the audit requirement is only a narrow subset of what federal law already requires.”
Given the limited guidance issued by New York City officials leading up to the law taking effect on Jan. 1, 2023, it also remains unclear what a technology audit looks like, or how it should be done. Walton said employers will likely need to partner with someone who has data and business analytics expertise.
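In the absence of official guidance, one plausible starting point for such an audit is the adverse-impact analysis long used in employment law: compare selection rates across demographic categories and flag any group whose rate falls below four-fifths of the highest, the threshold in the EEOC’s longstanding four-fifths guideline. The sketch below uses invented data; a real audit would cover every protected category and far larger samples.

```python
# A sketch of one possible bias-audit calculation, not the city's prescribed
# method: compute selection rates per demographic category and flag any group
# whose rate falls below 80% of the highest rate (the EEOC four-fifths rule).
# The input format and sample data are invented for illustration.

from collections import defaultdict

def impact_ratios(decisions):
    """`decisions` is a list of (group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    if best == 0:  # degenerate case: nobody was selected
        return {g: (0.0, 0.0) for g in rates}
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Invented example: ratios below 0.8 would warrant a closer look.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 20 + [("group_b", False)] * 80)
for group, (rate, ratio) in impact_ratios(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} {flag}")
```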
At a higher level, Stoyanovich said AI recruiting tools would benefit from a standards-based auditing process. Standards should be discussed publicly, she said, and certification should be done by an independent body — whether it’s a non-profit organization, a government agency or another entity that doesn’t stand to profit from it. Given these needs, Scherer said he believes regulatory action is preferable to legislation.
The challenge for those working for stronger regulation of such tools is getting policymakers to drive the conversation.
“The tools are already out there, and the policy isn’t keeping pace with technological change,” Scherer said. “We’re working to make sure policymakers are aware that there needs to be real requirements for audits on these tools, and there needs to be meaningful disclosure and accountability when the tools result in discrimination. We have a long way to go.”