Artificial intelligence tools, particularly large language models, appear to show significant racial, gender and intersectional biases when ranking job candidates' resumes, favoring or penalizing applicants based on the racial and gender associations of their names, according to new research presented recently at the Association for the Advancement of Artificial Intelligence/Association for Computing Machinery Conference on AI, Ethics and Society.
Across 550 real-world resumes, the AI tools favored White-associated names 85% of the time and female-associated names only 11% of the time. The tools never favored Black male-associated names over White male-associated names, the researchers found.
“The use of AI tools for hiring procedures is already widespread, and it’s proliferating faster than we can regulate it,” Kyra Wilson, the lead author and a doctoral student at the University of Washington, said in a statement.
“Currently, outside of a New York City law, there’s no regulatory, independent audit of these systems, so we don’t know if they’re biased and discriminating based on protected characteristics such as race and gender,” Wilson said. “And because a lot of these systems are proprietary, we are limited to analyzing how they work by approximating real-world systems.”
Wilson and colleagues used 120 first names typically associated with White and Black men and women and varied them across the resumes. They used LLMs from three companies — Mistral AI, Salesforce and Contextual AI — to rank the resumes for more than 500 real-world job listings across nine occupations, adding up to more than 3 million comparisons between resumes and job descriptions.
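The study's core name-swap audit design is simple enough to sketch. The Python below is a minimal illustration, not the researchers' code: the name pools are placeholder examples, and the model call is stubbed out with a random choice so the script runs end to end.

```python
import random

# Placeholder name pools; the study used 120 names statistically
# associated with White and Black men and women. These examples are
# illustrative, not the researchers' actual list.
NAME_POOLS = {
    ("White", "male"): ["Todd", "Brad"],
    ("White", "female"): ["Claire", "Heather"],
    ("Black", "male"): ["Darnell", "Tyrone"],
    ("Black", "female"): ["Latoya", "Keisha"],
}

def rank_pair(resume_a: str, resume_b: str, job: str) -> str:
    """Stub for the model under test: return 'A' or 'B'.

    A real audit would prompt the LLM with both resumes and the job
    listing and parse which one it ranks higher; here the choice is
    random so the sketch is runnable.
    """
    return random.choice(["A", "B"])

def audit(resume: str, job: str, group_a, group_b, trials: int = 100) -> dict:
    """Attach different names to otherwise identical resumes and count
    how often each group's version is preferred."""
    wins = {group_a: 0, group_b: 0}
    for _ in range(trials):
        name_a = random.choice(NAME_POOLS[group_a])
        name_b = random.choice(NAME_POOLS[group_b])
        choice = rank_pair(f"{name_a}\n{resume}", f"{name_b}\n{resume}", job)
        wins[group_a if choice == "A" else group_b] += 1
    return wins

if __name__ == "__main__":
    resume = "Registered nurse, 10 years of acute-care experience..."
    job = "Hiring: registered nurse for a hospital ICU..."
    print(audit(resume, job, ("White", "male"), ("Black", "male")))
```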
Overall, the AI tools preferred White-associated names 85% of the time and Black-associated names 9% of the time, as well as male-associated names 52% of the time and female-associated names 11% of the time.
When the researchers applied an intersectional lens, more patterns emerged. The smallest disparity occurred between typically White female names and typically White male names. The AI tools never preferred names typically associated with Black men over those associated with White men. In direct comparisons between Black female and Black male names, however, the tools preferred the female names 67% of the time and the male names 15% of the time.
“We found this really unique harm against Black men that wasn’t necessarily visible from just looking at race or gender in isolation,” Wilson said. “Intersectionality is a protected attribute only in California right now, but looking at multidimensional combinations of identities is incredibly important to ensure the fairness of an AI system. If it’s not fair, we need to document that so it can be improved upon.”
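A toy tabulation makes Wilson's point about marginal versus intersectional views concrete. The counts below are invented for illustration, not the study's data: a race-only breakdown shows Black-associated names winning some comparisons, while the intersectional cut reveals that Black male names win none.

```python
from collections import Counter

# Invented head-to-head outcomes: keys are (winner, loser) pairs of
# (race, gender) groups; values are comparison counts. Not real data.
wins = Counter({
    (("White", "male"), ("Black", "male")): 90,
    (("Black", "male"), ("White", "male")): 0,
    (("White", "female"), ("Black", "female")): 60,
    (("Black", "female"), ("White", "female")): 30,
})

def preference_rate(matches) -> float:
    """Win rate over comparisons involving at least one matching group."""
    relevant = {pair: n for pair, n in wins.items()
                if matches(pair[0]) or matches(pair[1])}
    won = sum(n for (winner, _), n in relevant.items() if matches(winner))
    return won / sum(relevant.values())

# Marginal view: race alone (16.7% with these invented counts).
print(f"Black-associated names: {preference_rate(lambda g: g[0] == 'Black'):.1%}")
# Intersectional view: Black men specifically (0%).
print(f"Black male names: {preference_rate(lambda g: g == ('Black', 'male')):.1%}")
```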
Recruiters are investing in and using AI tools in numerous ways, including task automation, personalized messaging and interview scheduling, according to a Gartner analyst. AI tools can also help with candidate matching and ranking, but it’s still a recruiter’s responsibility to review AI summaries and determine next steps for each candidate, she wrote.
Notably, the Department of Labor has issued an inclusive hiring framework focused on AI tools. The framework includes guidance on AI implementation, hiring manager duties regarding diversity and inclusion, accessibility of tools, risk management with vendors and legal compliance.
Looking ahead, HR leaders can take proactive steps to avoid algorithmic discrimination when using AI tools, according to a partner at Stradley Ronon. For instance, HR pros can establish organizational standards and processes, conduct adverse impact assessments, review vendor contracts and remain informed about legislative updates.
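One widely used form of adverse impact assessment is the four-fifths rule from the EEOC's Uniform Guidelines: if a group's selection rate falls below 80% of the highest group's rate, the tool's outcomes warrant scrutiny. A minimal sketch, with invented screening numbers:

```python
def adverse_impact_ratios(selected: dict, applied: dict) -> dict:
    """Selection rate per group and its ratio to the highest rate.

    Under the four-fifths guideline, a ratio below 0.8 is a common
    flag for adverse impact.
    """
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Hypothetical pass-through counts from an AI resume-screening step.
selected = {"group_a": 40, "group_b": 18}
applied = {"group_a": 100, "group_b": 100}

for group, (rate, ratio) in adverse_impact_ratios(selected, applied).items():
    flag = "  <- below four-fifths threshold" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```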