Tory Summey and Jeremy Locklear are employment attorneys at Parker Poe in Charlotte and Raleigh, North Carolina, respectively. They can be reached at [email protected] and [email protected]. Shivani Motamarri also contributed to this article as part of her summer clerkship at Parker Poe.
Artificial intelligence allows employers to automate a wide range of human resources tasks, including converting resumes into usable data, conducting screening interviews and even suggesting hiring decisions. The promise of HR AI tools is compelling: better, faster staffing decisions at much lower cost.
For now, however, that promise comes into conflict with a number of laws and regulations.
How does HR use AI today?
Cosmetics giant L’Oréal has used an AI-powered chatbot to interact with potential candidates during the initial stages of the interview process, answer questions and screen candidates for availability and visa requirements. Hilton Hotels and Resorts reduced its average time to hire from 43 days to five with an AI-powered tool featuring on-demand digital interviews, which lets the hotel chain screen multiple candidates at once without a recruiter.
Other HR teams have uncovered AI flaws. Amazon’s experimental hiring tool used AI to score job candidates and make hiring decisions. The company soon noticed that the system was not rating candidates for tech roles in a gender-neutral way. Ultimately, Amazon decided to use the tool only for recommendations rather than for final decisions.
One company says its recruiting tool, reportedly used by 10,000 teams across the globe, automatically “enriches” a candidate’s profile by scanning 20-plus social media platforms for data. While searching social media has long been part of recruiting, an AI tool that comprehensively harvests personal data and uses it to make hiring recommendations invites new complications. What if the tool penalized a candidate who wrote something critical about a previous employer on social media, for example?
How does existing law account for AI?
There are no federal regulations in the U.S. that expressly govern the use of AI in the workplace (yet), but the U.S. Equal Employment Opportunity Commission has published guidance reminding employers that existing anti-discrimination laws, including Title VII and the Americans with Disabilities Act, apply to AI.
Most importantly, the EEOC has made clear that handing a decision to AI does not shield a company from liability for discrimination. Rather, the same rules that apply to hiring, promotion, and firing decisions made by humans apply to those made with the assistance of AI.
Companies also are not off the hook if they use tools created by third parties. The EEOC has opined that companies remain responsible for decisions made using a vendor’s tools; the onus is on the company to ensure that the AI tools it uses operate in a nondiscriminatory way.
Meanwhile, many states and municipalities are stepping in to regulate the use of AI tools in the workplace. For example, New York City enforces Local Law 144, which regulates the use of “automated employment decision tools.” The law requires employers to take steps before using such tools, including conducting an independent bias audit and notifying candidates that the tool will be used. States like Illinois and Maryland have enacted legislation regarding the use of AI to assess video interviews and the use of facial recognition technology. Several other AI-related bills are on the horizon.
Employers with workers outside the U.S. may be subject to other requirements. The European Parliament has approved a draft law, the Artificial Intelligence Act, that would broadly regulate the use of AI across industries and social activities under a risk-based approach. Importantly, the use of AI systems in recruiting and performance evaluation would be considered “high risk” under the AI Act’s sliding scale, subjecting employers to heavy compliance requirements.
The threat of legal action relating to AI will only increase from here. On Aug. 9, the EEOC settled its first lawsuit alleging AI-based age discrimination, brought against iTutorGroup, an online English-language tutoring company. The company’s software was alleged to have automatically rejected female applicants older than 55 and male applicants older than 60. Specifically, one plaintiff submitted an application using her real birth date and was immediately rejected, but she was offered an interview after resubmitting the same application with a more recent birth date.
Not only did the settlement result in a hefty monetary payout, but the company also agreed to adopt EEOC-approved anti-discrimination policies, conduct training, and reconsider all applicants who were purportedly rejected because of their age. While this settlement is the first of its kind, the EEOC’s draft enforcement plan signals that the agency will focus on the use of AI from the recruitment stage through performance management.
Similar litigation has been filed against Workday, the prominent human resources and financial management platform. A plaintiff seeking to represent a class of similarly situated individuals alleged that he applied for 80 to 100 positions with companies that use Workday and was denied employment each time. The plaintiff claimed he was not hired due to “systemic discrimination” because Workday’s AI allegedly has a disparate impact on applicants based on race, age and disability.
4 steps to avoid an AI headache
If AI is not thoughtfully implemented, it can create new challenges in the form of biased decision-making and possible litigation. However, by taking the following practical steps, you can reduce those risks.
- Take the time to understand not only what a tool does but how it achieves its output. A resume scanner may present you with the “most qualified” candidates for a position, but how did it arrive at those results? What data does it collect from each resume? Which fields are prioritized? Where did it get its definition of “most qualified”? Is there any human component to the ultimate output?
- Do not rely on claims that a tool is “bias free.” Many vendors advertise that their AI solutions will deliver results unclouded by discrimination. This sounds great but is worth very little if you are ever faced with a discrimination lawsuit. If a tool delivers biased results, your organization can be held responsible even if you believed that the results were legitimate.
- Monitor and validate the results generated by the tool. Someone should be responsible for regularly reviewing the tool’s output to assess whether it has delivered skewed results. If a tool recommends male candidates for interviews at a much higher rate than female candidates, you want to flag that issue early, understand the reason for those results, and make adjustments. When validating whether results are biased, the EEOC’s Uniform Guidelines on Employee Selection Procedures remain the gold standard; the sketch after this list shows what that kind of check can look like in practice.
- Make sure a human is involved in every decision. If a human is part of the decision-making process, you will always have a witness who can explain the reasoning. Relying completely on a tool that later turns out to be biased leaves little room for defense.
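A concrete way to operationalize the monitoring step above is the four-fifths rule from the Uniform Guidelines: a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. (New York City’s bias audits rely on a similar impact-ratio calculation.) The Python sketch below, with hypothetical group labels and counts, illustrates what a periodic check of a screening tool’s output might look like; it is an illustration, not a compliance tool.

```python
# A minimal sketch of an adverse impact check under the four-fifths
# rule in the EEOC's Uniform Guidelines (29 C.F.R. 1607.4(D)).
# Group labels and counts are hypothetical illustrations only.

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio versus the most-selected group.

    `groups` maps a label to (selected, applicants). Under the
    four-fifths rule, an impact ratio below 0.80 is generally
    regarded as evidence of adverse impact and warrants review.
    """
    rates = {g: sel / apps for g, (sel, apps) in groups.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical quarterly output from a resume-screening tool:
ratios = four_fifths_check({
    "men": (48, 120),    # 40% advanced to interviews
    "women": (30, 110),  # about 27% advanced to interviews
})
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

In this hypothetical, women’s impact ratio is roughly 0.68, well under the 0.80 threshold, so the tool’s output would warrant exactly the kind of early review and adjustment described above. Real audits also account for sample sizes and statistical significance, which this sketch omits.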
These four steps are a strong starting point for HR teams to help their companies identify useful AI tools, ensure that they deliver legitimate results and defend against any legal challenges that could come their way.