SAVANNAH, Ga. — HR departments must prepare for the integration of artificial intelligence into most jobs, including their own, according to speakers at last week’s Society for Human Resource Management Inclusion conference.
At the same time, presenters cautioned that automated decision-making tools, machine learning algorithms, chatbots and similar technologies pose numerous risks to employers’ progress on diversity, equity and inclusion goals, such as addressing institutional bias.
In a late afternoon session that served as an AI crash course for HR, Alex Alonso, SHRM’s chief knowledge officer, eased attendees into the subject, explaining AI’s capabilities and directly addressing the fear and unease it provokes for some.
“Future mongering,” a reflexively negative reaction, occurs all too often with the AI narrative, according to Alonso. “We are basically sitting there and reacting to what is the worst possible thing that can happen when it relates to artificial intelligence and the optimization of humans,” he said.
While it is clear that AI will replace some jobs (some reports estimate it has already led to thousands of job cuts this year), it is also true that it will create others and make existing roles, including those in HR, more strategic in nature, Alonso said.
He also acknowledged the potential for bias in AI, including biases introduced by the developers of AI tools or the organizations that deploy them. “People wonder, is the bias of the developer introduced into the algorithm? It could be. But is that the same thing as the biases held by the client?” Alonso asked. “There are a variety of biases that exist. It’s how we manage them, and most of these tools are designed to be a decision aid, not the decider.”
A multitude of risks
Similarly, another speaker called attention to the limitations of OpenAI’s ChatGPT, the generative AI platform that took the world by storm in 2022.
In its present state, ChatGPT can provide inaccurate information and even “hallucinate,” or fabricate, information, said Carol Kiburz, a member of SHRM’s Speakers Bureau and an HR industry veteran. The latter phenomenon produced an infamous legal case study earlier this year, when a ChatGPT-generated legal brief cited numerous fake cases.
Kiburz also noted that ChatGPT is trained largely on data scraped, or copied, from the web, and it draws on that material to generate its output. Because much of that content was written by humans, HR professionals should anticipate that the tool’s output will reflect human biases, she said.
Beyond DEI concerns, Kiburz noted that HR teams will need to account for the possibility that bad actors could use AI to create malicious or offensive content, generate fraudulent information or concoct scams designed to harm organizations.
Then there’s the business model on which companies like OpenAI run. If employers’ operations become dependent on generative AI platforms, the platforms’ operators could significantly raise the price of continued access to the software. It’s also not difficult to imagine scenarios in which such platforms are compromised by cyberattacks that, in turn, disrupt an employer’s operations, Kiburz said.
Despite these limitations, Kiburz maintained that ChatGPT has its HR applications. She walked attendees step by step through the process of prompting ChatGPT to draft documents such as job postings, offer letters and communications to employees from executive leadership. With just a few prompt elements (task, context, role, format, tone and example), employers can generate these documents in seconds.
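For HR teams that want to script this workflow rather than work in the chat interface, the same six-element prompt structure can be sent through OpenAI’s API. The Python sketch below is a minimal illustration only; the model name, prompt wording and job details are hypothetical assumptions, not material from Kiburz’s session.

    # Minimal sketch of the prompt structure described above, sent via the
    # OpenAI Python SDK. All specifics (model, role, job details) are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    prompt = """
    Role: You are an HR communications specialist.
    Task: Write a job posting for a senior payroll analyst.
    Context: A mid-sized logistics company; hybrid schedule; reports to the payroll manager.
    Format: A headline, a two-sentence summary, then bulleted responsibilities and qualifications.
    Tone: Professional and welcoming.
    Example: Follow the structure of our existing posting for a benefits coordinator.
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Swapping any single element, such as changing the task to “draft an offer letter” or the tone to “formal,” reshapes the output accordingly, which is what makes the structure reusable across document types.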
Transparency is a top AI compliance consideration
U.S. regulators and lawmakers are increasingly signaling their intent to crack down on bias in AI, and few federal, state or local agencies have been as vocal on this front as the U.S. Equal Employment Opportunity Commission.
AI and machine learning featured near the top of EEOC’s recently released Strategic Enforcement Plan for fiscal years 2024 to 2028, and the agency said it would specifically scrutinize software used to target job advertisements as well as tools that make or assist in hiring decisions.
In April, EEOC Chair Charlotte Burrows joined other regulators in issuing a joint statement on AI, stating that one of EEOC’s goals is to “ensure AI does not become a high-tech pathway to discrimination.” Burrows isn’t the only EEOC official to speak out on the subject, either. Commissioner Keith Sonderling co-published a 2022 article in the University of Miami Law Review that partly discussed the “new perils for employment discrimination” posed by AI.
Regulators have emphasized the need to notify job candidates about the use of AI during the hiring process, said Kelly Dobbs Bunting, shareholder at Greenberg Traurig, at the SHRM conference. Dobbs Bunting noted that several state and local governments have passed, or could soon pass, laws on AI in hiring; mandates in Illinois, Maryland and New York City each require some level of transparency from employers that use such tools.
“You have to tell these folks that you’re using AI,” Dobbs Bunting said. “You have to get their consent — and it’s probably best to get it in writing — that they understand that AI is being used.”
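New York City’s mandate (Local Law 144) goes a step further than notice alone, requiring independent bias audits of automated employment decision tools. At the heart of those audits is a simple “impact ratio”: each group’s selection rate divided by the selection rate of the most-selected group. The Python sketch below illustrates the arithmetic with entirely made-up numbers.

    # Illustration of the impact-ratio arithmetic used in bias audits.
    # Groups and counts below are hypothetical.
    selections = {
        # group: (candidates the tool screened in, total candidates assessed)
        "Group A": (50, 200),
        "Group B": (30, 180),
        "Group C": (12, 120),
    }

    rates = {g: screened / total for g, (screened, total) in selections.items()}
    top_rate = max(rates.values())  # selection rate of the most-selected group

    for group, rate in rates.items():
        print(f"{group}: selection rate {rate:.2%}, impact ratio {rate / top_rate:.2f}")

A ratio well below 1.0 for any group is the kind of disparity an audit, and a regulator, would flag.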
Just one day prior to SHRM Inclusion’s AI sessions, President Joe Biden issued an executive order that, in part, cautioned against “irresponsible uses of AI” that could create new or deepen existing discrimination, bias and other abuses in areas such as employment.
Dobbs Bunting said two key takeaways from the order are that it calls on stakeholders to develop principles and best practices for minimizing AI’s harms and maximizing its benefits for workers, and that it directs the production of a report on AI’s potential labor-market impacts.
In a review of relevant case law, Dobbs Bunting highlighted specific DEI concerns AI could create. Employees in various cases have alleged, for instance, that automated recruiting systems weeded out candidates of certain age groups or racial backgrounds. In one lawsuit she cited, former HR employees of IBM alleged the company fired them because of their age and planned to replace them with AI. “I’m just going to let that sink in: human resources,” she said. “Very scary.”
Anxiety about AI’s impact on diversity is nothing new, however. Organizations including the American Civil Liberties Union cite research showing that the technology may perpetuate existing discrimination and bias in areas ranging from housing to criminal law to finance. AI tools used to screen job seekers “pose enormous risks for discrimination against people with disabilities and other protected groups,” the ACLU said in a 2021 blog post.
DEI professionals will need to think carefully about how — not if — their organizations implement AI, according to Kiburz. She made clear that she does not believe HR professionals will be able to ignore AI’s momentum for much longer.
“This train has left the station,” she said. “My fear is that if you have a complete head in the sand, you’re going to be buried later, whether personally or professionally.”