Over the last decade, artificial intelligence in areas like hiring, recruiting and workplace surveillance has shifted from a topic of speculation to a tangible reality for many workplaces. Now, those technologies have the attention of the highest office in the land.
On Oct. 4, the White House’s Office of Science and Technology Policy published a “Blueprint for an AI Bill of Rights,” a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that “protections are embedded from the beginning, where marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.”
The blueprint focuses on five areas of protection for U.S. citizens in relation to AI: system safety and effectiveness; protection from algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate. It also follows the May publication of two cautionary documents by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice specifically addressing the use of algorithmic decision-making tools in hiring and other employment actions.
Employment is listed in the blueprint as one of several “sensitive domains” deserving of enhanced data and privacy protections. Those handling sensitive employment information should ensure it is used only for “functions strictly necessary for that domain,” while consent for all non-necessary functions “should be optional.”
Additionally, the blueprint states that continuous surveillance and monitoring systems “should not be used in physical or digital workplaces,” regardless of a person’s employment status. Surveillance is particularly sensitive in the union context; the blueprint notes that federal law “requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.”
A growing presence
The prevalence of employment-focused AI and automation may depend on the size and type of organization studied, though research suggests a sizable portion of employers have adopted the tech.
For example, a February survey by the Society for Human Resource Management found that nearly one-quarter of employers used such tools, including 42% of employers with more than 5,000 employees. Among respondents using AI or automation, 79% said they applied the technology to recruitment and hiring, the most commonly cited application, SHRM said.
Similarly, a 2020 Mercer study found that 79% of employers either were already using algorithms to identify top candidates based on publicly available information or planned to start doing so that year. But AI has applications beyond recruiting and hiring: Mercer found that most respondents also used the tech to handle employee self-service processes, conduct performance management and onboard workers, among other needs.
What could the ‘blueprint’ mean for employers?
Employers should note that the blueprint is not legally binding, does not constitute official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, shareholder at management-side firm Littler Mendelson. Though the document’s principles may be appropriate for developers and operators of AI and automation systems to follow, the blueprint is not prescriptive, he added.
“It helps add to the scholarship and thought leadership in the area, certainly,” Ray said. “But it does not rise to the level of some law or regulation.”
Employers may benefit from a single federal standard for AI technologies, Ray said, particularly given that this is an active legislative area for a handful of jurisdictions. A New York City law restricting the use of AI in hiring will take effect next year. Meanwhile, a similar law has been proposed in Washington, D.C., and California’s Fair Employment and Housing Council has proposed regulations on the use of automated decision systems.
Then there is the international regulatory landscape, which can pose even more challenges, Ray said. Given that complexity, employers might want to see more discussion around a unified federal standard, he added, and the Biden administration’s blueprint may be a way of jump-starting that conversation.
“Let’s not have to jump through 55 sets of hoops,” Ray said of the potential for a federal standard. “Let’s have one set of hoops to jump through.”
The blueprint’s standards around data privacy and other areas may be important for employers to consider, said Julia Stoyanovich, co-founder and director of New York University’s Center for Responsible AI. AI and automation platforms used in hiring often draw on publicly available data that job candidates do not realize is being used to screen them, she noted.
Stoyanovich co-authored an August paper in which a group of NYU researchers detailed their analysis of two personality tests used by automated hiring vendors Humantic AI and Crystal. The analysis found that the platforms exhibited “substantial instability on key facets of measurement” and concluded that “they cannot be considered valid personality assessment instruments.”
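The notion of measurement stability at issue is straightforward to test in principle: a valid instrument fed the same inputs twice should produce essentially the same ranking of candidates. The sketch below illustrates that intuition in Python; the `vendor_score` function is a hypothetical stand-in for a scoring API, and this is not the researchers’ actual methodology, only the basic shape of a test-retest check.

```python
# Minimal sketch of a test-retest stability check for an automated
# personality scorer. `vendor_score` is a hypothetical stand-in for a
# call to a vendor's scoring API.
from scipy.stats import spearmanr

def rank_stability(profiles, vendor_score):
    """Spearman correlation between two scoring runs on identical inputs.

    A stable measurement instrument should rank the same candidates
    essentially the same way both times (correlation near 1.0);
    substantial re-ranking of unchanged inputs signals instability.
    """
    run_1 = [vendor_score(p) for p in profiles]
    run_2 = [vendor_score(p) for p in profiles]  # same inputs, second pass
    rho, _ = spearmanr(run_1, run_2)
    return rho
```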
Even before AI is introduced into the equation, the idea that a personality profile of a candidate could be a predictor of job performance is a controversial one, Stoyanovich said. Laws like New York City’s could help to provide more transparency on how automated hiring platforms work, she added, and could provide HR teams a better idea of whether tools truly serve their intended purposes.
“The fact that we are starting to regulate this space is really good news for employers,” Stoyanovich said. “We know that there are tools that are proliferating that don’t work, and it doesn’t benefit anyone except for the companies that are making money selling these tools.”