Dive Brief:
- LinkedIn Recruiter and other tools that rely on an Automated Decision System (ADS), a system that uses algorithms to make decisions without human intervention, should apply contextual transparency, a “nutrition label” of sorts that shows the criteria, or ingredients, used to reach a decision, the authors of a study published March 13 in Nature Machine Intelligence said. LinkedIn could not immediately be reached for comment.
- “There are currently no systematic and scalable approaches for establishing ADS context and creating ADS transparency. This is particularly problematic when ADS are used by business corporations and enterprises, given that they are often hidden from public access and scrutiny, yet can impact the public at scale,” the authors, researchers at New York University, Cornell Tech and New Jersey Institute of Technology, wrote.
- The study found that recruiters don’t blindly trust the ranked results of their Boolean searches (queries built with “and,” “or” and “not” operators) on LinkedIn Recruiter and usually double-check them; a simplified sketch of that kind of query follows below.
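For readers unfamiliar with the query style the study describes, here is a minimal, hypothetical sketch of the kind of Boolean filtering a recruiter’s search performs. The candidate titles and query terms are invented for illustration and do not reflect LinkedIn Recruiter’s actual code or ranking logic.

```python
# Hypothetical sketch, not LinkedIn's actual implementation: a toy Boolean
# filter over candidate titles that mimics an AND / OR / NOT recruiter query.
candidates = [
    "Senior Machine Learning Engineer",
    "Data Analyst",
    "Machine Learning Intern",
    "Software Engineer, Recommender Systems",
]

def matches(title: str) -> bool:
    """Apply the query: ("machine learning" OR "recommender") AND engineer NOT intern."""
    t = title.lower()
    return (("machine learning" in t or "recommender" in t)
            and "engineer" in t
            and "intern" not in t)

# Per the study, recruiters tend to review the ranked matches rather than
# accept them blindly.
print([c for c in candidates if matches(c)])
# ['Senior Machine Learning Engineer', 'Software Engineer, Recommender Systems']
```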
Dive Insight:
The study addresses concerns about bias when using artificial intelligence for recruiting and hiring, an area just starting to be regulated.
The first such legislation, New York City’s Local Law 144, regulates how companies can use automated employment decision tools, notably by requiring a bias audit before a tool can be used. The law went into effect Jan. 1, but enforcement has been delayed until April 15 after the New York City Department of Consumer and Worker Protection received an influx of queries about the new law.
The U.S. Equal Employment Opportunity Commission is focused on ensuring AI technology doesn’t lead to discrimination. Commissioner Andrea Lucas said Title VII of the Civil Rights Act of 1964 covers this new area and should be applied.