Dive Brief:
- Companies that create pre-employment algorithms tend to offer very little information about how their assessments work and how they prevent bias, researchers from Cornell University found.
- The researchers analyzed 19 vendors offering pre-employment screening tools that include questions, video interview analysis and games.
- Vendors that described their algorithms as "fair" did not disclose how they defined fairness or related terms such as bias (one common statistical definition is sketched below). That gap points to how much pre-employment algorithms still need to improve, the researchers concluded. "The real question is not whether algorithms can be made perfect; instead, the relevant comparison is whether they can improve over alternative methods, or in this case, the human status quo," Manish Raghavan, who served as first author of the study, said. "Despite their many flaws, algorithms do have the potential to contribute to a more equitable society, and further work is needed to ensure that we can understand and mitigate the biases they bring."
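For illustration only, and not drawn from the Cornell study: one widely cited definition a vendor could disclose is the adverse impact ratio behind the EEOC's "four-fifths rule," which compares selection rates across demographic groups. The sketch below shows how that ratio might be computed for a hypothetical screening tool; the group names and numbers are invented.

```python
# Minimal sketch, assuming hypothetical screening results.
# The four-fifths rule flags a group whose selection rate falls below
# 80% of the highest group's selection rate.

def adverse_impact_ratio(selected_by_group, applicants_by_group):
    """Return each group's selection rate divided by the highest rate."""
    rates = {
        group: selected_by_group[group] / applicants_by_group[group]
        for group in applicants_by_group
    }
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical numbers: group_b's ratio falls below 0.8, which the
# four-fifths rule treats as possible evidence of adverse impact.
ratios = adverse_impact_ratio(
    selected_by_group={"group_a": 50, "group_b": 20},
    applicants_by_group={"group_a": 100, "group_b": 80},
)
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```

This is only one possible definition of fairness; the researchers' point is that vendors rarely say which, if any, they use.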
Dive Insight:
As recruitment professionals work to keep pace with the demands of hiring, tech tools that cut down on routine tasks take on greater importance. Researchers at Penn State University and Columbia University said they created an artificial intelligence (AI) tool capable of detecting discrimination based on legally protected characteristics such as gender and race in hiring, compensation practices, policing, consumer finance and academic admissions. According to Penn State, the research involved analyzing demographic, pay and employment data for about 50,000 people.
While this appears to mark progress in the pursuit of equity, some tech tools have a history of perpetuating bias and inequality. Amazon, for example, scrapped a recruiting tool after discovering in 2015 that it was biased against women, Reuters reported. The same effect can surface in other tools. Daniel Greene, assistant professor of information studies at the University of Maryland, previously told HR Dive that if all the machine is doing is learning the biases of thousands of managers, users will make the same biased hiring decisions, only faster.
But problems with AI don't stop at hiring; a paper by professors at The Wharton School and ESSEC Business School found a gap between what AI promises HR and what it delivers. Based on the study's results, the paper identified shortfalls including the difficulty of measuring job performance, the small data sets generated by infrequent events (such as employee dismissals), legal and ethical constraints, and employee reactions.
HR must remember, however, that it's ultimately up to employers to ensure compliance with nondiscrimination laws. This means that when it comes time to evaluate new tech, HR must ask the right questions.