Dive Brief:
- Improperly trained artificial intelligence (AI) is prone to discriminatory behavior, Quartz reports. A new World Economic Forum report, "How to Prevent Discriminatory Outcomes in Machine Learning," explains how algorithms can deny sick people healthcare, reject low-income individuals' applications for credit and heighten racial bias.
- According to Quartz, such discrimination has already happened plenty of times. A Google photo-tagging algorithm misclassified Black people as gorillas, and hiring platforms have kept people with disabilities out of work. AI decisions are driven in some cases by probability, that is, the assumption that people will behave a certain way because of circumstances in their lives, such as health or income, without considering other factors that would make the decision more accurate, or even legal (the sketch after this brief illustrates the problem).
- The report advises employers to examine the ways discrimination creeps into AI and offers guidance on making the most of machine learning.
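To make the proxy problem concrete, here is a minimal, hypothetical sketch. All names, fields and thresholds are invented for illustration and do not come from the WEF report; the point is only that a rule resting on a single probabilistic proxy can reject someone it never actually evaluates.

```python
# Hypothetical sketch: a naive screening rule that decides on a single
# probabilistic proxy (income). All names, fields and thresholds here
# are invented for illustration; they are not from the WEF report.

def naive_credit_decision(applicant: dict) -> bool:
    """Approve or reject based on income alone."""
    # Factors that would make the call more accurate (payment history,
    # current debt) are never consulted, so the rule can systematically
    # disadvantage a group without ever "seeing" a protected trait.
    return applicant["annual_income"] >= 40_000

applicants = [
    {"name": "A", "annual_income": 52_000, "on_time_payment_rate": 0.60},
    {"name": "B", "annual_income": 31_000, "on_time_payment_rate": 0.99},
]

for a in applicants:
    verdict = "approved" if naive_credit_decision(a) else "rejected"
    print(f"{a['name']}: {verdict}")
# Applicant B, a far more reliable payer, is rejected purely on the proxy.
```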
Dive Insight:
Employers need not sacrifice the benefits of machine learning because of these potential problems, but they will need to be vigilant about how the technology is used. Debates continue over what it can and can't do; some argue that human intervention remains necessary where AI can't handle nuances in behavior and language, for example.
For AI to be adopted properly, a company will need to instill a culture of trust, especially as employee data becomes part of these systems. Cutting-edge employers need to stay on top of this, as AI is already a reality in certain areas of HR, such as recruiting tech. Don't be afraid to tell employees about the company's interest in, and perhaps use of, AI; hiding its impact only deepens workplace fear of tools that many employees would agree are good for business.
Employers also must keep in mind that where there's discrimination, there's the risk of liability, whether or not the violator is human. For HR, this means asking the right questions before adopting new tech.
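One concrete question HR can ask is whether a tool's outcomes pass a basic adverse-impact screen. The sketch below applies the EEOC's four-fifths (80%) rule of thumb: if one group's selection rate falls below 80% of the highest group's rate, the tool deserves a closer look. The outcome data is hypothetical, invented for illustration.

```python
# Hypothetical sketch of an adverse-impact screen based on the EEOC's
# four-fifths (80%) rule of thumb. The outcome data below is invented
# for illustration only.

from collections import defaultdict

# (group, was_selected) pairs, e.g. from a resume-screening tool's log.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in outcomes:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

# Selection rate per group, benchmarked against the highest rate.
rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
benchmark = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
# group_a selects at 0.75, group_b at 0.25; the 0.33 ratio gets flagged.
```

A failing ratio doesn't prove discrimination, and a passing one doesn't rule it out; it simply tells HR where to ask those questions first.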