Moneybox

Amazon Created a Hiring Tool Using A.I. It Immediately Started Discriminating Against Women.

Amazon sign, with dude. David Ryder/Getty Images

Thanks to Amazon, the world has a nifty new cautionary tale about the perils of teaching computers to make human decisions.

According to a Reuters report published Wednesday, the tech giant decided last year to abandon an “experimental hiring tool” that used artificial intelligence to rate job candidates, in part because it discriminated against women. Recruiters reportedly looked at the recommendations the program spat out while searching for talent, “but never relied solely on those rankings.”

The misadventure began in 2014, when a group of Amazon engineers in Scotland set out to mechanize the company’s head-hunting process by creating a program that would scour the Internet for worthwhile job candidates (and presumably save Amazon’s HR staff some soul-crushing hours clicking around LinkedIn). “Everyone wanted this holy grail,” a source told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

It didn’t pan out that way. In 2015, the team realized that its creation was biased in favor of men when it came to rating technical talent, like software developers. The problem was that the team had trained its machine-learning models to look for prospects by recognizing terms that had popped up on the resumes of past job applicants—and because of the tech world’s well-known gender imbalance, those past hopefuls tended to be men.
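To see how that happens mechanically, here’s a minimal sketch in Python (the data, terms, and model are invented for illustration; Reuters did not describe Amazon’s actual system). A simple bag-of-words classifier trained on historical outcomes in which technical hires skewed male will assign a negative weight to any term that happens to co-occur with rejections, whether or not it says anything about skill:

```python
# Toy illustration of how a resume screener absorbs historical bias.
# NOT Amazon's system: the resumes, labels, and model are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic history: past technical hires skewed male, so phrases that
# correlate with women appear mostly on rejected resumes.
resumes = [
    "executed distributed systems project, captured market data",  # hired
    "executed backend migration, java python",                     # hired
    "led robotics team, python c++",                                # hired
    "women's chess club captain, python java",                     # rejected
    "women's engineering society, backend python",                 # rejected
    "volunteer mentor, java sql",                                  # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the model penalizes
# the word purely because it co-occurred with rejections in the data.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:+.2f}")      # negative
print(f"weight for 'executed': {weights['executed']:+.2f}")  # positive
```

The model never sees a “gender” field; the correlation in the training data is enough.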

“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges,” Reuters reported. The program also decided that basic tech skills, like the ability to write code, weren’t all that important, since they popped up on all sorts of resumes. Meanwhile, it grew to like candidates who littered their resumes with macho verbs such as “executed” and “captured.”

After years of trying to fix the project, Amazon brass reportedly “lost hope” and shuttered the effort in 2017.

All of this is a remarkably clear-cut illustration of why many tech experts are worried that, rather than remove human biases from important decisions, artificial intelligence will simply automate them. An investigation by ProPublica, for instance, found that a risk-assessment algorithm judges use in criminal sentencing was more likely to falsely flag black defendants as future reoffenders than white ones. Google Translate famously introduced gender biases into its translations. The issue is that these programs learn to spot patterns and make decisions by analyzing massive data sets, which themselves are often a reflection of social discrimination. Programmers can try to tweak the A.I. to avoid those undesirable results, but they may not think to, and even when they do, they may not succeed.
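Here’s one concrete reason the tweaking often fails, continuing the toy sketch above (again with invented data; “smithvale” is a made-up college name standing in for any proxy feature): strip the obviously gendered token from the vocabulary and retrain, and a correlated proxy simply absorbs the penalty.

```python
# Why "just remove the biased word" often fails: proxies remain.
# Invented data; "smithvale" is a hypothetical all-women's college.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed distributed systems project, java",      # hired
    "captured market share, backend python",           # hired
    "led robotics team, python c++",                    # hired
    "women's chess club, smithvale college, python",   # rejected
    "women's engineering society, smithvale college",  # rejected
    "volunteer mentor, smithvale college, java",       # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Naive fix: excise the obviously gendered token from the vocabulary.
vectorizer = CountVectorizer(stop_words=["women"])
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The college name now carries the penalty instead.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'smithvale': {weights['smithvale']:+.2f}")  # negative
```

This mirrors what Reuters described: even after Amazon edited the programs to be neutral to particular terms, there was no guarantee the system wouldn’t find other ways to sort candidates in discriminatory fashion.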

Amazon deserves some credit for realizing its tool had a problem, trying to fix it, and eventually moving on (assuming the tool didn’t seriously skew the company’s recruiting over the last few years). But at a time when lots of companies are embracing artificial intelligence for tasks like hiring, what happened at Amazon highlights just how hard it is to use such technology without unintended consequences. And if a company like Amazon can’t pull it off cleanly, it’s difficult to imagine that less sophisticated companies can.