Amazon's 'Sexist' Recruitment Tool Highlights Importance Of Addressing Bias In AI

After identifying bias in its first AI recruiting tool, Amazon is working on another system that will have an enhanced focus on diversity.

According to a report from Reuters, Amazon's plans for an artificial intelligence tool that helps with hiring backfired when the company discovered the system was discriminating against women.

Per Reuters, Amazon assembled an engineering team to work on creating an automated recruitment system back in 2014. The team created computer programs to review hundreds of resumes from job applicants, with the aim of picking top talent. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those," one source told Reuters.

By 2015, though, the company noticed the system wasn't rating candidates in a gender-neutral manner. Instead, it was disproportionately ranking male candidates higher. This was happening because the resumes the system was trained on — submitted to Amazon over a 10-year period — came predominantly from men, which, as Reuters points out, likely reflects the largely male-dominated tech industry.

Since the tool was programmed to vet applicants based on patterns it observed in those resumes, the AI ended up concluding that male candidates were better suited to the company. Conversely, it downgraded women candidates, penalizing resumes that included the word "women's" (as in "women's chess captain") or the names of all-women's colleges.
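The mechanism is easy to reproduce in miniature. The sketch below uses entirely made-up resume data (the words, the hiring outcomes, and the scoring function are all hypothetical, not Amazon's actual system) to show how a naive model trained on historically male-skewed hiring decisions assigns a negative score to a word like "women's" — the bias lives in the data, not in any explicitly sexist rule:

```python
from collections import Counter

# Hypothetical historical training data: (resume keywords, hired?) pairs.
# Because most past hires were men, words correlated with women appear
# mostly in "not hired" examples -- the data itself encodes the bias.
history = [
    (["software", "java", "chess"], True),
    (["software", "python"], True),
    (["java", "cloud"], True),
    (["software", "women's", "chess"], False),
    (["python", "women's"], False),
    (["cloud", "java", "python"], True),
]

def word_scores(examples):
    """Naive per-word score: fraction hired minus fraction rejected."""
    hired, rejected = Counter(), Counter()
    for words, was_hired in examples:
        for w in set(words):
            (hired if was_hired else rejected)[w] += 1
    return {
        w: (hired[w] - rejected[w]) / (hired[w] + rejected[w])
        for w in set(hired) | set(rejected)
    }

scores = word_scores(history)
print(scores["women's"])  # negative: the model "learned" to penalize the term
print(scores["java"])     # positive: correlated with past (male) hires
```

A real system would use a far more complex model, but the failure mode is the same: any learner that optimizes for resembling past hires will reward whatever those hires had in common, including their demographics.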

Though Amazon reportedly tweaked the program to prompt a neutral response to these terms, the company couldn't be sure that the machine wouldn't find other discriminatory ways to sort candidates. It ultimately decided to scrap that particular model of the project early last year, while it was still in its trial phase. 
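Amazon's worry — that the machine could find other discriminatory ways to sort candidates even after the flagged terms were neutralized — comes down to proxy features. The toy sketch below (all weights and words are invented for illustration) shows why zeroing out an explicit term isn't enough when other words correlate with it:

```python
# Hypothetical feature weights learned from biased historical data.
# "softball" stands in for any word that happens to correlate with
# women's resumes in the training set.
weights = {"women's": -0.9, "softball": -0.4, "java": 0.8, "chess": 0.3}

def neutralize(weights, flagged_terms):
    """Force flagged terms to contribute nothing to the ranking."""
    return {w: (0.0 if w in flagged_terms else s) for w, s in weights.items()}

def score(resume_words, weights):
    """Sum the weights of the words appearing on a resume."""
    return sum(weights.get(w, 0.0) for w in resume_words)

fixed = neutralize(weights, {"women's"})
# The explicit term is now neutral...
print(score(["women's", "java"], fixed))   # 0.8
# ...but a correlated proxy word still drags the score down.
print(score(["softball", "java"], fixed))  # 0.4
```

Auditing every possible proxy is intractable in practice, which is consistent with the decision to scrap the model rather than keep patching individual terms.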

Amazon never chose new hires based solely on the tool's rankings, although recruiters did look at the AI's recommendations. According to a statement emailed to A Plus by an Amazon spokesperson, the tool "was never used by Amazon recruiters to evaluate candidates."

The company also told Reuters it remains dedicated to "workplace diversity and equality."

Machine learning technology is becoming increasingly common across various industries, from policing to recruiting. But reports have shown that many of these systems have long-standing problems regarding discrimination. To avoid amplifying bias, companies need to actively teach their technology to be inclusive.

There are several ways corporations can improve upon their machine learning tools. Quartz suggests assessing the wider impacts of new AI systems before implementation, as well as establishing internal codes of conduct and incentive models to enhance adherence to non-discriminatory practices. The publication also states that inclusivity and diversity should be made priorities early on, starting from the development of the design teams through the final product.

It's also important for companies to be transparent about the impact of their technology and to continually evaluate its effectiveness, from refining algorithms to auditing and reporting their behavior. By taking these proactive steps, forward-thinking companies have the potential to create revolutionary AI systems without posing a risk to human rights.

And apparently, Amazon thinks so too. The company says it's taking what it learned from its failed AI experiment to start over. According to what a source told Reuters, a new team in Edinburgh has been formed to give automated employment screening a second try, this time with an enhanced focus on diversity.

Update: This article has been updated to reflect that the project was still in a trial phase when it was scrapped, and to include a statement by an Amazon spokesperson.

Cover image via VDB Photos
