By Michael Li
We’ve seen no shortage of scandals when it comes to AI. In 2016, Microsoft’s Tay, an AI bot built to learn in real time from social media content, turned into a misogynist, racist troll within 24 hours of launch.
A ProPublica report claimed that an algorithm — built by a private contractor — was more likely to rate Black parole candidates as higher risk. A landmark U.S. government study reported that more than 200 facial recognition algorithms — a majority of the industry — had a harder time distinguishing non-white faces. The bias in our human-built AI likely owes something to the lack of diversity among the humans who built it. After all, if none of the researchers building facial recognition systems are people of color, ensuring that non-white faces are properly distinguished may be a far lower priority.
Sources of Discrimination in the AI and Technology Fields
Technology has a remarkably non-diverse workforce. A 2019 study found that just 5.7% of Google employees were Latinx and 3.3% were Black. Similarly low rates exist across the tech industry. Those numbers are hardly better outside it, with Latinx and Black employees making up just 7% and 9%, respectively, of STEM workers in the general economy. (They comprise 18.5% and 13.4%, respectively, of the U.S. population.) Data science is a special standout — by one estimate, it underrepresents women, Hispanic workers, and Black workers more than any other role in the tech industry. It may come as no surprise that a 2019 study by the non-profit Female Founders Faster Forward (F4) found that 95% of surveyed candidates reported facing discrimination in the workplace. With such a biased workforce, how can we expect our AI to fare any better?
Sources of bias in hiring abound. Some of this comes from AI. Amazon famously had to scrap its AI recruiting bot when the company discovered it was biased against women. And it’s not just tech titans: LinkedIn’s 2018 Global Recruiting Trends survey found that 64% of employers use AI and data in recruiting, including top employers like Target, Hilton, Cisco, PepsiCo, and Ikea. But we cannot entirely blame AI — there is a much deeper and more systemic source of hiring bias. An established field of academic research suggests that human resume screening is inherently biased. Using innovative field experiments, university researchers have shown that resume screeners discriminate on the basis of race, religion, national origin, sex, sexual orientation, and age. Discrimination is so prevalent that minority candidates often actively “whiten” their resumes (and are subsequently more successful in the job market). Scanning resumes, whether by computer or human, is an archaic practice best relegated to the dustbin of history. At best, it measures a candidate’s ability to tactfully boast about their accomplishments; at worst, it provides all the right ingredients for intentional or unintentional discrimination. So how are companies overcoming this challenge?
A Musical Interlude
An unlikely parallel exists in — of all places — the field of classical music. In the 1970s and 1980s, historically male-dominated orchestras began changing their hiring procedures. Auditions were conducted blind: a screen was placed between the candidate and the judging committee so that the auditioner’s identity could not be discerned, and only the music was judged. The effects of this change were astounding. Harvard researchers found that women were 1.6 times more likely to pass blind auditions than non-blind ones, and the share of female players in the orchestras increased by 20 to 30 percentage points. By focusing on a candidate’s performance (rather than irrelevant, discriminatory attributes), companies can increase both the diversity and the quality of their new hires. Here’s how.