
Why we’re impatient for diversity in artificial intelligence

Undergraduate students take part in an Artificial Intelligence hackathon
Photo by Xavier Galiana | AFP | Getty Images
Tech marketing and professional futurists once heralded artificial intelligence (AI) as a great equalizer: simple mathematical code could eliminate the prejudices inherent in human decision-making. This would lead, among other things, to fairer and more equitable workplaces. But if we look at the current state of the AI sector, we see anything but.

The AI industry is in a diversity crisis. You can see it in the statistics of who gets jobs in the industry, what jobs they get, and how long they stay. For example, women comprise 15 percent of AI research staff at Facebook and 10 percent at Google. It’s not much better in academia, where recent studies show that only 18 percent of authors at leading AI conferences are women, and more than 80 percent of AI professors are male. For black workers, the picture is worse: only 2.5 percent of Google’s workforce is black, while Facebook and Microsoft are each at 4 percent. Given decades of investment aimed at improving diversity, these figures are alarming.

This diversity crisis is not just about women—it’s about gender and race, and most fundamentally about power. It’s about who gets a say over how companies work, what products get built, and who they are best designed for. And the evidence shows that in the companies leading the field, women, people of color, and gender minorities are systematically underpaid and pushed out, excluded from AI conferences, ethics boards, and corporate hierarchies.
A woman makes a copy at the Google Artificial Intelligence office in Ghana
Photo by Cristina Aldehuela | AFP | Getty Images
This exclusion is reflected in AI technologies themselves: high-paying job ads on Facebook are shown to white men while women and people of color are shown lower-status jobs, fraud detection software locks out trans people, sentencing algorithms discriminate against black defendants, and recruiting software “learned” to penalize job applicants for even mentioning the word “woman.”

In short, AI discriminates, and the effects are not evenly distributed. They disproportionately harm those who are already marginalized: women, people of color, gender minorities, and other underrepresented groups.

So what is to be done? At the AI Now Institute, we’ve just completed a year-long study identifying paths forward that could begin to produce positive change in both industry and academia, and in doing so improve the AI systems that are affecting the world beyond.
The AI industry must make real and substantive improvements to workplace diversity by changing its hiring practices, increasing transparency, and tying executive incentives to increases in the hiring and retention of underrepresented groups. Contractors, temps, and vendors must be included in all these efforts. We also need avenues to ensure accountability, including transparent AI systems, rigorous testing, and a broader, larger field of research into AI bias. Sometimes, we’ll need to ask whether certain technologies should be built at all.

It is clear that AI’s diversity crisis urgently needs a remedy. We can no longer accept the shocking lack of diversity and the systematic exclusion of underrepresented groups inside the tech industry, nor can we ignore the proliferation of AI systems that normalize and amplify inequities at massive scale. Now is the time to ensure that the future of AI isn’t amplifying the harms and biases of the past. Given AI’s increasing reach into nearly every sensitive social domain, the urgency of this project cannot be overstated.