Facebook says it will look for racial bias in its algorithms

The news: Facebook says it is setting up new internal teams to look for racial bias in the algorithms that drive its main social network and Instagram, according to the Wall Street Journal. In particular, the investigations will examine the adverse effects that machine-learning systems, which can absorb implicit racial bias from their training data, have on Black, Hispanic, and other minority groups.

Why it matters: In the last few years, a growing number of researchers and activists have highlighted the problem of bias in AI and its disproportionate impact on minorities. Facebook, which uses machine learning to curate the daily experience of its 2.5 billion users, is well overdue for an internal assessment of this kind. There is already evidence, for example, that Facebook’s ad-serving algorithms discriminate by race and let advertisers stop specific racial groups from seeing their ads.

Under pressure: Facebook has a history of dodging accusations of bias in its systems, and it has taken several years of bad press and pressure from civil rights groups to get to this point. The new teams come after a month-long advertising boycott organized by civil rights groups, including the Anti-Defamation League, Color of Change, and the NAACP, that led big spenders like Coca-Cola, Disney, McDonald’s, and Starbucks to suspend their campaigns.

No easy fix: The move is welcome, but launching an investigation is a far cry from actually fixing the problem of racial bias, especially when nobody really knows how to fix it. In most cases the bias lives in the training data, and there is no agreed-on way to remove it; adjusting that data, a form of algorithmic affirmative action, is itself controversial. Machine-learning bias is also just one of social media’s problems around race. If Facebook is going to look at its algorithms, it should do so as part of a wider overhaul that also grapples with policies that give a platform to racist politicians, white-supremacist groups, and Holocaust deniers.

"We will continue to work closely with Facebook’s Responsible AI team to ensure we are looking at potential biases across our respective platforms," says Stephanie Otway, a spokesperson for Instagram. "It’s early days and we plan to share more details on this work in the coming months."
