Whose Job Is It To Protect Black People From AI? | Opinion

The governor of New York recently dispatched nearly 1,000 members of the National Guard into New York City subways in response to a spike in crime. In similarly drastic fashion, Google has just reacted to public misfires of its text-to-image product Gemini by suspending its ability to generate images of people. Meanwhile, the victimization of Black people by artificial intelligence (AI) algorithms such as Gemini remains ongoing, and neither elected officials nor tech companies demonstrate much interest in protecting Black people from AI.

Algorithmic tools, or AI, that spew lies, stereotypes, and worse are old news. Stories abound of AI assigning non-violent Black offenders higher recidivism scores than violent white offenders. In 2018, Safiya Noble demonstrated how Google searches for "Black girls" yielded disproportionately high volumes of pornographic and derogatory content. Meanwhile, AI health care algorithms recommend withholding medical treatments from Black patients at higher rates than from other patients. Whenever I prompt Midjourney, a visual AI platform much like Gemini, to visualize a dark-skinned person from multiple angles, or with different poses or facial expressions, the algorithm lightens the subject's complexion with each subsequent image. Eventually, I am presented with a subject with a white complexion and features.

In this photo illustration, Gemini AI is seen on an iPad. Michael M. Santiago/Getty Images

These outputs display no comparable pattern of exclusion or misrepresentation of white people, and tech companies maintain this status quo. Confronted with accusations of algorithmic racism and stereotyping, Big Tech companies tend to double down on the same technology. Occasionally there is an apology from Google (for algorithms labeling Black people as gorillas) or from Facebook (for algorithms labeling Black men as "primates"). Sometimes the response is a commercial decision, like IBM, Amazon, and Microsoft's temporary moratorium on selling AI facial recognition systems to U.S. law enforcement after the Gender Shades research showed that the technology performed far worse on Black and dark-complexioned faces than on white ones. By contrast, none of the complaints or impassioned headlines about Google's Gemini depicting Nazis, superheroes, and U.S. founding fathers as non-white involve lasting or physical harm. If anything, pundits were outraged that an AI algorithm could project a reality so different from the one they know.

To be sure, misrepresentation in popular culture and exclusion from the historical record hurt. Black people have long reported being harassed, subjected to unfair moderation and suspension policies, and forced by Big Tech companies to prove that they actually hold the jobs they say they do.

More than just feelings are involved. In 2017, ProPublica found that Facebook's hate-speech detection algorithms allowed white men to post hateful, racialized content while flagging posts from Black children as hate speech. In 2021, The Markup reported on Google's social justice "block list" for advertisers on YouTube: it prohibited advertisers from marketing anything to viewers interested in "Black Lives Matter" and "Black Excellence," but permitted them to target viewers who followed "White Lives Matter" and "White Power."

In More Than A Glitch: Confronting Race, Gender, and Ability Bias in Tech, New York University associate professor Meredith Broussard wrote that the most frequent victims of misrepresentation, stereotyping, and exclusion by the tech industry are not white people. This is important, given that irate right-wing talk show hosts have alleged that "racism" is behind Gemini's recent outputs, as if serious harm had occurred. It took less than a week for Google to issue a public apology and suspend Gemini's image generation of people. If erasing people from U.S. history and popular culture is "racist," why aren't tech companies rushing to pull from the market the racist algorithms that inaccurately label Black people as criminals, sentence us unfairly, or fail to recognize us at all?

It should be unlawful to sell algorithms that ignore, demean, or endanger people based on the color of their skin. Our elected officials have the power to take action, and just as they can deploy military forces to keep the peace, they can demand that tech companies fix or decommission algorithms shown to harm entire communities. It is time for lawmakers to act, even if only because white people are unhappy that one technology has transformed them into outsiders. Legal interventions exist and should be used to mandate swift action when a company's technology threatens or unfairly excludes any of us.

Nakeema Stefflbauer, PhD, is the founder and CEO of the nonprofit FrauenLoop. She writes and speaks about how to center marginalized communities within digital ecosystems. She is currently a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with The MacArthur Foundation.

The views expressed in this article are the writer's own.


