'Colorblind' Artificial Intelligence Just Reproduces Racism

AI is increasingly common, and the algorithms it uses to "predict" crime and misbehavior are reproducing racial discrimination.
Kai-Fu Lee, CEO of Sinovation Ventures and the former head of Google China, gives a presentation in Beijing on April 27, 2017.
Mark Schiefelbein/ASSOCIATED PRESS

Sunday night’s episode of “60 Minutes” profiled venture capitalist Kai-Fu Lee’s work on artificial intelligence (AI) in China, specifically facial recognition software that can read and learn basic emotions. Lee explained how artificial intelligence is built: Computers are programmed to take in massive amounts of data and then learn to make decisions based on that information.

The “60 Minutes” piece was marked by what is by now a predictable kind of exuberance about how this technology might end the need to perform mundane tasks, eliminate the need for many jobs and transform education. “Could these AI systems pick out geniuses from the countryside?” reporter Scott Pelley asked. “That’s possible in the future,” Kai-Fu Lee responded.

A significant use of AI not mentioned in this puff piece is the way that the Chinese are implementing surveillance technology in policing. While Pelley did ask how Xi Jinping, the authoritarian leader of the Chinese government, might intend to use such technology for nefarious purposes, like targeting dissidents, Lee demurred, saying that he couldn’t read Xi’s mind. Meanwhile, police near Beijing have been equipped with “smart” glasses. They use the same technology as Google Glass, which proved to be a commercial failure as a personal consumer good but has found a second life in law enforcement. Chinese police are using these wearables to pick up facial features and car registration plates, which they then cross-check against a database of suspects in real time.

Police forces in the U.S. are using AI, too. Investigative journalist Julia Angwin and colleagues at ProPublica did some savvy reporting in 2016 showing that algorithms meant to predict who will engage in criminal activity rely on data that are already biased against black people. Police also use facial images, collected without people’s permission, to conduct lineups. And the use of surveillance technology by police has almost no transparency and even less regulation; the ACLU discovered that the Boston Police Department purchased three drones for aerial surveillance without telling anyone in city government.


Then there are the ways that AI is marketed as a consumer good to homeowners, hiring managers and harried parents in search of a reliable babysitter. Also, Taylor Swift.

Last May, the organizers of a Swift concert used AI technology to surreptitiously scan the faces of concertgoers. Kiosks showing video clips of Swift’s rehearsals enticed viewers, and once people looked at the display, facial recognition software began screening their faces. Those images were then transferred to a local law enforcement “command post” and were cross-referenced with a database of hundreds of the pop star’s known stalkers.

Tech startup Ring, which was bought by Amazon last April, sells a “smart” doorbell. You’ve probably seen the advertisements: A person, often a white guy, is pictured through the familiar, grainy surveillance view. He lurks around the front door, smashes a window or makes off with a package intended for the homeowner. When the homeowner ― in this ad, she’s a white woman ― is alerted to the danger, she says something to the effect of “shoo,” and the intruder runs away.

This unobjectionable demonstration of how consumers are supposed to use the Ring doorbell obscures the more pernicious reality that the technology makes it easier for customers to contact the police. AI-enabled doorbells invite the homeowner to designate “suspicious” people and then to auto-generate a call to police. In effect, it automates dialing 911 whenever anyone deemed “suspicious” approaches the home.

Taylor Swift performs at the Rose Bowl on May 19, 2018, in Pasadena, California. Facial recognition technology was used at one Swift concert to scan concertgoers for known stalkers.
Christopher Polk/TAS18 via Getty Images

We already know that calling the cops on a black person can quickly turn lethal. Even without AI-enabled devices, black people who have shown up on the doorsteps of white people’s homes have been shot at and killed. And white women have shown themselves to be particularly eager to involve police when they deem a black person “suspicious.” Introducing AI-enabled doorbells ― algorithms that allow white homeowners to determine who belongs and who doesn’t, and then automatically connect them to police ― is to engineer the certain deaths of black people.

The glitch that may stall our descent into a facial-recognition dystopia is that, so far, this technology isn’t very good at recognizing black faces. In an op-ed for The New York Times, Joy Buolamwini, founder of the nonprofit Algorithmic Justice League, notes that facial recognition software is biased. She’s right that “the robot doesn’t see dark skin” ― at least for now. But AI is good at “learning,” and with more data (that is, with more scans of black faces), the technology will become more adept at “seeing” dark skin.

AI is changing hiring, too. Take, for example, the use of AI-enabled babysitter screeners. Predictim, a tech startup, offers an online service with “advanced artificial intelligence” to assess a babysitter’s personality. To do that, it scrapes thousands of a candidate’s Facebook, Twitter and Instagram posts. One white mother in Rancho Mirage, California, used the service and said that “100 percent of the parents are going to want to use this. We all want the perfect babysitter.” Predictim’s service may not be designed with white women in mind, but it seems likely they will be early adopters.


There is a real danger here for people who make their living providing child care, like Kianah Stover. When her tech-writer employer, Brian Merchant, ran her information through the Predictim system for an article he was writing, it returned a ranking of “moderate risk” (3 out of 5) for “disrespectfulness” ― because of some innocuous Twitter jokes. Merchant ran the same test on a friend, who received a better ranking than Stover did, despite the fact that he often spews vulgarities online. He’s white, and she’s black.

Many, including those at Predictim, will say these technologies are not racist and are not designed to harm black and brown people. Joel Simonoff, Predictim’s chief technology officer, says of the results Merchant found, “I just want to clarify and say that Kianah was not flagged because she was African American. I can guarantee you 100 percent there was no bias that went into those posts being flagged. We don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. There’s no way for us to enter that into the algorithm itself.”

For what it’s worth, I believe Simonoff when he says his technology doesn’t look at color. But discrimination doesn’t have to be deliberate or even conscious in order to be harmful. And the “colorblind” approach will not undo discrimination; it will entrench it. If we simply layer AI on top of unjust social systems, without considering how it automates and speeds up those very same systems, we only make injustice run more smoothly. And, crucially, we bestow upon it a gloss of fairness and impartiality it does not deserve ― which will make reforming it that much harder.
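To make that dynamic concrete, here is a minimal toy sketch (invented data and a made-up proxy feature, not Predictim’s actual model or any real system): a classifier that never receives race as an input is trained on biased historical labels, and it learns to penalize whatever feature happens to correlate with race.

```python
# Toy illustration with invented data (not any real product's model): a classifier
# that never sees group membership can still reproduce racial skew if its training
# labels came from biased human judgments tied to a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: two groups with identical underlying behavior.
group = rng.integers(0, 2, size=n)                 # 0 = group A, 1 = group B (never given to the model)
proxy = rng.normal(loc=group * 1.0, size=n)        # a writing-style feature that happens to correlate with group
truly_respectful = rng.random(n) < 0.9             # same base rate of "respectful" posts in both groups

# Biased historical labels: group B's posts get flagged "disrespectful" more often.
biased_label = (~truly_respectful) | ((group == 1) & (rng.random(n) < 0.3))

# The "colorblind" model only ever sees the proxy feature.
model = LogisticRegression().fit(proxy.reshape(-1, 1), biased_label)
risk = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

print("mean predicted risk, group A:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group B:", round(risk[group == 1].mean(), 3))
# Group B scores as riskier even though group membership was never an algorithmic input.
```

On this invented data, the model assigns higher average risk to group B without ever being told who is in which group, which is exactly the pattern the ProPublica reporting and the Predictim example describe.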


Jessie Daniels is a professor at The City University of New York and the author of the forthcoming book Tweetstorm: The Rise of the “Alt-Right” and the Mainstreaming of White Nationalism.
