San Francisco has become the first US city to ban the use of facial recognition technology by the police and local government agencies. This is a huge win for those who argue that the tech — which can identify an individual by analyzing their facial features in images, in videos, or in real time — carries risks so serious that they far outweigh any benefits.
The “Stop Secret Surveillance” ordinance, which passed 8-1 in a Tuesday vote by the city’s Board of Supervisors, will also prevent city agencies from adopting any other type of surveillance tech (say, automatic license plate readers) until the public has been given notice and the board has had a chance to vote on it.
The ban on facial recognition tech doesn’t apply to businesses, individuals, or federal agencies like the Transportation Security Administration at San Francisco International Airport. But the limits it places on police are important, especially for marginalized and overpoliced communities.
Although the tech is pretty good at identifying white male faces, because those are the sorts of faces it’s been trained on, it often misidentifies people of color and women. That bias could lead to them being disproportionately held for questioning when law enforcement agencies put the tech to use.
San Francisco’s new ban may inspire other cities to follow suit. Later this month, Oakland, California, will weigh whether to institute its own ban. Washington state and Massachusetts are considering similar measures.
But some argue that outlawing facial recognition tech is throwing the proverbial baby out with the bathwater. They say the software can help with worthy aims, like finding missing children and elderly adults or catching criminals and terrorists. Microsoft president Brad Smith has said it would be “cruel” to altogether stop selling the software to government agencies. This camp wants to see the tech regulated, not banned.
Yet there’s good reason to think regulation won’t be enough. For one thing, the danger of this tech is not well understood by the general public — not least because it’s been marketed to us as convenient (Facebook will tag your friends’ faces for you in pictures), cute (phone apps will let you put funny filters on your face), and cool (the latest iPhone’s Face ID makes it the shiny new must-have gadget).
What’s more, the market for this tech is so lucrative that there are strong financial incentives to keep pushing it into more areas of our lives in the absence of a ban. AI is also developing so fast that regulators would likely have to play whack-a-mole as they struggle to keep up with evolving forms of facial recognition. The risks of this tech — including the risk that it will fuel racial discrimination — are so great that there’s a strong argument for implementing a ban like the one San Francisco has passed.
A ban is an extreme measure, yes. But a tool that enables a government to immediately identify us anytime we cross the street is so inherently dangerous that treating it with extreme caution makes sense. Instead of starting from the assumption that facial recognition is permissible — which is the de facto reality we’ve unwittingly gotten used to as tech companies marketed the software to us unencumbered by legislation — we’d do better to start from the assumption that it’s banned, then carve out rare exceptions for specific cases when it might be warranted.
The case for banning facial recognition tech
Proponents of a ban have put forward a number of arguments for it. First, there’s the well-documented fact that human bias can creep into AI. Often, this manifests as a problem with the training data that goes into AIs: If designers mostly feed the systems examples of white male faces, and don’t think to diversify their data, the systems won’t learn to properly recognize women and people of color.
In 2015, Google’s image recognition system labeled African Americans as “gorillas.” Three years later, Amazon’s Rekognition system falsely matched 28 members of Congress to criminal mug shots. Another study found that facial recognition systems from IBM, Microsoft, and China’s Megvii were more likely to misidentify the gender of dark-skinned people (especially women) than of light-skinned people.
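To make the mechanism behind those findings concrete, here is a minimal, purely illustrative sketch in Python. It uses synthetic stand-in data, not real faces or any of the systems named above, and every name and number in it is hypothetical: a simple classifier trained on a dataset dominated by one group ends up noticeably less accurate on an underrepresented group whose distinguishing features it never properly learned.

```python
# Illustrative sketch only: synthetic "features" stand in for face data.
# It demonstrates the mechanism described above -- a model trained on data
# dominated by one group tends to be far less accurate on a group it has
# barely seen -- not a reproduction of any real facial recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth_group(n, signal_dim):
    """Fake examples for one group; which feature carries the signal differs by group."""
    y = rng.integers(0, 2, n)                        # binary label to predict
    X = rng.normal(0.0, 1.0, (n, 16))                # 16 stand-in "face features"
    X[:, signal_dim] += np.where(y == 1, 2.0, -2.0)  # only this dimension is informative
    return X, y

# Training data: 5,000 examples from a well-represented group, only 100 from another.
X_a, y_a = synth_group(5000, signal_dim=0)
X_b, y_b = synth_group(100, signal_dim=1)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate on fresh samples from each group: the underrepresented group
# is misclassified far more often.
for name, dim in [("well-represented group", 0), ("underrepresented group", 1)]:
    X_test, y_test = synth_group(2000, dim)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Real systems and audits are far more complicated, but the underlying pattern this toy example shows, error rates that track who is and isn’t well represented in the training data, is the same one the studies above documented.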
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains.
Say the tech gets just as good at identifying black people as it is at identifying white people. That may not actually be a positive change. Given that the black community is already overpoliced in the US, making black faces more legible to this tech and then giving the tech to police could just exacerbate discrimination. As Zoé Samudzi wrote at the Daily Beast, “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.”
Woodrow Hartzog and Evan Selinger, a law professor and a philosophy professor, respectively, argued last year in an important essay that facial recognition tech is inherently damaging to our social fabric. “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled,” they wrote. The worry is that there’ll be a chilling effect on freedom of speech, assembly, and religion.
It’s not hard to imagine some people becoming too nervous to show up at a protest, say, or a mosque, especially given the way law enforcement has already used facial recognition tech. As Recode’s Shirin Ghaffary noted, Baltimore police used it to identify and arrest people who protested Freddie Gray’s death.
Hartzog and Selinger also note that our faces are something we can’t change (at least not without surgery), that they’re central to our identity, and that they’re all too easily captured from a distance (unlike fingerprints or iris scans). If we don’t ban facial recognition before it becomes more entrenched, they argue, “people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”
Facial recognition: “the plutonium of AI”?
Luke Stark, a digital media scholar who works for Microsoft Research Montreal, made another argument for a ban in a recent article titled “Facial recognition is the plutonium of AI.”
Comparing software to a radioactive element may seem over-the-top, but Stark insists the analogy is apt. Plutonium is the biologically toxic element used to make atomic bombs, and just as its toxicity comes from its chemical structure, the danger of facial recognition is ineradicably, structurally embedded within it. “Facial recognition, simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes,” he writes.
Stark agrees with the pro-ban arguments listed above but says there’s another, even deeper issue with facial ID systems — that “they attach numerical values to the human face at all.” He explains:
Facial recognition technologies and other systems for visually classifying human bodies through data are inevitably and always means by which “race,” as a constructed category, is defined and made visible. Reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years.
The mere fact of numerically classifying and schematizing human facial features is dangerous, he says, because it enables governments and companies to divide us into different races. It’s a short leap from having that capability to “finding numerical reasons for construing some groups as subordinate, and then reifying that subordination by wielding the ‘charisma of numbers’ to claim subordination is a ‘natural’ fact.”
In other words, racial categorization too often feeds racial discrimination. This is not a far-off hypothetical but a current reality: China is already using facial recognition to track Uighur Muslims. As the New York Times reported last month, “The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review.” This “automated racism” makes it easier for China to round up Uighurs and detain them in internment camps.
Stark, who specifically mentions the case of the Uighurs, concludes that the risks of this tech vastly outweigh the benefits. He does concede that there might be very rare use cases where the tech could be allowed under a strong regulatory scheme — for example, as an accessibility tool for the visually impaired. But, he argues, we need to start with the assumption that the tech is banned and make exceptions to that rule, not proceed as if the tech is the rule and regulation is the exception.
“To avoid the social toxicity and racial discrimination it will bring,” he writes, “facial recognition technologies need to be understood for what they are: nuclear-level threats to be handled with extraordinary care.”
Just as nations came together in the 1960s to create the Non-Proliferation Treaty and curb the spread of nuclear weapons, San Francisco may now serve as a beacon to other cities, showing that it’s possible to say no to the spread of a risky new technology that would make us identifiable and surveillable anywhere we go.
We may have been largely hypnotized by facial recognition’s seeming convenience, cuteness, and coolness when it was first introduced to us. But it’s not too late to wake up.