San Francisco Is Right: Facial Recognition Must Be Put On Hold

Good analysis by Manjoo:

What are we going to do about all the cameras? The question keeps me up at night, in something like terror.

Cameras are the defining technological advance of our age. They are the keys to our smartphones, the eyes of tomorrow’s autonomous drones and the FOMO engines that drive Facebook, Instagram, TikTok, Snapchat and Pornhub. Cheap, ubiquitous, viral photography has fed social movements like Black Lives Matter, but cameras are already prompting more problems than we know what to do with — revenge porn, live-streamed terrorism, YouTube reactionaries and other photographic ills.

And cameras aren’t done. They keep getting cheaper and — in ways both amazing and alarming — they are getting smarter. Advances in computer vision are giving machines the ability to distinguish and track faces, to make guesses about people’s behaviors and intentions, and to comprehend and navigate threats in the physical environment. In China, smart cameras sit at the foundation of an all-encompassing surveillance totalitarianism unprecedented in human history. In the West, intelligent cameras are now being sold as cheap solutions to nearly every private and public woe, from catching cheating spouses and package thieves to preventing school shootings and immigration violations. I suspect these and more uses will take off, because in my years of covering tech, I’ve gleaned one ironclad axiom about society: If you put a camera in it, it will sell.

That’s why I worry that we’re stumbling dumbly into a surveillance state. And it’s why I think the only reasonable thing to do about smart cameras now is to put a stop to them.

This week, San Francisco’s board of supervisors voted to ban the use of facial-recognition technology by the city’s police and other agencies. Oakland and Berkeley are also considering bans, as is the city of Somerville, Mass. I’m hoping for a cascade. States, cities and the federal government should impose an immediate moratorium on facial recognition, especially its use by law-enforcement agencies. We might still decide, at a later time, to give ourselves over to cameras everywhere. But let’s not jump into an all-seeing future without understanding the risks at hand.

What are the risks? Two new reports by Clare Garvie, a researcher who studies facial recognition at Georgetown Law, brought the dangers home for me. In one report — written with Laura Moy, executive director of Georgetown Law’s Center on Privacy & Technology — Ms. Garvie uncovered municipal contracts indicating that law enforcement agencies in Chicago, Detroit and several other cities are moving quickly, and with little public notice, to install Chinese-style “real time” facial recognition systems.

In Detroit, the researchers discovered that the city signed a $1 million deal with DataWorks Plus, a facial recognition vendor, for software that allows for continuous screening of hundreds of private and public cameras set up around the city — in gas stations, fast-food restaurants, churches, hotels, clinics, addiction treatment centers, affordable-housing apartments and schools. Faces caught by the cameras can be searched against Michigan’s driver’s license photo database. Researchers also obtained the Detroit Police Department’s rules governing how officers can use the system. The rules are broad, allowing police to scan faces “on live or recorded video” for a wide variety of reasons, including to “investigate and/or corroborate tips and leads.” In a letter to Ms. Garvie, James E. Craig, Detroit’s police chief, disputed any “Orwellian activities,” adding that he took “great umbrage” at the suggestion that the police would “violate the rights of law-abiding citizens.”

I’m less optimistic, and so is Ms. Garvie. “Face recognition gives law enforcement a unique ability that they’ve never had before,” Ms. Garvie told me. “That’s the ability to conduct biometric surveillance — the ability to see not just what is happening on the ground but who is doing it. This has never been possible before. We’ve never been able to take mass fingerprint scans of a group of people in secret. We’ve never been able to do that with DNA. Now we can with face scans.”

That ability alters how we should think about privacy in public spaces. It has chilling implications for speech and assembly protected by the First Amendment; it means that the police can watch who participates in protests against the police and keep tabs on them afterward.

In fact, this is already happening. In 2015, when protests erupted in Baltimore over the death of Freddie Gray while in police custody, the Baltimore County Police Department used facial recognition software to find people in the crowd who had outstanding warrants — arresting them immediately, in the name of public safety.

Eyes On Detroit

Detroit’s facial recognition operation taps into high-definition cameras set up around the city under a program called Project Green Light Detroit. Participating businesses send the Detroit Police Department a live feed from their indoor and outdoor cameras. In exchange, they receive “special police attention,” according to the initiative’s website.


But there’s another wrinkle in the debate over facial recognition. In a second report, Ms. Garvie found that for all their alleged power, face-scanning systems are being used by the police in a rushed, sloppy way that should call into question their results.

Here’s one of the many crazy stories in Ms. Garvie’s report: In the spring of 2017, a man was caught on a security camera stealing beer from a CVS store in New York. But the camera didn’t get a good shot of the man, and the city’s face-scanning system returned no match.

The police, however, were undeterred. A detective in the New York Police Department’s facial recognition department thought the man in the pixelated CVS video looked like the actor Woody Harrelson. So the detective went to Google Images, got a picture of the actor and ran his face through the face scanner. That produced a match, and the law made its move. A man was arrested for the crime not because he looked like the guy caught on tape but because Woody Harrelson did.

Facial Recognition Is Accurate, if You’re a White Guy – The New York Times

On the built-in biases and limitations of facial recognition, and the issues they raise:

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 percent of the time.

But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.

Color Matters in Computer Vision

Facial recognition algorithms made by Microsoft, IBM and Face++ were more likely to misidentify the gender of black women than white men.

Gender was misidentified in up to 1 percent of lighter-skinned males in a set of 385 photos.

Gender was misidentified in up to 7 percent of lighter-skinned females in a set of 296 photos.

Gender was misidentified in up to 12 percent of darker-skinned males in a set of 318 photos.

Gender was misidentified in 35 percent of darker-skinned females in a set of 271 photos.

In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women.

One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.
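To make that mechanism concrete, here is a toy sketch of how a skewed training set can produce exactly this kind of gap. Nothing below comes from the article or from any real face-recognition system: the data is synthetic, the two groups are abstract stand-ins, and the numbers are illustrative only.

```python
# Toy illustration (not from the article): a classifier fit to data dominated
# by one group learns that group's patterns and makes more errors on the
# under-represented group. Everything here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule_dim):
    """Synthetic samples: 5 features, but each group's label is driven by a
    different feature (a stand-in for group-specific appearance cues)."""
    X = rng.normal(size=(n, 5))
    y = (X[:, rule_dim] > 0).astype(int)
    return X, y

# Skewed training set: 9,500 samples from group A, only 500 from group B.
Xa, ya = make_group(9_500, rule_dim=0)   # group A's label follows feature 0
Xb, yb = make_group(500, rule_dim=1)     # group B's label follows feature 1
model = LogisticRegression(max_iter=1_000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Balanced held-out test sets expose the gap the skew creates.
for name, rule_dim in [("group A (over-represented)", 0),
                       ("group B (under-represented)", 1)]:
    X_test, y_test = make_group(5_000, rule_dim)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
```

In this toy setup, retraining on a balanced mix of the two groups roughly equalizes their accuracy, which is the point the paragraph above makes about who is represented in the training data.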

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead.

Today, facial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with face identification and other A.I. technology as an ingredient in automated decisions with higher stakes like hiring and lending.

Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.

Facial recognition technology is lightly regulated so far.

“This is the right time to be addressing how these A.I. systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.

Until now, there was anecdotal evidence of computer vision miscues, and occasionally in ways that suggested discrimination. In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as “gorillas.”

Sorelle Friedler, a computer scientist at Haverford College and a reviewing editor on Ms. Buolamwini’s research paper, said experts had long suspected that facial recognition software performed differently on different populations.

“But this is the first work I’m aware of that shows that empirically,” Ms. Friedler said.

Ms. Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition firsthand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. She figured it was a flaw that would surely be fixed before long.

But a few years later, after joining the M.I.T. Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognize her face as a face.

By then, face recognition software was increasingly moving out of the lab and into the mainstream.

“O.K., this is serious,” she recalled deciding then. “Time to do something.”

So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair.

Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.

In her newly published paper, which will be presented at a conference this month, Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by classifying how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing.

To test the commercial systems, Ms. Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents.

The African and Nordic faces were scored according to a six-point labeling system used by dermatologists to classify skin types. The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.
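The evaluation itself is conceptually simple: label each benchmark face with a gender and a skin-type score, run the commercial classifier over the set, and report error rates per subgroup rather than a single overall number. Here is a rough sketch of that kind of disaggregated audit. The column names, the tiny example table, and the lighter (types 1-3) versus darker (types 4-6) split are my own stand-ins for illustration, not the study's actual code or data.

```python
# Sketch of a disaggregated (per-subgroup) accuracy audit in the spirit of the
# study described above. The columns and example rows are hypothetical; a real
# audit would fill predicted_gender from whichever commercial API is tested.
import pandas as pd

def audit(results: pd.DataFrame) -> pd.Series:
    """results needs columns: true_gender, predicted_gender, skin_type (1-6)."""
    df = results.copy()
    # Collapse the six dermatological skin types into a lighter/darker grouping.
    df["skin_group"] = df["skin_type"].map(lambda t: "lighter" if t <= 3 else "darker")
    df["error"] = df["true_gender"] != df["predicted_gender"]
    # Error rate for each intersectional subgroup (e.g. darker-skinned women),
    # instead of one overall accuracy number that would hide the gaps.
    return df.groupby(["skin_group", "true_gender"])["error"].mean()

# Tiny made-up example so the sketch runs end to end.
example = pd.DataFrame({
    "true_gender":      ["F", "F", "M", "M", "F", "F", "M", "M"],
    "predicted_gender": ["M", "F", "M", "M", "M", "M", "M", "M"],
    "skin_type":        [5,   2,   6,   1,   6,   5,   2,   4],
})
print(audit(example))
```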

Ms. Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.

Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”

Ms. Buolamwini’s co-author on her paper is Timnit Gebru, who described her role as an adviser. Ms. Gebru is a scientist at Microsoft Research, working on its Fairness, Accountability, Transparency and Ethics in A.I. group.

Megvii, whose Face++ software is widely used for identification in online payment and ride-sharing services in China, did not reply to several requests for comment, Ms. Buolamwini said.

Ms. Buolamwini is releasing her data set for others to use and build upon. She describes her research as “a starting point, very much a first step” toward solutions.

Ms. Buolamwini is taking further steps in the technical community and beyond. She is working with the Institute of Electrical and Electronics Engineers, a large professional organization in computing, to set up a group to create standards for accountability and transparency in facial analysis software.

She meets regularly with other academics, public policy groups and philanthropies that are concerned about the impact of artificial intelligence. Darren Walker, president of the Ford Foundation, said that the new technology could be a “platform for opportunity,” but that it would not happen if it replicated and amplified bias and discrimination of the past.

“There is a battle going on for fairness, inclusion and justice in the digital world,” Mr. Walker said.

Part of the challenge, scientists say, is that there is so little diversity within the A.I. community.

“We’d have a lot more introspection and accountability in the field of A.I. if we had more people like Joy,” said Cathy O’Neil, a data scientist and author of “Weapons of Math Destruction.”

Technology, Ms. Buolamwini said, should be more attuned to the people who use it and the people it’s used on.

“You can’t have ethical A.I. that’s not inclusive,” she said. “And whoever is creating the technology is setting the standards.”

via Facial Recognition Is Accurate, if You’re a White Guy – The New York Times