‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man

A very concrete case of facial recognition getting it wrong, compounded by police incompetence or indifference in failing to examine the evidence or circumstances:

Police in Detroit were trying to figure out who stole five watches from a Shinola retail store. Authorities say the thief took off with an estimated $3,800 worth of merchandise.

Investigators pulled a security video that had recorded the incident. Detectives zoomed in on the grainy footage and ran the person who appeared to be the suspect through facial recognition software.

A hit came back: Robert Julian-Borchak Williams, 42, of Farmington Hills, Mich., about 25 miles northwest of Detroit.

In January, police pulled up to Williams’ home and arrested him while he stood on his front lawn in front of his wife and two daughters, ages 2 and 5, who cried as they watched their father being placed in the patrol car.

His wife, Melissa Williams, wanted to know where police were taking her husband.

“ ‘Google it,’ ” she recalls an officer telling her.

Robert Williams was led to an interrogation room, and police put three photos in front of him: two photos taken from the surveillance camera in the store and a photo of Williams’ state-issued driver’s license.

“When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all,” Williams said in an interview with NPR.

“[The detective] flips the third page over and says, ‘So I guess the computer got it wrong, too.’ And I said, ‘Well, that’s me,’ pointing at a picture of my previous driver’s license,” Williams said of the interrogation with detectives. “ ‘But that guy’s not me,’ ” he said, referring to the other photographs.

“I picked it up and held it to my face and told him, ‘I hope you don’t think all Black people look alike,’ ” Williams said.

Williams was detained for 30 hours and then released on bail until a court hearing on the case, his lawyers say.

At the hearing, a Wayne County prosecutor announced that the charges against Williams were being dropped due to insufficient evidence.

Civil rights experts say Williams is the first documented example in the U.S. of someone being wrongfully arrested based on a false hit produced by facial recognition technology.

Lawyer: Artificial intelligence ‘framed and informed everything’

What makes Williams’ case extraordinary is that police admitted that facial recognition technology, conducted by Michigan State Police in a crime lab at the request of the Detroit Police Department, prompted the arrest, according to charging documents reviewed by NPR.

The pursuit of Williams as a possible suspect came despite repeated claims by him and his lawyers that the match generated by artificial intelligence was faulty.

The alleged suspect in the security camera image was wearing a red St. Louis Cardinals hat. Williams, a Detroit native, said he would under no circumstances be wearing that hat.

“They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day,” said lawyer Phil Mayor with the ACLU of Michigan.

On Wednesday, the ACLU of Michigan filed a complaint against the Detroit Police Department asking that police stop using the software in investigations.

In a statement to NPR, the Detroit Police Department said after the Williams case, the department enacted new rules. Now, only still photos, not security footage, can be used for facial recognition. And it is now used only in the case of violent crimes.

“Facial recognition software is an investigative tool that is used to generate leads only. Additional investigative work, corroborating evidence and probable cause are required before an arrest can be made,” Detroit Police Department Sgt. Nicole Kirkwood said in a statement.

In Williams’ case, police had asked the store security guard, who had not witnessed the robbery, to pick the suspect out of a photo lineup based on the footage, and the security guard selected Williams.

Victoria Burton-Harris, Williams’ lawyer, said in an interview that she is skeptical that investigators used the facial recognition software as only one of several possible leads.

“When that technology picked my client’s face out, from there, it framed and informed everything that officers did subsequently,” Burton-Harris said.

Academic and government studies have demonstrated that facial recognition systems misidentify people of color more often than white people.

One of the leading studies on bias in face recognition was conducted by Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League.

“This egregious mismatch shows just one of the dangers of facial recognition technology which has already been shown in study after study to fail people of color, people with dark skin more than white counterparts generally speaking,” Buolamwini said.

“The threats to civil liberties posed by mass surveillance are too high a price,” she said. “You cannot erase the experience of 30 hours detained, the memories of children seeing their father arrested, or the stigma of being labeled criminal.”

Maria Miller, a spokeswoman for the prosecutor’s office, said the case was dismissed over insufficient evidence, including that the charges were filed without the support of any live witnesses.

Wayne County Prosecutor Kym Worthy said any case sent to her office that uses facial recognition technology cannot move forward without other supporting evidence.

“This case should not have been issued based on the DPD investigation, and for that we apologize,” Worthy said in a statement to NPR. “Thankfully, it was dismissed on our office’s own motion. This does not in any way make up for the hours that Mr. Williams spent in jail.”

Worthy said Williams is able to have the case expunged from his record.

Williams: “Let’s say that this case wasn’t retail fraud. What if it’s rape or murder?”

According to Georgetown Law’s Center on Privacy and Technology, at least a quarter of the nation’s law enforcement agencies have access to face recognition tools.

“Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them,” said Jameson Spivack, a researcher at the center.

While Amazon, Microsoft and IBM have announced a halt to sales of face recognition technology to law enforcement, Spivack said that will have little effect, since most major facial recognition software contracts with police are with smaller, more specialized companies, like South Carolina-based DataWorks Plus, which is the company that supplied the Detroit Police Department with its face-scanning software.

The company did not respond to an interview request.

DataWorks Plus has supplied the technology to government agencies in Santa Barbara, Calif., Chicago and Philadelphia.

Facial recognition technology is used by consumers every day to unlock their smartphones or to tag friends on social media. Some airports use the technology to scan passengers before they board flights.

Its deployment by governments, though, has drawn concern from privacy advocates and experts who study the machine learning tool and have highlighted its flaws.

“Some departments of motor vehicles will use facial recognition to detect license fraud, identity theft, but the most common use is law enforcement, whether it’s state, local or federal law enforcement,” Spivack said.

The government use of facial recognition technology has been banned in half a dozen cities.

In Michigan, Williams said he hopes his case is a wake-up call to lawmakers.

“Let’s say that this case wasn’t retail fraud. What if it’s rape or murder? Would I have gotten out of jail on a personal bond, or would I have ever come home?” Williams said.

Williams and his wife, Melissa, worry about the long-term effects the arrest will have on their two young daughters.

“Seeing their dad get arrested, that was their first interaction with the police. So it’s definitely going to shape how they perceive law enforcement,” Melissa Williams said.

In the complaint, Williams and his lawyers say that if the police department won’t ban the technology outright, then at least his photo should be removed from the database so this doesn’t happen again.

“If someone wants to pull my name and look me up,” Williams said, “who wants to be seen as a thief?”

Source: ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man

Concerns raised after facial recognition software found to have racial bias

Legitimate concerns:

In 2015, two undercover police officers in Jacksonville, Fla., bought $50 worth of crack cocaine from a man on the street. One of the cops surreptitiously snapped a cellphone photo of the man and sent it to a crime analyst, who ran the photo through facial recognition software.

The facial recognition algorithm produced several matches, and the analyst chose the first one: a mug shot of a man named Willie Allen Lynch. Lynch was convicted of selling drugs and sentenced to eight years in prison.

Civil liberties lawyers jumped on the case, flagging a litany of concerns to fight the conviction. Matches of other possible perpetrators generated by the tool were never disclosed to Lynch, hampering his ability to argue for his innocence. The use of the technology statewide had been poorly regulated and shrouded in secrecy.

But also, Willie Allen Lynch is a Black man.

Multiple studies have shown facial recognition technology makes more errors on Black faces. For mug shots in particular, researchers have found that algorithms generate the highest rates of false matches for African American, Asian and Indigenous people.

After more than two dozen police services, government agencies and private businesses across Canada recently admitted to testing the divisive facial recognition app Clearview AI, experts and advocates say it’s vital that lawmakers and politicians understand how the emerging technology could impact racialized citizens.

“Technologies have their bias as well,” said Nasma Ahmed, director of Toronto-based non-profit Digital Justice Lab, who is advocating for a pause on the use of facial recognition technology until proper oversight is established.

“If they don’t wake up, they’re just going to be on the wrong side of trying to fight this battle … because they didn’t realize how significant the threat or the danger of this technology is,” says Toronto-born Toni Morgan, managing director of the Center for Law, Innovation and Creativity at Northeastern University School of Law in Boston.

“It feels like Toronto is a little bit behind the curve in understanding the implications of what it means for law enforcement to access this technology.”

Last month, the Star revealed that officers at more than 20 police forces across Canada have used Clearview AI, a facial recognition tool that has been described as “dystopian” and “reckless” for its broad search powers. It relies on what the U.S. company has said is a database of three billion photos scraped from the web, including social media.

Almost all police forces that confirmed use of the tool said officers had accessed a free trial version without the knowledge or authorization of police leadership and have been told to stop; the RCMP is the only police service that has paid to access the technology.

Multiple forces say the tool was used by investigators within child exploitation units, but it was also used to probe lesser crimes, including in an auto theft investigation and by a Rexall employee seeking to stop shoplifters.

While a handful of American cities and states have moved to limit or outright ban police use of facial recognition technology, the response from Canadian lawmakers has been muted.

According to client data obtained by BuzzFeed News and shared exclusively with the Star, the Toronto Police Service was the most prolific user of Clearview AI in Canada. (Clearview AI has not responded to multiple requests for comment from the Star but told BuzzFeed there are “numerous inaccuracies” in the client data information, which they allege was “illegally obtained.”)

Toronto police had run more than 3,400 searches since October, according to the BuzzFeed data.

A Toronto police spokesperson has said officers were “informally testing” the technology, but said the force could not verify the Star’s data about officers’ use or “comment on it with any certainty.” Toronto police Chief Mark Saunders directed officers to stop using the tool after he became aware they were using it, and a review is underway.

But Toronto police are still using a different facial recognition tool, one made by NEC Corp. of America and purchased in 2018. The NEC facial recognition tool searches the Toronto police database of approximately 1.5 million mug shot photos.

The National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, has been testing the accuracy of facial recognition technology since 2002. Companies that sell the tools voluntarily submit their algorithms to NIST for testing; government agencies sponsor the research to help inform policy.

In a report released in December that tested 189 algorithms from 99 developers, NIST found dramatic variations in accuracy across different demographic groups. For one type of matching, the team discovered the systems had error rates between 10 and 100 times higher for African American and Asian faces compared to images of white faces.

For the type of facial recognition matching most likely to be used by law enforcement, African American women had higher error rates.
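To make that comparison concrete, here is a minimal sketch of how a per-group false match rate for one-to-one verification, one of the metrics NIST reports, can be computed and compared. The trial records and group labels are invented placeholders, not NIST data; a disparity of "10 times" simply means one group's rate divided by another's is about 10.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, is_impostor_pair, system_declared_match).
# An impostor pair is two images of *different* people; a false match is the system
# wrongly declaring them to be the same person.
trials = [
    ("group_a", True, True),  ("group_a", True, False), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
    # ... a real evaluation uses very large numbers of impostor comparisons per group
]

attempts = defaultdict(int)
false_matches = defaultdict(int)
for group, is_impostor, declared_match in trials:
    if is_impostor:                      # false match rate is defined over impostor pairs only
        attempts[group] += 1
        if declared_match:
            false_matches[group] += 1

fmr = {g: false_matches[g] / attempts[g] for g in attempts}
print(fmr)   # with this toy data: group_a is roughly 0.33, group_b roughly 0.17, about a 2x disparity
```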

“Law enforcement, they probably have one of the most difficult cases. Because if they miss someone … and that person commits a crime, they’re going to look bad. If they finger the wrong person, they’re going to look bad,” said Craig Watson, manager of the group that runs NIST’s testing program.

Clearview AI has not been tested by NIST. The company has claimed its tool is “100% accurate” in a report written by an “independent review panel.” The panel said it relied on the same methodology the American Civil Liberties Union used to assess a facial recognition algorithm sold by Amazon.

The American Civil Liberties Union slammed the report, calling the claim “misleading” and the tool “dystopian.”

Clearview AI did not respond to a request for comment about its accuracy claims.

Before purchasing the NEC facial recognition technology, Toronto police conducted a privacy impact assessment. Asked if this examined potential racial bias within the NEC’s algorithms, spokesperson Meaghan Gray said in an email the contents of the report are not public.

But she said TPS “has not experienced racial or gender bias when utilizing the NEC Facial Recognition System.”

“While not a means of undisputable positive identification like fingerprint identification, this technology provides ‘potential candidates’ as investigative leads,” she said. “Consequently, one race or gender has not been disproportionally identified nor has the TPS made any false identifications.”

The revelations about Toronto police’s use of Clearview AI have coincided with the planned installation of additional CCTV cameras in communities across the city, including in the Jane Street and Finch Avenue West area. The provincially funded additional cameras come after the Toronto police board approved increasing the number placed around the city.

The combination of facial recognition technology and additional CCTV cameras in a neighbourhood home to many racialized Torontonians is a “recipe for disaster,” said Sam Tecle, a community worker with Jane and Finch’s Success Beyond Limits youth support program.

“One technology feeds the other,” Tecle said. “Together, I don’t know how that doesn’t result in surveillance — more intensified surveillance — of Black and racialized folks.”

Tecle said the plan to install more cameras was asking for a lot of trust from a community that already has a fraught relationship with the police. That’s in large part due to the legacy of carding, he said — when police stop, question and document people not suspected of a crime, a practice that disproportionately impacts Black and brown men.

“This is just a digital form of doing the same thing,” Tecle told the Star. “If we’re misrecognized and misidentified through these facial recognition algorithms, then I’m very apprehensive about them using any kind of facial recognition software.”

Others pointed out that false positives — incorrect matches — could have particularly grave consequences in the context of police use of force: Black people are “grossly over-represented” in cases where Toronto police used force, according to a 2018 report by the Ontario Human Rights Commission.

Saunders has said residents in high-crime areas have repeatedly asked for more CCTV cameras in public spaces. At last month’s Toronto police board meeting, Mayor John Tory passed a motion requiring that police engage in a public community consultation process before installing more cameras.

Gray said many residents and business owners want increased safety measures, and this feedback alongside an analysis of crime trends led the force to identify “selected areas that are most susceptible to firearm-related offences.”

“The cameras are not used for surveillance. The cameras will be used for investigation purposes, post-reported offences or incidents, to help identify potential suspects, and if needed during major events to aid in public safety,” Gray said.

Akwasi Owusu-Bempah, an assistant professor of criminology at the University of Toronto, said when cameras are placed in neighbourhoods with high proportions of racialized people, then used in tandem with facial recognition technology, “it could be problematic, because of false positives and false negatives.”

“What this gets at is the need for continued discussion, debate, and certainly oversight,” Owusu-Bempah said.

Source: Concerns raised after facial recognition software found to have racial bias

Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us…:

There has been significant coverage of bias problems in the use of machine learning to analyze people, and there has been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower approach, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institution points to a projection that the US will become a majority-minority nation by 2045, meaning the non-white share of the population will exceed 50%. Even before then, the ongoing demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know whether messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. A chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there is an obvious need to be more inclusive in the images appearing in ads, particularly in the people shown, the picture is only part of the equation. The right words must also be used to reach different demographics. Of course, a marketing team believing it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported-on task of facial recognition, that is, identifying people. While social media and surveys can capture some sentiment, analysis of facial features is even more detailed. Recognizing expressions is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking how they change as someone watches or reads an advertisement, can capture not only a smile but also the “strength” of that smile. Other types of sentiment capture can be scaled in the same way.
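As an illustration of the kind of feature tracking described here, the sketch below uses OpenCV’s bundled Haar cascades to find a face, look for a smile in the lower half of the face, and turn the widest smile detection into a rough “strength” score over a recorded viewing session. This is a minimal sketch only, not Collage Group’s or any vendor’s actual pipeline, and the video file name is a hypothetical placeholder.

```python
import cv2

# Off-the-shelf detectors bundled with OpenCV; expression-level analysis only,
# no attempt to identify who the viewer is.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_strength(frame_bgr):
    """Return a crude 0-1 smile score for the first detected face, or None if no face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    mouth_region = gray[y + h // 2 : y + h, x : x + w]   # lower half of the face
    smiles = smile_cascade.detectMultiScale(mouth_region, scaleFactor=1.7,
                                            minNeighbors=20)
    if len(smiles) == 0:
        return 0.0
    widest = max(sw for (_sx, _sy, sw, _sh) in smiles)
    return min(widest / w, 1.0)          # wider smile box relative to face width = "stronger"

cap = cv2.VideoCapture("ad_test_session.mp4")   # hypothetical recording of one viewer watching an ad
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    score = smile_strength(frame)
    if score is not None:
        scores.append(score)
cap.release()
print("mean smile strength:", sum(scores) / len(scores) if scores else "no face detected")
```

A production system would more likely use facial landmarks or a trained expression classifier rather than cascade detections, but the shape of the pipeline, detect features, score the expression, then aggregate over time, is the same.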

Then, without having to identify individual people, information about their demographics can build a picture of how sentiment varies between groups. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older Black men or from East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received by the target audiences.

The use of AI to rapidly provide feedback on sentiment analysis will help advertisers better tune messages, whether aiming at a general message that attracts an audience across the US marketing landscape or finding appropriately focused messages to attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies better understand and improve messaging to minority communities. Collage Group recently rolled out AdRate, a process for evaluating ads that integrates AI vision to analyze the sentiment of viewers.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their [messaging] to the best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to craft better messages for a diverse market is a clear benefit. Visual AI that enhances the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Of note. These kinds of studies are important to expose the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.

With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

The new study, scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu, is sure to inflame that argument.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny by federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

Ms. Buolamwini, who had done similar tests last year, conducted another round to learn whether industry practices had changed, she said.

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations, such as job interviews, or in products, like drones or police body cameras.

Some federal lawmakers are voicing similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Source: Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Most engineers are white — and so are the faces they use to train software – Recode

Not terribly surprising but alarming given how much facial recognition is used these days.

While the focus of this article is with respect to Black faces (as it is with the Implicit Association Test), the same issue likely applies to other minority groups.

I would welcome any comments from those with experience of how the various face recognition features in commercial software such as Flickr, Google Photos, etc. perform:

Facial recognition technology is known to struggle to recognize black faces. The underlying reason for this shortcoming runs deeper than you might expect, according to researchers at MIT.

Speaking during a panel discussion on artificial intelligence at the World Economic Forum Annual Meeting this week, MIT Media Lab director Joichi Ito said it likely stems from the fact that most engineers are white.

“The way you get into computers is because your friends are into computers, which is generally white men. So, when you look at the demographic across Silicon Valley you see a lot of white men,” Ito said.

Ito relayed an anecdote about how a graduate researcher in his lab had found that commonly used libraries for facial recognition have trouble reading dark faces.

“These libraries are used in many of the products that you have, and if you’re an African-American person you get in front of it, it won’t recognize your face,” he said.

Libraries are collections of pre-written code developers can share and reuse to save time instead of writing everything from scratch.

Joy Buolamwini, the graduate researcher on the project, told Recode in an email that software she used did not consistently detect her face, and that more analysis is needed to make broader claims about facial recognition technology.

“Given the wide range of skin-tone and facial features that can be considered African-American, more precise terminology and analysis is needed to determine the performance of existing facial detection systems,” she said.

“One of the risks that we have of the lack of diversity in engineers is that it’s not intuitive which questions you should be asking,” Ito said. “And even if you have a design guidelines, some of this stuff is kind of feel decision.”

“Calls for tech inclusion often miss the bias that is embedded in written code,” Buolamwini wrote in a May post on Medium.

Reused code, while convenient, is limited by the training data it uses to learn, she said. In the case of code for facial recognition, the code is limited by the faces included in the training data.

“A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set,” wrote Buolamwini.
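A minimal sketch of how such a gap shows up in practice, assuming a labeled and roughly balanced evaluation set: run one off-the-shelf detector over images grouped by demographic label and compare detection rates. The file paths and group labels below are hypothetical placeholders, and the OpenCV Haar cascade stands in for whichever library a project actually reuses.

```python
import cv2
from collections import defaultdict

# Off-the-shelf face detector bundled with OpenCV, used here as a stand-in for
# any reused facial detection library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical evaluation set: (image_path, demographic_group) pairs.
eval_set = [
    ("faces/sample_0001.jpg", "darker_skin"),
    ("faces/sample_0002.jpg", "lighter_skin"),
    # ... ideally many images per group, comparable in pose and lighting
]

seen = defaultdict(int)
detected = defaultdict(int)
for path, group in eval_set:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:                      # skip unreadable files
        continue
    seen[group] += 1
    faces = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        detected[group] += 1

for group in seen:
    print(group, "detection rate:", detected[group] / seen[group])
```

A large gap between the printed rates is the measurable symptom of the training-set imbalance Buolamwini describes.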

She wrote that to cope with limitations in one project involving facial recognition technology, she had to wear a white mask so that her face could “be detected in a variety of lighting conditions.”

“While this is a temporary solution, we can do better than asking people to change themselves to fit our code. Our task is to create code that can work for people of all types.”

Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post

The next frontier of combatting profiling:

The growing use of facial-recognition systems has led to a high-tech form of racial profiling, with African Americans more likely than others to have their images captured, analyzed and reviewed during computerized searches for crime suspects, according to a new report based on records from dozens of police departments.

The report, released Tuesday by the Center for Privacy & Technology at Georgetown University’s law school, found that half of all American adults have their images stored in at least one facial-recognition database that police can search, typically with few restrictions.

The steady expansion of these systems has led to a disproportionate racial impact because African Americans are more likely to be arrested and have mug shots taken, one of the main ways that images end up in police databases. The report also found that criminal databases are rarely “scrubbed” to remove the images of innocent people, nor are facial-recognition systems routinely tested for accuracy, even though some struggle to distinguish among darker-skinned faces.

The combination of these factors means that African Americans are more likely to be singled out as possible suspects in crimes — including ones they did not commit, the report says.

“This is a serious problem, and no one is working to fix it,” said Alvaro M. Bedoya, executive director of the Georgetown Law center that produced the report on facial-recognition technology. “Police departments are talking about it as if it’s race-blind, and it’s just not true.”

The 150-page report, called “The Perpetual Line-Up,” found a rapidly growing patchwork of facial-recognition systems at the federal, state and local level with little regulation and few legal standards. Some databases include mug shots, others driver’s-license photos. Some states, such as Maryland and Pennsylvania, use both as they analyze crime-scene images in search of potential suspects.

At least 117 million Americans have images of their faces in one or more police databases, meaning their resemblance to images taken from crime scenes can become the basis for follow-up by investigators. The FBI ran a pilot program this year in which it could search the State Department’s passport and visa databases for leads in criminal cases. Overall, the Government Accountability Office reported in May, the FBI has had access to 412 million facial images for searches; the faces of some Americans appear several times in these databases.

Source: Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post