Canada issues tender notice to improve face biometrics for immigration applications

Of note (the passport program has been using facial recognition technology for some time, as does NEXUS):

The Government of Canada has issued a tender notice inviting industry engagement to improve its biometric immigration system.

The document was published by Public Works and Government Services Canada (PWGSC) on behalf of Immigration, Refugees and Citizenship Canada (IRCC).

The Invitation to Qualify (ITQ) is the first phase of a two-phase procurement process, which will initially see suppliers of facial recognition technologies invited to pre-qualify in accordance with the terms and conditions of the ITQ.

Qualified Respondents will then be permitted to submit bids on any subsequent Request for Proposals (RFP) issued as part of the procurement process.

According to IRCC, the requirement is for a “reliable and accurate system for establishing and confirming a person’s identity throughout the passport program continuum,” one considered “an integral component of immigration and border decision-making processes.”

Furthermore, the facial recognition system should include both a front-end component with a user interface and a back-end component. The former will be used by IRCC to collect, enter, and view the biographical and biometric data of passport and potential passport clients, while the latter will house the databases, tables, algorithms, permissions, code, IT and security rules, and infrastructure.

The back-end system will also be responsible for performing the validation, transformation, dissemination and integration of face biometrics data in alignment with Government of Canada IT guidelines.

The first phase of the tender notice will end on 9 November. The full text of the document is available in both English and French.

The publication of the new tender comes months after a similar one the Government of Canada posted in July for biometric capture solutions for IRCC.

Source: Canada issues tender notice to improve face biometrics for immigration applications

Demographic skews in training data create algorithmic errors

Of note:

Algorithmic bias is often described as a thorny technical problem. Machine-learning models can respond to almost any pattern—including ones that reflect discrimination. Their designers can explicitly prevent such tools from consuming certain types of information, such as race or sex. Nonetheless, the use of related variables, like someone’s address, can still cause models to perpetuate disadvantage.

Ironing out all traces of bias is a daunting task. Yet despite the growing attention paid to this problem, some of the lowest-hanging fruit remains unpicked.

Every good model relies on training data that reflect what it seeks to predict. This can sometimes be a full population, such as everyone convicted of a given crime. But modellers often have to settle for non-random samples. For uses like facial recognition, models need enough cases from each demographic group to learn how to identify members accurately. And when making forecasts, like trying to predict successful hires from recorded job interviews, the proportions of each group in training data should resemble those in the population.
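To make the sampling point concrete, here is a minimal sketch (with made-up group labels and counts, not data from the article) of the kind of representativeness check a modeller could run before training: compare each group’s share of the training data to its share of the population the model will serve, and flag groups that fall far short.

```python
from collections import Counter

def representation_gaps(train_labels, population_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls well below
    their share of the target population.

    train_labels: iterable of group labels, one per training example
    population_shares: dict mapping group label -> expected share (0-1)
    tolerance: flag a group if its training share is below
               tolerance * its population share
    """
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical numbers for illustration only
train_labels = ["light"] * 9500 + ["dark"] * 500
population_shares = {"light": 0.60, "dark": 0.40}
print(representation_gaps(train_labels, population_shares))
# {'dark': (0.05, 0.4)} -> dark skin appears in 5% of the training data
# but 40% of the population the model will be used on
```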

Many businesses compile private training data. However, the two largest public image archives, Google Open Images and ImageNet—which together have 725,000 pictures labelled by sex, and 27,000 that also record skin colour—are far from representative. In these collections, drawn from search engines and image-hosting sites, just 30-40% of photos are of women. Only 5% of skin colours are listed as “dark”.

Sex and race also sharply affect how people are depicted. Men are unusually likely to appear as skilled workers, whereas images of women disproportionately contain swimwear or undergarments. Machine-learning models regurgitate such patterns. One study trained an image-generation algorithm on ImageNet, and found that it completed pictures of young women’s faces with low-cut tops or bikinis.

https://infographics.economist.com/2021/20210605_GDC200/index.html

Similarly, images with light skin often displayed professionals, such as cardiologists. Those with dark skin had higher shares of rappers, lower-class jobs like “washerwoman” and even generic “strangers”. Thanks to the Obamas, “president” and “first lady” were also overrepresented.

ImageNet is developing a tool to rebalance the demography of its photos. And private firms may use less biased archives. However, commercial products do show signs of skewed data. One study of three programs that identify sex in photos found far more errors for dark-skinned women than for light-skinned men.

Making image or video data more representative would not fix imbalances that reflect real-world gaps, such as the high number of dark-skinned basketball players. But for people trying to clear passport control, avoid police stops based on security cameras or break into industries run by white men, correcting exaggerated demographic disparities would surely help.

Source: https://www.economist.com/graphic-detail/2021/06/05/demographic-skews-in-training-data-create-algorithmic-errors?utm_campaign=data-newsletter&utm_medium=newsletter&utm_source=salesforce-marketing-cloud&utm_term=2021-06-08&utm_content=data-nl-article-link-1&etear=data_nl_1

From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Good long read and overview of the major issues:

In mid-2019, an investigative journalism/tech non-profit called MuckRock and Open the Government (OTG), a non-partisan advocacy group, began submitting freedom of information requests to law enforcement agencies across the United States. The goal: to smoke out details about the use of an app rumoured to offer unprecedented facial recognition capabilities to anyone with a smartphone.

Co-founded by Michael Morisy, a former Boston Globe editor, MuckRock specializes in FOIs and its site has grown into a publicly accessible repository of government documents obtained under access to information laws.

As responses trickled in, it became clear that the MuckRock/OTG team had made a discovery about a tech company called Clearview AI. Based on documents obtained from Atlanta, OTG researcher Freddy Martinez began filing more requests, and discovered that as many as 200 police departments across the U.S. were using Clearview’s app, which compares images taken by smartphone cameras to a sprawling database of 3 billion open-source photographs of faces linked to various forms of personal information (e.g., Facebook profiles). It was, in effect, a point-click-and-identify system that radically transformed the work of police officers.

The documents soon found their way to a New York Times reporter named Kashmir Hill, who, in January 2020, published a deeply reported feature about Clearview, a tiny and secretive start-up with backing from Peter Thiel, the Silicon Valley billionaire behind PayPal and Palantir Technologies. Among the story’s revelations, Hill disclosed that tech giants like Google and Apple were well aware that such an app could be developed using artificial intelligence algorithms feeding off the vast storehouse of facial images uploaded to social media platforms and other publicly accessible databases. But they had opted against designing such a disruptive and easily disseminated surveillance tool.

The Times story set off what could best be described as an international chain reaction, with widespread media coverage about the use of Clearview’s app, followed by a wave of announcements from various governments and police agencies about how Clearview’s app would be banned. The reaction played out against a backdrop of news reports about China’s nearly ubiquitous facial recognition-based surveillance networks.

Canada was not exempt. To Surveil and Predict, a detailed examination of “algorithmic policing” published this past fall by the University of Toronto’s Citizen Lab, noted that officers with law enforcement agencies in Calgary, Edmonton and across Greater Toronto had tested Clearview’s app, sometimes without the knowledge of their superiors. Investigative reporting by the Toronto Star and Buzzfeed News found numerous examples of municipal law enforcement agencies, including the Toronto Police Service, using the app in crime investigations. The RCMP denied using Clearview even after it had entered into a contract with the company — a detail exposed by Vancouver’s The Tyee.

With federal and provincial privacy commissioners ordering investigations, Clearview and the RCMP subsequently severed ties, although Citizen Lab noted that many other tech companies still sell facial recognition systems in Canada. “I think it is very questionable whether [Clearview] would conform with Canadian law,” Michael McEvoy, British Columbia’s privacy commissioner, told the Star in February.

There was fallout elsewhere. Four U.S. cities banned police use of facial recognition outright, the Citizen Lab report noted. The European Union in February proposed a ban on facial recognition in public spaces but later hedged. A U.K. court in April ruled that police facial recognition systems were “unlawful,” marking a significant reversal in surveillance-minded Britain. And the European Data Protection Board, an EU agency, informed Commission members in June that Clearview’s technology violates pan-European law enforcement policies. As Rutgers University law professor and smart city scholar Ellen Goodman notes, “There’s been a huge blowback” against the use of data-intensive policing technologies.

There’s nothing new about surveillance or police investigative practices that draw on highly diverse forms of electronic information, from wire taps to bank records and images captured by private security cameras. Yet during the past decade or so, dramatic advances in big data analytics, biometrics and AI, stoked by venture capital and law enforcement agencies eager to invest in new technology, have given rise to a fast-growing data policing industry. As the Clearview story showed, regulation and democratic oversight have lagged far behind the technology.

U.S. startups like PredPol and HunchLab, now owned by ShotSpotter, have designed so-called “predictive policing” algorithms that use law enforcement records and other geographical data (e.g. locations of schools) to make statistical guesses about the times and locations of future property crimes. Palantir’s law-enforcement service aggregates and then mines huge data sets consisting of emails, court documents, evidence repositories, gang member databases, automated licence plate readers, social media, etc., to find correlations or patterns that police can use to investigate suspects.

Yet as the Clearview fallout indicated, big data policing is rife with technical, ethical and political landmines, according to Andrew Ferguson, a University of the District of Columbia law professor. As he explains in his 2017 book, The Rise of Big Data Policing, analysts have identified an impressive list of problems: biased, incomplete or inaccurate data, opaque technology, erroneous predictions, lack of governance, public suspicions about surveillance and over-policing, conflicts over access to proprietary algorithms, unauthorized use of data and the muddied incentives of private firms selling law enforcement software.

At least one major study found that some police officers were highly skeptical of predictive policing algorithms. Other critics point out that by deploying smart city sensors or other data-enabled systems, like transit smart cards, local governments may be inadvertently providing the police with new intelligence sources. Metrolinx, for example, has released Presto card user information to police while London’s Metropolitan Police has made thousands of requests for Oyster card data to track criminals, according to The Guardian. “Any time you have a microphone, camera or a live-feed, these [become] surveillance devices with the simple addition of a court order,” says New York civil rights lawyer Albert Cahn, executive director of the Surveillance Technology Oversight Project (STOP).

The authors of the Citizen Lab study, lawyers Kate Robertson, Cynthia Khoo and Yolanda Song, argue that Canadian governments need to impose a moratorium on the deployment of algorithmic policing technology until the public policy and legal frameworks can catch up.

Data policing was born in New York City in the early 1990s when then-police Commissioner William Bratton launched “Compstat,” a computer system that compiled up-to-date crime information then visualized the findings in heat maps. These allowed unit commanders to deploy officers to neighbourhoods most likely to be experiencing crime problems.

Originally conceived as a management tool that would push a demoralized police force to make better use of limited resources, Compstat is credited by some as contributing to the marked reduction in crime rates in the Big Apple, although many other big cities experienced similar drops through the 1990s and early 2000s.

The 9/11 terrorist attacks sparked enormous investments in security technology. The past two decades have seen the emergence of a multi-billion-dollar industry dedicated to civilian security technology, everything from large-scale deployments of CCTVs and cybersecurity to the development of highly sensitive biometric devices — fingerprint readers, iris scanners, etc. — designed to bulk up the security around factories, infrastructure and government buildings.

Predictive policing and facial recognition technologies evolved on parallel tracks, both relying on increasingly sophisticated analytics techniques, artificial intelligence algorithms and ever deeper pools of digital data.

The core idea is that the algorithms — essentially formulas, such as decision-trees, that generate predictions — are “trained” on large tranches of data so they become increasingly accurate, for example at anticipating the likely locations of future property crimes or matching a face captured in a digital image from a CCTV to one in a large database of headshots. Some algorithms are designed to use a set of rules with variables (akin to following a recipe). Others, known as machine learning, are programmed to learn on their own (trial and error).

The risk lies in the quality of the data used to train the algorithms — what was dubbed the “garbage-in-garbage-out” problem in a study by the Georgetown Law Center on Privacy and Technology. If there are hidden biases in the training data — e.g., it contains mostly Caucasian faces — the algorithm may misread Asian or Black faces and generate “false positives,” a well-documented shortcoming when the application involves identifying a suspect in a crime.

Similarly, if a poor or racialized area is subject to over-policing, there will likely be more crime reports, meaning the data from that neighbourhood is likely to reveal higher-than-average rates of certain types of criminal activity — a data point that would then justify yet more over-policing and racial profiling. Crimes that go under-reported, meanwhile, barely register in the data these algorithms learn from.
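The feedback loop described above is easy to see in a toy simulation. The sketch below uses invented numbers: two neighbourhoods with identical underlying offence rates, one of which starts out with three times the patrols. Because the allocation rule only ever sees what patrols record, the initial disparity never corrects itself.

```python
# Toy illustration of the over-policing feedback loop; all numbers invented.
true_offences = {"A": 100, "B": 100}   # identical underlying offending
patrols = {"A": 10, "B": 30}           # but B starts out over-policed
DETECTION_PER_PATROL = 0.01            # share of offences each patrol records

for week in range(3):
    recorded = {
        n: true_offences[n] * min(1.0, patrols[n] * DETECTION_PER_PATROL)
        for n in patrols
    }
    total = sum(recorded.values())
    # Next week's 40 patrols are allocated in proportion to recorded crime.
    patrols = {n: round(40 * recorded[n] / total) for n in recorded}
    print(f"week {week}: recorded={recorded} -> next week's patrols={patrols}")

# The data "show" three times as much crime in B, so the rule keeps sending
# three times as many patrols there, even though true rates are equal.
```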

Other predictive and AI-based law enforcement technologies, such as “social network analysis” — the mapping of an individual’s web of personal relationships, gleaned, for example, from social media platforms or from cross-referencing lists of gang members — promised to generate predictions that individuals known to police were at risk of becoming embroiled in violent crimes.

This type of sleuthing seemed to hold out some promise. In one study, criminologists at Cardiff University found that “disorder-related” posts on Twitter reflected crime incidents in metropolitan London — a finding that suggests how big data can help map and anticipate criminal activity. In practice, however, such surveillance tactics can prove explosive. This happened in 2016, when U.S. civil liberties groups revealed documents showing that Geofeedia, a location-based data company, had contracts with numerous police departments to provide analytics based on social media posts to Twitter, Facebook, Instagram, etc. Among the individuals targeted by the company’s data: protestors and activists. Chastened, the social media firms rapidly blocked Geofeedia’s access.

In 2013, the Chicago Police Department began experimenting with predictive models that assigned risk scores for individuals based on criminal records or their connections to people involved in violent crime. By 2019, the CPD had assigned risk scores to almost 400,000 people, and claimed to be using the information to surveil and target “at-risk” individuals (including potential victims) or connect them to social services, according to a January 2020 report by Chicago’s inspector general.

These tools can draw incorrect or biased inferences in the same way that overreliance on police checks in racialized neighbourhoods results in what could be described as guilt by address. The Citizen Lab study noted that the Ontario Human Rights Commission identified social network analysis as a potential cause of racial profiling. In the case of the CPD’s predictive risk model, the system was discontinued in 2020 after media reports and internal investigations showed that people were added to the list based solely on arrest records, meaning they might not even have been charged, much less convicted of a crime.

Early applications of facial recognition software included passport security systems or searches of mug shot databases. But in 2011, the Insurance Corporation of B.C. offered Vancouver police the use of facial recognition software to match photos of Stanley Cup rioters with driver’s licence images — a move that prompted a stern warning from the province’s privacy commissioner. In 2019, the Washington Post revealed that FBI and Immigration and Customs Enforcement (ICE) investigators regarded state databases of digitized driver’s licences as a “gold mine for facial recognition photos” which had been scanned without consent.

In 2013, Canada’s federal privacy commissioner released a report on police use of facial recognition that anticipated the issues raised by the Clearview app in early 2020. “[S]trict controls and increased transparency are needed to ensure that the use of facial recognition conforms with our privacy laws and our common sense of what is socially acceptable.” (Canada’s data privacy laws are only now being considered for an update.)

The technology, meanwhile, continues to gallop ahead. New York civil rights lawyer Albert Cahn points to the emergence of “gait recognition” systems, which use visual analysis to identify individuals by their walk; these systems are reportedly in use in China. “You’re trying to teach machines how to identify people who walk with the same gait,” he says. “Of course, a lot of this is completely untested.”

The predictive policing story evolved somewhat differently. The methodology grew out of analysis commissioned by the Los Angeles Police Department in the early 2010s. Two data scientists, Jeff Brantingham and George Mohler, used mathematical modelling to forecast copycat crimes based on data about the location and frequency of previous burglaries in three L.A. neighbourhoods. They published their results and soon set up PredPol to commercialize the technology. Media attention soon followed, as news stories played up the seemingly miraculous power of a Minority Report-like system that could do a decent job anticipating incidents of property crime.

Operationally, police forces used PredPol’s system by dividing precincts into 150-square-metre “cells” that officers were instructed to patrol more intensively during periods when PredPol’s algorithm forecast criminal activity. In the post-2009 credit-crisis period, the technology seemed to promise that cash-strapped American municipalities would get more bang for their policing buck.

Other firms, from startups to multinationals like IBM, entered the market with innovations, for example, incorporating other types of data, such as socio-economic data or geographical features, from parks and picnic tables to schools and bars, that may be correlated to elevated incidents of certain types of crime. The reported crime data is routinely updated so the algorithm remains current.

Police departments across the U.S. and Europe have invested in various predictive policing tools, as have several in Canada, including Vancouver, Edmonton and Saskatoon. Whether they have made a difference is an open question. As with several other studies, a 2017 review by analysts with the Institute for International Research on Criminal Policy, at Ghent University in Belgium, found inconclusive results: some places showed improved results compared to more conventional policing, while in other cities, the use of predictive algorithms led to reduced policing costs, but little measurable difference in outcomes.

Revealingly, the city where predictive policing really took hold, Los Angeles, has rolled back police use of these techniques. Last spring, the LAPD tore up its contract with PredPol in the wake of mounting community and legal pressure from the Stop LAPD Spying Coalition, which found that individuals who posed no real threat, mostly Black or Latino, were ending up on police watch lists because of flaws in the way the system assigned risk scores.

“Algorithms have no place in policing,” Coalition founder Hamid Khan said in an interview this summer with MIT Technology Review. “I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.” (Similar advocacy campaigns, including proposed legislation governing surveillance technology and gang databases, have been proposed for New York City.)

There has been one other interesting consequence: police resistance. B.C.-born sociologist Sarah Brayne, an assistant professor at the University of Texas (Austin), spent two-and-a-half years embedded with the LAPD, exploring the reaction of law enforcement officials to algorithmic policing techniques by conducting ride-alongs as well as interviews with dozens of veteran cops and data analysts. In results published last year, Brayne and collaborator Angèle Christin observed “strong processes of resistance fuelled by fear of professional devaluation and threats of performance tracking.”

Before shifts, officers were told which grids to drive through, when and how frequently, and the locations of their vehicles were tracked by on-board GPS devices to ensure compliance. But Brayne found that some would turn off the tracking device, which they regarded with suspicion. Others just didn’t buy what the technology was selling. “Patrol officers frequently asserted that they did not need an algorithm to tell them where crime occurs,” she noted.

In an interview, Brayne said that police departments increasingly see predictive technology as part of the tool kit, despite questions about effectiveness or other concerns, like racial profiling. “Once a particular technology is created,” she observed, “there’s a tendency to use it.” But Brayne added one other prediction, which has to do with the future of algorithmic policing in the post-George Floyd era — “an intersection,” as she says, “between squeezed budgets and this movement around defunding the police.”

The widening use of big data policing and digital surveillance poses, according to Citizen Lab’s analysis as well as critiques from U.S. and U.K. legal scholars, a range of civil rights questions, from privacy and freedom from discrimination to due process. Yet governments have been slow to acknowledge these consequences. Big Brother Watch, a British civil liberties group, notes that in the U.K., the national government’s stance has been that police decisions about the deployment of facial recognition systems are “operational.”

At the core of the debate is a basic public policy principle: transparency. Do individuals have the tools to understand and debate the workings of a suite of technologies that can have tremendous influence over their lives and freedoms? It’s what Andrew Ferguson and others refer to as the “black box” problem. The algorithms, designed by software engineers, rely on certain assumptions, methodologies and variables, none of which are visible, much less legible to anyone without advanced technical know-how. Many, moreover, are proprietary because they are sold to local governments by private companies. The upshot is that these kinds of algorithms have not been regulated by governments despite their use by public agencies.

New York City Council moved to tackle this question in May 2018 by establishing an “automated decision systems” task force to examine how municipal agencies and departments use AI and machine learning algorithms. The task force was to devise procedures for identifying hidden biases and to disclose how the algorithms generate choices so the public can assess their impact. The group included officials from the administration of Mayor Bill de Blasio, tech experts and civil liberties advocates. It held public meetings throughout 2019 and released a report that November. NYC was, by most accounts, the first city to have tackled this question, and the initiative was, initially, well received.

Going in, Cahn, the New York City civil rights lawyer, saw the task force as “a unique opportunity to examine how AI was operating in city government.” But he describes the outcome as “disheartening.” “There was an unwillingness to challenge the NYPD on its use of (automated decision systems).” Some other participants agreed, describing the effort as a waste.

If institutional obstacles thwarted an effort in a government the size of the City of New York, what does better and more effective oversight look like? A couple of answers have emerged.

In his book on big data policing, Andrew Ferguson writes that local governments should start at first principles, and urges police forces and civilian oversight bodies to address five fundamental questions, ideally in a public forum:

  • Can you identify the risks that your big data technology is trying to address?
  • Can you defend the inputs into the system (accuracy of data, soundness of methodology)?
  • Can you defend the outputs of the system (how they will impact policing practice and community relationships)?
  • Can you test the technology (offering accountability and some measure of transparency)?
  • Is police use of the technology respectful of the autonomy of the people it will impact?

These “foundational” questions, he writes, “must be satisfactorily answered before green-lighting any purchase or adopting a big data policing strategy.”

In addition to calling for a moratorium and a judicial inquiry into the uses of predictive policing and facial recognition systems, the authors of the Citizen Lab report made several other recommendations, including: the need for full transparency; provincial policies governing the procurement of such systems; limits on the use of ADS in public spaces; and the establishment of oversight bodies that include members of historically marginalized or victimized groups.

Interestingly, the federal government has made advances in this arena, which University of Ottawa law professor and privacy expert Teresa Scassa describes as “really interesting.”

In 2019, the Treasury Board Secretariat issued the “Directive on Automated Decision-Making,” which came into effect in April 2020 and requires federal departments and agencies, except those involved in national security, to conduct “algorithmic impact assessments” (AIA) to evaluate unintended bias before procuring or approving the use of technologies that rely on AI or machine learning. The policy requires the government to publish AIAs, release software code developed internally and continually monitor the performance of these systems. In the case of proprietary algorithms developed by private suppliers, federal officials have extensive rights to access and test the software.

In a forthcoming paper, Scassa points out that the directive includes due process rules and looks for evidence of whether systemic bias has become embedded in these technologies, which can happen if the algorithms are trained on skewed data. She also observes that not all algorithm-driven systems generate life-altering decisions, e.g., chatbots that are now commonly used in online application processes. But where they are deployed in “high impact” contexts such as policing, e.g., with algorithms that aim to identify individuals caught on surveillance videos, the policy requires “a human in the loop.”

The directive, says Scassa, “is getting interest elsewhere,” including the U.S. Ellen Goodman, at Rutgers, is hopeful this approach will gain traction with the Biden administration. In Canada, where provincial governments oversee law enforcement, Ottawa’s low-key but seemingly thorough regulation points to a way for citizens to shine a flashlight into the black box that is big data policing.

Source: From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man

A very concrete case of facial recognition getting it wrong and police incompetence or indifference in not examining evidence or circumstances:

Police in Detroit were trying to figure out who stole five watches from a Shinola retail store. Authorities say the thief took off with an estimated $3,800 worth of merchandise.

Investigators pulled a security video that had recorded the incident. Detectives zoomed in on the grainy footage and ran the person who appeared to be the suspect through facial recognition software.

A hit came back: Robert Julian-Borchak Williams, 42, of Farmington Hills, Mich., about 25 miles northwest of Detroit.

In January, police pulled up to Williams’ home and arrested him while he stood on his front lawn in front of his wife and two daughters, ages 2 and 5, who cried as they watched their father being placed in the patrol car.

His wife, Melissa Williams, wanted to know where police were taking her husband.

” ‘Google it,’ ” she recalls an officer telling her.

Robert Williams was led to an interrogation room, and police put three photos in front of him: Two photos taken from the surveillance camera in the store and a photo of Williams’ state-issued driver’s license.

“When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all,” Williams said in an interview with NPR.

“[The detective] flips the third page over and says, ‘So I guess the computer got it wrong, too.’ And I said, ‘Well, that’s me,’ pointing at a picture of my previous driver’s license,” Williams said of the interrogation with detectives. ” ‘But that guy’s not me,’ ” he said, referring to the other photographs.

“I picked it up and held it to my face and told him, ‘I hope you don’t think all Black people look alike,’ ” Williams said.

Williams was detained for 30 hours and then released on bail until a court hearing on the case, his lawyers say.

At the hearing, a Wayne County prosecutor announced that the charges against Williams were being dropped due to insufficient evidence.

Civil rights experts say Williams is the first documented example in the U.S. of someone being wrongfully arrested based on a false hit produced by facial recognition technology.

Lawyer: Artificial intelligence ‘framed and informed everything’

What makes Williams’ case extraordinary is that police admitted that facial recognition technology, conducted by Michigan State Police in a crime lab at the request of the Detroit Police Department, prompted the arrest, according to charging documents reviewed by NPR.

The pursuit of Williams as a possible suspect came despite repeated claims by him and his lawyers that the match generated by artificial intelligence was faulty.

The alleged suspect in the security camera image was wearing a red St. Louis Cardinals hat. Williams, a Detroit native, said he would under no circumstances be wearing that hat.

“They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day,” said lawyer Phil Mayor with the ACLU of Michigan.

On Wednesday, the ACLU of Michigan filed a complaint against the Detroit Police Department asking that police stop using the software in investigations.

In a statement to NPR, the Detroit Police Department said that, after the Williams case, it enacted new rules. Now, only still photos, not security footage, can be used for facial recognition. And it is now used only in the case of violent crimes.

“Facial recognition software is an investigative tool that is used to generate leads only. Additional investigative work, corroborating evidence and probable cause are required before an arrest can be made,” Detroit Police Department Sgt. Nicole Kirkwood said in a statement.

In Williams’ case, police had asked the store security guard, who had not witnessed the robbery, to pick the suspect out of a photo lineup based on the footage, and the security guard selected Williams.

Victoria Burton-Harris, Williams’ lawyer, said in an interview that she is skeptical that investigators used the facial recognition software as only one of several possible leads.

“When that technology picked my client’s face out, from there, it framed and informed everything that officers did subsequently,” Burton-Harris said.

Academic and government studies have demonstrated that facial recognition systems misidentify people of color more often than white people.

One of the leading studies on bias in face recognition was conducted by Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League.

“This egregious mismatch shows just one of the dangers of facial recognition technology which has already been shown in study after study to fail people of color, people with dark skin more than white counterparts generally speaking,” Buolamwini said.

“The threats to civil liberties posed by mass surveillance are too high a price,” she said. “You cannot erase the experience of 30 hours detained, the memories of children seeing their father arrested, or the stigma of being labeled criminal.”

Maria Miller, a spokeswoman for the prosecutor’s office, said the case was dismissed over insufficient evidence, including that the charges were filed without the support of any live witnesses.

Wayne County Prosecutor Kym Worthy said any case sent to her office that uses facial recognition technology cannot move forward without other supporting evidence.

“This case should not have been issued based on the DPD investigation, and for that we apologize,” Worthy said in a statement to NPR. “Thankfully, it was dismissed on our office’s own motion. This does not in any way make up for the hours that Mr. Williams spent in jail.”

Worthy said Williams is able to have the case expunged from his record.

Williams: “Let’s say that this case wasn’t retail fraud. What if it’s rape or murder?”

According to Georgetown Law’s Center on Privacy and Technology, at least a quarter of the nation’s law enforcement agencies have access to face recognition tools.

“Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them,” said Jameson Spivack, a researcher at the center.

While Amazon, Microsoft and IBM have announced a halt to sales of face recognition technology to law enforcement, Spivack said that will have little effect, since most major facial recognition software contracts with police are with smaller, more specialized companies, like South Carolina-based DataWorks Plus, which is the company that supplied the Detroit Police Department with its face-scanning software.

The company did not respond to an interview request.

DataWorks Plus has supplied the technology to government agencies in Santa Barbara, Calif., Chicago and Philadelphia.

Facial recognition technology is used by consumers every day to unlock their smartphones or to tag friends on social media. Some airports use the technology to scan passengers before they board flights.

Its deployment by governments, though, has drawn concern from privacy advocates and experts who study the machine learning tool and have highlighted its flaws.

“Some departments of motor vehicles will use facial recognition to detect license fraud, identity theft, but the most common use is law enforcement, whether it’s state, local or federal law enforcement,” Spivack said.

The government use of facial recognition technology has been banned in half a dozen cities.

In Michigan, Williams said he hopes his case is a wake-up call to lawmakers.

“Let’s say that this case wasn’t retail fraud. What if it’s rape or murder? Would I have gotten out of jail on a personal bond, or would I have ever come home?” Williams said.

Williams and his wife, Melissa, worry about the long-term effects the arrest will have on their two young daughters.

“Seeing their dad get arrested, that was their first interaction with the police. So it’s definitely going to shape how they perceive law enforcement,” Melissa Williams said.

In his complaint, Williams and his lawyers say if the police department won’t ban the technology outright, then at least his photo should be removed from the database, so this doesn’t happen again.

“If someone wants to pull my name and look me up,” Williams said, “who wants to be seen as a thief?”

Source: ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man

Concerns raised after facial recognition software found to have racial bias

Legitimate concerns:

In 2015, two undercover police officers in Jacksonville, Fla., bought $50 worth of crack cocaine from a man on the street. One of the cops surreptitiously snapped a cellphone photo of the man and sent it to a crime analyst, who ran the photo through facial recognition software.

The facial recognition algorithm produced several matches, and the analyst chose the first one: a mug shot of a man named Willie Allen Lynch. Lynch was convicted of selling drugs and sentenced to eight years in prison.

Civil liberties lawyers jumped on the case, flagging a litany of concerns to fight the conviction. Matches of other possible perpetrators generated by the tool were never disclosed to Lynch, hampering his ability to argue for his innocence. The use of the technology statewide had been poorly regulated and shrouded in secrecy.

But also, Willie Allen Lynch is a Black man.

Multiple studies have shown facial recognition technology makes more errors on Black faces. For mug shots in particular, researchers have found that algorithms generate the highest rates of false matches for African American, Asian and Indigenous people.

After more than two dozen police services, government agencies and private businesses across Canada recently admitted to testing the divisive facial recognition app Clearview AI, experts and advocates say it’s vital that lawmakers and politicians understand how the emerging technology could impact racialized citizens.

“Technologies have their bias as well,” said Nasma Ahmed, director of Toronto-based non-profit Digital Justice Lab, who is advocating for a pause on the use of facial recognition technology until proper oversight is established.

“If they don’t wake up, they’re just going to be on the wrong side of trying to fight this battle … because they didn’t realize how significant the threat or the danger of this technology is,” says Toronto-born Toni Morgan, managing director of the Center for Law, Innovation and Creativity at Northeastern University School of Law in Boston.

“It feels like Toronto is a little bit behind the curve in understanding the implications of what it means for law enforcement to access this technology.”

Last month, the Star revealed that officers at more than 20 police forces across Canada have used Clearview AI, a facial recognition tool that has been described as “dystopian” and “reckless” for its broad search powers. It relies on what the U.S. company has said is a database of three billion photos scraped from the web, including social media.

Almost all police forces that confirmed use of the tool said officers had accessed a free trial version without the knowledge or authorization of police leadership and have been told to stop; the RCMP is the only police service that has paid to access the technology.

Multiple forces say the tool was used by investigators within child exploitation units, but it was also used to probe lesser crimes, including in an auto theft investigation and by a Rexall employee seeking to stop shoplifters.

While a handful of American cities and states have moved to limit or outright ban police use of facial recognition technology, the response from Canadian lawmakers has been muted.

According to client data obtained by BuzzFeed News and shared exclusively with the Star, the Toronto Police Service was the most prolific user of Clearview AI in Canada. (Clearview AI has not responded to multiple requests for comment from the Star but told BuzzFeed there are “numerous inaccuracies” in the client data information, which they allege was “illegally obtained.”)

Toronto police had run more than 3,400 searches since October, according to the BuzzFeed data.

A Toronto police spokesperson has said officers were “informally testing” the technology, but said the force could not verify the Star’s data about officers’ use or “comment on it with any certainty.” Toronto police Chief Mark Saunders directed officers to stop using the tool after he became aware they were using it, and a review is underway.

But Toronto police are still using a different facial recognition tool, one made by NEC Corp. of America and purchased in 2018. The NEC facial recognition tool searches the Toronto police database of approximately 1.5 million mug shot photos.

The National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, has been testing the accuracy of facial recognition technology since 2002. Companies that sell the tools voluntarily submit their algorithms to NIST for testing; government agencies sponsor the research to help inform policy.

In a report released in December that tested 189 algorithms from 99 developers, NIST found dramatic variations in accuracy across different demographic groups. For one type of matching, the team discovered the systems had error rates between 10 and 100 times higher for African American and Asian faces compared to images of white faces.

For the type of facial recognition matching most likely to be used by law enforcement, African American women had higher error rates.
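A note on how figures like these are produced: in one-to-one matching, the system scores the similarity of a pair of photos, and a “false match” is a different-person pair that scores above the decision threshold. The sketch below, using invented scores, computes that rate separately for each demographic group, which is essentially the disaggregation NIST-style tests report.

```python
def false_match_rate(impostor_scores, threshold):
    """Share of different-person comparisons scoring at or above the threshold."""
    if not impostor_scores:
        return 0.0
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Hypothetical similarity scores for different-person pairs, grouped by demographic
impostor_scores_by_group = {
    "group 1": [0.12, 0.30, 0.41, 0.22, 0.35],
    "group 2": [0.55, 0.72, 0.48, 0.61, 0.39],
}

THRESHOLD = 0.50   # operating point chosen by the deployer
for group, scores in impostor_scores_by_group.items():
    print(group, false_match_rate(scores, THRESHOLD))
# group 1 0.0
# group 2 0.6   <- the same threshold yields far more false matches here
```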

“Law enforcement, they probably have one of the most difficult cases. Because if they miss someone … and that person commits a crime, they’re going to look bad. If they finger the wrong person, they’re going to look bad,” said Craig Watson, manager of the group that runs NIST’s testing program.

Clearview AI has not been tested by NIST. The company has claimed its tool is “100% accurate” in a report written by an “independent review panel.” The panel said it relied on the same methodology the American Civil Liberties Union used to assess a facial recognition algorithm sold by Amazon.

The American Civil Liberties Union slammed the report, calling the claim “misleading” and the tool “dystopian.”

Clearview AI did not respond to a request for comment about its accuracy claims.

Before purchasing the NEC facial recognition technology, Toronto police conducted a privacy impact assessment. Asked if this examined potential racial bias within the NEC’s algorithms, spokesperson Meaghan Gray said in an email the contents of the report are not public.

But she said TPS “has not experienced racial or gender bias when utilizing the NEC Facial Recognition System.”

“While not a means of undisputable positive identification like fingerprint identification, this technology provides ‘potential candidates’ as investigative leads,” she said. “Consequently, one race or gender has not been disproportionally identified nor has the TPS made any false identifications.”

The revelations about Toronto police’s use of Clearview AI have coincided with the planned installation of additional CCTV cameras in communities across the city, including in the Jane Street and Finch Avenue West area. The provincially funded additional cameras come after the Toronto police board approved increasing the number placed around the city.

The combination of facial recognition technology and additional CCTV cameras in a neighbourhood home to many racialized Torontonians is a “recipe for disaster,” said Sam Tecle, a community worker with Jane and Finch’s Success Beyond Limits youth support program.

“One technology feeds the other,” Tecle said. “Together, I don’t know how that doesn’t result in surveillance — more intensified surveillance — of Black and racialized folks.”

Tecle said the plan to install more cameras was asking for a lot of trust from a community that already has a fraught relationship with the police. That’s in large part due to the legacy of carding, he said — when police stop, question and document people not suspected of a crime, a practice that disproportionately impacts Black and brown men.

“This is just a digital form of doing the same thing,” Tecle told the Star. “If we’re misrecognized and misidentified through these facial recognition algorithms, then I’m very apprehensive about them using any kind of facial recognition software.”

Others pointed out that false positives — incorrect matches — could have particularly grave consequences in the context of police use of force: Black people are “grossly over-represented” in cases where Toronto police used force, according to a 2018 report by the Ontario Human Rights Commission.

Saunders has said residents in high-crime areas have repeatedly asked for more CCTV cameras in public spaces. At last month’s Toronto police board meeting, Mayor John Tory passed a motion requiring that police engage in a public community consultation process before installing more cameras.

Gray said many residents and business owners want increased safety measures, and this feedback alongside an analysis of crime trends led the force to identify “selected areas that are most susceptible to firearm-related offences.”

“The cameras are not used for surveillance. The cameras will be used for investigation purposes, post-reported offences or incidents, to help identify potential suspects, and if needed during major events to aid in public safety,” Gray said.

Akwasi Owusu-Bempah, an assistant professor of criminology at the University of Toronto, said when cameras are placed in neighbourhoods with high proportions of racialized people, then used in tandem with facial recognition technology, “it could be problematic, because of false positives and false negatives.”

“What this gets at is the need for continued discussion, debate, and certainly oversight,” Owusu-Bempah said.

Source: Concerns raised after facial recognition software found to have racial bias

Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us….:

There has been significant coverage of bias problems in the use of machine learning to analyze people. There has also been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower form of recognition, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institution points to a projection that the US will become a majority-minority nation by 2045. That means that the non-white population will be over 50% of the population. Even before then, the growing demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know if messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. The following chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there’s the obvious need to be more inclusive in the images, in particular the people, appearing in ads, the picture is only part of the equation. The right words must also be used to interest different demographics. Of course, that a marketing team thinks it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported-on task of identifying people. While social media and surveys can catch some sentiment, analysis of facial features is far more detailed, and it is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking how they change as someone watches or reads an advertisement, can catch not only a smile but the “strength” of that smile. Other types of sentiment capture can be scaled in the same way.
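The article does not say which tools vendors use, but the pipeline it describes — find the face, isolate the mouth, score the expression frame by frame — can be roughed out with off-the-shelf components. The sketch below uses OpenCV’s bundled Haar cascades as a stand-in for a commercial emotion engine; the “smile strength” proxy (the share of face-bearing frames in which a smile is detected) is a simplification, not how any particular vendor scores it.

```python
import cv2  # pip install opencv-python

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_rate(video_path):
    """Crude 'smile strength' proxy: fraction of frames containing a face
    in which a smile is detected in the lower half of the face region."""
    cap = cv2.VideoCapture(video_path)
    face_frames = smile_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            face_frames += 1
            mouth_region = gray[y + h // 2: y + h, x: x + w]
            smiles = smile_cascade.detectMultiScale(
                mouth_region, scaleFactor=1.7, minNeighbors=20)
            if len(smiles) > 0:
                smile_frames += 1
            break  # score one face per frame
    cap.release()
    return smile_frames / face_frames if face_frames else 0.0

# e.g. smile_rate("ad_viewer_recording.mp4") -> a value between 0.0 and 1.0
```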

Then, without having to identify individual people, information about their demographics can build a picture of how sentiment varies between groups. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older black men or from East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received across the target audiences.

The use of AI to rapidly provide feedback on sentiment will help advertisers better tune messages, whether aiming at a general message that attracts an audience across the US marketing landscape, or finding appropriately focused messages to attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies better understand and improve messaging to minority communities. Collage Group recently rolled out AdRate, a process for evaluating ads that integrates AI vision to analyze viewer sentiment.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their [messaging] to the best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to better tailor messages to the diversity of the market is a clear benefit. Visual AI to enhance the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Of note. These kinds of studies are important to expose the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.
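Results like these come from a straightforward disaggregation: run the classifier over a benchmark in which every image is labelled with gender and skin type, then compute the error rate within each subgroup. A minimal sketch with hypothetical records (not the MIT data):

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_gender, predicted_gender)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),   # misclassification
    ("darker-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    # ... one tuple per test image
]

errors = defaultdict(lambda: [0, 0])   # subgroup -> [wrong, total]
for subgroup, truth, predicted in records:
    errors[subgroup][1] += 1
    if predicted != truth:
        errors[subgroup][0] += 1

for subgroup, (wrong, total) in sorted(errors.items()):
    print(f"{subgroup}: {wrong / total:.1%} error rate ({total} images)")
```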

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.

With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

The new study, scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu, is sure to inflame that argument.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny by federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.
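
Headline figures like these come from comparing error rates subgroup by subgroup rather than in aggregate. The following is a minimal sketch of that kind of disaggregated audit, with hypothetical data and column names rather than the authors' actual benchmark or code.

```python
# Sketch only: a disaggregated (per-subgroup) error audit of a gender
# classifier. The data and column names are hypothetical, not the authors'
# benchmark or code.
import pandas as pd

# One row per test image: the benchmark label and the classifier's prediction.
df = pd.DataFrame({
    "skin":      ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "gender":    ["female", "female", "male",   "female",  "male",    "male"],
    "predicted": ["male",   "female", "male",   "female",  "male",    "male"],
})

df["error"] = df["gender"] != df["predicted"]

# An aggregate accuracy figure hides subgroup gaps; report errors per subgroup.
print(df.groupby(["skin", "gender"])["error"].mean())
```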

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should be used at all in certain situations, such as job interviews, or in products like drones and police body cameras.

Some federal lawmakers are raising similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Source: Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Most engineers are white — and so are the faces they use to train software – Recode

Not terribly surprising but alarming given how much facial recognition is used these days.

While the focus of this article is on Black faces (as it is with the Implicit Association Test), the same issue likely applies to other minority groups.

Welcome any comments from those with experience with the various face recognition programs in commercial software such as Flickr, Google Photos, etc.:

Facial recognition technology is known to struggle to recognize black faces. The underlying reason for this shortcoming runs deeper than you might expect, according to researchers at MIT.

Speaking during a panel discussion on artificial intelligence at the World Economic Forum Annual Meeting this week, MIT Media Lab director Joichi Ito said it likely stems from the fact that most engineers are white.

“The way you get into computers is because your friends are into computers, which is generally white men. So, when you look at the demographic across Silicon Valley you see a lot of white men,” Ito said.

Ito relayed an anecdote about how a graduate researcher in his lab had found that commonly used libraries for facial recognition have trouble reading dark faces.

“These libraries are used in many of the products that you have, and if you’re an African-American person you get in front of it, it won’t recognize your face,” he said.

Libraries are collections of pre-written code developers can share and reuse to save time instead of writing everything from scratch.
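
One widely reused building block of this kind is OpenCV's pre-trained Haar cascade face detector. The article does not name the specific libraries Ms. Buolamwini tested, so the sketch below is only an illustration of how little of the underlying model a developer sees when reusing one: the detector ships as already-trained parameters, and the faces it was trained on fix which faces it can find.

```python
# Sketch only: reusing a pre-trained face detection library (OpenCV's bundled
# Haar cascade). The developer never sees the training data; the faces the
# model was trained on determine which faces it detects. The input file name
# is hypothetical.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")            # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")       # may be 0 for faces unlike the training set
```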

Joy Buolamwini, the graduate researcher on the project, told Recode in an email that software she used did not consistently detect her face, and that more analysis is needed to make broader claims about facial recognition technology.

“Given the wide range of skin-tone and facial features that can be considered African-American, more precise terminology and analysis is needed to determine the performance of existing facial detection systems,” she said.

“One of the risks that we have of the lack of diversity in engineers is that it’s not intuitive which questions you should be asking,” Ito said. “And even if you have design guidelines, some of this stuff is kind of a feel decision.”

“Calls for tech inclusion often miss the bias that is embedded in written code,” Buolamwini wrote in a May post on Medium.

Reused code, while convenient, is limited by the training data it uses to learn, she said. In the case of code for facial recognition, the code is limited by the faces included in the training data.

“A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set,” wrote Buolamwini.
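
One practical response is to audit a training set's composition before using it. Below is a minimal sketch under the assumption of a hypothetical manifest file that lists each image alongside annotated demographic labels; the file and column names are made up for illustration.

```python
# Sketch only: audit the demographic composition of a face training set before
# training on it. The manifest file and its column names are hypothetical.
import csv
from collections import Counter

counts = Counter()
with open("training_manifest.csv", newline="") as f:
    for row in csv.DictReader(f):      # assumed columns: image_path, skin, gender
        counts[(row["skin"], row["gender"])] += 1

total = sum(counts.values())
for group, n in sorted(counts.items()):
    # Badly under-represented groups here predict poor performance later.
    print(group, n, f"{100 * n / total:.1f}%")
```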

She wrote that to cope with limitations in one project involving facial recognition technology, she had to wear a white mask so that her face could “be detected in a variety of lighting conditions.”

“While this is a temporary solution, we can do better than asking people to change themselves to fit our code. Our task is to create code that can work for people of all types.”

Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post

The next frontier of combatting profiling:

The growing use of facial-recognition systems has led to a high-tech form of racial profiling, with African Americans more likely than others to have their images captured, analyzed and reviewed during computerized searches for crime suspects, according to a new report based on records from dozens of police departments.

The report, released Tuesday by the Center for Privacy & Technology at Georgetown University’s law school, found that half of all American adults have their images stored in at least one facial-recognition database that police can search, typically with few restrictions.

The steady expansion of these systems has led to a disproportionate racial impact because African Americans are more likely to be arrested and have mug shots taken, one of the main ways that images end up in police databases. The report also found that criminal databases are rarely “scrubbed” to remove the images of innocent people, nor are facial-recognition systems routinely tested for accuracy, even though some struggle to distinguish among darker-skinned faces.

The combination of these factors means that African Americans are more likely to be singled out as possible suspects in crimes — including ones they did not commit, the report says.

“This is a serious problem, and no one is working to fix it,” said Alvaro M. Bedoya, executive director of the Georgetown Law center that produced the report on facial-recognition technology. “Police departments are talking about it as if it’s race-blind, and it’s just not true.”

The 150-page report, called “The Perpetual Line-Up,” found a rapidly growing patchwork of facial-recognition systems at the federal, state and local level with little regulation and few legal standards. Some databases include mug shots, others driver’s-license photos. Some states, such as Maryland and Pennsylvania, use both as they analyze crime-scene images in search of potential suspects.

At least 117 million Americans have images of their faces in one or more police databases, meaning their resemblance to images taken from crime scenes can become the basis for follow-up by investigators. The FBI ran a pilot program this year in which it could search the State Department’s passport and visa databases for leads in criminal cases. Overall, the Government Accountability Office reported in May, the FBI has had access to 412 million facial images for searches; the faces of some Americans appear several times in these databases.

Source: Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post