From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Good long read and overview of the major issues:

In mid-2019, an investigative journalism/tech non-profit called MuckRock and Open the Government (OTG), a non-partisan advocacy group, began submitting freedom of information requests to law enforcement agencies across the United States. The goal: to smoke out details about the use of an app rumoured to offer unprecedented facial recognition capabilities to anyone with a smartphone.

Co-founded by Michael Morisy, a former Boston Globe editor, MuckRock specializes in FOIs and its site has grown into a publicly accessible repository of government documents obtained under access to information laws.

As responses trickled in, it became clear that the MuckRock/OTG team had made a discovery about a tech company called Clearview AI. Based on documents obtained from Atlanta, OTG researcher Freddy Martinez began filing more requests, and discovered that as many as 200 police departments across the U.S. were using Clearview’s app, which compares images taken by smartphone cameras to a sprawling database of 3 billion open-source photographs of faces linked to various forms of personal information (e.g., Facebook profiles). It was, in effect, a point-click-and-identify system that radically transformed the work of police officers.

The documents soon found their way to a New York Times reporter named Kashmir Hill, who, in January 2020, published a deeply reported feature about Clearview, a tiny and secretive start-up with backing from Peter Thiel, the Silicon Valley billionaire behind PayPal and Palantir Technologies. Among the story’s revelations, Hill disclosed that tech giants like Google and Apple were well aware that such an app could be developed using artificial intelligence algorithms feeding off the vast storehouse of facial images uploaded to social media platforms and other publicly accessible databases. But they had opted against designing such a disruptive and easily disseminated surveillance tool.

The Times story set off what could best be described as an international chain reaction, with widespread media coverage about the use of Clearview’s app, followed by a wave of announcements from various governments and police agencies about how Clearview’s app would be banned. The reaction played out against a backdrop of news reports about China’s nearly ubiquitous facial recognition-based surveillance networks.

Canada was not exempt. To Surveil and Predict, a detailed examination of “algorithmic policing” published this past fall by the University of Toronto’s Citizen Lab, noted that officers with law enforcement agencies in Calgary, Edmonton and across Greater Toronto had tested Clearview’s app, sometimes without the knowledge of their superiors. Investigative reporting by the Toronto Star and BuzzFeed News found numerous examples of municipal law enforcement agencies, including the Toronto Police Service, using the app in crime investigations. The RCMP denied using Clearview even after it had entered into a contract with the company — a detail exposed by Vancouver’s The Tyee.

With federal and provincial privacy commissioners ordering investigations, Clearview and the RCMP subsequently severed ties, although Citizen Lab noted that many other tech companies still sell facial recognition systems in Canada. “I think it is very questionable whether [Clearview] would conform with Canadian law,” Michael McEvoy, British Columbia’s privacy commissioner, told the Star in February.

There was fallout elsewhere. Four U.S. cities banned police use of facial recognition outright, the Citizen Lab report noted. The European Union in February proposed a ban on facial recognition in public spaces but later hedged. A U.K. court in April ruled that police facial recognition systems were “unlawful,” marking a significant reversal in surveillance-minded Britain. And the European Data Protection Board, an EU agency, informed Commission members in June that Clearview’s technology violates Pan-European law enforcement policies. As Rutgers University law professor and smart city scholar Ellen Goodman notes, “There’s been a huge blowback” against the use of data-intensive policing technologies.

There’s nothing new about surveillance or police investigative practices that draw on highly diverse forms of electronic information, from wire taps to bank records and images captured by private security cameras. Yet during the past decade or so, dramatic advances in big data analytics, biometrics and AI, stoked by venture capital and law enforcement agencies eager to invest in new technology, have given rise to a fast-growing data policing industry. As the Clearview story showed, regulation and democratic oversight have lagged far behind the technology.

U.S. startups like PredPol and HunchLab, now owned by ShotSpotter, have designed so-called “predictive policing” algorithms that use law enforcement records and other geographical data (e.g., locations of schools) to make statistical guesses about the times and locations of future property crimes. Palantir’s law-enforcement service aggregates and then mines huge data sets consisting of emails, court documents, evidence repositories, gang member databases, automated licence plate readers, social media, etc., to find correlations or patterns that police can use to investigate suspects.

Yet as the Clearview fallout indicated, big data policing is rife with technical, ethical and political landmines, according to Andrew Ferguson, a University of the District of Columbia law professor. As he explains in his 2017 book, The Rise of Big Data Policing, analysts have identified an impressive list: biased, incomplete or inaccurate data, opaque technology, erroneous predictions, lack of governance, public suspicions about surveillance and over-policing, conflicts over access to proprietary algorithms, unauthorized use of data and the muddied incentives of private firms selling law enforcement software.

At least one major study found that some police officers were highly skeptical of predictive policing algorithms. Other critics point out that by deploying smart city sensors or other data-enabled systems, like transit smart cards, local governments may be inadvertently providing the police with new intelligence sources. Metrolinx, for example, has released Presto card user information to police while London’s Metropolitan Police has made thousands of requests for Oyster card data to track criminals, according to The Guardian. “Any time you have a microphone, camera or a live-feed, these [become] surveillance devices with the simple addition of a court order,” says New York civil rights lawyer Albert Cahn, executive director of the Surveillance Technology Oversight Project (STOP).

The authors of the Citizen Lab study, lawyers Kate Robertson, Cynthia Khoo and Yolanda Song, argue that Canadian governments need to impose a moratorium on the deployment of algorithmic policing technology until the public policy and legal frameworks can catch up.

Data policing was born in New York City in the early 1990s when then-police Commissioner William Bratton launched “Compstat,” a computer system that compiled up-to-date crime information and then visualized the findings in heat maps. These allowed unit commanders to deploy officers to the neighbourhoods most likely to be experiencing crime problems.
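To make the idea concrete, here is a minimal sketch of the heat-map half of a Compstat-style system: bin geocoded incident reports into a grid and plot the counts per cell. The coordinates are synthetic and the code is an illustration of the concept only, not Compstat’s actual software.

```python
# Minimal sketch of a Compstat-style crime heat map (illustrative only;
# the incident coordinates below are synthetic, not real crime data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic geocoded incident reports (longitude, latitude) clustered around two hotspots.
incidents = np.vstack([
    rng.normal(loc=(-73.99, 40.75), scale=0.010, size=(300, 2)),
    rng.normal(loc=(-73.94, 40.80), scale=0.008, size=(150, 2)),
])

# Bin incidents into a coarse grid and plot counts per cell.
plt.hist2d(incidents[:, 0], incidents[:, 1], bins=25, cmap="hot")
plt.colorbar(label="reported incidents per cell")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Synthetic incident heat map (Compstat-style visualization)")
plt.show()
```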

Originally conceived as a management tool that would push a demoralized police force to make better use of limited resources, Compstat is credited by some as contributing to the marked reduction in crime rates in the Big Apple, although many other big cities experienced similar drops through the 1990s and early 2000s.

The 9/11 terrorist attacks sparked enormous investments in security technology. The past two decades have seen the emergence of a multi-billion-dollar industry dedicated to civilian security technology, everything from large-scale deployments of CCTVs and cybersecurity to the development of highly sensitive biometric devices — fingerprint readers, iris scanners, etc. — designed to bulk up the security around factories, infrastructure and government buildings.

Predictive policing and facial recognition technologies evolved on parallel tracks, both relying on increasingly sophisticated analytics techniques, artificial intelligence algorithms and ever deeper pools of digital data.

The core idea is that the algorithms — essentially formulas, such as decision trees, that generate predictions — are “trained” on large tranches of data so they become increasingly accurate, for example, at anticipating the likely locations of future property crimes or at matching a face captured in a digital image from a CCTV camera to one in a large database of headshots. Some algorithms are designed to follow a set of rules with variables (akin to following a recipe). Others, known as machine learning algorithms, are programmed to learn on their own through trial and error.
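As a rough illustration of what “training” means here, the sketch below fits a decision-tree classifier on entirely synthetic records (hour of day, day of week, distance to the nearest bar) to predict whether a property crime was reported. It assumes scikit-learn is available and bears no resemblance to any vendor’s actual model.

```python
# Toy example of "training" a decision-tree classifier on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.integers(0, 24, n),   # hour of day
    rng.integers(0, 7, n),    # day of week
    rng.uniform(0, 5, n),     # distance to nearest bar, km (hypothetical feature)
])
# Toy "historical record": crime reported late at night near a bar, plus random noise.
y = ((X[:, 0] >= 22) & (X[:, 2] < 1.0)) | (rng.uniform(size=n) < 0.05)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The more (and better) historical data such a model sees, the sharper its guesses become, which is exactly why the quality of that data matters so much.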

The risk lies in the quality of the data used to train the algorithms — what was dubbed the “garbage-in-garbage-out” problem in a study by the Georgetown Law Center on Privacy and Technology. If there are hidden biases in the training data — e.g., it contains mostly Caucasian faces — the algorithm may misread Asian or Black faces and generate “false positives,” a well-documented shortcoming if the application involves identifying a suspect in a crime.
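A basic audit for this problem is to compare false-positive rates across demographic groups. The sketch below does that on synthetic data, with the error rate simply assumed to be worse for the under-represented group; the point is the shape of the check, not the numbers.

```python
# Illustrative audit for the "garbage-in-garbage-out" problem: compare false-positive
# rates across groups. The data and error rates here are entirely synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.8, 0.2]),  # imbalanced training pool
    "actual_match": rng.uniform(size=n) < 0.02,              # true identity matches are rare
})
# Hypothetical model that errs more often on the under-represented group B.
error_rate = np.where(df["group"] == "A", 0.01, 0.08)
df["predicted_match"] = df["actual_match"] ^ (rng.uniform(size=n) < error_rate)

false_positive_rate = (
    df[~df["actual_match"]]          # people who are not actually a match
    .groupby("group")["predicted_match"]
    .mean()                          # share wrongly flagged, per group
)
print(false_positive_rate)  # group B's rate comes out several times higher than group A's
```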

Similarly, if a poor or racialized area is subject to over-policing, there will likely be more crime reports, meaning the data from that neighbourhood is likely to reveal higher-than-average rates of certain types of criminal activity. That pattern, in turn, can be used to justify still more over-policing and racial profiling. Some crimes, meanwhile, are under-reported and so are barely reflected in the data these algorithms learn from.
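The feedback loop is easy to see in a toy simulation: if next year’s patrols are allocated in proportion to this year’s reports, and reports depend on how many officers are present to record incidents, an initial disparity never corrects itself. All numbers below are hypothetical.

```python
# Toy simulation of the feedback loop described above. Entirely hypothetical numbers.
true_crime_rate = {"north": 0.05, "south": 0.05}   # identical underlying crime
patrol_share = {"north": 0.7, "south": 0.3}        # north starts out over-policed

for year in range(5):
    # Reported crime ~ actual crime that officers are present to observe and record.
    reports = {n: true_crime_rate[n] * patrol_share[n] for n in patrol_share}
    total = sum(reports.values())
    # Next year's patrols are allocated in proportion to this year's reports.
    patrol_share = {n: reports[n] / total for n in reports}
    print(year, patrol_share)

# The 70/30 split never corrects itself, even though the two neighbourhoods have
# identical true crime rates: the data appears to "confirm" the original bias.
```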

Other predictive and AI-based law enforcement technologies, such as “social network analysis” — the mapping of an individual’s web of personal relationships, gleaned, for example, from social media platforms or from cross-referencing lists of gang members — promised to generate predictions that individuals known to police were at risk of becoming embroiled in violent crimes.

This type of sleuthing seemed to hold out some promise. In one study, criminologists at Cardiff University found that “disorder-related” posts on Twitter reflected crime incidents in metropolitan London — a finding that suggests how big data can help map and anticipate criminal activity. In practice, however, such surveillance tactics can prove explosive. This happened in 2016, when U.S. civil liberties groups revealed documents showing that Geofeedia, a location-based data company, had contracts with numerous police departments to provide analytics based on social media posts from Twitter, Facebook, Instagram, etc. Among the individuals targeted by the company’s data: protestors and activists. Chastened, the social media firms rapidly blocked Geofeedia’s access.

In 2013, the Chicago Police Department began experimenting with predictive models that assigned risk scores for individuals based on criminal records or their connections to people involved in violent crime. By 2019, the CPD had assigned risk scores to almost 400,000 people, and claimed to be using the information to surveil and target “at-risk” individuals (including potential victims) or connect them to social services, according to a January 2020 report by Chicago’s inspector general.

These tools can draw incorrect or biased inferences in the same way that overreliance on police checks in racialized neighbourhoods results in what could be described as guilt by address. The Citizen Lab study noted that the Ontario Human Rights Commission identified social network analysis as a potential cause of racial profiling. In the case of the CPD’s predictive risk model, the system was discontinued in 2020 after media reports and internal investigations showed that people were added to the list based solely on arrest records, meaning they might not even have been charged, much less convicted of a crime.

Early applications of facial recognition software included passport security systems or searches of mug shot databases. But in 2011, the Insurance Corporation of B.C. offered Vancouver police the use of facial recognition software to match photos of Stanley Cup rioters with driver’s licence images — a move that prompted a stern warning from the province’s privacy commissioner. In 2019, the Washington Post revealed that FBI and Immigration and Customs Enforcement (ICE) investigators regarded state databases of digitized driver’s licences as a “gold mine for facial recognition photos” which had been scanned without consent.

In 2013, Canada’s federal privacy commissioner released a report on police use of facial recognition that anticipated the issues raised by the Clearview app in early 2020. “[S]trict controls and increased transparency are needed to ensure that the use of facial recognition conforms with our privacy laws and our common sense of what is socially acceptable.” (Canada’s data privacy laws are only now being considered for an update.)

The technology, meanwhile, continues to gallop ahead. New York civil rights lawyer Albert Cahn points to the emergence of “gait recognition” systems, which use visual analysis to identify individuals by their walk; these systems are reportedly in use in China. “You’re trying to teach machines how to identify people who walk with the same gait,” he says. “Of course, a lot of this is completely untested.”

The predictive policing story evolved somewhat differently. The methodology grew out of analysis commissioned by the Los Angeles Police Department in the early 2010s. Two data scientists, Jeff Brantingham and George Mohler, used mathematical modelling to forecast copycat crimes based on data about the location and frequency of previous burglaries in three L.A. neighbourhoods. They published their results and soon set up PredPol to commercialize the technology. Media attention soon followed, as news stories played up the seemingly miraculous power of a Minority Report-like system that could do a decent job anticipating incidents of property crime.

Operationally, police forces used PredPol’s system by dividing precincts into 150-metre-square “cells” that police officers were instructed to patrol more intensively during periods when PredPol’s algorithm forecast criminal activity. In the post-2009 credit crisis period, the technology seemed to promise that cash-strapped American municipalities would get more bang for their policing buck.
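A stripped-down version of that grid workflow might look like the following sketch, which buckets recent incident reports into cells of roughly that size and flags the busiest cells for extra patrol. It is not PredPol’s proprietary algorithm, and the coordinates are made up.

```python
# Rough sketch of a grid-based patrol workflow (not PredPol's actual algorithm):
# bucket recent incidents into ~150 m cells and rank cells by recent counts.
from collections import Counter

CELL = 150.0  # cell edge length in metres

def cell_id(x_m, y_m):
    """Map a point (in metres on a local grid) to its 150 m x 150 m cell."""
    return (int(x_m // CELL), int(y_m // CELL))

# Hypothetical recent incident locations in local metric coordinates.
incidents = [(120, 80), (130, 95), (460, 460), (470, 455), (475, 465), (900, 100)]

counts = Counter(cell_id(x, y) for x, y in incidents)
hot_cells = counts.most_common(2)  # cells to patrol more intensively this shift
print(hot_cells)  # [((3, 3), 3), ((0, 0), 2)]
```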

Other firms, from startups to multinationals like IBM, entered the market with their own innovations, for example incorporating other types of data, such as socio-economic indicators or geographical features (from parks and picnic tables to schools and bars) that may be correlated with elevated incidents of certain types of crime. The reported crime data is routinely updated so the algorithm remains current.

Police departments across the U.S. and Europe have invested in various predictive policing tools, as have several in Canada, including Vancouver, Edmonton and Saskatoon. Whether they have made a difference is an open question. As with several other studies, a 2017 review by analysts with the Institute for International Research on Criminal Policy at Ghent University in Belgium found inconclusive results: some places showed improvements compared with more conventional policing, while in other cities the use of predictive algorithms reduced policing costs but made little measurable difference in outcomes.

Revealingly, the city where predictive policing really took hold, Los Angeles, has rolled back police use of these techniques. Last spring, the LAPD tore up its contract with PredPol in the wake of mounting community and legal pressure from the Stop LAPD Spying Coalition, which found that individuals who posed no real threat, mostly Black or Latino, were ending up on police watch lists because of flaws in the way the system assigned risk scores.

“Algorithms have no place in policing,” Coalition founder Hamid Khan said in an interview this summer with MIT Technology Review. “I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.” (Similar advocacy campaigns, including proposed legislation governing surveillance technology and gang databases, have been proposed for New York City.)

There has been one other interesting consequence: police resistance. B.C.-born sociologist Sarah Brayne, an assistant professor at the University of Texas (Austin), spent two-and-a-half years embedded with the LAPD, exploring the reaction of law enforcement officials to algorithmic policing techniques by conducting ride-alongs as well as interviews with dozens of veteran cops and data analysts. In results published last year, Brayne and collaborator Angèle Christin observed “strong processes of resistance fuelled by fear of professional devaluation and threats of performance tracking.”

Before shifts, officers were told which grids to drive through, when and how frequently, and the locations of their vehicles were tracked by on-board GPS devices to ensure compliance. But Brayne found that some would turn off the tracking device, which they regarded with suspicion. Others just didn’t buy what the technology was selling. “Patrol officers frequently asserted that they did not need an algorithm to tell them where crime occurs,” she noted.

In an interview, Brayne said that police departments increasingly see predictive technology as part of the tool kit, despite questions about effectiveness or other concerns, like racial profiling. “Once a particular technology is created,” she observed, “there’s a tendency to use it.” But Brayne added one other prediction, which has to do with the future of algorithmic policing in the post-George Floyd era — “an intersection,” as she says, “between squeezed budgets and this movement around defunding the police.”

The widening use of big data policing and digital surveillance poses, according to Citizen Lab’s analysis as well as critiques from U.S. and U.K. legal scholars, a range of civil rights questions, from privacy and freedom from discrimination to due process. Yet governments have been slow to acknowledge these consequences. Big Brother Watch, a British civil liberties group, notes that in the U.K., the national government’s stance has been that police decisions about the deployment of facial recognition systems are “operational.”

At the core of the debate is a basic public policy principle: transparency. Do individuals have the tools to understand and debate the workings of a suite of technologies that can have tremendous influence over their lives and freedoms? It’s what Andrew Ferguson and others refer to as the “black box” problem. The algorithms, designed by software engineers, rely on certain assumptions, methodologies and variables, none of which are visible, much less legible to anyone without advanced technical know-how. Many, moreover, are proprietary because they are sold to local governments by private companies. The upshot is that these kinds of algorithms have not been regulated by governments despite their use by public agencies.

New York City Council moved to tackle this question in May 2018 by establishing an “automated decision systems” task force to examine how municipal agencies and departments use AI and machine learning algorithms. The task force was to devise procedures for identifying hidden biases and to disclose how the algorithms generate choices so the public can assess their impact. The group included officials from the administration of Mayor Bill de Blasio, tech experts and civil liberties advocates. It held public meetings throughout 2019 and released a report that November. NYC was, by most accounts, the first city to have tackled this question, and the initiative was, initially, well received.

Going in, Cahn, the New York City civil rights lawyer, saw the task force as “a unique opportunity to examine how AI was operating in city government.” But he describes the outcome as “disheartening.” “There was an unwillingness to challenge the NYPD on its use of (automated decision systems).” Some other participants agreed, describing the effort as a waste.

If institutional obstacles thwarted an effort in a government the size of the City of New York, what does better and more effective oversight look like? A couple of answers have emerged.

In his book on big data policing, Andrew Ferguson writes that local governments should start at first principles, and urges police forces and civilian oversight bodies to address five fundamental questions, ideally in a public forum:

  • Can you identify the risks that your big data technology is trying to address?
  • Can you defend the inputs into the system (accuracy of data, soundness of methodology)?
  • Can you defend the outputs of the system (how they will impact policing practice and community relationships)?
  • Can you test the technology (offering accountability and some measure of transparency)?
  • Is police use of the technology respectful of the autonomy of the people it will impact?

These “foundational” questions, he writes, “must be satisfactorily answered before green-lighting any purchase or adopting a big data policing strategy.”

In addition to calling for a moratorium and a judicial inquiry into the uses of predictive policing and facial recognition systems, the authors of the Citizen Lab report made several other recommendations, including: the need for full transparency; provincial policies governing the procurement of such systems; limits on the use of automated decision systems in public spaces; and the establishment of oversight bodies that include members of historically marginalized or victimized groups.

The federal government, meanwhile, has made advances in this arena, in a development that University of Ottawa law professor and privacy expert Teresa Scassa describes as “really interesting.”

In 2019, the Treasury Board Secretariat issued the “Directive on Automated Decision-Making,” which came into effect in April 2020 and requires federal departments and agencies, except those involved in national security, to conduct “algorithmic impact assessments” (AIAs) to evaluate unintended bias before procuring or approving the use of technologies that rely on AI or machine learning. The policy requires the government to publish AIAs, release software code developed internally and continually monitor the performance of these systems. In the case of proprietary algorithms developed by private suppliers, federal officials have extensive rights to access and test the software.

In a forthcoming paper, Scassa points out that the directive includes due process rules and looks for evidence of whether systemic bias has become embedded in these technologies, which can happen if the algorithms are trained on skewed data. She also observes that not all algorithm-driven systems generate life-altering decisions, e.g., chatbots that are now commonly used in online application processes. But where they are deployed in “high impact” contexts such as policing, e.g., with algorithms that aim to identify individuals caught on surveillance videos, the policy requires “a human in the loop.”
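As a purely illustrative sketch of what a “human in the loop” gate can look like in software, the code below routes any decision above an assumed impact level to a reviewer rather than acting on it automatically. The impact levels and threshold are placeholders, not anything specified by the directive.

```python
# Hypothetical sketch of a "human in the loop" gate for high-impact automated decisions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float        # model output, e.g. a face-match confidence
    impact_level: int   # 1 (low) .. 4 (high), assessed before deployment

HUMAN_REVIEW_THRESHOLD = 3  # assumed policy: level 3+ always goes to a person

def route(decision: Decision) -> str:
    if decision.impact_level >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"   # an analyst must confirm before any action
    if decision.score < 0.9:
        return "no_action"
    return "automated_action_with_audit_log"

print(route(Decision("case-001", score=0.97, impact_level=4)))  # queue_for_human_review
print(route(Decision("case-002", score=0.97, impact_level=1)))  # automated_action_with_audit_log
```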

The directive, says Scassa, “is getting interest elsewhere,” including the U.S. Ellen Goodman, at Rutgers, is hopeful this approach will gain traction with the Biden administration. In Canada, where provincial governments oversee law enforcement, Ottawa’s low-key but seemingly thorough regulation points to a way for citizens to shine a flashlight into the black box that is big data policing.

Source: From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Doesn’t appear to be universally well received:

Google’s chief executive Sundar Pichai on Wednesday apologized in the aftermath of the dismissal of a prominent Black scientist whose ouster set off widespread condemnation from thousands of Google employees and outside researchers.

Timnit Gebru, who helped lead Google’s Ethical Artificial Intelligence team, said that she was fired last week after having a dispute over a research paper and sending a note to other Google employees criticizing the company for its treatment of people of color and women, particularly in hiring.

“I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust,” Pichai wrote to Google employees on Wednesday, according to a copy of the email reviewed by NPR.

Since Gebru was pushed out, more than 2,000 Google employees have signed an open letter demanding answers, calling Gebru’s termination “research censorship” and a “retaliatory firing.”

In his letter, Pichai said the company is conducting a review of how Gebru’s dismissal was handled in order to determine whether there could have been a “more respectful process.”

Pichai went on to say that Google needs to accept responsibility for a prominent Black female leader leaving Google on bad terms.

“This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions,” Pichai wrote.

Pichai said Google earlier this year committed to taking a look at all of the company’s systems for hiring and promoting employees to try to increase representation among Black workers and other underrepresented groups.

“The events of the last week are a painful but important reminder of the progress we still need to make,” Pichai wrote in his letter, which was earlier reported by Axios.

In a series of tweets, Gebru said she did not appreciate Pichai’s email to her former colleagues.

“Don’t paint me as an ‘angry Black woman’ for whom you need ‘de-escalation strategies’ for,” Gebru said.

“Finally it does not say ‘I’m sorry for what we did to her and it was wrong.’ What it DOES say is ‘it seeded doubts and led some in our community to question their place at Google.’ So I see this as ‘I’m sorry for how it played out but I’m not sorry for what we did to her yet,'” Gebru wrote.

One Google employee who requested anonymity for fear of retaliation said Pichai’s letter will do little to address the simmering strife among Googlers since Gebru’s firing.

The employee expressed frustration that Pichai did not directly apologize for Gebru’s termination and continued to suggest she was not fired by the company, which Gebru and many of her colleagues say is not true. The employee described Pichai’s letter as “meaningless PR.”

Source: Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.

Of note:

A well-respected Google researcher said she was fired by the company after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.

Timnit Gebru, who was a co-leader of Google’s Ethical A.I. team, said in a tweet on Wednesday evening that she was fired because of an email she had sent a day earlier to a group that included company employees.

In the email, reviewed by The New York Times, she expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence.

“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”

Her departure from Google highlights growing tension between Google’s outspoken work force and its buttoned-up senior management, while raising concerns over the company’s efforts to build fair and reliable technology. It may also have a chilling effect on both Black tech workers and researchers who have left academia in recent years for high-paying jobs in Silicon Valley.

“Her firing only indicates that scientists, activists and scholars who want to work in this field — and are Black women — are not welcome in Silicon Valley,” said Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab. “It is very disappointing.”

A Google spokesman declined to comment. In an email sent to Google employees, Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, called her departure “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible A.I. research as an org and as a company.”

After years of an anything-goes environment where employees engaged in freewheeling discussions in companywide meetings and online message boards, Google has started to crack down on workplace discourse. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.

On Wednesday, the National Labor Relations Board said Google had most likely violated labor law when it fired two employees who were involved in labor organizing. The federal agency said Google illegally surveilled the employees before firing them.

Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.

Like other technology companies, Google has also faced criticism for not doing enough to resolve the lack of women and racial minorities among its ranks.

The problems of racial inequality, especially the mistreatment of Black employees at technology companies, have plagued Silicon Valley for years. Coinbase, the most valuable cryptocurrency start-up, has experienced an exodus of Black employees in the last two years over what the workers said was racist and discriminatory treatment.

Researchers worry that the people who are building artificial intelligence systems may be building their own biases into the technology. Over the past several years, several public experiments have shown that the systems often interact differently with people of color — perhaps because they are underrepresented among the developers who create those systems.

Dr. Gebru, 37, was born and raised in Ethiopia. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to pinpoint and remove bias in artificial intelligence. She joined Google later that year, and helped build the Ethical A.I. team.

After hiring researchers like Dr. Gebru, Google has painted itself as a company dedicated to “ethical” A.I. But it is often reluctant to publicly acknowledge flaws in its own systems.

In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.

These systems learn the vagaries of language by analyzing enormous amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.

After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.

The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked.

In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.

“It was dehumanizing,” Dr. Gebru said. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”

Dr. Gebru’s departure from Google comes at a time when A.I. technology is playing a bigger role in nearly every facet of Google’s business. The company has hitched its future to artificial intelligence — whether with its voice-enabled digital assistant or its automated placement of advertising for marketers — as the breakthrough technology to make the next generation of services and devices smarter and more capable.

Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing. Earlier this year, Mr. Pichai called for greater regulation and responsible handling of artificial intelligence, arguing that society needs to balance potential harms with new opportunities.

Google has repeatedly committed to eliminating bias in its systems. The trouble, Dr. Gebru said, is that most of the people making the ultimate decisions are men. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.

Julien Cornebise, an honorary associate professor at University College London and a former researcher with DeepMind, a prominent A.I. lab owned by the same parent company as Google, was among many artificial intelligence researchers who said Dr. Gebru’s departure reflected a larger problem in the industry.

“This shows how some large tech companies only support ethics and fairness and other A.I.-for-social-good causes as long as their positive P.R. impact outweighs the extra scrutiny they bring,” he said. “Timnit is a brilliant researcher. We need more like her in our field.”

Source: https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html?action=click&module=Well&pgtype=Homepage&section=Business

Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

There are A.I. systems enabling self-driving cars to detect pedestrians — last year Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other that I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business-as-usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force, on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, 250 children and adolescents who were interviewed about bullying were often willing to confide information in a robot that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html?action=click&module=News&pgtype=Homepage

Scientists combat anti-Semitism with artificial intelligence

Will be interesting to assess the effectiveness of this approach, and whether the definition of antisemitism used in the algorithms is narrow or more expansive, including how it deals with criticism of Israeli government policies.

Additionally, it may provide an approach that could serve as a model for efforts to combat anti-Black, anti-Muslim and other forms of hate:

An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project Decoding Anti-Semitism includes discourse analysts, computational linguists and historians who will develop a “highly complex, AI-driven approach to identifying online anti-Semitism,” the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

“In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism — also taking into account the implicit forms that might become more explicit over time,” said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King’s College in London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn’t be able to assess because of their sheer quantity, the foundation said.

“Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways – for example through the use of codes (“juice” instead of “Jews”) and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images,” the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.
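One way to picture that combination, in a very simplified form, is a pipeline that pairs a researcher-curated lexicon of coded terms with a trained text classifier and flags a comment for human review if either raises it. The lexicon, training texts and labels below are placeholders; the project’s actual methods are far more sophisticated.

```python
# Highly simplified sketch of combining a curated lexicon with a trained text
# classifier, as the project describes. All data below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CODED_TERMS = {"placeholder_code_1", "placeholder_code_2"}  # curated by analysts

train_texts = ["placeholder hateful example", "placeholder benign example"]
train_labels = [1, 0]  # 1 = flagged by human annotators, 0 = not flagged

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

def flag(comment: str) -> bool:
    """Send a comment to human review if the lexicon or the classifier raises it."""
    lexicon_hit = any(term in comment.lower() for term in CODED_TERMS)
    model_hit = classifier.predict_proba([comment])[0, 1] > 0.8
    return lexicon_hit or model_hit

print(flag("an innocuous placeholder comment"))  # likely False: no coded term, low score
```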

The problem of anti-Semitism online has increased, as seen by the rise in conspiracy myths accusing Jews of creating and spreading COVID-19, groups tracking anti-Semitism on the internet have found.

The focus of the current project is initially on Germany, France and the U.K., but will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.

Source: Scientists combat anti-Semitism with artificial intelligence

Of course technology perpetuates racism. It was designed that way.

Interesting observations of how technology embeds biases and prejudices and the related risks:

Today the United States crumbles under the weight of two pandemics: coronavirus and police brutality.

Both wreak physical and psychological violence. Both disproportionately kill and debilitate black and brown people. And both are animated by technology that we design, repurpose, and deploy—whether it’s contact tracing, facial recognition, or social media.

We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.

So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy.

Of course, it’s not a new question at all.

Uncivil rights

In 1960, Democratic Party leaders confronted their own problem: How could their presidential candidate, John F. Kennedy, shore up waning support from black people and other racial minorities?

An enterprising political scientist at MIT, Ithiel de Sola Pool, approached them with a solution. He would gather voter data from earlier presidential elections, feed it into a new digital processing machine, develop an algorithm to model voting behavior, predict what policy positions would lead to the most favorable results, and then advise the Kennedy campaign to act accordingly. Pool started a new company, the Simulmatics Corporation, and executed his plan. He succeeded, Kennedy was elected, and the results showcased the power of this new method of predictive modeling.

Racial tension escalated throughout the 1960s. Then came the long, hot summer of 1967. Cities across the nation burned, from Birmingham, Alabama, to Rochester, New York, to Minneapolis, Minnesota, and many more in between. Black Americans protested the oppression and discrimination they faced at the hands of America’s criminal justice system. But President Johnson called it “civil disorder,” and formed the Kerner Commission to understand the causes of “ghetto riots.” The commission called on Simulmatics.

As part of a DARPA project aimed at turning the tide of the Vietnam War, Pool’s company had been hard at work preparing a massive propaganda and psychological campaign against the Vietcong. President Johnson was eager to deploy Simulmatics’s behavioral influence technology to quell the nation’s domestic threat, not just its foreign enemies. Under the guise of what they called a “media study,” Simulmatics built a team for what amounted to a large-scale surveillance campaign in the “riot-affected areas” that captured the nation’s attention that summer of 1967.

Three-member teams went into areas where riots had taken place that summer. They identified and interviewed strategically important black people. They followed up to identify and interview other black residents, in every venue from barbershops to churches. They asked residents what they thought about the news media’s coverage of the “riots.” But they collected data on so much more, too: how people moved in and around the city during the unrest, who they talked to before and during, and how they prepared for the aftermath. They collected data on toll booth usage, gas station sales, and bus routes. They gained entry to these communities under the pretense of trying to understand how news media supposedly inflamed “riots.” But Johnson and the nation’s political leaders were trying to solve a problem. They aimed to use the information that Simulmatics collected to trace information flow during protests to identify influencers and decapitate the protests’ leadership.

They didn’t accomplish this directly. They did not murder people, put people in jail, or secretly “disappear” them.

But by the end of the 1960s, this kind of information had helped create what came to be known as “criminal justice information systems.” They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated.

Reframing the problem

Blackness and black people. Both persist as our nation’s—dare I say even our world’s—problem. When contact tracing first cropped up at the beginning of the pandemic, it was easy to see it as a necessary but benign health surveillance tool. The coronavirus was our problem, and we began to design new surveillance technologies in the form of contact tracing, temperature monitoring, and threat mapping applications to help address it.

But something both curious and tragic happened. We discovered that black people, Latinx people, and indigenous populations were disproportionately infected and affected. Suddenly, we also became a national problem; we disproportionately threatened to spread the virus. That was compounded when the tragic murder of George Floyd by a white police officer sent thousands of protesters into the streets. When the looting and rioting started, we—black people—were again seen as a threat to law and order, a threat to a system that perpetuates white racial power. It makes you wonder how long it will take for law enforcement to deploy those technologies we first designed to fight covid-19 to quell the threat that black people supposedly pose to the nation’s safety.

If we don’t want our technology to be used to perpetuate racism, then we must make sure that we don’t conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate.

New Zealand: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Of note. As always, one needs to ensure that AI systems are as free of bias as possible, while remembering that human decision-making is not perfect either. But any large-scale immigration system will likely have to rely on AI in order to manage the workload:

Immigration officials are being accused of using data analytics and algorithms in visa processing – and leaving applicants in the dark about why they are being rejected.

One immigration adviser described how applicants unaware of risk profiling were like unwitting swimmers in crocodile-infested waters.

The automatic ‘triage’ system places tourists, overseas students or immigrants into high, medium or low risk categories.

The factors which raise a red flag on high-risk applications are not made publicly available; responses to Official Information Act requests are redacted on the grounds of international relations.

But an immigration manager has told RNZ that staff identify patterns, such as overstaying and asylum claim rates of certain nationalities or visa types, and feed that data into the triage system.

On a recent visit to a visa processing centre in Auckland, Immigration New Zealand assistant general manager Jeannie Melville acknowledged that it now ran an automated system that triages applications, but said it was humans who make the decisions.

“There is an automatic triage that’s done – but to be honest, the most important thing is the work that our immigration officers do in actually determining how the application should be processed,” she said.

“And we do have immigration officers that have the skills and the experience to be able to determine whether there are further risk factors or no risk factors in a particular application.

“The triage system is something that we work on all the time because as you would expect, things change all the time. And we try and make sure that it’s a dynamic system that takes into account a whole range of factors, whether that be things that have happened in the past or things that are going on at the present time.”

When asked what ‘things that have happened in the past’ might mean in the context of deciding what risk category an applicant would be assigned to, another manager filled the silence.

“Immigration outcomes, application outcomes, things that we measure – overstaying rates or asylum claim rates from certain sources,” she said. “Nationality or visa type patterns that may have trended, so we do some data analytics that feed into some of those business rules.”
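
To make the mechanism being described a bit more concrete, here is a minimal, hypothetical sketch of how a rule-based triage of this kind might work: historical outcome rates by nationality and visa type are turned into “business rules” that bucket applications into low, medium or high risk. All field names, rates and thresholds below are invented for illustration; they are not Immigration New Zealand’s actual rules.

```python
# Hypothetical sketch of a rule-based visa triage system of the kind described
# above: historical outcome rates by nationality/visa type feed "business rules"
# that bucket applications into low/medium/high risk. All thresholds, field
# names and rates here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Application:
    nationality: str
    visa_type: str

# Illustrative historical rates (fractions), e.g. overstay or asylum-claim rates
# aggregated by (nationality, visa_type). A real system would derive these from
# case-management data; these numbers are made up.
HISTORICAL_RISK_RATES = {
    ("CountryA", "student"): 0.02,
    ("CountryB", "visitor"): 0.12,
    ("CountryC", "work"): 0.30,
}

def triage(app: Application) -> str:
    """Assign a coarse risk category; unknown combinations default to medium."""
    rate = HISTORICAL_RISK_RATES.get((app.nationality, app.visa_type), 0.10)
    if rate < 0.05:
        return "low"       # streamed to faster processing
    if rate < 0.20:
        return "medium"    # standard officer review
    return "high"          # flagged for closer scrutiny by an officer

print(triage(Application("CountryB", "visitor")))  # -> "medium"
```

The sketch also makes the advisers’ concern visible: an applicant inherits the historical rate of their group regardless of individual circumstances, and the thresholds that decide the bucket are not visible to them.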

Humans defer to machines – professor

Professor Colin Gavaghan, of Otago University, said studies on human interactions with technology suggested people found it hard to ignore computerised judgments.

“What they’ve found is if you’re not very, very careful, you get a kind of situation where the human tends just to defer to whatever the machine recommends,” said Prof Gavaghan, director of the New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies.

“It’s very hard to stay in a position where you’re actually critiquing and making your own independent decision – humans who are going to get to see these cases, they’ll be told that the machine, the system has already flagged them up as being high risk.

“It’s hard not to think that that will influence their decision. The idea they’re going to make a completely fresh call on those cases, I think, if we’re not careful, could be a bit unrealistic.”

Oversight and transparency were needed to check the accuracy of calls made by the algorithmic system and to ensure people could challenge decisions, he added.

Best practice guidelines tended to be high level and vague, he added.

“There’s also questions and concerns about bias,” he said. “It can be biased because the training data that’s been used to prepare it is itself the product of biased decisions – if you have a body of data that’s been used to train the system that’s informed by let’s say, for the sake of argument, racist assumptions about particular groups, then that’s going to come through in the system’s recommendations as well.

“We haven’t had what we would like to see, which is one body with responsibility to look across all of government and all of these uses.”

The concerns follow questions around another Immigration New Zealand programme in 2018 which was used to prioritise deportations.

A compliance manager told RNZ it was using data, including nationality, of former immigrants to determine which future overstayers to target.

It subsequently denied that nationality was one of the factors but axed the programme.

Don’t make assumptions on raw data – immigration adviser

Immigration adviser Katy Armstrong said Immigration New Zealand had to fight its own ‘jaundice’ that was based on profiling and presumptions.

“Just because you’re a 23-year-old, let’s say, Brazilian coming in, wanting to have a holiday experience in New Zealand, doesn’t make you an enemy of the state.

“And you’re being lumped in maybe with a whole bunch of statistics that might say that young male Brazilians have a particular pattern of behaviour.

“So you then have to prove a negative against you, but you’re not being told transparently what that negative is.”

It would be unacceptable if the police were arresting people based on the previous offending rates of a certain nationality and immigration rules were also based on fairness and natural justice, she said.

“That means not discriminating, not being presumptuous about the way people may behave just purely based on assumptions from raw data,” she said.

“And that’s the area of real concern. If you have profiling and an unsophisticated workforce, with an organisation that is constantly in churn, with people coming on board to make decisions about people’s lives with very little training, then what do you end up with?

“Well, I can tell you – you end up with decisions that are basically unfair, and often biased.

“I think people go in very trusting of the system and not realising that there is this almighty wall between them and a visa over issues that they would have no inkling about.

“And then they get turned down, they don’t even give you a chance very often to respond to any doubts that immigration might have around you.

“People come and say: ‘I got declined’ and you look at it and you think ‘oh my God, it was like they literally went swimming in the crocodile waters without any protection’.”

Source: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Concerns raised after facial recognition software found to have racial bias

Legitimate concerns:

In 2015, two undercover police officers in Jacksonville, Fla., bought $50 worth of crack cocaine from a man on the street. One of the cops surreptitiously snapped a cellphone photo of the man and sent it to a crime analyst, who ran the photo through facial recognition software.

The facial recognition algorithm produced several matches, and the analyst chose the first one: a mug shot of a man named Willie Allen Lynch. Lynch was convicted of selling drugs and sentenced to eight years in prison.
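
For context, a one-to-many facial recognition search of this kind typically returns a ranked list of candidate matches rather than a single confirmed identity, which is why the analyst’s choice of “the first one” matters. The sketch below is a simplified illustration, assuming precomputed face embeddings compared by cosine similarity; it is not the vendor’s actual algorithm, and all names and data are placeholders.

```python
# Minimal sketch of one-to-many face search: compare a probe embedding against
# a gallery and return the top-ranked candidates. Embeddings are random
# placeholders here; a real system would compute them with a face-recognition
# model. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)
GALLERY = {f"mugshot_{i}": rng.normal(size=128) for i in range(1000)}  # fake gallery

def top_candidates(probe: np.ndarray, k: int = 5):
    """Rank gallery entries by cosine similarity to the probe image embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cos(probe, emb) for name, emb in GALLERY.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

probe = rng.normal(size=128)  # would come from the cellphone photo
for name, score in top_candidates(probe):
    print(name, round(score, 3))  # a ranked list of leads, not a positive ID
```

The system, in other words, produces investigative leads; treating the top-ranked lead as an identification is a human decision layered on top of it.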

Civil liberties lawyers jumped on the case, flagging a litany of concerns to fight the conviction. Matches of other possible perpetrators generated by the tool were never disclosed to Lynch, hampering his ability to argue for his innocence. The use of the technology statewide had been poorly regulated and shrouded in secrecy.

But also, Willie Allen Lynch is a Black man.

Multiple studies have shown facial recognition technology makes more errors on Black faces. For mug shots in particular, researchers have found that algorithms generate the highest rates of false matches for African American, Asian and Indigenous people.

After more than two dozen police services, government agencies and private businesses across Canada recently admitted to testing the divisive facial recognition app Clearview AI, experts and advocates say it’s vital that lawmakers and politicians understand how the emerging technology could impact racialized citizens.

“Technologies have their bias as well,” said Nasma Ahmed, director of Toronto-based non-profit Digital Justice Lab, who is advocating for a pause on the use of facial recognition technology until proper oversight is established.

“If they don’t wake up, they’re just going to be on the wrong side of trying to fight this battle … because they didn’t realize how significant the threat or the danger of this technology is,” says Toronto-born Toni Morgan, managing director of the Center for Law, Innovation and Creativity at Northeastern University School of Law in Boston.

“It feels like Toronto is a little bit behind the curve in understanding the implications of what it means for law enforcement to access this technology.”

Last month, the Star revealed that officers at more than 20 police forces across Canada have used Clearview AI, a facial recognition tool that has been described as “dystopian” and “reckless” for its broad search powers. It relies on what the U.S. company has said is a database of three billion photos scraped from the web, including social media.

Almost all police forces that confirmed use of the tool said officers had accessed a free trial version without the knowledge or authorization of police leadership and have been told to stop; the RCMP is the only police service that has paid to access the technology.

Multiple forces say the tool was used by investigators within child exploitation units, but it was also used to probe lesser crimes, including in an auto theft investigation and by a Rexall employee seeking to stop shoplifters.

While a handful of American cities and states have moved to limit or outright ban police use of facial recognition technology, the response from Canadian lawmakers has been muted.

According to client data obtained by BuzzFeed News and shared exclusively with the Star, the Toronto Police Service was the most prolific user of Clearview AI in Canada. (Clearview AI has not responded to multiple requests for comment from the Star but told BuzzFeed there are “numerous inaccuracies” in the client data information, which they allege was “illegally obtained.”)

Toronto police had run more than 3,400 searches since October, according to the BuzzFeed data.

A Toronto police spokesperson has said officers were “informally testing” the technology, but said the force could not verify the Star’s data about officers’ use or “comment on it with any certainty.” Toronto police Chief Mark Saunders directed officers to stop using the tool after he became aware they were using it, and a review is underway.

But Toronto police are still using a different facial recognition tool, one made by NEC Corp. of America and purchased in 2018. The NEC facial recognition tool searches the Toronto police database of approximately 1.5 million mug shot photos.

The National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, has been testing the accuracy of facial recognition technology since 2002. Companies that sell the tools voluntarily submit their algorithms to NIST for testing; government agencies sponsor the research to help inform policy.

In a report released in December that tested 189 algorithms from 99 developers, NIST found dramatic variations in accuracy across different demographic groups. For one type of matching, the team discovered the systems had error rates between 10 and 100 times higher for African American and Asian faces compared to images of white faces.

For the type of facial recognition matching most likely to be used by law enforcement, African American women had higher error rates.
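
For readers curious about how such disparities are measured, the sketch below shows, in simplified form, how a false match rate can be computed per demographic group from labelled “impostor” comparisons, which is the kind of metric NIST reports. The records here are fabricated purely for illustration.

```python
# Sketch of how a per-group false-match-rate comparison (the kind of
# measurement NIST reports) can be computed from labelled comparison results.
# The records below are fabricated for illustration.

from collections import defaultdict

# Each record: (demographic group, same_person?, system_said_match?)
comparisons = [
    ("group_A", False, True),    # a false match
    ("group_A", False, False),
    ("group_B", False, False),
    ("group_B", False, False),
    # ... a real evaluation uses very large numbers of impostor comparisons
]

counts = defaultdict(lambda: {"false_matches": 0, "impostor_trials": 0})
for group, same_person, said_match in comparisons:
    if not same_person:                       # impostor comparison
        counts[group]["impostor_trials"] += 1
        if said_match:
            counts[group]["false_matches"] += 1

for group, c in counts.items():
    fmr = c["false_matches"] / c["impostor_trials"]
    print(f"{group}: false match rate = {fmr:.3f}")
```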

“Law enforcement, they probably have one of the most difficult cases. Because if they miss someone … and that person commits a crime, they’re going to look bad. If they finger the wrong person, they’re going to look bad,” said Craig Watson, manager of the group that runs NIST’s testing program.

Clearview AI has not been tested by NIST. The company has claimed its tool is “100% accurate” in a report written by an “independent review panel.” The panel said it relied on the same methodology the American Civil Liberties Union used to assess a facial recognition algorithm sold by Amazon.

The American Civil Liberties Union slammed the report, calling the claim “misleading” and the tool “dystopian.”

Clearview AI did not respond to a request for comment about its accuracy claims.

Before purchasing the NEC facial recognition technology, Toronto police conducted a privacy impact assessment. Asked if this examined potential racial bias within the NEC’s algorithms, spokesperson Meaghan Gray said in an email the contents of the report are not public.

But she said TPS “has not experienced racial or gender bias when utilizing the NEC Facial Recognition System.”

“While not a means of undisputable positive identification like fingerprint identification, this technology provides ‘potential candidates’ as investigative leads,” she said. “Consequently, one race or gender has not been disproportionally identified nor has the TPS made any false identifications.”

The revelations about Toronto police’s use of Clearview AI have coincided with the planned installation of additional CCTV cameras in communities across the city, including in the Jane Street and Finch Avenue West area. The provincially funded additional cameras come after the Toronto police board approved increasing the number placed around the city.

The combination of facial recognition technology and additional CCTV cameras in a neighbourhood home to many racialized Torontonians is a “recipe for disaster,” said Sam Tecle, a community worker with Jane and Finch’s Success Beyond Limits youth support program.

“One technology feeds the other,” Tecle said. “Together, I don’t know how that doesn’t result in surveillance — more intensified surveillance — of Black and racialized folks.”

Tecle said the plan to install more cameras was asking for a lot of trust from a community that already has a fraught relationship with the police. That’s in large part due to the legacy of carding, he said — when police stop, question and document people not suspected of a crime, a practice that disproportionately impacts Black and brown men.

“This is just a digital form of doing the same thing,” Tecle told the Star. “If we’re misrecognized and misidentified through these facial recognition algorithms, then I’m very apprehensive about them using any kind of facial recognition software.”

Others pointed out that false positives — incorrect matches — could have particularly grave consequences in the context of police use of force: Black people are “grossly over-represented” in cases where Toronto police used force, according to a 2018 report by the Ontario Human Rights Commission.

Saunders has said residents in high-crime areas have repeatedly asked for more CCTV cameras in public spaces. At last month’s Toronto police board meeting, Mayor John Tory passed a motion requiring that police engage in a public community consultation process before installing more cameras.

Gray said many residents and business owners want increased safety measures, and this feedback alongside an analysis of crime trends led the force to identify “selected areas that are most susceptible to firearm-related offences.”

“The cameras are not used for surveillance. The cameras will be used for investigation purposes, post-reported offences or incidents, to help identify potential suspects, and if needed during major events to aid in public safety,” Gray said.

Akwasi Owusu-Bempah, an assistant professor of criminology at the University of Toronto, said when cameras are placed in neighbourhoods with high proportions of racialized people, then used in tandem with facial recognition technology, “it could be problematic, because of false positives and false negatives.”

“What this gets at is the need for continued discussion, debate, and certainly oversight,” Owusu-Bempah said.

Source: Concerns raised after facial recognition software found to have racial bias

Canada must look beyond STEM and diversify its AI workforce

From a visible minority perspective, based on STEM graduates, representation is reasonably good (as per the chart above) except in engineering, and is particularly strong in math and computer sciences, the fields of study closest to AI.

With respect to gender, the percentage of visible minority women is generally equivalent to or stronger than that of non-visible minority women (but women are under-represented in engineering and math/computer sciences):

Artificial intelligence (AI) is expected to add US$15.7 trillion to the global economy by 2030, according to a recent report from PwC, representing a 14 percent boost to global GDP. Countries around the world are scrambling for a piece of the pie, as evidenced by the proliferation of national and regional AI strategies aimed at capturing the promise of AI for future value generation.

Canada has benefited from an early lead in AI, which is often attributed to the Canadian Institute for Advanced Research (CIFAR) having had the foresight to invest in Geoffrey Hinton’s research on deep learning shortly after the turn of the century. As a result, Canada can now tout Montreal as having the highest concentration of researchers and students of deep learning in the world and Toronto as being home to the highest concentration of AI start-ups in the world.

But the market for AI is approaching maturity. A report from McKinsey & Co. suggests that the public and private sectors together have captured only between 10 and 40 percent of the potential value of advances in machine learning. If Canada hopes to maintain a competitive advantage, it must both broaden the range of disciplines and diversify the workforce in the AI sector.

Looking beyond STEM

Strategies aimed at capturing the expected future value of AI have been concentrated on innovation in fundamental research, which is conducted largely in the STEM disciplines: science, technology, engineering and mathematics. But it is the application of this research that will grow market share and multiply value. In order to capitalize on what fundamental research discovers, the AI sector must deepen its ties with the social sciences.

To date the role of social scientists in Canada’s strategy on AI has been largely limited to areas of ethics and public policy. While these are endeavours to which social scientists are particularly well suited, they could be engaged much more broadly with AI. Social scientists are well positioned to identify and exploit potential applications of this research that will generate both social and economic returns on Canada’s investment in AI.

Social scientists take a unique approach to data analysis by drawing on social theory to critically interpret both the inputs and outputs of a given model. They ask what a given model is really telling us about the world and how it arrived at that result. They see potential opportunities in data and digital technology that STEM researchers are not trained to look for.

A recent OECD report looks at the skills that most distinguish innovative from non-innovative workers; chief among them are creativity, critical thinking and communication skills. While these skills are by no means exclusively the domain of the social sciences, they are perhaps more central to social scientific training than to any other discipline.

The social science perspective can serve as a defence mechanism against the potential folly of certain applications of AI. If social scientists had been more involved in early adaptations of computer vision, for example, Google might have been spared the shame of image recognition algorithms that classify people of colour as animals (they certainly would have come up with a better solution). In the same vein, Microsoft’s AI chatbots would have been less likely to spew racist slurs shortly after launch.

Social scientists can also help meet a labour shortage: there are not enough STEM graduates to meet future demand for AI talent. Meanwhile, social science graduates are often underemployed, in part because they do not have the skills necessary to participate in a future of work that privileges expertise in AI. As a consequence, many of the opportunities associated with AI are passing Canada’s social science graduates by. Excluding social science students from Canada’s AI strategy not only limits their career paths but also restricts their opportunities to contribute to fulfilling the societal and economic promise of AI.

Realizing the potential of the social sciences within Canada’s AI ecosystem requires innovative thinking by both governments and universities. Federal and provincial governments should relax restrictions on funding for AI-related research that prohibit applications from social scientists or make them eligible only within interdisciplinary teams that include STEM researchers. This policy has the effect of subordinating social scientific approaches to AI to those of STEM disciplines. In fact, social scientists are just as capable of independent research, and a growing number are already engaged in sophisticated applications of machine learning to address some of the most pressing societal challenges of our time.

Governments must also invest in the development of undergraduate and graduate training opportunities that are specific to the application of AI in the social sciences, using pedagogical approaches that are appropriate for them.

Social science faculties in universities across Canada can also play a crucial role by supporting the development of AI-related skills within their undergraduate and graduate curriculums. At McMaster University, for example, the Faculty of Social Sciences is developing a new degree: master of public policy in digital society. Alongside graduate training in the fundamentals of public policy, the 12-month program will include rigorous training in data science as well as technical training in key digital technologies that are revolutionizing contemporary society. The program, which is expected to launch in 2021, is intended to provide students with a command of digital technologies such as AI necessary to enable them to think creatively and critically about its application to the social world. In addition to the obvious benefit of producing a new generation of policy leadership in AI, the training provided by this program will ensure that its graduates are well positioned for a broader range of leadership opportunities across the public and private sectors.

Increasing workplace diversity

A report released in 2019 by New York University’s AI Now Institute declared that there is a diversity crisis in the AI workforce. This has implications for the sector itself but also for society more broadly, in that the systemic biases within the AI sector are being perpetuated via the myriad touch points that AI has with our everyday lives: it is organizing our online search results and social media news feeds and supporting hiring decisions, and it may even render decisions in some court cases in future.

One of the main findings of the AI Now report was that the widespread strategy of focusing on “women in tech” is too narrow to counter the diversity crisis. In Canada, efforts to diversify AI generally translate to providing advancement opportunities for women in the STEM disciplines. Although the focus of policy-makers on STEM is critical and necessary, it is short-sighted. Disciplinary diversity in AI research not only broadens the horizons for research and commercialization; it also creates opportunities for groups who are underrepresented in STEM to benefit from and contribute to innovations in AI.

As it happens, equity-seeking groups are better represented in the social sciences. According to Statistics Canada, the social sciences and adjacent fields have the highest enrolment of visible minorities. And as of 2017, only 23.7 percent of those enrolled in STEM programs at Canadian universities were women, whereas women were 69.1 percent of participants in the social sciences.

So, engaging the social sciences more substantively in research and training related to AI will itself lead to greater diversity. While advancing this engagement, universities should be careful not to import training approaches directly from statistics or computer science, as these will bring with them some of the cultural context and biases that have resulted in a lack of diversity in those fields to begin with.

Bringing the social sciences into Canada’s AI strategy is a concrete way to demonstrate the strength of diversity, in disciplines as well as demographics. Not only would many social science students benefit from training in AI, but so too would Canada’s competitive advantage in AI benefit from enabling social scientists to effectively translate research into action.

Source: Canada must look beyond STEM and diversify its AI workforce

Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says

Ongoing story, raising legitimate questions regarding the quality and possible bias of the algorithms used. That being said, human decision-making is not bias-free, and using AI, at least in the more straightforward cases, makes sense for efficiency and timeliness of service.

It will be important to ensure appropriate oversight, and there may be a need for an external body to review the algorithms to reduce risks, if such a mechanism is not already in place:

Tens of thousands of would-be guest workers and international students from China and India are having their fates determined by Canadian computers that are making visa decisions using artificial intelligence.

Even though Immigration Department officials recognize the public is wary about substituting robotic algorithms for human visa officers, the Liberal government plans to greatly expand “automated decision-making” in April of this year, according to an internal report.

“There is significant public anxiety over fairness and privacy associated with Big Data and Artificial Intelligence,” said the 2019 Immigration Department report, obtained under an access to information request. Nevertheless, Ottawa still plans to broaden the automated approval system far beyond the pilot programs it began operating in 2018 to process applicants from India and China.

At a time when Canada is approving more guest workers and foreign students than ever before, immigration lawyers have expressed worry about a lack of transparency in having machines make life-changing decisions about many of the more than 200,000 temporary visas that Canada issues each year.

The internal report reveals the department’s reservations about shifting more fully to an automated system — in particular, wondering whether machines could be “gamed” by high-risk applicants making false claims about their banking, job, marriage, educational or travel history.

“A system that approves applications without sufficient vetting would raise risks to Canadians, and it is understandable for Canadians to be more concerned about mistakenly approving risky individuals than about mistakenly refusing bona fide candidates,” says the document.

The 25-page report also flags how having robots stand in for humans will have an impact on thousands of visa officers. The new system “will fundamentally change the day-to-day work of decision-makers.”

Immigration Department officials did not respond to questions about the automated visa program.

Vancouver immigration lawyer Richard Kurland says Ottawa’s sweeping plan “to process huge numbers of visas fast and cheap” raises questions about whether an automated “Big Brother” system will be open to scrutiny, or whether it will lead to “Wizard of Oz” decision-making, in which it will be hard to determine who is accountable.

The publisher of the Lexbase immigration newsletter, which uncovered the internal document, was especially concerned that a single official has already “falsely” signed his or her name to countless visa decisions affecting migrants from India and China, without ever having reviewed their specific applications.

“The internal memo shows tens of thousands of visa decisions were signed-off under the name of one employee. If someone pulled that stunt on a visa application, they would be banned from Canada for five years for misrepresentation. It hides the fact it was really a machine that made the call,” said Kurland.

The policy report itself acknowledges that the upcoming shift to “hard-wiring” the visa decision-making process “at a tremendous scale” significantly raises legal risks for the Immigration Department, which it says is already “one of the most heavily litigated in the government of Canada.”

The population of Canada jumped by 560,000 people last year, or 1.5 per cent, the fastest rate of increase in three decades. About 470,000 of that total was made up of immigrants or newcomers arriving on 10-year multiple-entry visas, work visas or study visas.

The senior immigration officials who wrote the internal report repeatedly warn departmental staff that Canadians will be suspicious when they learn about the increasingly automated visa system.

“Keeping a human in the loop is important for public confidence. While human decision making may not be superior to algorithmic systems,” the report said, “human in-the-loop systems currently represent a form of transparency and personal accountability that is more familiar to the public than automated processes.”

In an effort to sell the automated system to a wary populace, the report emphasizes making people aware that the algorithm that decides whether an applicant receives a visa is not random. It’s a computer program governed by certain rules regarding what constitutes a valid visa application.

“A system that provides no real opportunity for officers to reflect is a de facto automated decision-making system, even when officers click the last button,” says the report, which states that flesh-and-blood women and men should still make the rulings on complex or difficult cases — and will also be able to review appeals.

“When a client challenges a decision that was made in full or in part by an automated system, a human officer will review the application. However, the (department) should not proactively offer clients the choice to have a human officer review and decide on their case at the beginning of the application process.”
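
Taken together, the report describes a familiar pattern: deterministic rules auto-approve clear-cut applications, while complex, flagged or challenged cases are routed to a human officer. A minimal, hypothetical sketch of that decision flow follows; the rule names and fields are assumptions for illustration, not the department’s actual criteria.

```python
# Minimal sketch of the pattern the report describes: clear-cut applications are
# auto-approved by deterministic rules, while complex cases and any challenged
# decision are escalated to a human officer. Rules and fields are hypothetical.

def automated_rules_pass(application: dict) -> bool:
    """Deterministic eligibility checks; returns False on anything ambiguous."""
    return (
        application.get("passport_valid") is True
        and application.get("funds_sufficient") is True
        and application.get("prior_refusals", 0) == 0
        and not application.get("flags")          # no risk flags raised
    )

def decide(application: dict, challenged: bool = False) -> str:
    if challenged:
        return "route to human officer (review of contested decision)"
    if automated_rules_pass(application):
        return "auto-approve"
    return "route to human officer (complex or flagged case)"

print(decide({"passport_valid": True, "funds_sufficient": True,
              "prior_refusals": 0, "flags": []}))                    # auto-approve
print(decide({"passport_valid": True, "funds_sufficient": False}))   # human officer
```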

George Lee, a veteran immigration lawyer in Burnaby, said he had not heard that machines are increasingly taking over from humans in deciding Canadian visa cases. He doesn’t think the public will like it when they learn of it.

“People will say, ‘What are we doing here? Where are the human beings? You can’t do this.’ People are afraid of change. We want to keep the status quo.”

However, Lee said society’s transition towards replacing human workers with robots is “unstoppable. We’re seeing it everywhere.”

Lee believes people will eventually get used to the idea that machines are making vitally important decisions about human lives, including about people’s dreams of migrating to a new country.

“I think the use of robots will become more acceptable down the road,” he said. “Until the robots screw up.”

Source: Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says