Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

A.I. systems also enable self-driving cars to detect pedestrians; last year, Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”
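The failure mode Dr. Crawford describes shows up clearly when a system's accuracy is disaggregated by demographic group rather than reported as a single overall number. A minimal sketch of that diagnostic, using an entirely invented evaluation log (group labels, IDs and numbers are hypothetical, not data from any real system):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute recognition accuracy separately for each demographic group.

    `records` is a list of (group, predicted_id, true_id) tuples --
    a hypothetical evaluation log for illustration only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy log: the model does well on the group its training data over-represents.
log = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_a", "id3", "id3"), ("group_a", "id4", "id5"),
    ("group_b", "id6", "id7"), ("group_b", "id8", "id8"),
]
print(accuracy_by_group(log))  # {'group_a': 0.75, 'group_b': 0.5}
```

A single aggregate accuracy (here about 67 percent) would hide exactly the disparity the researchers quoted in this article are pointing to.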

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business as usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force, on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, many of the 250 children and adolescents interviewed about bullying were willing to confide in a robot information that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html

Scientists combat anti-Semitism with artificial intelligence

Will be interesting to assess the effectiveness of this approach, and whether the definition of antisemitism used in the algorithms takes a narrow or more expansive view, including how it deals with criticism of Israeli government policies.

Additionally, it may provide an approach that could serve as a model for efforts to combat anti-Black, anti-Muslim and other forms of hate:

An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project Decoding Anti-Semitism includes discourse analysts, computational linguists and historians who will develop a “highly complex, AI-driven approach to identifying online anti-Semitism,” the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

“In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism — also taking into account the implicit forms that might become more explicit over time,” said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King’s College in London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn’t be able to assess because of their sheer quantity, the foundation said.

“Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways – for example through the use of codes (“juice” instead of “Jews”) and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images,” the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.
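The keyword substitutions the statement mentions suggest why a purely lexical filter can only be a first pass. A minimal sketch, assuming a tiny hand-built lexicon; the project's actual methods are far more sophisticated and are not public, and the false positive below shows exactly why context-aware models are needed:

```python
import re

# Hypothetical lexicon of coded substitutions, seeded with the article's
# own example ("juice" used as a stand-in for "Jews").
CODED_TERMS = {"juice"}

def flag_coded_terms(text):
    """Return coded terms found in `text` -- a naive first-pass filter.

    A lexicon match alone proves nothing: 'juice' is almost always a drink.
    That is why the project pairs qualitative discourse analysis with the
    automated search instead of relying on keywords.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t in CODED_TERMS]

print(flag_coded_terms("they control the juice media"))  # ['juice'] -- coded use
print(flag_coded_terms("orange juice for breakfast"))    # ['juice'] -- false positive
```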

The problem of anti-Semitism online has increased, as seen in the rise of conspiracy myths accusing Jews of creating and spreading COVID-19, groups tracking anti-Semitism on the internet have found.

The project initially focuses on Germany, France and the U.K., but will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.

Source: Scientists combat anti-Semitism with artificial intelligence

Of course technology perpetuates racism. It was designed that way.

Interesting observations of how technology embeds biases and prejudices and the related risks:

Today the United States crumbles under the weight of two pandemics: coronavirus and police brutality.

Both wreak physical and psychological violence. Both disproportionately kill and debilitate black and brown people. And both are animated by technology that we design, repurpose, and deploy—whether it’s contact tracing, facial recognition, or social media.

We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.

So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy.

Of course, it’s not a new question at all.

Uncivil rights

In 1960, Democratic Party leaders confronted their own problem: How could their presidential candidate, John F. Kennedy, shore up waning support from black people and other racial minorities?

An enterprising political scientist at MIT, Ithiel de Sola Pool, approached them with a solution. He would gather voter data from earlier presidential elections, feed it into a new digital processing machine, develop an algorithm to model voting behavior, predict what policy positions would lead to the most favorable results, and then advise the Kennedy campaign to act accordingly. Pool started a new company, the Simulmatics Corporation, and executed his plan. He succeeded, Kennedy was elected, and the results showcased the power of this new method of predictive modeling.

Racial tension escalated throughout the 1960s. Then came the long, hot summer of 1967. Cities across the nation burned, from Birmingham, Alabama, to Rochester, New York, to Minneapolis, Minnesota, and many more in between. Black Americans protested the oppression and discrimination they faced at the hands of America’s criminal justice system. But President Johnson called it “civil disorder,” and formed the Kerner Commission to understand the causes of “ghetto riots.” The commission called on Simulmatics.

As part of a DARPA project aimed at turning the tide of the Vietnam War, Pool’s company had been hard at work preparing a massive propaganda and psychological campaign against the Vietcong. President Johnson was eager to deploy Simulmatics’s behavioral influence technology to quell the nation’s domestic threat, not just its foreign enemies. Under the guise of what they called a “media study,” Simulmatics built a team for what amounted to a large-scale surveillance campaign in the “riot-affected areas” that captured the nation’s attention that summer of 1967.

Three-member teams went into areas where riots had taken place that summer. They identified and interviewed strategically important black people. They followed up to identify and interview other black residents, in every venue from barbershops to churches. They asked residents what they thought about the news media’s coverage of the “riots.” But they collected data on so much more, too: how people moved in and around the city during the unrest, who they talked to before and during, and how they prepared for the aftermath. They collected data on toll booth usage, gas station sales, and bus routes. They gained entry to these communities under the pretense of trying to understand how news media supposedly inflamed “riots.” But Johnson and the nation’s political leaders were trying to solve a problem. They aimed to use the information that Simulmatics collected to trace information flow during protests to identify influencers and decapitate the protests’ leadership.

They didn’t accomplish this directly. They did not murder people, put people in jail, or secretly “disappear” them.

But by the end of the 1960s, this kind of information had helped create what came to be known as “criminal justice information systems.” They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated.

Reframing the problem

Blackness and black people. Both persist as our nation’s—dare I say even our world’s—problem. When contact tracing first cropped up at the beginning of the pandemic, it was easy to see it as a necessary but benign health surveillance tool. The coronavirus was our problem, and we began to design new surveillance technologies in the form of contact tracing, temperature monitoring, and threat mapping applications to help address it.

But something both curious and tragic happened. We discovered that black people, Latinx people, and indigenous populations were disproportionately infected and affected. Suddenly, we also became a national problem; we disproportionately threatened to spread the virus. That was compounded when the tragic murder of George Floyd by a white police officer sent thousands of protesters into the streets. When the looting and rioting started, we—black people—were again seen as a threat to law and order, a threat to a system that perpetuates white racial power. It makes you wonder how long it will take for law enforcement to deploy those technologies we first designed to fight covid-19 to quell the threat that black people supposedly pose to the nation’s safety.

If we don’t want our technology to be used to perpetuate racism, then we must make sure that we don’t conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate.

New Zealand: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Of note. As always, one needs to ensure that AI systems are as free of bias as possible as well as remembering that human decision-making is also not perfect. But any large-scale immigration system will likely have to rely on AI in order to manage the workload:

Immigration officials are being accused of using data analytics and algorithms in visa processing – and leaving applicants in the dark about why they are being rejected.

One immigration adviser described how applicants unaware of risk profiling were like unwitting swimmers in crocodile-infested waters.

The automatic ‘triage’ system places tourists, overseas students or immigrants into high, medium or low risk categories.

The factors which raise a red flag on high-risk applications are not made publicly available; Official Information Act requests are redacted on the grounds of international relations.

But an immigration manager has told RNZ that staff identify patterns, such as overstaying and asylum claim rates of certain nationalities or visa types, and feed that data into the triage system.
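Based on that description, the triage step might be sketched as a set of business rules scoring an application against cohort-level rates. Everything below is invented for illustration — the rule names, thresholds and categories are hypothetical, since Immigration New Zealand's actual rules are withheld under the Official Information Act:

```python
def triage(application, overstay_rate, asylum_claim_rate):
    """Assign a risk tier from aggregate rates tied to the applicant's
    nationality/visa-type cohort.

    Note the score reflects the cohort, not the individual -- precisely
    the profiling that immigration advisers quoted here object to.
    """
    score = 0
    if overstay_rate > 0.10:      # cohort overstaying rate; hypothetical cutoff
        score += 2
    if asylum_claim_rate > 0.05:  # cohort asylum claim rate; hypothetical cutoff
        score += 2
    if application.get("prior_refusal"):  # individual factor, also invented
        score += 1
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

# Two identical applicants land in different tiers purely on cohort statistics.
print(triage({"prior_refusal": False}, overstay_rate=0.12, asylum_claim_rate=0.08))  # high
print(triage({"prior_refusal": False}, overstay_rate=0.02, asylum_claim_rate=0.01))  # low
```

The two calls at the end make the advisers' complaint concrete: the application contents are identical, and only the cohort rates differ.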

On a recent visit to a visa processing centre in Auckland, Immigration New Zealand assistant general manager Jeannie Melville acknowledged that it now ran an automated system that triages applications, but said it was humans who make the decisions.

“There is an automatic triage that’s done – but to be honest, the most important thing is the work that our immigration officers do in actually determining how the application should be processed,” she said.

“And we do have immigration officers that have the skills and the experience to be able to determine whether there are further risk factors or no risk factors in a particular application.

“The triage system is something that we work on all the time because as you would expect, things change all the time. And we try and make sure that it’s a dynamic system that takes into account a whole range of factors, whether that be things that have happened in the past or things that are going on at the present time.”

When asked what ‘things that have happened in the past’ might mean in the context of deciding what risk category an applicant would be assigned to, another manager filled the silence.

“Immigration outcomes, application outcomes, things that we measure – overstaying rates or asylum claim rates from certain sources,” she said. “Nationality or visa type patterns that may have trended, so we do some data analytics that feed into some of those business rules.”

Humans defer to machines – professor

Professor Colin Gavaghan, of Otago University, said studies on human interactions with technology suggested people found it hard to ignore computerised judgments.

“What they’ve found is if you’re not very, very careful, you get a kind of situation where the human tends just to defer to whatever the machine recommends,” said Prof Gavaghan, director of the New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies.

“It’s very hard to stay in a position where you’re actually critiquing and making your own independent decision – humans who are going to get to see these cases, they’ll be told that the machine, the system has already flagged them up as being high risk.

“It’s hard not to think that that will influence their decision. The idea they’re going to make a completely fresh call on those cases, I think, if we’re not careful, could be a bit unrealistic.”

Oversight and transparency were needed to check the accuracy of calls made by the algorithmic system and to ensure people could challenge decisions, he added.

Best practice guidelines tended to be high level and vague, he added.

“There’s also questions and concerns about bias,” he said. “It can be biased because the training data that’s been used to prepare it is itself the product of user bias decisions – if you have a body of data that’s been used to train the system that’s informed by let’s say, for the sake of argument, racist assumptions about particular groups, then that’s going to come through in the system’s recommendations as well.

“We haven’t had what we would like to see, which is one body with responsibility to look across all of government and all of these uses.”

The concerns follow questions around another Immigration New Zealand programme in 2018 which was used to prioritise deportations.

A compliance manager told RNZ it was using data, including nationality, of former immigrants to determine which future overstayers to target.

It subsequently denied that nationality was one of the factors but axed the programme.

Don’t make assumptions on raw data – immigration adviser

Immigration adviser Katy Armstrong said Immigration New Zealand had to fight its own ‘jaundice’ that was based on profiling and presumptions.

“Just because you’re a 23-year-old, let’s say, Brazilian coming in, wanting to have a holiday experience in New Zealand, doesn’t make you an enemy of the state.

“And you’re being lumped in maybe with a whole bunch of statistics that might say that young male Brazilians have a particular pattern of behaviour.

“So you then have to prove a negative against you, but you’re not being told transparently what that negative is.”

It would be unacceptable if the police arrested people based on the previous offending rates of a certain nationality, she said, and immigration rules were likewise supposed to be based on fairness and natural justice.

“That means not discriminating, not being presumptuous about the way people may behave just purely based on assumptions from raw data,” she said.

“And that’s the area of real concern. If you have profiling and an unsophisticated workforce, with an organisation that is constantly in churn, with people coming on board to make decisions about people’s lives with very little training, then what do you end up with?

“Well, I can tell you – you end up with decisions that are basically unfair, and often biased.

“I think people go in very trusting of the system and not realising that there is this almighty wall between them and a visa over issues that they would have no inkling about.

“And then they get turned down, they don’t even give you a chance very often to respond to any doubts that immigration might have around you.

“People come and say: ‘I got declined’ and you look at it and you think ‘oh my God, it was like they literally went swimming in the crocodile waters without any protection’.”

Source: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Concerns raised after facial recognition software found to have racial bias

Legitimate concerns:

In 2015, two undercover police officers in Jacksonville, Fla., bought $50 worth of crack cocaine from a man on the street. One of the cops surreptitiously snapped a cellphone photo of the man and sent it to a crime analyst, who ran the photo through facial recognition software.

The facial recognition algorithm produced several matches, and the analyst chose the first one: a mug shot of a man named Willie Allen Lynch. Lynch was convicted of selling drugs and sentenced to eight years in prison.

Civil liberties lawyers jumped on the case, flagging a litany of concerns to fight the conviction. Matches of other possible perpetrators generated by the tool were never disclosed to Lynch, hampering his ability to argue for his innocence. The use of the technology statewide had been poorly regulated and shrouded in secrecy.

But also, Willie Allen Lynch is a Black man.

Multiple studies have shown facial recognition technology makes more errors on Black faces. For mug shots in particular, researchers have found that algorithms generate the highest rates of false matches for African American, Asian and Indigenous people.

After more than two dozen police services, government agencies and private businesses across Canada recently admitted to testing the divisive facial recognition app Clearview AI, experts and advocates say it’s vital that lawmakers and politicians understand how the emerging technology could impact racialized citizens.

“Technologies have their bias as well,” said Nasma Ahmed, director of Toronto-based non-profit Digital Justice Lab, who is advocating for a pause on the use of facial recognition technology until proper oversight is established.

“If they don’t wake up, they’re just going to be on the wrong side of trying to fight this battle … because they didn’t realize how significant the threat or the danger of this technology is,” says Toronto-born Toni Morgan, managing director of the Center for Law, Innovation and Creativity at Northeastern University School of Law in Boston.

“It feels like Toronto is a little bit behind the curve in understanding the implications of what it means for law enforcement to access this technology.”

Last month, the Star revealed that officers at more than 20 police forces across Canada have used Clearview AI, a facial recognition tool that has been described as “dystopian” and “reckless” for its broad search powers. It relies on what the U.S. company has said is a database of three billion photos scraped from the web, including social media.

Almost all police forces that confirmed use of the tool said officers had accessed a free trial version without the knowledge or authorization of police leadership and have been told to stop; the RCMP is the only police service that has paid to access the technology.

Multiple forces say the tool was used by investigators within child exploitation units, but it was also used to probe lesser crimes, including in an auto theft investigation and by a Rexall employee seeking to stop shoplifters.

While a handful of American cities and states have moved to limit or outright ban police use of facial recognition technology, the response from Canadian lawmakers has been muted.

According to client data obtained by BuzzFeed News and shared exclusively with the Star, the Toronto Police Service was the most prolific user of Clearview AI in Canada. (Clearview AI has not responded to multiple requests for comment from the Star but told BuzzFeed there are “numerous inaccuracies” in the client data information, which they allege was “illegally obtained.”)

Toronto police had run more than 3,400 searches since October, according to the BuzzFeed data.

A Toronto police spokesperson has said officers were “informally testing” the technology, but said the force could not verify the Star’s data about officers’ use or “comment on it with any certainty.” Toronto police Chief Mark Saunders directed officers to stop using the tool after he became aware they were using it, and a review is underway.

But Toronto police are still using a different facial recognition tool, one made by NEC Corp. of America and purchased in 2018. The NEC facial recognition tool searches the Toronto police database of approximately 1.5 million mug shot photos.

The National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, has been testing the accuracy of facial recognition technology since 2002. Companies that sell the tools voluntarily submit their algorithms to be tested to NIST; government agencies sponsor the research to help inform policy.

In a report released in December that tested 189 algorithms from 99 developers, NIST found dramatic variations in accuracy across different demographic groups. For one type of matching, the team discovered the systems had error rates between 10 and 100 times higher for African American and Asian faces compared to images of white faces.

For the type of facial recognition matching most likely to be used by law enforcement, African American women had higher error rates.
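The disparity NIST measures can be made concrete with a toy calculation. A false match rate (FMR) is simply the fraction of impostor comparisons wrongly declared a match, computed per demographic group; every number below is invented for illustration, not taken from NIST's data:

```python
# Toy illustration of per-group false match rates (FMR); the numbers
# here are invented -- see NIST's FRVT reports for real measurements.

def false_match_rate(false_matches: int, impostor_pairs: int) -> float:
    """FMR = fraction of impostor comparisons wrongly declared a match."""
    return false_matches / impostor_pairs

# Hypothetical test results for two demographic groups.
groups = {
    "group_a": {"false_matches": 10, "impostor_pairs": 100_000},
    "group_b": {"false_matches": 250, "impostor_pairs": 100_000},
}

rates = {name: false_match_rate(d["false_matches"], d["impostor_pairs"])
         for name, d in groups.items()}

# A 25x gap like this sits inside the 10x-100x range NIST reported.
ratio = rates["group_b"] / rates["group_a"]
print(rates, ratio)
```

The point of the per-group breakdown is that a single headline accuracy number can hide exactly this kind of gap.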

“Law enforcement, they probably have one of the most difficult cases. Because if they miss someone … and that person commits a crime, they’re going to look bad. If they finger the wrong person, they’re going to look bad,” said Craig Watson, manager of the group that runs NIST’s testing program.

Clearview AI has not been tested by NIST. The company has claimed its tool is “100% accurate” in a report written by an “independent review panel.” The panel said it relied on the same methodology the American Civil Liberties Union used to assess a facial recognition algorithm sold by Amazon.

The American Civil Liberties Union slammed the report, calling the claim “misleading” and the tool “dystopian.”

Clearview AI did not respond to a request for comment about its accuracy claims.

Before purchasing the NEC facial recognition technology, Toronto police conducted a privacy impact assessment. Asked if this examined potential racial bias within the NEC’s algorithms, spokesperson Meaghan Gray said in an email the contents of the report are not public.

But she said TPS “has not experienced racial or gender bias when utilizing the NEC Facial Recognition System.”

“While not a means of undisputable positive identification like fingerprint identification, this technology provides ‘potential candidates’ as investigative leads,” she said. “Consequently, one race or gender has not been disproportionally identified nor has the TPS made any false identifications.”

The revelations about Toronto police’s use of Clearview AI have coincided with the planned installation of additional CCTV cameras in communities across the city, including in the Jane Street and Finch Avenue West area. The provincially funded additional cameras come after the Toronto police board approved increasing the number placed around the city.

The combination of facial recognition technology and additional CCTV cameras in a neighbourhood home to many racialized Torontonians is a “recipe for disaster,” said Sam Tecle, a community worker with Jane and Finch’s Success Beyond Limits youth support program.

“One technology feeds the other,” Tecle said. “Together, I don’t know how that doesn’t result in surveillance — more intensified surveillance — of Black and racialized folks.”

Tecle said the plan to install more cameras was asking for a lot of trust from a community that already has a fraught relationship with the police. That’s in large part due to the legacy of carding, he said — when police stop, question and document people not suspected of a crime, a practice that disproportionately impacts Black and brown men.

“This is just a digital form of doing the same thing,” Tecle told the Star. “If we’re misrecognized and misidentified through these facial recognition algorithms, then I’m very apprehensive about them using any kind of facial recognition software.”

Others pointed out that false positives — incorrect matches — could have particularly grave consequences in the context of police use of force: Black people are “grossly over-represented” in cases where Toronto police used force, according to a 2018 report by the Ontario Human Rights Commission.

Saunders has said residents in high-crime areas have repeatedly asked for more CCTV cameras in public spaces. At last month’s Toronto police board meeting, Mayor John Tory passed a motion requiring that police engage in a public community consultation process before installing more cameras.

Gray said many residents and business owners want increased safety measures, and this feedback alongside an analysis of crime trends led the force to identify “selected areas that are most susceptible to firearm-related offences.”

“The cameras are not used for surveillance. The cameras will be used for investigation purposes, post-reported offences or incidents, to help identify potential suspects, and if needed during major events to aid in public safety,” Gray said.

Akwasi Owusu-Bempah, an assistant professor of criminology at the University of Toronto, said when cameras are placed in neighbourhoods with high proportions of racialized people, then used in tandem with facial recognition technology, “it could be problematic, because of false positives and false negatives.”

“What this gets at is the need for continued discussion, debate, and certainly oversight,” Owusu-Bempah said.

Source: Concerns raised after facial recognition software found to have racial bias

Canada must look beyond STEM and diversify its AI workforce

From a visible-minority perspective, based on STEM graduates, representation is reasonably good, as per the chart above, except in engineering, and is particularly strong in math and computer sciences, the closest fields of study to AI.

With respect to gender, the percentage of visible minority women is generally equivalent to that of non-visible minority women or stronger (but women are under-represented in engineering and math/computer sciences):

Artificial intelligence (AI) is expected to add US$15.7 trillion to the global economy by 2030, according to a recent report from PwC, representing a 14 percent boost to global GDP. Countries around the world are scrambling for a piece of the pie, as evidenced by the proliferation of national and regional AI strategies aimed at capturing the promise of AI for future value generation.

Canada has benefited from an early lead in AI, which is often attributed to the Canadian Institute for Advanced Research (CIFAR) having had the foresight to invest in Geoffrey Hinton’s research on deep learning shortly after the turn of the century. As a result, Canada can now tout Montreal as having the highest concentration of researchers and students of deep learning in the world and Toronto as being home to the highest concentration of AI start-ups in the world.

But the market for AI is approaching maturity. A report from McKinsey & Co. suggests that the public and private sectors together have captured only between 10 and 40 percent of the potential value of advances in machine learning. If Canada hopes to maintain a competitive advantage, it must both broaden the range of disciplines and diversify the workforce in the AI sector.

Looking beyond STEM

Strategies aimed at capturing the expected future value of AI have been concentrated on innovation in fundamental research, which is conducted largely in the STEM disciplines: science, technology, engineering and mathematics. But it is the application of this research that will grow market share and multiply value. In order to capitalize on what fundamental research discovers, the AI sector must deepen its ties with the social sciences.

To date the role of social scientists in Canada’s strategy on AI has been largely limited to areas of ethics and public policy. While these are endeavours to which social scientists are particularly well suited, they could be engaged much more broadly with AI. Social scientists are well positioned to identify and exploit potential applications of this research that will generate both social and economic returns on Canada’s investment in AI.

Social scientists take a unique approach to data analysis by drawing on social theory to critically interpret both the inputs and outputs of a given model. They ask what a given model is really telling us about the world and how it arrived at that result. They see potential opportunities in data and digital technology that STEM researchers are not trained to look for.

A recent OECD report looks at the skills that most distinguish innovative from non-innovative workers; chief among them are creativity, critical thinking and communication skills. While these skills are by no means exclusively the domain of the social sciences, they are perhaps more central to social scientific training than to any other discipline.

The social science perspective can serve as a defence mechanism against the potential folly of certain applications of AI. If social scientists had been more involved in early adaptations of computer vision, for example, Google might have been spared the shame of image recognition algorithms that classify people of colour as animals (they certainly would have come up with a better solution). In the same vein, Microsoft’s AI chatbots would have been less likely to spew racist slurs shortly after launch.

Social scientists can also help meet a labour shortage: there are not enough STEM graduates to meet future demand for AI talent. Meanwhile, social science graduates are often underemployed, in part because they do not have the skills necessary to participate in a future of work that privileges expertise in AI. As a consequence, many of the opportunities associated with AI are passing Canada’s social science graduates by. Excluding social science students from Canada’s AI strategy not only narrows their career paths but restricts their opportunities to contribute to fulfilling the societal and economic promise of AI.

Realizing the potential of the social sciences within Canada’s AI ecosystem requires innovative thinking by both governments and universities. Federal and provincial governments should relax restrictions on funding for AI-related research that prohibit applications from social scientists or make them eligible only within interdisciplinary teams that include STEM researchers. This policy has the effect of subordinating social scientific approaches to AI to those of STEM disciplines. In fact, social scientists are just as capable of independent research, and a growing number are already engaged in sophisticated applications of machine learning to address some of the most pressing societal challenges of our time.

Governments must also invest in the development of undergraduate and graduate training opportunities that are specific to the application of AI in the social sciences, using pedagogical approaches that are appropriate for them.

Social science faculties in universities across Canada can also play a crucial role by supporting the development of AI-related skills within their undergraduate and graduate curriculums. At McMaster University, for example, the Faculty of Social Sciences is developing a new degree: master of public policy in digital society. Alongside graduate training in the fundamentals of public policy, the 12-month program will include rigorous training in data science as well as technical training in key digital technologies that are revolutionizing contemporary society. The program, which is expected to launch in 2021, is intended to provide students with a command of digital technologies such as AI necessary to enable them to think creatively and critically about its application to the social world. In addition to the obvious benefit of producing a new generation of policy leadership in AI, the training provided by this program will ensure that its graduates are well positioned for a broader range of leadership opportunities across the public and private sectors.

Increasing workplace diversity

A report released in 2019 by New York University’s AI Now Institute declared that there is a diversity crisis in the AI workforce. This has implications for the sector itself but also for society more broadly, in that the systemic biases within the AI sector are being perpetuated via the myriad touch points that AI has with our everyday lives: it is organizing our online search results and social media news feeds and supporting hiring decisions, and it may even render decisions in some court cases in future.

One of the main findings of the AI Now report was that the widespread strategy of focusing on “women in tech” is too narrow to counter the diversity crisis. In Canada, efforts to diversify AI generally translate to providing advancement opportunities for women in the STEM disciplines. Although the focus of policy-makers on STEM is critical and necessary, it is short-sighted. Disciplinary diversity in AI research not only broadens the horizons for research and commercialization; it also creates opportunities for groups who are underrepresented in STEM to benefit from and contribute to innovations in AI.

As it happens, equity-seeking groups are better represented in the social sciences. According to Statistics Canada, the social sciences and adjacent fields have the highest enrolment of visible minorities. And as of 2017, only 23.7 percent of those enrolled in STEM programs at Canadian universities were women, whereas women were 69.1 percent of participants in the social sciences.

So, engaging the social sciences more substantively in research and training related to AI will itself lead to greater diversity. While advancing this engagement, universities should be careful not to import training approaches directly from statistics or computer science, as these will bring with them some of the cultural context and biases that have resulted in a lack of diversity in those fields to begin with.

Bringing the social sciences into Canada’s AI strategy is a concrete way to demonstrate the strength of diversity, in disciplines as well as demographics. Not only would many social science students benefit from training in AI, but so too would Canada’s competitive advantage in AI benefit from enabling social scientists to effectively translate research into action.

Source: Canada must look beyond STEM and diversify its AI workforce

Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says

Ongoing story, raising legitimate questions regarding the quality and possible bias of the algorithms used. That being said, human decision-making is not bias-free, and using AI, at least in the more straightforward cases, makes sense from an efficiency and timeliness-of-service perspective.

It will be important to ensure appropriate oversight, and there may be a need for an external body to review the algorithms to reduce risks, if one is not already in place:

Tens of thousands of would-be guest workers and international students from China and India are having their fates determined by Canadian computers that are making visa decisions using artificial intelligence.

Even though Immigration Department officials recognize the public is wary about substituting robotic algorithms for human visa officers, the Liberal government plans to greatly expand “automated decision-making” in April of this year, according to an internal report.

“There is significant public anxiety over fairness and privacy associated with Big Data and Artificial Intelligence,” said the 2019 Immigration Department report, obtained under an access to information request. Nevertheless, Ottawa still plans to broaden the automated approval system far beyond the pilot programs it began operating in 2018 to process applicants from India and China.

At a time when Canada is approving more guest workers and foreign students than ever before, immigration lawyers have expressed worry about a lack of transparency in having machines make life-changing decisions about many of the more than 200,000 temporary visas that Canada issues each year.

The internal report reveals departmental reservations about shifting more fully to an automated system — in particular, wondering if machines could be “gamed” by high-risk applicants making false claims about their banking, job, marriage, educational or travel history.

“A system that approves applications without sufficient vetting would raise risks to Canadians, and it is understandable for Canadians to be more concerned about mistakenly approving risky individuals than about mistakenly refusing bona fide candidates,” says the document.

The 25-page report also flags how having robots stand in for humans will have an impact on thousands of visa officers. The new system “will fundamentally change the day-to-day work of decision-makers.”

Immigration Department officials did not respond to questions about the automated visa program.

Vancouver immigration lawyer Richard Kurland says Ottawa’s sweeping plan “to process huge numbers of visas fast and cheap” raises questions about whether an automated “Big Brother” system will be open to scrutiny, or whether it will lead to “Wizard of Oz” decision-making, in which it will be hard to determine who is accountable.

The publisher of the Lexbase immigration newsletter, which uncovered the internal document, was especially concerned that a single official has already “falsely” signed his or her name to countless visa decisions affecting migrants from India and China, without ever having reviewed their specific applications.

“The internal memo shows tens of thousands of visa decisions were signed-off under the name of one employee. If someone pulled that stunt on a visa application, they would be banned from Canada for five years for misrepresentation. It hides the fact it was really a machine that made the call,” said Kurland.

The policy report itself acknowledges that the upcoming shift to “hard-wiring” the visa decision-making process “at a tremendous scale” significantly raises legal risks for the Immigration Department, which it says is already “one of the most heavily litigated in the government of Canada.”

The population of Canada jumped by 560,000 people last year, or 1.5 per cent, the fastest rate of increase in three decades. About 470,000 of that total was made up of immigrants or newcomers arriving on 10-year multiple-entry visas, work visas or study visas.

The senior immigration officials who wrote the internal report repeatedly warn departmental staff that Canadians will be suspicious when they learn about the increasingly automated visa system.

“Keeping a human in the loop is important for public confidence. While human decision making may not be superior to algorithmic systems,” the report said, “human in-the-loop systems currently represent a form of transparency and personal accountability that is more familiar to the public than automated processes.”

In an effort to sell the automated system to a wary populace, the report emphasizes making people aware that the algorithm that decides whether an applicant receives a visa is not random. It’s a computer program governed by certain rules regarding what constitutes a valid visa application.
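A rules-governed program of the kind the report describes can be sketched in a few lines. Everything below is invented for illustration — the field names, thresholds and stream labels are hypothetical, since the actual IRCC criteria are not public:

```python
# Hypothetical sketch of a rule-based visa "streaming" step.
# All rules, fields and thresholds are invented; real criteria are not public.

def stream_application(app: dict) -> str:
    """Assign an application to a scrutiny stream based on fixed rules."""
    if not app.get("documents_complete", False):
        return "refuse_incomplete"           # invalid application
    risk = 0
    if app.get("prior_refusal"):
        risk += 2
    if app.get("funds", 0) < app.get("required_funds", 0):
        risk += 2
    if not app.get("travel_history"):
        risk += 1
    if risk == 0:
        return "fast_lane"                   # routine approval queue
    if risk <= 2:
        return "standard_review"             # human officer reviews
    return "enhanced_scrutiny"               # detailed human review

app = {"documents_complete": True, "prior_refusal": False,
       "funds": 12_000, "required_funds": 10_000, "travel_history": True}
print(stream_application(app))  # fast_lane
```

The sketch also shows where the fairness question lives: whichever signals are chosen as "risk" factors determine who is routed away from the fast lane.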

“A system that provides no real opportunity for officers to reflect is a de facto automated decision-making system, even when officers click the last button,” says the report, which states that flesh-and-blood women and men should still make the rulings on complex or difficult cases — and will also be able to review appeals.

“When a client challenges a decision that was made in full or in part by an automated system, a human officer will review the application. However, the (department) should not proactively offer clients the choice to have a human officer review and decide on their case at the beginning of the application process.”

George Lee, a veteran immigration lawyer in Burnaby, said he had not heard that machines are increasingly taking over from humans in deciding Canadian visa cases. He doesn’t think the public will like it when they learn of it.

“People will say, ‘What are we doing here? Where are the human beings? You can’t do this.’ People are afraid of change. We want to keep the status quo.”

However, Lee said society’s transition towards replacing human workers with robots is “unstoppable. We’re seeing it everywhere.”

Lee believes people will eventually get used to the idea that machines are making vitally important decisions about human lives, including about people’s dreams of migrating to a new country.

“I think the use of robots will become more acceptable down the road,” he said. “Until the robots screw up.”

Source: Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Yet another article emphasizing the risks:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit. One program decided almost everything written about President Trump was negative, even if the actual content was flattering.
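Associations like "men with computer programming" can be measured directly on word vectors, in the style of the Word Embedding Association Test (WEAT): compare how close a profession word sits to male versus female words. The sketch below uses made-up two-dimensional vectors purely to show the arithmetic; real bias tests use embeddings from a trained model:

```python
import math

# Toy 2-d "embeddings", invented for illustration; real WEAT-style tests
# use vectors taken from a trained model such as word2vec or BERT.
vecs = {
    "he":         (1.0, 0.1),
    "she":        (-1.0, 0.1),
    "programmer": (0.8, 0.6),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Positive score: "programmer" sits closer to "he" than to "she".
bias = cosine(vecs["programmer"], vecs["he"]) - cosine(vecs["programmer"], vecs["she"])
print(round(bias, 3))
```

With these toy vectors the score comes out positive, which is the shape of the result researchers report for real embeddings trained on web text.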

AI system for granting UK visas is biased, rights groups claim

Always a challenge with AI, ensuring that the algorithms do not replicate or create bias:

Immigrant rights campaigners have begun a ground-breaking legal case to establish how a Home Office algorithm that filters UK visa applications actually works.

The challenge is the first court bid to expose how an artificial intelligence program affects immigration policy decisions over who is allowed to enter the country.

Foxglove, a new advocacy group promoting justice in the new technology sector, is supporting the case brought by the Joint Council for the Welfare of Immigrants (JCWI) to legally force the Home Office to explain on what basis the algorithm “streams” visa applicants.

The two groups both said they feared the AI “streaming tool” created three channels for applicants including a “fast lane” that would lead to “speedy boarding for white people”.

The Home Office has insisted that the algorithm is used only to allocate applications and does not ultimately rule on them. The final decision remains in the hands of human caseworkers and not machines, it said.

A spokesperson for the Home Office said: “We have always used processes that enable UK Visas and Immigration to allocate cases in an efficient way.

“The streaming tool is only used to allocate applications, not to decide them. It uses data to indicate whether an application might require more or less scrutiny and it complies fully with the relevant legislation under the Equalities Act 2010.”

Cori Crider, a director at Foxglove, rejected the Home Office’s defence of the AI system.

Source: AI system for granting UK visas is biased, rights groups claim

Beware of Automated Hiring It won’t end employment discrimination. In fact, it could make it worse.

Some interesting ideas to reduce the risks of bias and discrimination:

Algorithms make many important decisions for us, like our creditworthiness, best romantic prospects and whether we are qualified for a job. Employers are increasingly turning to automated hiring platforms, believing they’re both more convenient and less biased than humans. However, as I describe in a new paper, this is misguided.

In the past, a job applicant could walk into a clothing store, fill out an application, and even hand it straight to the hiring manager. Nowadays, her application must make it through an obstacle course of online hiring algorithms before it might be considered. This is especially true for low-wage and hourly workers.

The situation applies to white-collar jobs too. People applying to be summer interns and first-year analysts at Goldman Sachs have their résumés digitally scanned for keywords that can predict success at the company. And the company has now embraced automated interviewing.

Automated hiring can create a closed loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.
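The closed loop described above can be simulated in a few lines. In this toy sketch (all numbers and mechanisms are invented for illustration), ads are targeted at people who resemble past hires and the screening model adds a small "fit" bonus for the majority group of past hires, so an initial skew compounds round after round:

```python
import random

# Toy simulation of a hiring feedback loop; every number is invented.
# Ads are shown to an applicant pool mirroring past-hire composition, and
# the screener gives a small score bonus to the past-hire majority group,
# so an initial skew toward group "A" amplifies itself over rounds.
random.seed(42)

share_a = 0.6  # group A's share among past hires (initial skew)
for rnd in range(6):
    # ad targeting: applicant pool mirrors past-hire composition
    pool = ["A" if random.random() < share_a else "B" for _ in range(10_000)]
    majority = "A" if share_a >= 0.5 else "B"
    hires = []
    for group in pool:
        score = random.random()               # underlying qualification
        if group == majority:
            score += 0.1                      # learned "fit" bonus
        if score > 0.7:
            hires.append(group)
    share_a = hires.count("A") / len(hires)   # results loop back as training data
    print(f"round {rnd}: group A share of hires = {share_a:.2f}")
```

Running it shows group A's share of hires climbing each round even though the qualification distribution is identical for both groups — exactly the untransparent, unaccountable loop the author warns about.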