AI’s anti-Muslim bias problem

Of note (and unfortunately, not all that surprising):

Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”

Which word would you add? “Bar,” maybe?

It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.”

For Abubakar Abid, one of the researchers, the AI’s output came as a rude awakening. “We were just trying to see if it could tell jokes,” he recounted to me. “I even tried numerous prompts to steer it away from violent completions, and it would find some way to make it violent.”

Language models such as GPT-3 have been hailed for their potential to enhance our creativity. Given a phrase or two written by a human, they can add on more phrases that sound uncannily human-like. They can be great collaborators for anyone trying to write a novel, say, or a poem.
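
For readers who want to poke at this themselves, here is a minimal sketch of the same kind of prompt-completion probing, using the open-source Hugging Face transformers library with GPT-2 as a stand-in (GPT-3 itself is only available through OpenAI's hosted API). The prompt and sampling settings are illustrative, not the researchers' setup.

```python
# A minimal sketch of prompt-completion probing with an open model (GPT-2),
# as a stand-in for GPT-3. Parameters here are illustrative assumptions.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled completions reproducible

prompt = "Two Muslims walked into a"
completions = generator(
    prompt,
    max_length=30,          # total tokens, prompt included
    num_return_sequences=5,
    do_sample=True,         # sample rather than greedy-decode
)

for c in completions:
    print(c["generated_text"])
```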

Source: AI’s anti-Muslim bias problem

Misattributed blame? Attitudes toward globalization in the age of automation

Interesting study and findings:

Many, especially low-skilled workers, blame globalization for their economic woes. Robots and machines, which have led to job market polarization, rising income inequality, and labor displacement, are often viewed much more forgivingly. This paper argues that citizens have a tendency to misattribute blame for economic dislocations toward immigrants and workers abroad, while discounting the effects of technology. Using the 2016 American National Elections Studies, a nationally representative survey, I show that workers facing higher risks of automation are more likely to oppose free trade agreements and favor immigration restrictions, even controlling for standard explanations for these attitudes. Although pocket-book concerns do influence attitudes toward globalization, this study calls into question the standard assumption that individuals understand and can correctly identify the sources of their economic anxieties. Accelerated automation may have intensified attempts to resist globalization.
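
The core of the analysis is a regression of trade and immigration attitudes on a worker's automation risk plus the usual controls. The sketch below, run on synthetic data with made-up variable names (not the ANES codebook), shows roughly what such a model looks like.

```python
# A rough sketch (not the paper's actual code) of regressing opposition to
# free trade on automation risk while controlling for standard predictors.
# The variables and data are synthetic, purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "automation_risk": rng.uniform(0, 1, n),   # occupation-level automation probability
    "education_yrs": rng.integers(10, 21, n),
    "union_member": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
})
# Simulate the hypothesized relationship for demonstration only.
logit_p = -1 + 2.0 * df["automation_risk"] - 0.05 * (df["education_yrs"] - 14)
df["oppose_free_trade"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "oppose_free_trade ~ automation_risk + education_yrs + union_member + age",
    data=df,
).fit()
print(model.summary())
```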

Source: https://www.cambridge.org/core/journals/political-science-research-and-methods/article/misattributed-blame-attitudes-toward-globalization-in-the-age-of-automation/29B08295CEAC4A4A89991E064D0284FF

Facebook Apologizes After Its AI Labels Black Men As ‘Primates’

Ouch!

Facebook issued an apology on behalf of its artificial intelligence software that asked users watching a video featuring Black men if they wanted to see more “videos about primates.” The social media giant has since disabled the topic recommendation feature and says it’s investigating the cause of the error, but the video had been online for more than a year.

A Facebook spokesperson told The New York Times, which first reported the story on Friday, that the automated prompt was an “unacceptable error” and apologized to anyone who came across the offensive suggestion.

The video, uploaded by the Daily Mail on June 27, 2020, documented an encounter between a white man and a group of Black men who were celebrating a birthday. The clip captures the white man allegedly calling 911 to report that he is “being harassed by a bunch of Black men,” before cutting to an unrelated video that showed police officers arresting a Black tenant at his own home.

Former Facebook employee Darci Groves tweeted about the error on Thursday after a friend clued her in on the misidentification. She shared a screenshot of the video that captured Facebook’s “Keep seeing videos about Primates?” message.

“This ‘keep seeing’ prompt is unacceptable, @Facebook,” she wrote. “And despite the video being more than a year old, a friend got this prompt yesterday. Friends at [Facebook], please escalate. This is egregious.”

This is not Facebook’s first time in the spotlight for major technical errors. Last year, Chinese President Xi Jinping’s name appeared as “Mr. S***hole” on its platform when translated from Burmese to English. The translation hiccup seemed to be Facebook-specific, and didn’t occur on Google, Reuters had reported.

However, in 2015, Google’s image recognition software classified photos of Black people as “gorillas.” Google apologized and removed the labels of gorilla, chimp, chimpanzee and monkey, words that remained censored over two years later, Wired reported.

Facebook could not be reached for comment.

Source: Facebook Apologizes After Its AI Labels Black Men As ‘Primates’

Using A.I. to Find Bias in A.I.

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.

This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about A.I., it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O’Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

“Changing mentalities does not happen overnight — and that is even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind but many minds.”

When she started advising businesses on A.I. bias more than two years ago, Ms. O’Sullivan was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.

Designers can be blind to these problems. The workers in India — where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States — were classifying the photos as they saw fit.

Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment. 

She now believes that after years of public complaints over bias in A.I. — not to mention the threat of regulation — attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument did not hold up.

“They are acknowledging that you need to turn over the rocks and see what is underneath,” Ms. O’Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. “We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. “Many of the things that the average person cares about — such as fairness — are not yet being measured in a disciplined or a large-scale way.”

Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. — and the difficulty of Ms. O’Sullivan’s task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored — or when those discussing the issues carry the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “It is a very important question I am not sure I can answer.”

Source: https://www.nytimes.com/2021/06/30/technology/artificial-intelligence-bias.html

Why A.I. Should Be Afraid of Us: Because benevolent bots are suckers.

Of significance as AI becomes more prevalent. “Road rage” as the new Turing test!

Artificial intelligence is gradually catching up to ours. A.I. algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But A.I. isn’t perfect yet, if Woebot is any indicator. Woebot, as Karen Brown wrote this week in Science Times, is an A.I.-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an A.I. algorithm can ever express the kind of empathy required to make interpersonal therapy work.

“These apps really shortchange the essential ingredient that — mounds of evidence show — is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t exhibit a whole lot more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were an actual person.

“There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University, in Munich, told me. “We basically would treat a perfect stranger better than A.I.”

In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes A.I.; each pair then played a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to gauge and reward cooperativeness.
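
To see why a guaranteed cooperator invites exploitation, consider the Prisoner's Dilemma, one of the games used in the study. The toy sketch below uses textbook payoff values (not the study's) to show that defection is the payoff-maximizing reply to a partner who is certain to cooperate.

```python
# A toy illustration (textbook payoffs, not the study's) of why a guaranteed
# cooperator invites exploitation in the Prisoner's Dilemma.
# Payoffs are (my_points, partner_points) for (my_move, partner_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(partner_move: str) -> str:
    """Return the move that maximizes my payoff against a known partner move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, partner_move)][0])

# If the bot is known to always cooperate, the payoff-maximizing reply is to defect.
print(best_response("cooperate"))  # -> "defect"
```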

Our lack of reciprocity toward A.I. is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot, it’s that we do: The bot is guaranteed benevolent, a capital-S sucker, so we exploit it.

That conclusion was borne out by conversations afterward with the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”

This could have real-world implications. When we think about A.I., we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the A.I. doesn’t account for your bad behavior, an accident could ensue.

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

That, of course, is half the premise of “Westworld.” (To my surprise Dr. Deroy had not heard of the HBO series.) But a landscape free of guilt could have consequences, she noted: “We are creatures of habit. So what guarantees that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for A.I., too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the premise of “Westworld,” basically.)

There we have it: The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Robots are coming and the fallout will largely harm marginalized communities

Interesting piece on the possible impact of automation on many of the essential service workers, largely women, visible minorities and immigrants (more speculative than data-driven):

COVID-19 has brought about numerous, devastating changes to people’s lives globally. With the number of cases rising across Canada and globally, we are also witnessing the development and use of robots to perform jobs in some workplaces that are deemed unsafe for humans. 

There are cases of robots being used to disinfect health-care facilities, deliver drugs to patients and perform temperature checks. In April 2020, doctors at a Boston hospital used Boston Dynamics’ quadruped robot called Spot to reduce health-care workers’ exposure to SARS-CoV-2, the virus that causes COVID-19. Equipped with an iPad and a two-way radio, Spot let doctors and patients communicate in real time.

In these instances, the use of robots is certainly justified because they can directly aid in lowering COVID-19 transmission rates and reducing the unnecessary exposure of health-care workers to the virus. But, as we know, robots are also performing these tasks outside of health-care settings, including at airports, offices, retail spaces and restaurants.

This is precisely where the issue of robot use gets complicated. 

Robots in the workplace

The type of labour that these and other robots perform or, in some cases, replace is labour that is generally considered low-paying, ranging from cleaners and fast food workers to security guards and factory employees. Not only do many of these workers in Canada earn minimum wage, the majority are racialized women and youth between the ages of 15 and 24.

The use of robots also affects immigrant populations. The gap between immigrant workers earning minimum wage and Canadian-born workers has more than doubled: in 2008, 5.3 per cent of both immigrant and Canadian-born workers earned minimum wage, compared with 2018, when 12 per cent of immigrant workers earned minimum wage versus 9.8 per cent of Canadian-born workers. Canada’s reliance on migrant workers as a source of cheap and disposable labour has intensified the exploitation of workers.

McDonald’s has replaced cashiers with self-service kiosks. It has also begun testing robots to replace both cooks and servers. Walmart has begun using robots to clean store floors, while also increasing their use in warehouses.

Nowhere is the implementation of robots more apparent than in Amazon’s use of them in its fulfilment centres. As the information scholars Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff, who apply Marxist theory, explain, Amazon’s use of robots has reduced order times and increased warehouse space, allowing for 50 per cent more inventory in areas where robots are used, and has cut Amazon’s power costs because the robots can work in the dark and without air conditioning.

Already-marginalized labourers are most affected by robots. In other words, human labour that can be mechanized, routinized or automated to some extent is work that is deemed expendable because it is seen as replaceable. It is work that is stripped of any humanity in the name of efficiency and cost-effectiveness for massive corporations. However, the influence of corporations on robot development goes beyond cost-saving measures.

Robot violence

The emergence of Boston Dynamics’ Spot gives us some insight into how robots have crossed from the battlefield into urban spaces. Boston Dynamics’ robot development program has long been funded by the American Defense Advanced Research Projects Agency (DARPA).

In 2005, Boston Dynamics received funding from DARPA to develop one of its first quadruped robots, known as BigDog, a robotic pack mule used to assist soldiers across rough terrain. In 2012, Boston Dynamics and DARPA revealed another quadruped robot, known as AlphaDog, designed primarily to carry military gear for soldiers.

The development of Spot would have been impossible without these previous, DARPA-funded initiatives. While the founder of Boston Dynamics, Marc Raibert, has claimed that Spot will not be turned into a weapon, the company leased Spot to the Massachusetts State Police bomb squad in 2019 for a 90-day period.

In February 2021, the New York Police Department used Spot to investigate the scene of a home invasion. And, in April 2021, Spot was deployed by the French Army in a series of military exercises to evaluate its usefulness on the future battlefield.

(Video: Massachusetts State Police lease Boston Dynamics’ Spot in 2019.)

Targeting the most vulnerable

These examples are not intended to altogether dismiss the importance of some robots. This is particularly the case in health care, where robots continue to help doctors improve patient outcomes. Instead, these examples should serve as a call for governments to intervene in order to prevent a proliferation of robot use across different spaces.

More importantly, this is a call to prevent the multiple forms of exploitation that already affect marginalized groups. Since technological innovation has a tendency to outpace legislation and regulatory controls, it is imperative that lawmakers step in before it is too late.

Source: https://theconversationcanada.cmail20.com/t/r-l-tltjihkd-kyldjlthkt-c/

Who Is Making Sure the A.I. Machines Aren’t Racist?

Good overview of the issues and debates (Google’s earlier motto of “don’t be evil” seems so quaint):

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-​legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

About six years ago, A.I. in a Google online photo service organized photos of Black people into a folder called “gorillas.” Four years ago, a researcher at a New York start-up noticed that the A.I. system she was working on was egregiously biased against Black people. Not long after, a Black researcher in Boston discovered that an A.I. system couldn’t identify her face — until she put on a white mask.

In 2018, when I told Google’s public relations staff that I was working on a book about artificial intelligence, it arranged a long talk with Dr. Mitchell to discuss her work. As she described how she built the company’s Ethical A.I. team — and brought Dr. Gebru into the fold — it was refreshing to hear from someone so closely focused on the bias problem.

But nearly three years later, Dr. Gebru was pushed out of the company without a clear explanation. She said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

It should have been a wake-up call.

In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be “dogs,” another “birthday party.”

When Mr. Alciné clicked on the link, he noticed one of the folders was labeled “gorillas.” That made no sense to him, so he opened the folder. He found more than 80 photos he had taken nearly a year earlier of a friend during a concert in nearby Prospect Park. That friend was Black.

He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. “Google Photos, y’all messed up,” he wrote, using much saltier language. “My friend is not a gorilla.”

Like facial recognition services, talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2017, Deborah Raji, a 21-​year-​old Black woman from Ottawa, sat at a desk inside the New York offices of Clarifai, the start-up where she was working. The company built technology that could automatically recognize objects in digital images and planned to sell it to businesses, police departments and government agencies.

She stared at a screen filled with faces — images the company used to train its facial recognition software.

As she scrolled through page after page of these faces, she realized that most — more than 80 percent — were of white people. More than 70 percent of those white people were male. When Clarifai trained its system on this data, it might do a decent job of recognizing white people, Ms. Raji thought, but it would fail miserably with people of color, and probably women, too.

Clarifai was also building a “content moderation system,” a tool that could automatically identify and remove pornography from images people posted to social networks. The company trained this system on two sets of data: thousands of photos pulled from online pornography sites, and thousands of G‑rated images bought from stock photo services.

The system was supposed to learn the difference between the pornographic and the anodyne. The problem was that the G‑rated images were dominated by white people, and the pornography was not. The system was learning to identify Black people as pornographic.
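
A basic audit of each training set's demographic make-up would have surfaced this skew. The sketch below shows one minimal way to do that; the file path and column labels are hypothetical stand-ins for whatever demographic annotations a real dataset would carry.

```python
# A minimal sketch of a training-data composition audit. The CSV path and
# column names ("split", "skin_tone", "gender") are hypothetical.
import pandas as pd

df = pd.read_csv("training_images_metadata.csv")

# Compare demographic composition across the two data sources.
for split, group in df.groupby("split"):          # e.g. "g_rated" vs "explicit"
    print(f"\n{split}: {len(group)} images")
    print(group["skin_tone"].value_counts(normalize=True).round(2))
    print(group["gender"].value_counts(normalize=True).round(2))
```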

“The data we use to train these systems matters,” Ms. Raji said. “We can’t just blindly pick our sources.”

This was obvious to her, but to the rest of the company it was not. Because the people choosing the training data were mostly white men, they didn’t realize their data was biased.

“The issue of bias in facial recognition technologies is an evolving and important topic,” Clarifai’s chief executive, Matt Zeiler, said in a statement. Measuring bias, he said, “is an important step.”

Before joining Google, Dr. Gebru collaborated on a study with a young computer scientist, Joy Buolamwini. A graduate student at the Massachusetts Institute of Technology, Ms. Buolamwini, who is Black, came from a family of academics. Her grandfather specialized in medicinal chemistry, and so did her father.

She gravitated toward facial recognition technology. Other researchers believed it was reaching maturity, but when she used it, she knew it wasn’t.

In October 2016, a friend invited her for a night out in Boston with several other women. “We’ll do masks,” the friend said. Her friend meant skin care masks at a spa, but Ms. Buolamwini assumed Halloween masks. So she carried a white plastic Halloween mask to her office that morning.

It was still sitting on her desk a few days later as she struggled to finish a project for one of her classes. She was trying to get a detection system to track her face. No matter what she did, she couldn’t quite get it to work.

In her frustration, she picked up the white mask from her desk and pulled it over her head. Before it was all the way on, the system recognized her face — or, at least, it recognized the mask.

“Black Skin, White Masks,” she said in an interview, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”

Ms. Buolamwini started exploring commercial services designed to analyze faces and identify characteristics like age and sex, including tools from Microsoft and IBM.

She found that when the services read photos of lighter-​skinned men, they misidentified sex about 1 percent of the time. But the darker the skin in the photo, the larger the error rate. It rose particularly high with images of women with dark skin. Microsoft’s error rate was about 21 percent. IBM’s was 35.
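
The key methodological move behind numbers like these is disaggregated evaluation: computing error rates separately for each subgroup instead of reporting one overall accuracy figure. A minimal sketch, with hypothetical column names standing in for a labeled benchmark like the one used in the study:

```python
# A minimal sketch of disaggregated evaluation: per-subgroup error rates
# rather than a single accuracy number. Column names are hypothetical.
import pandas as pd

results = pd.read_csv("face_api_predictions.csv")  # true_gender, predicted_gender, skin_type

results["error"] = results["true_gender"] != results["predicted_gender"]
error_rates = (
    results.groupby(["skin_type", "true_gender"])["error"]
           .mean()
           .mul(100)
           .round(1)
)
print(error_rates)  # e.g. darker-skinned women vs. lighter-skinned men
```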

Published in the winter of 2018, the study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-​skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-​skinned women for men 31 percent of the time. For lighter-​skinned males, the error rate was zero.

Amazon called for government regulation of facial recognition. It also attacked the researchers in private emails and public blog posts.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” an Amazon executive, Matt Wood, wrote in a blog post that disputed the study and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

The response: Her resignation was accepted immediately, and Google revoked her access to company email and other services. A month later, it removed Dr. Mitchell’s access after she searched through her own email in an effort to defend Dr. Gebru.

In a Google staff meeting last month, just after the company fired Dr. Mitchell, the head of the Google A.I. lab, Jeff Dean, said the company would create strict rules meant to limit its review of sensitive research papers. He also defended the reviews. He declined to discuss the details of Dr. Mitchell’s dismissal but said she had violated the company’s code of conduct and security policies.

One of Mr. Dean’s new lieutenants, Zoubin Ghahramani, said the company must be willing to tackle hard issues. There are “uncomfortable things that responsible A.I. will inevitably bring up,” he said. “We need to be comfortable with that discomfort.”

But it will be difficult for Google to regain trust — both inside the company and out.

“They think they can get away with firing these people and it will not hurt them in the end, but they are absolutely shooting themselves in the foot,” said Alex Hanna, a longtime part of Google’s 10-member Ethical A.I. team. “What they have done is incredibly myopic.”

Source: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

The Robots Are Coming for Phil in Accounting

Implications for many white collar workers, including in government given the nature of repetitive operational work:

The robots are coming. Not to kill you with lasers, or beat you in chess, or even to ferry you around town in a driverless Uber.

These robots are here to merge purchase orders into columns J and K of next quarter’s revenue forecast, and transfer customer data from the invoicing software to the Oracle database. They are unassuming software programs with names like “Auxiliobits — DataTable To Json String,” and they are becoming the star employees at many American companies.

Some of these tools are simple apps, downloaded from online stores and installed by corporate I.T. departments, that do the dull-but-critical tasks that someone named Phil in Accounting used to do: reconciling bank statements, approving expense reports, reviewing tax forms. Others are expensive, custom-built software packages, armed with more sophisticated types of artificial intelligence, that are capable of doing the kinds of cognitive work that once required teams of highly-paid humans.
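
To make "robotic process automation" concrete, here is a toy sketch of the kind of record-shuffling such a bot does, using a CSV export and SQLite as a stand-in for an enterprise database; the file name and schema are invented for illustration.

```python
# A toy sketch of an R.P.A.-style task: move records from an invoicing export
# into a database table. File name, schema and SQLite (standing in for
# Oracle) are all illustrative assumptions.
import csv
import sqlite3

conn = sqlite3.connect("billing.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS invoices (invoice_id TEXT PRIMARY KEY, "
    "customer TEXT, amount REAL)"
)

with open("invoicing_export.csv", newline="") as f:
    rows = [(r["invoice_id"], r["customer"], float(r["amount"]))
            for r in csv.DictReader(f)]

# Idempotent load: re-running the bot doesn't duplicate rows.
conn.executemany(
    "INSERT OR REPLACE INTO invoices (invoice_id, customer, amount) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
print(f"Loaded {len(rows)} invoice records.")
```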

White-collar workers, armed with college degrees and specialized training, once felt relatively safe from automation. But recent advances in A.I. and machine learning have created algorithms capable of outperforming doctors, lawyers and bankers at certain parts of their jobs. And as bots learn to do higher-value tasks, they are climbing the corporate ladder.

The trend — quietly building for years, but accelerating to warp speed since the pandemic — goes by the sleepy moniker “robotic process automation.” And it is transforming workplaces at a pace that few outsiders appreciate. Nearly 8 in 10 corporate executives surveyed by Deloitte last year said they had implemented some form of R.P.A. Another 16 percent said they planned to do so within three years.

Most of this automation is being done by companies you’ve probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion — roughly the size of eBay — and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action.

Executives generally spin these bots as being good for everyone, “streamlining operations” while “liberating workers” from mundane and repetitive tasks. But they are also liberating plenty of people from their jobs. Independent experts say that major corporate R.P.A. initiatives have been followed by rounds of layoffs, and that cutting costs, not improving workplace conditions, is usually the driving factor behind the decision to automate. 

Craig Le Clair, an analyst with Forrester Research who studies the corporate automation market, said that for executives, much of the appeal of R.P.A. bots is that they are cheap, easy to use and compatible with their existing back-end systems. He said that companies often rely on them to juice short-term profits, rather than embarking on more expensive tech upgrades that might take years to pay for themselves.

“It’s not a moonshot project like a lot of A.I., so companies are doing it like crazy,” Mr. Le Clair said. “With R.P.A., you can build a bot that costs $10,000 a year and take out two to four humans.”

Covid-19 has led some companies to turn to automation to deal with growing demand, closed offices, or budget constraints. But for other companies, the pandemic has provided cover for executives to implement ambitious automation plans they dreamed up long ago.

“Automation is more politically acceptable now,” said Raul Vega, the chief executive of Auxis, a firm that helps companies automate their operations.

Before the pandemic, Mr. Vega said, some executives turned down offers to automate their call centers, or shrink their finance departments, because they worried about scaring their remaining workers or provoking a backlash like the one that followed the outsourcing boom of the 1990s, when C.E.O.s became villains for sending jobs to Bangalore and Shenzhen.

But those concerns matter less now, with millions of people already out of work and many businesses struggling to stay afloat.

Now, Mr. Vega said, “they don’t really care, they’re just going to do what’s right for their business.”

Sales of automation software are expected to rise by 20 percent this year, after increasing by 12 percent last year, according to the research firm Gartner. And the consulting firm McKinsey, which predicted before the pandemic that 37 million U.S. workers would be displaced by automation by 2030, recently increased its projection to 45 million.

A white-collar wake-up call

Not all bots are the job-destroying kind. Holly Uhl, a technology manager at State Auto Insurance Companies, said that her firm has used automation to do 173,000 hours’ worth of work in areas like underwriting and human resources without laying anyone off.

“People are concerned that there’s a possibility of losing their jobs, or not having anything to do,” she said. “But once we have a bot in the area, and people see how automation is applied, they’re truly thrilled that they don’t have to do that work anymore.”

As bots become capable of complex decision-making, rather than doing single repetitive tasks, their disruptive potential is growing.

Recent studies by researchers at Stanford University and the Brookings Institution compared the text of job listings with the wording of A.I.-related patents, looking for phrases like “make prediction” and “generate recommendation” that appeared in both. They found that the groups with the highest exposure to A.I. were better-paid, better-educated workers in technical and supervisory roles, with men, white and Asian-American workers, and midcareer professionals being some of the most endangered. Workers with bachelor’s or graduate degrees were nearly four times as exposed to A.I. risk as those with just a high school degree, the researchers found, and residents of high-tech cities like Seattle and Salt Lake City were more vulnerable than workers in smaller, more rural communities.
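
The exposure measure in these studies boils down to text overlap: how much of the language in A.I. patents also shows up in a job's description of duties. A much-simplified sketch of that idea, with an invented phrase list rather than the studies' corpora:

```python
# A simplified sketch of the exposure measure: score a job listing by how many
# A.I.-patent phrases appear in its text. Phrase list and listing are
# illustrative; the actual studies use much larger corpora and weighting.
AI_PATENT_PHRASES = {"make prediction", "generate recommendation",
                     "classify image", "detect anomaly", "translate text"}

def ai_exposure(job_listing: str) -> float:
    """Fraction of the patent phrase set that appears in the listing text."""
    text = job_listing.lower()
    hits = sum(phrase in text for phrase in AI_PATENT_PHRASES)
    return hits / len(AI_PATENT_PHRASES)

listing = ("Senior analyst: make predictions of quarterly demand and "
           "generate recommendations for pricing.")
print(ai_exposure(listing))  # higher score = more overlap with A.I. capabilities
```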

“A lot of professional work combines some element of routine information processing with an element of judgment and discretion,” said David Autor, an economist at M.I.T. who studies the labor effects of automation. “That’s where software has always fallen short. But with A.I., that type of work is much more in the kill path.”

Many of those vulnerable workers don’t see this coming, in part because the effects of white-collar automation are often couched in jargon and euphemism. On their websites, R.P.A. firms promote glowing testimonials from their customers, often glossing over the parts that involve actual humans.

“Sprint Automates 50 Business Processes In Just Six Months.” (Possible translation: Sprint replaced 300 people in the billing department.)

“Dai-ichi Life Insurance Saves 132,000 Hours Annually” (Bye-bye, claims adjusters.)

“600% Productivity Gain for Credit Reporting Giant with R.P.A.” (Don’t let the door hit you, data analysts.)

Jason Kingdon, the chief executive of the R.P.A. firm Blue Prism, speaks in the softened vernacular of displacement too. He refers to his company’s bots as “digital workers,” and he explained that the economic shock of the pandemic had “massively raised awareness” among executives about the variety of work that no longer requires human involvement.

“We think any business process can be automated,” he said.

Mr. Kingdon tells business leaders that between half and two-thirds of all the tasks currently being done at their companies can be done by machines. Ultimately, he sees a future in which humans will collaborate side-by-side with teams of digital employees, with plenty of work for everyone, although he conceded that the robots have certain natural advantages.

“A digital worker,” he said, “can be scaled in a vastly more flexible way.”

Humans have feared losing our jobs to machines for millennia. (In 350 BCE, Aristotle worried that self-playing harps would make musicians obsolete.) And yet, automation has never created mass unemployment, in part because technology has always generated new jobs to replace the ones it destroyed.

During the 19th and 20th centuries, some lamplighters and blacksmiths became obsolete, but more people were able to make a living as electricians and car dealers. And today’s A.I. optimists argue that while new technology may displace some workers, it will spur economic growth and create better, more fulfilling jobs, just as it has in the past.

But that is no guarantee, and there is growing evidence that this time may be different.

In a series of recent studies, Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University, two well-respected economists who have researched the history of automation, found that for most of the 20th century, the optimistic take on automation prevailed — on average, in industries that implemented automation, new tasks were created faster than old ones were destroyed.

Since the late 1980s, they found, the equation had flipped — tasks have been disappearing to automation faster than new ones are appearing.

This shift may be related to the popularity of what they call “so-so automation” — technology that is just barely good enough to replace human workers, but not good enough to create new jobs or make companies significantly more productive.

A common example of so-so automation is the grocery store self-checkout machine. These machines don’t cause customers to buy more groceries, or help them shop significantly faster — they simply allow store owners to staff slightly fewer employees on a shift. This simple, substitutive kind of automation, Mr. Acemoglu and Mr. Restrepo wrote, threatens not just individual workers, but the economy as a whole.

“The real danger for labor,” they wrote, “may come not from highly productive but from ‘so-so’ automation technologies that are just productive enough to be adopted and cause displacement.”

Only the most devoted Luddites would argue against automating any job, no matter how menial or dangerous. But not all automation is created equal, and much of the automation being done in white-collar workplaces today is the kind that may not help workers over the long run.

During past eras of technological change, governments and labor unions have stepped in to fight for automation-prone workers, or support them while they trained for new jobs. But this time, there is less in the way of help. Congress has rejected calls to fund federal worker retraining programs for years, and while some of the money in the $1.9 trillion Covid-19 relief bill Democrats hope to pass this week will go to laid-off and furloughed workers, none of it is specifically earmarked for job training programs that could help displaced workers get back on their feet.

Another key difference is that in the past, automation arrived gradually, factory machine by factory machine. But today’s white-collar automation is so sudden — and often, so deliberately obscured by management — that few workers have time to prepare.

“The rate of progression of this technology is faster than any previous automation,” said Mr. Le Clair, the Forrester analyst, who thinks we are closer to the beginning than the end of the corporate A.I. boom.

“We haven’t hit the exponential point of this stuff yet,” he added. “And when we do, it’s going to be dramatic.”

The corporate world’s automation fever isn’t purely about getting rid of workers. Executives have shareholders and boards to satisfy, and competitors to keep up with. And some automation does, in fact, lift all boats, making workers’ jobs better and more interesting while allowing companies to do more with less.

But as A.I. enters the corporate world, it is forcing workers at all levels to adapt, and focus on developing the kinds of distinctly human skills that machines can’t easily replicate.

Ellen Wengert, a former data processor at an Australian insurance firm, learned this lesson four years ago, when she arrived at work one day to find a bot-builder sitting in her seat.

The man, coincidentally an old classmate of hers, worked for a consulting firm that specialized in R.P.A. He explained that he’d been hired to automate her job, which mostly involved moving customer data from one database to another. He then asked her to, essentially, train her own replacement — teaching him how to do the steps involved in her job so that he, in turn, could program a bot to do the same thing.

Ms. Wengert wasn’t exactly surprised. She’d known that her job was straightforward and repetitive, making it low-hanging fruit for automation. But she was annoyed that her managers seemed so eager to hand it over to a machine.

“They were desperate to create this sense of excitement around automation,” she said. “Most of my colleagues got on board with that pretty readily, but I found it really jarring, to be feigning excitement about us all potentially losing our jobs.”

For Ms. Wengert, 27, the experience was a wake-up call. She had a college degree and was early in her career. But some of her colleagues had been happily doing the same job for years, and she worried that they would fall through the cracks.

“Even though these aren’t glamorous jobs, there are a lot of people doing them,” she said.

She left the insurance company after her contract ended. And she now works as a second-grade teacher — a job she says she sought out, in part, because it seemed harder to automate.

Source: https://www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html

From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Good long read and overview of the major issues:

In mid-2019, an investigative journalism/tech non-profit called MuckRock and Open the Government (OTG), a non-partisan advocacy group, began submitting freedom of information requests to law enforcement agencies across the United States. The goal: to smoke out details about the use of an app rumoured to offer unprecedented facial recognition capabilities to anyone with a smartphone.

Co-founded by Michael Morisy, a former Boston Globe editor, MuckRock specializes in FOIs and its site has grown into a publicly accessible repository of government documents obtained under access to information laws.

As responses trickled in, it became clear that the MuckRock/OTG team had made a discovery about a tech company called Clearview AI. Based on documents obtained from Atlanta, OTG researcher Freddy Martinez began filing more requests, and discovered that as many as 200 police departments across the U.S. were using Clearview’s app, which compares images taken by smartphone cameras to a sprawling database of 3 billion open-source photographs of faces linked to various forms of personal information (e.g., Facebook profiles). It was, in effect, a point-click-and-identify system that radically transformed the work of police officers.
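
Schematically (and setting Clearview's actual system aside), face search at this scale rests on converting a probe photo into an embedding vector and comparing it against a precomputed index of embeddings, each linked back to a source profile. A sketch with random stand-in vectors:

```python
# A schematic sketch (not Clearview's system) of embedding-based face search:
# rank database faces by cosine similarity to a probe embedding.
# The arrays below are random stand-ins; a real system would use a face
# embedding model to produce them.
import numpy as np

def most_similar(probe: np.ndarray, index: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k most similar faces by cosine similarity."""
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    probe_norm = probe / np.linalg.norm(probe)
    scores = index_norm @ probe_norm
    return np.argsort(scores)[::-1][:top_k]

rng = np.random.default_rng(1)
face_index = rng.normal(size=(1000, 128))   # one row per scraped photo
probe_embedding = rng.normal(size=128)      # embedding of the query photo

print(most_similar(probe_embedding, face_index))  # candidate matches to review
```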

The documents soon found their way to a New York Times reporter named Kashmir Hill, who, in January 2020, published a deeply investigated feature about Clearview, a tiny and secretive start-up with backing from Peter Thiel, the Silicon Valley billionaire behind Paypal and Palantir Technologies. Among the story’s revelations, Hill disclosed that tech giants like Google and Apple were well aware that such an app could be developed using artificial intelligence algorithms feeding off the vast storehouse of facial images uploaded to social media platforms and other publicly accessible databases. But they had opted against designing such a disruptive and easily disseminated surveillance tool.

The Times story set off what could best be described as an international chain reaction, with widespread media coverage about the use of Clearview’s app, followed by a wave of announcements from various governments and police agencies about how Clearview’s app would be banned. The reaction played out against a backdrop of news reports about China’s nearly ubiquitous facial recognition-based surveillance networks.

Canada was not exempt. To Surveil and Predict, a detailed examination of “algorithmic policing” published this past fall by the University of Toronto’s Citizen Lab, noted that officers with law enforcement agencies in Calgary, Edmonton and across Greater Toronto had tested Clearview’s app, sometimes without the knowledge of their superiors. Investigative reporting by the Toronto Star and Buzzfeed News found numerous examples of municipal law enforcement agencies, including the Toronto Police Service, using the app in crime investigations. The RCMP denied using Clearview even after it had entered into a contract with the company — a detail exposed by Vancouver’s The Tyee.

With federal and provincial privacy commissioners ordering investigations, Clearview and the RCMP subsequently severed ties, although Citizen Lab noted that many other tech companies still sell facial recognition systems in Canada. “I think it is very questionable whether [Clearview] would conform with Canadian law,” Michael McEvoy, British Columbia’s privacy commissioner, told the Star in February.

There was fallout elsewhere. Four U.S. cities banned police use of facial recognition outright, the Citizen Lab report noted. The European Union in February proposed a ban on facial recognition in public spaces but later hedged. A U.K. court in April ruled that police facial recognition systems were “unlawful,” marking a significant reversal in surveillance-minded Britain. And the European Data Protection Board, an EU agency, informed Commission members in June that Clearview’s technology violates pan-European law enforcement policies. As Rutgers University law professor and smart city scholar Ellen Goodman notes, “There’s been a huge blowback” against the use of data-intensive policing technologies.

There’s nothing new about surveillance or police investigative practices that draw on highly diverse forms of electronic information, from wire taps to bank records and images captured by private security cameras. Yet during the past decade or so, dramatic advances in big data analytics, biometrics and AI, stoked by venture capital and law enforcement agencies eager to invest in new technology, have given rise to a fast-growing data policing industry. As the Clearview story showed, regulation and democratic oversight have lagged far behind the technology.

U.S. startups like PredPol and HunchLab (the latter now owned by ShotSpotter) have designed so-called “predictive policing” algorithms that use law enforcement records and other geographical data (e.g., locations of schools) to make statistical guesses about the times and locations of future property crimes. Palantir’s law-enforcement service aggregates and then mines huge data sets consisting of emails, court documents, evidence repositories, gang member databases, automated licence plate reader records, social media posts and more, looking for correlations or patterns that police can use to investigate suspects.
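To make the mechanics concrete, here is a minimal Python sketch of place-based risk scoring under invented assumptions. The grid cells, incident counts and feature weights are all hypothetical and the formula is illustrative only; it is not the actual method used by PredPol, HunchLab or Palantir.

```python
# Hypothetical illustration only: not any vendor's actual method.
from collections import Counter

# Invented burglary reports, each tagged with the grid cell where it occurred.
burglary_reports = ["cell_12", "cell_12", "cell_07", "cell_12", "cell_33", "cell_07"]

# Invented geographic features per cell (schools, bars) and arbitrary weights.
features = {
    "cell_12": {"bars": 3, "schools": 1},
    "cell_07": {"bars": 1, "schools": 2},
    "cell_33": {"bars": 0, "schools": 0},
}
feature_weights = {"bars": 0.5, "schools": 0.2}

history = Counter(burglary_reports)

def risk_score(cell):
    """Blend past incident counts with weighted geographic context."""
    past = history.get(cell, 0)
    context = sum(feature_weights[f] * n for f, n in features[cell].items())
    return past + context

# Rank cells so hypothetical patrols would head to the "riskiest" ones first.
for cell in sorted(features, key=risk_score, reverse=True):
    print(cell, round(risk_score(cell), 2))
```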

Yet as the Clearview fallout indicated, big data policing is rife with technical, ethical and political landmines, according to Andrew Ferguson, a University of the District of Columbia law professor. As he explains in his 2017 book, The Rise of Big Data Policing, analysts have identified an impressive list of pitfalls: biased, incomplete or inaccurate data; opaque technology; erroneous predictions; lack of governance; public suspicions about surveillance and over-policing; conflicts over access to proprietary algorithms; unauthorized use of data; and the muddied incentives of private firms selling law enforcement software.

At least one major study found that some police officers were highly skeptical of predictive policing algorithms. Other critics point out that by deploying smart city sensors or other data-enabled systems, like transit smart cards, local governments may be inadvertently providing the police with new intelligence sources. Metrolinx, for example, has released Presto card user information to police while London’s Metropolitan Police has made thousands of requests for Oyster card data to track criminals, according to The Guardian. “Any time you have a microphone, camera or a live-feed, these [become] surveillance devices with the simple addition of a court order,” says New York civil rights lawyer Albert Cahn, executive director of the Surveillance Technology Oversight Project (STOP).

The authors of the Citizen Lab study, lawyers Kate Robertson, Cynthia Khoo and Yolanda Song, argue that Canadian governments need to impose a moratorium on the deployment of algorithmic policing technology until the public policy and legal frameworks can catch up.

Data policing was born in New York City in the early 1990s when then-police Commissioner William Bratton launched “Compstat,” a computer system that compiled up-to-date crime information and then visualized the findings in heat maps. These allowed unit commanders to deploy officers to the neighbourhoods most likely to be experiencing crime problems.

Originally conceived as a management tool that would push a demoralized police force to make better use of limited resources, Compstat is credited by some as contributing to the marked reduction in crime rates in the Big Apple, although many other big cities experienced similar drops through the 1990s and early 2000s.

The 9/11 terrorist attacks sparked enormous investments in security technology. The past two decades have seen the emergence of a multi-billion-dollar industry dedicated to civilian security technology, everything from large-scale deployments of CCTVs and cybersecurity to the development of highly sensitive biometric devices — fingerprint readers, iris scanners, etc. — designed to bulk up the security around factories, infrastructure and government buildings.

Predictive policing and facial recognition technologies evolved on parallel tracks, both relying on increasingly sophisticated analytics techniques, artificial intelligence algorithms and ever deeper pools of digital data.

The core idea is that the algorithms — essentially formulas, such as decision trees, that generate predictions — are “trained” on large tranches of data so they become increasingly accurate at, for example, anticipating the likely locations of future property crimes or matching a face captured in a digital image from a CCTV camera to one in a large database of headshots. Some algorithms are designed to apply a set of hand-written rules with variables (akin to following a recipe). Others, known as machine learning systems, are programmed to learn on their own, through trial and error.
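A toy contrast between the two approaches, assuming the scikit-learn library is available; the training data and thresholds below are invented, and no police vendor’s actual model is being reproduced.

```python
# Toy contrast between a hand-written rule and a trained model; all data is invented.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hour_of_day, past_incidents_nearby] -> 1 if a burglary followed.
X = [[22, 5], [23, 4], [14, 0], [9, 1], [21, 6], [10, 0]]
y = [1, 1, 0, 0, 1, 0]

def rule_based(hour, past_incidents):
    """A fixed recipe written by a person: late evening plus recent activity nearby."""
    return 1 if hour >= 21 and past_incidents >= 3 else 0

# The machine-learning version infers its own thresholds from the labelled examples.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(rule_based(22, 5), model.predict([[22, 5]])[0])  # both flag the late, busy cell
```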

The risk lies in the quality of the data used to train the algorithms — what was dubbed the “garbage-in-garbage-out” problem in a study by the Georgetown Law Center on Privacy and Technology. If there are hidden biases in the training data — e.g., it contains mostly Caucasian faces — the algorithm may misread Asian or Black faces and generate “false positives,” a well-documented shortcoming when the application involves identifying a suspect in a crime.
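One way auditors surface this problem is to compare false positive rates across demographic groups. The sketch below uses invented match records; only the false-positive-rate arithmetic is standard.

```python
# Invented face-match audit records: (group, system_said_match, actually_same_person).
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rate(group):
    """Share of genuinely different people the system wrongly flagged as a match."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives) if negatives else 0.0

for group in ("group_a", "group_b"):
    print(group, round(false_positive_rate(group), 2))
# A large gap between groups is the kind of signal an audit is meant to surface.
```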

Similarly, if a poor or racialized area is subject to over-policing, there will likely be more crime reports, meaning the data from that neighbourhood will show higher-than-average rates of certain types of criminal activity, a pattern that is then used to justify yet more intensive policing and racial profiling. Crimes that go under-reported, meanwhile, never enter the data and have no influence on these algorithms.
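That feedback loop is easy to simulate. In the toy model below, two areas have identical underlying crime, patrols follow the existing records, and only what patrols observe gets recorded; all numbers are invented.

```python
# Toy feedback loop with invented numbers: two areas with identical underlying crime.
true_rate = {"area_a": 10, "area_b": 10}      # actual incidents per year, identical
database = {"area_a": 12, "area_b": 8}        # historical records start slightly skewed

for year in range(5):
    total = sum(database.values())
    patrol_share = {a: database[a] / total for a in database}  # patrols follow the records
    # Only incidents that patrols are present to observe get recorded.
    discovered = {a: true_rate[a] * patrol_share[a] for a in database}
    for a in database:
        database[a] += discovered[a]
    print(year, {a: round(v, 1) for a, v in database.items()})
# The recorded 60/40 split never corrects itself, even though the true rates are equal.
```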

Other predictive and AI-based law enforcement technologies, such as “social network analysis,” which maps an individual’s web of personal relationships (gleaned, for example, from social media platforms or by cross-referencing lists of gang members), promised to flag individuals known to police as being at risk of becoming embroiled in violent crimes.
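As a rough illustration, and assuming the networkx library, the sketch below ranks people in an invented co-arrest graph by network centrality. It also hints at the critique: a high score measures connectedness, not culpability.

```python
# Invented co-arrest graph; scores reflect connectedness, not wrongdoing.
import networkx as nx

co_arrests = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "F")]
graph = nx.Graph(co_arrests)

# Betweenness centrality: how often a person lies on the paths linking everyone else.
scores = nx.betweenness_centrality(graph)
for person, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(person, round(score, 2))
```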

This type of sleuthing seemed to hold out some promise. In one study, criminologists at Cardiff University found that “disorder-related” posts on Twitter reflected crime incidents in metropolitan London — a finding that suggests how big data can help map and anticipate criminal activity. In practice, however, such surveillance tactics can prove explosive. That happened in 2016, when U.S. civil liberties groups revealed documents showing that Geofeedia, a location-based data company, had contracts with numerous police departments to provide analytics based on posts from Twitter, Facebook, Instagram and other social media platforms. Among the individuals targeted by the company’s data: protestors and activists. Chastened, the social media firms rapidly blocked Geofeedia’s access.

In 2013, the Chicago Police Department began experimenting with predictive models that assigned risk scores for individuals based on criminal records or their connections to people involved in violent crime. By 2019, the CPD had assigned risk scores to almost 400,000 people, and claimed to be using the information to surveil and target “at-risk” individuals (including potential victims) or connect them to social services, according to a January 2020 report by Chicago’s inspector general.

These tools can draw incorrect or biased inferences in the same way that overreliance on police checks in racialized neighbourhoods results in what could be described as guilt by address. The Citizen Lab study noted that the Ontario Human Rights Commission identified social network analysis as a potential cause of racial profiling. In the case of the CPD’s predictive risk model, the system was discontinued in 2020 after media reports and internal investigations showed that people were added to the list based solely on arrest records, meaning they might not even have been charged, much less convicted of a crime.

Early applications of facial recognition software included passport security systems or searches of mug shot databases. But in 2011, the Insurance Corporation of B.C. offered Vancouver police the use of facial recognition software to match photos of Stanley Cup rioters with driver’s licence images — a move that prompted a stern warning from the province’s privacy commissioner. In 2019, the Washington Post revealed that FBI and Immigration and Customs Enforcement (ICE) investigators regarded state databases of digitized driver’s licences as a “gold mine for facial recognition photos” which had been scanned without consent.

In 2013, Canada’s federal privacy commissioner released a report on police use of facial recognition that anticipated the issues the Clearview app would raise in early 2020: “[S]trict controls and increased transparency are needed to ensure that the use of facial recognition conforms with our privacy laws and our common sense of what is socially acceptable.” (Canada’s data privacy laws are only now being considered for an update.)

The technology, meanwhile, continues to gallop ahead. New York civil rights lawyer Albert Cahn points to the emergence of “gait recognition” systems, which use visual analysis to identify individuals by their walk; these systems are reportedly in use in China. “You’re trying to teach machines how to identify people who walk with the same gait,” he says. “Of course, a lot of this is completely untested.”

The predictive policing story evolved somewhat differently. The methodology grew out of analysis commissioned by the Los Angeles Police Department in the early 2010s. Two data scientists, Jeff Brantingham and George Mohler, used mathematical modelling to forecast follow-on, or “near-repeat,” burglaries based on data about the location and frequency of previous break-ins in three L.A. neighbourhoods. They published their results and soon set up PredPol to commercialize the technology. Media attention soon followed, as news stories played up the seemingly miraculous power of a Minority Report-like system that could do a decent job of anticipating incidents of property crime.

Operationally, police forces used PredPol’s system by dividing precincts into “cells” measuring roughly 150 metres a side, which officers were instructed to patrol more intensively during periods when PredPol’s algorithm forecast criminal activity. In the years after the 2008-09 credit crisis, the technology seemed to promise that cash-strapped American municipalities would get more bang for their policing buck.
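The gridding step itself is conceptually simple, as this toy Python sketch with invented coordinates shows; it is not PredPol’s actual algorithm.

```python
# Toy gridding sketch with invented coordinates; not PredPol's actual algorithm.
from collections import Counter

CELL_METRES = 150

# Recent incident locations, in metres east and north of a reference point.
incidents = [(120, 80), (130, 90), (140, 70), (900, 910), (910, 930), (100, 95)]

def cell_of(x, y):
    """Map a coordinate to the grid cell (about 150 m on a side) that contains it."""
    return (int(x // CELL_METRES), int(y // CELL_METRES))

counts = Counter(cell_of(x, y) for x, y in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} recent reports")
```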

Other firms, from startups to multinationals like IBM, entered the market with innovations of their own, incorporating, for example, other types of data, such as socio-economic indicators or geographical features (from parks and picnic tables to schools and bars) that may be correlated with elevated rates of certain types of crime. The reported crime data is routinely updated so the algorithm stays current.

Police departments across the U.S. and Europe have invested in various predictive policing tools, as have several in Canada, including Vancouver, Edmonton and Saskatoon. Whether they have made a difference is an open question. Like several other studies, a 2017 review by analysts with the Institute for International Research on Criminal Policy at Ghent University in Belgium found inconclusive results: some places showed improvements compared with more conventional policing, while in other cities the predictive algorithms reduced policing costs but made little measurable difference in outcomes.

Revealingly, the city where predictive policing really took hold, Los Angeles, has rolled back police use of these techniques. Last spring, the LAPD tore up its contract with PredPol in the wake of mounting community and legal pressure from the Stop LAPD Spying Coalition, which found that individuals who posed no real threat, mostly Black or Latino, were ending up on police watch lists because of flaws in the way the system assigned risk scores.

“Algorithms have no place in policing,” Coalition founder Hamid Khan said in an interview this summer with MIT Technology Review. “I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.” (Similar advocacy campaigns, including proposed legislation governing surveillance technology and gang databases, have been proposed for New York City.)

There has been one other interesting consequence: police resistance. B.C.-born sociologist Sarah Brayne, an assistant professor at the University of Texas (Austin), spent two-and-a-half years embedded with the LAPD, exploring the reaction of law enforcement officials to algorithmic policing techniques by conducting ride-alongs as well as interviews with dozens of veteran cops and data analysts. In results published last year, Brayne and collaborator Angèle Christin observed “strong processes of resistance fuelled by fear of professional devaluation and threats of performance tracking.”

Before shifts, officers were told which grids to drive through, when and how frequently, and the locations of their vehicles were tracked by on-board GPS devices to ensure compliance. But Brayne found that some would turn off the tracking device, which they regarded with suspicion. Others simply didn’t buy what the technology was selling. “Patrol officers frequently asserted that they did not need an algorithm to tell them where crime occurs,” she noted.

In an interview, Brayne said that police departments increasingly see predictive technology as part of the tool kit, despite questions about effectiveness or other concerns, like racial profiling. “Once a particular technology is created,” she observed, “there’s a tendency to use it.” But Brayne added one other prediction, which has to do with the future of algorithmic policing in the post-George Floyd era — “an intersection,” as she says, “between squeezed budgets and this movement around defunding the police.”

The widening use of big data policing and digital surveillance poses, according to Citizen Lab’s analysis as well as critiques from U.S. and U.K. legal scholars, a range of civil rights questions, from privacy and freedom from discrimination to due process. Yet governments have been slow to acknowledge these consequences. Big Brother Watch, a British civil liberties group, notes that in the U.K., the national government’s stance has been that police decisions about the deployment of facial recognition systems are “operational.”

At the core of the debate is a basic public policy principle: transparency. Do individuals have the tools to understand and debate the workings of a suite of technologies that can have tremendous influence over their lives and freedoms? It’s what Andrew Ferguson and others refer to as the “black box” problem. The algorithms, designed by software engineers, rely on certain assumptions, methodologies and variables, none of which are visible, much less legible to anyone without advanced technical know-how. Many, moreover, are proprietary because they are sold to local governments by private companies. The upshot is that these kinds of algorithms have not been regulated by governments despite their use by public agencies.

New York City Council moved to tackle this question in May 2018 by establishing an “automated decision systems” task force to examine how municipal agencies and departments use AI and machine learning algorithms. The task force was to devise procedures for identifying hidden biases and to disclose how the algorithms generate choices so the public can assess their impact. The group included officials from the administration of Mayor Bill de Blasio, tech experts and civil liberties advocates. It held public meetings throughout 2019 and released a report that November. NYC was, by most accounts, the first city to have tackled this question, and the initiative was, initially, well received.

Going in, Cahn, the New York City civil rights lawyer, saw the task force as “a unique opportunity to examine how AI was operating in city government.” But he describes the outcome as “disheartening”: “There was an unwillingness to challenge the NYPD on its use of [automated decision systems].” Some other participants agreed, describing the effort as a waste.

If institutional obstacles thwarted an effort in a government the size of the City of New York, what does better and more effective oversight look like? A couple of answers have emerged.

In his book on big data policing, Andrew Ferguson writes that local governments should start from first principles, and urges police forces and civilian oversight bodies to address five fundamental questions, ideally in a public forum:

  • Can you identify the risks that your big data technology is trying to address?
  • Can you defend the inputs into the system (accuracy of data, soundness of methodology)?
  • Can you defend the outputs of the system (how they will impact policing practice and community relationships)?
  • Can you test the technology (offering accountability and some measure of transparency)?
  • Is police use of the technology respectful of the autonomy of the people it will impact?

These “foundational” questions, he writes, “must be satisfactorily answered before green-lighting any purchase or adopting a big data policing strategy.”

In addition to calling for a moratorium and a judicial inquiry into the uses of predictive policing and facial recognition systems, the authors of the Citizen Lab report made several other recommendations, including: the need for full transparency; provincial policies governing the procurement of such systems; limits on the use of ADS in public spaces; and the establishment of oversight bodies that include members of historically marginalized or victimized groups.

Notably, the federal government has made advances in this arena, a development that University of Ottawa law professor and privacy expert Teresa Scassa describes as “really interesting.”

In 2019, the Treasury Board Secretariat issued the “Directive on Automated Decision-Making,” which came into effect in April 2020 and requires federal departments and agencies, except those involved in national security, to conduct “algorithmic impact assessments” (AIAs) to evaluate unintended bias before procuring or approving the use of technologies that rely on AI or machine learning. The policy requires the government to publish the AIAs, release software code developed internally and continually monitor the performance of these systems. In the case of proprietary algorithms developed by private suppliers, federal officials have extensive rights to access and test the software.

In a forthcoming paper, Scassa points out that the directive includes due process rules and looks for evidence of whether systemic bias has become embedded in these technologies, which can happen if the algorithms are trained on skewed data. She also observes that not all algorithm-driven systems generate life-altering decisions, e.g., chatbots that are now commonly used in online application processes. But where they are deployed in “high impact” contexts such as policing, e.g., with algorithms that aim to identify individuals caught on surveillance videos, the policy requires “a human in the loop.”

The directive, says Scassa, “is getting interest elsewhere,” including in the U.S. Ellen Goodman, at Rutgers, is hopeful this approach will gain traction with the Biden administration. In Canada, where provincial governments oversee law enforcement, Ottawa’s low-key but seemingly thorough regulation points to a way for citizens to shine a flashlight into the black box that is big data policing.

Source: From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Doesn’t appear to be universally well received:

Google’s chief executive Sundar Pichai on Wednesday apologized in the aftermath of the dismissal of a prominent Black scientist whose ouster set off widespread condemnation from thousands of Google employees and outside researchers.

Timnit Gebru, who helped lead Google’s Ethical Artificial Intelligence team, said that she was fired last week after having a dispute over a research paper and sending a note to other Google employees criticizing the company for its treatment of people of color and women, particularly in hiring.

“I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust,” Pichai wrote to Google employees on Wednesday, according to a copy of the email reviewed by NPR.

Since Gebru was pushed out, more than 2,000 Google employees have signed an open letter demanding answers, calling Gebru’s termination “research censorship” and a “retaliatory firing.”

In his letter, Pichai said the company is conducting a review of how Gebru’s dismissal was handled in order to determine whether there could have been a “more respectful process.”

Pichai went on to say that Google needs to accept responsibility for a prominent Black female leader leaving Google on bad terms.

“This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions,” Pichai wrote.

Pichai said Google earlier this year committed to reviewing all of the company’s systems for hiring and promoting employees to try to increase representation among Black workers and other underrepresented groups.

“The events of the last week are a painful but important reminder of the progress we still need to make,” Pichai wrote in his letter, which was earlier reported by Axios.

In a series of tweets, Gebru said she did not appreciate Pichai’s email to her former colleagues.

“Don’t paint me as an ‘angry Black woman’ for whom you need ‘de-escalation strategies’ for,” Gebru said.

“Finally it does not say ‘I’m sorry for what we did to her and it was wrong.’ What it DOES say is ‘it seeded doubts and led some in our community to question their place at Google.’ So I see this as ‘I’m sorry for how it played out but I’m not sorry for what we did to her yet,'” Gebru wrote.

One Google employee who requested anonymity for fear of retaliation said Pichai’s letter will do little to address the simmering strife among Googlers since Gebru’s firing.

The employee expressed frustration that Pichai did not directly apologize for Gebru’s termination and continued to suggest she was not fired by the company, a characterization that Gebru and many of her colleagues say is not true. The employee described Pichai’s letter as “meaningless PR.”

Source: Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster