Using A.I. to Find Bias in A.I.

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.

This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about A.I., it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O’Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

“Changing mentalities does not happen overnight — and that is even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind but many minds.”

When she started advising businesses on A.I. bias more than two years ago, Ms. O’Sullivan was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.

Designers can be blind to these problems. The workers in India — where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States — were classifying the photos as they saw fit.

Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment. 

She now believes that after years of public complaints over bias in A.I. — not to mention the threat of regulation — attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument did not hold up.

“They are acknowledging that you need to turn over the rocks and see what is underneath,” Ms. O’Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. “We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. “Many of the things that the average person cares about — such as fairness — are not yet being measured in a disciplined or a large-scale way.”

Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. — and the difficulty of Ms. O’Sullivan’s task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored — or when those discussing the issues carry the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “It is a very important question I am not sure I can answer.”

Source: https://www.nytimes.com/2021/06/30/technology/artificial-intelligence-bias.html

Why A.I. Should Be Afraid of Us: Because benevolent bots are suckers.

Of significance as AI becomes more prevalent. “Road rage” as the new Turing test!

Artificial intelligence is gradually catching up to ours. A.I. algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But A.I. isn’t perfect, yet, if Woebot is any indicator. Woebot, as Karen Brown wrote this week in Science Times, is an A.I.-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an A.I. algorithm can ever express the kind of empathy required to make interpersonal therapy work.

“These apps really shortchange the essential ingredient that — mounds of evidence show — is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t exhibit a whole lot more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were an actual person.

“There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University, in Munich, told me. “We basically would treat a perfect stranger better than A.I.”

In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes A.I.; each pair then played a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to gauge and reward cooperativeness.

Our lack of reciprocity toward A.I. is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot, it’s that we do: The bot is guaranteed benevolent, a capital-S sucker, so we exploit it.

That conclusion was borne out by conversations afterward with the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”

This could have real-world implications. When we think about A.I., we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the A.I. doesn’t account for your bad behavior, an accident could ensue.

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

That, of course, is half the premise of “Westworld.” (To my surprise Dr. Deroy had not heard of the HBO series.) But a landscape free of guilt could have consequences, she noted: “We are creatures of habit. So what guarantees that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for A.I., too. “If people treat them badly, they’re programed to learn from what they experience,” she said. “An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the premise of “Westworld,” basically.)

There we have it: The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Robots are coming and the fallout will largely harm marginalized communities

Interesting piece on the possible impact of automation on many of the essential service workers, largely women, visible minorities and immigrants (more speculative than data-driven):

COVID-19 has brought about numerous, devastating changes to people’s lives globally. With the number of cases rising across Canada and globally, we are also witnessing the development and use of robots to perform jobs in some workplaces that are deemed unsafe for humans. 

There are cases of robots being used to disinfect health-care facilities, deliver drugs to patients and perform temperature checks. In April 2020, doctors at a Boston hospital used Boston Dynamics’ quadruped robot called Spot to reduce health-care workers’ exposure to SARS-CoV-2, the virus that causes COVID-19. By equipping Spot with an iPad and a two-way radio, doctors and patients could communicate in real time.

In these instances, the use of robots is certainly justified because they can directly aid in lowering COVID-19 transmission rates and reducing the unnecessary exposure of health-care workers to the virus. But, as we know, robots are also performing these tasks outside of health-care settings, including at airports, offices, retail spaces and restaurants.

This is precisely where the issue of robot use gets complicated. 

Robots in the workplace

The type of labour that these and other robots perform or, in some cases, replace, is labour that is generally considered low-paying, ranging from cleaners and fast food workers to security guards and factory employees. Not only do many of these workers in Canada earn minimum wage, but the majority are also racialized women and youth between the ages of 15 and 24.

The use of robots also affects immigrant populations. The gap between immigrant workers earning minimum wage and Canadian-born workers has more than doubled. In 2008, 5.3 per cent of both immigrant and Canadian-born workers earned minimum wage; by 2018, 12 per cent of immigrant workers earned minimum wage, compared with only 9.8 per cent of Canadian-born workers. Canada’s reliance on migrant workers as a source of cheap and disposable labour has intensified the exploitation of workers.

McDonald’s has replaced cashiers with self-service kiosks. It has also begun testing robots to replace both cooks and servers. Walmart has begun using robots to clean store floors, while also increasing their usage in warehouses.

Nowhere is the implementation of robots more apparent than in Amazon’s use of them in its fulfilment centres. As information scholars applying Marxist theory Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff explain, Amazon’s use of robots has reduced order times and increased warehouse space, allowing for 50 per cent more inventory in areas where robots are used, and has cut Amazon’s power costs because the robots can work in the dark and without air conditioning.

Already-marginalized labourers are the most affected by robots. In other words, human labour that can be mechanized, routinized or automated to some extent is work that is deemed to be expendable because it is seen to be replaceable. It is work that is stripped of any humanity in the name of efficiency and cost-effectiveness for massive corporations. However, the influence of corporations on robot development goes beyond cost-saving measures.

Robot violence

The emergence of Boston Dynamics’ Spot gives us some insight into how robots have crossed from the battlefield into urban spaces. Boston Dynamics’ robot development program has long been funded by the American Defense Advanced Research Projects Agency (DARPA).

In 2005, Boston Dynamics received funding from DARPA to develop one of its first quadruped robots, known as BigDog, a robotic pack mule that was used to assist soldiers across rough terrain. In 2012, Boston Dynamics and DARPA revealed another quadruped robot, known as AlphaDog, designed primarily to carry military gear for soldiers.

The development of Spot would have been impossible without these previous, DARPA-funded initiatives. While the founder of Boston Dynamics, Marc Raibert, has claimed that Spot will not be turned into a weapon, the company leased Spot to the Massachusetts State Police bomb squad in 2019 for a 90-day period.

In February 2021, the New York Police Department used Spot to investigate the scene of a home invasion. And, in April 2021, Spot was deployed by the French Army in a series of military exercises to evaluate its usefulness on the future battlefield.

Targeting the most vulnerable

These examples are not intended to altogether dismiss the importance of some robots. This is particularly the case in health care, where robots continue to help doctors improve patient outcomes. Instead, these examples should serve as a call for governments to intervene in order to prevent a proliferation of robot use across different spaces.

More importantly, this is a call to prevent the multiple forms of exploitation that already affect marginalized groups. Since technological innovation has a tendency to outpace legislation and regulatory controls, it is imperative that lawmakers step in before it is too late.

Source: https://theconversationcanada.cmail20.com/t/r-l-tltjihkd-kyldjlthkt-c/

Who Is Making Sure the A.I. Machines Aren’t Racist?

Good overview of the issues and debates (Google’s earlier motto of “don’t be evil” seems so quaint):

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-​legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

About six years ago, A.I. in a Google online photo service organized photos of Black people into a folder called “gorillas.” Four years ago, a researcher at a New York start-up noticed that the A.I. system she was working on was egregiously biased against Black people. Not long after, a Black researcher in Boston discovered that an A.I. system couldn’t identify her face — until she put on a white mask.

In 2018, when I told Google’s public relations staff that I was working on a book about artificial intelligence, it arranged a long talk with Dr. Mitchell to discuss her work. As she described how she built the company’s Ethical A.I. team — and brought Dr. Gebru into the fold — it was refreshing to hear from someone so closely focused on the bias problem.

But nearly three years later, Dr. Gebru was pushed out of the company without a clear explanation. She said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

It should have been a wake-up call.

In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be “dogs,” another “birthday party.”

When Mr. Alciné clicked on the link, he noticed one of the folders was labeled “gorillas.” That made no sense to him, so he opened the folder. He found more than 80 photos he had taken nearly a year earlier of a friend during a concert in nearby Prospect Park. That friend was Black.

He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. “Google Photos, y’all messed up,” he wrote, using much saltier language. “My friend is not a gorilla.”

Like facial recognition services, talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)
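
For readers who want a concrete sense of what “learning from labeled photos” means, here is a minimal sketch in Python using PyTorch, assuming a folder of images sorted into labeled subdirectories. The paths, classes and model size are invented for illustration; they are not Google’s actual system. The point is that the system can only predict labels someone chose to include, from examples someone chose to collect, and both of those choices are where bias enters.

```python
# Illustrative sketch only: a tiny image classifier trained on labeled folders.
# Folder names become the class labels, e.g. photos/train/dogs, photos/train/birthday_party.
# Removing a folder (as Google did with "gorilla") removes that prediction entirely.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_data = datasets.ImageFolder("photos/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# A small pretrained-style architecture with a fresh output layer,
# one output per folder the designers decided to include.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```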

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2017, Deborah Raji, a 21-​year-​old Black woman from Ottawa, sat at a desk inside the New York offices of Clarifai, the start-up where she was working. The company built technology that could automatically recognize objects in digital images and planned to sell it to businesses, police departments and government agencies.

She stared at a screen filled with faces — images the company used to train its facial recognition software.

As she scrolled through page after page of these faces, she realized that most — more than 80 percent — were of white people. More than 70 percent of those white people were male. When Clarifai trained its system on this data, it might do a decent job of recognizing white people, Ms. Raji thought, but it would fail miserably with people of color, and probably women, too.

Clarifai was also building a “content moderation system,” a tool that could automatically identify and remove pornography from images people posted to social networks. The company trained this system on two sets of data: thousands of photos pulled from online pornography sites, and thousands of G‑rated images bought from stock photo services.

The system was supposed to learn the difference between the pornographic and the anodyne. The problem was that the G‑rated images were dominated by white people, and the pornography was not. The system was learning to identify Black people as pornographic.
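
A first line of defence against that kind of skew is simply measuring who is in the training data. The sketch below is hypothetical: it assumes a metadata table (with columns such as split and skin_tone) that Clarifai did not publish, and is meant only to show what a basic composition check could look like.

```python
# Hypothetical audit of a training set's composition, assuming a metadata file
# with one row per training image, a "split" column ("explicit" or "safe")
# and a demographic column such as "skin_tone".
import pandas as pd

meta = pd.read_csv("training_metadata.csv")

# Share of each demographic group within each half of the training data.
composition = meta.groupby("split")["skin_tone"].value_counts(normalize=True)
print(composition)
# If the "safe" images are overwhelmingly light-skinned and the "explicit"
# images are not, the model can learn skin tone as a proxy for the label.
```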

“The data we use to train these systems matters,” Ms. Raji said. “We can’t just blindly pick our sources.”

This was obvious to her, but to the rest of the company it was not. Because the people choosing the training data were mostly white men, they didn’t realize their data was biased.

“The issue of bias in facial recognition technologies is an evolving and important topic,” Clarifai’s chief executive, Matt Zeiler, said in a statement. Measuring bias, he said, “is an important step.”

Before joining Google, Dr. Gebru collaborated on a study with a young computer scientist, Joy Buolamwini. A graduate student at the Massachusetts Institute of Technology, Ms. Buolamwini, who is Black, came from a family of academics. Her grandfather specialized in medicinal chemistry, and so did her father.

She gravitated toward facial recognition technology. Other researchers believed it was reaching maturity, but when she used it, she knew it wasn’t.

In October 2016, a friend invited her for a night out in Boston with several other women. “We’ll do masks,” the friend said. Her friend meant skin care masks at a spa, but Ms. Buolamwini assumed Halloween masks. So she carried a white plastic Halloween mask to her office that morning.

It was still sitting on her desk a few days later as she struggled to finish a project for one of her classes. She was trying to get a detection system to track her face. No matter what she did, she couldn’t quite get it to work.

In her frustration, she picked up the white mask from her desk and pulled it over her head. Before it was all the way on, the system recognized her face — or, at least, it recognized the mask.

“Black Skin, White Masks,” she said in an interview, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”

Ms. Buolamwini started exploring commercial services designed to analyze faces and identify characteristics like age and sex, including tools from Microsoft and IBM.

She found that when the services read photos of lighter-skinned men, they misidentified sex about 1 percent of the time. But the darker the skin in the photo, the larger the error rate. It rose particularly high with images of women with dark skin: Microsoft’s error rate was about 21 percent; IBM’s was 35 percent.
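
The underlying method is straightforward to reproduce in spirit: run the service on labeled test photos, then compare error rates group by group rather than in aggregate. A minimal sketch, with an invented results file and column names, might look like this.

```python
# Disaggregated evaluation sketch: per-group error rates from a table of
# ground-truth labels and model predictions. File and column names are
# invented for illustration; they are not from the published study.
import pandas as pd

results = pd.read_csv("gender_predictions.csv")   # columns: group, actual, predicted

errors = results["actual"] != results["predicted"]
error_rates = errors.groupby(results["group"]).mean().sort_values(ascending=False)
print(error_rates)
# An overall accuracy number can look fine while one group
# (e.g. darker-skinned women) carries most of the mistakes.
```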

Published in the winter of 2018, the study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-​skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-​skinned women for men 31 percent of the time. For lighter-​skinned males, the error rate was zero.

Amazon called for government regulation of facial recognition. It also attacked the researchers in private emails and public blog posts.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” an Amazon executive, Matt Wood, wrote in a blog post that disputed the study and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

The response: Her resignation was accepted immediately, and Google revoked her access to company email and other services. A month later, it removed Dr. Mitchell’s access after she searched through her own email in an effort to defend Dr. Gebru.

In a Google staff meeting last month, just after the company fired Dr. Mitchell, the head of the Google A.I. lab, Jeff Dean, said the company would create strict rules meant to limit its review of sensitive research papers. He also defended the reviews. He declined to discuss the details of Dr. Mitchell’s dismissal but said she had violated the company’s code of conduct and security policies.

One of Mr. Dean’s new lieutenants, Zoubin Ghahramani, said the company must be willing to tackle hard issues. There are “uncomfortable things that responsible A.I. will inevitably bring up,” he said. “We need to be comfortable with that discomfort.”

But it will be difficult for Google to regain trust — both inside the company and out.

“They think they can get away with firing these people and it will not hurt them in the end, but they are absolutely shooting themselves in the foot,” said Alex Hanna, a longtime part of Google’s 10-member Ethical A.I. team. “What they have done is incredibly myopic.”

Source: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

The Robots Are Coming for Phil in Accounting

Implications for many white collar workers, including in government given the nature of repetitive operational work:

The robots are coming. Not to kill you with lasers, or beat you in chess, or even to ferry you around town in a driverless Uber.

These robots are here to merge purchase orders into columns J and K of next quarter’s revenue forecast, and transfer customer data from the invoicing software to the Oracle database. They are unassuming software programs with names like “Auxiliobits — DataTable To Json String,” and they are becoming the star employees at many American companies.

Some of these tools are simple apps, downloaded from online stores and installed by corporate I.T. departments, that do the dull-but-critical tasks that someone named Phil in Accounting used to do: reconciling bank statements, approving expense reports, reviewing tax forms. Others are expensive, custom-built software packages, armed with more sophisticated types of artificial intelligence, that are capable of doing the kinds of cognitive work that once required teams of highly-paid humans.
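
To make “unassuming software programs” concrete, here is a toy sketch of the category of task such bots handle: moving records from one system into another. The CSV export, database and table names are made up, and real R.P.A. products wrap this kind of logic in drag-and-drop tooling and connectors to ERP and invoicing systems rather than hand-written scripts.

```python
# Toy example of the kind of work R.P.A. automates: transfer invoice rows
# from one system's export (a CSV file) into another system (a SQLite table).
import csv
import sqlite3

conn = sqlite3.connect("billing.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS invoices (invoice_id TEXT, customer TEXT, amount REAL)"
)

with open("invoice_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "INSERT INTO invoices VALUES (?, ?, ?)",
            (row["invoice_id"], row["customer"], float(row["amount"])),
        )

conn.commit()
conn.close()
```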

White-collar workers, armed with college degrees and specialized training, once felt relatively safe from automation. But recent advances in A.I. and machine learning have created algorithms capable of outperforming doctors, lawyers and bankers at certain parts of their jobs. And as bots learn to do higher-value tasks, they are climbing the corporate ladder.

The trend — quietly building for years, but accelerating to warp speed since the pandemic — goes by the sleepy moniker “robotic process automation.” And it is transforming workplaces at a pace that few outsiders appreciate. Nearly 8 in 10 corporate executives surveyed by Deloitte last year said they had implemented some form of R.P.A. Another 16 percent said they planned to do so within three years.

Most of this automation is being done by companies you’ve probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion — roughly the size of eBay — and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action.

Executives generally spin these bots as being good for everyone, “streamlining operations” while “liberating workers” from mundane and repetitive tasks. But they are also liberating plenty of people from their jobs. Independent experts say that major corporate R.P.A. initiatives have been followed by rounds of layoffs, and that cutting costs, not improving workplace conditions, is usually the driving factor behind the decision to automate. 

Craig Le Clair, an analyst with Forrester Research who studies the corporate automation market, said that for executives, much of the appeal of R.P.A. bots is that they are cheap, easy to use and compatible with their existing back-end systems. He said that companies often rely on them to juice short-term profits, rather than embarking on more expensive tech upgrades that might take years to pay for themselves.

“It’s not a moonshot project like a lot of A.I., so companies are doing it like crazy,” Mr. Le Clair said. “With R.P.A., you can build a bot that costs $10,000 a year and take out two to four humans.”

Covid-19 has led some companies to turn to automation to deal with growing demand, closed offices, or budget constraints. But for other companies, the pandemic has provided cover for executives to implement ambitious automation plans they dreamed up long ago.

“Automation is more politically acceptable now,” said Raul Vega, the chief executive of Auxis, a firm that helps companies automate their operations.

Before the pandemic, Mr. Vega said, some executives turned down offers to automate their call centers, or shrink their finance departments, because they worried about scaring their remaining workers or provoking a backlash like the one that followed the outsourcing boom of the 1990s, when C.E.O.s became villains for sending jobs to Bangalore and Shenzhen.

But those concerns matter less now, with millions of people already out of work and many businesses struggling to stay afloat.

Now, Mr. Vega said, “they don’t really care, they’re just going to do what’s right for their business.”

Sales of automation software are expected to rise by 20 percent this year, after increasing by 12 percent last year, according to the research firm Gartner. And the consulting firm McKinsey, which predicted before the pandemic that 37 million U.S. workers would be displaced by automation by 2030, recently increased its projection to 45 million.

A white-collar wake-up call

Not all bots are the job-destroying kind. Holly Uhl, a technology manager at State Auto Insurance Companies, said that her firm has used automation to do 173,000 hours’ worth of work in areas like underwriting and human resources without laying anyone off.

“People are concerned that there’s a possibility of losing their jobs, or not having anything to do,” she said. “But once we have a bot in the area, and people see how automation is applied, they’re truly thrilled that they don’t have to do that work anymore.”

As bots become capable of complex decision-making, rather than doing single repetitive tasks, their disruptive potential is growing.

Recent studies by researchers at Stanford University and the Brookings Institution compared the text of job listings with the wording of A.I.-related patents, looking for phrases like “make prediction” and “generate recommendation” that appeared in both. They found that the groups with the highest exposure to A.I. were better-paid, better-educated workers in technical and supervisory roles, with men, white and Asian-American workers, and midcareer professionals being some of the most endangered. Workers with bachelor’s or graduate degrees were nearly four times as exposed to A.I. risk as those with just a high school degree, the researchers found, and residents of high-tech cities like Seattle and Salt Lake City were more vulnerable than workers in smaller, more rural communities.
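
The matching idea behind these exposure measures can be sketched in a few lines: count how many task phrases from A.I.-related patents also appear in a given job listing. The phrase list and sample listing below are invented; the published studies use far larger corpora and more careful text processing.

```python
# Rough sketch of phrase-overlap "A.I. exposure" scoring. The phrase set and
# the sample listing are made up for illustration.
AI_PATENT_PHRASES = {"make prediction", "generate recommendation",
                     "classify image", "detect anomaly"}

def exposure_score(job_listing: str) -> int:
    """Count how many A.I.-patent phrases appear verbatim in the listing."""
    text = job_listing.lower()
    return sum(phrase in text for phrase in AI_PATENT_PHRASES)

listing = ("Senior analyst needed to generate recommendation reports and "
           "make prediction models for quarterly revenue.")
print(exposure_score(listing))  # 2
```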

“A lot of professional work combines some element of routine information processing with an element of judgment and discretion,” said David Autor, an economist at M.I.T. who studies the labor effects of automation. “That’s where software has always fallen short. But with A.I., that type of work is much more in the kill path.”

Many of those vulnerable workers don’t see this coming, in part because the effects of white-collar automation are often couched in jargon and euphemism. On their websites, R.P.A. firms promote glowing testimonials from their customers, often glossing over the parts that involve actual humans.

“Sprint Automates 50 Business Processes In Just Six Months.” (Possible translation: Sprint replaced 300 people in the billing department.)

“Dai-ichi Life Insurance Saves 132,000 Hours Annually” (Bye-bye, claims adjusters.)

“600% Productivity Gain for Credit Reporting Giant with R.P.A.” (Don’t let the door hit you, data analysts.)

Jason Kingdon, the chief executive of the R.P.A. firm Blue Prism, speaks in the softened vernacular of displacement too. He refers to his company’s bots as “digital workers,” and he explained that the economic shock of the pandemic had “massively raised awareness” among executives about the variety of work that no longer requires human involvement.

“We think any business process can be automated,” he said.

Mr. Kingdon tells business leaders that between half and two-thirds of all the tasks currently being done at their companies can be done by machines. Ultimately, he sees a future in which humans will collaborate side-by-side with teams of digital employees, with plenty of work for everyone, although he conceded that the robots have certain natural advantages.

“A digital worker,” he said, “can be scaled in a vastly more flexible way.”

Humans have feared losing our jobs to machines for millennia. (In 350 BCE, Aristotle worried that self-playing harps would make musicians obsolete.) And yet, automation has never created mass unemployment, in part because technology has always generated new jobs to replace the ones it destroyed.

During the 19th and 20th centuries, some lamplighters and blacksmiths became obsolete, but more people were able to make a living as electricians and car dealers. And today’s A.I. optimists argue that while new technology may displace some workers, it will spur economic growth and create better, more fulfilling jobs, just as it has in the past.

But that is no guarantee, and there is growing evidence that this time may be different.

In a series of recent studies, Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University, two well-respected economists who have researched the history of automation, found that for most of the 20th century, the optimistic take on automation prevailed — on average, in industries that implemented automation, new tasks were created faster than old ones were destroyed.

Since the late 1980s, they found, the equation had flipped — tasks have been disappearing to automation faster than new ones are appearing.

This shift may be related to the popularity of what they call “so-so automation” — technology that is just barely good enough to replace human workers, but not good enough to create new jobs or make companies significantly more productive.

A common example of so-so automation is the grocery store self-checkout machine. These machines don’t cause customers to buy more groceries, or help them shop significantly faster — they simply allow store owners to staff slightly fewer employees on a shift. This simple, substitutive kind of automation, Mr. Acemoglu and Mr. Restrepo wrote, threatens not just individual workers, but the economy as a whole.

“The real danger for labor,” they wrote, “may come not from highly productive but from ‘so-so’ automation technologies that are just productive enough to be adopted and cause displacement.”

Only the most devoted Luddites would argue against automating any job, no matter how menial or dangerous. But not all automation is created equal, and much of the automation being done in white-collar workplaces today is the kind that may not help workers over the long run.

During past eras of technological change, governments and labor unions have stepped in to fight for automation-prone workers, or support them while they trained for new jobs. But this time, there is less in the way of help. Congress has rejected calls to fund federal worker retraining programs for years, and while some of the money in the $1.9 trillion Covid-19 relief bill Democrats hope to pass this week will go to laid-off and furloughed workers, none of it is specifically earmarked for job training programs that could help displaced workers get back on their feet.

Another key difference is that in the past, automation arrived gradually, factory machine by factory machine. But today’s white-collar automation is so sudden — and often, so deliberately obscured by management — that few workers have time to prepare.

“The rate of progression of this technology is faster than any previous automation,” said Mr. Le Clair, the Forrester analyst, who thinks we are closer to the beginning than the end of the corporate A.I. boom.

“We haven’t hit the exponential point of this stuff yet,” he added. “And when we do, it’s going to be dramatic.”

The corporate world’s automation fever isn’t purely about getting rid of workers. Executives have shareholders and boards to satisfy, and competitors to keep up with. And some automation does, in fact, lift all boats, making workers’ jobs better and more interesting while allowing companies to do more with less.

But as A.I. enters the corporate world, it is forcing workers at all levels to adapt, and focus on developing the kinds of distinctly human skills that machines can’t easily replicate.

Ellen Wengert, a former data processor at an Australian insurance firm, learned this lesson four years ago, when she arrived at work one day to find a bot-builder sitting in her seat.

The man, coincidentally an old classmate of hers, worked for a consulting firm that specialized in R.P.A. He explained that he’d been hired to automate her job, which mostly involved moving customer data from one database to another. He then asked her to, essentially, train her own replacement — teaching him how to do the steps involved in her job so that he, in turn, could program a bot to do the same thing.

Ms. Wengert wasn’t exactly surprised. She’d known that her job was straightforward and repetitive, making it low-hanging fruit for automation. But she was annoyed that her managers seemed so eager to hand it over to a machine.

“They were desperate to create this sense of excitement around automation,” she said. “Most of my colleagues got on board with that pretty readily, but I found it really jarring, to be feigning excitement about us all potentially losing our jobs.”

For Ms. Wengert, 27, the experience was a wake-up call. She had a college degree and was early in her career. But some of her colleagues had been happily doing the same job for years, and she worried that they would fall through the cracks.

“Even though these aren’t glamorous jobs, there are a lot of people doing them,” she said.

She left the insurance company after her contract ended. And she now works as a second-grade teacher — a job she says she sought out, in part, because it seemed harder to automate.

Source: https://www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html

From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Good long read and overview of the major issues:

In mid-2019, an investigative journalism/tech non-profit called MuckRock and Open the Government (OTG), a non-partisan advocacy group, began submitting freedom of information requests to law enforcement agencies across the United States. The goal: to smoke out details about the use of an app rumoured to offer unprecedented facial recognition capabilities to anyone with a smartphone.

Co-founded by Michael Morisy, a former Boston Globe editor, MuckRock specializes in FOIs and its site has grown into a publicly accessible repository of government documents obtained under access to information laws.

As responses trickled in, it became clear that the MuckRock/OTG team had made a discovery about a tech company called Clearview AI. Based on documents obtained from Atlanta, OTG researcher Freddy Martinez began filing more requests, and discovered that as many as 200 police departments across the U.S. were using Clearview’s app, which compares images taken by smartphone cameras to a sprawling database of 3 billion open-source photographs of faces linked to various forms of personal information (e.g., Facebook profiles). It was, in effect, a point-click-and-identify system that radically transformed the work of police officers.
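
Clearview has not published how its system works internally, but image-to-database matching tools of this general kind typically convert each face into a numeric embedding and then return the stored faces whose embeddings sit closest to the query. The sketch below uses random vectors as stand-ins for real embeddings, purely to illustrate the lookup step; it is an assumption about the general technique, not a description of Clearview’s software.

```python
# Generic nearest-neighbour face lookup over embeddings (illustration only).
import numpy as np

rng = np.random.default_rng(0)
db_vecs = rng.normal(size=(1000, 128))       # stand-in for billions of stored face embeddings
db_profiles = [f"profile_{i}" for i in range(1000)]
query_vec = db_vecs[42] + rng.normal(scale=0.05, size=128)  # a noisy re-capture of face 42

def nearest_matches(query, db, profiles, k=5):
    # Cosine similarity between the query face and every stored face.
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm
    top = np.argsort(scores)[::-1][:k]
    return [(profiles[i], float(scores[i])) for i in top]

print(nearest_matches(query_vec, db_vecs, db_profiles))  # profile_42 ranks first
```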

The documents soon found their way to a New York Times reporter named Kashmir Hill, who, in January 2020, published a deeply investigated feature about Clearview, a tiny and secretive start-up with backing from Peter Thiel, the Silicon Valley billionaire behind Paypal and Palantir Technologies. Among the story’s revelations, Hill disclosed that tech giants like Google and Apple were well aware that such an app could be developed using artificial intelligence algorithms feeding off the vast storehouse of facial images uploaded to social media platforms and other publicly accessible databases. But they had opted against designing such a disruptive and easily disseminated surveillance tool.

The Times story set off what could best be described as an international chain reaction, with widespread media coverage about the use of Clearview’s app, followed by a wave of announcements from various governments and police agencies about how Clearview’s app would be banned. The reaction played out against a backdrop of news reports about China’s nearly ubiquitous facial recognition-based surveillance networks.

Canada was not exempt. To Surveil and Predict, a detailed examination of “algorithmic policing” published this past fall by the University of Toronto’s Citizen Lab, noted that officers with law enforcement agencies in Calgary, Edmonton and across Greater Toronto had tested Clearview’s app, sometimes without the knowledge of their superiors. Investigative reporting by the Toronto Star and Buzzfeed News found numerous examples of municipal law enforcement agencies, including the Toronto Police Service, using the app in crime investigations. The RCMP denied using Clearview even after it had entered into a contract with the company — a detail exposed by Vancouver’s The Tyee.

With federal and provincial privacy commissioners ordering investigations, Clearview and the RCMP subsequently severed ties, although Citizen Lab noted that many other tech companies still sell facial recognition systems in Canada. “I think it is very questionable whether [Clearview] would conform with Canadian law,” Michael McEvoy, British Columbia’s privacy commissioner, told the Star in February.

There was fallout elsewhere. Four U.S. cities banned police use of facial recognition outright, the Citizen Lab report noted. The European Union in February proposed a ban on facial recognition in public spaces but later hedged. A U.K. court in April ruled that police facial recognition systems were “unlawful,” marking a significant reversal in surveillance-minded Britain. And the European Data Protection Board, an EU agency, informed Commission members in June that Clearview’s technology violates Pan-European law enforcement policies. As Rutgers University law professor and smart city scholar Ellen Goodman notes, “There’s been a huge blowback” against the use of data-intensive policing technologies.

There’s nothing new about surveillance or police investigative practices that draw on highly diverse forms of electronic information, from wire taps to bank records and images captured by private security cameras. Yet during the past decade or so, dramatic advances in big data analytics, biometrics and AI, stoked by venture capital and law enforcement agencies eager to invest in new technology, have given rise to a fast-growing data policing industry. As the Clearview story showed, regulation and democratic oversight have lagged far behind the technology.

U.S. startups like PredPol and HunchLab, now owned by ShotSpotter, have designed so-called “predictive policing” algorithms that use law enforcement records and other geographical data (e.g. locations of schools) to make statistical guesses about the times and locations of future property crimes. Palantir’s law-enforcement service aggregates and then mines huge data sets consisting of emails, court documents, evidence repositories, gang member databases, automated licence plate readers, social media, etc., to find correlations or patterns that police can use to investigate suspects.

Yet as the Clearview fallout indicated, big data policing is rife with technical, ethical and political landmines, according to Andrew Ferguson, a University of the District of Columbia law professor. As he explains in his 2017 book, The Rise of Big Data Policing, analysts have identified an impressive list: biased, incomplete or inaccurate data, opaque technology, erroneous predictions, lack of governance, public suspicions about surveillance and over-policing, conflicts over access to proprietary algorithms, unauthorized use of data and the muddied incentives of private firms selling law enforcement software.

At least one major study found that some police officers were highly skeptical of predictive policing algorithms. Other critics point out that by deploying smart city sensors or other data-enabled systems, like transit smart cards, local governments may be inadvertently providing the police with new intelligence sources. Metrolinx, for example, has released Presto card user information to police while London’s Metropolitan Police has made thousands of requests for Oyster card data to track criminals, according to The Guardian. “Any time you have a microphone, camera or a live-feed, these [become] surveillance devices with the simple addition of a court order,” says New York civil rights lawyer Albert Cahn, executive director of the Surveillance Technology Oversight Project (STOP).

The authors of the Citizen Lab study, lawyers Kate Robertson, Cynthia Khoo and Yolanda Song, argue that Canadian governments need to impose a moratorium on the deployment of algorithmic policing technology until the public policy and legal frameworks can catch up.

Data policing was born in New York City in the early 1990s when then-police Commissioner William Bratton launched “Compstat,” a computer system that compiled up-to-date crime information and then visualized the findings in heat maps. These allowed unit commanders to deploy officers to neighbourhoods most likely to be experiencing crime problems.

Originally conceived as a management tool that would push a demoralized police force to make better use of limited resources, Compstat is credited by some as contributing to the marked reduction in crime rates in the Big Apple, although many other big cities experienced similar drops through the 1990s and early 2000s.

The 9/11 terrorist attacks sparked enormous investments in security technology. The past two decades have seen the emergence of a multi-billion-dollar industry dedicated to civilian security technology, everything from large-scale deployments of CCTVs and cybersecurity to the development of highly sensitive biometric devices — fingerprint readers, iris scanners, etc. — designed to bulk up the security around factories, infrastructure and government buildings.

Predictive policing and facial recognition technologies evolved on parallel tracks, both relying on increasingly sophisticated analytics techniques, artificial intelligence algorithms and ever deeper pools of digital data.

The core idea is that the algorithms — essentially formulas, such as decision-trees, that generate predictions — are “trained” on large tranches of data so they become increasingly accurate, for example at anticipating the likely locations of future property crimes or matching a face captured in a digital image from a CCTV to one in a large database of headshots. Some algorithms are designed to use a set of rules with variables (akin to following a recipe). Others, known as machine learning, are programmed to learn on their own (trial and error).
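
As a purely illustrative example of the first kind of model, the sketch below fits a small decision tree on synthetic incident records (a neighbourhood grid cell and an hour of day) and then ranks cells by predicted risk at a given hour. The data, features and model are invented and bear no relation to any vendor’s product.

```python
# Toy "predictive policing" model: a decision tree trained on synthetic
# reports of past incidents, then used to rank neighbourhood cells by risk.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 5000
grid_cell = rng.integers(0, 100, size=n)          # 100 neighbourhood cells
hour = rng.integers(0, 24, size=n)

# Synthetic labels: a handful of cells see more reported break-ins late at night.
reported = ((grid_cell < 10) & (hour >= 20)).astype(int)
noise = rng.random(n) < 0.05                      # a little label noise
reported = np.where(noise, 1 - reported, reported)

X = np.column_stack([grid_cell, hour])
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, reported)

# Score every cell at 11 p.m. and list the top predicted "hot spots".
cells_at_11pm = np.column_stack([np.arange(100), np.full(100, 23)])
risk = model.predict_proba(cells_at_11pm)[:, 1]
print(np.argsort(risk)[::-1][:5])
```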

The risk lies in the quality of the data used to train the algorithms — what was dubbed the “garbage-in-garbage-out” problem in a study by the Georgetown Law Center on Privacy and Technology. If there are hidden biases in the training data — e.g., it contains mostly Caucasian faces — the algorithm may misread Asian or Black faces and generate “false positives,” a well-documented shortcoming if the application involves identifying a suspect in a crime.

Similarly, if a poor or racialized area is subject to over-policing, there will likely be more crime reports, meaning the data from that neighbourhood is likely to reveal higher-than-average rates of certain types of criminal activity, a pattern that can then be used to justify still more over-policing and racial profiling. Under-reported crimes, by contrast, leave little trace in the data and barely influence these algorithms.
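
The feedback loop is easy to see in a simplified simulation. The numbers below are invented, not drawn from any real deployment: two neighbourhoods have identical true crime rates, but one starts with more patrols, so more incidents are recorded there, and next week’s patrols follow the recorded data rather than the true rates.

```python
# Toy simulation of the over-policing feedback loop. Two neighbourhoods (A, B)
# have the SAME true crime rate, but patrols are allocated in proportion to
# previously *recorded* crime, so the initial disparity is locked in and the
# recorded gap keeps widening.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 10.0                    # identical true weekly incidents in A and B
detection_per_patrol = 0.08         # share of incidents recorded per patrol unit
patrols = np.array([6.0, 4.0])      # A starts with more patrols than B
recorded = np.zeros(2)

for week in range(20):
    # Incidents actually occur at the same rate in both neighbourhoods...
    incidents = rng.poisson(true_rate, 2)
    # ...but more patrols means more of those incidents get recorded.
    p_detect = np.minimum(1.0, detection_per_patrol * patrols)
    recorded += rng.binomial(incidents, p_detect)
    # Next week's 10 patrol units follow the recorded data, not the true rate.
    patrols = 10.0 * (recorded + 1e-9) / (recorded.sum() + 2e-9)

print("recorded incidents:", recorded)   # A ends up well ahead of B
print("final patrol split:", patrols)    # despite identical true crime rates
```

In this toy run, neighbourhood A ends up looking roughly 50 percent more crime-prone than B in the recorded data, even though the underlying behaviour is identical; only the measurement differs.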

Other predictive and AI-based law enforcement technologies, such as “social network analysis” (mapping an individual’s web of personal relationships, gleaned, for example, from social media platforms or from cross-referenced lists of gang members), promised to flag which individuals known to police were at risk of becoming embroiled in violent crimes.

This type of sleuthing seemed to hold out some promise. In one study, criminologists at Cardiff University found that “disorder-related” posts on Twitter reflected crime incidents in metropolitan London — a finding that suggests how big data can help map and anticipate criminal activity. In practice, however, such surveillance tactics can prove explosive. This happened in 2016, when U.S. civil liberties groups revealed documents showing that Geofeedia, a location-based data company, had contracts with numerous police departments to provide analytics based on social media posts from Twitter, Facebook, Instagram and other platforms. Among the individuals targeted by the company’s data: protestors and activists. Chastened, the social media firms rapidly blocked Geofeedia’s access.

In 2013, the Chicago Police Department began experimenting with predictive models that assigned risk scores for individuals based on criminal records or their connections to people involved in violent crime. By 2019, the CPD had assigned risk scores to almost 400,000 people, and claimed to be using the information to surveil and target “at-risk” individuals (including potential victims) or connect them to social services, according to a January 2020 report by Chicago’s inspector general.

These tools can draw incorrect or biased inferences in the same way that overreliance on police checks in racialized neighbourhoods results in what could be described as guilt by address. The Citizen Lab study noted that the Ontario Human Rights Commission identified social network analysis as a potential cause of racial profiling. In the case of the CPD’s predictive risk model, the system was discontinued in 2020 after media reports and internal investigations showed that people were added to the list based solely on arrest records, meaning they might not even have been charged, much less convicted of a crime.

Early applications of facial recognition software included passport security systems or searches of mug shot databases. But in 2011, the Insurance Corporation of B.C. offered Vancouver police the use of facial recognition software to match photos of Stanley Cup rioters with driver’s licence images — a move that prompted a stern warning from the province’s privacy commissioner. In 2019, the Washington Post revealed that FBI and Immigration and Customs Enforcement (ICE) investigators regarded state databases of digitized driver’s licences as a “gold mine for facial recognition photos” which had been scanned without consent.

In 2013, Canada’s federal privacy commissioner released a report on police use of facial recognition that anticipated the issues raised by the Clearview app in early 2020: “[S]trict controls and increased transparency are needed to ensure that the use of facial recognition conforms with our privacy laws and our common sense of what is socially acceptable.” (Canada’s data privacy laws are only now being considered for an update.)

The technology, meanwhile, continues to gallop ahead. New York civil rights lawyer Albert Cahn points to the emergence of “gait recognition” systems, which use visual analysis to identify individuals by their walk; these systems are reportedly in use in China. “You’re trying to teach machines how to identify people who walk with the same gait,” he says. “Of course, a lot of this is completely untested.”

The predictive policing story evolved somewhat differently. The methodology grew out of analysis commissioned by the Los Angeles Police Department in the early 2010s. Two data scientists, Jeff Brantingham and George Mohler, used mathematical modelling to forecast copycat crimes based on data about the location and frequency of previous burglaries in three L.A. neighbourhoods. They published their results and soon set up PredPol to commercialize the technology. Media attention soon followed, as news stories played up the seemingly miraculous power of a Minority Report-like system that could do a decent job anticipating incidents of property crime.

Operationally, police forces used PredPol’s system by dividing precincts into 150-square-metre “cells” that officers were instructed to patrol more intensively during periods when PredPol’s algorithm forecast criminal activity. In the post-2009 credit-crisis period, the technology seemed to promise that cash-strapped American municipalities would get more bang for their policing buck.
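
A rough sketch of those grid mechanics, with made-up coordinates rather than real crime data, might look like the following; this is not PredPol’s proprietary algorithm, just the basic bin-and-rank step that any cell-based system has to perform.

```python
# Illustrative only: bin synthetic incident coordinates into 150 m grid cells
# and flag the busiest cells for extra patrol during the next shift.
import numpy as np

CELL = 150.0  # metres per cell side

rng = np.random.default_rng(2)
# Hypothetical incident locations (metres east/north of a reference corner),
# clustered around two hot spots plus scattered background incidents.
xy = np.vstack([
    rng.normal([1200.0, 800.0], 120.0, size=(60, 2)),
    rng.normal([3100.0, 2400.0], 200.0, size=(40, 2)),
    rng.uniform(0.0, 4000.0, size=(50, 2)),
])

# Map each incident to a (row, col) cell index, then count incidents per cell.
cells = np.floor(xy / CELL).astype(int)
uniq, counts = np.unique(cells, axis=0, return_counts=True)

# Rank cells by recent activity and print the five to patrol most intensively.
top = np.argsort(counts)[::-1][:5]
for (row, col), n in zip(uniq[top], counts[top]):
    print(f"cell ({row}, {col}): {n} recent incidents")
```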

Other firms, from startups to multinationals like IBM, entered the market with their own innovations, for example incorporating additional types of data, such as socio-economic indicators or geographical features (from parks and picnic tables to schools and bars) that may be correlated with elevated rates of certain types of crime. The reported crime data is routinely updated so the algorithm remains current.

Police departments across the U.S. and Europe have invested in various predictive policing tools, as have several in Canada, including Vancouver, Edmonton and Saskatoon. Whether they have made a difference is an open question. Like several other studies, a 2017 review by analysts with the Institute for International Research on Criminal Policy at Ghent University in Belgium found inconclusive results: some places showed improvements over more conventional policing, while in other cities the predictive algorithms reduced policing costs but made little measurable difference in outcomes.

Revealingly, the city where predictive policing really took hold, Los Angeles, has rolled back police use of these techniques. Last spring, the LAPD tore up its contract with PredPol in the wake of mounting community and legal pressure from the Stop LAPD Spying Coalition, which found that individuals who posed no real threat, mostly Black or Latino, were ending up on police watch lists because of flaws in the way the system assigned risk scores.

“Algorithms have no place in policing,” Coalition founder Hamid Khan said in an interview this summer with MIT Technology Review. “I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.” (Similar advocacy campaigns, including proposed legislation governing surveillance technology and gang databases, are underway in New York City.)

There has been one other interesting consequence: police resistance. B.C.-born sociologist Sarah Brayne, an assistant professor at the University of Texas (Austin), spent two-and-a-half years embedded with the LAPD, exploring the reaction of law enforcement officials to algorithmic policing techniques by conducting ride-alongs as well as interviews with dozens of veteran cops and data analysts. In results published last year, Brayne and collaborator Angèle Christin observed “strong processes of resistance fuelled by fear of professional devaluation and threats of performance tracking.”

Before shifts, officers were told which grids to drive through, when and how frequently, and the locations of their vehicles were tracked by on-board GPS devices to ensure compliance. But Brayne found that some would turn off the tracking device, which they regarded with suspicion. Others just didn’t buy what the technology was selling. “Patrol officers frequently asserted that they did not need an algorithm to tell them where crime occurs,” she noted.

In an interview, Brayne said that police departments increasingly see predictive technology as part of the tool kit, despite questions about effectiveness or other concerns, like racial profiling. “Once a particular technology is created,” she observed, “there’s a tendency to use it.” But Brayne added one other prediction, which has to do with the future of algorithmic policing in the post-George Floyd era — “an intersection,” as she says, “between squeezed budgets and this movement around defunding the police.”

The widening use of big data policing and digital surveillance poses, according to Citizen Lab’s analysis as well as critiques from U.S. and U.K. legal scholars, a range of civil rights questions, from privacy and freedom from discrimination to due process. Yet governments have been slow to acknowledge these consequences. Big Brother Watch, a British civil liberties group, notes that in the U.K., the national government’s stance has been that police decisions about the deployment of facial recognition systems are “operational.”

At the core of the debate is a basic public policy principle: transparency. Do individuals have the tools to understand and debate the workings of a suite of technologies that can have tremendous influence over their lives and freedoms? It’s what Andrew Ferguson and others refer to as the “black box” problem. The algorithms, designed by software engineers, rely on certain assumptions, methodologies and variables, none of which are visible, much less legible to anyone without advanced technical know-how. Many, moreover, are proprietary because they are sold to local governments by private companies. The upshot is that these kinds of algorithms have not been regulated by governments despite their use by public agencies.

New York City Council moved to tackle this question in May 2018 by establishing an “automated decision systems” task force to examine how municipal agencies and departments use AI and machine learning algorithms. The task force was to devise procedures for identifying hidden biases and to disclose how the algorithms generate choices so the public can assess their impact. The group included officials from the administration of Mayor Bill de Blasio, tech experts and civil liberties advocates. It held public meetings throughout 2019 and released a report that November. NYC was, by most accounts, the first city to have tackled this question, and the initiative was, initially, well received.

Going in, Cahn, the New York City civil rights lawyer, saw the task force as “a unique opportunity to examine how AI was operating in city government.” But he describes the outcome as “disheartening.” “There was an unwillingness to challenge the NYPD on its use of (automated decision systems).” Some other participants agreed, describing the effort as a waste.

If institutional obstacles thwarted an effort in a government the size of the City of New York, what does better and more effective oversight look like? A couple of answers have emerged.

In his book on big data policing, Andrew Ferguson writes that local governments should start at first principles, and urges police forces and civilian oversight bodies to address five fundamental questions, ideally in a public forum:

  • Can you identify the risks that your big data technology is trying to address?
  • Can you defend the inputs into the system (accuracy of data, soundness of methodology)?
  • Can you defend the outputs of the system (how they will impact policing practice and community relationships)?
  • Can you test the technology (offering accountability and some measure of transparency)?
  • Is police use of the technology respectful of the autonomy of the people it will impact?

These “foundational” questions, he writes, “must be satisfactorily answered before green-lighting any purchase or adopting a big data policing strategy.”

In addition to calling for a moratorium and a judicial inquiry into the uses of predictive policing and facial recognition systems, the authors of the Citizen Lab report made several other recommendations, including: the need for full transparency; provincial policies governing the procurement of such systems; limits on the use of ADS in public spaces; and the establishment of oversight bodies that include members of historically marginalized or victimized groups.

The federal government, meanwhile, has made advances in this arena that University of Ottawa law professor and privacy expert Teresa Scassa describes as “really interesting.”

In 2019, the Treasury Board Secretariat issued the “Directive on Automated Decision-Making,” which came into effect in April 2020 and requires federal departments and agencies, except those involved in national security, to conduct “algorithmic impact assessments” (AIAs) to evaluate unintended bias before procuring or approving the use of technologies that rely on AI or machine learning. The policy requires the government to publish these assessments, release software code developed internally and continually monitor the performance of these systems. In the case of proprietary algorithms developed by private suppliers, federal officials have extensive rights to access and test the software.

In a forthcoming paper, Scassa points out that the directive includes due process rules and looks for evidence of whether systemic bias has become embedded in these technologies, which can happen if the algorithms are trained on skewed data. She also observes that not all algorithm-driven systems generate life-altering decisions, e.g., chatbots that are now commonly used in online application processes. But where they are deployed in “high impact” contexts such as policing, e.g., with algorithms that aim to identify individuals caught on surveillance videos, the policy requires “a human in the loop.”
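
What a “human in the loop” requirement can look like in practice is sketched below; the impact levels, threshold and function names are hypothetical, chosen only to illustrate the idea of gating automated action behind human review, and are not taken from the directive’s actual text.

```python
# Schematic sketch of a "human in the loop" gate. The impact levels and the
# HIGH threshold are hypothetical, not the directive's actual criteria.
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable


class ImpactLevel(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    VERY_HIGH = 4


@dataclass
class Recommendation:
    subject_id: str
    score: float          # raw model output (e.g., a match confidence)
    impact: ImpactLevel   # assessed impact of acting on this output


def decide(rec: Recommendation, human_review: Callable[[Recommendation], bool]) -> str:
    """Act automatically only on lower-impact outputs; above the threshold,
    nothing happens until a human reviewer approves or overrides the model."""
    if rec.impact >= ImpactLevel.HIGH:
        return "approved by reviewer" if human_review(rec) else "overridden by reviewer"
    return "auto-processed"


# Example: a facial-recognition "match" in a policing context is treated as
# high impact, so it is routed to a person instead of being acted on directly.
match = Recommendation(subject_id="case-042", score=0.91, impact=ImpactLevel.HIGH)
print(decide(match, human_review=lambda rec: False))
```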

The directive, says Scassa, “is getting interest elsewhere,” including in the U.S.; Ellen Goodman, at Rutgers, is hopeful this approach will gain traction with the Biden administration. In Canada, where provincial governments oversee law enforcement, Ottawa’s low-key but seemingly thorough regulation points to a way for citizens to shine a flashlight into the black box that is big data policing.

Source: From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Doesn’t appear to be universally well received:

Google’s chief executive Sundar Pichai on Wednesday apologized in the aftermath of the dismissal of a prominent Black scientist whose ouster set off widespread condemnation from thousands of Google employees and outside researchers.

Timnit Gebru, who helped lead Google’s Ethical Artificial Intelligence team, said that she was fired last week after having a dispute over a research paper and sending a note to other Google employees criticizing the company for its treatment of people of color and women, particularly in hiring.

“I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust,” Pichai wrote to Google employees on Wednesday, according to a copy of the email reviewed by NPR.

Since Gebru was pushed out, more than 2,000 Google employees have signed an open letter demanding answers, calling Gebru’s termination “research censorship” and a “retaliatory firing.”

In his letter, Pichai said the company is conducting a review of how Gebru’s dismissal was handled in order to determine whether there could have been a “more respectful process.”

Pichai went on to say that Google needs to accept responsibility for a prominent Black female leader leaving Google on bad terms.

“This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions,” Pichai wrote.

Pichai said Google earlier this year committed to taking a look at all of the company’s systems for hiring and promoting employees to try to increase representation among Black workers and other underrepresented groups.

“The events of the last week are a painful but important reminder of the progress we still need to make,” Pichai wrote in his letter, which was earlier reported by Axios.

In a series of tweets, Gebru said she did not appreciate Pichai’s email to her former colleagues.

“Don’t paint me as an ‘angry Black woman’ for whom you need ‘de-escalation strategies’ for,” Gebru said.

“Finally it does not say ‘I’m sorry for what we did to her and it was wrong.’ What it DOES say is ‘it seeded doubts and led some in our community to question their place at Google.’ So I see this as ‘I’m sorry for how it played out but I’m not sorry for what we did to her yet,'” Gebru wrote.

One Google employee who requested anonymity for fear of retaliation said Pichai’s letter will do little to address the simmering strife among Googlers since Gebru’s firing.

The employee expressed frustration that Pichai did not directly apologize for Gebru’s termination and continued to suggest she was not fired by the company, which Gebru and many of her colleagues say is not true. The employee described Pichai’s letter as “meaningless PR.”

Source: Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.

Of note:

A well-respected Google researcher said she was fired by the company after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.

Timnit Gebru, who was a co-leader of Google’s Ethical A.I. team, said in a tweet on Wednesday evening that she was fired because of an email she had sent a day earlier to a group that included company employees.

In the email, reviewed by The New York Times, she expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence.

“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”

Her departure from Google highlights growing tension between Google’s outspoken work force and its buttoned-up senior management, while raising concerns over the company’s efforts to build fair and reliable technology. It may also have a chilling effect on both Black tech workers and researchers who have left academia in recent years for high-paying jobs in Silicon Valley.

“Her firing only indicates that scientists, activists and scholars who want to work in this field — and are Black women — are not welcome in Silicon Valley,” said Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab. “It is very disappointing.”

A Google spokesman declined to comment. In an email sent to Google employees, Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, called her departure “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible A.I. research as an org and as a company.”

After years of an anything-goes environment where employees engaged in freewheeling discussions in companywide meetings and online message boards, Google has started to crack down on workplace discourse. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.

On Wednesday, the National Labor Relations Board said Google had most likely violated labor law when it fired two employees who were involved in labor organizing. The federal agency said Google illegally surveilled the employees before firing them.

Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.

Like other technology companies, Google has also faced criticism for not doing enough to resolve the lack of women and racial minorities among its ranks.

The problems of racial inequality, especially the mistreatment of Black employees at technology companies, have plagued Silicon Valley for years. Coinbase, the most valuable cryptocurrency start-up, has experienced an exodus of Black employees in the last two years over what the workers said was racist and discriminatory treatment.

Researchers worry that the people who are building artificial intelligence systems may be building their own biases into the technology. Over the past several years, several public experiments have shown that the systems often interact differently with people of color — perhaps because they are underrepresented among the developers who create those systems.

Dr. Gebru, 37, was born and raised in Ethiopia. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to pinpoint and remove bias in artificial intelligence. She joined Google later that year, and helped build the Ethical A.I. team.

After hiring researchers like Dr. Gebru, Google has painted itself as a company dedicated to “ethical” A.I. But it is often reluctant to publicly acknowledge flaws in its own systems.

In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.

These systems learn the vagaries of language by analyzing enormous amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.

After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.

The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked.

In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.

“It was dehumanizing,” Dr. Gebru said. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”

Dr. Gebru’s departure from Google comes at a time when A.I. technology is playing a bigger role in nearly every facet of Google’s business. The company has hitched its future to artificial intelligence — whether with its voice-enabled digital assistant or its automated placement of advertising for marketers — as the breakthrough technology to make the next generation of services and devices smarter and more capable.

Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing. Earlier this year, Mr. Pichai called for greater regulation and responsible handling of artificial intelligence, arguing that society needs to balance potential harms with new opportunities.

Google has repeatedly committed to eliminating bias in its systems. The trouble, Dr. Gebru said, is that most of the people making the ultimate decisions are men. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.

Julien Cornebise, an honorary associate professor at University College London and a former researcher with DeepMind, a prominent A.I. lab owned by the same parent company as Google, was among many artificial intelligence researchers who said Dr. Gebru’s departure reflected a larger problem in the industry.

“This shows how some large tech companies only support ethics and fairness and other A.I.-for-social-good causes as long as their positive P.R. impact outweighs the extra scrutiny they bring,” he said. “Timnit is a brilliant researcher. We need more like her in our field.”

Source: https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html

Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

There are A.I. systems enabling self-driving cars to detect pedestrians — last year Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other than I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business-as-usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers, to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses, to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, 250 children and adolescents who were interviewed about bullying were often willing to confide information in a robot that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html

Scientists combat anti-Semitism with artificial intelligence

Will be interesting to assess the effectiveness of this approach, and whether the definition of antisemitism used in the algorithms is narrow or more expansive, including how it deals with criticism of Israeli government policies.

Additionally, it may provide an approach that could serve as a model for efforts to combat anti-Black, anti-Muslim and other forms of hate:

An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project Decoding Anti-Semitism includes discourse analysts, computational linguists and historians who will develop a “highly complex, AI-driven approach to identifying online anti-Semitism,” the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

“In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism — also taking into account the implicit forms that might become more explicit over time,” said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King’s College in London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn’t be able to assess because of their sheer quantity, the foundation said.

“Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways – for example through the use of codes (“juice” instead of “Jews”) and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images,” the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.

Groups tracking anti-Semitism on the internet have found that the problem has grown online, as seen in the rise of conspiracy myths accusing Jews of creating and spreading COVID-19.

The project will focus initially on Germany, France and the U.K., and will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.

Source: Scientists combat anti-Semitism with artificial intelligence