AI’s anti-Muslim bias problem

Of note (and unfortunately, not all that surprising):

Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”

Which word would you add? “Bar,” maybe?

It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.”

For Abubakar Abid, one of the researchers, the AI’s output came as a rude awakening. “We were just trying to see if it could tell jokes,” he recounted to me. “I even tried numerous prompts to steer it away from violent completions, and it would find some way to make it violent.”
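As an aside: this kind of prompt-completion probe is easy to approximate with open-source tools. Below is a minimal sketch using the Hugging Face transformers library with GPT-2 as a stand-in for GPT-3 (the researchers used OpenAI's hosted model, whose access works differently); the model choice, sampling settings and seed are illustrative assumptions, not the study's actual setup.

```python
# Rough sketch of a prompt-completion probe, using GPT-2 as a small,
# freely available stand-in for GPT-3 (an assumption for illustration).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled completions reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Two Muslims walked into a"
completions = generator(
    prompt,
    max_new_tokens=20,        # short continuations, as in the sentence-completion test
    num_return_sequences=5,   # repeat the completion several times
    do_sample=True,           # sample rather than greedy-decode, so each try can differ
)

for c in completions:
    print(c["generated_text"])
```

Repeating the call with different prompts is essentially what Abid describes: vary the lead-in and inspect how often the sampled continuations turn violent.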

Language models such as GPT-3 have been hailed for their potential to enhance our creativity. Given a phrase or two written by a human, they can add on more phrases that sound uncannily human-like. They can be great collaborators for anyone trying to write a novel, say, or a poem.

Source: AI’s anti-Muslim bias problem

What unconscious bias training gets wrong… and how to fix it

Good overview of the latest research and lessons. Main conclusion: there is no quick fix; it has to be part of ongoing training and awareness:

Here’s a fact that cannot be disputed: if your name is James or Emily, you will find it easier to get a job than someone called Tariq or Adeola. Between November 2016 and December 2017, researchers sent out fake CVs and cover letters for 3,200 positions. Despite demonstrating exactly the same qualifications and experience, the “applicants” with common Pakistani or Nigerian names needed to send out 60% more applications to receive the same number of callbacks as applicants with more stereotypically British names.

Some of the people who had unfairly rejected Tariq or Adeola will have been overtly racist, and so deliberately screened people based on their ethnicity. According to a large body of psychological research, however, many will have also reacted with an implicit bias, without even being aware of the assumptions they were making.

Such findings have spawned a plethora of courses offering “unconscious bias and diversity training”, which aim to reduce people’s racist, sexist and homophobic tendencies. If you work for a large organisation, you’ve probably taken one yourself. Last year, Labour leader Keir Starmer volunteered to undergo such training after he appeared to dismiss the importance of the Black Lives Matter movement. “There is always the risk of unconscious bias, and just saying: ‘Oh well, it probably applies to other people, not me,’ is not the right thing to do,” he said. Even Prince Harry has been educating himself about his potential for implicit bias – and advising others to do the same.

Sounds sensible, doesn’t it? You remind people of their potential for prejudice so they can change their thinking and behaviour. Yet there is now a severe backlash against the very idea of unconscious bias and diversity training, with an increasing number of media articles lamenting these “woke courses” as a “useless” waste of money. The sceptics argue that there is little evidence that unconscious bias training works, leading some organisations – including the UK’s civil service – to cancel their schemes.

So what’s the truth? Is it ever possible to correct our biases? And if so, why have so many schemes failed to make a difference?

While the contents of unconscious bias and diversity training courses vary widely, most share a few core components. Participants will often be asked to complete the implicit association test (IAT), for example. By measuring people’s reaction times during a word categorisation task, an algorithm can calculate whether people have more positive or negative associations with a certain group – such as people of a different ethnicity, sexual orientation or gender. (You can try it for yourself on the Harvard website.)
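For readers curious what the algorithm behind the IAT actually does, here is a deliberately simplified sketch of the core idea, loosely based on the widely used D-score: the difference in mean reaction times between the "incompatible" and "compatible" pairings, divided by the pooled standard deviation. Real IAT scoring adds error penalties and trimming of very fast or slow trials, which are omitted here, and all the numbers are invented.

```python
# Simplified IAT-style scoring: compare reaction times (in milliseconds)
# between two sorting conditions. Data are invented for illustration.
import statistics

compatible_ms = [612, 590, 645, 601, 633, 587, 620]     # stereotype-consistent pairing
incompatible_ms = [742, 699, 780, 731, 705, 758, 766]   # stereotype-inconsistent pairing

pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
d_score = (statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)) / pooled_sd

print(f"D-score: {d_score:.2f}")  # larger positive values suggest a stronger implicit association
```

The week-to-week variability mentioned later in the piece is exactly what you would expect from a score built on a handful of noisy reaction times.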

After taking the IAT, participants will be debriefed about their results. They may then learn about the nature of unconscious bias and stereotypes more generally, and the consequences within the workplace, along with some suggestions to reduce the impact.

All of which sounds useful in theory. To confirm the benefits, however, you need to compare the attitudes and behaviours of employees who have taken unconscious bias and diversity training with those who have not – in much the same way that drugs are tested against a placebo.

Prof Edward Chang at Harvard Business School has led one of the most rigorous trials, delivering an hour-long online diversity course to thousands of employees at an international professional services company. Using tools like the IAT, the training was meant to educate people about sexist stereotypes and their consequences – and surveys suggest that it did change some attitudes. The participants reported greater acknowledgment of their own bias after the course, and greater support of women in the workplace, than people who had taken a more general course on “psychological safety” and “active listening”.

Unfortunately, this didn’t translate to the profound behavioural change you might expect. Three weeks after taking the course, the employees were given the chance of taking part in an informal mentoring scheme. Overall, the people who had taken the diversity course were no more likely to take on a female mentee. Six weeks after taking the course, the participants were also given the opportunity to nominate colleagues for recognition of their “excellence”. It could have been the perfect opportunity to offer some encouragement to overlooked women in the workplace. Once again, however, the people who had taken the diversity training were no more likely to nominate a female colleague than the control group.

“We did our best to design a training that would be effective,” Chang tells me. “But our results suggest that the sorts of one-off trainings that are commonplace in organisations are not particularly effective at leading to long-lasting behaviour change.”

Chang’s results chime with the broader conclusions of a recent report by Britain’s Equality and Human Rights Commission (EHRC), which examined 18 papers on unconscious bias training programmes. Overall, the authors concluded that the courses are effective at raising awareness of bias, but the evidence of long-lasting behavioural change is “limited”.

Even the value of the IAT – which is central to so many of these courses – has been subject to scrutiny. The courses tend to use shortened versions of the test, and the same person’s results can vary from week to week. So while it might be a useful educational aid to explain the concept of unconscious bias, it is wrong to present the IAT as a reliable diagnosis of underlying prejudice.

It certainly sounds damning; little wonder certain quarters of the press have been so willing to declare these courses a waste of time and money. Yet the psychologists researching their value take a more nuanced view, and fear their conclusions have been exaggerated. While it is true that many schemes have ended in disappointment, some have been more effective, and researchers believe we should learn from these successes and failures to design better interventions in the future – rather than simply dismissing them altogether.

For one thing, many of the current training schemes are simply too brief to have the desired effect. “It’s usually part of the employee induction and lasts about 30 minutes to an hour,” says Dr Doyin Atewologun, a co-author of the EHRC’s report and founding member of the British Psychological Society’s diversity and inclusion at work group. “It’s just tucked away into one of the standard training materials.” We should not be surprised the lessons are soon forgotten. In general, studies have shown that diversity training can have more pronounced effects if it takes place over a longer period of time. A cynic might suspect that these short programmes are simple box-ticking exercises, but Atewologun thinks the good intentions are genuine – it’s just that the organisations haven’t been thinking critically about the level of commitment that would be necessary to bring about change, or even how to measure the desired outcomes.

Thanks to this lack of forethought, many of the existing courses may have also been too passive and theoretical. “If you are just lecturing at someone about how pervasive bias is, but you’re not giving them the tools to change, I think there can be a tendency for them to think that bias is normal and thus not something they need to work on,” says Prof Alex Lindsey at the University of Memphis. Attempts to combat bias could therefore benefit from more evidence-based exercises that increase participants’ self-reflection, alongside concrete steps for improvement.

Lindsey’s research team recently examined the benefits of a “perspective-taking” exercise, in which participants were asked to write about the challenges faced by someone within a minority group. They found that the intervention brought about lasting changes to people’s attitudes and behavioural intentions for months after the training. “We might not know exactly what it’s like to be someone of a different race, sex, religion, or sexual orientation from ourselves, but everyone, to some extent, knows what it feels like to be excluded in a social situation,” Lindsey says. “Once trainees realise that some people face that kind of ostracism on a more regular basis as a result of their demographic characteristics, I think that realisation can lead them to respond more empathetically in the future.”

Lindsey has found that you should also encourage participants to reflect on the ways their own behaviour may have been biased in the past, and to set themselves future goals during their training. Someone will be much more likely to act in an inclusive way if they decide, in advance, to challenge any inappropriate comments about a minority group, for example. This may be more powerful still, he says, if there is some kind of follow-up to check in with participants’ progress – an opportunity that the briefer courses completely miss. (Interestingly, he has found that these reflective techniques can be especially effective among people who are initially resistant to the idea of diversity training.)

More generally, these courses may often fail to bring about change because people become too defensive about the very idea that they may be prejudiced. Without excusing the biases, the courses might benefit from explaining how easily stereotypes can be absorbed – even by good, well-intentioned people – while also emphasising the individual responsibility to take action. Finally, they could teach people to recognise the possibility of “moral licensing”, in which an ostensibly virtuous act, such as attending the diversity course itself, or promoting someone from a minority, excuses a prejudiced behaviour afterwards, since you’ve already “proven” yourself to be a liberal and caring person. 

Ultimately, the psychologists I’ve spoken to all agree that organisations should stop seeing unconscious bias and diversity training as a quick fix, and instead use it as the foundation for broader organisational change.

“Anyone who has been in any type of schooling system knows that even the best two- or three-hour class is not going to change our world for ever,” says Prof Calvin Lai, who investigates implicit bias at Washington University in St Louis. “It’s not magic.” But it may act as a kind of ice-breaker, he says, helping people to be more receptive to other initiatives – such as those aimed at a more inclusive recruitment process.

Chang agrees. “Diversity training is unlikely to be an effective standalone solution,” he says. “But that doesn’t mean that it can’t be an effective component of a multipronged approach to improving diversity, equity and inclusion in organisations.”

Atewologun compares it to the public health campaigns to combat obesity and increase fitness. You can provide people with a list of the calories in different foods and the benefits of exercise, she says – but that information, alone, is unlikely to lead to significant weight loss, without continued support that will help people to act on that information. Similarly, education about biases can be a useful starting point, but it’s rather absurd to expect that ingrained habits could evaporate in a single hour of education.

“We could be a lot more explicit that it is step one,” Atewologun adds. “We need multiple levels of intervention – it’s an ongoing project.”

Source: https://www.theguardian.com/science/2021/apr/25/what-unconscious-bias-training-gets-wrong-and-how-to-fix-it

Using A.I. to Find Bias in A.I.

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.

This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about A.I., it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O’Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

“Changing mentalities does not happen overnight — and that is even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind but many minds.”

When she started advising businesses on A.I. bias more than two years ago, Ms. O’Sullivan was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.

Designers can be blind to these problems. The workers in India — where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States — were classifying the photos as they saw fit.

Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment. 

She now believes that after years of public complaints over bias in A.I. — not to mention the threat of regulation — attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument did not hold up.

“They are acknowledging that you need to turn over the rocks and see what is underneath,” Ms. O’Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. “We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. “Many of the things that the average person cares about — such as fairness — are not yet being measured in a disciplined or a large-scale way.”

Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. — and the difficulty of Ms. O’Sullivan’s task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored — or when those discussing the issues carry the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “It is a very important question I am not sure I can answer.”

Source: https://www.nytimes.com/2021/06/30/technology/artificial-intelligence-bias.html

Bias Is a Big Problem. But So Is ‘Noise.’

Useful discussion in the context of human and AI decision-making. AI offers greater consistency (less noise or variability than humans) but carries the risk of bias being baked into the algorithms; the piece underlines the importance of distinguishing the two when assessing decision-making:

The word “bias” commonly appears in conversations about mistaken judgments and unfortunate decisions. We use it when there is discrimination, for instance against women or in favor of Ivy League graduates. But the meaning of the word is broader: A bias is any predictable error that inclines your judgment in a particular direction. For instance, we speak of bias when forecasts of sales are consistently optimistic or investment decisions overly cautious.

Society has devoted a lot of attention to the problem of bias — and rightly so. But when it comes to mistaken judgments and unfortunate decisions, there is another type of error that attracts far less attention: noise.

To see the difference between bias and noise, consider your bathroom scale. If on average the readings it gives are too high (or too low), the scale is biased. If it shows different readings when you step on it several times in quick succession, the scale is noisy. (Cheap scales are likely to be both biased and noisy.) While bias is the average of errors, noise is their variability.
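The scale analogy translates directly into two summary statistics, which is worth making concrete. A toy example, with the true weight and the readings invented for illustration:

```python
# Bias is the average of the errors; noise is their variability.
import statistics

true_weight = 70.0
readings = [71.2, 70.8, 71.5, 70.9, 71.1]  # repeated weighings in quick succession

errors = [r - true_weight for r in readings]
bias = statistics.mean(errors)    # systematic error: this scale reads about 1 kg too high
noise = statistics.stdev(errors)  # spread of the errors from one reading to the next

print(f"bias  = {bias:+.2f} kg")
print(f"noise = {noise:.2f} kg")
```

A scale could show zero bias (errors averaging out to nothing) and still be useless if the noise is large, which is the authors' point.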

Although it is often ignored, noise is a large source of malfunction in society. In a 1981 study, for example, 208 federal judges were asked to determine the appropriate sentences for the same 16 cases. The cases were described by the characteristics of the offense (robbery or fraud, violent or not) and of the defendant (young or old, repeat or first-time offender, accomplice or principal). You might have expected judges to agree closely about such vignettes, which were stripped of distracting details and contained only relevant information.

But the judges did not agree. The average difference between the sentences that two randomly chosen judges gave for the same crime was more than 3.5 years. Considering that the mean sentence was seven years, that was a disconcerting amount of noise.

Noise in real courtrooms is surely only worse, as actual cases are more complex and difficult to judge than stylized vignettes. It is hard to escape the conclusion that sentencing is in part a lottery, because the punishment can vary by many years depending on which judge is assigned to the case and on the judge’s state of mind on that day. The judicial system is unacceptably noisy.

Consider another noisy system, this time in the private sector. In 2015, we conducted a study of underwriters in a large insurance company. Forty-eight underwriters were shown realistic summaries of risks to which they assigned premiums, just as they did in their jobs.

How much of a difference would you expect to find between the premium values that two competent underwriters assigned to the same risk? Executives in the insurance company said they expected about a 10 percent difference. But the typical difference we found between two underwriters was an astonishing 55 percent of their average premium — more than five times as large as the executives had expected.
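A rough sketch of how that comparison might be computed, assuming the measure is the gap between two randomly chosen underwriters' premiums expressed as a share of their average (the premium figures below are invented, not from the study):

```python
# Pairwise relative differences between underwriters pricing the same risk.
from itertools import combinations
from statistics import mean

premiums = [9500, 16700, 12100, 14800]  # four underwriters, same case, invented numbers

relative_diffs = [
    abs(a - b) / mean([a, b])           # gap as a share of the pair's average premium
    for a, b in combinations(premiums, 2)
]

print(f"typical relative difference: {mean(relative_diffs):.0%}")
```

The executives' intuition amounted to expecting that typical relative difference to be around 10 percent; the audit put it at 55 percent.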

Many other studies demonstrate noise in professional judgments. Radiologists disagree on their readings of images and cardiologists on their surgery decisions. Forecasts of economic outcomes are notoriously noisy. Sometimes fingerprint experts disagree about whether there is a “match.” Wherever there is judgment, there is noise — and more of it than you think.

Noise causes error, as does bias, but the two kinds of error are separate and independent. A company’s hiring decisions could be unbiased overall if some of its recruiters favor men and others favor women. However, its hiring decisions would be noisy, and the company would make many bad choices. Likewise, if one insurance policy is overpriced and another is underpriced by the same amount, the company is making two mistakes, even though there is no overall bias.

Where does noise come from? There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. Organizations and institutions, public and private, will make better decisions if they take noise seriously.

Daniel Kahneman is an emeritus professor of psychology at Princeton and a recipient of the 2002 Nobel Memorial Prize in Economic Sciences. Olivier Sibony is a professor of strategy at the HEC Paris business school. Cass R. Sunstein is a law professor at Harvard. They are the authors of the forthcoming book “Noise: A Flaw in Human Judgment,” on which this essay is based.

Source: https://www.nytimes.com/2021/05/15/opinion/noise-bias-kahneman.html?action=click&module=Opinion&pgtype=Homepage

U.S. research shows race, age of jurors affects verdicts but Canada lacks data

Of note:

The race and age of jurors have a noticeable effect on trial verdicts, American studies indicate, but Canada has no data allowing similar research here.

Experts in Canada said it’s imperative to gather such demographic information to better understand systemic biases in the criminal justice system.

One 2012 study in Florida found all-white juries convicted Black defendants at a rate 16 percentage points higher than white defendants. The gap disappeared when the jury pool included at least one Black member, the research found.

“The impact of the racial composition of the jury pool — and seated jury — is a factor that merits much more attention and analysis in order to ensure the fairness of the criminal justice system,” the study concludes.

Another U.S. study, in 2014, showed older jurors were significantly more likely to convict than younger ones:

“If a male defendant, completely by chance, faces a jury pool that has an average age above 50, he is about 13 percentage points more likely to be convicted than if he faces a jury pool with an average age less than 50.”

“These findings imply that many cases are decided differently for reasons that are completely independent of the true nature of the evidence,” it says.

Shamena Anwar, co-author of the papers, said in an interview this week that juries can be highly unrepresentative of their communities as a result of the selection process.

The research, which shows that the age and race of jurors play a substantial role in verdicts and convictions, indicates demographics “definitely” matter, Anwar said.

As a result, collecting the data was important in understanding that role, said Anwar, an economist who studies criminal justice and racial disparities at the non-profit Rand Corporation.

“If you don’t collect it — you don’t have access to the problem,” Anwar said. “This work shows you that (jury demographics) can have a big impact on (trial) outcomes.”

However, a survey by The Canadian Press found provinces and territories collect almost no demographic data of jurors, despite concerns about systemic bias and government promises to address it.

The absence of information makes it all but impossible to discern whether juries reflect the makeup of the community, experts said.

Colton Fehr, an assistant criminology professor at Simon Fraser University, said bias can infiltrate a trial in many ways, but the lack of data makes it difficult to track and study.

“I’d rather know just how bad it is, so that we can try to fix it, as opposed to just not know where things are going wrong,” Fehr said.

Source: U.S. research shows race, age of jurors affects verdicts but Canada lacks data

Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.

Of note:

A well-respected Google researcher said she was fired by the company after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.

Timnit Gebru, who was a co-leader of Google’s Ethical A.I. team, said in a tweet on Wednesday evening that she was fired because of an email she had sent a day earlier to a group that included company employees.

In the email, reviewed by The New York Times, she expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence.

“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”

Her departure from Google highlights growing tension between Google’s outspoken work force and its buttoned-up senior management, while raising concerns over the company’s efforts to build fair and reliable technology. It may also have a chilling effect on both Black tech workers and researchers who have left academia in recent years for high-paying jobs in Silicon Valley.

“Her firing only indicates that scientists, activists and scholars who want to work in this field — and are Black women — are not welcome in Silicon Valley,” said Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab. “It is very disappointing.”

A Google spokesman declined to comment. In an email sent to Google employees, Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, called her departure “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible A.I. research as an org and as a company.”

After years of an anything-goes environment where employees engaged in freewheeling discussions in companywide meetings and online message boards, Google has started to crack down on workplace discourse. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.

On Wednesday, the National Labor Relations Board said Google had most likely violated labor law when it fired two employees who were involved in labor organizing. The federal agency said Google illegally surveilled the employees before firing them.

Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.

Like other technology companies, Google has also faced criticism for not doing enough to resolve the lack of women and racial minorities among its ranks.

The problems of racial inequality, especially the mistreatment of Black employees at technology companies, have plagued Silicon Valley for years. Coinbase, the most valuable cryptocurrency start-up, has experienced an exodus of Black employees in the last two years over what the workers said was racist and discriminatory treatment.

Researchers worry that the people who are building artificial intelligence systems may be building their own biases into the technology. Over the past several years, several public experiments have shown that the systems often interact differently with people of color — perhaps because they are underrepresented among the developers who create those systems.

Dr. Gebru, 37, was born and raised in Ethiopia. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to pinpoint and remove bias in artificial intelligence. She joined Google later that year, and helped build the Ethical A.I. team.

After hiring researchers like Dr. Gebru, Google has painted itself as a company dedicated to “ethical” A.I. But it is often reluctant to publicly acknowledge flaws in its own systems.

In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.

These systems learn the vagaries of language by analyzing enormous amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.

After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.

The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked.

In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.

“It was dehumanizing,” Dr. Gebru said. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”

Dr. Gebru’s departure from Google comes at a time when A.I. technology is playing a bigger role in nearly every facet of Google’s business. The company has hitched its future to artificial intelligence — whether with its voice-enabled digital assistant or its automated placement of advertising for marketers — as the breakthrough technology to make the next generation of services and devices smarter and more capable.

Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing. Earlier this year, Mr. Pichai called for greater regulation and responsible handling of artificial intelligence, arguing that society needs to balance potential harms with new opportunities.

Google has repeatedly committed to eliminating bias in its systems. The trouble, Dr. Gebru said, is that most of the people making the ultimate decisions are men. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.

Julien Cornebise, an honorary associate professor at University College London and a former researcher with DeepMind, a prominent A.I. lab owned by the same parent company as Google, was among many artificial intelligence researchers who said Dr. Gebru’s departure reflected a larger problem in the industry.

“This shows how some large tech companies only support ethics and fairness and other A.I.-for-social-good causes as long as their positive P.R. impact outweighs the extra scrutiny they bring,” he said. “Timnit is a brilliant researcher. We need more like her in our field.”

Source: https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html?action=click&module=Well&pgtype=Homepage&section=Business

UK: How anti-Semitic and how Islamophobic are local politicians?

Interesting and revealing analysis. Suspect similar patterns between urban and rural areas in most countries:

In October 2020, the UK’s human rights watchdog found Labour to be ‘responsible for unlawful acts of harassment and discrimination’. Last year, the Muslim Council of Britain called for an enquiry into Islamophobia in the Conservative Party. Other critics have accused the latter of failing to tackle Islamophobia. The 2017 British Social Attitudes Survey showed that 33% of those who identify with the Conservative Party would describe themselves as somewhat racist, compared with 18% of those who identify with the Labour Party.

We set out to gather some evidence on the extent of bias by local politicians against their constituents, using a correspondence experiment. We sent ten thousand emails to councillors with a quick question, and randomised whether they came from a stereotypically Christian name (Harry or Sarah White), Jewish name (Levi or Shoshana Goldstein), or Muslim name (Mohammad or Zara Hussain). We kept the email short in order to minimise the burden placed on our busy objects of study.

Response rates were six to seven percentage points lower to the Muslim and Jewish names – clear evidence of bias. We don’t however see more bias against Jewish names by Labour councillors. Neither do we see more bias against Muslim names by Conservative councillors. Such discrimination in the provision of services based on race or religion is against UK law. This form of discrimination by councillors may have substantive impacts for constituents. For example councillors set policy on access to the limited supply of social housing, policies which have been documented to disadvantage ethnic minorities.

[Figure note: Response rates are estimated after removing council fixed effects, and standardising residuals to a response rate equal to the sample average of 55 percent for whites. Bars represent 95 percent confidence intervals.]

In total we received 5,093 responses to 9,994 queries sent, for a 51% response rate. This is almost identical to the response rate found by a survey of real requests to councillors, in which 51% received a response within two weeks. Amongst those who responded to our queries, the median time to response was 12 hours, and the median length of responses was 228 words.

Compared to the male Christian name (Harry White), response rates to Jewish names are 5-6 percentage points lower, and 6-9 percentage points lower to Muslim names. Response rates are marginally higher to the female Christian name (Sarah) than to the male Christian name (Harry). Response rates are also higher to Zara Hussain than to Mohammad Hussain.
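For readers wondering how numbers like these are estimated, here is a hedged sketch of the kind of regression the figure note describes: response rates by name group with council fixed effects. The column names, file name and reference category are assumptions for illustration, not the authors' actual data or code.

```python
# Linear probability model: did the councillor respond, by sender name group,
# absorbing council fixed effects. All names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per email sent: responded (0/1), name_group (e.g. "christian_m",
# "christian_f", "jewish_m", "muslim_f", ...), council_id.
df = pd.read_csv("councillor_emails.csv")  # hypothetical file

model = smf.ols(
    "responded ~ C(name_group, Treatment(reference='christian_m')) + C(council_id)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# Coefficients on the name-group dummies are differences in response probability
# relative to the male Christian name (e.g. -0.06 means 6 percentage points lower).
print(model.params.filter(like="name_group"))
```

The 95 percent confidence intervals shown in the figures would come straight from the same fit (model.conf_int()).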

Name or Religion?

We randomise each councillor to receive one of two email scripts. The first email script makes a simple request in line with basic councillor responsibilities – ‘I have a question about local services and was wondering if you could tell me when your surgery is held?’. The second request explicitly indicates the religion of the emailer – ‘I’m  interested in organising a sponsored  walk in the local area to raise money for [Christian Aid/Islamic Relief/Global Jewish Relief]. Could you advise me if I need to get some kind of permit?’.

The two email scripts can be seen as different levels of intensity of the treatment. The response rate for white names to the first email was 61%, and 45% to the second email. Bias in response rates is similar across the two types of emails. This suggests that the discrimination occurs based on the name of the sender alone. Given the high volume and low cognitive effort of checking emails, councillors who fail to reply may be acting unconsciously when exposed to non-Christian/minority-group names. Alternatively, councillors may simply be consciously discriminating against minority constituents, irrespective of whether the sender’s religious identity is made explicit. Because the identity of the sender is present in the email address itself, councillors might choose to not even open the emails from names associated with minority groups.

What explains the bias?

Bias in response rates is largest against Jewish and Muslim names in the least densely populated rural locations, with small non-white populations (Figure 2). One reason for this could be that councillors in white areas are more likely to be white themselves. On average we see much lower bias by councillors with names estimated to be Jewish or Muslim (though these estimates are imprecise due to the small number of such councillors). There may also be other differences in the selection of candidates with different levels of unobserved racial and religious bias in rural and urban areas. Alternatively, councillors may respond to political incentives and be less likely to respond to minorities in locations where minority groups are a small proportion of the electorate.

We test responses to electoral incentives directly by showing the relationship between response rates and two measures of competition – the margin of victory at the last election and the number of days until the next election. We see no less bias in close elections. Finally, lower bias could be attributed to the degree of ‘contact’ councillors have with different minority groups. Councillors in more diverse urban locations may show less discrimination through an erosion of prejudice as described by the contact hypothesis, though we are unable to test this hypothesis directly.

[Figure 2 note: The top-left panel shows a binned scatterplot of response rates against population density, by whether the sender name was Christian or non-Christian. The top-right panel shows response rates against the non-white population share. The bottom-left shows the response rate against the winning margin of the elected councillor at the last election. The bottom-right shows the response rate by the number of days until the next election. Fitted lines are polynomial regressions of order three, with bars showing 95 percent confidence intervals. Population density and non-white population shares are calculated at the ward (sub-council) level from 2011 census data. On average there are three councillors in each ward. Population density is expressed as residents per hectare.]

Conclusion

We find evidence for bias from local politicians in response to requests for basic information from ‘Jewish’ or ‘Muslim’ constituents. Despite the media narrative of anti-Semitism in the Labour party and Islamophobia in the Conservative party, our results suggest that both parties are equally discriminatory to both minority groups. This discrimination seems to occur based on names alone, and is unchanged by the explicit identification of religious identity. These effects are largest in rural areas (with low population density) and with small non-white populations. Councillors in such areas may have fewer opportunities for positive interactions with minority groups.

This work demonstrates that even access to basic services is susceptible to forms of discrimination, and that minority group members may struggle to be heard through this process. Reducing councillor bias could be attempted through training designed to reduce implicit prejudice. The leader of the Labour Party has announced the party’s commitment to undergoing this type of training, though more research is needed into the effectiveness of such training. Future studies may benefit from further investigating the process through which politicians engage with their community, and identifying ways in which to reduce these biases.

Lee Crawfurd and Ukasha Ramli measure the responsiveness of elected local representatives to requests from putative constituents from minority religious groups. They find that response rates are six to seven percentage points lower to stereotypically Muslim or Jewish names, with Labour and Conservative councillors both showing equal bias towards the two. Their results suggest that the bias may be implicit and that it is lower in more dense and diverse locations.

Source: How anti-Semitic and how Islamophobic are local politicians?

Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

There are A.I. systems enabling self-driving cars to detect pedestrians — last year Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other that I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business-as-usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers, to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses, to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration, Dr. Bethel argued, would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, 250 children and adolescents who were interviewed about bullying were often willing to confide information in a robot that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html

Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Of note given complaints by conservatives:

Sen. Roger Wicker hit a familiar note when he announced on Thursday that the Commerce Committee was issuing subpoenas to force the testimony of Facebook Chief Executive Mark Zuckerberg and other tech leaders.

Tech platforms like Facebook, the Mississippi Republican said, “disproportionately suppress and censor conservative views online.”

When top tech bosses were summoned to Capitol Hill in July for a hearing on the industry’s immense power, Republican Congressman Jim Jordan made an even blunter accusation.

“I’ll just cut to the chase, Big Tech is out to get conservatives,” Jordan said. “That’s not a hunch. That’s not a suspicion. That’s a fact.”

But the facts to support that case have been hard to find. NPR called up half a dozen technology experts, including data scientists who have special access to Facebook’s internal metrics. The consensus: There is no statistical evidence to support the argument that Facebook does not give conservative views a fair shake.

Let’s step back for a moment.

When Republicans claim Facebook is “biased,” they often collapse two distinct complaints into one. First, that the social network deliberately scrubs right-leaning content from its site. There is no proof to back this up. Second, Republicans suggest that conservative news and perspectives are being throttled by Facebook, that the social network is preventing the content from finding a large audience. That claim is not only unproven, but publicly available data on Facebook shows the exact opposite to be true: conservative news regularly ranks among some of the most popular content on the site.

Now, there are some complex layers to this, but former Facebook employees and data experts say the conservative bias argument would be easier to talk about — and easier to debunk — if Facebook was more transparent.

The social network keeps secret some of the most basic data points, like what news stories are the most viewed on Facebook on any given day, leaving data scientists, journalists and the general public in the dark about what people are actually seeing on their News Feeds.

There are other sources of data, but they offer just a tiny window into the sprawling universe of nearly 3 billion users. Facebook is quick to point out that the public metrics available are of limited use, yet it does so without offering a real solution, which would be opening up some of its more comprehensive analytics for public scrutiny.

Until that happens, there’s little to counter rumors about what thrives and dies on Facebook and how the platform is shaping political discourse.

“It’s kind of a purgatory of their own making,” said Kathy Qian, a data scientist who co-founded Code for Democracy.

What the available data reveals about possible bias

Perhaps the most often-cited source of data on what is popular on Facebook is CrowdTangle, a tracking tool built by a startup that Facebook acquired in 2016.

New York Times journalist Kevin Roose has created a Twitter account where he posts the ten posts with the most engagement, based on CrowdTangle data. These lists are dominated by conservative commentators such as Ben Shapiro and Dan Bongino, along with Fox News. They resemble a “parallel media universe that left-of-center Facebook users may never encounter,” Roose writes.

Yet these lists are like looking at Facebook through a soda straw, say researchers such as MIT’s Jennifer Allen, who used to work at Facebook and now studies how people consume news on social media. CrowdTangle, Allen says, does not provide the whole story.

That’s because CrowdTangle only captures engagement — likes, shares, comments and other reactions — from public pages. But just because a post provokes lots of reactions does not mean it reaches many users. The data does not show how many people clicked on a link, or what the overall reach of the post was. And much of what people see on Facebook is from their friends, not public pages.

“You see these crazy numbers on CrowdTangle, but you don’t see how many people are engaging with this compared with the rest of the platform,” Allen said.
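To make that distinction concrete, here is a minimal sketch with invented numbers (Facebook does not publish post-level reach, so none of this reflects real data) of how a ranking by engagement can diverge from a ranking by reach:

```python
# Hypothetical illustration: a ranking by engagement (what CrowdTangle-style
# tools surface) can look very different from a ranking by reach (which
# Facebook does not publish). All numbers below are invented.

posts = [
    {"page": "Partisan commentator", "engagements": 250_000, "reach": 900_000},
    {"page": "National TV network", "engagements": 40_000, "reach": 6_500_000},
    {"page": "Local newspaper", "engagements": 12_000, "reach": 1_200_000},
]

by_engagement = sorted(posts, key=lambda p: p["engagements"], reverse=True)
by_reach = sorted(posts, key=lambda p: p["reach"], reverse=True)

print("Ranked by engagement:", [p["page"] for p in by_engagement])
print("Ranked by reach:     ", [p["page"] for p in by_reach])
# The post that tops the engagement list is not the one the most people actually saw.
```

The point is simply that a top-ten list sorted by reactions says nothing about how many people actually saw each post.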

Another point researchers raise: All engagement is not created equal.

Users could “hate-like” a post, or click like as a way of bookmarking, or leave another reaction expressing disgust, not support. Take, for example, the laughing-face emoji.

“It could mean, ‘I agree with this’ or ‘This is so hilariously untrue,’” said data scientist Qian. “It’s just hard to know what people actually mean by those reactions.”

It’s also hard to tell whether people or automated bots are generating all the likes, comments and shares. Former Facebook research scientist Solomon Messing conducted a study of Twitter in 2018 that found bots were likely responsible for 66% of link shares on the platform. The tactic is employed on Facebook, too.

“What Facebook calls ‘inauthentic behavior’ and other borderline scam-like activity are unfortunately common and you can buy fake engagement easily on a number of websites,” Messing said.

Brendan Nyhan, a political scientist at Dartmouth College, is also wary about drawing any big conclusions from CrowdTangle.

“You can’t judge anything about American movies by looking at the top ten box office hits of all time,” Nyhan said. “That’s not a great way of understanding what people are actually watching. There’s the same risk here.”

‘Concerned about being seen as on the side of liberals’

Experts agree that a much better measure would be a by-the-numbers rundown of what posts are reaching the most people. So why doesn’t Facebook reveal that data?

In a Twitter thread back in July, John Hegeman, the head of Facebook’s News Feed, offered one sample of such a list, saying it is “not as partisan” as lists compiled with CrowdTangle data suggest.

But when asked why Facebook doesn’t share that broader data with the public, Hegeman did not reply.

It could be, some experts say, that Facebook fears that data will be used as ammunition against the company at a time when Congress and the Trump administration are threatening to rein in the power of Big Tech.

“They are incredibly concerned about being seen as on the side of liberals. That is against the profit motive of their business,” Dartmouth’s Nyhan said of Facebook executives. “I don’t see any reason to see that they have a secret, hidden liberal agenda, but they are just so unwilling to be transparent.”

Facebook has been more forthcoming with some academic researchers looking at how social media affects elections and democracy. In April 2019, it announced a partnership that would give 60 scholars access to more data, including the background and political affiliation of people who are engaging with content.

One of those researchers is University of Pennsylvania data scientist Duncan Watts.

“Mostly it’s mainstream content,” he said of the most viewed and clicked on posts. “If anything, there is a bias in favor of conservative content.”

While Facebook posts from national television networks and major newspapers get the most clicks, partisan outlets like the Daily Wire and Breitbart routinely show up in top spots, too.

“That should be so marginal that it has no relevance at all,” Watts said of the right-wing content. “The fact that it is showing up at all is troubling.”

‘More false and misleading content on the right’

Accusations from Trump and other Republicans in Washington that Facebook is a biased referee of its content tend to flare up when the social network takes action against conservative-leaning posts that violate its policies.

Researchers say there is a reason why most of the high-profile examples of content warnings and removals target conservative content.

“That is a result of there just being more false and misleading content on the right,” said researcher Allen. “There are bad actors on the left, but the ecosystem on the right is just much more mature and popular.”

Facebook’s algorithms could also be helping more people see right-wing content that’s meant to evoke passionate reactions, she added.

Because of the sheer amount of envelope-pushing conservative content, some of it veering into the realm of conspiracy theories, the moderation from Facebook is also greater.

Or as Nyhan put it: “When reality is asymmetric, enforcement may be asymmetric. That doesn’t necessarily indicate a bias.”

The attacks on Facebook over perceived prejudice against conservatives have helped fuel the push in Congress and the White House to reform Section 230 of the Communications Decency Act of 1996, which allows platforms to avoid lawsuits over what users post and gives tech companies the freedom to police their sites as they see fit.

Joe Osborne, a Facebook spokesman, said in a statement that the social network’s content moderation policies are applied fairly across the board.

“While many Republicans think we should do one thing, many Democrats think we should do the exact opposite. We’ve faced criticism from Republicans for being biased against conservatives and Democrats for not taking more steps to restrict the exact same content. Our job is to create one consistent set of rules that applies equally to everyone.”

Osborne confirmed that Facebook is exploring ways to make more data available in the platform’s public tools, but he declined to elaborate.

Watts, the University of Pennsylvania data scientist who studies social media, said Facebook is sensitive to Republican criticism, but that no matter what decisions it makes, the attacks will continue.

“Facebook could end up responding in a way to accommodate the right, but the right will never be appeased,” Watts said. “So it’s this constant pressure of ‘you have to give us more, you have to give us more,'” he said. “And it creates a situation where there’s no way to win arguments based on evidence, because they can just say, ‘Well, I don’t trust you.'”

Source: Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Police Researcher: Officers Have Similar Biases Regardless Of Race

Interesting study. But watching the visible minority officers doing nothing during the Floyd killing …:

One common recommendation for reducing police brutality against people of color is to have police departments mirror a given area’s racial makeup.

President Obama’s Task Force on 21st Century Policing recommended that law enforcement “reflect the demographics of the community”; the Justice Department and Equal Employment Opportunity Commission said diversity on police forces can help build trust with communities.

Rashawn Ray, a fellow at the Brookings Institution and a sociology professor at the University of Maryland, studies race and policing. He says that diversity helps but that “officers, regardless of their race or gender, have similar implicit biases, particularly about Black people.” Ray says it’s not enough to have Black cops in a Black neighborhood if they don’t know the area.

Ray and his University of Maryland colleagues have amassed policing data through tests and interviews with hundreds of officers. He talked with Morning Edition’s Noel King about this research. Here are excerpts of that interview:

What kind of biases do police officers — implicit, explicit — say that they have?

The first big thing is that when officers take the implicit association test, they exhibit bias against Black people. They are more likely to make an association between Black people and weapons than between white people and weapons. We also know that officers speak less respectfully to Black people during traffic stops as well as in other sorts of settings. And they are particularly less likely to respect Black women in these encounters, even as they are more likely to use slightly more force on Black men relative to other people.
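As background on how the implicit association test Ray mentions turns reaction times into a bias measure, here is a deliberately simplified sketch of the standard D-score idea; the numbers are invented, and the real scoring procedure (Greenwald and colleagues’ improved algorithm) adds error penalties and trial filtering that are omitted here:

```python
# Simplified, hypothetical sketch of an IAT-style D score. Not the
# researchers' actual code: it only shows the core idea that faster
# responses in "stereotype-congruent" blocks than in "stereotype-
# incongruent" blocks produce a positive score.
from statistics import mean, stdev

# Invented reaction times in milliseconds.
congruent_rt = [620, 580, 650, 600, 630]     # Black faces + weapon words share a response key
incongruent_rt = [780, 820, 760, 800, 790]   # white faces + weapon words share a response key

# Core of the D score: mean latency difference scaled by the pooled standard deviation.
all_rt = congruent_rt + incongruent_rt
d_score = (mean(incongruent_rt) - mean(congruent_rt)) / stdev(all_rt)

print(f"D score: {d_score:.2f}")  # positive = faster when Black + weapon are paired
```

A positive score simply means responses were faster when Black faces and weapon words shared a response key, which is the pattern Ray describes.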

I was talking to a former police officer whose job is now to recruit more Black and brown police officers into [the Minneapolis] force. It sounds like what you’re saying is if she is recruiting Black and brown officers from Phoenix or from Houston and bringing them over to Minneapolis, that is not likely to solve the problem.

That’s exactly right. So the optics look good, but we can’t make the assumption that simply because a person is Black that they’re going to know about the neighborhood. Part of the fundamental problem when it comes to policing that I’ve noticed is that when police officers interact with a white person, there is a pause, a slight pause, a slight benefit of the doubt. The reason why that exists is because subconsciously, implicitly, when they interact with that person, they see their neighbor, a parent at their kids’ school, and when they interact with a Black person, they are less likely to have what we call in sociology those “social scripts” that allow them to view people in those multitude of ways.

And if we’re going to change this, one big recommendation I have: Police officers need housing assistance that mandates that they live in the metropolitan area where they are policing. Because community policing isn’t about getting out, playing basketball with a kid in uniform. Community policing oftentimes is what you do when you’re not on duty. The way that you’re investing in a neighborhood.

Source: Police Researcher: Officers Have Similar Biases Regardless Of Race