New report details how autocrats use the internet to harass and suppress activists in Canada

Thousands of miles away from her homeland in Syria, she organized protests and ran social media pages in Canada in support of opposition forces fighting President Bashar al-Assad’s regime.

Then anonymous complaints started rolling in and prompted Facebook to shut down her group page. Trolls left “nasty and dirty” comments on social media and created fake profiles with her photos, she said, while a Gmail administrator alerted her that “a state sponsor” was trying to hack her account.

“The Assad regime was functioning through this network of thugs that they call Shabeeha. Inside of Syria, those thugs would be physically beating up people and terrorizing them,” said the 42-year-old Toronto woman.

“Then they were also very much online, so they terrorized people online as well.”

As diaspora communities increasingly rely on social media and other online platforms for their advocacy work, authoritarian states are trying to exert their will over overseas dissidents through what’s dubbed “digital transnational repression,” said a new study released Tuesday.

“States that engage in transnational repression use a variety of methods to silence, persecute, control, coerce, or otherwise intimidate their nationals abroad into refraining from transnational political or social activities that may undermine or threaten the state and power within its border,” said the report by the Citizen Lab at University of Toronto’s Munk School of Global Affairs.

“Thus, nationals of these states who reside abroad are still limited in how they can exercise ‘their rights, liberties, and voice’ and remain subject to state authoritarianism even after leaving their country of origin.”

As a country of immigrants — particularly refugees seeking protection from persecution — Canada is vulnerable to this kind of digital attack, amid advancing surveillance technology and rising authoritarianism around the globe, said the report’s authors.

“There is this misassumption that once people arrive in Canada from authoritarian countries, they are safe. We need to redefine what safety is,” said Noura Al-Jizawi, one of the report’s co-authors.

“This is not only affecting the day-to-day life of these people, but it’s also affecting the civic rights, their freedom of speech or their freedom of assembly of an entire community that’s beyond the individuals who are being targeted.”

A team of researchers interviewed 18 individuals, all of whom resided in Canada and had moved or fled to Canada from 11 different places, including Syria, Saudi Arabia, Yemen, Tibet, Hong Kong, China, Rwanda, Iran, Afghanistan, East Turkestan, and Balochistan.

The participants shared their experiences of being intimidated for the advocacy work they conducted in Canada, as well as the impacts of such threats — allegedly from these foreign states and their supporters — on their well-being and the diaspora communities they come from.

“Their main concern besides their privacy and the privacy of their family is the friends and colleagues back home. If the government targets their devices digitally, they would reveal the underground and hidden network of activists,” said Al-Jizawi.

“Many of them mention that they try to avoid the communities from their country of origin because they can’t feel safe connecting with these people.”

Many of the participants in the study said they had reached out to authorities such as the Canadian Security Intelligence Service for assistance but were disappointed.

“The responses were generally like, we can’t help you or this isn’t a crime and there’s nothing actionable here. In one case, they suggested to the person to hire a private detective,” noted Siena Anstis, another co-author of the study.

“Law enforcement is probably not that well equipped or trained to understand the broader context within which this is happening. The way that they handle these cases is quite dismissive.”

The anonymous Syrian-Canadian political activist who participated in the study said victims of transnational repression will stop reporting to Canadian officials if nothing comes out of their complaints.

“Every day we’re becoming more and more digital, which makes us more vulnerable to digital attacks and digital privacy issues. I hope our government will start thinking about how to protect us from this emerging threat that we never had to worry about before,” said the woman, who came to Canada from Aleppo as a 7-year-old and has since stopped her political activities in support of a free Syria.

“If someone like me who is extremely outspoken and very difficult to stifle felt a little bit overwhelmed by all of it, you can imagine other people who recently came from Syria and still have a lot of ties there. I know a lot of people that will not open their mouth publicly because they’re scared what will happen.”

The report urges Ottawa to create a dedicated government agency to support victims and to conduct research to better understand the scale of these activities and their impact on the exercise of human rights in Canada. It also recommends establishing federal policies governing the sale of surveillance technologies to authoritarian states and guiding how social media platforms can better protect victims from digital attacks.

“It might seem at this stage it’s only happening to some communities in Canada and it doesn’t matter,” said Anstis. “But collectively it’s our human rights that are being eroded. It’s our capacity to engage in, affirm and protect human rights and democracy. That space for dialogue is really reducing.”

Source: New report details how autocrats use the internet to harass and suppress activists in Canada

U.S. accounts drive Canadian convoy protest chatter

Of note. While recent concerns have understandably focused on Chinese and Russian government interference, we likely need to pay more attention to the threats from next door, along with the pernicious threats via Facebook and Twitter:

Known U.S.-based sources of misleading information have driven a majority of Facebook and Twitter posts about the Canadian COVID-19 vaccine mandate protest, per German Marshall Fund data shared exclusively with Axios.

Driving the news: Ottawa’s “Freedom Convoy” has ballooned into a disruptive political protest against Prime Minister Justin Trudeau and inspired support among right-wing and anti-vaccine mandate groups in the U.S.

Why it matters: Trending stories about the protest appear to be driven by a small number of voices as top-performing accounts with huge followings are using the protest to drive engagement and inflame emotions with another hot-button issue.

  • “They can flood the zone — making something news and distorting what appears to be popular,” said Karen Kornbluh, senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund. 

What they’re saying: “The three pages receiving the most interactions on [convoy protest] posts — Ben Shapiro, Newsmax and Breitbart — are American,” Kornbluh said. Other pages with the most action on convoy-related posts include Fox News, Dan Bongino and Franklin Graham.

  • “These major online voices with their bullhorns determine what the algorithm promotes because the algorithm senses it is engaging,” she said.
  • Using a platform’s design to orchestrate anti-government action mirrors how the “Stop the Steal” groups worked around the Jan. 6 Capitol riot, with a few users quickly racking up massive followings, Kornbluh said.

By the numbers: Per German Marshall Fund data, from Jan. 22, when the protests began, to Feb. 12, there were 14,667 posts on Facebook pages about the Canadian protests, getting 19.3 million interactions (including likes, comments and shares).

  • For context: The Beijing Olympics had 20.9 million interactions in that same time period.
  • On Twitter, from Feb. 3 to Feb. 13, tweets about the protests have been favorited at least 4.1 million times and retweeted at least 1.1 million times.
  • Pro-convoy videos on YouTube have racked up 47 million views, with Fox News’ YouTube page getting 29.6 million views on related videos.

The big picture: New research published in the Atlantic finds that most public activity on Facebook comes from a “tiny, hyperactive group of abusive users.”

  • Since user engagement remains the most important factor in Facebook’s weighting of content recommendations, the researchers write, the most abusive users will wield the most influence over the online conversation.
  • “Overall, we observed 52 million users active on these U.S. pages and public groups, less than a quarter of Facebook’s claimed user base in the country,” the researchers write. “Among this publicly active minority of users, the top 1 percent of accounts were responsible for 35 percent of all observed interactions; the top 3 percent were responsible for 52 percent. Many users, it seems, rarely, if ever, interact with public groups or pages.”
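
Those top-percentile figures are simple to reproduce on any per-account activity data. Below is a minimal sketch of the computation, assuming only an array of per-account interaction counts; the simulated heavy-tailed data is an invented placeholder, not the researchers’ dataset.

```python
# Minimal sketch: what share of all interactions comes from the most active
# accounts? The data below is simulated and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed per-account interaction counts, the typical shape of
# social media activity (a few hyperactive accounts, a long quiet tail).
interactions = rng.pareto(a=1.2, size=100_000) + 1

def top_share(counts, pct):
    """Share of total interactions produced by the top `pct`% of accounts."""
    counts = np.sort(counts)[::-1]            # most active accounts first
    k = max(1, int(len(counts) * pct / 100))  # size of the top group
    return counts[:k].sum() / counts.sum()

for pct in (1, 3):
    print(f"top {pct}% of accounts: {top_share(interactions, pct):.0%} of interactions")
```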

Meanwhile, foreign meddling is further confusing the narrative around the trucker protest.

  • NBC News reported that overseas content mills in Vietnam, Bangladesh, Romania and other countries are powering Facebook groups promoting American versions of the trucker convoys. Facebook took many of the pages down.
  • A report from Grid News found a Bangladeshi digital marketing firm was behind two of the largest Facebook groups related to the Canadian Freedom Convoy before being removed from the platform.
  • Grid News reported earlier that Facebook groups supporting the Canadian convoy were being administered by a hacked Facebook account belonging to a Missouri woman.

Source: U.S. accounts drive Canadian convoy protest chatter

Immigration patterns are reflected in Facebook data on popular foods and drinks

Not surprising but nevertheless interesting:

Researchers have developed a novel strategy for using Facebook data to measure cultural similarity between countries, revealing associations between immigration patterns and people’s food and drink interests. Carolina Vieira of the Max Planck Institute for Demographic Research in Rostock, Germany, and colleagues present these findings in the open-access journal PLOS ONE on February 9, 2022.

Migration may play a key role in shaping cultural similarities between countries. However, its influence is difficult to study, partly due to the challenge of quantifying culture. Typically, researchers have relied on surveys to compare different countries’ cultures, but surveys are associated with several difficulties, such as their cost, the possibility of bias in their construction, and the difficulty of applying them to a large number of countries.

To complement survey data, Vieira and colleagues have now developed a new analytical method based on earlier evidence that food and drink preferences may serve as a proxy for cultural similarities between countries. The new method employs data on the top 50 food and drink preferences for any given country, as captured by the Facebook Advertising Platform.

To demonstrate the new method, the researchers applied it to 16 countries, finding that food and drink interests generally reflect immigration patterns. In most countries, including the U.S., preferences for foreign food and drink align with top foods and drinks in the countries from which most immigrants came. Countries with fewer immigrants, such as Indonesia, Japan, Russia, and Turkey, stand apart from the others, showing more idiosyncrasy in their preferences for foreign foods and drinks.

The findings align well with earlier survey data, and they highlight asymmetry between countries; for instance, the top 50 foods and drinks from Mexico are more popular in the U.S. than the top 50 U.S. foods and drinks are in Mexico, reflecting a greater degree of immigration from Mexico to the U.S. than vice versa.
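
The paper’s exact similarity metric isn’t reproduced here, but the core idea (score how popular one country’s top interests are among another country’s users, which is naturally asymmetric) can be sketched in a few lines. The interest lists, popularity shares and averaging rule below are invented placeholders standing in for the audience estimates the study pulls from the Facebook Advertising Platform.

```python
# Illustrative sketch of an asymmetric cultural-similarity measure based on
# top food-and-drink interests. All numbers are invented placeholders.

# popularity[country][interest] ~ share of that country's Facebook users
# expressing the interest (the real study uses ad-platform audience estimates).
popularity = {
    "US":     {"pizza": 0.50, "coffee": 0.60, "burgers": 0.45,
               "tacos": 0.40, "tamales": 0.15, "pozole": 0.10},
    "Mexico": {"tacos": 0.70, "tamales": 0.45, "pozole": 0.35,
               "pizza": 0.20, "coffee": 0.25, "burgers": 0.10},
}

# Stand-ins for each country's top-50 interest list (trimmed to three items).
top_interests = {
    "US": ["pizza", "coffee", "burgers"],
    "Mexico": ["tacos", "tamales", "pozole"],
}

def foreign_popularity(origin, destination):
    """Mean popularity, among `destination` users, of `origin`'s top interests.
    Asymmetric by construction, so Mexico -> US and US -> Mexico can differ."""
    shares = [popularity[destination].get(i, 0.0) for i in top_interests[origin]]
    return sum(shares) / len(shares)

print(f"Mexican top interests in the US: {foreign_popularity('Mexico', 'US'):.2f}")
print(f"US top interests in Mexico:      {foreign_popularity('US', 'Mexico'):.2f}")
```

With these placeholder numbers the Mexico-to-US score (0.22) exceeds the US-to-Mexico score (0.18), mirroring the direction of the asymmetry the authors report.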

Overall, the researchers say, this study suggests that immigrants indeed help shape the culture of their destination country. Future research could refine the new method outlined in this study or repurpose it to examine and compare other interests beyond food and drink.

The authors add: “We analyze data from Facebook users about their food and drink preferences to measure the cultural similarity between 16 countries. When compared with official migration data, we observe that countries with more immigrants show a higher cultural similarity between the origin and destination.”

Source: Immigration patterns are reflected in Facebook data on popular foods and drinks

Canada is sleepwalking into bed with Big Tech, as politicos float between firms and public office

Sort of inevitable, unfortunately:

Canadians have been served a familiar dish of election promises aimed at taking on the American web giants. But our governments have demonstrated a knack for aggressive procrastination on this file.

A new initiative is providing a glimpse into Canada’s revolving door with Big Tech, and as the clock ticks on the Liberal government’s hundred-day promise to enact legislation, Canadians have 22 reasons to start asking tough questions.

The Regulatory Capture Lab — a collaboration between FRIENDS (formerly Friends of Canadian Broadcasting), the Centre for Digital Rights and McMaster University’s Master of Public Policy in Digital Society Program — is shedding light on a carousel of unconstrained career moves between public policy teams at Big Tech firms and federal public offices.

Canadians should review this new resource and see for themselves the creeping links between the most powerful companies on earth and the institutions responsible for reining them in. 

And they’d be wise to look soon. According to the Liberal government, a wave of tech-oriented policy is in formation, from updating the Broadcasting Act to forcing tech firms to pay for journalism that appears on their platforms.

But our work raises vital questions about all these proposals: are Canadians’ interests being served through these pieces of legislation? Has a slow creep of influence over public office put Big Tech in the driver’s seat? These promises of regulation have been around for years, so, why is it taking so long to get on with it?

Cosy relations between Big Tech and those in public office in Canada have bubbled to the surface before, most notably through the work of Kevin Chan, the man for Meta (Facebook) in Canada. In 2020, the Star exposed Chan’s efforts to recruit senior analysts from within Canadian Heritage, the department leading the efforts to regulate social media giants, to work at Facebook.

It doesn’t stop there. A 2021 story from The Logic revealed the scope of Chan’s enthusiasm in advancing the interests of his employer. Under Chan’s skilful direction, Facebook has managed to get its tendrils of influence into everything — government offices, universities, even media outlets. And in so many instances, Chan has found willing participants across the aisle who offer up glowing statements about strategic partnerships with Facebook.

Facebook isn’t alone in the revolving door. For some politicos, moving between Big Tech and public office appears to be the norm, in both directions. Big Tech public policy teams are filled with people who have worked in Liberal and Conservative offices, the PMO, Heritage and Finance ministries, the Office of the Privacy Commissioner, and more.

Conversely, some current senior public office holders are former Big Tech employees. Amazon, Google, Netflix, Huawei, Microsoft and Palantir are all connected through a revolving door with government. And this doesn’t even begin to cover Big Tech’s soft-power activities in Canada, from academic partnerships, deals with journalism outlets (including this one), and even shared initiatives with government to save democracy. The connections are vast and deep.

So, why has tech regulation taken so long? Armed with the knowledge that so many of Canada’s brightest public policy minds are moving between the offices of Big Tech and the halls of power in Ottawa, Canadians should be forgiven for jumping to conclusions. Or, maybe it’s just that simple? 

That these employment moves are taking place in both directions is hardly surprising. But the fact that so little attention has been paid to this phenomenon is deeply troubling. And how can this power be held to account when our journalism outlets are left with little choice but to partner with Big Tech?

The Regulatory Capture Lab has pried open the window on this situation, but others must jump in. It’s time for Canadians to start asking tough questions. FRIENDS is ready to get the answers.

Source: https://www.thestar.com/opinion/contributors/2022/01/17/canada-is-sleepwalking-into-bed-with-big-tech-as-politicos-float-between-firms-and-public-office.html

Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

More on Facebook and Zuckerberg’s failure to act against mis- and dis-information:

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform since the pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others—including tackling “manufactured virality”—were early concepts that employees were seeking approval from Zuckerberg to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited—content like the altered Pelosi video—the more likely the link in the post was to be clicked by users. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Since the content with more engagement was placed more highly in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.
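
That feedback loop is easy to see in a toy simulation: rank posts by accumulated engagement, give the top-ranked post the larger audience, and let the more outrage-inducing post convert viewers into engagement at a higher rate. The sketch below is entirely hypothetical (made-up engagement rates, audience sizes and ranking rule), not Facebook’s actual News Feed code.

```python
# Toy simulation of an engagement-weighted feed and its outrage feedback loop.
# All rates, audiences and the ranking rule are invented for illustration.
import random

random.seed(1)

posts = [
    {"name": "neutral news post", "engage_rate": 0.02, "engagements": 0},
    {"name": "outrage-bait post", "engage_rate": 0.10, "engagements": 0},
]

for _ in range(20):
    # Rank by accumulated engagement: the more engaged-with post gets
    # the bigger audience in the next round.
    posts.sort(key=lambda p: p["engagements"], reverse=True)
    audience = [1000, 400]  # top feed slot reaches more users than the second
    for post, viewers in zip(posts, audience):
        post["engagements"] += sum(
            random.random() < post["engage_rate"] for _ in range(viewers)
        )

for p in posts:
    print(f"{p['name']}: {p['engagements']} total engagements")
```

After a handful of rounds the higher-outrage post occupies the top slot almost permanently: more engagement buys more reach, which buys more engagement.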

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying that they had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Source: Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

Facebook’s language gaps weaken screening of hate, terrorism

Any number of good articles have covered the “Facebook papers” and the company’s unethical and dangerous business practices:

As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were reviewed by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.

But when it comes to Arabic content moderation, the company said, “We still have more work to do. … We conduct research to better understand this complexity and identify how we can improve.”

In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.

Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.

In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.

In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.

The Moroccan colloquial Arabic, for instance, includes French and Berber words, and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Quran. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.

Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.

Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.

“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.

In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”

“We know our systems are not perfect,” it added.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.

Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.

Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”

Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.

“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.

Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.

“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.

In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.

When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.

When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.

In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.

Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.

Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77% of the time, one document said.

“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90% of the time, a report said.

Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.

“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”

Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.

“The stereotype that Arabic is one entity is a major problem,” said Enam al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.

Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.

Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.

During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.

In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

“The repetition of false positives creates a huge drain of resources,” it said.

In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.

Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task of improving Arabic systems,” said the report.

But the company also lamented that “there is not one clear mitigation strategy.”

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”

Source: Facebook’s language gaps weaken screening of hate, terrorism

How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Good in-depth article:

Facebook’s civic-integrity team was always different from all the other teams that the social media company employed to combat misinformation and hate speech. For starters, every team member subscribed to an informal oath, vowing to “serve the people’s interest first, not Facebook’s.”

The “civic oath,” according to five former employees, charged team members to understand Facebook’s impact on the world, keep people safe and defuse angry polarization. Samidh Chakrabarti, the team’s leader, regularly referred to this oath—which has not been previously reported—as a set of guiding principles behind the team’s work, according to the sources.

Chakrabarti’s team was effective in fixing some of the problems endemic to the platform, former employees and Facebook itself have said.

But, just a month after the 2020 U.S. election, Facebook dissolved the civic-integrity team, and Chakrabarti took a leave of absence. Facebook said employees were assigned to other teams to help share the group’s experience across the company. But for many of the Facebook employees who had worked on the team, including a veteran product manager from Iowa named Frances Haugen, the message was clear: Facebook no longer wanted to concentrate power in a team whose priority was to put people ahead of profits.

Five weeks later, supporters of Donald Trump stormed the U.S. Capitol—after some of them organized on Facebook and used the platform to spread the lie that the election had been stolen. The civic-integrity team’s dissolution made it harder for the platform to respond effectively to Jan. 6, one former team member, who left Facebook this year, told TIME. “A lot of people left the company. The teams that did remain had significantly less power to implement change, and that loss of focus was a pretty big deal,” said the person. “Facebook did take its eye off the ball in dissolving the team, in terms of being able to actually respond to what happened on Jan. 6.” The former employee, along with several others TIME interviewed, spoke on the condition of anonymity, for fear that being named would ruin their career.

Enter Frances Haugen

Haugen revealed her identity on Oct. 3 as the whistle-blower behind the most significant leak of internal research in the company’s 17-year history. In a bombshell testimony to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security two days later, Haugen said the civic-integrity team’s dissolution was the final event in a long series that convinced her of the need to blow the whistle. “I think the moment which I realized we needed to get help from the outside—that the only way these problems would be solved is by solving them together, not solving them alone—was when civic-integrity was dissolved following the 2020 election,” she said. “It really felt like a betrayal of the promises Facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community.”

In a statement provided to TIME, Facebook’s vice president for integrity Guy Rosen denied the civic-integrity team had been disbanded. “We did not disband Civic Integrity,” Rosen said. “We integrated it into a larger Central Integrity team so that the incredible work pioneered for elections could be applied even further, for example, across health-related issues. Their work continues to this day.” (Facebook did not make Rosen available for an interview for this story.)

The defining values of the civic-integrity team, as described in a 2016 presentation given by Samidh Chakrabarti and Winter Mason at the Impacts of Civic Technology Conference. Civic-integrity team members were expected to adhere to this list of values, which was referred to internally as the “civic oath”.

Haugen left the company in May. Before she departed, she trawled Facebook’s internal employee forum for documents posted by integrity researchers about their work. Much of the research was not related to her job, but was accessible to all Facebook employees. What she found surprised her.

Some of the documents detailed an internal study that found that Instagram, its photo-sharing app, made 32% of teen girls feel worse about their bodies. Others showed how a change to Facebook’s algorithm in 2018, touted as a way to increase “meaningful social interactions” on the platform, actually incentivized divisive posts and misinformation. They also revealed that Facebook spends almost all of its budget for keeping the platform safe only on English-language content. In September, the Wall Street Journal published a damning series of articles based on some of the documents that Haugen had leaked to the paper. Haugen also gave copies of the documents to Congress and the Securities and Exchange Commission (SEC).

The documents, Haugen testified Oct. 5, “prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages.” She told Senators that the failings revealed by the documents were all linked by one deep, underlying truth about how the company operates. “This is not simply a matter of certain social media users being angry or unstable, or about one side being radicalized against the other; it is about Facebook choosing to grow at all costs, becoming an almost trillion-dollar company by buying its profits with our safety,” she said.

Facebook’s focus on increasing user engagement, which ultimately drives ad revenue and staves off competition, she argued, may keep users coming back to the site day after day—but also systematically boosts content that is polarizing, misinformative and angry, and which can send users down dark rabbit holes of political extremism or, in the case of teen girls, body dysmorphia and eating disorders. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people,” Haugen said. (In 2020, the company reported $29 billion in net income—up 58% from a year earlier. This year, it briefly surpassed $1 trillion in total market value, though Haugen’s leaks have since knocked the company down to around $940 billion.)

Asked if executives adhered to the same set of values as the civic-integrity team, including putting the public’s interests before Facebook’s, a company spokesperson told TIME it was “safe to say everyone at Facebook is committed to understanding our impact, keeping people safe and reducing polarization.”

In the same week that an unrelated systems outage took Facebook’s services offline for hours and revealed just how much the world relies on the company’s suite of products—including WhatsApp and Instagram—the revelations sparked a new round of national soul-searching. It led some to question how one company can have such a profound impact on both democracy and the mental health of hundreds of millions of people. Haugen’s documents are the basis for at least eight new SEC investigations into the company for potentially misleading its investors. And they have prompted senior lawmakers from both parties to call for stringent new regulations.

Haugen urged Congress to pass laws that would make Facebook and other social media platforms legally liable for decisions about how they choose to rank content in users’ feeds, and force companies to make their internal data available to independent researchers. She also urged lawmakers to find ways to loosen CEO Mark Zuckerberg’s iron grip on Facebook; he controls more than half of voting shares on its board, meaning he can veto any proposals for change from within. “I came forward at great personal risk because I believe we still have time to act,” Haugen told lawmakers. “But we must act now.”

Potentially even more worryingly for Facebook, other experts it hired to keep the platform safe, now alienated by the company’s actions, are growing increasingly critical of their former employer. They experienced first hand Facebook’s unwillingness to change, and they know where the bodies are buried. Now, on the outside, some of them are still honoring their pledge to put the public’s interests ahead of Facebook’s.

Inside Facebook’s civic-integrity team

Chakrabarti, the head of the civic-integrity team, was hired by Facebook in 2015 from Google, where he had worked on improving how the search engine communicated information about lawmakers and elections to its users. A polymath described by one person who worked under him as a “Renaissance man,” Chakrabarti holds master’s degrees from MIT, Oxford and Cambridge, in artificial intelligence engineering, modern history and public policy, respectively, according to his LinkedIn profile.

Although he was not in charge of Facebook’s company-wide “integrity” efforts (led by Rosen), Chakrabarti, who did not respond to requests to comment for this article, was widely seen by employees as the spiritual leader of the push to make sure the platform had a positive influence on democracy and user safety, according to multiple former employees. “He was a very inspirational figure to us, and he really embodied those values [enshrined in the civic oath] and took them quite seriously,” a former member of the team told TIME. “The team prioritized societal good over Facebook good. It was a team that really cared about the ways to address societal problems first and foremost. It was not a team that was dedicated to contributing to Facebook’s bottom line.”

Chakrabarti began work on the team by questioning how Facebook could encourage people to be more engaged with their elected representatives on the platform, several of his former team members said. An early move was to suggest tweaks to Facebook’s “more pages you may like” feature that the team hoped might make users feel more like they could have an impact on politics.

After the chaos of the 2016 election, which prompted Zuckerberg himself to admit that Facebook didn’t do enough to stop misinformation, the team evolved. It moved into Facebook’s wider “integrity” product group, which employs thousands of researchers and engineers to focus on fixing Facebook’s problems of misinformation, hate speech, foreign interference and harassment. It changed its name from “civic engagement” to “civic integrity,” and began tackling the platform’s most difficult problems head-on.

Shortly before the midterm elections in 2018, Chakrabarti gave a talk at a conference in which he said he had “never been told to sacrifice people’s safety in order to chase a profit.” His team was hard at work making sure the midterm elections did not suffer the same failures as in 2016, in an effort that was generally seen as a success, both inside the company and externally. “To see the way that the company has mobilized to make this happen has made me feel very good about what we’re doing here,” Chakrabarti told reporters at the time. But behind closed doors, integrity employees on Chakrabarti’s team and others were increasingly getting into disagreements with Facebook leadership, former employees said. It was the beginning of the process that would eventually motivate Haugen to blow the whistle.

In 2019, the year Haugen joined the company, researchers on the civic-integrity team proposed ending the use of an approved list of thousands of political accounts that were exempt from Facebook’s fact-checking program, according to tech news site The Information. Their research had found that the exemptions worsened the site’s misinformation problem because users were more likely to believe false information if it were shared by a politician. But Facebook executives rejected the proposal.

The pattern repeated time and time again, as proposals to tweak the platform to down-rank misinformation or abuse were rejected or watered down by executives concerned with engagement or worried that changes might disproportionately impact one political party more than another, according to multiple reports in the press and several former employees. One cynical joke among members of the civic-integrity team was that they spent 10% of their time coding and the other 90% arguing that the code they wrote should be allowed to run, one former employee told TIME. “You write code that does exactly what it’s supposed to do, and then you had to argue with execs who didn’t want to think about integrity, had no training in it and were mad that you were hurting their product, so they shut you down,” the person said.

Sometimes the civic-integrity team would also come into conflict with Facebook’s policy teams, which have the dual role of setting the platform’s rules and lobbying politicians on Facebook’s behalf. “I found many times that there were tensions [in meetings] because the civic-integrity team was like, ‘We’re operating off this oath; this is our mission and our goal,’” says Katie Harbath, a long-serving public-policy director at the company’s Washington, D.C., office who quit in March 2021. “And then you get into decision-making meetings, and all of a sudden things are going another way, because the rest of the company and leadership are not basing their decisions off those principles.”

Harbath admitted not always seeing eye to eye with Chakrabarti on matters of company policy, but praised his character. “Samidh is a man of integrity, to use the word,” she told TIME. “I personally saw times when he was like, ‘How can I run an integrity team if I’m not upholding integrity as a person?’”

Years before the 2020 election, research by integrity teams had shown Facebook’s group recommendations feature was radicalizing users by driving them toward polarizing political groups, according to the Journal. The company declined integrity teams’ requests to turn off the feature, BuzzFeed News reported. Then, just weeks before the vote, Facebook executives changed their minds and agreed to freeze political group recommendations. The company also tweaked its News Feed to make it less likely that users would see content that algorithms flagged as potential misinformation, part of temporary emergency “break glass” measures designed by integrity teams in the run-up to the vote. “Facebook changed those safety defaults in the run-up to the election because they knew they were dangerous,” Haugen testified to Senators on Tuesday. But they didn’t keep those safety measures in place long, she added. “Because they wanted that growth back, they wanted the acceleration on the platform back after the election, they returned to their original defaults. And the fact that they had to break the glass on Jan. 6, and turn them back on, I think that’s deeply problematic.”

In a statement, Facebook spokesperson Tom Reynolds rejected the idea that the company’s actions contributed to the events of Jan. 6. “In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platform signals and information from our ongoing, regular engagement with law enforcement,” he said. “When those signals changed, so did the measures. It is wrong to claim that these steps were the reason for Jan. 6—the measures we did need remained in place through February, and some like not recommending new, civic or political groups remain in place to this day. These were all part of a much longer and larger strategy to protect the election on our platform—and we are proud of that work.”

Soon after the civic-integrity team was dissolved in December 2020, Chakrabarti took a leave of absence from Facebook. In August, he announced he was leaving for good. Other employees who had spent years working on platform-safety issues had begun leaving, too. In her testimony, Haugen said that several of her colleagues from civic integrity left Facebook in the same six-week period as her, after losing faith in the company’s pledge to spread the team’s influence across the organization. “Six months after the reorganization, we had clearly lost faith that those changes were coming,” she said.

After Haugen’s Senate testimony, Facebook’s director of policy communications Lena Pietsch suggested that Haugen’s criticisms were invalid because she “worked at the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives—and testified more than six times to not working on the subject matter in question.” On Twitter, Chakrabarti said he was not supportive of company leaks but spoke out in support of the points Haugen raised at the hearing. “I was there for over 6 years, had numerous direct reports, and led many decision meetings with C-level execs, and I find the perspectives shared on the need for algorithmic regulation, research transparency, and independent oversight to be entirely valid for debate,” he wrote. “The public deserves better.”

Can Facebook’s latest moves protect the company?

Two months after disbanding the civic-integrity team, Facebook announced a sharp directional shift: it would begin testing ways to reduce the amount of political content in users’ News Feeds altogether. In August, the company said early testing of such a change among a small percentage of U.S. users was successful, and that it would expand the tests to several other countries. Facebook declined to provide TIME with further information about how its proposed down-ranking system for political content would work.

Many former employees who worked on integrity issues at the company are skeptical of the idea. “You’re saying that you’re going to define for people what political content is, and what it isn’t,” James Barnes, a former product manager on the civic-integrity team, said in an interview. “I cannot even begin to imagine all of the downstream consequences that nobody understands from doing that.”

Another former civic-integrity team member said that designing algorithms capable of detecting political content in all the languages and countries of the world—and keeping those algorithms updated to accurately map the shifting tides of political debate—is a task that even Facebook does not have the resources to achieve fairly and equitably. Attempting to do so would almost certainly result in some content deemed political being demoted while other posts thrived, the former employee cautioned. It could also incentivize certain groups to try to game those algorithms by talking about politics in nonpolitical language, creating an arms race for engagement that would privilege the actors with enough resources to work out how to win, the same person added.

When Zuckerberg was hauled in front of lawmakers to testify after the Cambridge Analytica data scandal in 2018, Senators were roundly mocked on social media for asking basic questions such as how Facebook makes money if its services are free to users. (“Senator, we run ads” was Zuckerberg’s reply.) In 2021, that dynamic has changed. “The questions asked are a lot more informed,” says Sophie Zhang, a former Facebook employee who was fired in 2020 after she criticized Facebook for turning a blind eye to platform manipulation by political actors around the world.

“The sentiment is increasingly bipartisan” in Congress, Zhang adds. In the past, Facebook hearings have been used by lawmakers to grandstand on polarizing subjects like whether social media platforms are censoring conservatives, but this week they were united in their condemnation of the company. “Facebook has to stop covering up what it knows, and must change its practices, but there has to be government accountability because Facebook can no longer be trusted,” Senator Richard Blumenthal of Connecticut, chair of the Subcommittee on Consumer Protection, told TIME ahead of the hearing. His Republican counterpart Marsha Blackburn agreed, saying during the hearing that regulation was coming “sooner rather than later” and that lawmakers were “close to bipartisan agreement.”

As Facebook reels from the revelations of the past few days, it already appears to be reassessing product decisions. It has begun conducting reputational reviews of new products to assess whether the company could be criticized or whether its features could negatively affect children, the Journal reported Wednesday. Last week it paused its Instagram Kids product amid the furor.

Whatever the future direction of Facebook, it is clear that discontent has been brewing internally. Haugen’s document leak and testimony have already sparked calls for stricter regulation and improved the quality of public debate about social media’s influence. In a post addressing Facebook staff on Wednesday, Zuckerberg put the onus on lawmakers to update Internet regulations, particularly relating to “elections, harmful content, privacy and competition.” But the real drivers of change may be current and former employees, who have a better understanding of the inner workings of the company than anyone—and the most potential to damage the business.

Source: How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

European Anti-Semitism Reappears with Virulent Versions for the Covid Era

Of note:

As the coronavirus spread through Europe last year, cartoons and posts began going up on French social media that might as well have come straight from the 14th century. In one series, Agnes Buzyn, who is Jewish and was France’s health minister until February 2020, was depicted with grotesquely distorted features dropping poison into wells.

This trope of Jews poisoning wells to kill Christians has made the rounds in most European epidemics since the Middle Ages, but was particularly rife during the Black Death, when it led to pogroms and massacres of Jews throughout the continent. The vile meme is just one example of a shocking, if sadly unsurprising, surge in anti-Semitism that correlates with the pandemic. That’s the disturbing conclusion of a new report by the Institute for Strategic Dialogue, a think tank, for the European Commission.

The authors mined French and German posts on Twitter, Facebook and Telegram between January 2020 — that is, just before Covid-19 first surged in Europe — and March 2021. They looked for content that’s anti-Semitic according to a definition by the International Holocaust Remembrance Alliance. They found not just petri dishes of hatred but entire cesspools.

In both countries, anti-Semitic tropes and memes soared during the pandemic (see chart). In France, where Twitter was the preferred medium for this bigotry — at least until the social network tweaked its policies — the number of anti-Semitic posts increased seven-fold; in Germany, where Telegram appears to be the platform of choice, it went up 13-fold. The likes, shares and retweets counted in the millions, the views in the billions.

[Chart: “As Covid Spreads, So Does Anti-Semitism.” In Germany and France, posts with anti-Jewish content have been increasing during the pandemic.]

While the delivery vehicles may seem whizzbang modern, the narratives are depressingly hoary. The well-poisoning theme is ancient. But it’s now morphing into storylines that try to recast SARS-CoV-2 as a “zionist bioweapon” — by fabricating Jewish links to laboratories in China, for instance.

A German channel on Telegram with more than 34,000 followers doctored videos as alleged “proof” that the virus was bio-engineered to hurt only gentiles. “Corona is not for the Jews!” the channel’s owner wrote. “Only for the goyim! That’s what they call us!” On another channel, users claimed that “Virology was invented by the Eternal Jew” — a reference to a Nazi propaganda film.

A contradictory meme is somehow circulating in parallel. It says that SARS-CoV-2 either doesn’t exist at all or exists but is harmless, and is instead a fiction invented by Jews and the gentiles they have corrupted — such as Bill Gates or the Clintons — in their quest to control entire populations and establish a “New World Order.”

This so-called NWO genre of anti-Semitism also taps into an ancient narrative, one that was most notoriously exploited by the “Protocols of the Elders of Zion.” This entirely fictional text, produced over a century ago in Russia and translated into many languages, pretended to document how Jews were making secret plans to rule the world by manipulating the media, finance and government.

In some of anti-Semitism’s current strains, vaccination is the alleged tool chosen by the conspiracy — Albert Bourla, the Jewish chief executive of Pfizer, features prominently in these libels. Some posters claim that the vaccines are meant to kill or sterilize gentiles. To get around obvious logical hurdles such as Israel’s pioneering role in mass inoculation, other users fantasize that the Israeli shots are only placebos.

On and on it goes, in never-ending loops of paranoia and delusion. As it always has in Europe, and elsewhere. The researchers had to restrict themselves to just a small sample of countries and social networks. But from that, we can extrapolate how much of this garbage is out there.

The study’s authors felt compelled, as one does, to offer thoughts on regulatory or legal tweaks to mitigate the problem. And the social networks, for their part, should certainly think harder about how to drain their cesspools of bigotry while still hosting legitimate free speech. But the sad truth is that even as human technology keeps bounding ahead, human nature and culture lag woefully behind, often literally in the Middle Ages. If only there were a vaccine against stupidity and hatred.

Source: European Anti-Semitism Reappears with Virulent Versions for the Covid Era

Facebook Apologizes After Its AI Labels Black Men As ‘Primates’

Ouch!

Facebook issued an apology after its artificial intelligence software asked users watching a video featuring Black men whether they wanted to see more “videos about primates.” The social media giant has since disabled the topic recommendation feature and says it’s investigating the cause of the error, but the video had been online for more than a year.

A Facebook spokesperson told The New York Times, which first reported on the story Friday, that the automated prompt was an “unacceptable error” and apologized to anyone who came across the offensive suggestion.

The video, uploaded by the Daily Mail on June 27, 2020, documented an encounter between a white man and a group of Black men who were celebrating a birthday. The clip captures the white man allegedly calling 911 to report that he is “being harassed by a bunch of Black men,” before cutting to an unrelated video that showed police officers arresting a Black tenant at his own home.

Former Facebook employee Darci Groves tweeted about the error on Thursday after a friend clued her in on the misidentification. She shared a screenshot of the video that captured Facebook’s “Keep seeing videos about Primates?” message.

“This ‘keep seeing’ prompt is unacceptable, @Facebook,” she wrote. “And despite the video being more than a year old, a friend got this prompt yesterday. Friends at [Facebook], please escalate. This is egregious.”

This is not Facebook’s first time in the spotlight for major technical errors. Last year, Chinese President Xi Jinping’s name appeared as “Mr. S***hole” on its platform when translated from Burmese to English. The translation hiccup seemed to be Facebook-specific, and didn’t occur on Google, Reuters had reported.

However, in 2015, Google’s image recognition software classified photos of Black people as “gorillas.” Google apologized and removed the labels “gorilla,” “chimp,” “chimpanzee” and “monkey,” words that remained censored more than two years later, Wired reported.

Facebook could not be reached for comment.

Source: Facebook Apologizes After Its AI Labels Black Men As ‘Primates’

Facebook Bans Holocaust Denial, Reversing Earlier Policy

Long overdue. Similar action needs to be taken with respect to other forms of racism and hate on Facebook and other platforms:

Facebook is banning all content that “denies or distorts the Holocaust,” in a policy reversal that comes after increased pressure from critics.

Just two years ago, founder and chief executive Mark Zuckerberg said in an interview that even though he found such posts “deeply offensive,” he did not believe Facebook should take them down. Zuckerberg has said on numerous occasions that Facebook shouldn’t be forced to be the arbiter of truth on its platform, but rather should allow a wide range of speech.

In a Facebook post on Monday, Zuckerberg said his thinking has “evolved” because of data showing an increase in anti-Semitic violence. The company said the move was also a response to an “alarming” level of ignorance about the Holocaust, especially among young people. It pointed to a recent survey that found almost a quarter of people in the U.S. aged 18-39 said they believed the Holocaust was a myth or had been exaggerated, or were not sure about the genocide.

“I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust,” Zuckerberg wrote. “Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward, but with the current state of the world, I believe this is the right balance.”

Facebook has been under increased pressure to act more aggressively on hate speech, misinformation and other harmful content. The company has recently strengthened its rules to prohibit anti-Semitic stereotypes, and banned accounts related to militia groups and QAnon, a baseless conspiracy theory movement.

This summer, a group of Holocaust survivors, organized by the Conference on Jewish Material Claims Against Germany, launched a social media campaign urging Zuckerberg to remove Holocaust denial from Facebook.

On Monday, the group tweeted: “Survivors spoke! Facebook listened.”

In addition to removing Holocaust-denying posts, Facebook will begin directing users who search for terms associated with the Holocaust or its denial to “credible information” off the platform later this year, Monika Bickert, head of content policy, said in a blog post. She said it would take “some time” to train Facebook’s enforcement systems to enact the change.

Critics say the big question is how effectively Facebook will police its rules.

“We are seeing a trend toward Facebook listening to their critics and ultimately doing the right thing. That’s a trend we need to encourage,” Jonathan Greenblatt, CEO of the Anti-Defamation League, which has been pushing Facebook to crack down on Holocaust deniers for years, told NPR.

“Ultimately, Facebook will be judged not on the promises they make, but on how they keep those promises,” he said.

Source: Facebook Bans Holocaust Denial, Reversing Earlier Policy