Facebook’s language gaps weaken screening of hate, terrorism

Any number of good articles have come out on the “Facebook Papers” and the company’s unethical and dangerous business practices:

As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were reviewed by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.

But when it comes to Arabic content moderation, the company said, “We still have more work to do. … We conduct research to better understand this complexity and identify how we can improve.”

In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.

Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.

In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.

In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.

Moroccan colloquial Arabic, for instance, includes French and Berber words and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Quran. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.

Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.

Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.
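
The mechanics of these evasions are simple: a filter that looks for exact strings is blind to text whose characters have been spaced out, interleaved with symbols, or swapped for look-alike letters, as with the undotted Arabic script. Below is a minimal, hypothetical Python sketch of that failure mode and of the kind of normalization that narrows it; the blocked term and sample posts are invented, and none of this reflects Facebook's actual systems.

```python
# Hypothetical sketch: why exact-match keyword filters miss obfuscated text.
# The blocked term and posts are invented placeholders.
import re

BLOCKED_TERMS = ["martyrbrigade"]  # stand-in for a banned-organization keyword

def naive_flag(post: str) -> bool:
    """Flag a post only if a blocked term appears verbatim."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

def normalized_flag(post: str) -> bool:
    """Strip spaces, punctuation and symbols before matching, so spaced-out
    or symbol-separated spellings of the same term are still caught."""
    text = re.sub(r"[\W_]+", "", post.lower())
    return any(term in text for term in BLOCKED_TERMS)

posts = [
    "report quoting a MartyrBrigade statement",
    "report quoting a M a r t y r B r i g a d e statement",   # spaces between letters
    "report quoting a Martyr.Brigade statement",              # symbols between letters
]
for p in posts:
    print(naive_flag(p), normalized_flag(p), "|", p)
# naive_flag catches only the first post; normalized_flag catches all three.
# Undotted Arabic works the same way: the dotless letters are different
# characters, so an exact match on the dotted spelling simply never fires.
```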

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show. The result, Facebook employees write in the documents, is a widespread perception of censorship.

“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.

In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”

“We know our systems are not perfect,” it added.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.

Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.

Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”

Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.

“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.

Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling of neighborhoods and hospitals, citing graphic content.

“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.

In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.

When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.
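
The dependence on that word bank is mechanical: automated filters are typically keyed to a per-language list of slurs and code words, so a language whose list is empty generates no automated flags at all, regardless of how abusive the content is. Here is a minimal, hypothetical sketch of that gap; the structure and terms are invented and are not Facebook's real system.

```python
# Hypothetical per-language term-bank filter. Term lists are invented
# placeholders; the point is that an empty bank yields zero automated flags.

TERM_BANKS = {
    "en": {"exampleslur1", "exampleslur2"},  # placeholder English entries
    "ar": {"exampleslur3"},                  # placeholder Arabic entry
    "prs": set(),                            # Dari: no word bank compiled
    "ps": set(),                             # Pashto: no word bank compiled
}

def auto_flag(post: str, lang: str) -> bool:
    """Return True if any term from the language's bank appears in the post."""
    bank = TERM_BANKS.get(lang, set())
    return any(word in bank for word in post.lower().split())

print(auto_flag("post containing exampleslur1", "en"))   # True
print(auto_flag("equally abusive post in Dari", "prs"))  # False: nothing to match against
```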

When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.

In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.

Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.

Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77% of the time, one document said.

“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90% of the time, a report said.

Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.

“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”

Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.

“The stereotype that Arabic is one entity is a major problem,” said Enam al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.

Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.

Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.

During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.

In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

“The repetition of false positives creates a huge drain of resources,” it said.

In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvements to the company’s broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.

Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.

But the company also lamented that “there is not one clear mitigation strategy.”

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”

Source: Facebook’s language gaps weaken screening of hate, terrorism

How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Good in-depth article:

Facebook’s civic-integrity team was always different from all the other teams that the social media company employed to combat misinformation and hate speech. For starters, every team member subscribed to an informal oath, vowing to “serve the people’s interest first, not Facebook’s.”

The “civic oath,” according to five former employees, charged team members to understand Facebook’s impact on the world, keep people safe and defuse angry polarization. Samidh Chakrabarti, the team’s leader, regularly referred to this oath—which has not been previously reported—as a set of guiding principles behind the team’s work, according to the sources.

Chakrabarti’s team was effective in fixing some of the problems endemic to the platform, former employees and Facebook itself have said.

But, just a month after the 2020 U.S. election, Facebook dissolved the civic-integrity team, and Chakrabarti took a leave of absence. Facebook said employees were assigned to other teams to help share the group’s experience across the company. But for many of the Facebook employees who had worked on the team, including a veteran product manager from Iowa named Frances Haugen, the message was clear: Facebook no longer wanted to concentrate power in a team whose priority was to put people ahead of profits.

Five weeks later, supporters of Donald Trump stormed the U.S. Capitol—after some of them organized on Facebook and used the platform to spread the lie that the election had been stolen. The civic-integrity team’s dissolution made it harder for the platform to respond effectively to Jan. 6, one former team member, who left Facebook this year, told TIME. “A lot of people left the company. The teams that did remain had significantly less power to implement change, and that loss of focus was a pretty big deal,” said the person. “Facebook did take its eye off the ball in dissolving the team, in terms of being able to actually respond to what happened on Jan. 6.” The former employee, along with several others TIME interviewed, spoke on the condition of anonymity, for fear that being named would ruin their career.

Enter Frances Haugen

Haugen revealed her identity on Oct. 3 as the whistle-blower behind the most significant leak of internal research in the company’s 17-year history. In a bombshell testimony to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security two days later, Haugen said the civic-integrity team’s dissolution was the final event in a long series that convinced her of the need to blow the whistle. “I think the moment which I realized we needed to get help from the outside—that the only way these problems would be solved is by solving them together, not solving them alone—was when civic-integrity was dissolved following the 2020 election,” she said. “It really felt like a betrayal of the promises Facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community.”

In a statement provided to TIME, Facebook’s vice president for integrity Guy Rosen denied the civic-integrity team had been disbanded. “We did not disband Civic Integrity,” Rosen said. “We integrated it into a larger Central Integrity team so that the incredible work pioneered for elections could be applied even further, for example, across health-related issues. Their work continues to this day.” (Facebook did not make Rosen available for an interview for this story.)

[Image: Impacts of Civic Technology Conference 2016. The defining values of the civic-integrity team, as described in a 2016 presentation given by Samidh Chakrabarti and Winter Mason. Civic-integrity team members were expected to adhere to this list of values, which was referred to internally as the “civic oath.”]

Haugen left the company in May. Before she departed, she trawled Facebook’s internal employee forum for documents posted by integrity researchers about their work. Much of the research was not related to her job, but was accessible to all Facebook employees. What she found surprised her.

Some of the documents detailed an internal study that found that Instagram, its photo-sharing app, made 32% of teen girls feel worse about their bodies. Others showed how a change to Facebook’s algorithm in 2018, touted as a way to increase “meaningful social interactions” on the platform, actually incentivized divisive posts and misinformation. They also revealed that Facebook spends almost all of its budget for keeping the platform safe only on English-language content. In September, the Wall Street Journal published a damning series of articles based on some of the documents that Haugen had leaked to the paper. Haugen also gave copies of the documents to Congress and the Securities and Exchange Commission (SEC).

The documents, Haugen testified Oct. 5, “prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages.” She told Senators that the failings revealed by the documents were all linked by one deep, underlying truth about how the company operates. “This is not simply a matter of certain social media users being angry or unstable, or about one side being radicalized against the other; it is about Facebook choosing to grow at all costs, becoming an almost trillion-dollar company by buying its profits with our safety,” she said.

Facebook’s focus on increasing user engagement, which ultimately drives ad revenue and staves off competition, she argued, may keep users coming back to the site day after day—but also systematically boosts content that is polarizing, misinformative and angry, and which can send users down dark rabbit holes of political extremism or, in the case of teen girls, body dysmorphia and eating disorders. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people,” Haugen said. (In 2020, the company reported $29 billion in net income—up 58% from a year earlier. This year, it briefly surpassed $1 trillion in total market value, though Haugen’s leaks have since knocked the company down to around $940 billion.)

Asked if executives adhered to the same set of values as the civic-integrity team, including putting the public’s interests before Facebook’s, a company spokesperson told TIME it was “safe to say everyone at Facebook is committed to understanding our impact, keeping people safe and reducing polarization.”

In the same week that an unrelated systems outage took Facebook’s services offline for hours and revealed just how much the world relies on the company’s suite of products—including WhatsApp and Instagram—the revelations sparked a new round of national soul-searching. It led some to question how one company can have such a profound impact on both democracy and the mental health of hundreds of millions of people. Haugen’s documents are the basis for at least eight new SEC investigations into the company for potentially misleading its investors. And they have prompted senior lawmakers from both parties to call for stringent new regulations.

Haugen urged Congress to pass laws that would make Facebook and other social media platforms legally liable for decisions about how they choose to rank content in users’ feeds, and force companies to make their internal data available to independent researchers. She also urged lawmakers to find ways to loosen CEO Mark Zuckerberg’s iron grip on Facebook; he controls more than half of voting shares on its board, meaning he can veto any proposals for change from within. “I came forward at great personal risk because I believe we still have time to act,” Haugen told lawmakers. “But we must act now.”

Potentially even more worryingly for Facebook, other experts it hired to keep the platform safe, now alienated by the company’s actions, are growing increasingly critical of their former employer. They experienced first hand Facebook’s unwillingness to change, and they know where the bodies are buried. Now, on the outside, some of them are still honoring their pledge to put the public’s interests ahead of Facebook’s.

Inside Facebook’s civic-integrity team

Chakrabarti, the head of the civic-integrity team, was hired by Facebook in 2015 from Google, where he had worked on improving how the search engine communicated information about lawmakers and elections to its users. A polymath described by one person who worked under him as a “Renaissance man,” Chakrabarti holds master’s degrees from MIT, Oxford and Cambridge, in artificial intelligence engineering, modern history and public policy, respectively, according to his LinkedIn profile.

Although he was not in charge of Facebook’s company-wide “integrity” efforts (led by Rosen), Chakrabarti, who did not respond to requests to comment for this article, was widely seen by employees as the spiritual leader of the push to make sure the platform had a positive influence on democracy and user safety, according to multiple former employees. “He was a very inspirational figure to us, and he really embodied those values [enshrined in the civic oath] and took them quite seriously,” a former member of the team told TIME. “The team prioritized societal good over Facebook good. It was a team that really cared about the ways to address societal problems first and foremost. It was not a team that was dedicated to contributing to Facebook’s bottom line.”

Chakrabarti began work on the team by questioning how Facebook could encourage people to be more engaged with their elected representatives on the platform, several of his former team members said. An early move was to suggest tweaks to Facebook’s “more pages you may like” feature that the team hoped might make users feel more like they could have an impact on politics.

After the chaos of the 2016 election, which prompted Zuckerberg himself to admit that Facebook didn’t do enough to stop misinformation, the team evolved. It moved into Facebook’s wider “integrity” product group, which employs thousands of researchers and engineers to focus on fixing Facebook’s problems of misinformation, hate speech, foreign interference and harassment. It changed its name from “civic engagement” to “civic integrity,” and began tackling the platform’s most difficult problems head-on.

Shortly before the midterm elections in 2018, Chakrabarti gave a talk at a conference in which he said he had “never been told to sacrifice people’s safety in order to chase a profit.” His team was hard at work making sure the midterm elections did not suffer the same failures as in 2016, in an effort that was generally seen as a success, both inside the company and externally. “To see the way that the company has mobilized to make this happen has made me feel very good about what we’re doing here,” Chakrabarti told reporters at the time. But behind closed doors, integrity employees on Chakrabarti’s team and others were increasingly getting into disagreements with Facebook leadership, former employees said. It was the beginning of the process that would eventually motivate Haugen to blow the whistle.

In 2019, the year Haugen joined the company, researchers on the civic-integrity team proposed ending the use of an approved list of thousands of political accounts that were exempt from Facebook’s fact-checking program, according to tech news site The Information. Their research had found that the exemptions worsened the site’s misinformation problem because users were more likely to believe false information if it were shared by a politician. But Facebook executives rejected the proposal.

The pattern repeated time and time again, as proposals to tweak the platform to down-rank misinformation or abuse were rejected or watered down by executives concerned with engagement or worried that changes might disproportionately impact one political party more than another, according to multiple reports in the press and several former employees. One cynical joke among members of the civic-integrity team was that they spent 10% of their time coding and the other 90% arguing that the code they wrote should be allowed to run, one former employee told TIME. “You write code that does exactly what it’s supposed to do, and then you had to argue with execs who didn’t want to think about integrity, had no training in it and were mad that you were hurting their product, so they shut you down,” the person said.

Sometimes the civic-integrity team would also come into conflict with Facebook’s policy teams, which share the dual role of setting the rules of the platform while also lobbying politicians on Facebook’s behalf. “I found many times that there were tensions [in meetings] because the civic-integrity team was like, ‘We’re operating off this oath; this is our mission and our goal,’” says Katie Harbath, a long-serving public-policy director at the company’s Washington, D.C., office who quit in March 2021. “And then you get into decisionmaking meetings, and all of a sudden things are going another way, because the rest of the company and leadership are not basing their decisions off those principles.”

Harbath admitted not always seeing eye to eye with Chakrabarti on matters of company policy, but praised his character. “Samidh is a man of integrity, to use the word,” she told TIME. “I personally saw times when he was like, ‘How can I run an integrity team if I’m not upholding integrity as a person?’”

Years before the 2020 election, research by integrity teams had shown Facebook’s group recommendations feature was radicalizing users by driving them toward polarizing political groups, according to the Journal. The company declined integrity teams’ requests to turn off the feature, BuzzFeed News reported. Then, just weeks before the vote, Facebook executives changed their minds and agreed to freeze political group recommendations. The company also tweaked its News Feed to make it less likely that users would see content that algorithms flagged as potential misinformation, part of temporary emergency “break glass” measures designed by integrity teams in the run-up to the vote. “Facebook changed those safety defaults in the run-up to the election because they knew they were dangerous,” Haugen testified to Senators on Tuesday. But they didn’t keep those safety measures in place long, she added. “Because they wanted that growth back, they wanted the acceleration on the platform back after the election, they returned to their original defaults. And the fact that they had to break the glass on Jan. 6, and turn them back on, I think that’s deeply problematic.”

In a statement, Facebook spokesperson Tom Reynolds rejected the idea that the company’s actions contributed to the events of Jan. 6. “In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platforms signals and information from our ongoing, regular engagement with law enforcement,” he said. “When those signals changed, so did the measures. It is wrong to claim that these steps were the reason for Jan. 6—the measures we did need remained in place through February, and some like not recommending new, civic or political groups remain in place to this day. These were all part of a much longer and larger strategy to protect the election on our platform—and we are proud of that work.”

Soon after the civic-integrity team was dissolved in December 2020, Chakrabarti took a leave of absence from Facebook. In August, he announced he was leaving for good. Other employees who had spent years working on platform-safety issues had begun leaving, too. In her testimony, Haugen said that several of her colleagues from civic integrity left Facebook in the same six-week period as her, after losing faith in the company’s pledge to spread their influence around the company. “Six months after the reorganization, we had clearly lost faith that those changes were coming,” she said.

After Haugen’s Senate testimony, Facebook’s director of policy communications Lena Pietsch suggested that Haugen’s criticisms were invalid because she “worked at the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives—and testified more than six times to not working on the subject matter in question.” On Twitter, Chakrabarti said he was not supportive of company leaks but spoke out in support of the points Haugen raised at the hearing. “I was there for over 6 years, had numerous direct reports, and led many decision meetings with C-level execs, and I find the perspectives shared on the need for algorithmic regulation, research transparency, and independent oversight to be entirely valid for debate,” he wrote. “The public deserves better.”

Can Facebook’s latest moves protect the company?

Two months after disbanding the civic-integrity team, Facebook announced a sharp directional shift: it would begin testing ways to reduce the amount of political content in users’ News Feeds altogether. In August, the company said early testing of such a change among a small percentage of U.S. users was successful, and that it would expand the tests to several other countries. Facebook declined to provide TIME with further information about how its proposed down-ranking system for political content would work.

Many former employees who worked on integrity issues at the company are skeptical of the idea. “You’re saying that you’re going to define for people what political content is, and what it isn’t,” James Barnes, a former product manager on the civic-integrity team, said in an interview. “I cannot even begin to imagine all of the downstream consequences that nobody understands from doing that.”

Another former civic-integrity team member said that the amount of work required to design algorithms that could detect any political content in all the languages and countries in the world—and to keep those algorithms updated to accurately map the shifting tides of political debate—would be a task that even Facebook does not have the resources to achieve fairly and equitably. Attempting to do so would almost certainly result in some content deemed political being demoted while other posts thrived, the former employee cautioned. It could also incentivize certain groups to try to game those algorithms by talking about politics in nonpolitical language, creating an arms race for engagement that would privilege the actors with enough resources to work out how to win, the same person added.

When Zuckerberg was hauled in front of lawmakers to testify after the Cambridge Analytica data scandal in 2018, Senators were roundly mocked on social media for asking basic questions such as how Facebook makes money if its services are free to users. (“Senator, we run ads” was Zuckerberg’s reply.) In 2021, that dynamic has changed. “The questions asked are a lot more informed,” says Sophie Zhang, a former Facebook employee who was fired in 2020 after she criticized Facebook for turning a blind eye to platform manipulation by political actors around the world.

“The sentiment is increasingly bipartisan” in Congress, Zhang adds. In the past, Facebook hearings have been used by lawmakers to grandstand on polarizing subjects like whether social media platforms are censoring conservatives, but this week they were united in their condemnation of the company. “Facebook has to stop covering up what it knows, and must change its practices, but there has to be government accountability because Facebook can no longer be trusted,” Senator Richard Blumenthal of Connecticut, chair of the Subcommittee on Consumer Protection, told TIME ahead of the hearing. His Republican counterpart Marsha Blackburn agreed, saying during the hearing that regulation was coming “sooner rather than later” and that lawmakers were “close to bipartisan agreement.”

As Facebook reels from the revelations of the past few days, it already appears to be reassessing product decisions. It has begun conducting reputational reviews of new products to assess whether the company could be criticized or its features could negatively affect children, the Journal reported Wednesday. It last week paused its Instagram Kids product amid the furor.

Whatever the future direction of Facebook, it is clear that discontent has been brewing internally. Haugen’s document leak and testimony have already sparked calls for stricter regulation and improved the quality of public debate about social media’s influence. In a post addressing Facebook staff on Wednesday, Zuckerberg put the onus on lawmakers to update Internet regulations, particularly relating to “elections, harmful content, privacy and competition.” But the real drivers of change may be current and former employees, who have a better understanding of the inner workings of the company than anyone—and the most potential to damage the business.

Source: How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Why Silicon Valley’s Optimization Mindset Sets Us Up for Failure

Of interest. Depends on how one views optimization and what one considers to be the objectives. Engineers and programmers tend to have a relatively narrow focus and thus blind spots to social and public goods:

In 2013 a Silicon Valley software engineer decided that food is an inconvenience—a pain point in a busy life. Buying food, preparing it, and cleaning up afterwards struck him as an inefficient way to feed himself. And so was born the idea of Soylent, Rob Rhinehart’s meal replacement powder, described on its website as an International Complete Nutrition Platform. Soylent is the logical result of an engineer’s approach to the “problem” of feeding oneself with food: there must be a more optimal solution.

It’s not hard to sense the trouble with this crushingly instrumental approach to nutrition.

Soylent may optimize meeting one’s daily nutritional needs with minimal cost and time investment. But for most people, food is not just a delivery mechanism for one’s nutritional requirements. It brings gustatory pleasure. It provides for social connection. It sustains and transmits cultural identity. A world in which Soylent spells the end of food also spells the degradation of these values.

Maybe you don’t care about Soylent; it’s just another product in the marketplace that no one is required to buy. If tech workers want to economize on time spent grocery shopping or a busy person faces the choice between grabbing an unhealthy meal at a fast-food joint or bringing along some Soylent, why should anyone complain? In fact, it’s a welcome alternative for some people.

But the story of Soylent is powerful because it reveals the optimization mindset of the technologist. And problems arise when this mindset begins to dominate—when the technologies begin to scale and become universal and unavoidable.

That mindset is inculcated early in the training of technologists. When developing an algorithm, computer science courses often define the goal as providing an optimal solution to a computationally-specified problem. And when you look at the world through this mindset, it’s not just computational inefficiencies that annoy. Eventually, it becomes a defining orientation to life as well. As one of our colleagues at Stanford tells students, everything in life is an optimization problem.

The desire to optimize can favor some values over others. And the choice of which values to favor, and which to sacrifice, are made by the optimizers who then impose those values on the rest of us when their creations reach great scale. For example, consider that Facebook’s decisions about how content gets moderated or who loses their accounts are the rules of expression for more than three billion people on the platform; Google’s choices about what web pages to index determine what information most users of the internet get in response to searches. The small and anomalous group of human beings at these companies create, tweak, and optimize technology based on their notions of how it ought to be better. Their vision and their values about technology are remaking our individual lives and societies. As a result, the problems with the optimization mindset have become our problems, too.

A focus on optimization can lead technologists to believe that increasing efficiency is inherently a good thing. There’s something tempting about this view. Given a choice between doing something efficiently or inefficiently, who would choose the slower, more wasteful, more energy-intensive path?

Yet a moment’s reflection reveals other ways of approaching problems. We put speed bumps onto roads near schools to protect children; judges encourage juries to take ample time to deliberate before rendering a verdict; the media holds off on calling an election until all the polls have closed. It’s also obvious that the efficient pursuit of a malicious goal—such as deliberately harming or misinforming people—makes the world worse, not better. The quest to make something more efficient is not an inherently good thing. Everything depends on the goal.

Technologists with a single-minded focus on efficiency frequently take for granted that the goals they pursue are worth pursuing. But, in the context of Big Tech, that would have us believe that boosting screen time, increasing click-through rates on ads, promoting purchases of an algorithmically-recommended item, and profit-maximizing are the ultimate outcomes we care about.

The problem here is that goals such as connecting people, increasing human flourishing, or promoting freedom, equality, and democracy are not goals that are computationally tractable. Technologists are always on the lookout for quantifiable metrics. Measurable inputs to a model are their lifeblood, and the need to quantify produces a bias toward measuring things that are easy to quantify. But simple metrics can take us further away from the important goals we really care about, which may require multiple or more complicated metrics or, more fundamentally, may not lend themselves to straightforward quantification. This results in technologists frequently substituting what is measurable for what is meaningful. Or as the old saying goes, “Not everything that counts can be counted, and not everything that can be counted counts.”

There is no shortage of examples of the “bad proxy” phenomenon, but perhaps one of the most illustrative is an episode in Facebook’s history. Facebook Vice President Andrew Bosworth revealed in an internal memo in 2016 how the company pursued growth in the number of people on the platform as the one and only relevant metric for their larger mission of giving people the power to build community and bring the world closer together. “The natural state of the world,” he wrote, “is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.” To accomplish their mission of connecting people, Facebook simplified the task to growing their ever-more connected userbase. As Bosworth noted: “The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.” But what happens when “connecting people” comes with potential violations of user privacy, greater circulation of hate speech and misinformation, or political polarization that tears at the fabric of our democracy?

The optimization mindset is also prone to the “success disaster.” The issue here is not that the technologist has failed in accomplishing something, but rather that their success in solving for one objective has wide-ranging consequences for other things we care about. The realm of worthy ends is vast, and when it comes to world-changing technologies that have implications for fairness, privacy, national security, justice, human autonomy, freedom of expression, and democracy, it’s fair to assume that values conflict in many circumstances. Solutions aren’t so clear cut and inevitably involve trade-offs among competing values. This is where the optimization mindset can fail us.

Think for example of the amazing technological advances in agriculture. Factory farming has dramatically increased agricultural productivity. Where it once took 55 days to raise a chicken before slaughter, it now takes 35, and an estimated 50 billion are killed every year–more than five million killed every hour of every day of the year. But the success of factory farming has generated terrible consequences for the environment (increases in methane gases that contribute to climate change), our individual health (greater meat consumption is correlated with heart disease), and public health (greater likelihood of transmission of viruses from animals to humans that could cause a pandemic).

Success disasters abound in Big Tech as well. Facebook, YouTube, and Twitter have succeeded in connecting billions of people in a social network, but now that they have created a digital civic square, they have to grapple with the conflict between freedom of expression and the reality of misinformation and hate speech.

The bottom line is that technology is an explicit amplifier. It requires us to be explicit about the values we want to promote and how we trade off between them, because those values are encoded in some way into the objective functions that are optimized. And it is an amplifier because it can often allow for the execution of a policy far more efficiently than humans. For example, with current technology we could produce vehicles that automatically issue speeding tickets whenever the driver exceeded the speed limit—and could issue a warrant for the driver’s arrest after they had enough speeding tickets. Such a vehicle would provide extreme efficiency in upholding speed limits. However, this amplification of safety would infringe on the competing values of autonomy (to make our own choices about safe driving speeds and the urgency of a given trip) or privacy (not to have our driving constantly surveilled).
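
To make that concrete, here is the authors' hypothetical speed-enforcing vehicle written out as a short Python sketch of our own (not anything from the book): the tolerance and the warrant threshold are exactly the places where values like autonomy and discretion either get encoded or get optimized away.

```python
# Illustrative sketch of the hypothetical auto-ticketing vehicle. All numbers
# are invented; changing them is a value judgment, not an engineering detail.
from dataclasses import dataclass

TOLERANCE_MPH = 0       # zero tolerance = maximal enforcement of the stated goal
WARRANT_THRESHOLD = 3   # tickets before an arrest warrant is issued

@dataclass
class Driver:
    name: str
    tickets: int = 0
    warrant_issued: bool = False

def record_speed(driver: Driver, speed_mph: float, limit_mph: float) -> None:
    """Issue a ticket whenever speed exceeds limit + tolerance; issue a
    warrant once the ticket count reaches the threshold."""
    if speed_mph > limit_mph + TOLERANCE_MPH:
        driver.tickets += 1
    if driver.tickets >= WARRANT_THRESHOLD:
        driver.warrant_issued = True

d = Driver("example")
for reading in [66, 64, 67, 71]:     # speed readings against a 65 mph limit
    record_speed(d, reading, 65)
print(d.tickets, d.warrant_issued)   # 3 True: perfect efficiency, zero discretion
```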

Several years ago, one of us received an invitation to a small dinner. Founders, venture capitalists, researchers at a secretive tech lab, and two professors assembled in the private dining room of a four-star hotel in Silicon Valley. The host—one of the most prominent names in technology—thanked everyone for coming and reminded us of the topic we’d been invited to discuss: “What if a new state were created to maximize science and tech progress powered by commercial models—what would that run like? Utopia? Dystopia?”

The conversation progressed, with enthusiasm around the table for the establishment of a small nation-state dedicated to optimizing the progress of science and technology. Rob raised his hand to speak. “I’m just wondering, would this state be a democracy? What’s the governance structure here?” The response was quick: “Democracy? No. To optimize for science, we need a beneficent technocrat in charge. Democracy is too slow, and it holds science back.”

Adapted from Chapter 1 of System Error: Where Big Tech Went Wrong and How We Can Reboot, published on September 7 by Harper Collins

Source: Why Silicon Valley’s Optimization Mindset Sets Us Up for Failure (https://time.com/6096754/silicon-valley-optimization-mindset/)

A ‘safe space for racists’: antisemitism report criticises social media giants

Sigh….

There has been a serious and systemic failure to tackle antisemitism across the five biggest social media platforms, resulting in a “safe space for racists”, according to a new report.

Facebook, Twitter, Instagram, YouTube and TikTok failed to act on 84% of posts spreading anti-Jewish hatred and propaganda reported via the platforms’ official complaints system.

Researchers from the Center for Countering Digital Hate (CCDH), a UK/US non-profit organisation, flagged hundreds of antisemitic posts over a six-week period earlier this year. The posts, including Nazi, neo-Nazi and white supremacist content, received up to 7.3 million impressions.

Although each of the 714 posts clearly violated the platforms’ policies, fewer than one in six were removed or had the associated accounts deleted after being pointed out to moderators.

The report found that the platforms are particularly poor at acting on antisemitic conspiracy theories, including tropes about “Jewish puppeteers”, the Rothschild family and George Soros, as well as misinformation connecting Jewish people to the pandemic. Holocaust denial was also often left unchecked, with 80% of posts denying or downplaying the murder of 6 million Jews receiving no enforcement action whatsoever.

Facebook was the worst offender, acting on just 10.9% of posts, despite introducing tougher guidelines on antisemitic content last year. In November 2020, the company updated its hate speech policy to ban content that denies or distorts the Holocaust.

However, a post promoting a viral article that claimed the Holocaust was a hoax, accompanied by a falsified image of the gates of Auschwitz and a white supremacist meme, was not removed after researchers reported it to moderators. Instead, it was labelled as false information, which CCDH say contributed to it reaching hundreds of thousands of users. Statistics from Facebook’s own analytics tool show the article received nearly a quarter of a million likes, shares and comments across the platform.

Twitter also showed a poor rate of enforcement action, removing just 11% of posts or accounts and failing to act on hashtags such as #holohoax (often used by Holocaust deniers) or #JewWorldOrder (used to promote anti-Jewish global conspiracies). Instagram also failed to act on antisemitic hashtags, as well as videos inciting violence towards Jewish people.

YouTube acted on 21% of reported posts, while Instagram and TikTok each acted on around 18%. On TikTok, an app popular with teenagers, antisemitism frequently takes the form of racist abuse sent directly to Jewish users as comments on their videos.

The report, entitled Failure to Protect, found that the platform did not act in three out of four cases of antisemitic comments sent to Jewish users. When TikTok did act, it more frequently removed individual comments instead of banning the users who sent them, barring accounts that sent direct antisemitic abuse in just 5% of cases.

Forty-one videos identified by researchers as containing hateful content, which have racked up a total of 3.5m views over an average of six years, remain on YouTube.

The report recommends financial penalties to incentivise better moderation, with improved training and support. Platforms should also remove groups dedicated to antisemitism and ban accounts that send racist abuse directly to users.

Imran Ahmed, CEO of CCDH, said the research showed that online abuse is not about algorithms or automation, as the tech companies allowed “bigots to keep their accounts open and their hate to remain online”, even after alerting human moderators.

He said that social media, which he described as “how we connect as a society”, has become a “safe space for racists” to normalise “hateful rhetoric without fear of consequences”. “This is why social media is increasingly unsafe for Jewish people, just as it is becoming for women, Black people, Muslims, LGBT people and many other groups,” he added.

Ahmed said the test of the government’s online safety bill, first drafted in 2019 and introduced to parliament in May, is whether platforms can be made to enforce their own rules or face consequences themselves.

“While we have made progress in fighting antisemitism on Facebook, our work is never done,” said Dani Lever, a Facebook spokesperson. Lever told the New York Times that the prevalence of hate speech on the platform was decreasing, and she said that “given the alarming rise in antisemitism around the world, we have and will continue to take significant action through our policies”.

A Twitter spokesperson said the company condemned antisemitism and was working to make the platform a safer place for online engagement. “We recognise that there’s more to do, and we’ll continue to listen and integrate stakeholders’ feedback in these ongoing efforts,” the spokesperson said.

TikTok said in a statement that it condemns antisemitism and does not tolerate hate speech, and proactively removes accounts and content that violate its policies. “We are adamant about continually improving how we protect our community,” the company said.

YouTube said in a statement that it had “made significant progress” in removing hate speech over the last few years. “This work is ongoing and we appreciate this feedback,” said a YouTube spokesperson.

Instagram, which is owned by Facebook, did not respond to a request for comment before publication.

Source: A ‘safe space for racists’: antisemitism report criticises social media giants

A Different Way of Thinking About Cancel Culture: Social media companies and other organizations are looking out for themselves.

A needed different take on the role that social media companies play:

In March, Alexi McCammond, the newly hired editor of Teen Vogue, resigned following backlash over offensive tweets she’d sent a decade ago, beginning when she was 17. In January, Will Wilkinson lost his job as vice president for research at the center-right Niskanen Center for a satirical tweet about Republicans who wanted to hang Mike Pence. (Wilkinson was also suspended from his role as a Times Opinion contributor.)

To debate whether these punishments were fair is to commit a category error. These weren’t verdicts weighed and delivered on behalf of society. These were the actions of self-interested organizations that had decided their employees were now liabilities. Teen Vogue, which is part of Condé Nast, has remade itself in recent years as a leftist magazine built around anti-racist principles. Niskanen trades on its perceived clout with elected Republicans. In both cases, the organization was trying to protect itself, for its own reasons.

That suggests a different way of thinking about the amorphous thing we call cancel culture, and a more useful one. Cancellations — defined here as actually losing your job or your livelihood — occur when an employee’s speech infraction generates public attention that threatens an employer’s profits, influence or reputation. This isn’t an issue of “wokeness,” as anyone who has been on the business end of a right-wing mob trying to get them or their employees fired — as I have, multiple times — knows. It’s driven by economics, and the key actors are social media giants and employers who really could change the decisions they make in ways that would lead to a better speech climate for us all.

Boundaries on acceptable speech aren’t new, and they’re not narrower today than in the past. Remember the post-9/11 furor over whether you could run for office if you didn’t wear an American flag pin at all times? What is new is the role social media (and, to a lesser extent, digital news) plays in both focusing outrage and scaring employers. And this, too, is a problem of economics, not culture. Social platforms and media publishers want to attract people to their websites or shows and make sure they come back. They do this, in part, by tuning the platforms and home pages and story choices to surface content that outrages the audience.

My former Times colleague Charlie Warzel, in his new newsletter, points to Twitter’s trending box as an example of how this works, and it’s a good one if you want to see the hidden hand of technology and corporate business models in what we keep calling a cultural problem. This box is where Twitter curates its sprawling conversation, directing everyone who logs on to topics drawing unusual interest at the moment. Oftentimes that’s someone who said something stupid, or offensive — or even someone who said something innocuous only to have it misread as stupid or offensive.

The trending box blasts missives meant for one community to all communities. The original context for the tweet collapses; whatever generosity or prior knowledge the intended audience might have brought to the interaction is lost. The loss of context is supercharged by another feature of the platform: the quote-tweet, where instead of answering in the original conversation, you pull the tweet out of its context and write something cutting on top of it. (A crummier version comes when people just screenshot a tweet, so the audience can’t even click back to the original, or see the possible apology.) So the trending box concentrates attention on a particular person, already having a bad day, and the quote-tweet function encourages people to carve up the message for their own purposes.

This is not just a problem of social media platforms. Watch Fox News for a night, and you’ll see a festival of stories elevating some random local excess to national attention and inflicting terrible pain on the people who are targeted. Fox isn’t anti-cancel culture; it just wants to be the one controlling that culture.

Cancellations are sometimes intended, and deserved. Some speech should have consequences. But many of the people who participate in the digital pile-ons that lead to cancellation don’t want to cancel anybody. They’re just joining in that day’s online conversation. They’re criticizing an offensive or even dangerous idea, mocking someone they think deserves it, hunting for retweets, demanding accountability, making a joke. They aren’t trying to get anyone fired. But collectively, they do get someone fired.

In all these cases, the economics of corporations that monetize attention are colliding with the incentives of employers to avoid bad publicity. One structural way social media has changed corporate management is that it has made P.R. problems harder to ignore. Outrage that used to play out relatively quietly, through letters and emails and phone calls, now plays out in public. Hasty meetings get called, senior executives get pulled in, and that’s when people get fired.

An even more sinister version of this operates retrospectively, through search results. An employer considering a job candidate does a basic Google search, finds an embarrassing controversy from three years ago and quietly moves on to the next candidate. Wokeness has particular economic power right now because corporations, correctly, don’t want to be seen as racist and homophobic, but imagine how social media would have supercharged the censorious dynamics that dominated right after 9/11, when even french fries were suspected of disloyalty.

Tressie McMillan Cottom, the sociologist and cultural critic, made a great point to me about this on a recent podcast. “One of the problems right now is that social shame, which I think in and of itself is enough, usually, to discipline most people, is now tied to economic and political and cultural capital,” she said.

People should be shamed when they say something awful. Social sanctions are an important mechanism for social change, and they should be used. The problem is when that one awful thing someone said comes to define their online identity, and then it defines their future economic and political and personal opportunities. I don’t like the line that no one deserves to be defined by the worst thing they’ve ever done — tell me the body count first — but let’s agree that most of us don’t deserve to be defined by the dumbest thing we’ve ever said, forever, just because Google’s algorithm noticed that that moment got more links than the rest of our life combined.

I think this suggests a few ways to make online discourse better. Twitter should rethink its trending box, and at least consider the role quote-tweets play on the platform. (It would be easy enough to retain them as a function while throttling their virality.) Fox News should stop being, well, Fox News. All of the social media platforms need to think about the way their algorithms juice outrage and supercharge the human tendency to police group boundaries.

For months, when I logged onto Facebook, I saw the posts of a distant acquaintance who had turned into an anti-masker, and whose comment threads had turned into flame wars. This wasn’t someone I was close to, but the algorithm knew that what was being posted was generating a lot of angry reaction among our mutual friends, and it repeatedly tried to get me to react, too. These are design choices that are making society more toxic. Different choices can, and should, be made.

The rest of corporate America — and that includes my own industry — needs to think seriously about how severe a punishment it is to fire people under public conditions. When termination is for private misdeeds or poor performance, it typically stays private. When it is for something the internet is outraged about, it can shatter someone’s economic prospects for years to come. It’s always hard, from the outside, to evaluate any individual case, but I’ve seen a lot of firings that probably should have been suspensions or scoldings.

This also raises the question of our online identities, and the way strange and unexpected moments come to define them. A person’s Google results can shape the rest of that person’s life, both economically and otherwise. And yet people have almost no control over what’s shown in those results, unless they have the money to hire a firm that specializes in rehabilitating online reputations. This isn’t an easy problem to solve, but our lifelong digital identities are too important to be left to the terms and conditions of a single company, or even a few.

Finally, it would be better to focus on cancel behavior than cancel culture. There is no one ideology that gleefully mobs or targets employers online. Plenty of anti-cancel culture warriors get their retweets directing their followers to mob others. So here’s a guideline that I think would make online discourse better. Unless something that is said is truly dangerous and you actually want to see that person fired from their current job and potentially unable to find a new one — a high bar, but one that is sometimes met — you shouldn’t use social media to join an ongoing pile-on against a normal person. If it’s a politician or a cable news host or a senator, well, that’s politics. But this works differently when it’s someone unprepared for that scrutiny. We would all do better to remember that what feels like an offhand tweet to us could have real consequences for others if there are hundreds or thousands of similar tweets and articles. Scale matters.

What I’m offering here would, I hope, help ease a specific problem: the disproportionate and capricious economic punishments meted out in the aftermath of an online pile-on. It won’t end the political conflict over acceptable speech, nor should it. There have always been things we cannot say in polite society, and those things are changing, in overdue ways. The balance of demographic power is shifting, and groups that had little voice in the language and ordering of the national agenda are gaining that voice and using it.

Slowly and painfully, we are creating a society in which more people can speak and have some say over how they’re spoken of. What I hope we can do is keep that fight from serving the business models of social media platforms and the shifting priorities of corporate marketing departments.

Source: https://www.nytimes.com/2021/04/18/opinion/cancel-culture-social-media.html

Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Of note given complaints by conservatives:

Sen. Roger Wicker hit a familiar note when he announced on Thursday that the Commerce Committee was issuing subpoenas to force the testimony of Facebook Chief Executive Mark Zuckerberg and other tech leaders.

Tech platforms like Facebook, the Mississippi Republican said, “disproportionately suppress and censor conservative views online.”

When top tech bosses were summoned to Capitol Hill in July for a hearing on the industry’s immense power, Republican Congressman Jim Jordan made an even blunter accusation.

“I’ll just cut to the chase, Big Tech is out to get conservatives,” Jordan said. “That’s not a hunch. That’s not a suspicion. That’s a fact.”

But the facts to support that case have been hard to find. NPR called up half a dozen technology experts, including data scientists who have special access to Facebook’s internal metrics. The consensus: There is no statistical evidence to support the argument that Facebook does not give conservative views a fair shake.

Let’s step back for a moment.

When Republicans claim Facebook is “biased,” they often collapse two distinct complaints into one. First, that the social network deliberately scrubs right-leaning content from its site. There is no proof to back this up. Second, that conservative news and perspectives are being throttled by Facebook, that the social network is preventing the content from finding a large audience. That claim is not only unproven, but publicly available data shows the exact opposite: conservative news regularly ranks among the most popular content on the site.

Now, there are some complex layers to this, but former Facebook employees and data experts say the conservative bias argument would be easier to talk about — and easier to debunk — if Facebook was more transparent.

The social network keeps secret some of the most basic data points, like what news stories are the most viewed on Facebook on any given day, leaving data scientists, journalists and the general public in the dark about what people are actually seeing on their News Feeds.

There are other sources of data, but they offer just a tiny window into the sprawling universe of nearly 3 billion users. Facebook is quick to point out that the public metrics available are of limited use, yet it does so without offering a real solution, which would be opening up some of its more comprehensive analytics for public scrutiny.

Until it does, there’s little to counter rumors about what thrives and dies on Facebook and how the platform is shaping political discourse.

“It’s kind of a purgatory of their own making,” said Kathy Qian, a data scientist who co-founded Code for Democracy.

What the available data reveals about possible bias

Perhaps the most often-cited source of data on what is popular on Facebook is a tracking tool called CrowdTangle, a startup that Facebook acquired in 2016.

New York Times journalist Kevin Roose has created a Twitter account where he posts the top ten most-engaging posts based on CrowdTangle data. These lists are dominated mostly by conservative commentators like Ben Shapiro and Dan Bongino and Fox News. They resemble a “parallel media universe that left-of-center Facebook users may never encounter,” Roose writes.

Yet these lists are like looking at Facebook through a soda straw, say researchers like MIT’s Jennifer Allen, who used to work at Facebook and now studies how people consume news on social media. CrowdTangle, Allen says, does not provide the whole story.

That’s because CrowdTangle only captures engagement — likes, shares, comments and other reactions — from public pages. But just because a post provokes lots of reactions does not mean it reaches many users. The data does not show how many people clicked on a link, or what the overall reach of the post was. And much of what people see on Facebook is from their friends, not public pages.

“You see these crazy numbers on CrowdTangle, but you don’t see how many people are engaging with this compared with the rest of the platform,” Allen said.
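To make the gap concrete, here is a minimal sketch in Python using entirely made-up numbers (the page names, fields and figures are hypothetical, not CrowdTangle’s actual schema). It ranks the same set of posts two ways: by engagement, the only ranking outside researchers can build from CrowdTangle-style data, and by reach, which Facebook does not publish.

    # Hypothetical posts: engagement is visible to outside researchers, reach is not.
    posts = [
        {"page": "Partisan commentator", "engagement": 250_000, "reach": 1_200_000},
        {"page": "Partisan outlet",      "engagement": 180_000, "reach": 900_000},
        {"page": "National newspaper",   "engagement": 55_000,  "reach": 7_000_000},
        {"page": "Local TV station",     "engagement": 40_000,  "reach": 5_500_000},
    ]

    by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    by_reach = sorted(posts, key=lambda p: p["reach"], reverse=True)

    print("Ranked by engagement (what engagement-tracking tools can show):")
    for p in by_engagement:
        print(f"  {p['page']}: {p['engagement']:,} reactions, comments and shares")

    print("Ranked by reach (data Facebook keeps internal):")
    for p in by_reach:
        print(f"  {p['page']}: {p['reach']:,} people reached")

In this toy data the two rankings invert, which is the point the researchers are making: without reach figures, a “most engaged” list says little about what most users actually see.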

Another point researchers raise: All engagement is not created equal.

Users could “hate-like” a post, or click like as a way of bookmarking, or leave another reaction expressing disgust, not support. Take, for example, the laughing-face emoji.

“It could mean, ‘I agree with this’ or ‘This is so hilariously untrue,’” said data scientist Qian. “It’s just hard to know what people actually mean by those reactions.”

It’s also hard to tell whether people or automated bots are generating all the likes, comments and shares. Former Facebook research scientist Solomon Messing conducted a study of Twitter in 2018 finding that bots were likely responsible for 66% of link shares on the platform. The tactic is employed on Facebook, too.

“What Facebook calls ‘inauthentic behavior’ and other borderline scam-like activity are unfortunately common and you can buy fake engagement easily on a number of websites,” Messing said.

Brendan Nyhan, a political scientist at Dartmouth College, is also wary about drawing any big conclusions from CrowdTangle.

“You can’t judge anything about American movies by looking at the top ten box office hits of all time,” Nyhan said. “That’s not a great way of understanding what people are actually watching. There’s the same risk here.”

‘Concerned about being seen as on the side of liberals’

Experts agree that a much better measure would be a by-the-numbers rundown of what posts are reaching the most people. So why doesn’t Facebook reveal that data?

In a Twitter thread back in July, John Hegeman, the head of Facebook’s News Feed, offered one sample of such a list, saying it is “not as partisan” as lists compiled with CrowdTangle data suggest.

But when asked why Facebook doesn’t share that broader data with the public, Hegeman did not reply.

It could be, some experts say, that Facebook fears that data will be used as ammunition against the company at a time when Congress and the Trump administration are threatening to rein in the power of Big Tech.

“They are incredibly concerned about being seen as on the side of liberals. That is against the profit motive of their business,” Dartmouth’s Nyhan said of Facebook executives. “I don’t see any reason to see that they have a secret, hidden liberal agenda, but they are just so unwilling to be transparent.”

Facebook has been more forthcoming with some academic researchers looking at how social media affects elections and democracy. In April 2019, it announced a partnership that would give 60 scholars access to more data, including the background and political affiliation of people who are engaging content.

One of those researchers is University of Pennsylvania data scientist Duncan Watts.

“Mostly it’s mainstream content,” he said of the most viewed and clicked on posts. “If anything, there is a bias in favor of conservative content.”

While Facebook posts from national television networks and major newspapers get the most clicks, partisan outlets like the Daily Wire and Breitbart routinely show up in top spots, too.

“That should be so marginal that it has no relevance at all,” Watts said of the right-wing content. “The fact that it is showing up at all is troubling.”

‘More false and misleading content on the right’

Accusations from Trump and other Republicans in Washington that Facebook is a biased referee of its content tend to flare up when the social network takes action against conservative-leaning posts that violate its policies.

Researchers say there is a reason why most of the high-profile examples of content warnings and removals target conservative content.

“That is a result of there just being more false and misleading content on the right,” said researcher Allen. “There are bad actors on the left, but the ecosystem on the right is just much more mature and popular.”

Facebook’s algorithms could also be helping more people see right-wing content that’s meant to evoke passionate reactions, she added.

Because of the sheer amount of envelope-pushing conservative content, some of it veering into the realm of conspiracy theories, the moderation from Facebook is also greater.

Or as Nyhan put it: “When reality is asymmetric, enforcement may be asymmetric. That doesn’t necessarily indicate a bias.”

The attacks on Facebook over perceived prejudice against conservatives have helped fuel the push in Congress and the White House to reform Section 230 of the Communications Decency Act of 1996, which allows platforms to avoid lawsuits over what users post and gives tech companies the freedom to police their sites as they see fit.

Joe Osborne, a Facebook spokesman, said in a statement that the social network’s content moderation policies are applied fairly across the board.

“While many Republicans think we should do one thing, many Democrats think we should do the exact opposite. We’ve faced criticism from Republicans for being biased against conservatives and Democrats for not taking more steps to restrict the exact same content. Our job is to create one consistent set of rules that applies equally to everyone.”

Osborne confirmed that Facebook is exploring ways to make more data available in the platform’s public tools, but he declined to elaborate.

Watts, the University of Pennsylvania data scientist who studies social media, said Facebook is sensitive to Republican criticism, but that no matter what decisions it makes, the attacks will continue.

“Facebook could end up responding in a way to accommodate the right, but the right will never be appeased,” Watts said. “So it’s this constant pressure of ‘you have to give us more, you have to give us more,'” he said. “And it creates a situation where there’s no way to win arguments based on evidence, because they can just say, ‘Well, I don’t trust you.'”

Source: Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages

The social media platforms continue to undermine social inclusion and cohesion:

Facebook has allowed conservative news outlets and personalities to repeatedly spread false information without facing any of the company’s stated penalties, according to leaked materials reviewed by NBC News.

According to internal discussions from the last six months, Facebook has relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU and the pundit Charlie Kirk, were not penalized for violations of the company’s misinformation policies.

Facebook’s fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a “strike” basis, meaning a page can post inaccurate information and receive a one-strike warning before the platform takes action. Two strikes in 90 days places an account into “repeat offender” status, which can lead to a reduction in distribution of the account’s content and a temporary block on advertising on the platform.
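As a rough illustration of the strike bookkeeping described above, here is a minimal Python sketch. The two-strikes-in-90-days threshold and the idea that strikes can later be deleted on review come from the reporting; the class and method names are invented for the example and do not describe Facebook’s actual system.

    from datetime import date, timedelta

    WINDOW = timedelta(days=90)      # strikes are counted over a rolling 90-day window
    REPEAT_OFFENDER_THRESHOLD = 2    # two strikes in the window triggers penalties

    class Page:
        def __init__(self, name):
            self.name = name
            self.strikes = []        # dates on which misinformation strikes were issued

        def add_strike(self, when):
            self.strikes.append(when)

        def remove_strike(self, when):
            # The leaked escalations describe strikes being deleted after internal review.
            if when in self.strikes:
                self.strikes.remove(when)

        def is_repeat_offender(self, today):
            recent = [d for d in self.strikes if today - d <= WINDOW]
            return len(recent) >= REPEAT_OFFENDER_THRESHOLD

    page = Page("Example page")
    page.add_strike(date(2020, 3, 1))
    page.add_strike(date(2020, 4, 15))
    print(page.is_repeat_offender(date(2020, 4, 20)))   # True: two strikes within 90 days

    page.remove_strike(date(2020, 4, 15))               # a strike is deleted on review
    print(page.is_repeat_offender(date(2020, 4, 20)))   # False: back under the threshold

The last two lines are the step at issue in the leaked material: when a strike is removed during review, the page drops out of “repeat offender” status and the associated penalties never apply.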

Facebook has a process that allows its employees or representatives from Facebook’s partners, including news organizations, politicians, influencers and others with a significant presence on the platform, to flag misinformation-related problems. Fact-checking labels are applied to posts by Facebook when third-party fact-checkers determine the posts contain misinformation. A news organization or politician can appeal the decision to attach a label to one of its posts.

Facebook employees who work with content partners then decide if an appeal is a high-priority issue or PR risk, in which case they log it in an internal task management system as a misinformation “escalation.” Marking something as an “escalation” means that senior leadership is notified so they can review the situation and quickly — often within 24 hours — make a decision about how to proceed.

Facebook receives many queries about misinformation from its partners, but only a small subset is deemed to require input from senior leadership. Since February, more than 30 of these misinformation queries were tagged as “escalations” within the company’s task management system, used by employees to track and assign work projects.

The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook’s fact-checking could go public and fuel allegations that the social network was biased against conservatives.

The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias.

Two current Facebook employees and two former employees, who spoke anonymously out of fear of professional repercussions, said they believed the company had become hypersensitive to conservative complaints, in some cases making special allowances for conservative pages to avoid negative publicity.

“This supposed goal of this process is to prevent embarrassing false positives against respectable content partners, but the data shows that this is instead being used primarily to shield conservative fake news from the consequences,” said one former employee.

About two-thirds of the “escalations” included in the leaked list relate to misinformation issues linked to conservative pages, including those of Breitbart, Donald Trump Jr., Eric Trump and Gateway Pundit. There was one escalation related to a progressive advocacy group and one each for CNN, CBS, Yahoo and the World Health Organization.

There were also escalations related to left-leaning entities, including one about an ad from Democratic super PAC Priorities USA that the Trump campaign and fact checkers have labeled as misleading. Those matters focused on preventing misleading videos that were already being shared widely on other media platforms from spreading on Facebook and were not linked to complaints or concerns about strikes.

Facebook and other tech companies including Twitter and Google have faced repeated accusations of bias against conservatives in their content moderation decisions, though there is little clear evidence that this bias exists. The issue was reignited this week when Facebook removed a video posted to Trump’s personal Facebook page in which he falsely claimed that children are “almost immune” to COVID-19. The Trump campaign accused Facebook of “flagrant bias.”

Facebook spokesperson Andy Stone did not dispute the authenticity of the leaked materials, but said they did not provide the full context of the situation.

In recent years, Facebook has developed a lengthy set of rules that govern how the platform moderates false or misleading information. But how those rules are applied can vary and is up to the discretion of Facebook’s executives.

In late March, a Facebook employee raised concerns on an internal message board about a “false” fact-checking label that had been added to a post by the conservative bloggers Diamond and Silk in which they expressed outrage over the false allegation that Democrats were trying to give members of Congress a $25 million raise as part of a COVID-19 stimulus package.

Diamond and Silk had not yet complained to Facebook about the fact check, but the employee was sounding the alarm because the “partner is extremely sensitive and has not hesitated going public about their concerns around alleged conservative bias on Facebook.”

Since it was the account’s second misinformation strike in 90 days, according to the leaked internal posts, the page was placed into “repeat offender” status.

Diamond and Silk appealed the “false” rating that had been applied by third-party fact-checker Lead Stories on the basis that they were expressing opinion and not stating a fact. The rating was downgraded by Lead Stories to “partly false” and they were taken out of “repeat offender” status. Even so, someone at Facebook described as “Policy/Leadership” intervened and instructed the team to remove both strikes from the account, according to the leaked material.

In another case in late May, a Facebook employee filed a misinformation escalation for PragerU, after a series of fact-checking labels were applied to several similar posts suggesting polar bear populations had not been decimated by climate change and that a photo of a starving animal was used as a “deliberate lie to advance the climate change agenda.” This claim was fact-checked by one of Facebook’s independent fact-checking partners, Climate Feedback, as false and meant that the PragerU page had “repeat offender” status and would potentially be banned from advertising.

A Facebook employee escalated the issue because of “partner sensitivity” and noted that the repeat offender status was “especially worrisome due to PragerU having 500 active ads on our platform,” according to the discussion contained within the task management system and leaked to NBC News.

After some back and forth between employees, the fact check label was left on the posts, but the strikes that could have jeopardized the advertising campaign were removed from PragerU’s pages.

Stone, the Facebook spokesperson, said that the company defers to third-party fact-checkers on the ratings given to posts, but that the company is responsible for “how we manage our internal systems for repeat offenders.”

“We apply additional system-wide penalties for multiple false ratings, including demonetization and the inability to advertise, unless we determine that one or more of those ratings does not warrant additional consequences,” he said in an emailed statement.

He added that Facebook works with more than 70 fact-checking partners who apply fact-checks to “millions of pieces of content.”

Facebook announced Thursday that it banned a Republican PAC, the Committee to Defend the President, from advertising on the platform following repeated sharing of misinformation.

But the ongoing sensitivity to conservative complaints about fact-checking continues to trigger heated debates inside Facebook, according to leaked posts from Facebook’s internal message board and interviews with current and former employees.

“The research has shown no evidence of bias against conservatives on Facebook,” said another employee, “So why are we trying to appease them?”

Those concerns have also spilled out onto the company’s internal message boards.

One employee wrote a post on 19 July, first reported by BuzzFeed News on Thursday, summarizing the list of misinformation escalations found in the task management system and arguing that the company was pandering to conservative politicians.

The post, a copy of which NBC News has reviewed, also compared Mark Zuckerberg to President Donald Trump and Russian President Vladimir Putin.

“Just like all the robber barons and slavers and plunderers who came before you, you are spending a fortune you didn’t build. No amount of charity can ever balance out the poverty, war and environmental damage enabled by your support of Donald Trump,” the employee wrote.

The post was removed for violating Facebook’s “respectful communications” policy and the list of escalations, previously accessible to all employees, was made private. The employee who wrote the post was later fired.

“We recognize that transparency and openness are important company values,” wrote a Facebook employee involved in handling misinformation escalations in response to questions about the list of escalations. “Unfortunately, because information from these Tasks were leaked, we’ve made them private for only subscribers and are considering how best to move forward.”

Source: https://www.nbcnews.com/tech/tech-news/sensitive-claims-bias-facebook-relaxed-misinformation-rules-conservative-pages-n1236182

Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump, Twitter contrast

Needed backlash at what can only be described as business-motivated collusion, one that becomes harder and harder to justify from any perspective:

Facebook is facing an unusually public backlash from its employees over the company’s handling of President Trump’s inflammatory posts about protests over the police killing of George Floyd, an unarmed black man in Minneapolis.

At least a dozen employees, some in senior positions, have openly condemned Facebook’s lack of action on the president’s posts and CEO Mark Zuckerberg’s defense of that decision. Some employees staged a virtual walkout Monday.

“Mark is wrong, and I will endeavor in the loudest possible way to change his mind,” tweeted Ryan Freitas, director of product design for Facebook’s news feed.

“I work at Facebook and I am not proud of how we’re showing up,” tweeted Jason Toff, director of product management. “The majority of coworkers I’ve spoken to feel the same way. We are making our voice heard.”

The social network also is under intense pressure from civil rights groups, Democrats and the public over its decision to leave up posts from the president that critics say violate Facebook’s rules against inciting violence. These included a post last week about the protests in which the president said, “when the looting starts, the shooting starts.”

Twitter, in contrast, put a warning label on a tweet in which the president said the same thing, saying it violated rules against glorifying violence.

The move escalated a feud with the president that started when the company put fact-checking labels on two of his tweets earlier in the week. Trump retaliated by signing an executive order that attempts to strip online platforms of long-held legal protections.

Zuckerberg has long said he believes the company should not police what politicians say on its platform, arguing that political speech is already highly scrutinized. In a post Friday, the Facebook CEO said he had “been struggling with how to respond” to Trump’s posts.

“Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric,” he wrote. “I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”

Zuckerberg said Facebook had examined the post and decided to leave it up because “we think people need to know if the government is planning to deploy force.” He added that the company had been in touch with the White House to explain its policies. Zuckerberg spoke with Trump by phone Friday, according to a report published by Axios.

While Facebook’s 48,000 employees often debate policies and actions within the company, it is unusual for staff to take that criticism public. But the decision not to remove Trump’s posts has caused significant distress within the company, which is spilling over into public view.

“Censoring information that might help people see the complete picture *is* wrong. But giving a platform to incite violence and spread disinformation is unacceptable, regardless who you are or if it’s newsworthy,” tweeted Andrew Crow, head of design for the company’s Portal devices. “I disagree with Mark’s position and will work to make change happen.”

Several employees said on Twitter they were joining Monday’s walkout.

“Facebook’s recent decision to not act on posts that incite violence ignores other options to keep our community safe,” tweeted Sara Zhang, a product designer.

In a statement, Facebook spokesman Joe Osborne said: “We recognize the pain many of our people are feeling right now, especially our Black community. We encourage employees to speak openly when they disagree with leadership. As we face additional difficult decisions around content ahead, we’ll continue seeking their honest feedback.”

Less than 4% of Facebook’s U.S.-based staff are African American, according to the company’s most recent diversity report.

Facebook will not make employees participating in the walkout use paid time off, and it will not discipline those who participate.

On Sunday, Zuckerberg said the company would commit $10 million to groups working on racial justice. “I know Facebook needs to do more to support equality and safety for the Black community through our platforms,” he wrote.

Source: Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump

And Maureen Dowd’s call for Twitter to take Trump off the platform:

C’mon, @Jack. You can do it.

Throw on some Kendrick Lamar and get your head in the right space. Pour yourself a big old glass of salt juice. Draw an ice bath and fire up the cryotherapy pod and the infrared sauna. Then just pull the plug on him. You know you want to.

You could answer the existential question of whether @realDonaldTrump even exists if he doesn’t exist on Twitter. I tweet, therefore I am. Dorsey meets Descartes.

All it would take is one sweet click to force the greatest troll in the history of the internet to meet his maker. Maybe he just disappears in an orange cloud of smoke, screaming, “I’m melllllllting.”

Do Trump — and the world — a favor and send him back into the void whence he came. And then go have some fun: Meditate and fast for days on end!

Our country is going through biological, economic and societal convulsions. We can’t trust the powerful forces in this nation to tell us the truth or do the right thing. In fact, not only can we not trust them. We have every reason to believe they’re gunning for us.

In Washington, the Trump administration’s deception about the virus was lethal. On Wall Street and in Silicon Valley, the fat cats who carved up the country, drained us dry and left us with no safety net profiteered off the virus. In Minneapolis, the barbaric death of George Floyd after a police officer knelt on him for almost nine minutes showed yet again that black Americans have everything to fear from some who are charged with protecting them.

As if that weren’t enough, from the slough of our despond, we have to watch Donald Trump duke it out with the lords of the cloud in a contest to see who can destroy our democracy faster.

I wish I could go along with those who say this dark period of American life will ultimately make us nicer and simpler and more contemplative. How can that happen when the whole culture has been re-engineered to put us at each other’s throats?

Trump constantly torques up the tribal friction and cruelty, even as Twitter and Facebook refine their systems to ratchet up rage. It is amazing that a septuagenarian became the greatest exploiter of social media. Trump and Twitter were a match made in hell.

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

Why not just let all the bots trying to undermine our elections and spreading false information about the coronavirus and right-wing conspiracy theories and smear campaigns run amok? Sure, we’re weakening our society, but the weird, infantile maniacs running Silicon Valley must be allowed to rake in more billions and finish their mission of creating a giant cyberorganism of people, one huge and lucrative ball of rage.

“The shareholders of Facebook decided, ‘If you can increase my stock tenfold, we can put up with a lot of rage and hate,’” says Scott Galloway, professor of marketing at New York University’s Stern School of Business.

“These platforms have very dangerous profit motives. When you monetize rage at such an exponential rate, it’s bad for the world. These guys don’t look left or right; they just look down. They’re willing to promote white nationalism if there’s money in it. The rise of social media will be seen as directly correlating to the decline of Western civilization.”

Dorsey, who has more leeway because his stock isn’t as valuable as Facebook’s, made some mild moves against the president, who has been spewing lies and inciting violence on Twitter for years. He added footnotes clarifying false Trump tweets about mail-in ballots and put a warning label on the president’s tweet about the Minneapolis riots that echoed the language of a Miami police chief in 1967 and segregationist George Wallace: “When the looting starts, the shooting starts.”

“Jack is really sincerely trying to find something to make it better,” said one friend of the Twitter chief’s. “He’s like somebody trapped in a maze, going down every hallway and turning every corner.”

Zuckerberg, on the other hand, went on Fox to report that he was happy to continue enabling the Emperor of Chaos, noting that he did not think Facebook should be “the arbiter of truth of everything that people say online.”

It was a sickening display that made even some loyal Facebook staffers queasy. As The Verge’s Casey Newton reported, some employees objected to the company’s rationale in internal posts.

“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one wrote. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”

Trump, furious that Dorsey would attempt to rein him in on the very platform that catapulted him into the White House, immediately decided to try to rein in Dorsey.

He signed an executive order that might strip liability protection from social media sites, which would mean they would have to more assiduously police false and defamatory posts. Now that social media sites are behemoths, Galloway thinks that the removal of the Communications Decency Act makes a lot of sense even if the president is trying to do it for the wrong reasons.

Trump does not seem to realize, however, that he’s removing his own protection. He huffs and puffs about freedom of speech when he really wants the freedom to be vile. “It’s the mother of all cutting-off-your-nose-to-spite-your-face moves,” says Galloway.

The president wants to say things on Twitter that he will not be allowed to say if he exerts this control over Twitter. In a sense, it’s Trump versus his own brain. If Twitter can be sued for what people say on it, how can Trump continue to torment? Wouldn’t thousands of his own tweets have to be deleted?

“He’d be the equivalent of a slippery floor at a store that sells equipment for hip replacements,” says Galloway, who also posits that, in our hyper-politicized world, this will turn Twitter into a Democratic site and Facebook into a Republican one.

Nancy Pelosi, whose district encompasses Twitter, said that it did little good for Dorsey to put up a few fact-checks while letting Trump’s rants about murder and other “misrepresentations” stay up.

“Facebook, all of them, they are all about making money,” the speaker said. “Their business model is to make money at the expense of the truth and the facts.” She crisply concluded that “all they want is to not pay taxes; they got their tax break in 2017” and “they don’t want to be regulated, so they pander to the White House.”

C’mon, Jack. Make @realDonaldTrump melt to help end our meltdown.

Source: Think Outside the Box, Jack

 

Corpses and mob violence: How China’s social media echo chamber fuels coronavirus fears

Of note:

Corpses lie on the ground near hospitals. People kill their pets for fear the animals will spread disease. Mobs chase down people without masks and angrily force them to cover up.

These are the scenes flooding social media in China as the country grapples with the novel coronavirus that has prompted the World Health Organization to declare a global emergency.

But how much of what the Chinese people and international observers are seeing on social media is true?

Public mistrust of government authorities in China has reached such a severe level, observers say, that many Chinese people have turned to alternative online sources of information — often of questionable veracity.

“Many Chinese people are well aware of the government’s long track record of censoring information about threats to public health,” said Sarah Cook, director of the China Media Bulletin at human rights research group Freedom House.

“This fuels deep mistrust in official updates and undermines efforts to reduce fear and anxiety,” she told The Star.

There’s history to the earned mistrust. In the first few months of the SARS outbreak in 2003, the Chinese government tried to keep it a secret. By the time the new virus was publicly reported, five people had died and hundreds had already fallen ill. It was a health disaster that led to heaps of global backlash, and China sacked its health minister and the mayor of Beijing in apparent contrition about the mishandling.

While central government authorities in Beijing were much quicker to publicly report the new coronavirus, the local Wuhan city government initially censored the first reports of a new illness emerging in the city last December. Medical experts said in a research paper published in The Lancet that they’ve found new evidence that the origin of the outbreak may not have been a seafood market in Wuhan as the Chinese government reported, and the first human infections may have occurred in November.

Li Wang is among those glued to social media.

The economics researcher at the University of New Brunswick and former Canadian student is currently on lockdown in Wuhan after flying home to visit family during Lunar New Year.

To pass the time, he was one of millions of Chinese glued to their screens watching a livestream of a hospital being built in ten days to house patients that have overwhelmed Wuhan’s hospitals. The government says a crew of 7,000 worked around the clock to build the 1,000-bed hospital, and vowed to build another this week.

“Everyone is afraid to go outside … Almost everyone I have talked to online are panicked,” Wang said. Because he is not a Canadian citizen or permanent resident, he’s not able to board the chartered flight Canada is sending to bring back Canadians from the city.

China’s control of social media is a factor that adds to the confusion. Many people are familiar with mainland China’s “Great Firewall,” the internet censorship apparatus that automatically blocks international social media platforms such as Twitter, Facebook, YouTube and Instagram as well as many news outlets and the entire suite of Google services.

Chinese authorities are continually developing and fine-tuning their ability to censor social media posts on domestic websites such as the Twitter-like Weibo blogging platform. They even have the ability to surveil and automatically block parts of private conversations on chat apps such as WeChat.

WeChat is the preferred platform for many in China during the coronavirus outbreak because the chat groups there tend to be small or medium-sized groups where some users know each other personally.

“People are getting at least some information from individuals they personally know and trust (on WeChat typically), but that doesn’t make them insusceptible from the spread of false information,” said Cook.

“But for those who personally know the original source — say a relative who is a nurse in Wuhan — her information will likely appear very credible and believable to them and possibly rightly so.”

However, like all social media platforms, the quality of what a user sees depends on the quality of the people they have in their circles. A WeChat user who is friends with many doctors and nurses would likely get more reliable information.

Perhaps because authorities are aware of these communication challenges, government control over the small number of independent media outlets in China appears to have lightened over the past several weeks.

As a result, members of the public in China are turning to respected Chinese publications like Caixin to read quality journalism about the outbreak. The magazine recently published a four-part series produced by dozens of journalists, including a detailed account of the Wuhan government’s coverup of the crisis.

So are the images on social media real?

Yuri Qin, an editor at the Berkeley-based China Digital Times, a bilingual website that monitors the Chinese internet, says that unfortunately, some of the horrible videos and photographs might be real, although they are difficult to verify.

“Authorities in Wuhan have imposed some brutal measures to prevent the spread, and because of the panic some people are cruel to each other and sometimes they use extreme means to drive out or detain suspected carriers of the disease,” Qin told The Star in an email.

She says the loss of credibility of the local government has seemed to exacerbate paranoia and fear among citizens of Wuhan.

However, it’s also helpful to keep in mind that among the hundreds of millions of Chinese social media users, some have retained their sense of humour even during a health crisis. Some videos that have gone viral are jokes, and likely stem from people trying to make the best of their situations.

What are some reliable sources of English-language translations of Chinese social media posts on coronavirus?

The China Digital Times verifies and translates blog posts and diary entries from people living in China who are dealing with the coronavirus, enforced quarantines and health checks.

The website What’s on Weibo tracks and analyses viral social media posts on China’s most popular platforms.

Bill Bishop’s Sinocism newsletter regularly compiles and comments on Chinese-language media sources on a variety of news topics.

Source: Corpses and mob violence: How China’s social media echo chamber fuels coronavirus fears

Fears of election meddling on social media were overblown, say researchers

Hype versus the reality (perhaps Canada not important enough…). The hype was in both mainstream and ethnic media:

Now that the election is over and researchers have combed through the data collected, their conclusion is clear: there was more talk about foreign trolls during the campaign than there was evidence of their activities.

Although there were a few confirmed cases of attempts to deceive Canadians online, three large research teams devoted to detecting co-ordinated influence campaigns on social media report they found little to worry about.

In fact, there were more news reports about malicious activity during the campaign than traces of it.

“We didn’t see high levels of effective disinformation campaigns. We didn’t see evidence of effective bot networks in any of the major platforms. Yet, we saw a lot of coverage of these things,” said Derek Ruths, a professor of computer science at McGill University in Montreal.

He monitored social media for foreign meddling during the campaign and, as part of the Digital Democracy Project, scoured the web for signs of disinformation campaigns.

Threat of foreign influence was hyped

“The vast majority of news stories about disinformation overstated the results and represented them as far more conclusive than they were. It was the case everywhere, with all media,” he said.

It’s a view mirrored by the Ryerson Social Media Lab, which also monitored social media during the campaign.

“Fears of foreign and domestic interference were overblown,” Philip Mai, co-director of the Social Media Lab, told CBC News.

A major focus of monitoring efforts during the campaign was Twitter, a platform favoured by politicians, journalists and partisans of all stripes. It’s where a lot of political exchanges take place, and it’s an easy target for automated influence campaigns.

“Our preliminary analysis of the [Twitter hashtag] #cdnpoli suggests that only about one per cent of accounts that used that hashtag earlier in the election cycle can be classified as likely to be bots,” said Mai.

The word “likely” is key. Any social media analyst will tell you that detecting bona fide automated accounts that exist solely to spread a message far and wide is incredibly difficult.

#TrudeauMustGo and other frenzies

A few times during the campaign, independent researchers found signs that certain conversations on Twitter were being amplified by accounts that appeared to be foreign. For example, the popular hashtag #TrudeauMustGo was tweeted and retweeted in large numbers by users who had the word “MAGA” in their user descriptions.

But this doesn’t mean those users were part of a foreign campaign, Ruths said.

“It’s very hard to prove that those MAGA accounts aren’t Canadian,” he said. “How can you prove who’s Canadian online? What does a Canadian look like on Twitter?”

Few Canadians use Twitter for news. According to the Digital News Report from the Reuters Institute for the Study of Journalism, only 11 per cent of Canadians got their news on Twitter in 2019, down slightly from 12 per cent last year.

Twitter’s most avid users tend to be politicians, journalists and highly engaged partisans.

Fenwick McKelvey, an assistant professor at Montreal’s Concordia University who researches social media platforms, said he feels journalists overestimate Twitter’s ability to take the pulse of the voting public.

“Twitter is an elite medium used by journalists and politicians more than everyday Canadians,” McKelvey told CBC News. “Twitter is a very specific public. Not a proxy for public opinion.”

In fact, most Canadians — 57 per cent — told a 2018 survey by the Social Media Lab that they have never shared political opinions on any social media platform.

Tweets for elites

For an idea of just how elitist Twitter can be, take a look at who is driving its political conversations. For some of the major hashtags during the election — like #cdnpoli, #defundCBC and the recently popular #wexit — just a fraction of users post original content. The rest just retweet.

And the users who get the most retweets, the biggest influencers, represent an even tinier sliver of Twitter users, according to data from the University of Toronto’s Citizen Lab, another outfit that monitored disinformation during the campaign.

“What we thought was a horizontal democratic space is dominated by less than two per cent of accounts,” said Gabrielle Lim, a fellow at the Citizen Lab.

“We need to take everything with a grain of salt when looking at Twitter. Doing data analysis is easy, but we’re bad at contextualizing what it means,” Lim said.
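A rough sketch of the kind of measurement behind those figures, in Python with made-up tweet records (the data and the specific ratios are illustrative, not the Citizen Lab’s actual methodology): count what share of accounts post any original content, and what share of all retweets flow to the most-retweeted account.

    from collections import Counter

    # Hypothetical records: (author, is_retweet, original_author_or_None)
    tweets = [
        ("alice", False, None),
        ("bob",   True,  "alice"),
        ("carol", True,  "alice"),
        ("dave",  True,  "alice"),
        ("erin",  False, None),
        ("frank", True,  "erin"),
        ("grace", True,  "alice"),
    ]

    authors = {author for author, _, _ in tweets}
    original_authors = {author for author, is_rt, _ in tweets if not is_rt}
    print(f"Accounts posting original content: {len(original_authors) / len(authors):.0%}")

    retweet_targets = Counter(src for _, is_rt, src in tweets if is_rt)
    top_account, top_count = retweet_targets.most_common(1)[0]
    share = top_count / sum(retweet_targets.values())
    print(f"Share of retweets going to the most-retweeted account ({top_account}): {share:.0%}")

On real hashtag data, the same two ratios are what produce findings like “just a fraction of users post original content” and “dominated by less than two per cent of accounts.”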

So why this focus on Twitter if it’s such a small and unrepresentative medium for Canadians? Because it’s easy to study. Unless a user sets an account to private, everything posted on Twitter is public and fairly easy to access.

On the other hand, more popular social networks like Facebook make it much harder to harvest user content at scale. A lot of misinformation may also be shared in closed channels like private Facebook groups and WhatsApp groups, which are nearly impossible for outsiders to access.

But even taking into account those larger social media audiences, the evidence shows that Canadians are getting their news from a variety of sources, Lim noted.

Although the threat posed by online disinformation to Canadian democracy was overblown in the context of the 2019 campaign, Ruths said he still believes it was important to be alert, just as it’s important to go to the dentist even if no cavities are found.

And he suggests that journalists looking for evidence of bot activity apply the same level of rigour as the people doing the research.

“We saw a lot of well-intentioned reporting,” he said. “But finding suspected accounts is not the same as finding bots. Saying that MAGA accounts don’t look like Canadians doesn’t mean they’re not.”

Source: Fears of election meddling on social media were overblown, say researchers