How Social Media Amplifies Misinformation More Than Information

Not surprising but useful study:

It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.

Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

“We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous posts that were not flagged from the same accounts. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.
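The article does not publish the institute’s exact formula, but the description above suggests a per-post ratio: engagement on a fact-checked false post divided by the typical engagement of the same account’s earlier, unflagged posts. Below is a minimal sketch of one plausible way to compute such a figure; the data, the function names and the use of medians are illustrative assumptions, not the institute’s published method.

```python
# Hypothetical sketch of an "amplification factor" metric, assuming per-post
# engagement counts are available. This is NOT the Integrity Institute's
# published methodology; it only illustrates the kind of comparison described.
from statistics import median

# Illustrative data only: (platform, account, engagement on the fact-checked
# false post, engagement counts for that account's previous unflagged posts)
flagged_posts = [
    ("twitter", "acct_a", 9000, [400, 550, 600]),
    ("twitter", "acct_b", 3000, [700, 800]),
    ("facebook", "acct_c", 1200, [900, 1000, 1100]),
]

def amplification_factor(flagged_engagement, baseline_engagements):
    """Ratio of a false post's engagement to the account's typical engagement."""
    baseline = median(baseline_engagements)
    return flagged_engagement / baseline if baseline else None

by_platform = {}
for platform, _, flagged, baseline in flagged_posts:
    ratio = amplification_factor(flagged, baseline)
    if ratio is not None:
        by_platform.setdefault(platform, []).append(ratio)

for platform, ratios in by_platform.items():
    # Median across sampled posts keeps one viral outlier from dominating.
    print(platform, round(median(ratios), 2))
```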

Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.

Facebook’s amplification factor for video content alone is closer to TikTok’s, the institute found. That’s because Reels and Facebook Watch, the platform’s video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.

Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content.

“Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”

Source: How Social Media Amplifies Misinformation More Than Information

Google Finds ‘Inoculating’ People Against Misinformation Helps Blunt Its Power

Interesting. Worth checking out the videos:

In the fight against online misinformation, falsehoods have key advantages: They crop up fast and spread at the speed of electrons, and there is a lag period before fact checkers can debunk them.

So researchers at Google, the University of Cambridge and the University of Bristol tested a different approach that tries to undermine misinformation before people see it. They call it “pre-bunking.”

The researchers found that psychologically “inoculating” internet users against lies and conspiracy theories — by pre-emptively showing them videos about the tactics behind misinformation — made people more skeptical of falsehoods afterward, according to an academic paper published in the journal Science Advances on Wednesday. But effective educational tools still may not be enough to reach people with hardened political beliefs, the researchers found.

Since Russia spread disinformation on Facebook during the 2016 election, major technology companies have struggled to balance concerns about censorship with fighting online lies and conspiracy theories. Despite an array of attempts by the companies to address the problem, it is still largely up to users to differentiate between fact and fiction.

The strategies and tools being deployed during the midterm vote in the United States this year by Facebook, TikTok and other companies often resemble tactics developed to deal with misinformation in past elections: partnerships with fact-checking groups, warning labels, portals with vetted explainers, as well as post removal and user bans.

Social media platforms have made attempts to pre-bunk before, though those efforts have done little to slow the spread of false information. Most have also not been as detailed — or as entertaining — as the videos used in the studies by the researchers.

Twitter said this month that it would try to “enable healthy civic conversation” during the midterm elections in part by reviving pop-up warnings, which it used during the 2020 election. Warnings, written in multiple languages, will appear as prompts placed atop users’ feeds and in searches for certain topics.

The new paper details seven experiments with almost 30,000 total participants. The researchers bought YouTube ad space to show users in the United States 90-second animated videos aiming to teach them about propaganda tropes and manipulation techniques. A million adults watched one of the ads for 30 seconds or longer.

The users were taught about tactics such as scapegoating and deliberate incoherence, or the use of conflicting explanations to assert that something is true, so that they could spot lies. Researchers tested some participants within 24 hours of seeing a pre-bunk video and found a 5 percent increase in their ability to recognize misinformation techniques.

One video opens with a mournful piano tune and a little girl grasping a teddy bear, as a narrator says, “What happens next will make you tear up.” Then the narrator explains that emotional content compels people to pay more attention than they otherwise would, and that fear-mongering and appeals to outrage are keys to spreading moral and political ideas on social media.

The video offers examples, such as headlines that describe a “horrific” accident instead of a “serious” one, before reminding viewers that if something they see makes them angry, “someone may be pulling your strings.”

Beth Goldberg, one of the paper’s authors and the head of research and development at Jigsaw, a technology incubator within Google, said in an interview that pre-bunking leaned into people’s innate desire to not be duped.

“This is one of the few misinformation interventions that I’ve seen at least that has worked not just across the conspiratorial spectrum but across the political spectrum,” Ms. Goldberg said.

Jigsaw will start a pre-bunking ad campaign on YouTube, Facebook, Twitter and TikTok at the end of August for users in Poland, Slovakia and the Czech Republic, meant to head off fear-mongering about Ukrainian refugees who entered those countries after Russia invaded Ukraine. It will be done in concert with local fact checkers, academics and disinformation experts.

The researchers don’t have plans for similar pre-bunking videos ahead of the midterm elections in the United States, but they are hoping other tech companies and civil groups will use their research as a template for addressing misinformation.

However, pre-bunking is not a silver bullet. The tactic was not effective on people with extreme views, such as white supremacists, Ms. Goldberg said. She added that elections were tricky to pre-bunk because people had such entrenched beliefs. The effects of pre-bunking last for only a few days to a month.

Groups focused on information literacy and fact-checking have employed various pre-bunking strategies, such as a misinformation-identifying curriculum delivered over two weeks of texts, or lists of bullet points with tips such as “identify the author” and “check your biases.” Online games with names like Cranky Uncle, Harmony Square, Troll Factory and Go Viral try to build players’ cognitive resistance to bot armies, emotional manipulation, science denial and vaccine falsehoods.

A study conducted in 2020 by researchers at the University of Cambridge and at Uppsala University in Sweden found that people who played the online game Bad News learned to recognize common misinformation strategies across cultures. Players in the simulation were tasked with amassing as many followers as possible and maintaining credibility while they spread fake news.

The researchers wrote that pre-bunking worked like medical immunization: “Pre-emptively warning and exposing people to weakened doses of misinformation can cultivate ‘mental antibodies’ against fake news.”

Tech companies, academics and nongovernmental organizations fighting misinformation have the disadvantage of never knowing what lie will spread next. But Prof. Stephan Lewandowsky from the University of Bristol, a co-author of Wednesday’s paper, said propaganda and lies were predictable, nearly always created from the same playbook.

“Fact checkers can only rebut a fraction of the falsehoods circulating online,” Professor Lewandowsky said in a statement. “We need to teach people to recognize the misinformation playbook, so they understand when they are being misled.”

Source: Google Finds ‘Inoculating’ People Against Misinformation Helps Blunt Its Power

Dwivedi: The politics of rage and disinformation — we ignore it at our peril

A warning against complacency:

From 2016 to 2020, I hosted a morning show on a Toronto talk radio station.

Very soon into the gig, a rather discernible and then predictable pattern emerged: other hosts on the station would promote baseless conspiracy theories or blatant misinformation, such as the claims that Justin Trudeau was a George Soros-controlled globalist or that a non-binding motion to condemn Islamophobia would criminalize all criticism of Islam. Then, when the morning show didn’t abide by the same rhetoric, I would see a huge uptick in the volume and vitriol of my email inbox.

One of the more graphic rape threats I received during that time made a reference to burning off my clitoris once I had been gang raped. That morning, I had corrected a false notion circulating in conservative circles, and being bolstered by colleagues at the station, that Canada signing onto the UN Global Compact for Migration would mean Canada would no longer have jurisdiction over its borders or have sovereignty in determining its immigration targets.

It has now been documented that there was a co-ordinated campaign to poison the discourse around the compact by pushing misinformation specifically on the issues of immigration and borders. And it worked. Conservatives in Canada repeated the campaign’s unsubstantiated talking points, and worldwide, debate over the compact reached such a pitch that the coalition government in Belgium effectively collapsed.

Misinformation, disinformation, and conspiracy theories don’t exist in a vacuum, nor do they only live online. They spill out into the real world and impact very real people. And when misinformation, disinformation or conspiracy theories target groups of people already on the receiving end of hate, unsurprisingly, the hate experienced by those groups tends to increase.

In the aftermath of the last federal election, one thing that became abundantly clear was that much of our legacy political media seemed either unwilling or unable to report on the very real threat posed by politicians who use misinformation and conspiracy theories as part of their political shtick to appeal to voters.

The People’s Party of Canada (PPC) garnered just over 800,000 votes in the 2021 election, more than double its vote share in the 2019 election. Certainly, not every single PPC voter is an avowed white supremacist, but there were clear ties between the PPC and extremist groups that went largely ignored by legacy media. For example, columns and news coverage alike failed to acknowledge that the PPC riding president charged with throwing gravel at the prime minister during the 2021 campaign had well-established, explicit ties to the white nationalist movement.

Instead of engaging in substantive discourse on the information ecosystem and political environment that allowed Maxime Bernier, a Harper-era cabinet minister who came close to leading the Conservative Party of Canada, to descend into conspiracy theory-pushing zealotry, our political chattering classes chose to focus on righteous indignation, decrying the import of American-style politics into our Canadian sphere.

Then came the “freedom convoy.” Suddenly, white journalists were regularly on the receiving end of deranged diatribes and threats of violence for reporting basic facts, akin to what their Jewish, Muslim, and BIPOC colleagues had experienced for years. There was a glimmer of hope that we’d collectively start to take these issues more seriously.

That was, however, short-lived as the bulk of legacy political media reverted to their natural resting state of being wilfully blind to the conspiracy theory-laden rage in this country and the politicians who encourage it, all under the guise of objectivity coupled with a healthy dose of normalcy bias.

Bernier has been unable to secure a single seat for his party in the last two federal elections, and so it’s easy to write him and the PPC off as having been wholly rejected by the Canadian electorate.

It will become much harder to do that once Pierre Poilievre officially becomes leader of the Conservative Party of Canada in September. Poilievre is an enthusiastic and unapologetic peddler of conspiracy theories about the World Economic Forum. As both NDP MP Charlie Angus and CPC MP Michelle Rempel Garner have noted, there is a very real danger in mainstreaming conspiracy theories about a secret elite cabal controlling the country.

There are plenty of fundamentally good and decent Conservatives out there, both inside and outside the official party apparatus, who are uncomfortable with the direction their party is taking. However, there is no indication that a CPC with Poilievre at the helm will feel the need to temper its rhetoric. The party will effectively become a better funded, more organized, more mainstream version of Bernier’s PPC.

It’s easy and even tempting to scoff at that notion. But that is being purposefully ignorant of what has happened to conservatism in a lot of places, including right here. When Conservatives point out that Poilievre is the best-placed person to lead the party, they’re not wrong. He very much embodies the modern-day CPC core base: angry, aggrieved, and willing to say anything so long as it dunks on Libs in the process.

The revelations from the Jan. 6 committee hearings in the U.S. should serve as a stark warning to Canadians as to what happens when conspiracy theories and disinformation become mainstreamed by the political establishment. Downplaying or even placating this type of rhetoric poses a fundamental danger to democracy itself. The sooner Canada realizes this, the better off we’ll be.

In the meantime, I look forward to Canadian columnists telling us that we should consider ourselves lucky that we’re not in the same boat as the Americans. After all, our conservatives only actively cheered on and supported the people who were trying to subvert Canadian democracy, they didn’t actually try to subvert it themselves.

Supriya Dwivedi is the director of policy and engagement at the Centre for Media, Technology and Democracy at McGill University and is senior counsel for Enterprise Canada.

Source: The politics of rage and disinformation — we ignore it at our peril

Misinformation and Chinese interference in Canada’s affairs

Deeply concerning, and all parties should support such a registry:

The story started with a private member’s bill introduced by former Conservative MP Kenny Chiu in the spring of 2021 – the Foreign Influence Registry Act (Bill C-282). Its intention was to impose “an obligation on individuals acting on behalf of a foreign principal to file a return when they undertake specific actions with respect to public office holders.” This was a potential way to expose the ties between agents in Canada and the foreign countries they serve. It could also have exposed Canada’s susceptibility to foreign influence, making it more difficult for external states to conduct electoral interference, technological and intellectual property theft, or even surveillance operations like “Operation Fox Hunt” (a global covert operation conducted by Beijing to threaten and repatriate Chinese dissidents to mainland China).

However, the purposes of the bill, which did not pass, became the target of a misinformation campaign. How misinformation on the Foreign Influence Registry Act was spread can be used as a case study for the simple, yet effective tactics commonly deployed in the making of “fake news.”

Examining the disinformation tactics – why are they effective?

Fake news is widely spread in diaspora Chinese communities via social media such as WeChat and WhatsApp. Research indicates that people tend to accept misinformation as fact if it comes from a credible and trustworthy source, and so-called “trust” can also be based on “feelings of familiarity.”

Research indicates we are more likely to believe our friends and family, or even acquaintances, than complete strangers. That familiarity does not necessarily have to be based on previous face-to-face interaction; it can also come in the form of internet communication, especially in this era of technological advancement. So when fake news is tailored to the Chinese community and disseminated through its own communication channels, particularly its own social networks, the acceptance rate of the disinformation increases.

In addition, according to the principle of social proof theory, people tend to endorse a belief that is generally agreed on among the majority of their community, even if they may not believe in such ideology or information in the first place. This may be due to a need to seek social recognition or to prevent being an outcast in the community, especially in an overseas diaspora group. As well, despite the fact that some Chinese immigrants would like to verify the truthfulness of the news, they may not have access to other mainstream, Western media because of a language barrier.

The reliance on internet information often results in the creation of an “echo chamber” that is further exacerbated by the filtering effect of online algorithms. Features such as WeChat Moments, a part of WeChat that is widely used by the Chinese community and functions much like Facebook or Instagram, allow individuals to view one another’s posts. Thus, the Chinese community is trapped in a vicious cycle of reinforced information consumption patterns.

Repeated exposure to the same fake news increases its chances of being considered true. Thus, when a person encounters the same piece of news, regardless of its integrity and credibility, this “increase[s] perceptions of honesty and sincerity as well as agreement with what the person says.” The phenomenon is often called the “illusory truth effect” in psychology. In other words, even if one does not believe the fake news at first, reinforced disinformation increases one’s susceptibility to it.

Combatting a state-sponsored disinformation campaign is never an easy task. Multidisciplinary approaches – including international co-operation and exchange of information between liberal democracies, the establishment of an integrated institution that oversees all cybersecurity intelligence and analysis, planning and executing efforts to counter disinformation, and education and training to increase critical thinking among the public – are vital to improving our resilience and defending our core values against foreign interference and disinformation.

The danger – state-sponsored disinformation campaigns 

The case of Bill C-282 is indeed a salient example of how fake news is tailored and disseminated to a particular target group. However, another common tactic is state-sponsored disinformation. This is difficult to counter because, although it has direct linkages to the central authority, that authority denies responsibility for releasing the misinformation.

Because he was an outspoken politician who advocated for Hong Kong and democracy and heavily criticized Beijing’s violations of human rights, Chiu was sanctioned by the Chinese government, barring him from returning to his birthplace, Hong Kong. Moreover, because of his role on the Subcommittee on International Human Rights (SDIR) and his previous work urging the Canadian government to impose sanctions on China, he was viewed unfavourably as a parliamentarian by the Beijing government.

Therefore, when the disinformation around Bill C-282 was deployed, Chiu’s pro-democracy and “anti-Chinese communist party” background was used to justify the accusation that the proposed Foreign Influence Registry Act was in fact racial discrimination against the Chinese, and that the bill’s prime objective was to “suppress pro-China opinion, as well as to operate surveillance on organizations and individuals” in the overseas Chinese community.

In addition, heavy criticism and attacks were focused not only on Chiu but also on the Conservative Party and its leader, Erin O’Toole, both well known for their hawkish stance on Beijing’s policies. Now that the 2021 federal election is over, it is logical to infer that whoever was responsible for disseminating the fake news had a clear motive in reshaping the narrative in favour of Beijing’s interests.

In spite of the fact that the Chiu incident made only ripples in the recent federal election (he lost his seat as MP), such disinformation campaigns and their potential to manipulate diaspora communities (via psychology and social connections) could generate waves that would drown Canada’s democracy in the future.

Taking a stand against a decision by the Chinese Communist Party does not make the Conservatives or Canada anti-China. Yet the assumption that it does has become a general belief in the Chinese community, especially among those with weaker critical thinking skills and no prior training or experience in dealing with disinformation.

Perhaps more alarming is the fact that these tactics could be deployed against any group in an information and psychological warfare campaign. In short, it has a high potential for interference in Canada’s electoral process by foreign state actors and thus severely threatens the country’s liberal democracy.

Canada remains vulnerable to the security risk posed by foreign interference. As a liberal country that vows to uphold its values of freedom and democracy, Canada should implement specific countermeasures such as Chiu’s proposed act and laws like the U.S. Foreign Agents Registration Act.

At the level of third-party entities and civilians, one countermeasure could be a “foreign influence transparency scheme” similar to the one suggested in the Can Xi Not campaign introduced by Alliance Canada Hong Kong. This may be particularly important for both traditional and new media, which often have the power to shape public debates. In other words, media would retain their freedom of the press but would be required to disclose their foreign sponsorship, if there is any. Last but not least, other approaches to increase citizens’ resilience, as well as the nation’s capability to deter state-sponsored disinformation, should be thoroughly considered and enforced.

Source: https://policyoptions.irpp.org/magazines/january-2022/misinformation-and-chinese-interference-in-canadas-affairs/?mc_cid=9caa3573a1&mc_eid=86cabdc518

Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

More on Facebook and Zuckerberg’s failure to act against mis- and dis-information:

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform since the pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others—including tackling “manufactured virality”—were early concepts that employees were seeking approval from Zuckerberg to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited—content like the altered Pelosi video—the more likely the link in the post was to be clicked by users. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Since the content with more engagement was placed more highly in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying that they had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Source: Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

How to Reach the Unvaccinated: To counter online misinformation, it helps to knock on doors.

Of note, likely similar in Canada:

What does it take to get credible information about the coronavirus vaccine, and the vaccines themselves, to more people?

My colleague Sheera Frenkel spoke to experts and followed a community group as it went door to door in an ethnically diverse neighborhood in Northern California to understand the reasons behind the low vaccination rates for Black and Hispanic Americans compared with non-Hispanic white people.

What Sheera found, as she detailed in an article on Wednesday, was how online vaccine myths reinforce people’s fears and the ways that personal outreach and easier access to doses can make a big difference.

Shira: What surprised you from your reporting?

Sheera: One question I was trying to answer was whether the incorrect narratives floating around online about the vaccines — that they change people’s DNA or are a means of government control — were reaching Black and Hispanic communities and other people of color in the real world. I heard false information like that firsthand. It was eye opening.

The other surprise was how effective it was for someone to stand on a person’s doorstep and talk about their own experience getting a coronavirus vaccine and answer questions. The outreach group talked to each household for half an hour or longer sometimes. That may make more of a difference than any online health campaign ever could.

But it’s laborious to go door to door. Can reliable information ever travel as far and fast as misinformation?

Internet platforms amplify misinformation, and countering it isn’t simple. It takes more than a celebrity posting a vaccine selfie on Instagram.

Are we overstating the impact of vaccine hesitancy? The pediatrician Rhea Boyd recently wrote in our Opinion section that the primary barrier to Covid-19 vaccinations among Black Americans is a lack of access, not wariness about getting the shot.

It’s both.

Two things struck me from my reporting. First, false vaccine information is persuasive because it builds on something that people know to be true: The medical community has mistreated people of color, and the bias continues. And second, vaccine hesitancy is different in each community.

That makes reaching Black Americans different than reaching new immigrants who are reading articles in Vietnamese or Chinese that make them concerned about vaccine safety. It’s an opportunity for community leaders to address what’s keeping people who trust them from getting vaccinated.

You’ve written about Russian propaganda in Latin America that fanned concerns about European and American coronavirus vaccines. Is that also reaching people in the United States?

Yes. Two Russian state-backed media networks, Sputnik and Russia Today, have among the most popular Spanish-language Facebook pages in the world. Their news reaches Spanish speakers in the United States.

I heard people ask in my reporting, Why should they get an American vaccine when the Russian one is better? (Those articles tend to cite real statistics but in misleading contexts.) I asked one man I met, George Rodriguez, where he had read that, and we figured out that it was from one of those Russian news sites.

What has been effective at increasing the coronavirus vaccination rates among Black and Latino Americans?

It seems effective to hold walk-in vaccination clinics. People can show up, ask questions they have and get a shot.

What about Republicans? Surveys show that they are among the wariest Americans about coronavirus vaccines.

There have been concerns among some Republicans that people will be forced to get vaccinated, but that isn’t happening. 

It’s clear that among Republicans and other groups with vaccine hesitancy, once we know more people who are getting vaccinated, we’re more willing to do it, too.

How do you see this moving forward?

In just the last few weeks, I’ve gotten more optimistic about closing the vaccination gap. There have been huge strides in reaching people, getting those walk-in vaccination clinics open or taking vaccines to people, and addressing people’s concerns.

Source: https://www.nytimes.com/2021/03/10/technology/vaccine-misinformation-access.html