Misinformation and Chinese interference in Canada’s affairs

Deeply concerning, and all parties should support such a registry:

The story started with a private member’s bill introduced by former Conservative MP Kenny Chiu in the spring of 2021 – the Foreign Influence Registry Act (Bill C-282). Its intention was to impose “an obligation on individuals acting on behalf of a foreign principal to file a return when they undertake specific actions with respect to public office holders.” This was a potential way to expose the relationships between agents in Canada and the foreign countries they serve. It could also have exposed Canada’s susceptibility to foreign influence, making it more difficult for external states to conduct electoral interference, technological and intellectual-property theft, or even surveillance operations such as “Operation Fox Hunt” (a global covert operation conducted by Beijing to threaten Chinese dissidents and repatriate them to mainland China).

However, the purposes of the bill, which did not pass, became the target of a misinformation campaign. The way misinformation about the Foreign Influence Registry Act spread can serve as a case study in the simple yet effective tactics commonly deployed in the making of “fake news.”

Examining the disinformation tactics – why are they effective?

Fake news is widely spread in diaspora Chinese communities via social media such as WeChat and WhatsApp. Research indicates that people tend to accept misinformation as fact if it comes from a credible and trustworthy source, and so-called “trust” can also be based on “feelings of familiarity.”

Research indicates we are more likely to believe our friends and family, or even acquaintances, than complete strangers. That familiarity does not necessarily have to be based on previous face-to-face interaction; it can also come in the form of internet communication, especially in the new era of technological advancement. So, when fake news is tailored to the Chinese community and disseminated through its communication channels, particularly via its own social networks, the acceptance rate of disinformation increases.

In addition, according to the principle of social proof, people tend to endorse a belief that is generally agreed on among the majority of their community, even if they did not believe in such ideology or information in the first place. This may be due to a need for social recognition or a desire to avoid becoming an outcast in the community, especially in an overseas diaspora group. As well, even when some Chinese immigrants would like to verify the truthfulness of the news, they may not have access to mainstream Western media because of a language barrier.

The reliance on internet information often results in the creation of an “echo chamber” that is further exacerbated by the filtering effect of online algorithms. Features such as “WeChat Moments,” a feed within WeChat similar to those of Facebook and Instagram and widely used in the Chinese community, allow individuals to view one another’s posts. Thus, the Chinese community becomes trapped in a vicious cycle of reinforced information-consumption patterns.

Repeated exposure to the same fake news increases its chances of being considered true. Thus, when a person encounters the same piece of news again, regardless of its integrity and credibility, this “increase[s] perceptions of honesty and sincerity as well as agreement with what the person says.” The phenomenon is known in psychology as the “illusory truth effect.” In other words, even if one does not believe the fake news at first, reinforced disinformation increases one’s susceptibility to it.

Combatting a state-sponsored disinformation campaign is never an easy task. Multidisciplinary approaches are vital to improve our resilience and defend our core values against foreign interference and disinformation: international co-operation and exchange of information between liberal democracies; the establishment of an integrated institution that oversees all cybersecurity intelligence and analysis; the planning and execution of efforts to counter disinformation; and education and training to strengthen the public’s critical thinking.

The danger – state-sponsored disinformation campaigns 

The case of Bill C-282 is a salient example of how fake news is tailored to and disseminated within a particular target group. However, another common tactic is state-sponsored disinformation. This is difficult to counter because, even though it has direct linkages to the central authority, that authority simply denies responsibility for releasing the misinformation.

Because he was an outspoken politician who advocated for Hong Kong and democracy and heavily criticized Beijing’s violations of human rights, Chiu was sanctioned by the Chinese government and barred from returning to his birthplace, Hong Kong. Moreover, due to his role on the Subcommittee on International Human Rights (SDIR) and his previous work urging the Canadian government to impose sanctions on China, he was, as a parliamentarian, viewed unfavourably by the Beijing government.

Therefore, when the disinformation around Bill C-282 was deployed, Chiu’s pro-democracy and “anti-Chinese communist party” background was used to justify the accusation that the proposed Foreign Influence Registry Act amounted to racial discrimination against the Chinese, and that the bill’s prime objective was to “suppress pro-China opinion, as well as to operate surveillance on organizations and individuals” in the overseas Chinese community.

In addition, heavy criticism and attacks focused not only on Chiu, but also on the Conservative Party and its leader, Erin O’Toole, well known for their hawkish stance against Beijing’s policies. Now that the 2021 federal election is over, it is logical to infer that whoever was responsible for disseminating the fake news had a clear motive: reshaping the narrative in favour of Beijing’s interests.

In spite of the fact that the Chiu incident made only ripples in the recent federal election (he lost his seat as MP), such disinformation campaigns and their potential to manipulate diaspora communities (via psychology and social connections) could generate waves that would drown Canada’s democracy in the future.

Taking a stand against a decision by the Chinese Communist Party does not make the Conservatives or Canada anti-China. Yet the assumption that it does has driven a general belief to the contrary in the Chinese community, especially among those with weak critical-thinking skills and no prior training or experience in dealing with disinformation.

Perhaps more alarming is the fact that these tactics could be deployed against any group in an information and psychological warfare campaign. In short, they have a high potential to enable interference in Canada’s electoral process by foreign state actors, and thus severely threaten the country’s liberal democracy.

Canada remains vulnerable to the security risks posed by foreign interference. As a liberal country that vows to uphold its values of freedom and democracy, Canada should implement specific countermeasures such as Chiu’s proposed act and laws like the U.S. Foreign Agents Registration Act.

At the third-party and civilian levels, one countermeasure could be a “foreign influence transparency scheme” similar to the one suggested in the Can Xi Not campaign introduced by Alliance Canada Hong Kong. This may be particularly important for both traditional and new media, which often have the power to shape public debates. In other words, media would retain their freedom of the press, but would be required to disclose any foreign sponsorship. Last but not least, other approaches to increase citizens’ resilience, as well as the nation’s capability to deter state-sponsored disinformation, should be thoroughly considered and enforced.

Source: https://policyoptions.irpp.org/magazines/january-2022/misinformation-and-chinese-interference-in-canadas-affairs/

Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

More on Facebook and Zuckerberg’s failure to act against mis- and dis-information:

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform, since such pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.
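The disproportionality implied by those two figures can be checked with simple arithmetic. A minimal sketch, using only the 64% and 19% shares cited from the internal presentation:

```python
# Figures cited in the internal Facebook presentation (April 2020):
misinfo_view_share = 0.64  # share of page-related misinformation views from "manufactured virality" pages
total_view_share = 0.19    # share of ALL page-related views from those same pages

# How over-represented these pages are in misinformation relative to their overall reach:
overrepresentation = misinfo_view_share / total_view_share
print(f"~{overrepresentation:.1f}x over-represented in misinformation views")  # roughly 3.4x
```

In other words, manufactured-virality pages generated misinformation views at roughly three and a half times the rate their overall audience share would predict, which is the quantitative core of the employees’ pitch.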

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others, including tackling “manufactured virality,” were early concepts that employees were seeking Zuckerberg’s approval to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited—content like the altered Pelosi video—the more likely the link in the post was to be clicked by users. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Since the content with more engagement was placed more highly in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying that they had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Source: Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

How to Reach the Unvaccinated: To counter online misinformation, it helps to knock on doors.

Of note, likely similar in Canada:

What does it take to get credible information about the coronavirus vaccine, and the vaccines themselves, to more people?

My colleague Sheera Frenkel spoke to experts and followed a community group as it went door to door in an ethnically diverse neighborhood in Northern California to understand the reasons behind the low vaccination rates for Black and Hispanic Americans compared with non-Hispanic white people.

What Sheera found, as she detailed in an article on Wednesday, was how online vaccine myths reinforce people’s fears and the ways that personal outreach and easier access to doses can make a big difference.

Shira: What surprised you from your reporting?

Sheera: One question I was trying to answer was whether the incorrect narratives floating around online about the vaccines — that they change people’s DNA or are a means of government control — were reaching Black and Hispanic communities and other people of color in the real world. I heard false information like that firsthand. It was eye opening.

The other surprise was how effective it was for someone to stand on a person’s doorstep and talk about their own experience getting a coronavirus vaccine and answer questions. The outreach group talked to each household for half an hour or longer sometimes. That may make more of a difference than any online health campaign ever could.

But it’s laborious to go door to door. Can reliable information ever travel as far and fast as misinformation?

Internet platforms amplify misinformation, and countering it isn’t simple. It takes more than a celebrity posting a vaccine selfie on Instagram.

Are we overstating the impact of vaccine hesitancy? The pediatrician Rhea Boyd recently wrote in our Opinion section that the primary barrier to Covid-19 vaccinations among Black Americans is a lack of access, not wariness about getting the shot.

It’s both.

Two things struck me from my reporting. First, false vaccine information is persuasive because it builds on something that people know to be true: The medical community has mistreated people of color, and the bias continues. And second, vaccine hesitancy is different in each community.

That makes reaching Black Americans different than reaching new immigrants who are reading articles in Vietnamese or Chinese that make them concerned about vaccine safety. It’s an opportunity for community leaders to address what’s keeping people who trust them from getting vaccinated.

You’ve written about Russian propaganda in Latin America that fanned concerns about European and American coronavirus vaccines. Is that also reaching people in the United States?

Yes. Two Russian state-backed media networks, Sputnik and Russia Today, have among the most popular Spanish-language Facebook pages in the world. Their news reaches Spanish speakers in the United States.

I heard people ask in my reporting, Why should they get an American vaccine when the Russian one is better? (Those articles tend to cite real statistics but in misleading contexts.) I asked one man I met, George Rodriguez, where he had read that, and we figured out that it was from one of those Russian news sites.

What has been effective at increasing the coronavirus vaccination rates among Black and Latino Americans?

It seems effective to hold walk-in vaccination clinics. People can show up, ask questions they have and get a shot.

What about Republicans? Surveys show that they are among the wariest Americans about coronavirus vaccines.

There have been concerns among some Republicans that people will be forced to get vaccinated, but that isn’t happening. 

It’s clear that among Republicans and other groups with vaccine hesitancy, once we know more people who are getting vaccinated, we’re more willing to do it, too.

How do you see this moving forward?

In just the last few weeks, I’ve gotten more optimistic about closing the vaccination gap. There have been huge strides in reaching people, getting those walk-in vaccination clinics open or taking vaccines to people, and addressing people’s concerns.

Source: https://www.nytimes.com/2021/03/10/technology/vaccine-misinformation-access.html