Dwivedi: The politics of rage and disinformation — we ignore it at our peril

A warning against complacency:

From 2016 to 2020, I hosted a morning show on a Toronto talk radio station.

Very soon into the gig, a rather discernible and then predictable pattern emerged: other hosts on the station would promote baseless conspiracy theories or blatant misinformation, such as Justin Trudeau being a George Soros-controlled globalist or that a non-binding motion to condemn Islamophobia would criminalize all criticism of Islam. Then, when the morning show didn’t abide by the same rhetoric, I would see a huge uptick in both the volume and the vitriol of my email inbox.

One of the more graphic rape threats I received during that time made a reference to burning off my clitoris once I had been gang raped. That morning, I had corrected a false notion circulating in conservative circles, and being bolstered by colleagues at the station, that Canada signing onto the UN Global Compact for Migration would mean Canada would no longer have jurisdiction over its borders or have sovereignty in determining its immigration targets.

It has now been documented that there was a co-ordinated campaign to poison the discourse around the compact by pushing misinformation specifically on the issues of immigration and borders. And it worked. Conservatives in Canada repeated the campaign’s unsubstantiated talking points, and worldwide, debate over the compact reached such a pitch that the coalition government in Belgium effectively collapsed.

Misinformation, disinformation, and conspiracy theories don’t exist in a vacuum, nor do they only live online. They spill out into the real world and impact very real people. And when misinformation, disinformation or conspiracy theories target groups of people already on the receiving end of hate, unsurprisingly, the hate experienced by those groups tends to increase.

In the aftermath of the last federal election, one thing that became abundantly clear was that much of our legacy political media seemed either unwilling or unable to report on the very real threat posed by politicians who use misinformation and conspiracy theories as part of their political shtick to appeal to voters.

The People’s Party of Canada (PPC) garnered just over 800,000 votes in the 2021 election, more than double its vote share in the 2019 election. Certainly, not every single PPC voter is an avowed white supremacist, but there were clear ties between the PPC and extremist groups that went largely ignored by legacy media. For example, columns and news coverage alike failed to acknowledge that the PPC riding president charged with throwing gravel at the prime minister on the 2021 campaign trail had well-established, explicit ties to the white nationalist movement.

Instead of engaging in substantive discourse on the information ecosystem and political environment that allowed Maxime Bernier, a Harper-era cabinet minister and near-leader of the Conservative Party of Canada, to descend into conspiracy theory-pushing zealotry, our political chattering classes chose instead to focus on righteous indignation, decrying the import of American-style politics into our Canadian sphere.

Then came the “freedom convoy.” Suddenly, white journalists were regularly on the receiving end of deranged diatribes and threats of violence for reporting basic facts, akin to what their Jewish, Muslim, and BIPOC colleagues had experienced for years. There was a glimmer of hope that we’d collectively start to take these issues more seriously.

That was, however, short-lived as the bulk of legacy political media reverted to their natural resting state of being wilfully blind to the conspiracy theory-laden rage in this country and the politicians who encourage it, all under the guise of objectivity coupled with a healthy dose of normalcy bias.

Bernier has been unable to secure a single seat for his party in the last two federal elections, and so it’s easy to write him and the PPC off as having been wholly rejected by the Canadian electorate.

It will become much harder to do that once Pierre Poilievre officially leads the Conservative Party of Canada in September. Poilievre is an enthusiastic and unapologetic peddler of conspiracy theories about the World Economic Forum. As both NDP MP Charlie Angus and CPC MP Michelle Rempel Garner have noted, there is a very real danger in mainstreaming conspiracy theories about a secret elite cabal controlling the country.

There are plenty of fundamentally good and decent Conservatives out there, both inside and outside the official party apparatus, who are uncomfortable with the direction their party is taking. However, there is no indication that a CPC with Poilievre at the helm will feel the need to temper its rhetoric. The party will effectively become a better funded, more organized, more mainstream version of Bernier’s PPC.

It’s easy and even tempting to scoff at that notion. But that is being purposefully ignorant of what has happened to conservatism in a lot of places, including right here. When Conservatives point out Poilievre is the best-placed person to lead the party, they’re not wrong. He very much embodies the modern-day CPC core base: angry, aggrieved, and willing to say anything so long as it dunks on Libs in the process.

The revelations from the Jan. 6 committee hearings in the U.S. should serve as a stark warning to Canadians as to what happens when conspiracy theories and disinformation become mainstreamed by the political establishment. Downplaying or even placating this type of rhetoric poses a fundamental danger to democracy itself. The sooner Canada realizes this, the better off we’ll be.

In the meantime, I look forward to Canadian columnists telling us that we should consider ourselves lucky that we’re not in the same boat as the Americans. After all, our conservatives only actively cheered on and supported the people who were trying to subvert Canadian democracy, they didn’t actually try to subvert it themselves.

Supriya Dwivedi is the director of policy and engagement at the Centre for Media, Technology and Democracy at McGill University and is senior counsel for Enterprise Canada.

Source: The politics of rage and disinformation — we ignore it at our peril

Russian-language propaganda stations spread hate in Canada for Ukrainians, say critics

Of note:

A group of Russian-language journalists in Canada are demanding the federal government remove from this country’s airwaves a pair of Russian-language television channels the journalists say spread hate and propaganda.

Last week, Canadian television providers pulled English-language network RT, formerly known as Russia Today, from their services. But Russian-language channels, RTR Planeta and Channel One Russia, are still available and spreading “weapons grade war-mongering,” says a letter from the Canadian Association of Russian Language Media.

“This aggressive propaganda is used to justify Putin’s invasion, spread anti-Ukrainian hate and radicalize parts of the Russian speaking community in Canada,” reads the letter, signed by 18 journalists from a number of outlets including Russian Canadian Broadcasting, Russian Infotrade LTD and Russianweek.ca.

“Even though we are fully committed and desperately trying to deliver to our viewers, listeners and readers the truth about unfolding events, in accordance with the international journalistic practises and standards, our voices are simply no match to the 24/7 Kremlin war propaganda machine.”

The organization has sent the letter to Prime Minister Justin Trudeau, Deputy Prime Minister Chrystia Freeland and Heritage Minister Pablo Rodriguez. It asks that a directive be issued to the Canadian Radio-television and Telecommunications Commission (CRTC) to pull all channels approved, controlled or owned by the Russian state from public airwaves.

RTR Planeta, an international service of Russian state-owned broadcaster VGTRK, and Channel One Russia are a source for Russians around the world of news and commentary in their language. However, the channels deliver mistruths more than anything, argues Alla Kadysh, a Russian-language radio and podcast host in Toronto who signed the letter.

“It’s been going on for years; it’s basically lies and projections,” Kadysh said of RTR Planeta, whose recent broadcasts have not been seen by the Star. “It’s basically hate-mongering. It’s got to the point where you can’t watch it three or four minutes, you’d go crazy.”

Earlier this week, Canadian television operators announced they were removing RT, the English channel, from their channel listings. That state-backed English-language news network has been accused by analysts of spreading disinformation meant to undermine democracies around the world.

But RTR Planeta and Channel One Russia are still carried by Rogers and Bell, according to the Canadian companies’ websites. (Neither Bell nor Rogers answered requests for comment.)

Critics of the channels say RTR Planeta is particularly sinister. Kadysh said she is concerned it is radicalizing its viewers, as presenters frequently call Ukrainians “Nazis” and report false news about the Russian invasion of Ukraine. She fears it is stoking hatred that may lead to violence here in Canada as the war continues.

She said many in the Russian community have bought into the rhetoric.

“I talk to people like this every single day,” Kadysh said. “They don’t believe anything you say because they are already conditioned to believe only Russian propaganda. You talk to these people and there’s something wrong with them.”

RTR Planeta’s signal hasn’t been available since last week for unknown reasons; a message on the screen blames technical difficulties. The channel’s website has also been down.

The Star has made attempts to speak to the channel’s representatives, but has not been successful.

Marcus Kolga, a disinformation expert with the Macdonald-Laurier Institute, shares Kadysh’s concerns. Often Russian news programming has engaged in a nationalistic stance meant to keep Russians abroad loyal, and uses distorted news as part of the approach, he said, adding that the channels are a major source for news.

“The shows that they have on there are using extremely inflammatory language to describe the Ukrainians today,” Kolga said, referring to RTR Planeta. “They’re calling them dogs, dogs that need to be put down, this is the kind of language you hear where governments and organizations are about to engage in genocide.”

Last week, the Star asked the Department of Canadian Heritage if it planned to address the concerns about RTR Planeta and was told in response that the government was requesting that the CRTC investigate RT, the English and French channels removed by Canadian satellite-TV providers earlier this week.

“We will continue to listen and be led by affected communities,” wrote David Larose of Canadian Heritage media relations. He pointed out the CRTC has said in a statement about its preliminary view of RT that the channel’s programming “may not be consistent with the Commission’s broadcasting regulations, in particular, the abusive comment provisions.”

The Star pointed out the question was about RTR Planeta, the Russian-language channel, and got no response. Some countries have already taken the step of banning RTR Planeta.

Last week, Lithuania banned the broadcaster along with a number of other Russian stations. A sizable portion of the country’s population speaks Russian, causing the government concern.

Lithuania’s ambassador to Canada, Darius Skusevicius, told the Star the Lithuanian government didn’t want the country subjected to the “lies” of Russian television.

“We don’t want our population to get poisoned,” Skusevicius said. “Simple.”

He said during the invasion the network has reported the Russian military is being welcomed with open arms in Ukraine, even as the country maintains a ferocious resistance to Moscow’s troops.

“It’s just unacceptable, it’s a continuation of the glorification of Putin,” he said of the programming.

Meanwhile on Friday, Russia passed a draft law threatening 15 years in prison for those publishing information counter to Moscow’s version of events in Ukraine.

State media in Russia refers to its attack on Ukraine as a “special military operation” instead of calling it a “war” or “invasion.” Moscow also blocked Twitter and Facebook from the Russian internet.

The move was no surprise to Kolga, who pointed out Russian leader Vladimir Putin has been working to silence dissent against his rule in the country for years.

Kolga not only wants the network removed from the airwaves, but said Canada needs to apply sanctions to dissuade people from participating in it. He said it’s not a matter of free speech, but one of national security.

Source: Russian-language propaganda stations spread hate in Canada for Ukrainians, say critics

Facebook’s language gaps weaken screening of hate, terrorism

Any number of good articles on the “Facebook papers” and its unethical and dangerous business practices:

As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were reviewed by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.

But when it comes to Arabic content moderation, the company said, “We still have more work to do. … We conduct research to better understand this complexity and identify how we can improve.”

In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.

Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.

In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.

In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.

The Moroccan colloquial Arabic, for instance, includes French and Berber words, and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Quran. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.

Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.

Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.

“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.

In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”

“We know our systems are not perfect,” it added.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.

Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.

Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”

Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.

“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.

Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.

“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.

In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.

When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.

When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.

In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.

Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.

Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77% of the time, one document said.

“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90% of the time, a report said.

Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.

“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”

Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.

“The stereotype that Arabic is one entity is a major problem,” said Enam al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.

Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.

Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.

During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.

In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

“The repetition of false positives creates a huge drain of resources,” it said.

In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.

Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.

But the company also lamented that “there is not one clear mitigation strategy.”

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”

Source: Facebook’s language gaps weaken screening of hate, terrorism

Alt-right finds new partners in hate on China’s internet

Of interest:

In the early days of the 2016 US election campaign, Fang Kecheng, a former journalist at the liberal-leaning Chinese newspaper Southern Weekly and then a PhD student at the University of Pennsylvania, began fact-checking Donald Trump’s statements on refugees and Muslims on Chinese social media, hoping to provide additional context to the reporting of the presidential candidate back home in China. But his effort was quickly met with fierce criticism on the Chinese internet.

Some accused him of being a “white left” – a popular insult for idealistic, leftwing and western-oriented liberals; others labelled him a “virgin”, a “bleeding heart” and a “white lotus” – demeaning phrases that describe do-gooders who care about the underprivileged – as he tried to defend women’s rights.

“It was absurd,” Fang, now a journalism professor at the Chinese University of Hong Kong, told the Observer. “When did caring for disadvantaged groups become the reason for being scolded? When did social Darwinism become so justified?”

Around the time of Trump’s election victory, he began to notice striking similarities between the “alt-right” community in America and a group of social media users posting on the Chinese internet.

“Like their counterparts in the English-speaking sphere, this small but growing community also rejects the liberal paradigm and identity-based rights – similar to what is called ‘alt-right’ in the US context,” Fang noted, adding that in the Chinese context, the discourse often comprises what he considers anti-feminist ideas, xenophobia, Islamophobia, racism and Han-ethnicity chauvinism.

Throughout the Trump presidency and immediately after the Brexit vote in 2016, researchers on both sides of the Atlantic began carefully studying the rise of the alt-right in English-speaking cyberspace. On the Chinese internet, a similar trend was taking place at the same time, with some observing that the Chinese online group would additionally often strike a nationalistic tone and call for state intervention.

In a recent paper that he co-authored with Tian Yang, a University of Pennsylvania colleague, Fang analysed nearly 30,000 alt-right posts on the Chinese internet. They discovered that the users share not only domestic alt-right posts, but also global ones. Many of the issues, they found, were brought in by US-based Chinese immigrants feeling disillusioned by the progressive agenda set by the American left.

Not all scholars are comfortable with the description “alt-right”. “I’m sceptical about applying categories lifted from US politics to the Chinese internet,” said Sebastian Veg of the School of Advanced Studies in Social Sciences in Paris. “Many former ‘liberal’ intellectuals in China or from China are extremely critical of Black Lives Matter, the refugee crisis, political correctness, etc. They are hardly populists, but on the contrary, a regime-critical elite. Are they alt-right?”

Dylan Levi King, a Tokyo-based writer on the Chinese internet, first noticed this loosely defined group during the 2015 European migrant crisis. “Whether you call it populist nationalism or alt-right,” he said, “if you paid close attention to what they talked about back then, you find them borrowing similar talking points from the European ‘alt-right’ community, such as the phrase ‘the great replacement’, or the alleged ‘no-go zones’ for non-Muslims in European cities, which was also used by Fox News.” 

Shortly after the migrant crisis broke out, Liu Zhongjing – a Chinese translator and commentator who built a name through his staunch anti-leftist and anti-progressive stance – was asked about his view on the way Germany handled it.

“A new kind of political correctness has taken shape in Germany, and many things can no longer be mentioned,” he observed. Liu also quoted Thilo Sarrazin, a controversial figure who some say is the “flag-bearer for Germany’s far-right”, to support his argument.

On 20 June 2017, World Refugee Day, when the United Nations High Commissioner for Refugees posted on Weibo about the plight of displaced people with the hashtag #StandWithRefugees, thousands of internet users overwhelmed its account with negative comments.

The UNHCR’s goodwill ambassador – Chinese actress Yao Chen – had to clarify that she had no intention of suggesting that China should take part in accepting refugees.

In the same year, another post appeared on popular social media site Zhihu, with the headline “Sweden: the capital of sexual assault in Europe”. “But,” author Wu Yuting wrote, “the cruel reality is that, with the large number of Muslims flocking into Sweden, they also brought in Islam’s repression and damage against women, and destroyed gender equality in Swedish society.”

Islamophobia is the main topic among China’s alt-right, Fang and Yang’s research found.

“By framing the policies as biased, they interpreted them as a source of inequality and intended to trigger resentment by presenting Han as victims in their narrative,” Fang said. “They portrayed a confrontational relationship between Han – the dominant ethnic group in China – and other ethnic minorities, especially the two Muslim minorities – the Hui and the Uyghurs.” He added: “It’s exactly the same logic and mainstream narrative deployed in the alt-right in the US: poor working-class white men being taken advantage of by immigrants and by minorities.”

Other researchers went a step further. In a 2019 paper, Zhang Chenchen of Queen’s University Belfast analysed 1,038 Chinese social media posts and concluded that, by criticising western “liberal elites”, the rightwing discourse on the Chinese internet constructed a Chinese ethno-racial identity against an “inferior” non-Western other.

This is “exemplified by non-white immigrants and Muslims, with racial nationalism on the one hand; and formulates China’s political identity against the ‘declining’ Western other with realist authoritarianism on the other,” she wrote.

Anti-feminism is another issue frequently discussed by the Chinese online alt-right. Last December, 29-year-old Chinese standup comedian Yang Li faced a backlash over a question she posed in her show. “Do men have a bottom line?” she quipped.

The line brought laughter from her live audience, but anger among many on the internet. Although Yang does not publicly identify as a feminist, many accused her of pushing a feminist agenda, with some calling her a “feminist militant” and a “female boxer” acting “in an attempt to gain more privilege over men,” as one critic put it. “Feminist bitch,” another scolded.

And in April, Xiao Meili, a well-known Chinese feminist activist, received a slew of abuse after she posted online a video of a man throwing hot liquid at her after she asked him to stop smoking. Some of the messages called her and others – without credible evidence – “anti-China” and “foreign forces”. Others said: “I hope you die, bitch”, or “Little bitch, screw the feminists”.

“When the Xiao Meili incident happened, a lot of feminists were being trolled, including myself,” said one of the artists who later collected more than 1,000 of the abusive messages posted to feminists and feminist groups and turned them into a piece of artwork. “We wanted to make the trolling words into something that could be seen, touched, to materialise the trolling comments and amplify the abuse of what happens to people online,” she said.

Xiao blamed social media companies for not doing enough to stop such vitriol, even though China has the world’s most sophisticated internet filtering system. “Weibo is the biggest enabler,” she told a US-based website in April. “It treats the incels as if they are the royal family.” 

But Michel Hockx, director of the Liu Institute for Asia and Asian Studies at the University of Notre Dame in the US, thinks this is because such speech does not threaten the government. “They don’t necessarily challenge the ruling party and spill over to collective action,” he said, “so there’s less of an incentive for social media companies to remove them. They are not told to do so by the authorities.”

King says that Chinese state censors also walk a fine line in monitoring such content: “The ‘alt-right’ tend to be broadly supportive of the Communist party line on most things. They do see China as a bulwark against the corrosive power of western liberalism.”

But their online rhetoric has offline consequences, he cautioned: “Things like ethnic resentment are something just below the surface, which can’t be allowed to fester. When it explodes, it’s very ugly.”

Source: https://www.theguardian.com/world/2021/sep/11/alt-right-finds-new-partners-in-hate-on-chinas-internet

Delacourt: It’s time to talk about this rage against Justin Trudeau

Good commentary. It may also be time to call out some of the enablers and fomenters, Rebel and “True” North, given their frequent invective (at the time of writing, Sunday afternoon, neither had covered or commented on the rage). Both Erin O’Toole and Jagmeet Singh strongly condemned the mob’s actions, but I did not see anything from the Greens or the Bloc. The PPC’s tweet:

<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Trudeau doesn’t respect democracy. He uses billions in taxpayer money to overtly buy votes. He violates the Constitution. He demonizes opponents. He curtails our rights. He’s a wannabe fascist tyrant. But yeah, protesters yelling at him are the problem. <a href="https://t.co/uaUbTmP9gd">https://t.co/uaUbTmP9gd</a></p>&mdash; Maxime Bernier (@MaximeBernier) <a href="https://twitter.com/MaximeBernier/status/1431986988765360133?ref_src=twsrc%5Etfw">August 29, 2021</a></blockquote>

It’s time to talk about this rage against Justin Trudeau — not just the mob spectacles on the campaign trail, but all the toxic strains of that fury simmering through Canadian politics for some time now.

The incredible scene of Trudeau haters in Bolton, Ont., their faces contorted in gleeful rage, has elevated this phenomenon from an ugly undercurrent to a force that needs to be reckoned with in the current election campaign.

On one level, what was on display was deeply and intensely personal against the man who has been prime minister of Canada through six challenging years for the country. But as Trudeau himself suggested after the incident on Friday night, it is also a boiling cauldron of populist discontent, fuelled by a pandemic — and, I would add, stoked by the grievous state of the political culture.

“We all had a difficult year and those folks out protesting, they had a difficult year too, and I know and I hear the anger, the frustration, perhaps the fear, and I hear that,” Trudeau said after his campaign had to flee the mob.

There is a chance here, not just for Trudeau, but for all politicians and voters in Canada, to look this toxicity in the eye and take the full measure of it right now, in a way the United States has failed to do, even after the storming of the Capitol earlier this year. The disgrace in Bolton on Friday night wasn’t of the same magnitude, but it comes from a similar place — the point where political disruption crosses into all-out eruption.

All politicians rile up some segments of the population and the RCMP isn’t accompanying them just to err on the side of caution. No one should need reminding that in July 2020, a military reservist named Corey Hurren crashed his truck full of weapons through the gates of Rideau Hall, looking to do damage to Trudeau. This was a day after a rally on Parliament Hill calling for Trudeau’s arrest for treason.

The threats are real, and they have been for as long as I’ve been covering federal politics. One of my first out-of-town assignments after being posted to the capital, in fact, was a rally in New Brunswick where Mila Mulroney, wife of prime minister Brian Mulroney, was jabbed in the ribs by a protester’s sign.

But the poisonous rage that is directed toward Trudeau on a daily basis, churning through social media 24/7, landing as flaming parcels every day in reporters’ email inboxes, and now manifesting itself as a high-level security threat in small-town Ontario, is another order altogether. It is woven with threads of racism, xenophobia, sexism, conspiracy theories and COVID/vaccine denial. It has been emboldened by a small cottage industry of commentary that portrays a “woke” Trudeau as the destroyer of all that holds the old Canada together.

Conservative Leader Erin O’Toole couldn’t have been more clear on Saturday after the incident in Bolton, where some of his party’s supporters were participants in the cursing and howling throng. Those people, O’Toole said, ”will no longer be involved with our campaign, full stop. I expect professionalism, I expect respect. I respect my opponents.”

Yet on the very eve of the current election campaign, O’Toole’s own party put out a video depicting Trudeau as a spoiled, flouncing girl having a temper tantrum. This wasn’t some rogue partisan cobbling together a video in his parents’ basement. It appeared on the official Twitter account of the Conservative party (it has since been taken down for copyright reasons).

And this business of feminizing Trudeau to demonize him has deep, enduring roots. (Note to email correspondents: calling him “Justine” is neither original nor witty.) For years, Trudeau haters have been spewing the same kind of bile they usually hurl at women politicians: mocking his hair and his family, and casting any success as the product of the smarter men around him.

There’s a direct line between that mockery and the taunting hordes on the campaign trail; the sneering contempt.

The immediate questions revolve around whether Bolton will help or hurt Trudeau — is this a turning point, when the Liberal leader gets to cast himself as the underdog/victim? Is it like the moment in 1993, when Jean Chrétien stood up to Conservatives’ mockery of his face?

There’s an old Jerry Seinfeld joke about those detergent ads you see on TV. “If you’ve got a T-shirt with a bloodstain all over it, maybe laundry isn’t your biggest problem.” All the speculation about how the Bolton incident will affect the election campaign feels a bit to me like seeing the problem as laundry. It’s not just about politicians cleaning up their strategic act for this election, but what is causing the stain on the political fabric of this country.

The faces of those protesters, accompanied by children chanting foul-mouthed curses at a prime minister, are not a sight that can be bleached from the memory of this campaign.

To paraphrase that Seinfeld joke, if you have mobs of citizens openly threatening harm to Trudeau, the biggest problem isn’t Trudeau.

Source: It’s time to talk about this rage against Justin Trudeau

To tackle hate-motivated crimes, Canada’s justice system needs to change

Of note, even if the proposed solutions are modest and unlikely by themselves to make a significant difference; encouraging minorities and others to report such crimes more often would be a good step:

As Muslim chairs of police boards in Ontario, we are sadly familiar with hate-motivated crimes, and with the reality that no country is immune. Police services across Canada have been grappling with these issues for some time, and we are vividly aware that we cannot look away from the hatred that stole the lives of four fellow Canadians who died simply because they were walking while Muslim.

While the particulars of criminal investigations cannot be released, London Police Services were clear that our beloved community members were murdered and targeted for their Islamic faith. As hard as that is to hear for many Canadians, the truth is this is not a singular event. Islamophobic incidents happen all the time in Canada.

In the City of London and Peel Region, both of which are home to diverse communities with large numbers of racialized citizens, police-reported hate-crime numbers have remained consistent over the last few years. According to Statistics Canada, London’s numbers rose by more than a third from 2015 to 2019, and in four of those five years, the city’s rate per 100,000 population was higher than the national average. In 2019, London police reported that Black, Muslim, Jewish, Middle Eastern and LGBTQ2+ peoples constituted the five most targeted groups for hate crimes. In Peel, meanwhile, crimes motivated by race or nationality increased by 54 per cent from 2018 to 2020, with Black and South Asian people being the most targeted by race or ethnicity. Muslims and Jews experienced the most targeting based on faith.

Yet, despite these numbers, our justice system continues to have an incredibly high threshold for anyone to be prosecuted under hate-related laws, and as a result, it is not achieving its desired aims. There remains no specific definition of a “hate crime” in the Criminal Code as a chargeable offence; what is laid out only gives a judge the ability to hand down a harsher sentence based on a finding about a given perpetrator’s motivations. In Peel, only a third of the Criminal Code offences designated by police as hate- or bias-motivated crimes resulted in Criminal Code charges in 2020.

This outdated model emboldens hateful behavior while doing little to dissuade perpetrators, which in turn normalizes their hate-filled rhetoric and actions. Perpetrators such as Alexandre Bissonnette, for instance, have reaped the benefit of loopholes such as concurrent sentences; Mr. Bissonnette murdered six people in Quebec City in 2017, yet serves time for only one murder. We cannot let this injustice continue in the case of the family killed in London, Ont.

Reporting mechanisms are also a challenge. Far too often, verbal threats and assaults are not brought to the police because victims don’t feel like they’ll be taken seriously, simply don’t want the trouble, or are concerned that their reporting will only further agitate the perpetrators, putting the victims and their families at further risk. This means that any hate-crime numbers are almost certainly underestimated, masking the magnitude of the problem.

Earlier this week, community leaders called for action at the vigil for the family killed on the streets of London, but political gesturing and posturing won’t be enough to help prevent the next hate-fueled mass murder. We must name hate for what it is, stare it down, and work with the affected communities to prioritize change over pandering for votes. All parties must work together to get tougher on hate and extremism. We must end the minimization and denial that has become commonplace in our system and in our discourse. Our politicians and legislators can get the ball rolling by changing hate-crime laws to better protect victims who do report, while holding those responsible maximally accountable.

We must also work with our communities to increase the reporting of such crimes so that we can both identify and engage the perpetrators and provide victims with a sense of safety and support. Our laws must also reflect our society’s values and priorities. If hate crimes are difficult to prosecute and carry minimal odds of conviction, this sends the wrong message.

It’s time to take bolder action against anti-Muslim hate, and all other forms of hate and bigotry that continue to terrorize our communities. It’s time to arm our justice system with the necessary tools to root out hatred, and to hold accountable those who perpetrate hate crimes. It’s time to remind far-right extremists and terrorists that our country will not tolerate their hate-motivated crimes and rhetoric. The human cost of our inaction would be too great to bear.

Javeed Sukhera is the chair of the London Police Services Board and an associate professor of psychiatry and paediatrics at Western University. Ahmad Attia is the chair of the Peel Police Services Board and the CEO of Incisive Strategy.

Source: https://www.theglobeandmail.com/opinion/article-to-tackle-hate-motivated-crimes-canadas-justice-system-needs-to-change/

Quebec politicians denounce rise in online hate as Ottawa prepares to act

Ironic given some of the political discourse in Quebec:

Death threats over an animal control plan, personal insults over stop signs, social media attacks targeting spouses — these are examples of what politicians in Quebec say has become an increasingly difficult reality of their jobs during COVID-19.

From suburban mayors to the premier, politicians in the province have been raising the alarm about the rise in hateful and occasionally violent online messages they receive — and some are calling for stronger rules to shield them.

On Saturday, Premier François Legault denounced the torrent of hateful messages that regularly follow his online posts, which he said has worsened “in the last months.”

“Each time I post something now, I’m treated to an avalanche of aggressive and sometimes even violent comments, and to insults, obscenities and sometimes threats,” Legault wrote on Facebook.

Several Quebec municipal politicians have announced they won’t be running again in elections this fall, in part because of the hostile climate online. Others, including the mayors of Montreal and Quebec City, have spoken in the past about receiving death threats. In November, police in Longueuil, Que., arrested a man in connection with threats against the city’s mayor and other elected officials over a plan to cull deer in a municipal park.

Philippe Roy, the mayor of the Town of Mount-Royal, an on-island Montreal suburb, says he’s leaving municipal politics when his current term ends, partly because of the constant online insults directed at him and his spouse.

While taking criticism is part of the job, he said he’s seen a shift in the past two years toward more falsehoods and conspiracy theories, which he said are undermining the trust between elected officials and their constituents. After 16 years in politics, he said he’s tired of the constant accusations directed his way.

“When people are questioning your integrity, you start saying, ‘Well, maybe I have better things to do somewhere else,’ ” he said in a recent interview.

The problem is serious enough that the group representing Quebec municipalities has launched an awareness campaign and drafted a resolution denouncing the online vitriol. It has so far been adopted by some 260 municipal councils.

Suzanne Roy, the group’s president, says the campaign was launched in response to a “flood of testimonials” from mayors and councillors about an increase in abuse and hate speech during the pandemic.

She attributes the phenomenon to a rise in “stress and frustration.”

“People, without having the proper tools to manage their stress, will let off steam on social media and write inappropriate statements towards decisions taken at city council about a stop sign at the wrong place, a hole in the road, everything,” she said in a phone interview.

Roy, who is mayor of Ste-Julie on Montreal’s South Shore, said she experienced the perils of social media firsthand earlier this year when someone stole her identity online and posted anti-COVID conspiracy theories from her Facebook account.

She is among those pushing for stronger rules to combat hate speech, and for platforms such as Facebook to take quicker action to remove hateful comments or restore someone’s identity when it’s stolen. She said the platforms need to take down the messages as soon as they appear to ensure debate remains respectful and false messages aren’t spread.

“It’s a question of debate and a question of democracy,” she said.

Federal Heritage Minister Steven Guilbeault has promised to introduce new legislation to combat hate speech this spring.

In an interview Tuesday, he said the legislation will define five categories of illegal online activities and create a regulator. The regulator’s job would include pushing online platforms to respect the law and to remove hateful messages within 24 hours.

He said the bill’s goal is to take stronger actions against hate speech, child porn and non-consensual sharing of intimate images. He was careful to say that it would not tackle misinformation, saying it’s not the government’s job to “legislate information.”

Guilbeault said his government has also had to contend with critics who accuse the government of wanting to limit free speech, a charge he denies. Rather, he says the aim of the legislation is to ensure that laws, such as those against hate speech, are applied online as they are in the real world — something he argues will protect free speech rather than stifle it.

“Right now in the virtual world and, I’m sad to say, in the physical world, we’re seeing the safety and security of Canadians is being compromised, that freedom of speech is being affected online,” he said in a phone interview.

“We’re seeing it now with Quebec politicians who say, ‘No, no I don’t want to run for politics, it’s so violent.'” He said the chilling effect extends to equity-seeking groups and racialized Canadians, many of whom avoid the platforms because they’re constant targets of abuse.

“How does that protect free speech?” he asked. “Well, it doesn’t.”

Suzanne Roy says her group, the Union des municipalités du Québec, gives new councillors some training on how to manage social media accounts, including advice on handling adversarial situations. She says the advice generally includes not getting into debates online and instead steering people to more formal channels to express their opinions, such as city council meetings and public consultations.

Philippe Roy, the soon-to-be ex-mayor of Mont-Royal, says that while there appear to be strong candidates to take his place, he’s already met people who have been discouraged from running by the prospect of online hate — something that bodes poorly for the future if the problem isn’t tackled.

“We’re losing people who could give back to the community, and that’s one of the threats that comes from this situation,” he said.

Source: Quebec politicians denounce rise in online hate as Ottawa prepares to act

Scientists combat anti-Semitism with artificial intelligence

It will be interesting to assess the effectiveness of this approach, and whether the definition of antisemitism used in the algorithms is narrow or more expansive, including how it deals with criticism of Israeli government policies.

Additionally, it may provide an approach that could serve as a model for efforts to combat anti-Black, anti-Muslim and other forms of hate:

An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project Decoding Anti-Semitism includes discourse analysts, computational linguists and historians who will develop a “highly complex, AI-driven approach to identifying online anti-Semitism,” the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

“In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism — also taking into account the implicit forms that might become more explicit over time,” said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King’s College London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn’t be able to assess because of their sheer quantity, the foundation said.

“Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways – for example through the use of codes (“juice” instead of “Jews”) and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images,” the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.
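The detection challenge the researchers describe can be illustrated with a deliberately naive sketch (hypothetical; the article does not disclose the project’s actual models or term lists): a plain keyword match catches a coded substitution such as “juice” for “Jews”, but cannot tell a hateful use from a benign one, which is precisely the gap a context-aware, AI-driven approach aims to close.

```python
import re

# Hypothetical illustration only -- NOT the Decoding Anti-Semitism method.
# The single code word below is the substitution cited in the article.
CODED_TERMS = {"juice"}

def flag_coded_terms(post: str) -> set[str]:
    """Return the coded terms that appear as whole words in a post."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return CODED_TERMS & words

posts = [
    "the juice run the banks",    # coded antisemitic claim -> flagged
    "fresh orange juice recipe",  # benign use -> also flagged (false positive)
    "nothing to see here",        # no match
]
flags = [flag_coded_terms(p) for p in posts]
```

The false positive on “orange juice” is the point of the sketch: keyword rules alone cannot separate coded hate from ordinary language, which is why the project pairs qualitative discourse analysis with machine learning over text and images.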

The problem of anti-Semitism online has grown, as seen in the rise of conspiracy myths accusing Jews of creating and spreading COVID-19, groups tracking anti-Semitism on the internet have found.

The focus of the current project is initially on Germany, France and the U.K., but will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.

Source: Scientists combat anti-Semitism with artificial intelligence

The UK social media platform where neo-Nazis can view terror atrocities

Of interest:

A UK-registered technology company with British directors is behind a global platform used by neo-Nazis to upload footage of racist killings.

BitChute, which was used in the dissemination of far-right propaganda during the protests in London and elsewhere this month, has hosted films of terror attacks and thousands of antisemitic videos which have been viewed over three million times.

The platform has also hosted several videos from the proscribed far-right terrorist group National Action, now taken down.

Concerns about BitChute have been flagged in a new report, Hate Fuel: The hidden online world fuelling far right terror, produced by the Community Security Trust (CST), a charity set up to combat antisemitism.

In response to the report, the company, based in Newbury, Berkshire, said in a tweet that it blocks “any such videos, including incitement to violence”.

But the platform was still showing the full footage of the 2019 Christchurch mosque shootings and an attack on a German synagogue until the Observer brought the videos to its attention.

The far-right activist Tommy Robinson has a channel on BitChute with more than 25,000 subscribers.

When far-right protesters recently descended on London and several other cities, BitChute’s comment facility carried numerous racist postings on Robinson’s channel, many of them making derogatory claims about George Floyd, whose death after being restrained by police in the US city of Minneapolis has sparked worldwide protests.

“Extremists know that they can post anything on BitChute and it won’t be removed by the platform unless they are forced to do so,” said Dave Rich, director of policy at the CST. “Some of the terrorist videos we found on BitChute had been on the site for over a year and had been watched tens of thousands of times.”

He added: “It’s no surprise, therefore, that the website is a cesspit of vile racist, antisemitic neo-Nazi videos and comments. This is why there need to be legal consequences for website hosts who refuse to take responsibility for moderating and blocking this content themselves.”

BitChute is one of several platforms alarming those who monitor the far right.

Another, Gab, created in 2016, has a dedicated network of British users called “Britfam” that has 4,000 members and which far-right extremists use to circulate racism, antisemitism and Holocaust denial.

Last week the chief executive of Gab, Andrew Torba, sent an email to users attacking what he alleged was the “anti-white, anti-Trump and anti-conservative bias” on more mainstream social media platforms.

Source: The UK social media platform where neo-Nazis can view terror atrocities

Trump may have emboldened hate in Canada, but it was already here: Ryan Scrivens

Good overview by Scrivens:

A key turning point, in fact, was during the latter months of 2015. Two important events created a perfect storm for the movement.

First was Justin Trudeau’s pledge, as part of the Liberal party’s election platform, to welcome 25,000 Syrian refugees to Canada by the end of 2015. The second was the terrorist attack on concert-goers in Paris, inspired by the so-called “Islamic State,” on Nov. 13, 2015.

Each of these events was distinct in nature, yet Canada’s radical right wing treated them as interconnected, arguing that Canada’s newly elected prime minister was not only allowing Muslims into the country who would impose Sharia law on Canadian citizens, but that they too could be “radical Islamic terrorists.”

It all sparked a flurry of anti-Muslim discourse in Canada.

The day after the Paris attack, a mosque in Peterborough, Ont., was deliberately set ablaze, causing significant damage to the interior of the building.

The next day, a Muslim woman picking up her children from a Toronto school was robbed and her hijab torn off. The perpetrators called her a “terrorist” and told her to “go back to your country.”

Days later, an anti-Muslim video was posted on YouTube in which a man from Montreal, wearing a Joker mask and wielding a firearm, threatened to kill one Muslim or Arab each week.

Similar events continued to unfold in 2016 — all, of course, well before Trump’s election victory. A school in Calgary, for example, was spray-painted with hate messages against Syrians and Trudeau: “Syrians go home and die” and “Kill the traitor Trudeau.”

Edmonton residents were also faced with a series of anti-Islamic flyer campaigns, hateful messages were spray-painted on a Muslim elementary school in Ottawa, and a man in Abbotsford, B.C., was caught on video going on a racist tirade.

Canadian chapters of Soldiers of Odin first made their presence known in the early part of 2016 by patrolling communities and “protecting” Canadians from the threat of Islam, and the Patriotic Europeans Against the Islamization of the West (PEGIDA), an anti-Islam group that first appeared in Canada in 2015, continued to rally in Montreal and Toronto in 2016.

And so it would be impulsive and short-sighted for us to attribute our spike in hatred solely to Trump and his divisive politics.

Instead, the instances listed above serve as an important reminder that prior to Trump’s election victory Canada was experiencing a rise in hatred.

In responding to hatred in Canada, we must first acknowledge that it exists in Canada, and it becomes ever more present during times of social and economic uncertainty. We must also acknowledge that the foundations of hatred are complex and multi-faceted, grounded in both individual and social conditions.

So too, then, must counter-extremist initiatives be multi-dimensional, building on the strengths and expertise of diverse sectors, including but not limited to community organizations, police officers, policy-makers and the media. Multi-agency efforts are needed to coordinate the acknowledgement and response to right-wing extremism in Canada.

I see signs of us moving in the right direction in building resilience against hatred in Canada. But in the months and years ahead, there’s still much to do.

via Trump may have emboldened hate in Canada, but it was already here