Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Harsh but accurate:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

The news of the last few weeks highlighted both the good and the bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement for addressing an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated for violent tactics. The bad: waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture war issues, rather than public health guidance that would enable the economy to reopen safely.

The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge that they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” If challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of the proposed solution on business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms continue to be successful at framing the issues in a favorable light.

When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away, even as new ones develop. For example, banning Alex Jones removed conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.

The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.

We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.

Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why moderation will not succeed: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion use at least one of its four large platforms. The company does not disclose the number of messages posted each day, but it is likely in the hundreds of millions, if not a billion or more, on Facebook alone. Even substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.

The second hurdle is latency, which describes the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but humans can take minutes or days. This means a large number of messages will circulate for some time before eventually being removed. Harm will occur in that interval. It is tempting to imagine that AI can solve everything, but that is a long way off. AI systems are built on data sets from older systems, and they are not yet capable of interpreting nuanced content like hate speech.

The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms. Every year, they shift an ever larger percentage of their spending to them, from which platforms derive massive profits and wealth. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.

Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use their platforms in ways that would normally be prohibited. For example, when journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, Facebook changed its terms of service rather than pulling the ads. In addition, Facebook chose not to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.

Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithmic amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on First Amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.

We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.

Source: Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Social Media Giants Support Racial Justice. Their Products Undermine It.

Shows of support from Facebook, Twitter and YouTube don’t address the way those platforms have been weaponized by racists and partisan provocateurs.

Of note. “Thoughts and prayers” rather than action:

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet, and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off groups recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.

Source: Social Media Giants Support Racial Justice. Their Products Undermine It.

Spy agency says Canadians are targets of foreign influence campaigns

More on foreign influence and interference:

Canadians are more exposed to “influence” operations than ever before, according to an internal assessment from the country’s electronic spy agency.

A 2018 memo from the Communications Security Establishment (CSE) warned that the rise of “web technology” like social media, along with Canadians’ changing habits for consuming media, makes the population much more likely to encounter efforts by foreign powers to shape domestic political opinion.

“These new systems have generated unintended threats to the democratic process, as they deprive the public of accurate information, informed political commentary and the means to identify and ignore fraudulent information,” reads the memo, classified as Canadian Eyes Only.

“Foreign states have harnessed the new online influence systems to undertake influence activities against Western democratic processes, and they use cyber capabilities to enhance their influence activities through, for example, cyber espionage.”

“Foreign states steal and release information, modify or make information more compelling and distracting, create fraudulent or distorted ‘news,’ or amplify fringe and sometimes noxious opinions.”

The memo was prepared as Canada’s intelligence agencies were engaged in an exercise to protect the 2019 federal election from foreign interference.

Elections across the democratic world — the United States, France, the United Kingdom, Germany and the European Union — have in recent years been the targets of misinformation and cyberespionage campaigns from hostile countries.

There is no evidence that Canada’s recent federal election was the target of sophisticated cyber espionage or misinformation campaigns.

But another document prepared by CSE makes clear that Canadian politicians have already been targeted by foreign “influence” campaigns.

An undated slide deck prepared by the CSE suggested “sources linked to Russia popularized (then Global Affairs Minister Chrystia) Freeland’s family history” and targeted Defence Minister Harjit Sajjan’s appearance and turban in Russian-language media outlets in the Baltics.

The agency appears to be referring to stories, which were reported by mainstream Canadian news outlets, suggesting Freeland’s grandfather edited a Nazi-associated newspaper in occupied Poland.

The stories were “very likely intended to cause personal reputational damage in order to discredit the Government (of) Canada’s ongoing diplomatic and military support for Ukraine, to delegitimize Canada’s decision to enact the Justice for Victims of Corrupt Foreign Officials Act, and the 2018 expulsion of several Russian diplomats,” the documents, first reported by Global News, state.

The attacks against Sajjan, meanwhile, were “almost certainly” intended to discredit the NATO presence in Latvia, where Canadian forces are deployed as part of a NATO mission to deter Russian expansion after the invasion of Crimea.

“Since Canada’s deployment to Latvia, subtle and overtly racist comments pertaining to … Sajjan’s appearance, particularly his turban, have consistently appeared across Russian-language media in the Baltic region,” the documents read.

“Even ostensibly professional news sources are not above such descriptions. When … Sajjan attended a conference in Latvia in October 2017, he was described by Vesti.lv as ‘a large swarthy man in a big black turban.’”

Compared to some of the attacks on Western democracies, those two influence campaigns were minor in scale and impact. But the intelligence agency suggested that more and more countries are turning to cyber capabilities to further their own goals at the expense of other nations. And CSE’s analysis suggests they’re willing to play the long game.

“In the longer-term, influence activities, both cyber and human, are likely to challenge the transparency and independence of the decision-making process, reduce public trust (and) confidence in institutions, and push policy in directions inimical to Canadian interests,” the documents, released under access to information law, read.

“Many European states and some private companies have begun to develop countermeasures to malicious activities aimed at democratic processes, including increasing public understanding and resilience. However, little has been done to create robust, institutionalized multilateral responses.”

Parliament’s new national security review committee has completed a review of foreign espionage activities in Canada and submitted it to Prime Minister Justin Trudeau. The classified report detailing their findings is expected to be released early in 2020, once the House of Commons resumes sitting.

Source: Spy agency says Canadians are targets of foreign influence campaigns

Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

For the record:

British comedian Sacha Baron Cohen has said if Facebook had existed in the 1930s it would have allowed Hitler a platform for his anti-Semitic beliefs.

The Ali G star singled out the social media company in a speech in New York.

He also criticised Google, Twitter and YouTube for pushing “absurdities to billions of people”.

Social media giants and internet companies are under growing pressure to curb the spread of misinformation around political campaigns.

Twitter announced in late October that it would ban all political advertising globally from 22 November.

Earlier this week Google said it would not allow political advertisers to target voters using “microtargeting” based on browsing data or other factors.

Analysts say Facebook has come under increasing pressure to follow suit.

The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.

“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.

What did Baron Cohen say?

Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg who in October defended his company’s position not to ban political adverts that contain falsehoods.

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.

“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.

“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.

Earlier this month, an international group of lawmakers called for targeted political adverts on social media to be suspended until they are properly regulated.

The International Committee on Disinformation and Fake News was told that the business model adopted by social networks made “manipulation profitable”.

A BBC investigation into political ads for next month’s UK election suggested they were being targeted towards key constituencies and certain age groups.

Source: Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

Why are the U.S. immigration norms being tightened?

US immigration checking of social media noted in Indian media (a reminder to us all to be more mindful when on social media):

The story so far: On May 31, 2019, the U.S. Department of State introduced a change in online visa forms for immigrant (form DS-260) and non-immigrant visas (form DS-160) requiring applicants to register their social media handles over a five-year period. The newly released DS-160 and DS-260 forms ask, “Do you have a social media presence?” A drop-down menu provides a list of some 20 options, including Facebook, Instagram, Sina Weibo and Twitter. There is also a “NONE” option. Applicants are required to list their handles alone and not passwords. All sites will soon be listable according to an administration official who spoke to The Hill, a Washington DC-based newsletter. The policy does not cover those eligible for the visa waiver programme and those applying for diplomatic visas and certain categories of official visas.

How did it come about?

The policy is part of U.S. President Donald Trump’s intent to conduct “extreme vetting” of foreigners seeking admission into the U.S. In March 2017, Mr. Trump issued an Executive Order asking the administration to implement a programme that “shall include the development of a uniform baseline for screening and vetting standards and procedures for all immigrant programs.”

In September 2017, the Department of Homeland Security started including “social media handles, aliases, associated identifiable information, and search results” information in the files it keeps on each immigrant. The notice regarding this policy said those impacted would include Green Card holders and naturalised citizens. In March 2018, the State Department proposed a similar policy, but for all visa applicants — this is the policy now in effect. Earlier, only certain visa applicants identified for extra screening were required to provide such information. Asking visa applicants to volunteer social media history started during the Obama administration, which was criticised for not catching Tashfeen Malik, one of those who carried out a mass shooting in San Bernardino, California, in 2015. Malik had come to the U.S. on a K-1 fiancé visa, and had exchanged social media messages about jihad prior to her admission to the U.S.

How will it impact India?

Most Indians applying for U.S. visas will be covered by this policy. Over 955,000 non-immigrant visas (excluding A and G visas) and some 28,000 immigrant visas were issued to Indians in fiscal year 2018. So at least 10 lakh Indians — and these are just those whose visa applications were successful, not all applicants — will be directly impacted by the policy.

What lies ahead?

The new policy is expected to impact 14 million travellers to the U.S. and 700,000 immigrants worldwide according to the administration’s prior estimates. In some individual cases it is possible that the visa policy achieves what it is (ostensibly) supposed to — allow the gathering of social media information that results in the denial of a visa for an applicant who genuinely presents a security threat. However, the bluntness of the policy and its vast scope raise serious concerns around civil liberties including questions of arbitrariness, mass surveillance, privacy, and the stifling of free speech.

First, it is not unusual for an individual to not recall all their social media handles over a five-year period. Consequently, even if acting in good faith, it is entirely possible for individuals to provide an incomplete social media history. This could give consular officers grounds for denying a visa.

Second, there is a significant degree of discretion involved in determining what constitutes a visa-disqualifying social media post and this could stifle free speech. For instance, is criticising the President of the United States or posting memes about him (there are plenty of those on social media these days) grounds for visa denial? What about media professionals? Is criticising U.S. foreign policy ground for not granting someone a visa?

Third, one can expect processing delays with visas as the social media information of applicants is checked. It is possible that individuals impacted by the policy will bring cases against the U.S. government on grounds of privacy or of visa delays. The strength of these cases depends on a number of factors, including whether they are brought by Green Card holders and naturalised citizens (who were impacted by the September 2017 policy, not the May 31 one) or non-immigrants. The courts could examine the intent of the U.S. government’s policy and ask whether it has discriminatory intent.

Source: Why are the U.S. immigration norms being tightened?

Why old, false claims about Canadian Muslims are resurfacing online

Of note:

In the summer of 2017, signs that seemed engineered to stoke anti-Muslim sentiment first appeared in a city park in Pitt Meadows, B.C.

“Many Muslims live in this area and dogs are considered filthy in Islam,” said the signs, which included the city’s logo. “Please keep your dogs on a leash and away from the Muslims who live in this community.”

After a spate of media coverage questioning their authenticity — and a statement from Pitt Meadows Mayor John Becker that the city didn’t make them — the signs were discredited and largely forgotten.

But almost two years later, a mix of right-wing American websites, Russian state media, and Canadian Facebook groups have made them go viral again, unleashing hateful comments and claims that Muslims are trying to “colonize” Western society.

The revival of this story shows how false, even discredited claims about Muslims in Canada find an eager audience in Facebook groups and on websites originating on both sides of the border, and how easily misinformation can be recirculated as the federal election approaches.

“Many people who harbour (or have been encouraged to hold) anti-Muslim feelings are looking for information to confirm their view that these people aren’t like them. This story plays into this,” Danah Boyd, a principal researcher at Microsoft and the founder of Data & Society, a non-profit research institute that studies disinformation and media manipulation, wrote in an email.

Boyd said a dubious story like this keeps recirculating “because the underlying fear and hate-oriented sentiment hasn’t faded.”

Daniel Funke, a reporter covering misinformation for the International Fact-Checking Network, said old stories with anti-Muslim aspects also recirculated after the recent fire at the Notre Dame cathedral in Paris.

“Social media users took real newspaper articles out of their original context, often years after they were first published, to falsely claim that the culprits behind the fire were Muslims,” he said. “The same thing has happened with health misinformation, when real news stories about product recalls or disease outbreaks go viral years after they were originally published.”

The signs about dogs first appeared in Hoffman Park in September 2017, and were designed to look official. They carried the logo of the city of Pitt Meadows and that of the Council on American-Islamic Relations (CAIR), a U.S. Muslim advocacy organization.

Media outlets reported on them after an image of one sign was shared online. Many noted that the city logo was falsely used and there was no evidence that actual Muslims were behind the messages.

A representative for CAIR told CBC News in 2017 that his organization had no involvement in the B.C. signs, but he did have an idea about why they were created.

“We see this on occasion where people try to be kind of an agent provocateur and use these kinds of messages to promote hostility towards Muslims and Islam,” Ibrahim Hooper said in an interview with CBC. “Sometimes people use the direct bigoted approach — we see that all too often in America and Canada, unfortunately — but other times they try and be a little more sophisticated or subtle.”

The Muslims of Vancouver Facebook page had a similar view, labelling it a case of “Bigots attempting to incite resentment and hatred towards Muslims.”

After the initial frenzy of articles about the signs, the story died down — until last week, when an American conservative website called The Stream published a story. It cited a 2017 report from CTV Vancouver, without noting that the incident was almost two years old.

“No Dogs: It Offends the Muslims,” read the headline on a story that cited the signs as an example of Muslims not integrating into Western society.

“That sign in the Canadian dog park tells us much that we’d rather not think about. That kind of sign notifies you when your country has been colonized,” John Zmirak wrote.

Zmirak’s post was soon summarized by state-funded Russian website Sputnik, and picked up by American conservative site Red State. Writing in Red State, Elizabeth Vaughn said “Muslims cannot expect Americans or Brits or anybody else to change their ways of life to accommodate them.” Conservative commentator Ann Coulter tweeted the Red State link to her 2.14 million followers, and the story was also cited by the right-wing website WND.

The Stream and Red State did not respond to emailed requests for comment. A spokesperson for Sputnik said its story made it clear to readers that the original incident happened in 2017. “I would like to stress that Sputnik has never mentioned that the flyers in question were created by Muslims, Sputnik just reported facts and indicated the sources,” Dmitry Borschevsky wrote in an email.

Nonetheless, the three stories generated more than 60,000 shares, reactions and comments on Facebook in less than a week. Some of that engagement also came thanks to right-wing Canadian Facebook groups and pages, bringing the dubious tale back to its original Canadian audience.

“Dogs can pick out evil! That’s why Death Cult adherents despise these lil canine truth detectors!” wrote one person in the “Canadians 1st Movement” Facebook group after seeing the Red State link.

“How about no muslims!” wrote one person after the Sputnik story was shared in the Canadian Combat Coalition National Facebook group. Another commenter in the group said he’d prefer to see Muslims “put down” instead of dogs.

On the page of anti-Muslim organization Pegida Canada, one commenter wrote, “I will take any dog over these animals.”

Those reactions were likely intended by whoever created the signs, according to Boyd, and it wasn’t the first incident of this type. In July 2016, flyers appeared in Manchester, England, asking residents to “limit the presence of dogs in the public sphere” out of sensitivity to the area’s “large Muslim community.”

The origin of the flyers was equally dubious, with evidence suggesting the idea may have been part of a campaign hatched on the anonymous message board 4chan. That’s where internet trolls often plan online harassment and disinformation campaigns aimed at generating outrage and media coverage.

“At this point, actors in 4chan have a lot of different motives, but there is no doubt that there are some who hold white nationalist attitudes and espouse racist anti-Muslim views,” Boyd said.

“There are also trolls who relish playing on political antagonisms to increase hostility and polarization. At the end of the day, the motivation doesn’t matter as much as the impact. And the impact is clear: these posters — and the conspiracists who amplify them — help intensify anti-Muslim sentiment in a way that is destructive to democracy.”

Source: Why old, false claims about Canadian Muslims are resurfacing online

Government surveillance of social media related to immigration more extensive than you realize | TheHill

Of note:

In June 2018, more than 400,000 people protested the Trump administration’s policy of separating families at the border. The following month saw a host of demonstrations in New York City on issues including racism and xenophobia, the abolition of Immigration and Customs Enforcement (ICE), and the National Rifle Association.

Given the ease of connecting online, it is unsurprising that many of these events got an organizing boost on social media platforms like Facebook or Twitter. A recent spate of articles did bring a surprise, however: the Department of Homeland Security (DHS) has been watching online too. Congress should demand that DHS detail the full extent of social media use and commit to ensuring that the programs are effective, non-discriminatory, and protective of privacy.

Last month, for instance, it was revealed that a Virginia-based intelligence firm used Facebook data to compile details about more than 600 protests against family separation. The firm sent its spreadsheet to the Department of Homeland Security, where the data was disseminated internally and evidently shared with the FBI and national fusion centers; these centers, which facilitate data sharing among federal, state, local, and tribal law enforcement, as well as the private sector, have been heavily criticized for violating Americans’ privacy and civil liberties while providing little of value.

In the meantime, Homeland Security Investigations — an arm of ICE created to combat criminal organizations, not collect information about lawful protests — assembled and shared a spreadsheet of the New York City demonstrations, labeling them with the tag “Anti-Trump Protests.” And as Central American caravans slowly traveled north, DHS’s Customs and Border Protection (CBP) drew on Facebook data to create dossiers on lawyers, journalists, and advocates — many of them U.S. citizens — providing services and documenting the situation on the southern border.

As shocking as these revelations are, DHS’s social media ambitions are both broader and more opaque. A recent report I co-wrote for the Brennan Center for Justice, based on a review of more than 150 government documents, examines how social media is used by four DHS agencies — ICE, CBP, TSA, and the U.S. Customs and Immigration Service (USCIS) — and describes the deficiencies and risks of these programs.

First, DHS now uses social media in nearly every aspect of its immigration operations. Participants in the Visa Waiver Program, for instance — largely travelers from Western Europe — have been asked since late 2016 to voluntarily provide their social media handles. The Department of State recently won approval to demand the same of all visa applicants, nearly 15 million people per year; this data will be vetted against DHS holdings. While information from social media may not be the sole basis for denial, it could easily be combined with other factors to justify exclusion, a process that is likely to have a disproportionate impact on Muslim travelers and those coming from Latin America.

Travelers may have their social media data examined at the U.S. border as well, via warrantless searches of electronic devices undertaken by CBP and ICE. Between 2015 and 2017, the number of device searches carried out by CBP jumped more than threefold; one report suggests that about 20 percent are conducted on American travelers. (ICE does not reveal its figures.) CBP recently issued more stringent rules, though it remains to be seen how closely it will follow them; a December 2018 inspector general report concluded that the agency had failed to follow its prior procedures.

ICE operates under a decade-old policy allowing its agents to “search, detain, seize, retain, and share” electronic devices and any information on them — including social media — without individualized suspicion. Remarkably, ICE justifies this authority by pointing to centuries-old statutes, equating electronic devices with “merchandise” that customs inspectors were authorized to review under a 1790 Act passed by the First Congress. This approach puts the agency out of step with the Supreme Court, which recently recognized that treating a search of a cell phone as identical to a search of a wallet or purse “is like saying a ride on horseback is materially indistinguishable from a flight to the moon.”

The breadth of DHS’s social media monitoring raises the question: Is it effective? It is notable that a 2016 DHS brief reported that in three of four refugee vetting programs, the social media accounts “did not yield clear, articulable links to national security concerns,” even where a national security concern did exist. And a February 2017 Inspector General audit of seven social media pilot programs concluded that DHS had failed to establish any mechanisms to measure their effectiveness.

Indeed, content on social media can be difficult to decode under the best of circumstances. Natural language processing tools, used for some automated analysis, fail to accurately interpret 20-30 percent of the text they analyze, a gap that is compounded when it comes to unfamiliar languages or cultural contexts. Even human reviewers can fail to understand their own language if it’s filled with slang.
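The brittleness described above is easy to demonstrate with a toy keyword matcher (a hypothetical sketch for illustration only — the `naive_flag` function and `THREAT_WORDS` list are invented here, not any tool DHS actually uses). Literal keyword matching flags slang as a threat and misses paraphrased hostility entirely:

```python
# Toy illustration of why naive text analysis misreads slang.
# THREAT_WORDS and naive_flag are hypothetical, for illustration only.

THREAT_WORDS = {"bomb", "kill", "attack"}

def naive_flag(text: str) -> bool:
    """Flag text if it contains any literal 'threat' keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & THREAT_WORDS)

# Literal reading: correctly flagged.
print(naive_flag("They plan to attack the checkpoint"))  # True

# Slang: "the bomb" here means "excellent", but keyword
# matching cannot tell the difference -> false positive.
print(naive_flag("That concert was the bomb!"))          # True

# Paraphrased threat with no keyword -> false negative.
print(naive_flag("We will make them pay tomorrow"))      # False
```

Real natural language processing tools are far more sophisticated than this, but the underlying failure mode — context and cultural usage changing what words mean — is the same one the 20-30 percent error figure points to.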

We now know far more about the scope of DHS’s efforts to collect and use social media, but there is much that remains obscured. Without robust, ongoing oversight, neither the public nor lawmakers can be confident that these programs are serving our national interest.

Source: Government surveillance of social media related to immigration more extensive than you realize | TheHill

Alex Jones Was Victimized by One Oligopoly. But He Perpetuated Another

Good take on social media, free speech and Alex Jones (and others of his ilk):

This month, Twitter joined Apple, Facebook, Spotify and YouTube in banning the popular right-wing conspiracy theorist Alex Jones from its platform. Like the other bans, Twitter’s decision was announced as a fait accompli, with opaque justifications ranging from “hate speech” to “abusive behavior.”

The seemingly arbitrary nature of these bans has raised fears from all political quarters. Alexis Madrigal, writing in The Atlantic, cited the development as proof that “these platforms have tremendous power, they have hardly begun to use it, and it’s not clear how anyone would stop them from doing so.” His sentiments were echoed by Ben Shapiro in the National Review, who expressed alarm at “social-media arbiters suddenly deciding that vague ‘hate speech’ standards ought to govern our common spaces.”

Even some on the left displayed concern. Steve Coll wrote in the New Yorker that “practices that marginalize the unconventional right will also marginalize the unconventional left,” and argued that we must defend even “awful speakers” in the interests of protecting free speech. Ben Wizner of the American Civil Liberties Union described the tech giants’ behavior as “worrisome,” and suggested the policies used to justify the bans could be “misused and abused.”

It is indeed worrying that some corporations now have the power to restrict how much influence someone can have on the marketplace of ideas. But what is more worrying, and what few people seem to be considering, is how Alex Jones was able to gain such influence in the first place. In my view, the ideological forces responsible for his rise are a greater threat to free speech than the corporate forces responsible for his “fall.” Principled defenders of free speech would therefore be unwise to rail against the former while ignoring the latter.

The reason tech giants like Twitter and Facebook are able to exert such worrying control over our speech is that they comprise an oligopoly, with no significant competitors. Such oligopolies tend to form in business due to the Matthew principle, which holds that advantage begets further advantage. If Facebook manages to get all your friends to use it, then Facebook’s chances of getting you to use it are drastically increased, because you want to be connected with your friends. This particular example of the Matthew principle is known as a “network effect.”

Crucially, network effects don’t just apply to free market economies; they also apply to the free market of ideas. Concepts that get more exposure will get more exposure. This virality can cause the arena of debate to quickly become dominated by an “oligopoly” of perspectives.
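The “exposure begets exposure” dynamic described above can be sketched with a minimal preferential-attachment simulation (illustrative only; `simulate_shares` and its parameters are arbitrary assumptions, not drawn from the article). Each new share goes to an existing idea with probability proportional to how often that idea has already been shared, and a handful of ideas quickly dominate:

```python
import random

def simulate_shares(num_ideas=100, num_shares=10_000, seed=42):
    """Preferential attachment: each share picks an idea with
    probability proportional to its current share count."""
    random.seed(seed)
    counts = [1] * num_ideas  # every idea starts with one share
    for _ in range(num_shares):
        # random.choices weights the draw by current popularity,
        # so already-popular ideas attract most new shares
        idx = random.choices(range(num_ideas), weights=counts)[0]
        counts[idx] += 1
    return sorted(counts, reverse=True)

counts = simulate_shares()
top5 = sum(counts[:5])
print(f"Top 5 of 100 ideas capture {100 * top5 / sum(counts):.0f}% of shares")
```

Even though every idea starts equal, the share distribution ends up highly skewed — the same “oligopoly of perspectives” the author describes, produced by nothing more than the feedback loop itself.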

Hence, just as the free market of infotech is now dictated by the Googles and Facebooks of the world, so too has the free market of ideas come to be controlled by a few political narratives, particularly the social-justice narrative of the left and the anti-globalism narrative of the right. The social-justice left dominates among the cultural elite, including the mainstream media, the literati, the tech industry, Hollywood and academia. The anti-globalist right, meanwhile, is popular among the general public, as evidenced by the success of the U.K.’s Brexit campaign, and the election of Donald Trump in the U.S. and a host of nativist parties in Europe, such as Poland’s Law and Justice and Italy’s Five Star movement.

The story of Alex Jones brings these two strands together, because he has fueled the surge of right-wing populism in large part by leveraging the power of tech oligopolies. Far from being a “fringe” figure (as he is often portrayed), Jones is a key conduit of a popular narrative, broadcasting to over 3.6 million unique online monthly viewers, and apparently having the ear of the American president (which may help explain why baseless conspiracy theories about a “deep state” keep circulating around the White House). Jones, in short, is an ambassador for one half of an ideological oligopoly, which is just as hostile to competition as the tech oligopoly.

But how could this be? To some, the very idea of an oligopoly on ideas may seem bizarre; we are all free to believe whatever we wish. Unfortunately, our brains did not evolve to understand the world but to survive it. Reality is software that doesn’t run well on our mental hardware, unless the display resolution is minimized. We therefore seek out stories, not because they are true, but because they reduce the incomprehensible into that which is comprehensible, giving us a counterfeit of truth whose elegant simplicity makes it seem truer than actual, authentic truth.

A typical mental schematic that allows us to do this is the Karpman drama triangle, which divides people into victims, oppressors and rescuers. We have a tendency to view events using this cognitive compression algorithm because it simplifies reality into drama, offering not just clarity to the confused, but also belonging to the lonely, purpose to the aimless, battle to the bored, and scapegoats to the vindictive.

The social-justice left and anti-globalist right both fully embrace the Karpman drama triangle as a lens for looking at the world. In the social-justice narrative, minorities are the victims, the white patriarchy is the oppressor, and the social-justice activists are the rescuers. In the populist-right narrative, the silent oppressed majorities constitute the victims, the globalist elites are the oppressors, and certain maverick figures (such as Alex Jones and Donald Trump) are the rescuers.

Anyone who doesn’t neatly fit into a corner of the drama triangles will either be shoehorned in, or ignored. This simplification of reality into a dramatic struggle is what makes these narratives so hostile to competing ideas; disagreement is viewed not as a legitimate difference of opinion, but as an attempt at oppression. And when you feel you are being oppressed, you can justify the use of any tactic to fight it.

This is why we see those on the social-justice left using their influence in media, academia and the tech industry to forcefully suffocate the expression of alternative viewpoints — including by the firing of those with different opinions, or by shouting them down at universities, or by physically assaulting them.

And on the populist right, we see similar tactics of intimidation and ostracism, whether through the harassment of climate scientists, the denial of security clearance to former CIA directors who won’t toe the president’s line, or the demonization of conservative pundits who fall out of love with Donald Trump.

Alex Jones himself has been among the biggest instigators of right-wing intimidation. For years, he has concocted lies about those who don’t agree with his narrative, claiming they are agents of foreign governments, literal demons, or child molesters. He also has suggested that his followers should take up arms against the nonbelievers (which is why Twitter suspended him), and his conspiracy theories have led his followers to harass and threaten people with violence.

Unfortunately, this sometimes has led to actual violence. In 2009, Richard Poplawski, who regularly commented on Jones’ Infowars website, and cross-posted many of Jones’ articles on neo-Nazi forums, killed three police officers with an AK-47. The following year, Byron Williams, who cited Jones as an influence on his thinking, engaged in a firefight with police, injuring two. A year later, Jared Lee Loughner, who counted among his favorite documentary films the Jones-produced Zeitgeist and Loose Change, attempted to assassinate U.S. representative Gabrielle Giffords, injuring her and 12 other people, and killing six. Later that year, Oscar Ortega, having watched the Jones-produced film The Obama Deception, shot at the White House. In 2014, Jerad and Amanda Miller, both regular commenters on Jones’ Infowars site, posted anti-government videos and then went on a shooting spree, killing three before dying themselves. Two years later, Edgar Maddison Welch, convinced by the Pizzagate conspiracy theory pushed by Jones, shot up the Comet Ping Pong pizza parlor in Washington D.C.

It is difficult to determine how much influence Jones’ views had on these atrocities. However, the link between hateful Infowars-style rhetoric on Facebook and hate crime was explored by an extensive study of 3,335 attacks against refugees in Germany, where the populist right-wing Alternative für Deutschland (AfD) has developed a major web presence. The study found that such attacks were strongly predicted by social media use: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by an average of 50 percent.

Violence is the most direct and dramatic way that the social-justice left and anti-globalist right censor speech. But it is just one tactic among many — including threats, doxings, firings, harassment, mobbings and demonization. This is why the radical left and right, led by demagogues like Alex Jones, represent an even greater threat to our speech than the tech giants. They are gradually turning the free market of ideas into a kind of ongoing hostage crisis, by which people are either afraid to speak their minds, or are doomed to have their words interpreted in the worst possible way when they do so.

I don’t want to make the same mistakes as Jones, so I should emphasize that these drama triangles that are so hostile to free expression weren’t engineered by a secret cabal of ideologues or tech CEOs. They arose organically, regulated only by laws of nature such as the Matthew principle, network effects, and the public’s demand for easy answers. Sure, there are individual Facebook employees who may be interested in pushing a political agenda, but Facebook as a business is not. It seeks to do what is best for profits: making its platform as inviting to as many people as possible.

That’s not to say it is successful in this venture. Corporations are ill-equipped to police the information traffic of millions of users, which is why they frequently get things wrong (such as censoring the Declaration of Independence as hate speech). And even when they get things “right,” they usually only end up benefiting their targets — as evidenced by the fact that, in the wake of Jones’ de-platforming by the major media companies, his Infowars app surged up the download charts. The greatest endorsement a conspiracy theorist can receive is censorship by authority figures. It’s a golden opportunity to portray themselves as the victim in their Karpman drama triangle.

So, if we can’t rely on powerful organizations such as governments or corporations to protect our voices from mob rule, what then?

In my view, it leaves only one real option: We must be the protectors of our own free speech, and habitually speak out not just against designated “oppressors” like the tech giants, but also against designated “victims” and “rescuers,” like Alex Jones, who seek to oppress by dehumanizing others as oppressors. And we must do all this without constructing our own drama triangle of oppression, or else we’ll become part of the very problem we seek to solve.

John Stuart Mill believed that in a free market of ideas, good ideas would naturally trump bad ones. But experience has shown that this won’t happen unless the marketplace is populated by those who actively seek truth and openness. Free speech is the foundation of all other rights. It is the seed of innovation, the wheel of progress, the space to breathe. It must therefore be protected at all costs — including, at times such as these, from itself.

Source: Alex Jones Was Victimized by One Oligopoly. But He Perpetuated Another

Inside YouTube’s Far-Right Radicalization Factory

Interesting study, symptomatic of the problems with social media companies:

YouTube is a readymade radicalization network for the far right, a new study finds.

The Google-owned video platform recently banned conspiracy outlet InfoWars and its founder Alex Jones for hate speech. But another unofficial network of fringe channels is pulling YouTubers down the rabbit hole of extremism, said the Tuesday report from research group Data & Society.

The study tracked 65 YouTubers—some of them openly alt-right or white nationalist, others who claim to be simply libertarians, and most of whom have voiced anti-progressive views—as they collaborated across YouTube channels. The result, the study found, is an ecosystem in which a person searching for video game reviews can quickly find themselves watching a four-hour conversation with white nationalist Richard Spencer.

Becca Lewis, the researcher behind the report, calls the group the Alternative Influence Network. Its members include racists like Spencer, Gamergate figureheads like Carl Benjamin (who goes by ‘Sargon of Akkad’), and talk-show hosts like Joe Rogan, who promotes guests from fringe ideologies. Not all people in the group express far-right political views themselves, but will platform guests who do. Combined, the 65 YouTubers account for millions of YouTube followers, who can find themselves clicking through a series of increasingly radical-right videos.

Take Rogan, a comedian and self-described libertarian whose 3.5 million subscribers recently witnessed him host a bizarre interview with Tesla founder Elon Musk. While Rogan might not express extreme views, his guests often tend to be more fringe. Last year, he hosted Benjamin, the anti-feminist who gained a large following for his harassment campaigns during Gamergate.

Rogan’s interview with Benjamin, which has nearly 2 million views, describes Benjamin as an “Anti-Identitarian liberal YouTuber.” It’s a misleading title for Rogan fans who might go on to view Benjamin’s work.

Benjamin, in turn, has also claimed not to support the alt-right. Like other less explicitly racist members of the network, he’s hyped his “not racist” cred by promoting livestreamed “debates” (a favorite term in these circles) with white supremacists.

But the line between “debate” and collaboration can be indistinct, as Lewis noted in her study. She pointed to one such debate between Benjamin and Spencer, which was moderated by white nationalist creep Jean-Francois Gariepy, and which briefly became the world’s top trending live video on YouTube, with more than 10,000 live viewers.

“In his video with [Richard] Spencer, Benjamin was presumably debating against scientific racism, a stance he frequently echoes,” Lewis wrote in her study. “However, by participating in the debate, he was building a shared audience—and thus, a symbiotic relationship— with white nationalists. In fact, Benjamin has become a frequent guest on channels that host such ‘debates,’ which often function as group entertainment as much as genuine disagreements.”

Debates are often better measures of rhetorical skill than they are of an idea’s merits. A well-spoken idiot might stand a good chance against a shy expert in a televised argument. When they disagreed during the four-hour livestream, Spencer, a more practiced speaker, mopped the floor with Benjamin. The debate earned Spencer new followers, some of whom appear to have been lured in by the other YouTubers’ thinly-disguised bigotry.

“I’ve never really listened to Spencer speak before,” one commenter wrote. “But it is immediately apparent that he’s on a whole different level.”

And Benjamin has been willing to collaborate with further-right YouTubers when the circumstances benefited him.

“In many ways, we do have similar objectives,” he told the openly racist YouTuber Millennial Woes in one video cited in the study. “We have the same enemies, right? I mean, you guys hate the SJWs, I hate the SJWs. I want to see the complete destruction of social justice. . . . If the alt-right took the place of the SJWs, I would have a lot less to fear.”

“Some of the more mainstream conservatives or libertarians are able to have it both ways,” Lewis told The Daily Beast on Tuesday. “They can say they reject the alt-right … but at the same time, there’s a lot of nudging and winking.”

Her report cited other instances of this phenomenon, including self-identified “classical liberal” YouTuber Dave Rubin, who promotes anti-progressive views on his talk show, where he hosts more extreme personalities, ostensibly for debate. But the debates can skew friendly. The study pointed to a conversation in which Rubin allowed far-right YouTuber Stefan Molyneux to make junk science claims unchecked. A description for the video encouraged viewers to do their own research, but provided links to Molyneux’s own content.

“It gives a generally unchallenged platform for that white nationalist and their ideas,” Lewis said on Tuesday.

YouTube’s algorithms can sometimes reward fringe content. Researcher Zeynep Tufekci previously highlighted the phenomenon when she noted that, after she watched footage of Donald Trump rallies, YouTube began recommending an increasingly radical series of white supremacist and conspiracy videos.

Lewis said YouTubers have learned to leverage the site’s algorithms, frontloading their videos with terms like “liberal” and “intersectional” in a bid to “hijack” search results that would typically be dominated by the left.

YouTube, which is built to keep users watching videos, might be a perfect recruiting platform for fringe movements, which want followers to remain similarly engaged.

“One way scholars of social movements often talk about recruitment is in terms of the capacity of the movement to bring in new recruits and then retain them,” Joan Donovan, a research lead at Data & Society, said on Tuesday. “Social media is optimized for engagement, which is both recruitment of an audience and retention of that audience. These groups often use the tools of analytics to make sure they continue to grow their networks.”

Source: Inside YouTube’s Far-Right Radicalization Factory

Exclusive: Right-wing sites swamp Sweden with ‘junk news’ in tight election race

Possible factor contributing to the shift in Swedish politics (scheduled before results known):

One in three news articles shared online about the upcoming Swedish election come from websites publishing deliberately misleading information, most with a right-wing focus on immigration and Islam, Oxford University researchers say.

Their study, published on Thursday, points to widespread online disinformation in the final stages of a tightly-contested campaign which could mark a lurch to the right in one of Europe’s most prominent liberal democracies.

The authors, from the Oxford Internet Institute, labeled certain websites “junk news”, based on a range of detailed criteria. Reuters found the three most popular sites they identified have employed former members of the Sweden Democrats party; one has a former MP listed among its staff.

It was not clear whether the sharing of “junk news” had affected voting intentions in Sweden, but the study helps show the impact platforms such as Twitter and Facebook have on elections, and how domestic or foreign groups can use them to exacerbate sensitive social and political issues.

Prime Minister Stefan Lofven, whose center-left Social Democrats have dominated politics since 1914 but are now unlikely to secure a ruling majority, told Reuters the spread of false or distorted information online risked shaking “the foundations of democracy” if left unchecked.

The Institute, a department of Oxford University, analyzed 275,000 tweets about the Swedish election from a 10-day period in August. It counted articles shared from websites it identified as “junk news” sources, defined as outlets which “deliberately publish misleading, deceptive or incorrect information purporting to be real news”.

“Roughly speaking, for every two professional content articles shared, one junk news article was shared. Junk news therefore constituted a significant part of the conversation around the Swedish general election,” it said.

A Twitter spokesman declined to comment on the results of the study.

Facebook, where interactions between users are harder to track, said it was working with Swedish officials to help voters spot disinformation. It has also partnered with Viralgranskaren – an arm of Sweden’s Metro newspaper – to identify, demote and counterbalance “false news” on its site.

Joakim Wallerstein, head of communications for the Sweden Democrats, said he had no knowledge of or interest in the party sympathies of media outlets. Asked to comment on his party’s relationship with the sites identified by the study, he said he had been interviewed by one of them once.

“I think it is strange that a foreign institute is trying to label various news outlets in Sweden as ‘junk news’ and release such a report in connection to an election,” he said.

“DECEPTIVE TOOLS”

Swedish security officials say there is currently no evidence of a coordinated online attempt by foreign powers to sway the Sept. 9 vote, despite repeated government warnings about the threat.

But Mikael Tofvesson, head of the counter influence team at the Swedish Civil Contingencies Agency (MSB), a government agency tasked with safeguarding the election, said the widespread sharing of false or distorted information makes countries more vulnerable to hostile influence operations.

“Incorrect and biased reporting promotes a harder, harsher tone in the debate, which makes it easier to throw in disinformation and other deceptive tools,” he said.

Lisa-Maria Neudert, a researcher from the Oxford Internet Institute’s Project on Computational Propaganda, said most of the “junk news” in Sweden supported right-wing policies, and was largely focused on issues around immigration and Islam.

The top three “junk news” sources identified by the study – right-wing websites Samhallsnytt, Nyheter Idag and Fria Tider – accounted for more than 85 percent of the “junk news” content.

Samhallsnytt received donations through the personal bank account of a Sweden Democrat member between 2011 and 2013, when it operated under the name Avpixlat. A former Sweden Democrat member of parliament, who also previously ran the party’s youth wing, is listed on the Samhallsnytt website as a columnist.

Samhallsnytt often publishes articles saying Sweden is under threat from Islam. In June, for example, it said a youth soccer tournament in the second-biggest city had banned pork as “haram” – or forbidden under Islamic law. The article is still online with the headline: “Islam is the new foundation of the Gothia Cup – pork proclaimed ‘haram’”.

A tournament organizer told the Dagens Nyheter newspaper that caterers had not served pork for more than 10 years for practical reasons, and there was no ban against eating or selling pork at the event.

Samhallsnytt and Fria Tider did not respond to repeated requests for comment.

Commenting before the Oxford study was published, Nyheter Idag founder Chang Frick disputed the “junk news” label and said his website followed ethical journalistic practices, citing its membership of Sweden’s self-regulated Press Council body.

“Yes, we put our editorial perspective on news, of course, like everyone else,” he said. “If you are doing a tabloid you cannot have dry, boring headlines, it should have some punch to it. But we do not lie, we do not make false accusations.”

FACT CHECKERS AND BOTS

Social media companies have come under increasing pressure to tackle disinformation on their platforms following accusations that Russia and Iran tried to meddle in domestic politics in the United States, Europe and elsewhere. Moscow and Tehran deny the allegations.

A report by the Swedish Defence Research Institute last week said the number of automated Twitter accounts discussing the upcoming election almost doubled in July from the previous month. Such so-called “bot” accounts shared articles from Samhallsnytt and Fria Tider more frequently than real people, the report said, and were 40 percent more likely to express support for the Sweden Democrats.

Facebook said its work with Viralgranskaren to fact check content on its sites helped it quickly identify “false news.”

The company declined to give specific figures about the amount or sources of false news it had recorded around the Swedish election, but said any flagged content is given a lower position on its site, a practice known as “downranking” which it says cuts views by 80 percent. Users who see disputed articles are also shown other sources of verified information, it said.

In a blog post on its website, Twitter says it “should not be the arbiter of truth”.

But the MSB’s counter influence team’s head Tofvesson said there had been a “positive increase” in the work of Facebook, Twitter and other social media companies to help safeguard the election, largely via better communication and coordination with local authorities.

Source: Right-wing sites swamp Sweden with “junk news” in tight election race