Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

For the record:

British comedian Sacha Baron Cohen has said if Facebook had existed in the 1930s it would have allowed Hitler a platform for his anti-Semitic beliefs.

The Ali G star singled out the social media company in a speech in New York.

He also criticised Google, Twitter and YouTube for pushing “absurdities to billions of people”.

Social media giants and internet companies are under growing pressure to curb the spread of misinformation around political campaigns.

Twitter announced in late October that it would ban all political advertising globally from 22 November.

Earlier this week Google said it would no longer allow political advertisers to “microtarget” voters based on browsing data or other factors.

Analysts say Facebook has come under increasing pressure to follow suit.

The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.

“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.

What did Baron Cohen say?

Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg who in October defended his company’s position not to ban political adverts that contain falsehoods.

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.

“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.

“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.

Earlier this month, an international group of lawmakers called for targeted political adverts on social media to be suspended until they are properly regulated.

The International Committee on Disinformation and Fake News was told that the business model adopted by social networks made “manipulation profitable”.

A BBC investigation into political ads for next month’s UK election suggested they were being targeted towards key constituencies and certain age groups.

Source: Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

Why are the U.S. immigration norms being tightened?

US immigration checking of social media noted in Indian media (a reminder to us all to be more mindful when on social media):

The story so far: On May 31, 2019, the U.S. Department of State introduced a change in online visa forms for immigrant (form DS-260) and non-immigrant (form DS-160) visas requiring applicants to list the social media handles they have used over the preceding five-year period. The newly released DS-160 and DS-260 forms ask, “Do you have a social media presence?” A drop-down menu provides a list of some 20 options, including Facebook, Instagram, Sina Weibo and Twitter. There is also a “NONE” option. Applicants are required to list only their handles, not their passwords. All sites will soon be listable, according to an administration official who spoke to The Hill, a Washington, D.C.-based political newspaper. The policy does not cover those eligible for the visa waiver programme and those applying for diplomatic visas and certain categories of official visas.

How did it come about?

The policy is part of U.S. President Donald Trump’s intent to conduct “extreme vetting” of foreigners seeking admission into the U.S. In March 2017, Mr. Trump issued an Executive Order asking the administration to implement a programme that “shall include the development of a uniform baseline for screening and vetting standards and procedures for all immigrant programs.”

In September 2017, the Department of Homeland Security started including “social media handles, aliases, associated identifiable information, and search results” in the files it keeps on each immigrant. The notice regarding this policy said those impacted would include Green Card holders and naturalised citizens. In March 2018, the State Department proposed a similar policy, but for all visa applicants — this is the policy now in effect. Earlier, only certain visa applicants identified for extra screening were required to provide such information. Asking visa applicants to volunteer social media history started during the Obama administration, which was criticised for not catching Tashfeen Malik, one of those who carried out a mass shooting in San Bernardino, California, in 2015. Malik had come to the U.S. on a K-1 fiancé visa, and had exchanged social media messages about jihad prior to her admission to the U.S.

How will it impact India?

Most Indians applying for U.S. visas will be covered by this policy. Over 955,000 non-immigrant visas (excluding A and G visas) and some 28,000 immigrant visas were issued to Indians in fiscal year 2018. So at least 10 lakh Indians — and these are just those whose visa applications were successful, not all applicants — will be directly impacted by the policy.

What lies ahead?

The new policy is expected to impact 14 million travellers to the U.S. and 700,000 immigrants worldwide according to the administration’s prior estimates. In some individual cases it is possible that the visa policy achieves what it is (ostensibly) supposed to — allow the gathering of social media information that results in the denial of a visa for an applicant who genuinely presents a security threat. However, the bluntness of the policy and its vast scope raise serious concerns around civil liberties including questions of arbitrariness, mass surveillance, privacy, and the stifling of free speech.

First, it is not unusual for an individual to not recall all their social media handles over a five-year period. Consequently, even if acting in good faith, it is entirely possible for individuals to provide an incomplete social media history. This could give consular officers grounds for denying a visa.

Second, there is a significant degree of discretion involved in determining what constitutes a visa-disqualifying social media post and this could stifle free speech. For instance, is criticising the President of the United States or posting memes about him (there are plenty of those on social media these days) grounds for visa denial? What about media professionals? Is criticising U.S. foreign policy ground for not granting someone a visa?

Third, one can expect processing delays with visas as social media information of applicants is checked. It is possible that individuals impacted by the policy will bring cases against the U.S. government on grounds of privacy or on grounds of visa delays. The strength of these cases depends on a number of factors, including whether they are brought by Green Card holders and naturalised citizens (who were impacted by the September 2017 policy, not the May 31 one) or non-immigrants. The courts could examine the U.S. government’s policy and ask whether it has discriminatory intent.

Source: Why are the U.S. immigration norms being tightened?

Why old, false claims about Canadian Muslims are resurfacing online

Of note:

In the summer of 2017, signs that seemed engineered to stoke anti-Muslim sentiment first appeared in a city park in Pitt Meadows, B.C.

“Many Muslims live in this area and dogs are considered filthy in Islam,” said the signs, which included the city’s logo. “Please keep your dogs on a leash and away from the Muslims who live in this community.”

After a spate of media coverage questioning their authenticity — and a statement from Pitt Meadows Mayor John Becker that the city didn’t make them — the signs were discredited and largely forgotten.

But almost two years later, a mix of right-wing American websites, Russian state media, and Canadian Facebook groups have made them go viral again, unleashing hateful comments and claims that Muslims are trying to “colonize” Western society.

The revival of this story shows how false, even discredited claims about Muslims in Canada find an eager audience in Facebook groups and on websites originating on both sides of the border, and how easily misinformation can be recirculated as the federal election approaches.

“Many people who harbour (or have been encouraged to hold) anti-Muslim feelings are looking for information to confirm their view that these people aren’t like them. This story plays into this,” Danah Boyd, a principal researcher at Microsoft and the founder of Data & Society, a non-profit research institute that studies disinformation and media manipulation, wrote in an email.

Boyd said a dubious story like this keeps recirculating “because the underlying fear and hate-oriented sentiment hasn’t faded.”

Daniel Funke, a reporter covering misinformation for the International Fact-Checking Network, said old stories with anti-Muslim aspects also recirculated after the recent fire at the Notre Dame cathedral in Paris.

“Social media users took real newspaper articles out of their original context, often years after they were first published, to falsely claim that the culprits behind the fire were Muslims,” he said. “The same thing has happened with health misinformation, when real news stories about product recalls or disease outbreaks go viral years after they were originally published.”

The signs about dogs first appeared in Hoffman Park in September 2017, and were designed to look official. They carried the logo of the city of Pitt Meadows and that of the Council on American-Islamic Relations (CAIR), a U.S. Muslim advocacy organization.

Media outlets reported on them after an image of one sign was shared online. Many noted that the city logo was falsely used and there was no evidence that actual Muslims were behind the messages.

A representative for CAIR told CBC News in 2017 that his organization had no involvement in the B.C. signs, but he did have an idea about why they were created.

“We see this on occasion where people try to be kind of an agent provocateur and use these kinds of messages to promote hostility towards Muslims and Islam,” Ibrahim Hooper said in an interview with CBC. “Sometimes people use the direct bigoted approach — we see that all too often in America and Canada, unfortunately — but other times they try and be a little more sophisticated or subtle.”

The Muslims of Vancouver Facebook page had a similar view, labelling it a case of “Bigots attempting to incite resentment and hatred towards Muslims.”

After the initial frenzy of articles about the signs, the story died down — until last week, when an American conservative website called The Stream published a story. It cited a 2017 report from CTV Vancouver, without noting that the incident was almost two years old.

“No Dogs: It Offends the Muslims,” read the headline on a story that cited the signs as an example of Muslims not integrating into Western society.

“That sign in the Canadian dog park tells us much that we’d rather not think about. That kind of sign notifies you when your country has been colonized,” John Zmirak wrote.

Zmirak’s post was soon summarized by state-funded Russian website Sputnik, and picked up by American conservative site Red State. Writing in Red State, Elizabeth Vaughn said “Muslims cannot expect Americans or Brits or anybody else to change their ways of life to accommodate them.” Conservative commentator Ann Coulter tweeted the Red State link to her 2.14 million followers, and the story was also cited by the right-wing website WND.

The Stream and Red State did not respond to emailed requests for comment. A spokesperson for Sputnik said its story made it clear to readers that the original incident happened in 2017. “I would like to stress that Sputnik has never mentioned that the flyers in question were created by Muslims, Sputnik just reported facts and indicated the sources,” Dmitry Borschevsky wrote in an email.

Nonetheless, the three stories generated more than 60,000 shares, reactions and comments on Facebook in less than a week. Some of that engagement also came thanks to right-wing Canadian Facebook groups and pages, bringing the dubious tale back to its original Canadian audience.

“Dogs can pick out evil! That’s why Death Cult adherents despise these lil canine truth detectors!” wrote one person in the “Canadians 1st Movement” Facebook group after seeing the Red State link.

“How about no muslims!” wrote one person after the Sputnik story was shared in the Canadian Combat Coalition National Facebook group. Another commenter in the group said he’d prefer to see Muslims “put down” instead of dogs.

On the page of anti-Muslim organization Pegida Canada, one commenter wrote, “I will take any dog over these animals.”

Those reactions were likely intended by whoever created the signs, according to Boyd, and it wasn’t the first incident of this type. In July 2016, flyers appeared in Manchester, England that asked residents to “limit the presence of dogs in the public sphere” out of sensitivity to the area’s “large Muslim community.”

The origin of the flyers was equally dubious, with evidence suggesting the idea may have been part of a campaign hatched on the anonymous message board 4chan. That’s where internet trolls often plan online harassment and disinformation campaigns aimed at generating outrage and media coverage.

“At this point, actors in 4chan have a lot of different motives, but there is no doubt that there are some who hold white nationalist attitudes and espouse racist anti-Muslim views,” Boyd said.

“There are also trolls who relish playing on political antagonisms to increase hostility and polarization. At the end of the day, the motivation doesn’t matter as much as the impact. And the impact is clear: these posters — and the conspiracists who amplify them — help intensify anti-Muslim sentiment in a way that is destructive to democracy.”

Source: Why old, false claims about Canadian Muslims are resurfacing online

Government surveillance of social media related to immigration more extensive than you realize | TheHill

Of note:

In June 2018, more than 400,000 people protested the Trump administration’s policy of separating families at the border. The following month saw a host of demonstrations in New York City on issues including racism and xenophobia, the abolition of Immigration and Customs Enforcement (ICE), and the National Rifle Association.

Given the ease of connecting online, it is unsurprising that many of these events got an organizing boost on social media platforms like Facebook or Twitter. A recent spate of articles did bring a surprise, however: the Department of Homeland Security (DHS) has been watching online too. Congress should demand that DHS detail the full extent of social media use and commit to ensuring that the programs are effective, non-discriminatory, and protective of privacy.

Last month, for instance, it was revealed that a Virginia-based intelligence firm used Facebook data to compile details about more than 600 protests against family separation. The firm sent its spreadsheet to the Department of Homeland Security, where the data was disseminated internally and evidently shared with the FBI and national fusion centers; these centers, which facilitate data sharing among federal, state, local, and tribal law enforcement, as well as the private sector, have been heavily criticized for violating Americans’ privacy and civil liberties while providing little of value.

In the meantime, Homeland Security Investigations — an arm of ICE created to combat criminal organizations, not collect information about lawful protests — assembled and shared a spreadsheet of the New York City demonstrations, labeling them with the tag “Anti-Trump Protests.” And as Central American caravans slowly traveled north, DHS’s Customs and Border Protection (CBP) drew on Facebook data to create dossiers on lawyers, journalists, and advocates — many of them U.S. citizens — providing services and documenting the situation on the southern border.

As shocking as these revelations are, DHS’s social media ambitions are both broader and more opaque. A recent report I co-wrote for the Brennan Center for Justice, based on a review of more than 150 government documents, examines how social media is used by four DHS agencies — ICE, CBP, TSA, and U.S. Citizenship and Immigration Services (USCIS) — and describes the deficiencies and risks of these programs.

First, DHS now uses social media in nearly every aspect of its immigration operations. Participants in the Visa Waiver Program, for instance — largely travelers from Western Europe — have been asked since late 2016 to voluntarily provide their social media handles. The Department of State recently won approval to demand the same of all visa applicants, nearly 15 million people per year; this data will be vetted against DHS holdings. While information from social media may not be the sole basis for denial, it could easily be combined with other factors to justify exclusion, a process that is likely to have a disproportionate impact on Muslim travelers and those coming from Latin America.

Travelers may have their social media data examined at the U.S. border as well, via warrantless searches of electronic devices undertaken by CBP and ICE. Between 2015 and 2017, the number of device searches carried out by CBP jumped more than threefold; one report suggests that about 20 percent are conducted on American travelers. (ICE does not reveal its figures.) CBP recently issued more stringent rules, though it remains to be seen how closely it will follow them; a December 2018 inspector general report concluded that the agency had failed to follow its prior procedures.

ICE operates under a decade-old policy allowing its agents to “search, detain, seize, retain, and share” electronic devices and any information on them — including social media — without individualized suspicion. Remarkably, ICE justifies this authority by pointing to centuries-old statutes, equating electronic devices with “merchandise” that customs inspectors were authorized to review under a 1790 Act passed by the First Congress. This approach puts the agency out of step with the Supreme Court, which recently recognized that treating a search of a cell phone as identical to a search of a wallet or purse “is like saying a ride on horseback is materially indistinguishable from a flight to the moon.”

The breadth of DHS’s social media monitoring raises the question: Is it effective? It is notable that a 2016 DHS brief reported that in three of four refugee vetting programs, the social media accounts “did not yield clear, articulable links to national security concerns,” even where a national security concern did exist. And a February 2017 Inspector General audit of seven social media pilot programs concluded that DHS had failed to establish any mechanisms to measure their effectiveness.

Indeed, content on social media can be difficult to decode under the best of circumstances. Natural language processing tools, used for some automated analysis, fail to accurately interpret 20-30 percent of the text they analyze, a gap that is compounded when it comes to unfamiliar languages or cultural contexts. Even human reviewers can fail to understand their own language if it’s filled with slang.

We now know far more about the scope of DHS’s efforts to collect and use social media, but there is much that remains obscured. Without robust, ongoing oversight, neither the public nor lawmakers can be confident that these programs are serving our national interest.

Source: Government surveillance of social media related to immigration more extensive than you realize | TheHill

Alex Jones Was Victimized by One Oligopoly. But He Perpetuated Another

Good take on social media, free speech and Alex Jones (and others of his ilk):

This month, Twitter joined Apple, Facebook, Spotify and YouTube in banning the popular right-wing conspiracy theorist Alex Jones from its platform. Like the other bans, Twitter’s decision was announced as a fait accompli, with opaque justifications ranging from “hate speech” to “abusive behavior.”

The seemingly arbitrary nature of these bans has raised fears from all political quarters. Alexis Madrigal, writing in The Atlantic, cited the development as proof that “these platforms have tremendous power, they have hardly begun to use it, and it’s not clear how anyone would stop them from doing so.” His sentiments were echoed by Ben Shapiro in the National Review, who expressed alarm at “social-media arbiters suddenly deciding that vague ‘hate speech’ standards ought to govern our common spaces.”

Even some on the left displayed concern. Steve Coll wrote in the New Yorker that “practices that marginalize the unconventional right will also marginalize the unconventional left,” and argued that we must defend even “awful speakers” in the interests of protecting free speech. Ben Wizner of the American Civil Liberties Union described the tech giants’ behavior as “worrisome,” and suggested the policies used to justify the bans could be “misused and abused.”

It is indeed worrying that some corporations now have the power to restrict how much influence someone can have on the marketplace of ideas. But what is more worrying, and what few people seem to be considering, is how Alex Jones was able to gain such influence in the first place. In my view, the ideological forces responsible for his rise are a greater threat to free speech than the corporate forces responsible for his “fall.” Principled defenders of free speech would therefore be unwise to rail against the former while ignoring the latter.

The reason tech giants like Twitter and Facebook are able to exert such worrying control over our speech is that they comprise an oligopoly, with no significant competitors. Such oligopolies tend to form in business due to the Matthew principle, which holds that advantage begets further advantage. If Facebook manages to get all your friends to use it, then Facebook’s chances of getting you to use it are drastically increased, because you want to be connected with your friends. This particular example of the Matthew principle is known as a “network effect.”

Crucially, network effects don’t just apply to free market economies; they also apply to the free market of ideas. Concepts that get more exposure will get more exposure. This virality can cause the arena of debate to quickly become dominated by an “oligopoly” of perspectives.
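
To make the “exposure begets exposure” claim concrete, here is a minimal simulation sketch (mine, not the essay author’s) of a preferential-attachment process: every new share goes to an idea with probability proportional to the exposure it already has. The idea count and share count are purely illustrative.

```python
import random

def simulate_exposure(num_ideas=100, num_shares=100_000, seed=42):
    """Rich-get-richer sketch: each new share picks an idea with
    probability proportional to the exposure it already has."""
    rng = random.Random(seed)
    exposure = [1] * num_ideas  # every idea starts with a single share
    for _ in range(num_shares):
        # Weighted choice: more exposure -> more likely to be shared again.
        idea = rng.choices(range(num_ideas), weights=exposure, k=1)[0]
        exposure[idea] += 1
    return exposure

if __name__ == "__main__":
    exposure = simulate_exposure()
    total = sum(exposure)
    top5 = sum(sorted(exposure, reverse=True)[:5])
    print(f"Top 5 of 100 ideas capture {top5 / total:.0%} of all exposure")
```

Even though every idea starts equal, a handful typically end up with several times their proportional share of attention, which is the dynamic the essay attributes to both the infotech market and the marketplace of ideas.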

Hence, just as the free market of infotech is now dictated by the Googles and Facebooks of the world, so too has the free market of ideas come to be controlled by a few political narratives, particularly the social-justice narrative of the left and the anti-globalism narrative of the right. The social-justice left dominates among the cultural elite, including the mainstream media, the literati, the tech industry, Hollywood and academia. The anti-globalist right, meanwhile, is popular among the general public, as evidenced by the success of the U.K.’s Brexit campaign, and the election of Donald Trump in the U.S. and a host of nativist parties in Europe, such as Poland’s Law and Justice and Italy’s Five Star movement.

The story of Alex Jones brings these two strands together, because he has fueled the surge of right-wing populism in large part by leveraging the power of tech oligopolies. Far from being a “fringe” figure (as he is often portrayed), Jones is a key conduit of a popular narrative, broadcasting to over 3.6 million unique online monthly viewers, and apparently having the ear of the American president (which may help explain why baseless conspiracy theories about a “deep state” keep circulating around the White House). Jones, in short, is an ambassador for one half of an ideological oligopoly, which is just as hostile to competition as the tech oligopoly.

But how could this be? To some, the very idea of an oligopoly on ideas may seem bizarre; we are all free to believe whatever we wish. Unfortunately, our brains did not evolve to understand the world but to survive it. Reality is software that doesn’t run well on our mental hardware, unless the display resolution is minimized. We therefore seek out stories, not because they are true, but because they reduce the incomprehensible into that which is comprehensible, giving us a counterfeit of truth whose elegant simplicity makes it seem truer than actual, authentic truth.

A typical mental schematic that allows us to do this is the Karpman drama triangle, which divides people into victims, oppressors and rescuers. We have a tendency to view events using this cognitive compression algorithm because it simplifies reality into drama, offering not just clarity to the confused, but also belonging to the lonely, purpose to the aimless, battle to the bored, and scapegoats to the vindictive.

The social-justice left and anti-globalist right both fully embrace the Karpman drama triangle as a lens for looking at the world. In the social-justice narrative, minorities are the victims, the white patriarchy is the oppressor, and the social-justice activists are the rescuers. In the populist-right narrative, the silent oppressed majorities constitute the victims, the globalist elites are the oppressors, and certain maverick figures (such as Alex Jones and Donald Trump) are the rescuers.

Anyone who doesn’t neatly fit into a corner of the drama triangles will either be shoehorned in, or ignored. This simplification of reality into a dramatic struggle is what makes these narratives so hostile to competing ideas; disagreement is viewed not as a legitimate difference of opinion, but as an attempt at oppression. And when you feel you are being oppressed, you can justify the use of any tactic to fight it.

This is why we see those on the social-justice left using their influence in media, academia and the tech industry to forcefully suffocate the expression of alternative viewpoints — including by the firing of those with different opinions, or by shouting them down at universities, or by physically assaulting them.

And on the populist right, we see similar tactics of intimidation and ostracism, whether through the harassment of climate scientists, the denial of security clearance to former CIA directors who won’t toe the president’s line, or the demonization of conservative pundits who fall out of love with Donald Trump.

Alex Jones himself has been among the biggest instigators of right-wing intimidation. For years, he has concocted lies about those who don’t agree with his narrative, claiming they are agents of foreign governments, literal demons, or child molesters. He also has suggested that his followers should take up arms against the nonbelievers (which is why Twitter suspended him), and his conspiracy theories have led his followers to harass and threaten people with violence.

Unfortunately, this sometimes has led to actual violence. In 2009, Richard Poplawski, who regularly commented on Jones’ Infowars website, and cross-posted many of Jones’ articles on neo-Nazi forums, killed three police officers with an AK-47. The following year, Byron Williams, who cited Jones as an influence on his thinking, engaged in a firefight with police, injuring two. A year later, Jared Lee Loughner, who counted among his favorite documentary films the Jones-produced Zeitgeist and Loose Change, attempted to assassinate U.S. representative Gabrielle Giffords, injuring her and 12 other people, and killing six. Later that year, Oscar Ortega, having watched the Jones-produced film The Obama Deception, shot at the White House. In 2014, Jerad and Amanda Miller, both regular commenters on Jones’ Infowars site, posted anti-government videos and then went on a shooting spree, killing three before dying themselves. Two years later, Edgar Maddison Welch, convinced by the Pizzagate conspiracy theory pushed by Jones, shot up the Comet Ping Pong pizza parlor in Washington D.C.

It is difficult to determine how much influence Jones’ views had on these atrocities. However, the link between hateful Infowars-style rhetoric on Facebook and hate crime was explored by an extensive study of 3,335 attacks against refugees in Germany, where the populist right-wing Alternative für Deutschland (AfD) has developed a major web presence. The study found that such attacks were strongly predicted by social media use: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by an average of 50 percent.

Violence is the most direct and dramatic way that the social-justice left and anti-globalist right censor speech. But it is just one tactic among many — including threats, doxings, firings, harassment, mobbings and demonization. This is why the radical left and right, led by demagogues like Alex Jones, represent an even greater threat to our speech than the tech giants. They are gradually turning the free market of ideas into a kind of ongoing hostage crisis, by which people are either afraid to speak their minds, or are doomed to have their words interpreted in the worst possible way when they do so.

I don’t want to make the same mistakes as Jones, so I should emphasize that these drama triangles that are so hostile to free expression weren’t engineered by a secret cabal of ideologues or tech CEOs. They arose organically, regulated only by laws of nature such as the Matthew principle, network effects, and the public’s demand for easy answers. Sure, there are individual Facebook employees who may be interested in pushing a political agenda, but Facebook as a business is not. It seeks to do what is best for profits: making its platform as inviting to as many people as possible.

That’s not to say it is successful in this venture. Corporations are ill-equipped to police the information traffic of millions of users, which is why they frequently get things wrong (such as censoring the Declaration of Independence as hate speech). And even when they get things “right,” they usually only end up benefiting their targets — as evidenced by the fact that, in the wake of Jones’ de-platforming by the major media companies, his Infowars app surged up the download charts. The greatest endorsement a conspiracy theorist can receive is censorship by authority figures. It’s a golden opportunity to portray themselves as the victim in their Karpman drama triangle.

So, if we can’t rely on powerful organizations such as governments or corporations to protect our voices from mob rule, what then?

In my view, it leaves only one real option: We must be the protectors of our own free speech, and habitually speak out not just against designated “oppressors” like the tech giants, but also against designated “victims” and “rescuers,” like Alex Jones, who seek to oppress by dehumanizing others as oppressors. And we must do all this without constructing our own drama triangle of oppression, or else we’ll become part of the very problem we seek to solve.

John Stuart Mill believed that in a free market of ideas, good ideas would naturally trump bad ones. But experience has shown that this won’t happen unless the marketplace is populated by those who actively seek truth and openness. Free speech is the foundation of all other rights. It is the seed of innovation, the wheel of progress, the space to breathe. It must therefore be protected at all costs — including, at times such as these, from itself.

Source: Alex Jones Was Victimized by One Oligopoly. But He Perpetuated Another

Inside YouTube’s Far-Right Radicalization Factory

Interesting study, symptomatic of the problems with social media companies:

YouTube is a readymade radicalization network for the far right, a new study finds.

The Google-owned video platform recently banned conspiracy outlet InfoWars and its founder Alex Jones for hate speech. But another unofficial network of fringe channels is pulling YouTubers down the rabbit hole of extremism, said the Tuesday report from research group Data & Society.

The study tracked 65 YouTubers—some of them openly alt-right or white nationalist, others who claim to be simply libertarians, and most of whom have voiced anti-progressive views—as they collaborated across YouTube channels. The result, the study found, is an ecosystem in which a person searching for video game reviews can quickly find themselves watching a four-hour conversation with white nationalist Richard Spencer.

Becca Lewis, the researcher behind the report, calls the group the Alternative Influence Network. Its members include racists like Spencer, Gamergate figureheads like Carl Benjamin (who goes by ‘Sargon of Akkad’), and talk-show hosts like Joe Rogan, who promotes guests from fringe ideologies. Not all people in the group express far-right political views themselves, but will platform guests who do. Combined, the 65 YouTubers account for millions of YouTube followers, who can find themselves clicking through a series of increasingly radical-right videos.

Take Rogan, a comedian and self-described libertarian whose 3.5 million subscribers recently witnessed him host a bizarre interview with Tesla founder Elon Musk. While Rogan might not express extreme views, his guests often tend to be more fringe. Last year, he hosted Benjamin, the anti-feminist who gained a large following for his harassment campaigns during Gamergate.

Rogan’s interview with Benjamin, which has nearly 2 million views, describes Benjamin as an “Anti-Identitarian liberal YouTuber.” It’s a misleading title for Rogan fans who might go on to view Benjamin’s work.

Benjamin, in turn, has also claimed not to support the alt-right. Like other less explicitly racist members of the network, he’s hyped his “not racist” cred by promoting livestreamed “debates” (a favorite term in these circles) with white supremacists.

But the line between “debate” and collaboration can be indistinct, as Lewis noted in her study. She pointed to one such debate between Benjamin and Spencer, which was moderated by white nationalist creep Jean-Francois Gariepy, and which briefly became the world’s top trending live video on YouTube, with more than 10,000 live viewers.

“In his video with [Richard] Spencer, Benjamin was presumably debating against scientific racism, a stance he frequently echoes,” Lewis wrote in her study. “However, by participating in the debate, he was building a shared audience—and thus, a symbiotic relationship—with white nationalists. In fact, Benjamin has become a frequent guest on channels that host such ‘debates,’ which often function as group entertainment as much as genuine disagreements.”

Debates are often better measures of rhetorical skill than they are of an idea’s merits. A well-spoken idiot might stand a good chance against a shy expert in a televised argument. When they disagreed during the four-hour livestream, Spencer, a more practiced speaker, mopped the floor with Benjamin. The debate earned Spencer new followers, some of whom appear to have been lured in by the other YouTubers’ thinly-disguised bigotry.

“I’ve never really listened to Spencer speak before,” one commenter wrote. “But it is immediately apparent that he’s on a whole different level.”

And Benjamin has been willing to collaborate with further-right YouTubers when the circumstances benefited him.

“In many ways, we do have similar objectives,” he told the openly racist YouTuber Millennial Woes in one video cited in the study. “We have the same enemies, right? I mean, you guys hate the SJWs, I hate the SJWs. I want to see the complete destruction of social justice. . . . If the alt-right took the place of the SJWs, I would have a lot less to fear.”

“Some of the more mainstream conservatives or libertarians are able to have it both ways,” Lewis told The Daily Beast on Tuesday. “They can say they reject the alt-right … but at the same time, there’s a lot of nudging and winking.”

Her report cited other instances of this phenomenon, including self-identified “classical liberal” YouTuber Dave Rubin, who promotes anti-progressive views on his talk show, where he hosts more extreme personalities, ostensibly for debate. But the debates can skew friendly. The study pointed to a conversation in which Rubin allowed far-right YouTuber Stefan Molyneux to make junk science claims unchecked. A description for the video encouraged viewers to do their own research, but provided links to Molyneux’s own content.

“It gives a generally unchallenged platform for that white nationalist and their ideas,” Lewis said on Tuesday.

YouTube’s algorithms can sometimes reward fringe content. Researcher Zeynep Tufekci previously highlighted the phenomenon when she noted that, after she watched footage of Donald Trump rallies, YouTube began recommending an increasingly radical series of white supremacist and conspiracy videos.

Lewis said YouTubers have learned to leverage the site’s algorithms, frontloading their videos with terms like “liberal” and “intersectional” in a bid to “hijack” search results that would typically be dominated by the left.

YouTube, which is built to keep users watching videos, might be a perfect recruiting platform for fringe movements, which want followers to remain similarly engaged.

“One way scholars of social movements often talk about recruitment is in terms of the capacity of the movement to bring in new recruits and then retain them,” Joan Donovan, a research lead at Data & Society, said on Tuesday. “Social media is optimized for engagement, which is both recruitment of an audience and retention of that audience. These groups often use the tools of analytics to make sure they continue to grow their networks.”

Source: Inside YouTube’s Far-Right Radicalization Factory

Exclusive: Right-wing sites swamp Sweden with ‘junk news’ in tight election race

A possible factor contributing to the shift in Swedish politics (this was scheduled before the results were known):

One in three news articles shared online about the upcoming Swedish election comes from websites publishing deliberately misleading information, most with a right-wing focus on immigration and Islam, Oxford University researchers say.

Their study, published on Thursday, points to widespread online disinformation in the final stages of a tightly-contested campaign which could mark a lurch to the right in one of Europe’s most prominent liberal democracies.

The authors, from the Oxford Internet Institute, labeled certain websites “junk news”, based on a range of detailed criteria. Reuters found the three most popular sites they identified have employed former members of the Sweden Democrats party; one has a former MP listed among its staff.

It was not clear whether the sharing of “junk news” had affected voting intentions in Sweden, but the study helps show the impact platforms such as Twitter and Facebook have on elections, and how domestic or foreign groups can use them to exacerbate sensitive social and political issues.

Prime Minister Stefan Lofven, whose center-left Social Democrats have dominated politics since 1914 but are now unlikely to secure a ruling majority, told Reuters the spread of false or distorted information online risked shaking “the foundations of democracy” if left unchecked.

The Institute, a department of Oxford University, analyzed 275,000 tweets about the Swedish election from a 10-day period in August. It counted articles shared from websites it identified as “junk news” sources, defined as outlets which “deliberately publish misleading, deceptive or incorrect information purporting to be real news”.

“Roughly speaking, for every two professional content articles shared, one junk news article was shared. Junk news therefore constituted a significant part of the conversation around the Swedish general election,” it said.

A Twitter spokesman declined to comment on the results of the study.

Facebook, where interactions between users are harder to track, said it was working with Swedish officials to help voters spot disinformation. It has also partnered with Viralgranskaren – an arm of Sweden’s Metro newspaper – to identify, demote and counterbalance “false news” on its site.

Joakim Wallerstein, head of communications for the Sweden Democrats, said he had no knowledge of or interest in the party sympathies of media outlets. Asked to comment on his party’s relationship with the sites identified by the study, he said he had been interviewed by one of them once.

“I think it is strange that a foreign institute is trying to label various news outlets in Sweden as ‘junk news’ and release such a report in connection to an election,” he said.

“DECEPTIVE TOOLS”

Swedish security officials say there is currently no evidence of a coordinated online attempt by foreign powers to sway the Sept. 9 vote, despite repeated government warnings about the threat.

But Mikael Tofvesson, head of the counter influence team at the Swedish Civil Contingencies Agency (MSB), a government agency tasked with safeguarding the election, said the widespread sharing of false or distorted information makes countries more vulnerable to hostile influence operations.

“Incorrect and biased reporting promotes a harder, harsher tone in the debate, which makes it easier to throw in disinformation and other deceptive tools,” he said.

Lisa-Maria Neudert, a researcher from the Oxford Internet Institute’s Project on Computational Propaganda, said most of the “junk news” in Sweden supported right-wing policies, and was largely focused on issues around immigration and Islam.

The top three “junk news” sources identified by the study – right-wing websites Samhallsnytt, Nyheter Idag and Fria Tider – accounted for more than 85 percent of the “junk news” content.

Samhallsnytt received donations through the personal bank account of a Sweden Democrat member between 2011 and 2013, when it operated under the name Avpixlat. A former Sweden Democrat member of parliament, who also previously ran the party’s youth wing, is listed on the Samhallsnytt website as a columnist.

Samhallsnytt often publishes articles saying Sweden is under threat from Islam. In June, for example, it said a youth soccer tournament in the second-biggest city had banned pork as “haram” – or forbidden under Islamic law. The article is still online with the headline: “Islam is the new foundation of the Gothia Cup – pork proclaimed ‘haram’”.

A tournament organizer told the Dagens Nyheter newspaper that caterers had not served pork for more than 10 years for practical reasons, and there was no ban against eating or selling pork at the event.

Samhallsnytt and Fria Tider did not respond to repeated requests for comment.

Commenting before the Oxford study was published, Nyheter Idag founder Chang Frick disputed the “junk news” label and said his website followed ethical journalistic practices, citing its membership of Sweden’s self-regulated Press Council body.

“Yes, we put our editorial perspective on news, of course, like everyone else,” he said. “If you are doing a tabloid you cannot have dry, boring headlines, it should have some punch to it. But we do not lie, we do not make false accusations.”

FACT CHECKERS AND BOTS

Social media companies have come under increasing pressure to tackle disinformation on their platforms following accusations that Russia and Iran tried to meddle in domestic politics in the United States, Europe and elsewhere. Moscow and Tehran deny the allegations.

A report by the Swedish Defence Research Institute last week said the number of automated Twitter accounts discussing the upcoming election almost doubled in July from the previous month. Such so-called “bot” accounts shared articles from Samhallsnytt and Fria Tider more frequently than real people, the report said, and were 40 percent more likely to express support for the Sweden Democrats.

Facebook said its work with Viralgranskaren to fact check content on its sites helped it quickly identify “false news.”

The company declined to give specific figures about the amount or sources of false news it had recorded around the Swedish election, but said any flagged content is given a lower position on its site, a practice known as “downranking” which it says cuts views by 80 percent. Users who see disputed articles are also shown other sources of verified information, it said.

In a blog post on its website, Twitter says it “should not be the arbiter of truth”.

But the MSB’s counter influence team’s head Tofvesson said there had been a “positive increase” in the work of Facebook, Twitter and other social media companies to help safeguard the election, largely via better communication and coordination with local authorities.

Source: Right-wing sites swamp Sweden with “junk news” in tight election race

MacDougall: Journalists are addicted to Twitter, and it’s poisoning their journalism

Valid points by MacDougall. Another observation, to be corrected as necessary by journalists, is the degree to which Twitter cuts into their time for more detailed investigation and reporting, resulting in less deep coverage of issues:

What’s the problem with the media?

Ping a journalist that question, and you’ll get back chapter and verse about the money problems facing newsrooms and the indifference of advertising-stealing platforms such as Facebook and Google.

Ask a random bloke on the street, however, and there’s a good chance the answer will be “bias” or “trust,” as in: “I don’t trust the press, they’re all biased.”

Ah, yes. The “fake” news. The “enemies of the people.” It’s not the best time to be repping the fourth estate.

The question now is how the press should fix their dismal approval ratings. A good start would be to stop being their own worst enemies. And a good place to start with that is ditching social media. It’s simply too easy for opinions to slip into posts that would never make it into news copy, leading to perceptions of bias.

Reporters should instead treat social media like the poison it is. For one, it’s not a representative sample of the public. Nor is the “shoot-first, think-later” mentality it encourages conducive to good journalism. Most importantly, social media reveals way too much of a reporter’s own bias to the people they cover and the people who read that coverage.

The ability of social media to reveal reporter bias has been apparent for years, but it’s shifted into overdrive now that Donald Trump has turned Twitter into grotesque political performance art, dragging an enraged global press corps with him, most of whom tweet their disgust or puzzlement at what the president does every day. And it’s affecting political journalism in every country. A day now isn’t a day without reporters broadcasting hot takes that risk tainting the coverage they ultimately provide.

And while it’s true most media organizations have guidelines or social media codes of conduct — most of which prohibit opining — they are largely self-enforced. Stretched editors simply can’t track their charges all day long on Twitter.

Forget about columnists, who are paid to give their opinion; it’s a mystery why straight news reporters would want to reveal anything about themselves or their views on public policy. Most politicians already think the press is biased — why risk confirming it for them in real-time?

Why, for example, would a freelance journalist want Conservative leader Andrew Scheer to know that they consider Scheer’s views on government a “ridiculous collection of straw men”? They might be, but good luck convincing Scheer’s people that anything you ever write will give him a fair shake.

Sadly, it’s not just the smaller fish in the profession who blunder in this way; the problem reaches up much higher.

Lots of people heaped scorn on Maxime Bernier’s clumsy foray into multiculturalism on Twitter before his split from the Conservative party, but did one of them really need to be the senior broadcast producer of Canada’s most-watched television news broadcast?

And then there was Rosemary Barton of the CBC, who suggested on Twitter that her network didn’t have a clue about Bernier’s motives for tweeting about diversity, even though reporter Evan Dyer inferred in his report that the one-year anniversary of the alt-right march in Charlottesville had informed Bernier’s timing, if not his thinking.

These examples are the kind of clever or knowing things journalists have always said to each other or their subjects. In private. Now they fire away for all to see. And for what? A bushel of RTs and “likes”?

Ten years or so into the folly of social media, it should by now be clear that it’s the ranters and shouters who get the most clicks, not the neutral observer. Reporters should stop trying to play the game.

They should instead go back to being a mystery. To valuing personal scarcity over ubiquity. To ditching Twitter, and forgetting Facebook. Or, at least limiting appearances there to the posting of their work. They should also say “no” to shouty panel appearances alongside partisans.

Reporters might even find the lack of distraction focuses them on their work. And if a politician’s B.S. needs to be called out in real-time, reporters should have an editor or colleague peek over their shoulder to give them a sense check on tone. Because even super-fact checkers such as Daniel Dale of the Toronto Star can appear biased owing to the sheer volume of material they post to their channels. And most reporters aren’t dedicated super-fact checkers, they’re just smart people with opinions, ones the news-consuming public shouldn’t know.

Political journalism is at a crossroads. Reporters need to keep doing their valuable work. But do the work, full stop. Keep your opinions to yourself. More people will believe the good work you do if they have no idea who in the hell you are, or what you think about what’s going on.

YouTube, the Great Radicalizer – The New York Times

Good article on how social media reinforces echo chambers and tends towards more extreme views:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
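
Chaslot’s actual tooling isn’t detailed here, but the general method (start from seed videos, follow the “up next” recommendations a few hops out, and tally where you land) can be sketched roughly as follows. The toy recommendation graph and the get_recommendations helper are invented placeholders; a real crawler would query YouTube itself rather than a hard-coded dictionary.

```python
from collections import Counter, deque

# Toy recommendation graph standing in for live "up next" data; the video
# names and links are invented purely to show the counting mechanics.
TOY_GRAPH = {
    "mainstream_1": ["mainstream_2", "partisan_1"],
    "mainstream_2": ["partisan_1", "partisan_2"],
    "partisan_1":   ["partisan_2", "extreme_1"],
    "partisan_2":   ["extreme_1", "extreme_2"],
    "extreme_1":    ["extreme_2", "extreme_1"],
    "extreme_2":    ["extreme_1", "extreme_2"],
}

def get_recommendations(video_id):
    """Placeholder fetcher: look up recommendations in the toy graph."""
    return TOY_GRAPH.get(video_id, [])

def crawl_recommendations(seed_ids, depth=4):
    """Breadth-first walk of the recommendation graph from seed videos,
    counting how often each video is surfaced as a recommendation."""
    counts = Counter()
    seen = set(seed_ids)
    queue = deque((vid, 0) for vid in seed_ids)
    while queue:
        video_id, level = queue.popleft()
        if level >= depth:
            continue
        for rec in get_recommendations(video_id):
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, level + 1))
    return counts

if __name__ == "__main__":
    for video, n in crawl_recommendations(["mainstream_1"]).most_common():
        print(f"{video}: recommended {n} time(s)")
```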

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

via YouTube, the Great Radicalizer – The New York Times

Identifying radical content online: Ryan Scrivens

I only wish we could use some of these analytical tools to better understand integration more broadly, and the role that social networks play in either increasing integration or allowing individuals and groups to remain within their own communities.

Violent extremists and those who subscribe to radical beliefs have left their digital footprints online since the inception of the World Wide Web. Notable examples include Anders Breivik, the Norwegian far-right terrorist convicted of killing 77 people in 2011, who was a registered member of a white supremacy web forum and had ties to a far-right social media site; Dylann Roof, the 21-year-old who murdered nine Black parishioners in Charleston, South Carolina, in 2015, and who allegedly posted messages on a white power website; and Aaron Driver, the Canadian suspected of planning a terrorist attack in 2016, who showed outright support for the so-called Islamic State on several social media platforms.

It should come as little surprise that, in an increasingly digital world, identifying signs of extremism online sits at the top of the priority list for counter-extremist agencies. Within this context, researchers have argued that successfully identifying radical content online, on a large scale, is the first step in reacting to it. Yet the number of individuals with access to the Internet is estimated to have more than tripled in just over a decade, from over 1 billion users in 2005 to more than 3.8 billion as of 2018. With all of these new users, more information has been generated, leading to a flood of data.

It is becoming increasingly difficult, if not practically impossible, to manually search for violent extremists, potentially violent extremists or even users who post radical content online, because the Internet contains an overwhelming amount of information. These new conditions have necessitated the creation of guided data-filtering methods, which may replace the laborious manual methods that have traditionally been used to identify relevant information.

Governments in Canada and around the globe have engaged researchers to develop advanced information technologies, machine-learning algorithms and risk-assessment tools to identify and counter extremism through the collection and analysis of big data available online. Whether this work involves finding radical users of interest, measuring digital pathways of radicalization or detecting virtual indicators that may prevent future terrorist attacks, the urgent need to pinpoint radical content online is one of the most significant policy challenges faced by law enforcement agencies and security officials worldwide.

We have been part of this growing field of research at the International CyberCrime Research Centre, hosted at Simon Fraser University’s School of Criminology. Our work has ranged from identifying radical authors in online discussion forums to understanding terrorist organizations’ online recruitment efforts on various online platforms. These experiences have provided us with insights we can offer regarding the policy implications of conducting large-scale data analyses of extremist content online.

First, there is much that practitioners and policy-makers can learn about extremist movements by studying their online activities. Online discussion forums of the radical right or social media accounts of radical Islamists, for example, are rich with information about how members of a particular movement communicate, how they construct their radical identities, and who they are targeting — discussions, behaviours and actions that can spill over into the offline realm. Exploring the dark corners of the Internet can be helpful in understanding or perhaps even predicting trends in activity or behaviour before they happen in the offline world. If, for example, analysts can track an author’s online activity or identify an online trend that is becoming more radical over time, analysts may be in a better position to assist law enforcement officials and the intelligence community. At the same time, it is important to note that online behaviour often does not translate into offline behaviour; authorities must proceed with caution to ascertain the specific nature of an instance of online activity and the potential threat it poses.
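
As a rough illustration of what tracking an online trend over time could look like, the sketch below proxies an author’s trajectory by the monthly share of their posts containing terms from a watchlist, then fits a simple slope to that series. The watchlist, sample posts and interpretation are illustrative assumptions for the example only, not the authors’ methodology or a validated risk measure.

```python
# Sketch of trend tracking: monthly share of an author's posts containing
# watchlist terms, plus a least-squares slope over the monthly series.
from collections import defaultdict
from statistics import mean

WATCHLIST = {"terma", "termb"}  # placeholder terms, not a real lexicon

def monthly_rates(posts):
    """posts: iterable of (year_month, text) pairs, e.g. ("2018-01", "...").
    Returns {month: share of that month's posts containing a watchlist term}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for month, text in posts:
        totals[month] += 1
        if WATCHLIST & set(text.lower().split()):
            hits[month] += 1
    return {m: hits[m] / totals[m] for m in sorted(totals)}

def trend_slope(rates):
    """Slope of rate vs. month index; a positive value suggests rising use
    of watchlist terms over time."""
    xs = list(range(len(rates)))
    ys = list(rates.values())
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs) or 1
    return num / den

# Illustrative usage with made-up posts:
posts = [("2018-01", "ordinary discussion"),
         ("2018-02", "a post containing terma"),
         ("2018-03", "terma and termb together")]
print(trend_slope(monthly_rates(posts)))
```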

Second, practitioners and policy-makers can gain valuable information about extremist movements by utilizing computational tools to study radical online activities. Our research suggests that it is possible to identify radical topics, authors or even behaviours in online spaces that contain an overwhelming amount of information. Signs of extremism can be found by drawing upon keyword-retrieval software that identifies and counts a specific set of words, or sentiment analysis programs that classify and categorize opinions in a piece of text. Large-scale, semi-automated analyses can provide practitioners and policy-makers with a macro-level understanding of extremist movements online, ranging from their radical ideology to their actual activities. This understanding, in turn, can assist in the development of counter-narratives or deradicalization and disengagement programs to counter violent extremism.
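
To make the two techniques named above more tangible, here is a small sketch using off-the-shelf stand-ins: a plain keyword counter for keyword retrieval, and NLTK’s VADER analyser for sentiment classification. The keyword set is a placeholder, and these generic tools are not the purpose-built software the researchers describe.

```python
# Sketch of keyword retrieval and sentiment analysis on a single post.
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from collections import Counter
from nltk.sentiment.vader import SentimentIntensityAnalyzer

KEYWORDS = {"keyword1", "keyword2"}          # placeholder watch terms
_vader = SentimentIntensityAnalyzer()

def keyword_counts(text):
    """Keyword retrieval: count occurrences of watch terms in a post."""
    tokens = text.lower().split()
    return Counter(t for t in tokens if t in KEYWORDS)

def sentiment(text):
    """Sentiment analysis: VADER's compound score runs from -1 (most
    negative) to +1 (most positive)."""
    return _vader.polarity_scores(text)["compound"]
```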

We must caution practitioners and policy-makers that our work suggests there is no simple typology or behaviour that best describes radical online activity or what constitutes radical content online. Instead, extremism comes in many shapes and sizes and varies with the online platform: some radical platforms, for example, promote blatant forms of extremism while other platforms encourage their subscribers to tone down the rhetoric and present their extremist views in a subtler manner. Nonetheless, a useful starting point in identifying signs of extremism online is to go directly to the source: identifying topics of discussion that are indeed radical at the core — with language that describes the “enemies” of the extreme right, for example, such as derogatory terms about Jews, Blacks, Muslims or LGBTQ communities.

Lastly, in order to gain a broader understanding of online extremism or to improve the means by which researchers and practitioners “search for a needle in a haystack,” social scientists and computer scientists should collaborate with one another. Historically, large-scale data analyses have been conducted by computer scientists and technical experts, which can be problematic in the field of terrorism and extremism research. These experts tend to take a high-level methodological perspective, measuring levels of — or propensity toward — radicalization or ways of identifying violent extremists or predicting the next terrorist attack. But searching for radical material online without a fundamental understanding of the radicalization process or how extremists and terrorists use the Internet can be counterproductive. Social scientists, on the other hand, may be well-versed in terrorism and extremism research, but most tend to be ill-equipped to manage big data — from collecting to formatting to archiving large volumes of information. Bridging the computer science and social science approaches to build on the strengths of each discipline offers perhaps the best chance to construct a useful framework for assisting authorities in addressing the threat of violent extremism as it evolves in the online milieu.

via Identifying radical content online