MacDougall: Let’s dump Trump’s accomplices: social media and cable news

Hard not to agree:

Now that Donald Trump has been fired by (enough of) the American people, it’s time to think about how to bin his accomplices: cable news and social media.

The Trump Era has been exhausting and the lion’s share of that exhaustion stems from our grossly expanded information economy. What used to come to us in dollops of papers and broadcasts is now streamed non-stop across all hours of the day on too many platforms to count. But there can be too much of a good thing. A glass of water quenches your thirst; a firehose knocks you over and leaves you drenched. It’s time to turn off the tap.

Whatever the intention at their points of creation, cable news and social media have flown a long way off course. To watch CNN or Fox News during (and after) the presidential election was to subject yourself to a marathon of preachy monologues and inquisitions, interspersed with furious nine-person panels in which various partisans were invited to bark at each other, not to listen to an argument or concede a point. It was a stark reminder of how far our public sphere has degraded.

But it’s actually worse than that. Cable news has also sought to make stars out of journalists, but journalism isn’t meant to be celebrity entertainment. It’s supposed to serve a nobler purpose. It’s the work that’s meant to be important, not the author. What’s more, inviting reporters on to discuss or opine on the news of the day makes them active participants, not impartial observers. What news value is there, for example, in having CNN’s Anderson Cooper call the President of the United States an “obese turtle”? Is it any wonder that trust in the news is at record lows?

And if that wasn’t bad enough, social media then picks up the baton and makes everything worse. Instead of bringing hidden expertise to bear on conversations, social media makes everyone an ‘expert’ on everything, no matter how little they know about the subject. Even worse, the loudest and most extreme takes get the most attention. As study after study has shown, social media encourages people to indulge their emotions, not to apply logic or reason. These channels encourage us to huddle amongst like-minded people and then help us radicalize. They make enemies of citizens instead of encouraging a common understanding.

That’s why the sooner we get our politics and news off 24/7 platforms, the better. If the past four years of Trumpism have taught us anything, it’s that our brains simply cannot handle the volume of information they’ve been receiving. Seeing so much means we retain little of actual value. And it’s not just politics that suffers from this consumption pattern. Our recall with music, for example, was much stronger when we had to buy physical albums than it is now when we can stream literally anything for a few bucks a month. Everything now goes in one ear (or eyeball) and out the other.

It turns out quality content isn’t a gas; it doesn’t expand to fill the available space. If anything, whatever quality exists in our news environment now gets choked by the amateur fumes polluting our screens and feeds. Using quality to compete for attention in the 24/7 information economy is to lose the battle before it starts. Everybody is more interested in the outrage. A better approach would be to evacuate the pitch and find a new place to play, somewhere it has a chance of being noticed.

Pulling news content off social media would be a risk, yes, but it’s less of a risk than hoping the current information environment will improve. The news can either die on its terms or someone else’s, and right now social media companies and cable news programmers are incentivized to chase virality and outrage, not analysis or introspection. More importantly, their current output is cheap, unlike quality journalism. They do not, as presently constructed, serve a civic good. We wouldn’t miss them if they were gone.

Of course, we can’t actually bin cable news and social media. For one, the purveyors of cable news and social media make too much money doing it. They won’t stop. But we can make the choice to stop watching and clicking.

It would help if media outlets took the first step of no longer feeding the outrage machine with their content, its lifeblood. It would also help if they forbade their reporters from appearing on cable shows. We have enough data now to know that social media and cable news aren’t gateways to serious news consumption; they’re pathways to polarization and misinformation. They are platforms for the already convinced. More pertinently, they’re not serious money makers for news organizations. Media organizations need to make their content scarce, not ubiquitous. It’s time to put up paywalls and demand money for quality.

And now that we’re all properly exhausted, people might be open to a return to the subscription model. I know my mood has improved significantly since I prioritized one paper in the morning to the exclusion of all others. And while I might miss some stories because of it, I trust in the quality of my morning read to know that I won’t be out of too many important loops.

As strange as it seems after years of the firehose, we’ll have to consume less to understand more.

Andrew MacDougall is a director at Trafalgar Strategy, and a former Head of Communications to Prime Minister Stephen Harper

Source: Let’s dump Trump’s accomplices: social media and cable news

Stopping Online Vitriol at the Roots: With the election upon us, we’re awash in misleading and angry information. Here’s what we can do.

Some useful pointers, not just applicable to the USA post-election:

America, it’s one day before a pivotal election, and we’re awash in a lot of garbage information and online vitriol. It comes from strangers on the internet, scammers in our text messages, disreputable news organizations and even our friends and family.

Whitney Phillips, an assistant professor in the department of communication and rhetorical studies at Syracuse University who writes about polluted information, says that all of this is making our brains go haywire.

With the U.S. election ginning up misleading information and the nonstop political discussions online wearing many of us out, I spoke to her about how we can individually and collectively fight back. Here are edited excerpts from our discussion:

You’ve written that angry conversations online and misleading information essentially short-circuit our brains. How?

When our brains are overloaded, and we’re confronted constantly with upsetting or confusing information, it sends us into a state in which we’re less capable of processing information. We say things we probably shouldn’t. People get retweet-happy. It’s not productive, even when people have good intentions and think they’re helping.

How do we stop that process?

I’ve been researching how mindfulness meditation practices can help us navigate this information hellscape. When you see or read something that triggers an emotional reaction, take a moment to breathe and try to establish some emotional space. It doesn’t mean you shouldn’t say the critical thing you’re thinking, but you should first reflect on the most constructive thing to do next.

But we don’t tend to think that we’re the ones acting irresponsibly or irrationally. We think the people who disagree with us are irrational and irresponsible.

Most people think if they’re not setting out to do damage or don’t have hate in their hearts, then they don’t have to consider what they do. But even if we aren’t vicious ourselves, we’re still fundamentally a part of what information spreads and how.

We all affect the ecology around us. Bad actors like high-profile influencers can scar the land, but everyone else does, too. The more information pollution there is in the landscape, the less functional our democracy is. If you feel that everything is terrible and everyone lies, then people don’t want to engage in civic discourse.

This imposes a lot of personal responsibility on a problem that is much bigger than us as individuals.

Yes, individual solutions are not enough. We all can make better choices, but that means nothing if we’re not also thinking about structural, systemic reasons that we’re forced to confront bad information in the first place.

What are those structural forces? What can be done to make the information environment better at the structural level?

For us to understand how bad information travels, we have to think about all the forces that contribute to it — decisions made by the internet platforms, broader capitalist forces, local and national influences. And that includes you. All of them feed into each other.

Part of the problem is that people haven’t understood how information works, or how the recommendation algorithms of social media companies shape what we see online. If people understand, they can imagine a different world and fight to change the system.

I’m tempted to unplug the internet and go live in a cave. Should I?

We need to find a way to balance between evacuating from the hurricane and running toward the hurricane. If we only evacuate, we’re not doing our part as citizens, and we force people on the informational front lines to bear that burden. If we only run toward the storm, we’ll burn out.

Source: https://www.nytimes.com/2020/11/02/technology/stopping-election-misinformation.html

Liberal, Conservative MPs join international task force to curb anti-Semitism online

Of note. Not an easy task, and one whose approach should also apply to other forms of hate, whether anti-Black, anti-Muslim, anti-Indigenous, anti-LGBTQ or directed at other minorities:

Two members of Parliament are joining forces with legislators in four other countries in an international effort to force web giants to curb the proliferation of anti-Semitic content online.

Liberal MP Anthony Housefather and Conservative MP Marty Morantz are part of a new task force that includes politicians from Australia, Israel, the United Kingdom, and the United States.

A report out of the U.K. this summer said online incidents of anti-Semitism were on the rise in that country, driven by conspiracy theories about Jews being responsible for the COVID-19 pandemic.

In Canada, advocacy group B’nai Brith has said anti-Semitic incidents are up overall, with an 11 per cent rise in online harassment that often advocates genocide.

But how different countries measure and define the problem is a barrier to convincing web companies to address it, said Housefather.

The point of the task force is to get like-minded countries to agree on how to define the problem, how to solve it, convince their respective legislatures to pass similar laws and then collectively pressure the companies to act, he said.

“If we can come up with something that’s common to everybody, it will make life much easier for the providers to co-operate with us,” he said.

The task force is getting underway just as the federal Liberals promised in last week’s throne speech to take more action to curb online hate as part of an effort to address systemic racism.

Housefather said the task force’s initial work predates that pledge but he hopes it can support the government’s own efforts.

Social media companies have been under sustained pressure to do more to address online hate, and give users better tools for reporting instances of it.

Earlier this year, Twitter began flagging some tweets from U.S. President Donald Trump for violating its policies, saying they included threats of harm against an identifiable group.

But, both Housefather and Morantz said, Twitter does nothing when the Iranian leader Ayatollah Ali Khamenei puts out tweets calling for the destruction of Israel or uses violent or potentially racist language to describe the state.

Twitter said earlier this year in response to criticism over that approach that Khamenei’s remarks amounted to “foreign policy sabre-rattling.”

Morantz said while the task force is focused on anti-Semitism, the work also applies more broadly.

“Hate against one group online is really a concern to all groups,” Morantz said.

“We need to emphasize that if we can’t protect one minority we can’t protect any of them.”

The Liberal government has repeatedly pledged to do more to combat hate speech online.

During the last election, they promised new regulations for social media platforms, including a requirement that they remove “illegal content, including hate speech, within 24 hours or face significant penalties.”

Critics, including conservative media outlets like True North and The Rebel, have accused the Liberals of wanting to crack down on free speech.

Morantz said a distinction must be made between free speech and that which breaks existing criminal laws. The focus needs to be on the latter, he said.

He refused, however, to comment on a fellow Tory MP who recently circulated a message on Twitter that was criticized for using an anti-Semitic trope.

In August, B.C. MP Kerry-Lynne Findlay shared a video of Finance Minister Chrystia Freeland and liberal philanthropist George Soros, saying Canadians ought to be alarmed by their “closeness.”

Soros is often linked to anti-Semitic conspiracy theories. Findlay later took down the tweet and apologized for sharing content from what she described as a source that promotes “hateful conspiracy theories.”

Housefather was among those who spoke out about Findlay’s tweet. He said he accepts her apology, but the incident highlights the issue.

“People often innocently retweet something without understanding the implications of it,” he said.

What needs to happen is for social media platforms to step up and figure out a way to flag the content, he said.

“If the media platform lets them know that, they can make a conscious decision whether or not they want to retweet it, knowing that it’s been flagged as being anti-Semitic content or other types of racist, misogynistic, etc., content.”

Source: Liberal, Conservative MPs join international task force to curb anti-Semitism online

Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Harsh but accurate:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

The news of the last few weeks highlighted both the good and the bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement for addressing an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated for violent tactics. The bad: Waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture war issues, rather than public health guidance that would enable the economy to reopen safely.

The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge that they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” If challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of the proposed solution on business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms continue to be successful at framing the issues in a favorable light.

When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away, even as new ones develop. For example, banning Alex Jones removed conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.

The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.

We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.

Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why they will not: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion across its four large platforms. The company does not disclose the number of messages posted each day, but it is likely in the hundreds of millions, if not a billion or more, just on Facebook. Substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.

The second hurdle is latency, which describes the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but humans can take minutes or days. This means a large number of messages will circulate for some time before eventually being removed. Harm will occur in that interval. It is tempting to imagine that AI can solve everything, but that is a long way off. AI systems are built on data sets from older systems, and they are not yet capable of interpreting nuanced content like hate speech.

The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms. Every year, they shift an ever larger percentage of their spending to them, from which platforms derive massive profits and wealth. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.

Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use the platform in ways that would normally be prohibited. For example, when journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, Facebook changed its terms of service, rather than pulling the ads. In addition, Facebook chose not to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.

Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithmic amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on First Amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.

We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.

Source: Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Social Media Giants Support Racial Justice. Their Products Undermine It. Shows of support from Facebook, Twitter and YouTube don’t address the way those platforms have been weaponized by racists and partisan provocateurs.

Of note. “Thoughts and prayers” rather than action:

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet, and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off group recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.

Source: Social Media Giants Support Racial Justice. Their Products Undermine It.

Spy agency says Canadians are targets of foreign influence campaigns

More on foreign influence and interference:

Canadians are more exposed to “influence” operations than ever before, according to an internal assessment from the country’s electronic spy agency.

A 2018 memo from the Communications Security Establishment (CSE) warned that the rise of “web technology” like social media, along with Canadians’ changing media consumption habits, makes the population much more likely to encounter efforts by foreign powers to shape domestic political opinion.

“These new systems have generated unintended threats to the democratic process, as they deprive the public of accurate information, informed political commentary and the means to identify and ignore fraudulent information,” reads the memo, classified as Canadian Eyes Only.

“Foreign states have harnessed the new online influence systems to undertake influence activities against Western democratic processes, and they use cyber capabilities to enhance their influence activities through, for example, cyber espionage.”

“Foreign states steal and release information, modify or make information more compelling and distracting, create fraudulent or distorted ‘news,’ or amplify fringe and sometimes noxious opinions.”

The memo was prepared as Canada’s intelligence agencies were engaged in an exercise to protect the 2019 federal election from foreign interference.

Elections across the democratic world — the United States, France, the United Kingdom, Germany and the European Union — have in recent years been the targets of misinformation and cyberespionage campaigns from hostile countries.

There is no evidence that Canada’s recent federal election was the target of sophisticated cyber espionage or misinformation campaigns.

But another document prepared by CSE makes clear that Canadian politicians have already been targeted by foreign “influence” campaigns.

An undated slide deck prepared by the CSE suggested “sources linked to Russia popularized (then Global Affairs Minister Chrystia) Freeland’s family history” and targeted Defence Minister Harjit Sajjan’s appearance and turban in Russian-language media outlets in the Balkans.

The agency appears to be referring to stories, which were reported by mainstream Canadian news outlets, suggesting Freeland’s grandfather edited a Nazi-associated newspaper in occupied Poland.

The stories were “very likely intended to cause personal reputational damage in order to discredit the Government (of) Canada’s ongoing diplomatic and military support for Ukraine, to delegitimize Canada’s decision to enact the Justice for Victims of Corrupt Foreign Officials Act, and the 2018 expulsion of several Russian diplomats,” the documents, first reported by Global News, state.

The attacks against Sajjan, meanwhile, were “almost certainly” intended to discredit the NATO presence in Latvia, where Canadian forces are deployed as part of a NATO mission to deter Russian expansion after the invasion of Crimea.

“Since Canada’s deployment to Latvia, subtle and overtly racist comments pertaining to … Sajjan’s appearance, particularly his turban, have consistently appeared across Russian-language media in the Baltic region,” the documents read.

“Even ostensibly professional news sources are not above such descriptions. When … Sajjan attended a conference in Latvia in October 2017, he was described by Vesti.lv as ‘a large swarthy man in a big black turban.’”

Compared to some of the attacks on Western democracies, those two influence campaigns were minor in scale and impact. But the intelligence agency suggested that more and more countries are turning to cyber capabilities to further their own goals at the expense of other nations. And CSE’s analysis suggests they’re willing to play the long game.

“In the longer-term, influence activities, both cyber and human, are likely to challenge the transparency and independence of the decision-making process, reduce public trust (and) confidence in institutions, and push policy in directions inimical to Canadian interests,” the documents, released under access to information law, read.

“Many European states and some private companies have begun to develop countermeasures to malicious activities aimed at democratic processes, including increasing public understanding and resilience. However, little has been done to create robust, institutionalized multilateral responses.”

Parliament’s new national security review committee has completed a review of foreign espionage activities in Canada and submitted it to Prime Minister Justin Trudeau. The classified report detailing their findings is expected to be released early in 2020, once the House of Commons resumes sitting.

Source: Spy agency says Canadians are targets of foreign influence campaigns

Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

For the record:

British comedian Sacha Baron Cohen has said if Facebook had existed in the 1930s it would have allowed Hitler a platform for his anti-Semitic beliefs.

The Ali G star singled out the social media company in a speech in New York.

He also criticised Google, Twitter and YouTube for pushing “absurdities to billions of people”.

Social media giants and internet companies are under growing pressure to curb the spread of misinformation around political campaigns.

Twitter announced in late October that it would ban all political advertising globally from 22 November.

Earlier this week Google said it would not allow political advertisers to target voters using “microtargeting” based on browsing data or other factors.

Analysts say Facebook has come under increasing pressure to follow suit.

The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.

“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.

What did Baron Cohen say?

Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg who in October defended his company’s position not to ban political adverts that contain falsehoods.

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.

“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.

“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.

Earlier this month, an international group of lawmakers called for targeted political adverts on social media to be suspended until they are properly regulated.

The International Committee on Disinformation and Fake News was told that the business model adopted by social networks made “manipulation profitable”.

A BBC investigation into political ads for next month’s UK election suggested they were being targeted towards key constituencies and certain age groups.

Source: Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

Why are the U.S. immigration norms being tightened?

US immigration checking of social media, as noted in Indian media (a reminder to us all to be more mindful on social media):

The story so far: On May 31, 2019, the U.S. Department of State introduced a change in online visa forms for immigrant (form DS-260) and non-immigrant visas (form DS-160) requiring applicants to register their social media handles over a five-year period. The newly released DS-160 and DS-260 forms ask, “Do you have a social media presence?” A drop-down menu provides a list of some 20 options, including Facebook, Instagram, Sina Weibo and Twitter. There is also a “NONE” option. Applicants are required to list their handles alone and not passwords. All sites will soon be listable, according to an administration official who spoke to The Hill, a Washington DC-based newspaper. The policy does not cover those eligible for the visa waiver programme and those applying for diplomatic visas and certain categories of official visas.

How did it come about?

The policy is part of U.S. President Donald Trump’s intent to conduct “extreme vetting” of foreigners seeking admission into the U.S. In March 2017, Mr. Trump issued an Executive Order asking the administration to implement a programme that “shall include the development of a uniform baseline for screening and vetting standards and procedures for all immigrant programs.”

In September 2017, the Department of Homeland Security started including “social media handles, aliases, associated identifiable information, and search results” information in the files it keeps on each immigrant. The notice regarding this policy said those impacted would include Green Card holders and naturalised citizens. In March 2018, the State Department proposed a similar policy, but for all visa applicants — this is the policy now in effect. Earlier, only certain visa applicants identified for extra screening were required to provide such information. Asking visa applicants to volunteer social media history started during the Obama administration, which was criticised for not catching Tashfeen Malik, one of those who carried out a mass shooting in San Bernardino, California, in 2015. Malik had come to the U.S. on a K-1 fiancée visa, and had exchanged social media messages about jihad prior to her admission to the U.S.

How will it impact India?

Most Indians applying for U.S. visas will be covered by this policy. Over 955,000 non-immigrant visas (excluding A and G visas) and some 28,000 immigrant visas were issued to Indians in fiscal year 2018. So at least 10 lakh Indians — and these are just those whose visa applications succeeded, not all applicants — will be directly impacted by the policy.

What lies ahead?

The new policy is expected to impact 14 million travellers to the U.S. and 700,000 immigrants worldwide according to the administration’s prior estimates. In some individual cases it is possible that the visa policy achieves what it is (ostensibly) supposed to — allow the gathering of social media information that results in the denial of a visa for an applicant who genuinely presents a security threat. However, the bluntness of the policy and its vast scope raise serious concerns around civil liberties including questions of arbitrariness, mass surveillance, privacy, and the stifling of free speech.

First, it is not unusual for an individual to not recall all their social media handles over a five-year period. Consequently, even if acting in good faith, it is entirely possible for individuals to provide an incomplete social media history. This could give consular officers grounds for denying a visa.

Second, there is a significant degree of discretion involved in determining what constitutes a visa-disqualifying social media post and this could stifle free speech. For instance, is criticising the President of the United States or posting memes about him (there are plenty of those on social media these days) grounds for visa denial? What about media professionals? Is criticising U.S. foreign policy ground for not granting someone a visa?

Third, one can expect processing delays with visas as applicants’ social media information is checked. It is possible that individuals impacted by the policy will bring cases against the U.S. government on grounds of privacy or of visa delays. The strength of these cases depends on a number of factors, including whether they are brought by Green Card holders and naturalised citizens (who were impacted by the September 2017 policy, not the May 31 one) or non-immigrants. The courts could examine the U.S. government’s policy and ask whether it has discriminatory intent.

Source: Why are the U.S. immigration norms being tightened?

Why old, false claims about Canadian Muslims are resurfacing online

Of note:

In the summer of 2017, signs that seemed engineered to stoke anti-Muslim sentiment first appeared in a city park in Pitt Meadows, B.C.

“Many Muslims live in this area and dogs are considered filthy in Islam,” said the signs, which included the city’s logo. “Please keep your dogs on a leash and away from the Muslims who live in this community.”

After a spate of media coverage questioning their authenticity — and a statement from Pitt Meadows Mayor John Becker that the city didn’t make them — the signs were discredited and largely forgotten.

But almost two years later, a mix of right-wing American websites, Russian state media, and Canadian Facebook groups have made them go viral again, unleashing hateful comments and claims that Muslims are trying to “colonize” Western society.

The revival of this story shows how false, even discredited claims about Muslims in Canada find an eager audience in Facebook groups and on websites originating on both sides of the border, and how easily misinformation can be recirculated as the federal election approaches.

“Many people who harbour (or have been encouraged to hold) anti-Muslim feelings are looking for information to confirm their view that these people aren’t like them. This story plays into this,” Danah Boyd, a principal researcher at Microsoft and the founder of Data & Society, a non-profit research institute that studies disinformation and media manipulation, wrote in an email.

Boyd said a dubious story like this keeps recirculating “because the underlying fear and hate-oriented sentiment hasn’t faded.”

Daniel Funke, a reporter covering misinformation for the International Fact-Checking Network, said old stories with anti-Muslim aspects also recirculated after the recent fire at the Notre Dame cathedral in Paris.

“Social media users took real newspaper articles out of their original context, often years after they were first published, to falsely claim that the culprits behind the fire were Muslims,” he said. “The same thing has happened with health misinformation, when real news stories about product recalls or disease outbreaks go viral years after they were originally published.”

The signs about dogs first appeared in Hoffman Park in September 2017, and were designed to look official. They carried the logo of the city of Pitt Meadows and that of the Council on American-Islamic Relations (CAIR), a U.S. Muslim advocacy organization.

Media outlets reported on them after an image of one sign was shared online. Many noted that the city logo was falsely used and there was no evidence that actual Muslims were behind the messages.

A representative for CAIR told CBC News in 2017 that his organization had no involvement in the B.C. signs, but he did have an idea about why they were created.

“We see this on occasion where people try to be kind of an agent provocateur and use these kinds of messages to promote hostility towards Muslims and Islam,” Ibrahim Hooper said in an interview with CBC. “Sometimes people use the direct bigoted approach — we see that all too often in America and Canada, unfortunately — but other times they try and be a little more sophisticated or subtle.”

The Muslims of Vancouver Facebook page had a similar view, labelling it a case of “Bigots attempting to incite resentment and hatred towards Muslims.”

After the initial frenzy of articles about the signs, the story died down — until last week, when an American conservative website called The Stream published a story. It cited a 2017 report from CTV Vancouver, without noting that the incident was almost two years old.

“No Dogs: It Offends the Muslims,” read the headline on a story that cited the signs as an example of Muslims not integrating into Western society.

“That sign in the Canadian dog park tells us much that we’d rather not think about. That kind of sign notifies you when your country has been colonized,” John Zmirak wrote.

Zmirak’s post was soon summarized by state-funded Russian website Sputnik, and picked up by American conservative site Red State. Writing in Red State, Elizabeth Vaughn said “Muslims cannot expect Americans or Brits or anybody else to change their ways of life to accommodate them.” Conservative commentator Ann Coulter tweeted the Red State link to her 2.14 million followers, and the story was also cited by the right-wing website WND.

The Stream and Red State did not respond to emailed requests for comment. A spokesperson for Sputnik said its story made it clear to readers that the original incident happened in 2017. “I would like to stress that Sputnik has never mentioned that the flyers in question were created by Muslims, Sputnik just reported facts and indicated the sources,” Dmitry Borschevsky wrote in an email.

Nonetheless, the three stories generated more than 60,000 shares, reactions and comments on Facebook in less than a week. Some of that engagement also came thanks to right-wing Canadian Facebook groups and pages, bringing the dubious tale back to its original Canadian audience.

“Dogs can pick out evil! That’s why Death Cult adherents despise these lil canine truth detectors!” wrote one person in the “Canadians 1st Movement” Facebook group after seeing the Red State link.

“How about no muslims!” wrote one person after the Sputnik story was shared in the Canadian Combat Coalition National Facebook group. Another commenter in the group said he’d prefer to see Muslims “put down” instead of dogs.

On the page of anti-Muslim organization Pegida Canada, one commenter wrote, “I will take any dog over these animals.”

Those reactions were likely intended by whoever created the signs, according to Boyd, and it wasn’t the first incident of this type. In July 2016, flyers appeared in Manchester, England, that asked residents to “limit the presence of dogs in the public sphere” out of sensitivity to the area’s “large Muslim community.”

The origin of the flyers was equally dubious, with evidence suggesting the idea may have been part of a campaign hatched on the anonymous message board 4chan. That’s where internet trolls often plan online harassment and disinformation campaigns aimed at generating outrage and media coverage.

“At this point, actors in 4chan have a lot of different motives, but there is no doubt that there are some who hold white nationalist attitudes and espouse racist anti-Muslim views,” Boyd said.

“There are also trolls who relish playing on political antagonisms to increase hostility and polarization. At the end of the day, the motivation doesn’t matter as much as the impact. And the impact is clear: these posters — and the conspiracists who amplify them — help intensify anti-Muslim sentiment in a way that is destructive to democracy.”

Source: Why old, false claims about Canadian Muslims are resurfacing online

Government surveillance of social media related to immigration more extensive than you realize | TheHill

Of note:

In June 2018, more than 400,000 people protested the Trump administration’s policy of separating families at the border. The following month saw a host of demonstrations in New York City on issues including racism and xenophobia, the abolition of Immigration and Customs Enforcement (ICE), and the National Rifle Association.

Given the ease of connecting online, it is unsurprising that many of these events got an organizing boost on social media platforms like Facebook or Twitter. A recent spate of articles did bring a surprise, however: the Department of Homeland Security (DHS) has been watching online too. Congress should demand that DHS detail the full extent of social media use and commit to ensuring that the programs are effective, non-discriminatory, and protective of privacy.

Last month, for instance, it was revealed that a Virginia-based intelligence firm used Facebook data to compile details about more than 600 protests against family separation. The firm sent its spreadsheet to the Department of Homeland Security, where the data was disseminated internally and evidently shared with the FBI and national fusion centers; these centers, which facilitate data sharing among federal, state, local, and tribal law enforcement, as well as the private sector, have been heavily criticized for violating Americans’ privacy and civil liberties while providing little of value.

In the meantime, Homeland Security Investigations — an arm of ICE created to combat criminal organizations, not collect information about lawful protests — assembled and shared a spreadsheet of the New York City demonstrations, labeling them with the tag “Anti-Trump Protests.” And as Central American caravans slowly traveled north, DHS’s Customs and Border Protection (CBP) drew on Facebook data to create dossiers on lawyers, journalists, and advocates — many of them U.S. citizens — providing services and documenting the situation on the southern border.

As shocking as these revelations are, DHS’s social media ambitions are both broader and more opaque. A recent report I co-wrote for the Brennan Center for Justice, based on a review of more than 150 government documents, examines how social media is used by four DHS agencies — ICE, CBP, TSA, and U.S. Citizenship and Immigration Services (USCIS) — and describes the deficiencies and risks of these programs.

First, DHS now uses social media in nearly every aspect of its immigration operations. Participants in the Visa Waiver Program, for instance — largely travelers from Western Europe — have been asked since late 2016 to voluntarily provide their social media handles. The Department of State recently won approval to demand the same of all visa applicants, nearly 15 million people per year; this data will be vetted against DHS holdings. While information from social media may not be the sole basis for denial, it could easily be combined with other factors to justify exclusion, a process that is likely to have a disproportionate impact on Muslim travelers and those coming from Latin America.

Travelers may have their social media data examined at the U.S. border as well, via warrantless searches of electronic devices undertaken by CBP and ICE. Between 2015 and 2017, the number of device searches carried out by CBP jumped more than threefold; one report suggests that about 20 percent are conducted on American travelers. (ICE does not reveal its figures.) CBP recently issued more stringent rules, though it remains to be seen how closely it will follow them; a December 2018 inspector general report concluded that the agency had failed to follow its prior procedures.

ICE operates under a decade-old policy allowing its agents to “search, detain, seize, retain, and share” electronic devices and any information on them — including social media — without individualized suspicion. Remarkably, ICE justifies this authority by pointing to centuries-old statutes, equating electronic devices with “merchandise” that customs inspectors were authorized to review under a 1790 Act passed by the First Congress. This approach puts the agency out of step with the Supreme Court, which recently recognized that treating a search of a cell phone as identical to a search of a wallet or purse “is like saying a ride on horseback is materially indistinguishable from a flight to the moon.”

The breadth of DHS’s social media monitoring raises the question: Is it effective? It is notable that a 2016 DHS brief reported that in three of four refugee vetting programs, the social media accounts “did not yield clear, articulable links to national security concerns,” even where a national security concern did exist. And a February 2017 Inspector General audit of seven social media pilot programs concluded that DHS had failed to establish any mechanisms to measure their effectiveness.

Indeed, content on social media can be difficult to decode under the best of circumstances. Natural language processing tools, used for some automated analysis, fail to accurately interpret 20-30 percent of the text they analyze, a gap that is compounded when it comes to unfamiliar languages or cultural contexts. Even human reviewers can fail to understand their own language if it’s filled with slang.

We now know far more about the scope of DHS’s efforts to collect and use social media, but there is much that remains obscured. Without robust, ongoing oversight, neither the public nor lawmakers can be confident that these programs are serving our national interest.

Source: Government surveillance of social media related to immigration more extensive than you realize | TheHill