A Different Way of Thinking About Cancel Culture: Social media companies and other organizations are looking out for themselves.

A needed, different take on the role that social media companies play:

In March, Alexi McCammond, the newly hired editor of Teen Vogue, resigned following backlash over offensive tweets she’d sent a decade ago, beginning when she was 17. In January, Will Wilkinson lost his job as vice president for research at the center-right Niskanen Center for a satirical tweet about Republicans who wanted to hang Mike Pence. (Wilkinson was also suspended from his role as a Times Opinion contributor.)

To debate whether these punishments were fair is to commit a category error. These weren’t verdicts weighed and delivered on behalf of society. These were the actions of self-interested organizations that had decided their employees were now liabilities. Teen Vogue, which is part of Condé Nast, has remade itself in recent years as a leftist magazine built around anti-racist principles. Niskanen trades on its perceived clout with elected Republicans. In both cases, the organization was trying to protect itself, for its own reasons.

That suggests a different way of thinking about the amorphous thing we call cancel culture, and a more useful one. Cancellations — defined here as actually losing your job or your livelihood — occur when an employee’s speech infraction generates public attention that threatens an employer’s profits, influence or reputation. This isn’t an issue of “wokeness,” as anyone who has been on the business end of a right-wing mob trying to get them or their employees fired — as I have, multiple times — knows. It’s driven by economics, and the key actors are social media giants and employers who really could change the decisions they make in ways that would lead to a better speech climate for us all.

Boundaries on acceptable speech aren’t new, and they’re not narrower today than in the past. Remember the post-9/11 furor over whether you could run for office if you didn’t wear an American flag pin at all times? What is new is the role social media (and, to a lesser extent, digital news) plays in both focusing outrage and scaring employers. And this, too, is a problem of economics, not culture. Social platforms and media publishers want to attract people to their websites or shows and make sure they come back. They do this, in part, by tuning the platforms and home pages and story choices to surface content that outrages the audience.

My former Times colleague Charlie Warzel, in his new newsletter, points to Twitter’s trending box as an example of how this works, and it’s a good one if you want to see the hidden hand of technology and corporate business models in what we keep calling a cultural problem. This box is where Twitter curates its sprawling conversation, directing everyone who logs on to topics drawing unusual interest at the moment. Oftentimes that’s someone who said something stupid, or offensive — or even someone who said something innocuous only to have it misread as stupid or offensive.

The trending box blasts missives meant for one community to all communities. The original context for the tweet collapses; whatever generosity or prior knowledge the intended audience might have brought to the interaction is lost. The loss of context is supercharged by another feature of the platform: the quote-tweet, where instead of answering in the original conversation, you pull the tweet out of its context and write something cutting on top of it. (A crummier version comes when people just screenshot a tweet, so the audience can’t even click back to the original, or see the possible apology.) So the trending box concentrates attention on a particular person, already having a bad day, and the quote-tweet function encourages people to carve up the message for their own purposes.

This is not just a problem of social media platforms. Watch Fox News for a night, and you’ll see a festival of stories elevating some random local excess to national attention and inflicting terrible pain on the people who are targeted. Fox isn’t anti-cancel culture; it just wants to be the one controlling that culture.

Cancellations are sometimes intended, and deserved. Some speech should have consequences. But many of the people who participate in the digital pile-ons that lead to cancellation don’t want to cancel anybody. They’re just joining in that day’s online conversation. They’re criticizing an offensive or even dangerous idea, mocking someone they think deserves it, hunting for retweets, demanding accountability, making a joke. They aren’t trying to get anyone fired. But collectively, they do get someone fired.

In all these cases, the economics of corporations that monetize attention are colliding with the incentives of employers to avoid bad publicity. One structural way social media has changed corporate management is that it has made P.R. problems harder to ignore. Outrage that used to play out relatively quietly, through letters and emails and phone calls, now plays out in public. Hasty meetings get called, senior executives get pulled in, and that’s when people get fired.

An even more sinister version of this operates retrospectively, through search results. An employer considering a job candidate does a basic Google search, finds an embarrassing controversy from three years ago and quietly moves on to the next candidate. Wokeness has particular economic power right now because corporations, correctly, don’t want to be seen as racist or homophobic, but imagine how social media would have supercharged the censorious dynamics that dominated right after 9/11, when even french fries were suspected of disloyalty.

Tressie McMillan Cottom, the sociologist and cultural critic, made a great point to me about this on a recent podcast. “One of the problems right now is that social shame, which I think in and of itself is enough, usually, to discipline most people, is now tied to economic and political and cultural capital,” she said.

People should be shamed when they say something awful. Social sanctions are an important mechanism for social change, and they should be used. The problem is when that one awful thing someone said comes to define their online identity, and then it defines their future economic and political and personal opportunities. I don’t like the line that no one deserves to be defined by the worst thing they’ve ever done — tell me the body count first — but let’s agree that most of us don’t deserve to be defined by the dumbest thing we’ve ever said, forever, just because Google’s algorithm noticed that that moment got more links than the rest of our life combined.

I think this suggests a few ways to make online discourse better. Twitter should rethink its trending box, and at least consider the role quote-tweets play on the platform. (It would be easy enough to retain them as a function while throttling their virality.) Fox News should stop being, well, Fox News. All of the social media platforms need to think about the way their algorithms juice outrage and supercharge the human tendency to police group boundaries.

For months, when I logged onto Facebook, I saw the posts of a distant acquaintance who had turned into an anti-masker, and whose comment threads had turned into flame wars. This wasn’t someone I was close to, but the algorithm knew that what was being posted was generating a lot of angry reaction among our mutual friends, and it repeatedly tried to get me to react, too. These are design choices that are making society more toxic. Different choices can, and should, be made.

The rest of corporate America — and that includes my own industry — needs to think seriously about how severe a punishment it is to fire people under public conditions. When termination is for private misdeeds or poor performance, it typically stays private. When it is for something the internet is outraged about, it can shatter someone’s economic prospects for years to come. It’s always hard, from the outside, to evaluate any individual case, but I’ve seen a lot of firings that probably should have been suspensions or scoldings.

This also raises the question of our online identities, and the way strange and unexpected moments come to define them. A person’s Google results can shape the rest of that person’s life, both economically and otherwise. And yet people have almost no control over what’s shown in those results, unless they have the money to hire a firm that specializes in rehabilitating online reputations. This isn’t an easy problem to solve, but our lifelong digital identities are too important to be left to the terms and conditions of a single company, or even a few.

Finally, it would be better to focus on cancel behavior than cancel culture. There is no one ideology that gleefully mobs or targets employers online. Plenty of anti-cancel culture warriors get their retweets directing their followers to mob others. So here’s a guideline that I think would make online discourse better. Unless something that is said is truly dangerous and you actually want to see that person fired from their current job and potentially unable to find a new one — a high bar, but one that is sometimes met — you shouldn’t use social media to join an ongoing pile-on against a normal person. If it’s a politician or a cable news host or a senator, well, that’s politics. But this works differently when it’s someone unprepared for that scrutiny. We would all do better to remember that what feels like an offhand tweet to us could have real consequences for others if there are hundreds or thousands of similar tweets and articles. Scale matters.

What I’m offering here would, I hope, help ease a specific problem: the disproportionate and capricious economic punishments meted out in the aftermath of an online pile-on. It won’t end the political conflict over acceptable speech, nor should it. There have always been things we cannot say in polite society, and those things are changing, in overdue ways. The balance of demographic power is shifting, and groups that had little voice in the language and ordering of the national agenda are gaining that voice and using it.

Slowly and painfully, we are creating a society in which more people can speak and have some say over how they’re spoken of. What I hope we can do is keep that fight from serving the business models of social media platforms and the shifting priorities of corporate marketing departments.

Source: https://www.nytimes.com/2021/04/18/opinion/cancel-culture-social-media.html

Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Of note given complaints by conservatives:

Sen. Roger Wicker hit a familiar note when he announced on Thursday that the Commerce Committee was issuing subpoenas to force the testimony of Facebook Chief Executive Mark Zuckerberg and other tech leaders.

Tech platforms like Facebook, the Mississippi Republican said, “disproportionately suppress and censor conservative views online.”

When top tech bosses were summoned to Capitol Hill in July for a hearing on the industry’s immense power, Republican Congressman Jim Jordan made an even blunter accusation.

“I’ll just cut to the chase, Big Tech is out to get conservatives,” Jordan said. “That’s not a hunch. That’s not a suspicion. That’s a fact.”

But the facts to support that case have been hard to find. NPR called up half a dozen technology experts, including data scientists who have special access to Facebook’s internal metrics. The consensus: There is no statistical evidence to support the argument that Facebook does not give conservative views a fair shake.

Let’s step back for a moment.

When Republicans claim Facebook is “biased,” they often collapse two distinct complaints into one. First, that the social network deliberately scrubs right-leaning content from its site. There is no proof to back this up. Second, Republicans suggest that conservative news and perspectives are being throttled by Facebook, that the social network is preventing the content from finding a large audience. That claim is not only unproven; publicly available data on Facebook shows the exact opposite to be true: conservative news regularly ranks among the most popular content on the site.

Now, there are some complex layers to this, but former Facebook employees and data experts say the conservative bias argument would be easier to talk about — and easier to debunk — if Facebook were more transparent.

The social network keeps secret some of the most basic data points, like what news stories are the most viewed on Facebook on any given day, leaving data scientists, journalists and the general public in the dark about what people are actually seeing on their News Feeds.

There are other sources of data, but they offer just a tiny window into the sprawling universe of nearly 3 billion users. Facebook is quick to point out that the public metrics available are of limited use, yet it does so without offering a real solution, which would be opening up some of its more comprehensive analytics for public scrutiny.

Until it does, there’s little to counter rumors about what thrives and dies on Facebook and how the platform is shaping political discourse.

“It’s kind of a purgatory of their own making,” said Kathy Qian, a data scientist who co-founded Code for Democracy.

What the available data reveals about possible bias

Perhaps the most often-cited source of data on what is popular on Facebook is CrowdTangle, a tracking tool from a startup that Facebook acquired in 2016.

New York Times journalist Kevin Roose has created a Twitter account where he posts the top ten most-engaging posts based on CrowdTangle data. These lists are dominated by conservative commentators like Ben Shapiro and Dan Bongino, and by Fox News. They resemble a “parallel media universe that left-of-center Facebook users may never encounter,” Roose writes.

Yet these lists are like looking at Facebook through a soda straw, say researchers like MIT’s Jennifer Allen, who used to work at Facebook and now studies how people consume news on social media. CrowdTangle, Allen says, does not tell the whole story.

That’s because CrowdTangle only captures engagement — likes, shares, comments and other reactions — from public pages. But just because a post provokes lots of reactions does not mean it reaches many users. The data does not show how many people clicked on a link, or what the overall reach of the post was. And much of what people see on Facebook is from their friends, not public pages.

“You see these crazy numbers on CrowdTangle, but you don’t see how many people are engaging with this compared with the rest of the platform,” Allen said.

Another point researchers raise: Not all engagement is created equal.

Users could “hate-like” a post, or click like as a way of bookmarking, or leave another reaction expressing disgust, not support. Take, for example, the laughing-face emoji.

“It could mean, ‘I agree with this’ or ‘This is so hilariously untrue,’” said data scientist Qian. “It’s just hard to know what people actually mean by those reactions.”

It’s also hard to tell whether people or automated bots are generating all the likes, comments and shares. Former Facebook research scientist Solomon Messing conducted a 2018 study of Twitter finding that bots were likely responsible for 66% of link shares on the platform. The tactic is employed on Facebook, too.

“What Facebook calls ‘inauthentic behavior’ and other borderline scam-like activity are unfortunately common and you can buy fake engagement easily on a number of websites,” Messing said.

Brendan Nyhan, a political scientist at Dartmouth College, is also wary about drawing any big conclusions from CrowdTangle.

“You can’t judge anything about American movies by looking at the top ten box office hits of all time,” Nyhan said. “That’s not a great way of understanding what people are actually watching. There’s the same risk here.”

‘Concerned about being seen as on the side of liberals’

Experts agree that a much better measure would be a by-the-numbers rundown of what posts are reaching the most people. So why doesn’t Facebook reveal that data?

In a Twitter thread back in July, John Hegeman, the head of Facebook’s News Feed, offered one sample of such a list, saying it is “not as partisan” as lists compiled with CrowdTangle data suggest.

But when asked why Facebook doesn’t share that broader data with the public, Hegeman did not reply.

It could be, some experts say, that Facebook fears that data will be used as ammunition against the company at a time when Congress and the Trump administration are threatening to rein in the power of Big Tech.

“They are incredibly concerned about being seen as on the side of liberals. That is against the profit motive of their business,” Dartmouth’s Nyhan said of Facebook executives. “I don’t see any reason to see that they have a secret, hidden liberal agenda, but they are just so unwilling to be transparent.”

Facebook has been more forthcoming with some academic researchers looking at how social media affects elections and democracy. In April 2019, it announced a partnership that would give 60 scholars access to more data, including the background and political affiliation of people who are engaging content.

One of those researchers is University of Pennsylvania data scientist Duncan Watts.

“Mostly it’s mainstream content,” he said of the most viewed and clicked on posts. “If anything, there is a bias in favor of conservative content.”

While Facebook posts from national television networks and major newspapers get the most clicks, partisan outlets like the Daily Wire and Breitbart routinely show up in top spots, too.

“That should be so marginal that it has no relevance at all,” Watts said of the right-wing content. “The fact that it is showing up at all is troubling.”

‘More false and misleading content on the right’

Accusations from Trump and other Republicans in Washington that Facebook is a biased referee of its content tend to flare up when the social network takes action against conservative-leaning posts that violate its policies.

Researchers say there is a reason why most of the high-profile examples of content warnings and removals target conservative content.

“That is a result of there just being more false and misleading content on the right,” said researcher Allen. “There are bad actors on the left, but the ecosystem on the right is just much more mature and popular.”

Facebook’s algorithms could also be helping more people see right-wing content that’s meant to evoke passionate reactions, she added.

Because of the sheer amount of envelope-pushing conservative content, some of it veering into the realm of conspiracy theories, the moderation from Facebook is also greater.

Or as Nyhan put it: “When reality is asymmetric, enforcement may be asymmetric. That doesn’t necessarily indicate a bias.”

The attacks on Facebook over perceived prejudice against conservatives have helped fuel the push in Congress and the White House to reform Section 230 of the Communications Decency Act of 1996, which allows platforms to avoid lawsuits over what users post and gives tech companies the freedom to police their sites as they see fit.

Joe Osborne, a Facebook spokesman, said in a statement that the social network’s content moderation policies are applied fairly across the board.

“While many Republicans think we should do one thing, many Democrats think we should do the exact opposite. We’ve faced criticism from Republicans for being biased against conservatives and Democrats for not taking more steps to restrict the exact same content. Our job is to create one consistent set of rules that applies equally to everyone.”

Osborne confirmed that Facebook is exploring ways to make more data available in the platform’s public tools, but he declined to elaborate.

Watts, the University of Pennsylvania data scientist who studies social media, said Facebook is sensitive to Republican criticism, but no matter what decisions it makes, the attacks will continue.

“Facebook could end up responding in a way to accommodate the right, but the right will never be appeased,” Watts said. “So it’s this constant pressure of ‘you have to give us more, you have to give us more,'” he said. “And it creates a situation where there’s no way to win arguments based on evidence, because they can just say, ‘Well, I don’t trust you.'”

Source: Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages

The social media platforms continue to undermine social inclusion and cohesion:

Facebook has allowed conservative news outlets and personalities to repeatedly spread false information without facing any of the company’s stated penalties, according to leaked materials reviewed by NBC News.

According to internal discussions from the last six months, Facebook has relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU and the pundit Charlie Kirk, were not penalized for violations of the company’s misinformation policies.

Facebook’s fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a “strike” basis, meaning a page can post inaccurate information and receive a one-strike warning before the platform takes action. Two strikes in 90 days places an account into “repeat offender” status, which can lead to a reduction in distribution of the account’s content and a temporary block on advertising on the platform.
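As a rough illustration, the strike policy described above can be sketched in a few lines of code. This is a minimal sketch based only on the article’s description; the class, names and window logic are hypothetical, not Facebook’s actual system.

```python
from datetime import datetime, timedelta

# Assumed from the article: two strikes within 90 days -> "repeat offender".
REPEAT_OFFENDER_WINDOW = timedelta(days=90)

class Page:
    """Hypothetical model of a page accumulating misinformation strikes."""

    def __init__(self):
        self.strikes = []  # timestamps of strikes issued to this page

    def add_strike(self, when):
        self.strikes.append(when)

    def is_repeat_offender(self, now):
        # Count only strikes inside the trailing 90-day window.
        recent = [s for s in self.strikes if now - s <= REPEAT_OFFENDER_WINDOW]
        return len(recent) >= 2

page = Page()
page.add_strike(datetime(2020, 5, 1))
page.add_strike(datetime(2020, 6, 15))
print(page.is_repeat_offender(datetime(2020, 7, 1)))  # True: two strikes within 90 days
```

Note that in this model a strike simply ages out of the window; what the leaked materials describe is different, namely employees deleting strikes from the record before the window expires.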

Facebook has a process that allows its employees, or representatives of Facebook’s partners (including news organizations, politicians, influencers and others who have a significant presence on the platform), to flag misinformation-related problems. Facebook applies fact-checking labels to posts when third-party fact-checkers determine they contain misinformation. A news organization or politician can appeal the decision to attach a label to one of its posts.

Facebook employees who work with content partners then decide if an appeal is a high-priority issue or PR risk, in which case they log it in an internal task management system as a misinformation “escalation.” Marking something as an “escalation” means that senior leadership is notified so they can review the situation and quickly — often within 24 hours — make a decision about how to proceed.

Facebook receives many queries about misinformation from its partners, but only a small subset is deemed to require input from senior leadership. Since February, more than 30 of these misinformation queries have been tagged as “escalations” within the company’s task management system, which employees use to track and assign work projects.

The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook’s fact-checking could go public and fuel allegations that the social network was biased against conservatives.

The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias.

Two current Facebook employees and two former employees, who spoke anonymously out of fear of professional repercussions, said they believed the company had become hypersensitive to conservative complaints, in some cases making special allowances for conservative pages to avoid negative publicity.

“The supposed goal of this process is to prevent embarrassing false positives against respectable content partners, but the data shows that this is instead being used primarily to shield conservative fake news from the consequences,” said one former employee.

About two-thirds of the “escalations” included in the leaked list relate to misinformation issues linked to conservative pages, including those of Breitbart, Donald Trump Jr., Eric Trump and Gateway Pundit. There was one escalation related to a progressive advocacy group and one each for CNN, CBS, Yahoo and the World Health Organization.

There were also escalations related to left-leaning entities, including one about an ad from Democratic super PAC Priorities USA that the Trump campaign and fact checkers have labeled as misleading. Those matters focused on preventing misleading videos that were already being shared widely on other media platforms from spreading on Facebook and were not linked to complaints or concerns about strikes.

Facebook and other tech companies including Twitter and Google have faced repeated accusations of bias against conservatives in their content moderation decisions, though there is little clear evidence that this bias exists. The issue was reignited this week when Facebook removed a video posted to Trump’s personal Facebook page in which he falsely claimed that children are “almost immune” to COVID-19. The Trump campaign accused Facebook of “flagrant bias.”

Facebook spokesperson Andy Stone did not dispute the authenticity of the leaked materials, but said they did not provide the full context of the situation.

In recent years, Facebook has developed a lengthy set of rules that govern how the platform moderates false or misleading information. But how those rules are applied can vary and is up to the discretion of Facebook’s executives.

In late March, a Facebook employee raised concerns on an internal message board about a “false” fact-checking label that had been added to a post by the conservative bloggers Diamond and Silk in which they expressed outrage over the false allegation that Democrats were trying to give members of Congress a $25 million raise as part of a COVID-19 stimulus package.

Diamond and Silk had not yet complained to Facebook about the fact check, but the employee was sounding the alarm because the “partner is extremely sensitive and has not hesitated going public about their concerns around alleged conservative bias on Facebook.”

Since it was the account’s second misinformation strike in 90 days, according to the leaked internal posts, the page was placed into “repeat offender” status.

Diamond and Silk appealed the “false” rating that had been applied by third-party fact-checker Lead Stories on the basis that they were expressing opinion and not stating a fact. The rating was downgraded by Lead Stories to “partly false” and they were taken out of “repeat offender” status. Even so, someone at Facebook described as “Policy/Leadership” intervened and instructed the team to remove both strikes from the account, according to the leaked material.

In another case in late May, a Facebook employee filed a misinformation escalation for PragerU, after a series of fact-checking labels were applied to several similar posts suggesting polar bear populations had not been decimated by climate change and that a photo of a starving animal was used as a “deliberate lie to advance the climate change agenda.” This claim was fact-checked by one of Facebook’s independent fact-checking partners, Climate Feedback, as false and meant that the PragerU page had “repeat offender” status and would potentially be banned from advertising.

A Facebook employee escalated the issue because of “partner sensitivity” and noted that the repeat offender status was “especially worrisome due to PragerU having 500 active ads on our platform,” according to the discussion contained within the task management system and leaked to NBC News.

After some back and forth between employees, the fact check label was left on the posts, but the strikes that could have jeopardized the advertising campaign were removed from PragerU’s pages.

Stone, the Facebook spokesperson, said that the company defers to third-party fact-checkers on the ratings given to posts, but that the company is responsible for “how we manage our internal systems for repeat offenders.”

“We apply additional system-wide penalties for multiple false ratings, including demonetization and the inability to advertise, unless we determine that one or more of those ratings does not warrant additional consequences,” he said in an emailed statement.

He added that Facebook works with more than 70 fact-checking partners who apply fact-checks to “millions of pieces of content.”

Facebook announced Thursday that it banned a Republican PAC, the Committee to Defend the President, from advertising on the platform following repeated sharing of misinformation.

But the ongoing sensitivity to conservative complaints about fact-checking continues to trigger heated debates inside Facebook, according to leaked posts from Facebook’s internal message board and interviews with current and former employees.

“The research has shown no evidence of bias against conservatives on Facebook,” said another employee, “So why are we trying to appease them?”

Those concerns have also spilled out onto the company’s internal message boards.

One employee wrote a post on 19 July, first reported by BuzzFeed News on Thursday, summarizing the list of misinformation escalations found in the task management system and arguing that the company was pandering to conservative politicians.

The post, a copy of which NBC News has reviewed, also compared Mark Zuckerberg to President Donald Trump and Russian President Vladimir Putin.

“Just like all the robber barons and slavers and plunderers who came before you, you are spending a fortune you didn’t build. No amount of charity can ever balance out the poverty, war and environmental damage enabled by your support of Donald Trump,” the employee wrote.

The post was removed for violating Facebook’s “respectful communications” policy and the list of escalations, previously accessible to all employees, was made private. The employee who wrote the post was later fired.

“We recognize that transparency and openness are important company values,” wrote a Facebook employee involved in handling misinformation escalations in response to questions about the list of escalations. “Unfortunately, because information from these Tasks were leaked, we’ve made them private for only subscribers and are considering how best to move forward.”

Source: https://www.nbcnews.com/tech/tech-news/sensitive-claims-bias-facebook-relaxed-misinformation-rules-conservative-pages-n1236182

Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump, Twitter contrast

A needed backlash against what can only be described as business-motivated collusion, one that becomes harder and harder to justify from any perspective:

Facebook is facing an unusually public backlash from its employees over the company’s handling of President Trump’s inflammatory posts about protests over the police killing of George Floyd, an unarmed black man in Minneapolis.

At least a dozen employees, some in senior positions, have openly condemned Facebook’s lack of action on the president’s posts and CEO Mark Zuckerberg’s defense of that decision. Some employees staged a virtual walkout Monday.

“Mark is wrong, and I will endeavor in the loudest possible way to change his mind,” tweeted Ryan Freitas, director of product design for Facebook’s news feed.

“I work at Facebook and I am not proud of how we’re showing up,” tweeted Jason Toff, director of product management. “The majority of coworkers I’ve spoken to feel the same way. We are making our voice heard.”

The social network also is under intense pressure from civil rights groups, Democrats and the public over its decision to leave up posts from the president that critics say violate Facebook’s rules against inciting violence. These included a post last week about the protests in which the president said, “when the looting starts, the shooting starts.”

Twitter, in contrast, put a warning label on a tweet in which the president said the same thing, saying it violated rules against glorifying violence.

The move escalated a feud with the president that started when the company put fact-checking labels on two of his tweets earlier in the week. Trump retaliated by signing an executive order that attempts to strip online platforms of long-held legal protections.

Zuckerberg has long said he believes the company should not police what politicians say on its platform, arguing that political speech is already highly scrutinized. In a post Friday, the Facebook CEO said he had “been struggling with how to respond” to Trump’s posts.

“Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric,” he wrote. “I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”

Zuckerberg said Facebook had examined the post and decided to leave it up because “we think people need to know if the government is planning to deploy force.” He added that the company had been in touch with the White House to explain its policies. Zuckerberg spoke with Trump by phone Friday, according to a report published by Axios.

While Facebook’s 48,000 employees often debate policies and actions within the company, it is unusual for staff to take that criticism public. But the decision not to remove Trump’s posts has caused significant distress within the company, which is spilling over into public view.

“Censoring information that might help people see the complete picture *is* wrong. But giving a platform to incite violence and spread disinformation is unacceptable, regardless who you are or if it’s newsworthy,” tweeted Andrew Crow, head of design for the company’s Portal devices. “I disagree with Mark’s position and will work to make change happen.”

Several employees said on Twitter they were joining Monday’s walkout.

“Facebook’s recent decision to not act on posts that incite violence ignores other options to keep our community safe,” tweeted Sara Zhang, a product designer.

In a statement, Facebook spokesman Joe Osborne said: “We recognize the pain many of our people are feeling right now, especially our Black community. We encourage employees to speak openly when they disagree with leadership. As we face additional difficult decisions around content ahead, we’ll continue seeking their honest feedback.”

Less than 4% of Facebook’s U.S.-based staff are African American, according to the company’s most recent diversity report.

Facebook will not make employees participating in the walkout use paid time off, and it will not discipline those who participate.

On Sunday, Zuckerberg said the company would commit $10 million to groups working on racial justice. “I know Facebook needs to do more to support equality and safety for the Black community through our platforms,” he wrote.

Source: Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump

And Maureen Dowd’s call for Twitter to take Trump off the platform:

C’mon, @Jack. You can do it.

Throw on some Kendrick Lamar and get your head in the right space. Pour yourself a big old glass of salt juice. Draw an ice bath and fire up the cryotherapy pod and the infrared sauna. Then just pull the plug on him. You know you want to.

You could answer the existential question of whether @realDonaldTrump even exists if he doesn’t exist on Twitter. I tweet, therefore I am. Dorsey meets Descartes.

All it would take is one sweet click to force the greatest troll in the history of the internet to meet his maker. Maybe he just disappears in an orange cloud of smoke, screaming, “I’m melllllllting.”

Do Trump — and the world — a favor and send him back into the void whence he came. And then go have some fun: Meditate and fast for days on end!

Our country is going through biological, economic and societal convulsions. We can’t trust the powerful forces in this nation to tell us the truth or do the right thing. In fact, not only can we not trust them. We have every reason to believe they’re gunning for us.

In Washington, the Trump administration’s deception about the virus was lethal. On Wall Street and in Silicon Valley, the fat cats who carved up the country, drained us dry and left us with no safety net profiteered off the virus. In Minneapolis, the barbaric death of George Floyd after a police officer knelt on him for almost nine minutes showed yet again that black Americans have everything to fear from some who are charged with protecting them.

As if that weren’t enough, from the slough of our despond, we have to watch Donald Trump duke it out with the lords of the cloud in a contest to see who can destroy our democracy faster.

I wish I could go along with those who say this dark period of American life will ultimately make us nicer and simpler and more contemplative. How can that happen when the whole culture has been re-engineered to put us at each other’s throats?

Trump constantly torques up the tribal friction and cruelty, even as Twitter and Facebook refine their systems to ratchet up rage. It is amazing that a septuagenarian became the greatest exploiter of social media. Trump and Twitter were a match made in hell.

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

Why not just let all the bots trying to undermine our elections and spreading false information about the coronavirus and right-wing conspiracy theories and smear campaigns run amok? Sure, we’re weakening our society, but the weird, infantile maniacs running Silicon Valley must be allowed to rake in more billions and finish their mission of creating a giant cyberorganism of people, one huge and lucrative ball of rage.

“The shareholders of Facebook decided, ‘If you can increase my stock tenfold, we can put up with a lot of rage and hate,’” says Scott Galloway, professor of marketing at New York University’s Stern School of Business.

“These platforms have very dangerous profit motives. When you monetize rage at such an exponential rate, it’s bad for the world. These guys don’t look left or right; they just look down. They’re willing to promote white nationalism if there’s money in it. The rise of social media will be seen as directly correlating to the decline of Western civilization.”

Dorsey, who has more leeway because his stock isn’t as valuable as Facebook’s, made some mild moves against the president, who has been spewing lies and inciting violence on Twitter for years. He added footnotes clarifying false Trump tweets about mail-in ballots and put a warning label on the president’s tweet about the Minneapolis riots that echoed the language of a Miami police chief in 1967 and of segregationist George Wallace: “When the looting starts, the shooting starts.”

“Jack is really sincerely trying to find something to make it better,” said one friend of the Twitter chief’s. “He’s like somebody trapped in a maze, going down every hallway and turning every corner.”

Zuckerberg, on the other hand, went on Fox to report that he was happy to continue enabling the Emperor of Chaos, noting that he did not think Facebook should be “the arbiter of truth of everything that people say online.”

It was a sickening display that made even some loyal Facebook staffers queasy. As The Verge’s Casey Newton reported, some employees objected to the company’s rationale in internal posts.

“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one wrote. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”

Trump, furious that Dorsey would attempt to rein him in on the very platform that catapulted him into the White House, immediately decided to try to rein in Dorsey.

He signed an executive order that might strip liability protection from social media sites, which would mean they would have to police false and defamatory posts more assiduously. Now that social media sites are behemoths, Galloway thinks that removing the Communications Decency Act’s liability shield makes a lot of sense, even if the president is trying to do it for the wrong reasons.

Trump does not seem to realize, however, that he’s removing his own protection. He huffs and puffs about freedom of speech when he really wants the freedom to be vile. “It’s the mother of all cutting-off-your-nose-to-spite-your-face moves,” says Galloway.

The president wants to say things on Twitter that he will not be allowed to say if he exerts this control over Twitter. In a sense, it’s Trump versus his own brain. If Twitter can be sued for what people say on it, how can Trump continue to torment? Wouldn’t thousands of his own tweets have to be deleted?

“He’d be the equivalent of a slippery floor at a store that sells equipment for hip replacements,” says Galloway, who also posits that, in our hyper-politicized world, this will turn Twitter into a Democratic site and Facebook into a Republican one.

Nancy Pelosi, whose district encompasses Twitter, said that it did little good for Dorsey to put up a few fact-checks while letting Trump’s rants about murder and other “misrepresentations” stay up.

“Facebook, all of them, they are all about making money,” the speaker said. “Their business model is to make money at the expense of the truth and the facts.” She crisply concluded that “all they want is to not pay taxes; they got their tax break in 2017” and “they don’t want to be regulated, so they pander to the White House.”

C’mon, Jack. Make @realDonaldTrump melt to help end our meltdown.

Source: Think Outside the Box, Jack


Corpses and mob violence: How China’s social media echo chamber fuels coronavirus fears

Of note:

Corpses lie on the ground near hospitals. People kill their pets for fear the animals will spread disease. Mobs chase down people without masks and angrily force them to cover up.

These are the scenes flooding social media in China as the country grapples with the novel coronavirus that has prompted the World Health Organization to declare a global emergency.

But how much of what the Chinese people and international observers are seeing on social media is true?

Public mistrust of government authorities in China has reached such a severe level, observers say, that many Chinese people have turned to alternative online sources of information — often of questionable veracity.

“Many Chinese people are well aware of the government’s long track record of censoring information about threats to public health,” said Sarah Cook, director of the China Media Bulletin at human rights research group Freedom House.

“This fuels deep mistrust in official updates and undermines efforts to reduce fear and anxiety,” she told The Star.

There’s history to the earned mistrust. In the first few months of the SARS outbreak in 2003, the Chinese government tried to keep it a secret. By the time the new virus was publicly reported, five people had died and hundreds had already fallen ill. It was a health disaster that led to heaps of global backlash, and China sacked its health minister and the mayor of Beijing in apparent contrition about the mishandling.

While central government authorities in Beijing were much quicker to publicly report the new coronavirus, the local Wuhan city government initially censored the first reports of a new illness emerging in the city last December. Medical experts said in a research paper published in The Lancet that they’ve found new evidence that the origin of the outbreak may not have been a seafood market in Wuhan as the Chinese government reported, and the first human infections may have occurred in November.

Li Wang is among those glued to social media.

The economics researcher at the University of New Brunswick, who formerly studied in Canada, is currently on lockdown in Wuhan after flying home to visit family during Lunar New Year.

To pass the time, he was one of millions of Chinese glued to their screens watching a livestream of a hospital being built in ten days to house patients that have overwhelmed Wuhan’s hospitals. The government says a crew of 7,000 worked around the clock to build the 1,000-bed hospital, and vowed to build another this week.

“Everyone is afraid to go outside … Almost everyone I have talked to online are panicked,” Wang said. Because he is not a Canadian citizen or permanent resident, he’s not able to board the chartered flight Canada is sending to bring back Canadians from the city.

China’s control of social media is a factor that adds to the confusion. Many people are familiar with mainland China’s “Great Firewall,” the internet censorship apparatus that automatically blocks international social media platforms such as Twitter, Facebook, YouTube and Instagram as well as many news outlets and the entire suite of Google services.

Chinese authorities are continually developing and fine-tuning their ability to censor social media posts on domestic websites such as the Twitter-like Weibo blogging platform. They even have the ability to surveil and automatically block parts of private conversations on chat apps such as WeChat.

WeChat is the preferred platform for many in China during the coronavirus outbreak because the chat groups there tend to be small or medium-sized groups where some users know each other personally.

“People are getting at least some information from individuals they personally know and trust (on WeChat typically), but that doesn’t make them insusceptible from the spread of false information,” said Cook.

“But for those who personally know the original source — say a relative who is a nurse in Wuhan — her information will likely appear very credible and believable to them and possibly rightly so.”

However, as on all social media platforms, the quality of what a user sees depends on the quality of the people in their circles. A WeChat user who is friends with many doctors and nurses would likely get more reliable information.

Perhaps aware of these communication challenges, government control over China’s small number of independent media outlets seems to have loosened over the past several weeks.

As a result, members of the public in China are turning to respected Chinese publications like Caixin to read quality journalism about the outbreak. The magazine recently published a four-part series produced by dozens of journalists, including a detailed account of the Wuhan government’s coverup of the crisis.

So are the images on social media real?

Yuri Qin, an editor at the Berkeley-based China Digital Times, a bilingual website that monitors the Chinese internet, says that unfortunately, some of the horrible videos and photographs might be real, although they are difficult to verify.

“Authorities in Wuhan have imposed some brutal measures to prevent the spread, and because of the panic some people are cruel to each other and sometimes they use extreme means to drive out or detain suspected carriers of the disease,” Qin told The Star in an email.

She says the loss of credibility of the local government has seemed to exacerbate paranoia and fear among citizens of Wuhan.

However, it’s also helpful to keep in mind that among the hundreds of millions of Chinese social media users, some have retained their sense of humour even during a health crisis. Some videos that have gone viral are jokes, and likely stem from people trying to make the best of their situations.

What are some reliable sources of English-language translations of Chinese social media posts on coronavirus?

The China Digital Times verifies and translates blog posts and diary entries from people living in China dealing with the coronavirus, enforced quarantines and health checks.

The website What’s on Weibo tracks and analyses viral social media posts on China’s most popular platforms.

Bill Bishop’s Sinocism newsletter regularly compiles and comments on Chinese-language media sources on a variety of news topics.

Source: Corpses and mob violence: How China’s social media echo chamber fuels coronavirus fears

Fears of election meddling on social media were overblown, say researchers

Hype versus the reality (perhaps Canada not important enough…). The hype was in both mainstream and ethnic media:

Now that the election is over and researchers have combed through the data collected, their conclusion is clear: there was more talk about foreign trolls during the campaign than there was evidence of their activities.

Although there were a few confirmed cases of attempts to deceive Canadians online, three large research teams devoted to detecting co-ordinated influence campaigns on social media report they found little to worry about.

In fact, there were more news reports about malicious activity during the campaign than traces of it.

“We didn’t see high levels of effective disinformation campaigns. We didn’t see evidence of effective bot networks in any of the major platforms. Yet, we saw a lot of coverage of these things,” said Derek Ruths, a professor of computer science at McGill University in Montreal.

He monitored social media for foreign meddling during the campaign and, as part of the Digital Democracy Project, scoured the web for signs of disinformation campaigns.

Threat of foreign influence was hyped

“The vast majority of news stories about disinformation overstated the results and represented them as far more conclusive than they were. It was the case everywhere, with all media,” he said.

It’s a view mirrored by the Ryerson Social Media Lab, which also monitored social media during the campaign.

“Fears of foreign and domestic interference were overblown,” Philip Mai, co-director of the Social Media Lab, told CBC News.

A major focus of monitoring efforts during the campaign was Twitter, a platform favoured by politicians, journalists and partisans of all stripes. It’s where a lot of political exchanges take place, and it’s an easy target for automated influence campaigns.

“Our preliminary analysis of the [Twitter hashtag] #cdnpoli suggests that only about one per cent of accounts that used that hashtag earlier in the election cycle can be classified as likely to be bots,” said Mai.

The word “likely” is key. Any social media analyst will tell you that detecting bona fide automated accounts that exist solely to spread a message far and wide is incredibly difficult.

#TrudeauMustGo and other frenzies

A few times during the campaign, independent researchers found signs that certain conversations on Twitter were being amplified by accounts that appeared to be foreign. For example, the popular hashtag #TrudeauMustGo was tweeted and retweeted in large numbers by users who had the word “MAGA” in their user descriptions.

But this doesn’t mean those users were part of a foreign campaign, Ruths said.

“It’s very hard to prove that those MAGA accounts aren’t Canadian,” he said. “How can you prove who’s Canadian online? What does a Canadian look like on Twitter?”

Few Canadians use Twitter for news. According to the Digital News Report from the Reuters Institute for the Study of Journalism, only 11 per cent of Canadians got their news on Twitter in 2019, down slightly from 12 per cent last year.

Twitter’s most avid users tend to be politicians, journalists and highly engaged partisans.

Fenwick McKelvey, an assistant professor at Montreal’s Concordia University who researches social media platforms, said he feels journalists overestimate Twitter’s ability to take the pulse of the voting public.

“Twitter is an elite medium used by journalists and politicians more than everyday Canadians,” McKelvey told CBC News. “Twitter is a very specific public. Not a proxy for public opinion.”

In fact, most Canadians — 57 per cent — told a 2018 survey by the Social Media Lab that they have never shared political opinions on any social media platform.

Tweets for elites

For an idea of just how elitist Twitter can be, take a look at who is driving its political conversations. For some of the major hashtags during the election — like #cdnpoli, #defundCBC and the recently popular #wexit — just a fraction of users post original content. The rest just retweet.

And the users who get the most retweets, the biggest influencers, represent an even tinier sliver of Twitter users, according to data from the University of Toronto’s Citizen Lab, another outfit that monitored disinformation during the campaign.

“What we thought was a horizontal democratic space is dominated by less than two per cent of accounts,” said Gabrielle Lim, a fellow at the Citizen Lab.

“We need to take everything with a grain of salt when looking at Twitter. Doing data analysis is easy, but we’re bad at contextualizing what it means,” Lim said.

So why this focus on Twitter if it’s such a small and unrepresentative medium for Canadians? Because it’s easy to study. Unless a user sets an account to private, everything posted on Twitter is public and fairly easy to access.

On the other hand, more popular social networks like Facebook make it much harder to harvest user content at scale. A lot of misinformation may also be shared in closed channels like private Facebook groups and WhatsApp groups, which are nearly impossible for outsiders to access.

But even taking into account those larger social media audiences, the evidence shows that Canadians are getting their news from a variety of sources, Lim noted.

Although the threat posed by online disinformation to Canadian democracy was overblown in the context of the 2019 campaign, Ruths said he still believes it was important to be alert, just as it’s important to go to the dentist even if no cavities are found.

And he suggests that journalists looking for evidence of bot activity apply the same level of rigour as the people doing the research.

“We saw a lot of well-intentioned reporting,” he said. “But finding suspected accounts is not the same as finding bots. Saying that MAGA accounts don’t look like Canadians doesn’t mean they’re not.”

Source: Fears of election meddling on social media were overblown, say researchers

Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows

Nice to see this kind of social media analysis. But depressing the reliance on negative attacks by both major parties:

The Conservatives lead other major federal parties in the amount of negative attacks on Twitter and in press releases this campaign, but at the midpoint of a close race the Liberals are increasingly turning negative, an analysis by CBC News shows.

CBC News analyzed more than 1,800 press releases and tweets from official party and party leader accounts since the start of the campaign. We categorized them as either positive, negative, both positive and negative or neutral. (See methodology below.)

Overall, the Conservatives have put out the highest volume of negative communications to date, the analysis revealed. The party tends to put out communications attacking the Liberals about as often as they promote their own policies.

That doesn’t mean the Conservatives were the only party to go negative early on. At the outset of the campaign, the Liberals went after Conservative Leader Andrew Scheer on Twitter for his 2005 stance on same-sex marriage and other Conservative candidates for anti-abortion views or past social media missteps.

But almost half (47 per cent) of Conservative communications have been negative or partly negative. The share of negative messages is 37 per cent for the NDP, 26 per cent for the Liberals, 18 per cent for the Greens and 13 per cent for the Bloc Quebecois, which has run the most positive campaign.

Liberals, NDP step up attacks

While the Conservatives have been consistently negative since the start of the campaign, other parties have become markedly more so in the last two weeks.

The uptick in attacks appears to be driven by two factors: the climate marches across the country on Sept. 20 and 27 and the French-language debate hosted by TVA on Oct. 2.

The NDP and Greens took aim at the Liberals’ environmental record around the time of the climate marches. It was also during the last week of September that the Liberals announced a number of environmental policies they would enact if re-elected, which were promptly criticized by the NDP and Greens.

The tone of Liberal communications turned markedly critical during the TVA French-language debate on Oct. 2. This was the first debate Liberal Leader Justin Trudeau took part in, and the Liberal war room put out press releases and tweets countering statements made during the debate by Scheer and the Bloc Quebecois’ Yves-Francois Blanchet.

The TVA debate also marked the first instance during the campaign of the Liberals targeting a party other than the Conservatives with critical tweets and press releases. The party took the Bloc leader to task over his environmental record, among other things.

Liberals the target of most attacks

The Liberals were the target of more than two-thirds (70 per cent) of negative or partly negative communications.

The Conservatives have yet to target a party other than the Liberals with a critical press release or tweet.

The Liberals also have been the primary target of the NDP and, to a lesser extent, the Greens.

While these two parties may be closer ideologically to the Liberals than to the Conservatives, the NDP and Greens are focused on stopping progressive voters from rallying around the Liberals. University of B.C. political science professor Gerald Baier said this reflects a coordination problem on the centre-left.

“The NDP and Greens, I think, would presumably prefer the Liberal record on the environment to what the Conservatives would do, but at the same time their main points are against the existing government,” he said.

The lack of Liberal attacks on the NDP and the Greens is telling, Baier said.

“It suggests that they know that their path to a majority to some degree is to appeal to some of those NDP and Green voters,” he said.

It also could be because the Liberals may need the support of those parties to govern in a minority Parliament, Baier added.

NDP and Green attacks against the Liberals have focused largely on the environment, while the Conservatives have zeroed in on themes of accountability, taxes and spending.

Environment, taxes the two biggest themes

Much of the official party communications focus on the campaign trail, specific candidates and where the party leaders are.

The two policy exceptions are the environment — a popular subject for all parties except the Conservatives — and tax policy, on which the Conservatives have focused. Affordability and housing are also common themes.


CBC News analyzed every press release and tweet from official party and party leader accounts since the start of the campaign. We categorized each communication as positive (if the focus was promoting a party’s own policies or candidates), negative (if the focus was criticizing another party), both positive and negative (if the communication was equally split between the two) or neutral (leader itineraries, event announcements). We also kept track of the topics of communications and who, if anyone, was targeted.

We did not include retweets and treated identical tweets in English and French as one communication.

To keep the project’s scope manageable, the methodology excludes other platforms such as Facebook, YouTube, radio, television and print ads.

Source: Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows
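CBC’s methodology note is concrete enough to sketch in code. Below is a hypothetical Python illustration of the four-way coding scheme (positive / negative / both / neutral) and the “negative or partly negative” share it feeds into; CBC’s actual analysis was done by human coders, and every name and field here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Communication:
    """One press release or tweet. Per the methodology: retweets are
    excluded, and identical English/French tweets count as one item."""
    promotes_own: bool      # promotes the party's own policies or candidates
    criticizes_other: bool  # criticizes another party
    is_logistics: bool      # leader itineraries, event announcements, etc.

def classify(c: Communication) -> str:
    """Apply the four-way scheme described in the methodology note."""
    if c.is_logistics:
        return "neutral"    # itineraries and event announcements
    if c.promotes_own and c.criticizes_other:
        return "both"       # equally split between promotion and attack
    if c.criticizes_other:
        return "negative"   # focus is criticizing another party
    if c.promotes_own:
        return "positive"   # focus is promoting own policies or candidates
    return "neutral"

def negative_share(comms: list) -> float:
    """Share of communications that are negative or partly negative —
    the figure behind the 47%/37%/26%/18%/13% party comparisons."""
    if not comms:
        return 0.0
    flagged = sum(classify(c) in ("negative", "both") for c in comms)
    return flagged / len(comms)
```

For example, a tweet that touts a party’s own plank while attacking a rival codes as “both”, and both “negative” and “both” count toward the negative share reported for each party.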

Canada banning Christian demonstrations? How a private member’s bill sprouted fake news

Interesting account of the social media trail and actors:

Imagine scrolling through Facebook when you come across this headline: “Canada Moves to Ban Christians from Demonstrating in Public Under New Anti-Hate Proposal.” If that sounds too shocking and absurd to be true, that’s because it is.

But this exact headline appeared atop a story that was shared more than 16,000 times online since it was published in May, according to social media tool CrowdTangle. The federal government and Justin Trudeau, who is pictured in the story, are not seeking to ban Christians from demonstrating. In fact, the bill the story is based on was introduced in the Ontario legislature, by a Conservative MPP, and never made it past a second reading.

Incorrect and misleading content is common on social media, but it’s not always obvious where it originates. To learn more, CBC News tracked this particular example back through time on social media to uncover where it came from and how it evolved over time.

March 20: Private member’s bill introduced in Ontario

In this case, it all started with a bill. In March, Roman Baber, a freshman member of the Ontario provincial legislature, introduced his very first private member’s bill. Had he known how badly the bill would be misconstrued online, he might have chosen something else, he later told CBC News.

“I expected that people would understand what prompted the bill, as a proud member of the Jewish community who’s been subjected to repeated demonstrations at Queen’s Park by certain groups that were clearly promoting hate,” said Baber, Progressive Conservative member for York Centre.

The bill was simple. It sought to ban any demonstrations on the grounds of Queen’s Park, where Ontario’s provincial legislature is located, that promote hate speech or incite violence. Baber said the bill was prompted by previous demonstrations that occurred at the legislature grounds.

“In 2017, we saw a demonstration that called for the shooting of Israelis. We saw a demonstration that called for a bus bombing and murder of innocent civilians,” he said.

The bill went through two readings at Queen’s Park and was punted to the standing committee on justice, where it’s languished since.

March 27: Canadian Jewish News covers story

At first, the bill garnered modest attention online. The Canadian Jewish News ran a straightforward report on the bill, including an interview with Baber, shortly after he first introduced it. It was shared only a handful of times.

But a few weeks after the second reading, the bill drew the attention of LifeSiteNews, a socially conservative website. The story was shared 212 times, according to CrowdTangle, including to the Yellow Vests Canada Facebook group.

In its story, LifeSiteNews suggested that a bill banning hate speech might be interpreted to include demonstrations like those that opposed updates to the province’s sex education curriculum.

Baber said this isn’t the case, because hate speech is already defined and interpreted by legal precedent.

“The words ‘hate’ and ‘hate-promoting’ have been defined by the courts repeatedly through common law and is enforced in courts routinely,” Baber said. “So it would be a mistake to suggest that the bill expands the realm of hate speech.”

April 24: The Post Millennial invokes ‘free speech’ argument 

But the idea stuck around. A few weeks later, on April 24, the Post Millennial posted a story labelled as news that argued the bill could infringe on free speech. The story was, however, clear that the bill was only in Ontario and had not yet moved beyond a second reading. It was shared over 200 times and drew nearly 400 interactions — likes, shares, comments and reactions — on social media, according to CrowdTangle.

May 6: Powerful emotions evoke response on social media 

On May 6, a socially conservative women’s group called Real Women of Canada published a news release on the bill calling it “an attack on free speech.” In the release, the group argues that hate speech isn’t clearly defined in Canadian law, and draws on unrelated examples to claim that Christian demonstrations, in particular, could be targeted.

For example, the group pointed to the case of British Columbia’s Trinity Western University, a Christian post-secondary institution that used to require all students sign a covenant that prohibited sex outside of heterosexual marriage. A legal challenge around the covenant and a potential law school at Trinity Western occurred last year, but it had nothing to do with hate speech.

May 9: LifeSiteNews republishes news release

Though this news release itself was not widely shared, three days later it was republished by LifeSiteNews as an opinion piece. That post did better, drawing 5,500 shares and over 8,000 interactions, according to CrowdTangle. It also embellished the release with a dramatic image and sensational headline: “Ontario Bill Threatens to Criminalize Christian Speech as ‘Hate.'”

LifeSiteNews published the news release as an opinion piece, drawing 5,500 shares and over 8,000 interactions. (Screengrab/LifeSiteNews)

At this point, the nugget of truth has been nearly entirely obscured by several layers of opinion and misrepresentation. For example, the bill doesn’t specifically cite Christian speech, but this headline suggests it does.

These tactics are used to elicit a strong response from readers and encourage them to share, according to Samantha Bradshaw, a researcher on the Computational Propaganda project at Oxford University.

“People like to consume this kind of content because it’s very emotional, and it gets us feeling certain things: anger, frustration, anxiety, fear,” Bradshaw said. “These are all very powerful emotions that get people sharing and consuming content.”

May 11: Big League Politics publishes sensational inaccuracies 

That framing on LifeSiteNews caught the attention of a major U.S. publication known for spreading conspiracy theories and misinformation: Big League Politics. On May 11, the site published a story that cited the LifeSiteNews story heavily.

The headline and image make it seem like Trudeau’s government has introduced legislation that would specifically prohibit Christians from demonstrating anywhere in the country, a far cry from the truth.

While the story provides a few facts, like the fact the bill was introduced in Ontario, much of it is incorrect. For example, in the lead sentence, the writer claimed the bill would “criminalize public displays by Christians deemed hateful to Muslims, the LGBT community and other victim groups designated by the left.”

The disinformation and alarmist headline proved successful: the Big League Politics version of the story was shared more than 16,000 times, drew more than 26,000 interactions and continued to circulate online for over two weeks.

This evolution is a common occurrence. Disinformation is often based on a nugget of truth that gets buried under layers of emotionally charged language and opinion. Here, that nugget of truth was a private member’s bill introduced in the Ontario legislature. But that fact was gradually churned through an online network of spin until it was unrecognizable in the final product.


“That is definitely something that we see often: taking little truths and stretching them, misreporting them or implementing commentary and treating someone’s opinion about what happened as news,” Bradshaw said. “The incremental changes that we see in these stories and these narratives is something very typical of normal disinformation campaigns.”

Bradshaw said even though disinformation is only a small portion of the content online, it can have an outsized impact on our attention. With that in mind, she said it’s partly up to readers to think critically about what they’re reading and sharing online.

“At the end of the day, democracy is really hard work,” Bradshaw said. “It’s up to us to put in that time and effort to fact check our information, to look at other sources, to look at the other side of the argument and to weigh and debate and discuss.”

Source: Canada banning Christian demonstrations? How a private member’s bill sprouted fake news

A Hunt for Ways to Combat Online Radicalization – The New York Times

Interesting approach, applicable to extremists and radicals, whether on the right, the left or elsewhere:

Law enforcement officials, technology companies and lawmakers have long tried to limit what they call the “radicalization” of young people over the internet.

The term has often been used to describe a specific kind of radicalization — that of young Muslim men who are inspired to take violent action by the online messages of Islamist groups like the Islamic State. But as it turns out, it isn’t just violent jihadists who benefit from the internet’s power to radicalize young people from afar.

White supremacists are just as adept at it. Where the pre-internet Ku Klux Klan grew primarily from personal connections and word of mouth, today’s white supremacist groups have figured out a way to expertly use the internet to recruit and coordinate among a huge pool of potential racists. That became clear two weeks ago with the riots in Charlottesville, Va., which became a kind of watershed event for internet-addled racists.

“It was very important for them to coordinate and become visible in public space,” said Joan Donovan, a scholar of media manipulation and right-wing extremism at Data & Society, an online research institute. “This was an attempt to say, ‘Let’s come out; let’s meet each other. Let’s build camaraderie, and let’s show people who we are.’”

Ms. Donovan and others who study how the internet shapes extremism said that even though Islamists and white nationalists have different views and motivations, there are broad similarities in how the two operate online — including how they spread their message, recruit and organize offline actions. The similarities suggest a kind of blueprint for a response — efforts that may work for limiting the reach of jihadists may also work for white supremacists, and vice versa.

In fact, that’s the battle plan. Several research groups in the United States and Europe now see the white supremacist and jihadi threats as two faces of the same coin. They’re working on methods to fight both, together — and slowly, they have come up with ideas for limiting how these groups recruit new members to their cause.

Their ideas are grounded in a few truths about how extremist groups operate online, and how potential recruits respond. After speaking to many researchers, I compiled this rough guide for combating online radicalization.

Recognize the internet as an extremist breeding ground.

The first step in combating online extremism is kind of obvious: It is to recognize the extremists as a threat.

For the Islamic State, that began to happen in the last few years. After a string of attacks in Europe and the United States by people who had been indoctrinated in the swamp of online extremism, politicians demanded action. In response, Google, Facebook, Microsoft and other online giants began identifying extremist content and systematically removing it from their services, and have since escalated their efforts.

When it comes to fighting white supremacists, though, much of the tech industry has long been on the sidelines. This laxity has helped create a monster. In many ways, researchers said, white supremacists are even more sophisticated than jihadists in their use of the internet.

The earliest white nationalist sites date back to the founding era of the web. For instance, Stormfront.org, a pioneering hate site, was started as a bulletin board in 1990. White supremacist groups have also been proficient at spreading their messages using the memes, language and style that pervade internet subcultures. Beyond setting up sites of their own, they have more recently managed to spread their ideology to online groups that were once largely apolitical, like gaming and sci-fi groups.

And they’ve grown huge. “The white nationalist scene online in America is phenomenally larger than the jihadists’ audience, which tends to operate under the radar,” said Vidhya Ramalingam, the co-founder of Moonshot CVE, a London-based start-up that works with internet companies to combat violent extremism. “It’s just a stunning difference between the audience size.”

After the horror of Charlottesville, internet companies began banning and blocking content posted by right-wing extremist groups. So far their efforts have been hasty and reactive, but Ms. Ramalingam sees it as the start of a wider effort.

“It’s really an unprecedented moment where social media and tech companies are recognizing that their platforms have become spaces where these groups can grow, and have been often unpoliced,” she said. “They’re really kind of waking up to this and taking some action.”

Engage directly with potential recruits.

If tech companies are finally taking action to prevent radicalization, is it the right kind of action? Extremism researchers said that blocking certain content may work to temporarily disrupt groups, but may eventually drive them further underground, far from the reach of potential saviors.

A more lasting plan involves directly intervening in the process of radicalization. Consider The Redirect Method, an anti-extremism project created by Jigsaw, a think tank founded by Google. The plan began with intensive field research. After interviews with many former jihadists, white supremacists and other violent extremists, Jigsaw discovered several important personality traits that may abet radicalization.

One factor is a skepticism of mainstream media. Whether on the far right or ISIS, people who are susceptible to extremist ideologies tend to dismiss outlets like The New York Times or the BBC, and they often go in search of alternative theories online.

Another key issue is timing. There’s a brief window between initial interest in an extremist ideology and a decision to join the cause — and after recruits make that decision, they are often beyond the reach of outsiders. For instance, Jigsaw found that when jihadists began planning their trips to Syria to join ISIS, they had fallen too far down the rabbit hole and dismissed any new information presented to them.

Jigsaw put these findings to use in an innovative way. It curated a series of videos showing what life is truly like under the Islamic State in Syria and Iraq. The videos, which weren’t filmed by news outlets, offered a credible counterpoint to the fantasies peddled by the group — they show people queuing up for bread, fighters brutally punishing civilians, and women and children being mistreated.

[Video: Experiencing the Caliphate, by Upvotely]

Then, to make sure potential recruits saw the videos at the right time in their recruitment process, Jigsaw used one of Google’s most effective technologies: ad targeting. In the same way that a pair of shoes you looked up last week follows you around the internet, Jigsaw’s counterterrorism videos were pushed to likely recruits.

Jigsaw can’t say for sure if the project worked, but it found that people spent lots of time watching the videos, which suggested they were of great interest, and perhaps dissuaded some from extremism.

Moonshot CVE, which worked with Jigsaw on the Redirect project, put together several similar efforts to engage with both jihadists and white supremacist groups. It has embedded undercover social workers in extremist forums who discreetly message potential recruits to dissuade them. And lately it’s been using targeted ads to offer mental health counseling to those who might be radicalized.

“We’ve seen that it’s really effective to go beyond ideology,” Ms. Ramalingam said. “When you offer them some information about their lives, they’re disproportionately likely to interact with it.”

What happens online isn’t all that matters in the process of radicalization. The offline world obviously matters too. Dylann Roof — the white supremacist who murdered nine people at a historically African-American church in Charleston, S.C., in 2015 — was radicalized online. But as a new profile in GQ Magazine makes clear, there was much more to his crime than the internet, including his mental state and a racist upbringing.

Still, these days just about every hate crime and terrorist attack is planned or in some way coordinated online. Ridding the world of all of the factors that drive young men to commit heinous acts isn’t possible. But disrupting the online radicalization machine? With enough work, that may just be possible.

How to Know What Donald Trump Really Cares About: Look at What He’s Insulting – The New York Times

This is a truly remarkable analysis of social media and Donald Trump, rich in data and beautifully charted by Kevin Quealy and Jasmine Lee.

Well worth reading, both in terms of the specifics as well as a more general illustration of social media analysis:

Donald J. Trump’s tweets can be confounding for journalists and his political opponents. Many see them as a master class in diversion, shifting attention to minutiae – “Hamilton” and flag-burning, to name two recent examples – and away from his conflicts of interest and proposed policies. Our readers aren’t quite sure what to make of them, either.

For better or worse, I’ve developed a deep expertise of what he has tweeted about in the last two years. Over the last 11 months, my colleague Jasmine C. Lee and I have read, tagged and sorted more than 14,000 tweets. We’ve found that about one in every nine was an insult of some kind.
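The tag-and-count analysis described above can be sketched roughly as follows. This is an illustration only: the tweet records and tag names here are hypothetical stand-ins, not the authors' actual dataset or schema.

```python
from collections import Counter
from datetime import date

# Hypothetical hand-tagged tweets: (date, set of topic tags).
# In the real analysis, Quealy and Lee read and tagged more
# than 14,000 tweets by hand.
tweets = [
    (date(2016, 11, 20), {"media", "insult"}),
    (date(2016, 11, 20), {"policy"}),
    (date(2016, 11, 21), {"insult", "opponent"}),
    (date(2016, 11, 22), {"rally"}),
]

# Share of tweets tagged as insults ("about one in every nine"
# in the actual dataset).
insults = [t for t in tweets if "insult" in t[1]]
insult_share = len(insults) / len(tweets)

# Insults per day, the kind of daily series behind a
# time-based chart of targets coming and going.
per_day = Counter(d for d, tags in tweets if "insult" in tags)

print(f"insult share: {insult_share:.2f}")
print(sorted(per_day.items()))
```

The heavy lifting in such a project is the manual tagging itself; once each tweet carries labels, the aggregation is a simple filter-and-count.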

This work, mundane as it sometimes is, has helped reveal a clear pattern – one that has not changed much in the weeks since Mr. Trump’s victory.

First, Mr. Trump likes to identify a couple of chief enemies and attack them until they are no longer threatening enough to interest him. He hurls insults at these foils relentlessly, for sustained periods – weeks or months. Jeb Bush, Marco Rubio, Ted Cruz and Hillary Clinton have all held Mr. Trump’s attention in this way; nearly one in every three insults in the last two years has been directed at them.

If Mr. Trump continues to use Twitter as president the way he did as a candidate, we may see new chief antagonists: probably Democratic leaders, perhaps Republican leaders in Congress and possibly even foreign countries and their leaders. For now, the news media – like CNN and The New York Times – is starting to fill that role. The chart at the top of this page illustrates this story very clearly.

That’s not to say that the media is necessarily becoming his next full-time target. Rather, it suggests that one has not yet presented itself. The chart below, which shows the total number of insults per day, shows how these targets have come and gone in absolute terms. An increasing number of insults are indeed being directed at the media, but, for now, those insults are still at relatively normal levels.

[Chart: Insults per day]

Second, there’s a nearly constant stream of insults in the background directed at a wider range of subjects. These insults can be a response to a news event, unfavorable media coverage or criticism, or they can simply be a random thought. These subjects receive short bursts of attention, and inevitably Mr. Trump turns to other things in a day or two. Mr. Trump’s brief feuds with Macy’s, Elizabeth Warren, John McCain and The New Hampshire Union Leader fit this bucket well. The election has not changed this pattern either.