U.S. accounts drive Canadian convoy protest chatter

Of note. While recent concerns have understandably focused on Chinese and Russian government interference, we likely need to pay more attention to the threats from next door, along with the pernicious threats via Facebook and Twitter:

Known U.S.-based sources of misleading information have driven a majority of Facebook and Twitter posts about the Canadian COVID-19 vaccine mandate protest, per German Marshall Fund data shared exclusively with Axios.

Driving the news: Ottawa’s “Freedom Convoy” has ballooned into a disruptive political protest against Prime Minister Justin Trudeau and inspired support among right-wing and anti-vaccine mandate groups in the U.S.

Why it matters: Trending stories about the protest appear to be driven by a small number of voices: top-performing accounts with huge followings are using the protest to drive engagement and inflame emotions around another hot-button issue.

  • “They can flood the zone — making something news and distorting what appears to be popular,” said Karen Kornbluh, senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund. 

What they’re saying: “The three pages receiving the most interactions on [convoy protest] posts — Ben Shapiro, Newsmax and Breitbart — are American,” Kornbluh said. Other pages with the most action on convoy-related posts include Fox News, Dan Bongino and Franklin Graham.

  • “These major online voices with their bullhorns determine what the algorithm promotes because the algorithm senses it is engaging,” she said.
  • Using a platform’s design to orchestrate anti-government action mirrors how the “Stop the Steal” groups worked around the Jan. 6 Capitol riot, with a few users quickly racking up massive followings, Kornbluh said.

By the numbers: Per German Marshall Fund data, from Jan. 22, when the protests began, to Feb. 12, there were 14,667 posts on Facebook pages about the Canadian protests, getting 19.3 million interactions (including likes, comments and shares).

  • For context: The Beijing Olympics had 20.9 million interactions in that same time period.
  • On Twitter, from Feb. 3 to Feb. 13, tweets about the protests have been favorited at least 4.1 million times and retweeted at least 1.1 million times.
  • Pro-convoy videos on YouTube have racked up 47 million views, with Fox News’ YouTube page getting 29.6 million views on related videos.

The big picture: New research published in the Atlantic finds that most public activity on Facebook comes from a “tiny, hyperactive group of abusive users.”

  • Since user engagement remains the most important factor in Facebook’s weighting of content recommendations, the researchers write, the most abusive users will wield the most influence over the online conversation.
  • “Overall, we observed 52 million users active on these U.S. pages and public groups, less than a quarter of Facebook’s claimed user base in the country,” the researchers write. “Among this publicly active minority of users, the top 1 percent of accounts were responsible for 35 percent of all observed interactions; the top 3 percent were responsible for 52 percent. Many users, it seems, rarely, if ever, interact with public groups or pages.”
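The concentration figure the researchers report can be illustrated with a short sketch. The account counts below are invented toy data; only the method of computing a top-x% share of interactions is shown:

```python
# Toy illustration of interaction concentration: what share of all
# interactions comes from the top 1% / 3% of publicly active accounts.
# The counts are made up; only the computation is meaningful.

def top_share(interactions, fraction):
    """Share of total interactions produced by the top `fraction` of accounts."""
    ranked = sorted(interactions, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# A few hyperactive accounts plus a long tail of quiet ones.
counts = [500, 400, 300] + [5] * 97

print(round(top_share(counts, 0.01), 2))  # → 0.3
print(round(top_share(counts, 0.03), 2))  # → 0.71
```

Even on this toy data, a handful of hyperactive accounts dominates the observed totals, which is the dynamic the researchers describe.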

Meanwhile, foreign meddling is further confusing the narrative around the trucker protest.

  • NBC News reported that overseas content mills in Vietnam, Bangladesh, Romania and other countries are powering Facebook groups promoting American versions of the trucker convoys. Facebook took many of the pages down.
  • A report from Grid News found a Bangladeshi digital marketing firm was behind two of the largest Facebook groups related to the Canadian Freedom Convoy before they were removed from the platform.
  • Grid News reported earlier that Facebook groups supporting the Canadian convoy were being administered by a hacked Facebook account belonging to a Missouri woman.

Source: U.S. accounts drive Canadian convoy protest chatter

Blanchet’s choice to block critics on Twitter limits free speech: experts

Snowflake?

Dozens of people — including some MPs — say Bloc Québécois Leader Yves-François Blanchet has blocked them on Twitter after they criticized his statements about Transport Minister Omar Alghabra, with some arguing they have a right to be heard.

Nour El Kadri, the president of the Canadian Arab Federation who was among those blocked by Blanchet on the social media platform, said people should be able to respond to accusations made by politicians.

Last week, after Alghabra, born to a Syrian family in Saudi Arabia, was sworn in as federal transport minister, the Bloc issued a release that sought to sow doubt about his association with what it called the “political Islam movement” due to the minister’s former role as head of the Canadian Arab Federation.

El Kadri tweeted at Blanchet to say the Canadian Arab Federation has been a secular organization under its constitution since it was founded in 1967.

“(I told him) it’s secular like Quebec that you’re asking for, then he blocked me,” he said.

“He started to block other people who were voicing opposing opinions.”

On Twitter, Blanchet argued against the idea that he was robbing anyone of their right to free expression.

“When I ‘block’ people, it’s because their posts don’t interest me (fake accounts, political staff, insults …),” he wrote in French last Thursday.

“That does not prevent them from publishing them. I just won’t see them, nor they mine,” he said, adding things are calmer this way.

Richard Moon, a law professor at the University of Windsor, said it is credible to claim that Blanchet infringed the charter-guaranteed right to freedom of expression of those who can no longer see or comment on his tweets.

While Twitter is not itself subject to the charter as a private entity, Moon said, when a politician uses it as a platform to make announcements and discuss political views, the politician’s account becomes a public platform.

“To exclude someone from responding or addressing because of their political views could then be understood as a restriction on their freedom of expression,” he said.

Duff Conacher, co-founder of the pro-transparency group Democracy Watch, said Blanchet’s Twitter account is a public communication channel and he cannot decide arbitrarily to not allow voters to communicate with him there.

“Politicians are public employees, so they can’t just cut people off from seeing what they’re saying through one of their communication channels,” he said.

“The public has a right to see all their communications.”

Ottawa Mayor Jim Watson faced a lawsuit in 2018 from local activists he had blocked on Twitter. The court action was dropped after Watson conceded his account is public and unblocked everyone he had blocked, so no legal precedent was set.

Blanchet has also blocked some fellow members of Parliament, including Quebec Liberal Greg Fergus, Ontario New Democrat Matthew Green and Manitoba New Democrat Leah Gazan.

Green said he found himself blocked after he criticized Blanchet’s statements defending university professors’ right to use the N-word.

Gazan accused Blanchet last week of racism and Islamophobia over his statement about Alghabra.

“When criticized, he refuses to engage in a conversation, and a conversation he clearly needs to have around his Islamophobia,” she said.

Source: Blanchet’s choice to block critics on Twitter limits free speech: experts

Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

Big oops:

Twitter has apologized after users called its ‘image-cropping’ algorithm racist for automatically focusing on white faces over Black ones.

Users noticed that when two separate photos, one of a white face and the other of a Black face, were displayed in the post, the algorithm would crop the latter out and only show the former on its mobile version.
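The behavior users observed is consistent with saliency-based cropping: a model scores candidate regions of the image, and the preview keeps only the highest-scoring one. A minimal sketch, with a stand-in scoring function rather than Twitter's actual model:

```python
# Toy sketch of saliency-based cropping: score candidate regions of an
# image and keep only the highest-scoring one in the preview.
# The region names and scores are hypothetical stand-ins.

def crop_preview(regions, saliency_score):
    """Pick the single region with the highest saliency score for the preview."""
    return max(regions, key=saliency_score)

regions = ["face_top", "face_bottom"]
scores = {"face_top": 0.9, "face_bottom": 0.4}  # pretend model outputs

print(crop_preview(regions, scores.get))  # the preview shows only one region
```

If the model's scores are systematically skewed, the same mechanical "pick the max" step produces the skewed crops users reported.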

PhD student Colin Madland was among the first to point out the issue on Sept. 18, after a Black colleague asked him to help stop Zoom from removing his head while using a virtual background. Madland attempted to post a two-up display of him and his colleague with the head erased and noticed that Twitter automatically cropped his colleague out and focused solely on his face.

“Geez .. any guesses why @Twitter defaulted to show only the right side of the picture on mobile?” he tweeted along with a screenshot.

Entrepreneur Tony Arcieri experimented with the algorithm using a two-up image of Barack Obama and U.S. Senator Mitch McConnell. He discovered that the algorithm would consistently crop out Obama and instead show two images of McConnell.

Several other Twitter users also tested the feature out and noticed that the same thing happened with stock models, different characters from The Simpsons, and golden and black retrievers.

Dantley Davis, Twitter’s chief design officer, replied to Madland’s tweet and suggested his facial hair could be affecting the model “because of the contrast with his skin.”

Davis, who said he experimented with the algorithm after seeing Madland’s tweet, added that once he removed Madland’s facial hair from the photo, the Black colleague’s image showed in the preview.

“Our team did test for racial bias before shipping this model,” he said, but noted that the issue is “100% (Twitter’s) fault.” “Now the next step is fixing it,” he wrote in another tweet.

In a statement, a Twitter spokesperson conceded the company had some further testing to do. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate,” they said, as quoted by the Guardian.

Source: Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

Social Media Giants Support Racial Justice. Their Products Undermine It. Shows of support from Facebook, Twitter and YouTube don’t address the way those platforms have been weaponized by racists and partisan provocateurs.

Of note. “Thoughts and prayers” rather than action:

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet, and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off groups recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.
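The "viral ceiling" proposal could look something like the following sketch. The topic list, threshold, and post format are all hypothetical, invented here to make the idea concrete:

```python
# Minimal sketch of a "viral ceiling": once a post on a sensitive topic
# passes a distribution cap, algorithmic amplification stops (organic
# sharing still works). Topics and threshold are hypothetical.

SENSITIVE_TOPICS = {"election", "protest", "public-health"}
VIRAL_CEILING = 100_000  # max algorithmically boosted impressions

def should_amplify(post):
    """Return True if the ranking system may keep boosting this post."""
    if post["topic"] in SENSITIVE_TOPICS and post["impressions"] >= VIRAL_CEILING:
        return False  # capped: no further recommendation or trending placement
    return True

print(should_amplify({"topic": "protest", "impressions": 250_000}))  # False
print(should_amplify({"topic": "cooking", "impressions": 250_000}))  # True
```

The design choice is that the cap applies only to the platform's own amplification, so it limits trolls' ability to hijack a conversation without removing any speech.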

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.

Source: Social Media Giants Support Racial Justice. Their Products Undermine It.

Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump, Twitter contrast

Needed backlash at what can only be described as business-motivated collusion, one that becomes harder and harder to justify from any perspective:

Facebook is facing an unusually public backlash from its employees over the company’s handling of President Trump’s inflammatory posts about protests in the police killing of George Floyd, an unarmed black man in Minneapolis.

At least a dozen employees, some in senior positions, have openly condemned Facebook’s lack of action on the president’s posts and CEO Mark Zuckerberg’s defense of that decision. Some employees staged a virtual walkout Monday.

“Mark is wrong, and I will endeavor in the loudest possible way to change his mind,” tweeted Ryan Freitas, director of product design for Facebook’s news feed.

“I work at Facebook and I am not proud of how we’re showing up,” tweeted Jason Toff, director of product management. “The majority of coworkers I’ve spoken to feel the same way. We are making our voice heard.”

The social network also is under intense pressure from civil rights groups, Democrats and the public over its decision to leave up posts from the president that critics say violate Facebook’s rules against inciting violence. These included a post last week about the protests in which the president said, “when the looting starts, the shooting starts.”

Twitter, in contrast, put a warning label on a tweet in which the president said the same thing, saying it violated rules against glorifying violence.

The move escalated a feud with the president that started when the company put fact-checking labels on two of his tweets earlier in the week. Trump retaliated by signing an executive order that attempts to strip online platforms of long-held legal protections.

Zuckerberg has long said he believes the company should not police what politicians say on its platform, arguing that political speech is already highly scrutinized. In a post Friday, the Facebook CEO said he had “been struggling with how to respond” to Trump’s posts.

“Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric,” he wrote. “I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”

Zuckerberg said Facebook had examined the post and decided to leave it up because “we think people need to know if the government is planning to deploy force.” He added that the company had been in touch with the White House to explain its policies. Zuckerberg spoke with Trump by phone Friday, according to a report published by Axios.

While Facebook’s 48,000 employees often debate policies and actions within the company, it is unusual for staff to take that criticism public. But the decision not to remove Trump’s posts has caused significant distress within the company, which is spilling over into public view.

“Censoring information that might help people see the complete picture *is* wrong. But giving a platform to incite violence and spread disinformation is unacceptable, regardless who you are or if it’s newsworthy,” tweeted Andrew Crow, head of design for the company’s Portal devices. “I disagree with Mark’s position and will work to make change happen.”

Several employees said on Twitter they were joining Monday’s walkout.

“Facebook’s recent decision to not act on posts that incite violence ignores other options to keep our community safe,” tweeted Sara Zhang, a product designer.

In a statement, Facebook spokesman Joe Osborne said: “We recognize the pain many of our people are feeling right now, especially our Black community. We encourage employees to speak openly when they disagree with leadership. As we face additional difficult decisions around content ahead, we’ll continue seeking their honest feedback.”

Less than 4% of Facebook’s U.S.-based staff are African American, according to the company’s most recent diversity report.

Facebook will not make employees participating in the walkout use paid time off, and it will not discipline those who participate.

On Sunday, Zuckerberg said the company would commit $10 million to groups working on racial justice. “I know Facebook needs to do more to support equality and safety for the Black community through our platforms,” he wrote.

Source: Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump

And Maureen Dowd’s call for Twitter to take Trump off the platform:

C’mon, @Jack. You can do it.

Throw on some Kendrick Lamar and get your head in the right space. Pour yourself a big old glass of salt juice. Draw an ice bath and fire up the cryotherapy pod and the infrared sauna. Then just pull the plug on him. You know you want to.

You could answer the existential question of whether @realDonaldTrump even exists if he doesn’t exist on Twitter. I tweet, therefore I am. Dorsey meets Descartes.

All it would take is one sweet click to force the greatest troll in the history of the internet to meet his maker. Maybe he just disappears in an orange cloud of smoke, screaming, “I’m melllllllting.”

Do Trump — and the world — a favor and send him back into the void whence he came. And then go have some fun: Meditate and fast for days on end!

Our country is going through biological, economic and societal convulsions. We can’t trust the powerful forces in this nation to tell us the truth or do the right thing. In fact, not only can we not trust them. We have every reason to believe they’re gunning for us.

In Washington, the Trump administration’s deception about the virus was lethal. On Wall Street and in Silicon Valley, the fat cats who carved up the country, drained us dry and left us with no safety net profiteered off the virus. In Minneapolis, the barbaric death of George Floyd after a police officer knelt on him for almost nine minutes showed yet again that black Americans have everything to fear from some who are charged with protecting them.

As if that weren’t enough, from the slough of our despond, we have to watch Donald Trump duke it out with the lords of the cloud in a contest to see who can destroy our democracy faster.

I wish I could go along with those who say this dark period of American life will ultimately make us nicer and simpler and more contemplative. How can that happen when the whole culture has been re-engineered to put us at each other’s throats?

Trump constantly torques up the tribal friction and cruelty, even as Twitter and Facebook refine their systems to ratchet up rage. It is amazing that a septuagenarian became the greatest exploiter of social media. Trump and Twitter were a match made in hell.

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

Why not just let all the bots trying to undermine our elections and spreading false information about the coronavirus and right-wing conspiracy theories and smear campaigns run amok? Sure, we’re weakening our society, but the weird, infantile maniacs running Silicon Valley must be allowed to rake in more billions and finish their mission of creating a giant cyberorganism of people, one huge and lucrative ball of rage.

“The shareholders of Facebook decided, ‘If you can increase my stock tenfold, we can put up with a lot of rage and hate,’” says Scott Galloway, professor of marketing at New York University’s Stern School of Business.

“These platforms have very dangerous profit motives. When you monetize rage at such an exponential rate, it’s bad for the world. These guys don’t look left or right; they just look down. They’re willing to promote white nationalism if there’s money in it. The rise of social media will be seen as directly correlating to the decline of Western civilization.”

Dorsey, who has more leeway because his stock isn’t as valuable as Facebook’s, made some mild moves against the president, who has been spewing lies and inciting violence on Twitter for years. He added footnotes clarifying false Trump tweets about mail-in ballots and put a warning label on the president’s tweet about the Minneapolis riots, which echoed the language of a Miami police chief in 1967 and of segregationist George Wallace: “When the looting starts, the shooting starts.”

“Jack is really sincerely trying to find something to make it better,” said one friend of the Twitter chief’s. “He’s like somebody trapped in a maze, going down every hallway and turning every corner.”

Zuckerberg, on the other hand, went on Fox to report that he was happy to continue enabling the Emperor of Chaos, noting that he did not think Facebook should be “the arbiter of truth of everything that people say online.”

It was a sickening display that made even some loyal Facebook staffers queasy. As The Verge’s Casey Newton reported, some employees objected to the company’s rationale in internal posts.

“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one wrote. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”

Trump, furious that Dorsey would attempt to rein him in on the very platform that catapulted him into the White House, immediately decided to try to rein in Dorsey.

He signed an executive order that might strip liability protection from social media sites, which would mean they would have to more assiduously police false and defamatory posts. Now that social media sites are behemoths, Galloway thinks that the removal of the Communications Decency Act makes a lot of sense even if the president is trying to do it for the wrong reasons.

Trump does not seem to realize, however, that he’s removing his own protection. He huffs and puffs about freedom of speech when he really wants the freedom to be vile. “It’s the mother of all cutting-off-your-nose-to-spite-your-face moves,” says Galloway.

The president wants to say things on Twitter that he will not be allowed to say if he exerts this control over Twitter. In a sense, it’s Trump versus his own brain. If Twitter can be sued for what people say on it, how can Trump continue to torment? Wouldn’t thousands of his own tweets have to be deleted?

“He’d be the equivalent of a slippery floor at a store that sells equipment for hip replacements,” says Galloway, who also posits that, in our hyper-politicized world, this will turn Twitter into a Democratic site and Facebook into a Republican one.

Nancy Pelosi, whose district encompasses Twitter, said that it did little good for Dorsey to put up a few fact-checks while letting Trump’s rants about murder and other “misrepresentations” stay up.

“Facebook, all of them, they are all about making money,” the speaker said. “Their business model is to make money at the expense of the truth and the facts.” She crisply concluded that “all they want is to not pay taxes; they got their tax break in 2017” and “they don’t want to be regulated, so they pander to the White House.”

C’mon, Jack. Make @realDonaldTrump melt to help end our meltdown.

Source: Think Outside the Box, Jack

Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Whether by humans or AI, not an easy thing to do consistently and appropriately:

Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violates their rules, as tech workers were sent home to slow the spread of the coronavirus.

But that shift could mean more mistakes — some posts or videos that should be taken down might stay up, and others might be incorrectly removed. It comes at a time when the volume of content the platforms have to review is skyrocketing, as they clamp down on misinformation about the pandemic.

Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.

“We’re seeing that play out in real time at a scale that I think a lot of the companies probably didn’t expect at all,” said Graham Brookie, director and managing editor of the Atlantic Council’s Digital Forensic Research Lab.

Facebook CEO Mark Zuckerberg told reporters that automated review of some content means “we may be a little less effective in the near term while we’re adjusting to this.”

Twitter and YouTube are also sounding caution about the shift to automated moderation.

“While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” Twitter said in a blog post. It added that no accounts will be permanently suspended based only on the actions of the automated systems.

YouTube said its automated systems “are not always as accurate or granular in their analysis of content as human reviewers.” It warned that more content may be removed, “including some videos that may not violate policies.” And, it added, it will take longer to review appeals of removed videos.

Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their sites and apps for material that breaks their rules, from spam and nudity to hate speech and violence. Many moderators are not full-time employees of the companies, but contractors who work for staffing firms.

Now those workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.

For the most sensitive categories, including suicide, self-injury, child exploitation and terrorism, Facebook says it’s shifting work from contractors to full-time employees — and is ramping up the number of people working on those areas.

There are also increased demands for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.

The World Health Organization calls the situation an infodemic, where too much information, both true and false, makes it hard to find trustworthy information.

The tech companies “are dealing with more information with less staff,” Brookie said. “Which is why you’ve seen these decisions to move to more automated systems. Because frankly, there’s not enough people to look at the amount of information that’s ongoing.”

That makes the platforms’ decisions right now even more important, he said. “I think that we should all rely on more moderation rather than less moderation, in order to make sure that the vast majority of people are connecting with objective, science-based facts.”

Some Facebook users raised alarm that automated review was already causing problems.

When they tried to post links to mainstream news sources like The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.

Facebook said the posts were erroneously flagged as spam due to a glitch in its automated spam filter.

Zuckerberg denied the problem was related to shifting content moderation from humans to computers.

“This is a completely separate system on spam,” he said. “This is not about any kind of near-term change, this was just a technical error.”

Source: Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows

Nice to see this kind of social media analysis. But depressing the reliance on negative attacks by both major parties:

The Conservatives lead other major federal parties in the amount of negative attacks on Twitter and in press releases this campaign, but at the midpoint of a close race the Liberals are increasingly turning negative, an analysis by CBC News shows.

CBC News analyzed more than 1,800 press releases and tweets from official party and party leader accounts since the start of the campaign. We categorized them as either positive, negative, both positive and negative or neutral. (See methodology below.)

Overall, the Conservatives have put out the highest volume of negative communications to date, the analysis revealed. The party tends to put out communications attacking the Liberals about as often as it promotes its own policies.

That doesn’t mean the Conservatives were the only party to go negative early on. At the outset of the campaign, the Liberals went after Conservative Leader Andrew Scheer on Twitter for his 2005 stance on same-sex marriage and other Conservative candidates for anti-abortion views or past social media missteps.

But almost half (47 per cent) of Conservative communications have been negative or partly negative. The share of negative messages is 37 per cent for the NDP, 26 per cent for the Liberals, 18 per cent for the Greens and 13 per cent for the Bloc Quebecois, which has run the most positive campaign.

Liberals, NDP step up attacks

While the Conservatives have been consistently negative since the start of the campaign, other parties have become markedly more so in the last two weeks.

The uptick in attacks appears to be driven by two factors: the climate marches across the country on Sept. 20 and 27 and the French-language debate hosted by TVA on Oct. 2.

The NDP and Greens took aim at the Liberals’ environmental record around the time of the climate marches. It was also during the last week of September that the Liberals announced a number of environmental policies they would enact if re-elected, which were promptly criticized by the NDP and Greens.

The tone of Liberal communications turned markedly critical during the TVA French-language debate on Oct. 2. This was the first debate Liberal Leader Justin Trudeau took part in, and the Liberal war room put out press releases and tweets countering statements made during the debate by Scheer and the Bloc Quebecois’ Yves-Francois Blanchet.

The TVA debate also marked the first instance during the campaign of the Liberals targeting a party other than the Conservatives with critical tweets and press releases. The party took the Bloc leader to task over his environmental record, among other things.

Liberals the target of most attacks

The Liberals were the target of more than two-thirds (70 per cent) of negative or partly negative communications.

The Conservatives have yet to target a party other than the Liberals with a critical press release or tweet.

The Liberals also have been the primary target of the NDP and, to a lesser extent, the Greens.

While these two parties may be closer ideologically to the Liberals than to the Conservatives, the NDP and Greens are focused on stopping progressive voters from rallying around the Liberals. University of B.C. political science professor Gerald Baier said this reflects a coordination problem on the centre-left.

“The NDP and Greens, I think, would presumably prefer the Liberal record on the environment to what the Conservatives would do, but at the same time their main points are against the existing government,” he said.

The lack of Liberal attacks on the NDP and the Greens is telling, Baier said.

“It suggests that they know that their path to a majority to some degree is to appeal to some of those NDP and Green voters,” he said.

It also could be because the Liberals may need the support of those parties to govern in a minority Parliament, Baier added.

NDP and Green attacks against the Liberals have focused largely on the environment, while the Conservatives have zeroed in on themes of accountability, taxes and spending.

Environment, taxes the two biggest themes

Much of the official party communications focus on the campaign trail, specific candidates and where the party leaders are.

The two policy exceptions are the environment — a popular subject for all parties except the Conservatives — and tax policy, on which the Conservatives have focused. Affordability and housing are also common themes.

Methodology

CBC News analyzed every press release and tweet from official party and party leader accounts since the start of the campaign. We categorized each communication as positive (if the focus was promoting a party’s own policies or candidates), negative (if the focus was criticizing another party), both positive and negative (if the communication was equally split between the two) or neutral (leader itineraries, event announcements). We also kept track of the topics of communications and who, if anyone, was targeted.

We did not include retweets and treated identical tweets in English and French as one communication.

To keep the project’s scope manageable, the methodology excludes other platforms such as Facebook, YouTube, radio, television and print ads.
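
As a rough illustration of the tallying step this methodology describes (this is not CBC’s actual code; the labels and sample data below are invented), the per-category counts and the “negative or partly negative” share might be computed like this:

```python
from collections import Counter

# Hypothetical hand-applied labels, one per communication, using the four
# categories described in the methodology. These sample labels are invented,
# not CBC's data.
labels = [
    "negative", "positive", "both", "neutral",
    "negative", "positive", "negative", "both",
]

counts = Counter(labels)
total = len(labels)

# "Negative or partly negative" combines the negative and both categories,
# mirroring how the article reports its percentages.
negative_share = (counts["negative"] + counts["both"]) / total
print(f"{negative_share:.0%} of communications negative or partly negative")
```

The same tally, run over each party’s communications separately, would yield the per-party percentages the article reports.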

Source: Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows

MacDougall: Journalists are addicted to Twitter, and it’s poisoning their journalism

Valid points by MacDougall. Another observation, which journalists can correct as necessary, is the degree to which social media cuts into their time for more detailed investigation and reporting, resulting in less deep coverage of issues:

What’s the problem with the media?

Ping a journalist that question, and you’ll get back chapter and verse about the money problems facing newsrooms and the indifference of advertising-stealing platforms such as Facebook and Google.

Ask a random bloke on the street, however, and there’s a good chance the answer will be “bias” or “trust,” as in: “I don’t trust the press, they’re all biased.”

Ah, yes. The “fake” news. The “enemies of the people.” It’s not the best time to be repping the fourth estate.

The question now is how the press should fix their dismal approval ratings. A good start would be to stop being their own worst enemies. And a good place to start with that is ditching social media. It’s simply too easy for opinions to slip into posts that would never make it into news copy, leading to perceptions of bias.

Reporters should instead treat social media like the poison it is. For one, it’s not a representative sample of the public. Nor is the “shoot-first, think-later” mentality it encourages conducive to good journalism. Most importantly, social media reveals far too much of a reporter’s own bias to the people they cover and the people who read that coverage.

The ability of social media to reveal reporter bias has been apparent for years, but it’s shifted into overdrive now that Donald Trump has turned Twitter into grotesque political performance art, dragging an enraged global press corps with him, most of whom tweet their disgust or puzzlement at what the president does every day. And it’s affecting political journalism in every country. A day now isn’t a day without reporters broadcasting hot takes that risk tainting the coverage they ultimately provide.

And while it’s true most media organizations have guidelines or social media codes of conduct — most of which prohibit opining — they are largely self-enforced. Stretched editors simply can’t track their charges all day long on Twitter.

Forget about columnists, who are paid to give their opinion; it’s a mystery why straight news reporters would want to reveal anything about themselves or their views on public policy. Most politicians already think the press is biased — why risk confirming it for them in real-time?

Why, for example, would a freelance journalist want Conservative Leader Andrew Scheer to know that the journalist considers Scheer’s views on government “a ridiculous collection of straw men”? They might be, but good luck convincing Scheer’s people that anything you ever write will be a fair shake.

Sadly, it’s not just the smaller fish in the profession who blunder in this way; the problem reaches up much higher.

Lots of people heaped scorn on Maxime Bernier’s clumsy foray into multiculturalism on Twitter before his split from the Conservative party, but did one of them really need to be the senior broadcast producer of Canada’s most-watched television news broadcast?

And then there was Rosemary Barton of the CBC, who suggested on Twitter that her network didn’t have a clue about Bernier’s motives for tweeting about diversity, even though reporter Evan Dyer implied in his report that the one-year anniversary of the alt-right march in Charlottesville had informed Bernier’s timing, if not his thinking.

These examples are the kind of clever or knowing things journalists have always said to each other or their subjects. In private. Now they fire away for all to see. And for what? A bushel of RTs and “likes”?

Ten years or so into the folly of social media, it should by now be clear that it’s the ranters and shouters who get the most clicks, not the neutral observer. Reporters should stop trying to play the game.

They should instead go back to being a mystery. To valuing personal scarcity over ubiquity. To ditching Twitter, and forgetting Facebook. Or, at least limiting appearances there to the posting of their work. They should also say “no” to shouty panel appearances alongside partisans.

Reporters might even find the lack of distraction focuses them on their work. And if a politician’s B.S. needs to be called out in real-time, reporters should have an editor or colleague peek over their shoulder to give them a sense check on tone. Because even super-fact checkers such as Daniel Dale of the Toronto Star can appear biased owing to the sheer volume of material they post to their channels. And most reporters aren’t dedicated super-fact checkers, they’re just smart people with opinions, ones the news-consuming public shouldn’t know.

Political journalism is at a crossroads. Reporters need to keep doing their valuable work. But do the work, full stop. Keep your opinions to yourself. More people will believe the good work you do if they have no idea who in the hell you are, or what you think about what’s going on.

ADL tallies up roughly 4 million anti-Semitic tweets in 2017

It would be nice to have comparative data with respect to different religions, just as we do for police-reported hate crimes:

At least 4.2 million anti-Semitic tweets were shared or re-shared from roughly 3 million Twitter accounts last year, according to an Anti-Defamation League report released Monday. Most of those accounts are believed to be operated by real people rather than automated software known as bots, the organization, an international NGO that works against anti-Semitism and bigotry, said.

The anti-Semitic accounts constitute less than 1% of the roughly 336 million active accounts on Twitter.
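
A quick sanity check of that share (the account figures are from the article; the arithmetic itself is just illustrative):

```python
# Figures cited in the article: roughly 3 million accounts behind the tweets,
# out of roughly 336 million active Twitter accounts.
offending_accounts = 3_000_000
active_accounts = 336_000_000

share = offending_accounts / active_accounts
print(f"{share:.2%}")  # just under 0.9%, consistent with "less than 1%"
```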

“This new data shows that even with the steps Twitter has taken to remove hate speech and to deal with those accounts disseminating it, users are still spreading a shocking amount of anti-Semitism and using Twitter as a megaphone to harass and intimidate Jews,” said ADL CEO Jonathan Greenblatt in a statement.

The report comes amid growing concern about harassment on social media platforms such as Twitter and Facebook, as well as their roles in spreading fake news. Both companies are trying to curb hatred on their platforms while preserving principles of free speech and expression. Last month, Facebook CEO Mark Zuckerberg was summoned to Washington to testify in front of Congress, in part out of concern over how the social network was used to spread propaganda during the 2016 presidential campaign.

Twitter CEO Jack Dorsey has publicly made harassment on the social network a priority, even soliciting ideas for combatting the problem from the public. In March, Dorsey held a livestream to discuss how to deal with the issue. The company has made changes, such as prohibiting offensive account names or better enforcing its terms of service.

Twitter didn’t immediately respond to a request for comment.

The ADL report evaluated tweets on subjects ranging from Holocaust denial and anti-Jewish slurs to positive references to anti-Semitic figures, books and podcasts. The ADL also tallied the use of coded words and symbols, such as the triple parenthesis, which is put around names to signal someone is Jewish.

The study used a dataset of roughly 55,000 tweets, which were screened by a team of researchers for indications of anti-Semitism. Since this is the first report of its kind from the group, there are no earlier figures for comparison, though the ADL did release a report on the targeting of journalists during the 2016 election that also included Twitter data.

The ADL says that artificial intelligence and algorithms will eventually be effective at identifying hate online, but human input is needed to train such systems. For example, screeners can teach machines when anti-Semitic language might have been used to express opposition to such ideas or in an ironic manner.

Such issues aren’t simply hypothetical. The ADL pointed to the huge volume of tweets about anti-Semitism that were posted during the week of the Charlottesville, Virginia riots last summer. Though Twitter saw the highest volume of tweets about anti-Semitism for the year, only a small percentage were actually anti-Semitic.

The report noted the ADL works with Twitter on the issues of anti-Semitism and bigotry online. Greenblatt said the organization is “pleased that Twitter has already taken significant steps to respond to this challenge.”

Source: ADL tallies up roughly 4 million anti-Semitic tweets in 2017

How to Prevent Smart People From Spreading Dumb Ideas – The New York Times

I really liked Socolow’s three rules, particularly relevant for Twitter:

  1. No link? Not news!
  2. I knew it!
  3. Why am I talking?

We have a serious problem, and it goes far beyond “fake news.” Too many Americans have no idea how to properly read a social media feed. As we’re coming to learn more and more, such ignorance seems to be plaguing almost everybody — regardless of educational attainment, economic class, age, race, political affiliation or gender.

Some very smart people are helping to spread some very dumb ideas.

We all know this is a problem. The recent federal indictment of a Russian company, the Internet Research Agency, lists the numerous ways Russian trolls and bots created phony events and leveraged social media to sow disruption throughout the 2016 presidential election. New revelations about Cambridge Analytica’s sophisticated use of Facebook data to target unsuspecting social media users remind us how complex the issue has become. Even the pope has weighed in, using his bully pulpit to warn the world of this new global evil.

But there are some remarkably easy steps that each of us, on our own, can take to address this issue. By following these three simple guidelines, we can collaborate to help solve a problem that’s befuddling the geniuses who built Facebook and Twitter.

If the problem is crowdsourced, then it seems obvious the solution will have to be crowdsourced as well. With that in mind, here are three easy steps each of us can take to help build a better civic polity. This advice will also help each of us look a little less foolish.

1. No link? Not news! Every time somebody tweets “BREAKING” a little bell should go off in your head. Before you even read the rest of the news, look for the link. Average Americans almost never break news about big stories. Even most professional journalists lack the sources and experience to quickly verify sensational information. If news breaks on a truly important story, there should be a link to a credible news source. But I still regularly see tweets that have no connection to reality being retweeted thousands of times by people who should know better. Here’s but one example of completely fictional “news” that was retweeted over 46,000 times. It involved Haiti’s supposed reaction to President Trump’s recent insult.

It was retweeted by the Harvard law professor Laurence Tribe. His retweet was retweeted over 2,000 times.

Yet there’s no evidence anywhere that Haiti’s “high court” did this. There was no “emergency” session and there was no “agreement to unseal & release documents.” The event is fabricated. Remember: No link? Not news!

2. I knew it! If breaking news on social media aligns perfectly with your carefully structured view of the world, then pause before liking it or retweeting it. Why? Because you — like most of us — have curated a personal news feed to confirm things you already suspected or “knew.” If you didn’t do this yourself (by unfriending people who dared argue politics with you on your feed), Facebook and Twitter are doing it for you. They structure your timeline to make it as agreeable as possible. Cambridge Analytica’s success was premised on building a distribution system tailored to precisely exploit the biases and preconceptions of specified Facebook users.

But Cambridge Analytica is the symptom, not the disease. The larger problem is that unpleasant and frustrating information — no matter how accurate — is actively hidden from you to maximize your social media engagement. George Orwell once noted that he became a writer because he possessed “a facility with words and a power of facing unpleasant facts.” There’s no place for “unpleasant facts” in our social media universe. Were Orwell alive today, he’d remind us of the terrible political costs caused by this devolution in our informational habits.

3. Why am I talking? My wife is a psychotherapist, and occasionally I skim her Psychotherapy Networker magazine. I read a piece by a therapist who realized his most effective communicative moments often occurred when he asked himself a simple question: “Why am I talking?” Inevitably this question shut him up and allowed him to absorb much more information. “Why am I talking” works out to a great acronym: WAIT. If we all just asked ourselves this simple question immediately before posting or retweeting, we’d all be better off. There are numerous reasons to participate in the public sphere, and everyone can contribute something valuable. But there’s also far too much noise out there, and we need to think more seriously and realistically about the added value of our own communication.

These are three simple rules. Of course, they contradict every mechanism Facebook and Twitter use to encourage our behavior on social media. Being more skeptical, engaging more selectively and prioritizing links to information providers outside our social media silos will hurt the bottom line of the social media giants. Using social media in a more responsible manner might ultimately leave these companies to rot away as they cede their civic responsibilities to the Russian trolls and bots dedicated to polluting our discourse. If they won’t act, it’s up to us. If we’re collectively smarter and more skeptical about social media as an information delivery device, it will ultimately lessen the influence that these corporations and trolls have on our civic governance.

via How to Prevent Smart People From Spreading Dumb Ideas – The New York Times