Facebook Bans Holocaust Denial, Reversing Earlier Policy

Long overdue. Similar action needs to be taken with respect to other forms of racism and hate on Facebook and other platforms:

Facebook is banning all content that “denies or distorts the Holocaust,” in a policy reversal that comes after increased pressure from critics.

Just two years ago, founder and chief executive Mark Zuckerberg said in an interview that even though he finds such posts “deeply offensive,” he did not believe Facebook should take them down. Zuckerberg has said on numerous occasions that Facebook shouldn’t be forced to be the arbiter of truth on its platform, but rather allow a wide range of speech.

In a Facebook post on Monday, Zuckerberg said his thinking has “evolved” because of data showing an increase in anti-Semitic violence. The company said the move was also a response to an “alarming” level of ignorance about the Holocaust, especially among young people. It pointed to a recent survey that found almost a quarter of people in the US aged 18 to 39 said they believed the Holocaust was a myth or had been exaggerated, or were not sure about the genocide.

“I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust,” Zuckerberg wrote. “Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward, but with the current state of the world, I believe this is the right balance.”

Facebook has been under increased pressure to act more aggressively on hate speech, misinformation and other harmful content. The company has recently strengthened its rules to prohibit anti-Semitic stereotypes, and banned accounts related to militia groups and QAnon, a baseless conspiracy theory movement.

This summer, a group of Holocaust survivors, organized by the Conference on Jewish Material Claims Against Germany, launched a social media campaign urging Zuckerberg to remove Holocaust denial from Facebook.

On Monday, the group tweeted: “Survivors spoke! Facebook listened.”

In addition to removing Holocaust-denying posts, Facebook will begin directing users who search for terms associated with the Holocaust or its denial to “credible information” off the platform later this year, Monika Bickert, head of content policy, said in a blog post. She said it would take “some time” to train Facebook’s enforcement systems to enact the change.

Critics say the big question is how effectively Facebook polices its rules.

“We are seeing a trend toward Facebook listening to their critics and ultimately doing the right thing. That’s a trend we need to encourage,” Jonathan Greenblatt, CEO of the Anti-Defamation League, which has been pushing Facebook to crack down on Holocaust deniers for years, told NPR.

“Ultimately, Facebook will be judged not on the promises they make, but on how they keep those promises,” he said.

Source: Facebook Bans Holocaust Denial, Reversing Earlier Policy

Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Of note given complaints by conservatives:

Sen. Roger Wicker hit a familiar note when he announced on Thursday that the Commerce Committee was issuing subpoenas to force the testimony of Facebook Chief Executive Mark Zuckerberg and other tech leaders.

Tech platforms like Facebook, the Mississippi Republican said, “disproportionately suppress and censor conservative views online.”

When top tech bosses were summoned to Capitol Hill in July for a hearing on the industry’s immense power, Republican Congressman Jim Jordan made an even blunter accusation.

“I’ll just cut to the chase, Big Tech is out to get conservatives,” Jordan said. “That’s not a hunch. That’s not a suspicion. That’s a fact.”

But the facts to support that case have been hard to find. NPR called up half a dozen technology experts, including data scientists who have special access to Facebook’s internal metrics. The consensus: There is no statistical evidence to support the argument that Facebook does not give conservative views a fair shake.

Let’s step back for a moment.

When Republicans claim Facebook is “biased,” they often collapse two distinct complaints into one. The first is that the social network deliberately scrubs right-leaning content from its site. There is no proof to back this up. The second is that conservative news and perspectives are being throttled by Facebook, that the social network is preventing that content from finding a large audience. That claim is not only unproven, but publicly available data on Facebook shows the exact opposite to be true: conservative news regularly ranks among the most popular content on the site.

Now, there are some complex layers to this, but former Facebook employees and data experts say the conservative bias argument would be easier to talk about — and easier to debunk — if Facebook were more transparent.

The social network keeps secret some of the most basic data points, like what news stories are the most viewed on Facebook on any given day, leaving data scientists, journalists and the general public in the dark about what people are actually seeing on their News Feeds.

There are other sources of data, but they offer just a tiny window into the sprawling universe of nearly 3 billion users. Facebook is quick to point out that the public metrics available are of limited use, yet it does so without offering a real solution, which would be opening up some of its more comprehensive analytics for public scrutiny.

Until it does, there’s little to counter rumors about what thrives and dies on Facebook and how the platform is shaping political discourse.

“It’s kind of a purgatory of their own making,” said Kathy Qian, a data scientist who co-founded Code for Democracy.

What the available data reveals about possible bias

Perhaps the most often-cited source of data on what is popular on Facebook is CrowdTangle, a tracking tool built by a startup that Facebook acquired in 2016.

New York Times journalist Kevin Roose has created a Twitter account where he posts the day’s ten most-engaged-with posts based on CrowdTangle data. These lists are dominated mostly by conservative commentators like Ben Shapiro and Dan Bongino, and by Fox News. They resemble a “parallel media universe that left-of-center Facebook users may never encounter,” Roose writes.

Yet these lists are like looking at Facebook through a soda straw, say researchers like MIT’s Jennifer Allen, who used to work at Facebook and now studies how people consume news on social media. CrowdTangle, Allen says, does not tell the whole story.

That’s because CrowdTangle only captures engagement — likes, shares, comments and other reactions — from public pages. But just because a post provokes lots of reactions does not mean it reaches many users. The data does not show how many people clicked on a link, or what the overall reach of the post was. And much of what people see on Facebook is from their friends, not public pages.

“You see these crazy numbers on CrowdTangle, but you don’t see how many people are engaging with this compared with the rest of the platform,” Allen said.
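A toy illustration of why engagement alone can mislead (all the numbers below are invented for illustration; CrowdTangle exposes engagement for public pages, not reach or clicks):

```python
# Toy data contrasting engagement (likes/shares/comments) with reach
# (unique viewers). All figures are invented for illustration only.
posts = [
    {"page": "partisan-commentator", "engagement": 80_000, "reach": 400_000},
    {"page": "national-news-outlet", "engagement": 80_000, "reach": 9_000_000},
]
for p in posts:
    rate = p["engagement"] / p["reach"]
    print(f"{p['page']}: engagement={p['engagement']:,} "
          f"reach={p['reach']:,} engagement rate={rate:.2%}")
# Ranked by engagement alone, the two posts tie; yet their audiences differ
# by more than 20x, which is exactly what engagement-only lists hide.
```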

Another point researchers raise: Not all engagement is created equal.

Users could “hate-like” a post, or click like as a way of bookmarking, or leave another reaction expressing disgust, not support. Take, for example, the laughing-face emoji.

“It could mean, ‘I agree with this’ or ‘This is so hilariously untrue,'” said data scientist Qian. “It’s just hard to know what people actually mean by those reactions.”

It’s also hard to tell whether people or automated bots are generating all the likes, comments and shares. Former Facebook research scientist Solomon Messing conducted a 2018 study of Twitter that found bots were likely responsible for 66% of link shares on the platform. The tactic is employed on Facebook, too.

“What Facebook calls ‘inauthentic behavior’ and other borderline scam-like activity are unfortunately common and you can buy fake engagement easily on a number of websites,” Messing said.

Brendan Nyhan, a political scientist at Dartmouth College, is also wary about drawing any big conclusions from CrowdTangle.

“You can’t judge anything about American movies by looking at the top ten box office hits of all time,” Nyhan said. “That’s not a great way of understanding what people are actually watching. There’s the same risk here.”

‘Concerned about being seen as on the side of liberals’

Experts agree that a much better measure would be a by-the-numbers rundown of what posts are reaching the most people. So why doesn’t Facebook reveal that data?

In a Twitter thread back in July, John Hegeman, the head of Facebook’s News Feed, offered one sample of such a list, saying it is “not as partisan” as lists compiled from CrowdTangle data suggest.

But when asked why Facebook doesn’t share that broader data with the public, Hegeman did not reply.

It could be, some experts say, that Facebook fears that data will be used as ammunition against the company at a time when Congress and the Trump administration are threatening to rein in the power of Big Tech.

“They are incredibly concerned about being seen as on the side of liberals. That is against the profit motive of their business,” Dartmouth’s Nyhan said of Facebook executives. “I don’t see any reason to see that they have a secret, hidden liberal agenda, but they are just so unwilling to be transparent.”

Facebook has been more forthcoming with some academic researchers looking at how social media affects elections and democracy. In April 2019, it announced a partnership that would give 60 scholars access to more data, including the background and political affiliation of people who engage with content.

One of those researchers is University of Pennsylvania data scientist Duncan Watts.

“Mostly it’s mainstream content,” he said of the most viewed and clicked on posts. “If anything, there is a bias in favor of conservative content.”

While Facebook posts from national television networks and major newspapers get the most clicks, partisan outlets like the Daily Wire and Breitbart routinely show up in top spots, too.

“That should be so marginal that it has no relevance at all,” Watts said of the right-wing content. “The fact that it is showing up at all is troubling.”

‘More false and misleading content on the right’

Accusations from Trump and other Republicans in Washington that Facebook is a biased referee of its content tend to flare up when the social network takes action against conservative-leaning posts that violate its policies.

Researchers say there is a reason why most of the high-profile examples of content warnings and removals target conservative content.

“That is a result of there just being more false and misleading content on the right,” said researcher Allen. “There are bad actors on the left, but the ecosystem on the right is just much more mature and popular.”

Facebook’s algorithms could also be helping more people see right-wing content that’s meant to evoke passionate reactions, she added.

Because of the sheer amount of envelope-pushing conservative content, some of it veering into the realm of conspiracy theories, the moderation from Facebook is also greater.

Or as Nyhan put it: “When reality is asymmetric, enforcement may be asymmetric. That doesn’t necessarily indicate a bias.”

The attacks on Facebook over perceived prejudice against conservatives have helped fuel the push in Congress and the White House to reform Section 230 of the Communications Decency Act of 1996, which allows platforms to avoid lawsuits over what users post and gives tech companies the freedom to police their sites as they see fit.

Joe Osborne, a Facebook spokesman, said in a statement that the social network’s content moderation policies are applied fairly across the board.

“While many Republicans think we should do one thing, many Democrats think we should do the exact opposite. We’ve faced criticism from Republicans for being biased against conservatives and Democrats for not taking more steps to restrict the exact same content. Our job is to create one consistent set of rules that applies equally to everyone.”

Osborne confirmed that Facebook is exploring ways to make more data available in the platform’s public tools, but he declined to elaborate.

Watts, the University of Pennsylvania data scientist who studies social media, said Facebook is sensitive to Republican criticism, but that no matter what decisions it makes, the attacks will continue.

“Facebook could end up responding in a way to accommodate the right, but the right will never be appeased,” Watts said. “So it’s this constant pressure of ‘you have to give us more, you have to give us more,'” he said. “And it creates a situation where there’s no way to win arguments based on evidence, because they can just say, ‘Well, I don’t trust you.'”

Source: Facebook Keeps Data Secret, Letting Conservative Bias Claims Persist

Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages

The social media platforms continue to undermine social inclusion and cohesion:

Facebook has allowed conservative news outlets and personalities to repeatedly spread false information without facing any of the company’s stated penalties, according to leaked materials reviewed by NBC News.

According to internal discussions from the last six months, Facebook has relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU and the pundit Charlie Kirk, were not penalized for violations of the company’s misinformation policies.

Facebook’s fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a “strike” basis, meaning a page can post inaccurate information and receive a one-strike warning before the platform takes action. Two strikes in 90 days places an account into “repeat offender” status, which can lead to a reduction in distribution of the account’s content and a temporary block on advertising on the platform.
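As a rough sketch of how such a strike-and-window policy could be modeled (a minimal illustration only; the class and field names here are invented, not Facebook’s actual system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)   # rolling window described above
REPEAT_OFFENDER_THRESHOLD = 2        # two strikes within the window

@dataclass
class Page:
    """A page accumulating misinformation strikes (hypothetical model)."""
    name: str
    strikes: list = field(default_factory=list)  # timestamps of strikes

    def add_strike(self, when: datetime) -> None:
        self.strikes.append(when)

    def remove_strike(self, when: datetime) -> None:
        # The leaked escalations describe strikes being deleted on review.
        self.strikes.remove(when)

    def is_repeat_offender(self, now: datetime) -> bool:
        recent = [s for s in self.strikes if now - s <= STRIKE_WINDOW]
        return len(recent) >= REPEAT_OFFENDER_THRESHOLD

# Two strikes 30 days apart put a page into repeat-offender status, which
# the article says can reduce distribution and block advertising...
page = Page("example-page")
t0 = datetime(2020, 3, 1)
page.add_strike(t0)
page.add_strike(t0 + timedelta(days=30))
print(page.is_repeat_offender(t0 + timedelta(days=31)))   # True
# ...and deleting a strike, as the escalation team reportedly did, lifts it.
page.remove_strike(t0)
print(page.is_repeat_offender(t0 + timedelta(days=31)))   # False
```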

Facebook has a process that allows its employees, or representatives of Facebook’s partners (including news organizations, politicians, influencers and others with a significant presence on the platform), to flag misinformation-related problems. Facebook applies fact-checking labels to posts when third-party fact-checkers determine they contain misinformation. A news organization or politician can appeal the decision to attach a label to one of its posts.

Facebook employees who work with content partners then decide if an appeal is a high-priority issue or PR risk, in which case they log it in an internal task management system as a misinformation “escalation.” Marking something as an “escalation” means that senior leadership is notified so they can review the situation and quickly — often within 24 hours — make a decision about how to proceed.

Facebook receives many queries about misinformation from its partners, but only a small subset is deemed to require input from senior leadership. Since February, more than 30 of these misinformation queries have been tagged as “escalations” within the company’s task management system, which employees use to track and assign work projects.

The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook’s fact-checking could go public and fuel allegations that the social network was biased against conservatives.

The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias.

Two current Facebook employees and two former employees, who spoke anonymously out of fear of professional repercussions, said they believed the company had become hypersensitive to conservative complaints, in some cases making special allowances for conservative pages to avoid negative publicity.

“The supposed goal of this process is to prevent embarrassing false positives against respectable content partners, but the data shows that this is instead being used primarily to shield conservative fake news from the consequences,” said one former employee.

About two-thirds of the “escalations” included in the leaked list relate to misinformation issues linked to conservative pages, including those of Breitbart, Donald Trump Jr., Eric Trump and Gateway Pundit. There was one escalation related to a progressive advocacy group and one each for CNN, CBS, Yahoo and the World Health Organization.

There were also escalations related to left-leaning entities, including one about an ad from Democratic super PAC Priorities USA that the Trump campaign and fact checkers have labeled as misleading. Those matters focused on preventing misleading videos that were already being shared widely on other media platforms from spreading on Facebook and were not linked to complaints or concerns about strikes.

Facebook and other tech companies including Twitter and Google have faced repeated accusations of bias against conservatives in their content moderation decisions, though there is little clear evidence that this bias exists. The issue was reignited this week when Facebook removed a video posted to Trump’s personal Facebook page in which he falsely claimed that children are “almost immune” to COVID-19. The Trump campaign accused Facebook of “flagrant bias.”

Facebook spokesperson Andy Stone did not dispute the authenticity of the leaked materials, but said they did not provide the full context of the situation.

In recent years, Facebook has developed a lengthy set of rules that govern how the platform moderates false or misleading information. But how those rules are applied can vary and is up to the discretion of Facebook’s executives.

In late March, a Facebook employee raised concerns on an internal message board about a “false” fact-checking label that had been added to a post by the conservative bloggers Diamond and Silk in which they expressed outrage over the false allegation that Democrats were trying to give members of Congress a $25 million raise as part of a COVID-19 stimulus package.

Diamond and Silk had not yet complained to Facebook about the fact check, but the employee was sounding the alarm because the “partner is extremely sensitive and has not hesitated going public about their concerns around alleged conservative bias on Facebook.”

Since it was the account’s second misinformation strike in 90 days, according to the leaked internal posts, the page was placed into “repeat offender” status.

Diamond and Silk appealed the “false” rating that had been applied by third-party fact-checker Lead Stories on the basis that they were expressing opinion and not stating a fact. The rating was downgraded by Lead Stories to “partly false” and they were taken out of “repeat offender” status. Even so, someone at Facebook described as “Policy/Leadership” intervened and instructed the team to remove both strikes from the account, according to the leaked material.

In another case in late May, a Facebook employee filed a misinformation escalation for PragerU, after a series of fact-checking labels were applied to several similar posts suggesting polar bear populations had not been decimated by climate change and that a photo of a starving animal was used as a “deliberate lie to advance the climate change agenda.” The claim was rated false by one of Facebook’s independent fact-checking partners, Climate Feedback, which meant the PragerU page had “repeat offender” status and would potentially be banned from advertising.

A Facebook employee escalated the issue because of “partner sensitivity” and noted that the repeat offender status was “especially worrisome due to PragerU having 500 active ads on our platform,” according to the discussion contained within the task management system and leaked to NBC News.

After some back and forth between employees, the fact check label was left on the posts, but the strikes that could have jeopardized the advertising campaign were removed from PragerU’s pages.

Stone, the Facebook spokesperson, said that the company defers to third-party fact-checkers on the ratings given to posts, but that the company is responsible for “how we manage our internal systems for repeat offenders.”

“We apply additional system-wide penalties for multiple false ratings, including demonetization and the inability to advertise, unless we determine that one or more of those ratings does not warrant additional consequences,” he said in an emailed statement.

He added that Facebook works with more than 70 fact-checking partners who apply fact-checks to “millions of pieces of content.”

Facebook announced Thursday that it banned a Republican PAC, the Committee to Defend the President, from advertising on the platform following repeated sharing of misinformation.

But the ongoing sensitivity to conservative complaints about fact-checking continues to trigger heated debates inside Facebook, according to leaked posts from Facebook’s internal message board and interviews with current and former employees.

“The research has shown no evidence of bias against conservatives on Facebook,” said another employee, “So why are we trying to appease them?”

Those concerns have also spilled out onto the company’s internal message boards.

One employee wrote a post on July 19, first reported by BuzzFeed News on Thursday, summarizing the list of misinformation escalations found in the task management system and arguing that the company was pandering to conservative politicians.

The post, a copy of which NBC News has reviewed, also compared Mark Zuckerberg to President Donald Trump and Russian President Vladimir Putin.

“Just like all the robber barons and slavers and plunderers who came before you, you are spending a fortune you didn’t build. No amount of charity can ever balance out the poverty, war and environmental damage enabled by your support of Donald Trump,” the employee wrote.

The post was removed for violating Facebook’s “respectful communications” policy and the list of escalations, previously accessible to all employees, was made private. The employee who wrote the post was later fired.

“We recognize that transparency and openness are important company values,” wrote a Facebook employee involved in handling misinformation escalations in response to questions about the list of escalations. “Unfortunately, because information from these Tasks were leaked, we’ve made them private for only subscribers and are considering how best to move forward.”

Source: https://www.nbcnews.com/tech/tech-news/sensitive-claims-bias-facebook-relaxed-misinformation-rules-conservative-pages-n1236182

Report Slams Facebook For ‘Vexing And Heartbreaking Decisions’ On Free Speech

Of note. Major fail, as the combination of ideology and business model has led Facebook to where it is today:

Facebook’s decisions to put free speech ahead of other values represent “significant setbacks for civil rights,” according to an independent audit of the social network’s progress in curbing discrimination.

The auditors gave a damning assessment of what they called “vexing and heartbreaking decisions” by Facebook. Among them: Keeping up posts by President Trump that “clearly violated” the company’s policies on hate and violent speech and voter suppression; exempting politicians from third-party fact-checking; and being “far too reluctant to adopt strong rules to limit [voting] misinformation and voter suppression.”

The report reflects two years of investigation by Laura W. Murphy, a former American Civil Liberties Union executive, and the civil rights law firm Relman Colfax. They were hired by Facebook following widespread accusations that it promotes discrimination by, for example, letting advertisers target users based on race. The auditors examined policies and practices ranging from how the company handles hate speech to its work to stop election interference.

“What has become increasingly clear is that we have a long way to go,” Sheryl Sandberg, Facebook’s chief operating officer, wrote in a blog post introducing the auditors’ report.

“While we won’t make every change they call for, we will put more of their proposals into practice,” she said.

Sandberg said Facebook would create a new role for a senior vice president dedicated to making sure civil rights considerations informed the company’s products, policies and procedures.

The audit echoed complaints that advocacy groups have made for years. Leaders of those groups expressed skepticism over whether Facebook would make meaningful change now.

“The recommendations coming out of the audit are as good as the action that Facebook ends up taking,” Rashad Robinson, president of the nonprofit Color of Change, told NPR. “Otherwise, it is a road map without a vehicle and without the resources to move, and that is not useful for any of us.”

Vanita Gupta, head of the Leadership Conference on Civil and Human Rights, which along with Color of Change was instrumental in getting Facebook to make the audit public, said advocates would continue to put pressure on the company.

“It is a work in progress clearly, and this report in some ways is a start and not a finish for the civil rights community,” Gupta said. “We’re going to continue to push really hard using multiple tactics to be able to get done what we need to to preserve our democracy and protect our communities.”

The audit comes as hundreds of brands have pledged not to advertise on Facebook this month to protest its laissez-faire approach to harmful posts. Some of the boycott organizers, which include Color of Change, the Anti-Defamation League and the NAACP, held a call with Facebook leaders on Tuesday and hung up disheartened.

“They showed up to the meeting expecting an ‘A’ for attendance,” Robinson said of CEO Mark Zuckerberg and the other Facebook executives in a press conference after the meeting.

Advertising accounted for more than 98% of the company’s nearly $70 billion in revenue last year. The boycott campaign’s stated goal is “to force Mark Zuckerberg to address the effect that Facebook has had on our society.”

Anti-Defamation League CEO Jonathan Greenblatt told NPR the roster of brands that have paused advertising has passed 1,000, including household names such as Hershey, Ford and Levi’s.

“[Facebook executives] haven’t addressed the concerns of their advertisers. They haven’t addressed the concerns of the civil rights community. They haven’t addressed the concerns of consumer advocates,” Greenblatt said. “If they fail to do so, we will press and we will push. This effort will amplify, this campaign will expand, and more organizations will join.”

The audit included further recommendations for how Facebook could build “a long-term civil rights accountability structure,” including hiring more members of the civil rights team and making a civil rights executive a part of decisions over whether to remove content.

The auditors said Facebook had made progress in curbing discrimination — for example, by barring advertisers from targeting housing, employment and credit ads based on age, gender or ZIP code and expanding policies against voter suppression and census interference.

But they warned that the company’s decisions to prioritize free speech above all else — particularly speech by politicians — risked “obscur[ing]” that progress, especially as the presidential election approaches. They called on Facebook to enforce its policies and hold politicians to the same standards as other users.

“We have grave concerns that the combination of the company’s decision to exempt politicians from fact-checking and the precedents set by its recent decisions on President Trump’s posts, leaves the door open for the platform to be used by other politicians to interfere with voting,” they wrote.

“If politicians are free to mislead people about official voting methods … and are allowed to use not-so-subtle dog whistles with impunity to incite violence against groups advocating for racial justice, this does not bode well for the hostile voting environment that can be facilitated by Facebook in the United States.”

Source: Report Slams Facebook For ‘Vexing And Heartbreaking Decisions’ On Free Speech

Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Harsh but accurate:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

The news in recent weeks highlighted both the good and bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement for addressing an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated for violent tactics. The bad: waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture war issues, rather than public health guidance that would enable the economy to reopen safely.

The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge that they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” If challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of the proposed solution on business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms continue to be successful at framing the issues in a favorable light.

When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away, even as new ones develop. For example, banning Alex Jones removed conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.

The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.

We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.

Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why content moderation will not succeed: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion across its four large platforms. They do not disclose the number of messages posted each day, but it is likely to be in the hundreds of millions, if not a billion or more, just on Facebook. Substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.
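To make the scale problem concrete, here is a back-of-the-envelope calculation; every number below is an assumption for illustration, not a figure Facebook has disclosed:

```python
# Even a very accurate moderation system leaks heavily at this scale.
# All inputs are illustrative assumptions, not disclosed figures.
daily_posts = 500_000_000   # assume half a billion posts per day
harmful_rate = 0.001        # assume 0.1% of posts violate policy
recall = 0.95               # assume moderation catches 95% of violations

harmful = daily_posts * harmful_rate   # 500,000 harmful posts per day
missed = harmful * (1 - recall)        # 25,000 slip through daily
print(f"harmful posts per day: {harmful:,.0f}")
print(f"missed per day:        {missed:,.0f}")
print(f"missed per year:       {missed * 365:,.0f}")  # ~9.1 million
```

Under even these generous assumptions, millions of harmful posts would still circulate each year; that is the scale half of the argument.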

The second hurdle is latency, which describes the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but humans can take minutes or days. This means a large number of messages will circulate for some time before eventually being removed. Harm will occur in that interval. It is tempting to imagine that AI can solve everything, but that is a long way off. AI systems are built on data sets from older systems, and they are not yet capable of interpreting nuanced content like hate speech.

The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms. Every year, they shift an ever larger percentage of their spending to them, from which platforms derive massive profits and wealth. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.

Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use the platform in ways that would normally be prohibited. For example, when journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, Facebook changed its terms of service, rather than pulling the ads. In addition, Facebook chose not to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.

Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.

We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.

Source: Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Social Media Giants Support Racial Justice. Their Products Undermine It.

Shows of support from Facebook, Twitter and YouTube don’t address the way those platforms have been weaponized by racists and partisan provocateurs.

Of note. “Thoughts and prayers” rather than action:

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet, and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off groups recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.

Source: Social Media Giants Support Racial Justice. Their Products Undermine It.

Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump, Twitter contrast

Needed backlash at what can only be described as business-motivated collusion, one that becomes harder and harder to justify from any perspective:

Facebook is facing an unusually public backlash from its employees over the company’s handling of President Trump’s inflammatory posts about the protests over the police killing of George Floyd, an unarmed black man in Minneapolis.

At least a dozen employees, some in senior positions, have openly condemned Facebook’s lack of action on the president’s posts and CEO Mark Zuckerberg’s defense of that decision. Some employees staged a virtual walkout Monday.

“Mark is wrong, and I will endeavor in the loudest possible way to change his mind,” tweeted Ryan Freitas, director of product design for Facebook’s news feed.

“I work at Facebook and I am not proud of how we’re showing up,” tweeted Jason Toff, director of product management. “The majority of coworkers I’ve spoken to feel the same way. We are making our voice heard.”

The social network also is under intense pressure from civil rights groups, Democrats and the public over its decision to leave up posts from the president that critics say violate Facebook’s rules against inciting violence. These included a post last week about the protests in which the president said, “when the looting starts, the shooting starts.”

Twitter, in contrast, put a warning label on a tweet in which the president said the same thing, saying it violated rules against glorifying violence.

The move escalated a feud with the president that started when the company put fact-checking labels on two of his tweets earlier in the week. Trump retaliated by signing an executive order that attempts to strip online platforms of long-held legal protections.

Zuckerberg has long said he believes the company should not police what politicians say on its platform, arguing that political speech is already highly scrutinized. In a post Friday, the Facebook CEO said he had “been struggling with how to respond” to Trump’s posts.

“Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric,” he wrote. “I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”

Zuckerberg said Facebook had examined the post and decided to leave it up because “we think people need to know if the government is planning to deploy force.” He added that the company had been in touch with the White House to explain its policies. Zuckerberg spoke with Trump by phone Friday, according to a report published by Axios.

While Facebook’s 48,000 employees often debate policies and actions within the company, it is unusual for staff to take that criticism public. But the decision not to remove Trump’s posts has caused significant distress within the company, which is spilling over into public view.

“Censoring information that might help people see the complete picture *is* wrong. But giving a platform to incite violence and spread disinformation is unacceptable, regardless who you are or if it’s newsworthy,” tweeted Andrew Crow, head of design for the company’s Portal devices. “I disagree with Mark’s position and will work to make change happen.”

Several employees said on Twitter they were joining Monday’s walkout.

“Facebook’s recent decision to not act on posts that incite violence ignores other options to keep our community safe,” tweeted Sara Zhang, a product designer.

In a statement, Facebook spokesman Joe Osborne said: “We recognize the pain many of our people are feeling right now, especially our Black community. We encourage employees to speak openly when they disagree with leadership. As we face additional difficult decisions around content ahead, we’ll continue seeking their honest feedback.”

Less than 4% of Facebook’s U.S.-based staff are African American, according to the company’s most recent diversity report.

Facebook will not make employees participating in the walkout use paid time off, and it will not discipline those who participate.

On Sunday, Zuckerberg said the company would commit $10 million to groups working on racial justice. “I know Facebook needs to do more to support equality and safety for the Black community through our platforms,” he wrote.

Source: Facebook Employees Revolt Over Zuckerberg’s Hands-Off Approach To Trump

And Kara Swisher’s call for Twitter to take Trump off the platform:

C’mon, @Jack. You can do it.

Throw on some Kendrick Lamar and get your head in the right space. Pour yourself a big old glass of salt juice. Draw an ice bath and fire up the cryotherapy pod and the infrared sauna. Then just pull the plug on him. You know you want to.

You could answer the existential question of whether @realDonaldTrump even exists if he doesn’t exist on Twitter. I tweet, therefore I am. Dorsey meets Descartes.

All it would take is one sweet click to force the greatest troll in the history of the internet to meet his maker. Maybe he just disappears in an orange cloud of smoke, screaming, “I’m melllllllting.”

Do Trump — and the world — a favor and send him back into the void whence he came. And then go have some fun: Meditate and fast for days on end!

Our country is going through biological, economic and societal convulsions. We can’t trust the powerful forces in this nation to tell us the truth or do the right thing. In fact, not only can we not trust them. We have every reason to believe they’re gunning for us.

In Washington, the Trump administration’s deception about the virus was lethal. On Wall Street and in Silicon Valley, the fat cats who carved up the country, drained us dry and left us with no safety net profiteered off the virus. In Minneapolis, the barbaric death of George Floyd after a police officer knelt on him for almost nine minutes showed yet again that black Americans have everything to fear from some who are charged with protecting them.

As if that weren’t enough, from the slough of our despond, we have to watch Donald Trump duke it out with the lords of the cloud in a contest to see who can destroy our democracy faster.

I wish I could go along with those who say this dark period of American life will ultimately make us nicer and simpler and more contemplative. How can that happen when the whole culture has been re-engineered to put us at each other’s throats?

Trump constantly torques up the tribal friction and cruelty, even as Twitter and Facebook refine their systems to ratchet up rage. It is amazing that a septuagenarian became the greatest exploiter of social media. Trump and Twitter were a match made in hell.

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

Why not just let all the bots trying to undermine our elections and spreading false information about the coronavirus and right-wing conspiracy theories and smear campaigns run amok? Sure, we’re weakening our society, but the weird, infantile maniacs running Silicon Valley must be allowed to rake in more billions and finish their mission of creating a giant cyberorganism of people, one huge and lucrative ball of rage.

“The shareholders of Facebook decided, ‘If you can increase my stock tenfold, we can put up with a lot of rage and hate,’” says Scott Galloway, professor of marketing at New York University’s Stern School of Business.

“These platforms have very dangerous profit motives. When you monetize rage at such an exponential rate, it’s bad for the world. These guys don’t look left or right; they just look down. They’re willing to promote white nationalism if there’s money in it. The rise of social media will be seen as directly correlating to the decline of Western civilization.”

Dorsey, who has more leeway because his stock isn’t as valuable as Facebook’s, made some mild moves against the president, who has been spewing lies and inciting violence on Twitter for years. He added footnotes clarifying false Trump tweets about mail-in ballots and put a warning label on the president’s tweet about the Minneapolis riots that echoed the language of a Miami police chief in 1967 and segregationist George Wallace: “When the looting starts, the shooting starts.”

“Jack is really sincerely trying to find something to make it better,” said one friend of the Twitter chief’s. “He’s like somebody trapped in a maze, going down every hallway and turning every corner.”

Zuckerberg, on the other hand, went on Fox to report that he was happy to continue enabling the Emperor of Chaos, noting that he did not think Facebook should be “the arbiter of truth of everything that people say online.”

It was a sickening display that made even some loyal Facebook staffers queasy. As The Verge’s Casey Newton reported, some employees objected to the company’s rationale in internal posts.

“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one wrote. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”

Trump, furious that Dorsey would attempt to rein him in on the very platform that catapulted him into the White House, immediately decided to try to rein in Dorsey.

He signed an executive order that might strip social media sites of the liability protection granted by Section 230 of the Communications Decency Act, which would mean they would have to police false and defamatory posts more assiduously. Now that social media sites are behemoths, Galloway thinks that removing that protection makes a lot of sense, even if the president is trying to do it for the wrong reasons.

Trump does not seem to realize, however, that he’s removing his own protection. He huffs and puffs about freedom of speech when he really wants the freedom to be vile. “It’s the mother of all cutting-off-your-nose-to-spite-your-face moves,” says Galloway.

The president wants to say things on Twitter that he will not be allowed to say if he exerts this control over Twitter. In a sense, it’s Trump versus his own brain. If Twitter can be sued for what people say on it, how can Trump continue to torment? Wouldn’t thousands of his own tweets have to be deleted?

“He’d be the equivalent of a slippery floor at a store that sells equipment for hip replacements,” says Galloway, who also posits that, in our hyper-politicized world, this will turn Twitter into a Democratic site and Facebook into a Republican one.

Nancy Pelosi, whose San Francisco district includes Twitter’s headquarters, said that it did little good for Dorsey to put up a few fact-checks while letting Trump’s rants about murder and other “misrepresentations” stay up.

“Facebook, all of them, they are all about making money,” the speaker said. “Their business model is to make money at the expense of the truth and the facts.” She crisply concluded that “all they want is to not pay taxes; they got their tax break in 2017” and “they don’t want to be regulated, so they pander to the White House.”

C’mon, Jack. Make @realDonaldTrump melt to help end our meltdown.

Source: Think Outside the Box, Jack

 

Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Whether by humans or AI, not an easy thing to do consistently and appropriately:

Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violates their rules, after tech workers were sent home to slow the spread of the coronavirus.

But that shift could mean more mistakes — some posts or videos that should be taken down might stay up, and others might be incorrectly removed. It comes at a time when the volume of content the platforms have to review is skyrocketing, as they clamp down on misinformation about the pandemic.

Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.

“We’re seeing that play out in real time at a scale that I think a lot of the companies probably didn’t expect at all,” said Graham Brookie, director and managing editor of the Atlantic Council’s Digital Forensic Research Lab.

Facebook CEO Mark Zuckerberg told reporters that automated review of some content means “we may be a little less effective in the near term while we’re adjusting to this.”

Twitter and YouTube are also sounding notes of caution about the shift to automated moderation.

“While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” Twitter said in a blog post. It added that no accounts will be permanently suspended based only on the actions of the automated systems.

YouTube said its automated systems “are not always as accurate or granular in their analysis of content as human reviewers.” It warned that more content may be removed, “including some videos that may not violate policies.” And, it added, it will take longer to review appeals of removed videos.

Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their sites and apps for material that breaks their rules, from spam and nudity to hate speech and violence. Many moderators are not full-time employees of the companies, but contractors who work for staffing firms.

Now those workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.

For the most sensitive categories, including suicide, self-injury, child exploitation and terrorism, Facebook says it’s shifting work from contractors to full-time employees — and is ramping up the number of people working on those areas.

There are also increased demands for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.

The World Health Organization calls the situation an “infodemic,” in which an overabundance of information, both true and false, makes trustworthy sources harder to find.

The tech companies “are dealing with more information with less staff,” Brookie said. “Which is why you’ve seen these decisions to move to more automated systems. Because frankly, there’s not enough people to look at the amount of information that’s ongoing.”

That makes the platforms’ decisions right now even more important, he said. “I think that we should all rely on more moderation rather than less moderation, in order to make sure that the vast majority of people are connecting with objective, science-based facts.”

Some Facebook users raised alarm that automated review was already causing problems.

When they tried to post links to mainstream news sources like The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.

Facebook said the posts were erroneously flagged as spam due to a glitch in its automated spam filter.

Zuckerberg denied the problem was related to shifting content moderation from humans to computers.

“This is a completely separate system on spam,” he said. “This is not about any kind of near-term change, this was just a technical error.”

Source: Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

For the record:

British comedian Sacha Baron Cohen has said that if Facebook had existed in the 1930s, it would have given Hitler a platform for his anti-Semitic beliefs.

The Ali G star singled out the social media company in a speech in New York.

He also criticised Google, Twitter and YouTube for pushing “absurdities to billions of people”.

Social media giants and internet companies are under growing pressure to curb the spread of misinformation around political campaigns.

Twitter announced in late October that it would ban all political advertising globally from 22 November.

Earlier this week, Google said it would no longer allow political advertisers to “microtarget” voters based on browsing data or other factors.

Analysts say Facebook has come under increasing pressure to follow suit.

The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.

“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.

What did Baron Cohen say?

Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg, who in October defended his company’s decision not to ban political adverts that contain falsehoods.

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.

“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.

“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.

Earlier this month, an international group of lawmakers called for targeted political adverts on social media to be suspended until they are properly regulated.

The International Committee on Disinformation and Fake News was told that the business model adopted by social networks made “manipulation profitable”.

A BBC investigation into political ads for next month’s UK election suggested they were being targeted towards key constituencies and certain age groups.

Source: Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

Canada banning Christian demonstrations? How a private member’s bill sprouted fake news

Interesting account of the social media trail and actors:

Imagine scrolling through Facebook when you come across this headline: “Canada Moves to Ban Christians from Demonstrating in Public Under New Anti-Hate Proposal.” If that sounds too shocking and absurd to be true, that’s because it is.

But this exact headline appeared atop a story that has been shared more than 16,000 times since it was published in May, according to the social media tool CrowdTangle. The federal government and Justin Trudeau, who is pictured in the story, are not seeking to ban Christians from demonstrating. In fact, the bill the story is based on was introduced in the Ontario legislature by a Conservative MPP and never made it past a second reading.

Incorrect and misleading content is common on social media, but it’s not always obvious where it originates. To learn more, CBC News traced this particular example back through its social media trail to uncover where it came from and how it evolved.

March 20: Private member’s bill introduced in Ontario

In this case, it all started with a bill. In March, Roman Baber, a freshman member of the Ontario provincial legislature, introduced his first private member’s bill. Had he known how badly the bill would be misconstrued online, he might have chosen something else, he later told CBC News.

“I expected that people would understand what prompted the bill, as a proud member of the Jewish community who’s been subjected to repeated demonstrations at Queen’s Park by certain groups that were clearly promoting hate,” said Baber, Progressive Conservative member for York Centre.

The bill was simple. It sought to ban any demonstrations on the grounds of Queen’s Park, where Ontario’s provincial legislature is located, that promote hate speech or incite violence. Baber said the bill was prompted by previous demonstrations that occurred at the legislature grounds.

“In 2017, we saw a demonstration that called for the shooting of Israelis. We saw a demonstration that called for a bus bombing and murder of innocent civilians,” he said.

The bill went through two readings at Queen’s Park and was punted to the standing committee on justice, where it’s languished since.

March 27: Canadian Jewish News covers story

At first, the bill garnered modest attention online. The Canadian Jewish News ran a straightforward report on the bill, including an interview with Baber, shortly after he introduced it. It was shared only a handful of times.

But a few weeks after the second reading, the bill drew the attention of LifeSiteNews, a socially conservative website. The story was shared 212 times, according to CrowdTangle, including to the Yellow Vests Canada Facebook group.

In its story, LifeSiteNews suggested that a bill banning hate speech might be interpreted to include demonstrations like those that opposed updates to the province’s sex education curriculum.

Baber said this isn’t the case, because hate speech is already defined and interpreted by legal precedent.

“The words ‘hate’ and ‘hate-promoting’ have been defined by the courts repeatedly through common law and is enforced in courts routinely,” Baber said. “So it would be a mistake to suggest that the bill expands the realm of hate speech.”

April 24: The Post Millennial invokes ‘free speech’ argument 

But the idea stuck around. A few weeks later, on April 24, the Post Millennial posted a story labelled as news that argued the bill could infringe on free speech. The story was, however, clear that the bill was only in Ontario and had not yet moved beyond a second reading. It was shared over 200 times and drew nearly 400 interactions — likes, shares, comments and reactions — on social media, according to CrowdTangle.

May 6: Powerful emotions evoke response on social media 

On May 6, a socially conservative women’s group called Real Women of Canada published a news release on the bill calling it “an attack on free speech.” In the release, the group argues that hate speech isn’t clearly defined in Canadian law, and draws on unrelated examples to claim that Christian demonstrations, in particular, could be targeted.

For example, the group pointed to the case of British Columbia’s Trinity Western University, a Christian post-secondary institution that used to require all students to sign a covenant prohibiting sex outside of heterosexual marriage. A legal challenge over the covenant and a proposed law school at Trinity Western concluded last year, but it had nothing to do with hate speech.

May 9: LifeSiteNews republishes news release

Though this news release itself was not widely shared, three days later it was republished by LifeSiteNews as an opinion piece. That post did better, drawing 5,500 shares and over 8,000 interactions, according to CrowdTangle. It also embellished the release with a dramatic image and a sensational headline: “Ontario Bill Threatens to Criminalize Christian Speech as ‘Hate.’”


At this point, the nugget of truth had been almost entirely obscured by layers of opinion and misrepresentation. For example, the bill doesn’t specifically mention Christian speech, but this headline suggests it does.

These tactics are used to elicit a strong response from readers and encourage them to share, according to Samantha Bradshaw, a researcher on the Computational Propaganda project at Oxford University.

“People like to consume this kind of content because it’s very emotional, and it gets us feeling certain things: anger, frustration, anxiety, fear,” Bradshaw said. “These are all very powerful emotions that get people sharing and consuming content.”

May 11: Big League Politics publishes sensational inaccuracies 

That framing on LifeSiteNews caught the attention of a major U.S. publication known for spreading conspiracy theories and misinformation: Big League Politics. On May 11, the site published a story that cited the LifeSiteNews story heavily.

The headline and image make it seem like Trudeau’s government has introduced legislation that would specifically prohibit Christians from demonstrating anywhere in the country, a far cry from the truth.

While the story provides a few facts, such as the fact that the bill was introduced in Ontario, much of it is incorrect. For example, in the lead sentence, the writer claimed the bill would “criminalize public displays by Christians deemed hateful to Muslims, the LGBT community and other victim groups designated by the left.”

The disinformation and alarmist headline proved successful: the Big League Politics version of the story was shared more than 16,000 times, drew more than 26,000 interactions and continued to circulate online for over two weeks.

This evolution is a common occurrence. Disinformation is often based on a nugget of truth that gets buried under layers of emotionally charged language and opinion. Here, that nugget of truth was a private member’s bill introduced in the Ontario legislature. But that fact was gradually churned through an online network of spin until it was unrecognizable in the final product.


“That is definitely something that we see often: taking little truths and stretching them, misreporting them or implementing commentary and treating someone’s opinion about what happened as news,” Bradshaw said. “The incremental changes that we see in these stories and these narratives is something very typical of normal disinformation campaigns.”

Bradshaw said even though disinformation is only a small portion of the content online, it can have an outsized impact on our attention. With that in mind, she said it’s partly up to readers to think critically about what they’re reading and sharing online.

“At the end of the day, democracy is really hard work,” Bradshaw said. “It’s up to us to put in that time and effort to fact check our information, to look at other sources, to look at the other side of the argument and to weigh and debate and discuss.”

Source: Canada banning Christian demonstrations? How a private member’s bill sprouted fake news