ADL tallies up roughly 4 million anti-Semitic tweets in 2017

It would be nice to have comparative data across different religions, just as we do for police-reported hate crimes:

At least 4.2 million anti-Semitic tweets were shared or re-shared from roughly 3 million Twitter accounts last year, according to an Anti-Defamation League report released Monday. Most of those accounts are believed to be operated by real people rather than automated software known as bots, said the organization, an international NGO that works against anti-Semitism and bigotry.

The anti-Semitic accounts constitute less than 1% of Twitter’s roughly 336 million active accounts.

“This new data shows that even with the steps Twitter has taken to remove hate speech and to deal with those accounts disseminating it, users are still spreading a shocking amount of anti-Semitism and using Twitter as a megaphone to harass and intimidate Jews,” said ADL CEO Jonathan Greenblatt in a statement.

The report comes amid growing concern about harassment on social media platforms such as Twitter and Facebook, as well as their roles in spreading fake news. Both companies are trying to curb hatred on their platforms while preserving principles of free speech and expression. Last month, Facebook CEO Mark Zuckerberg was summoned to Washington to testify in front of Congress, in part out of concern over how the social network was used to spread propaganda during the 2016 presidential campaign.

Twitter CEO Jack Dorsey has publicly made harassment on the social network a priority, even soliciting ideas from the public for combating the problem. In March, Dorsey held a livestream to discuss how to deal with the issue. The company has made changes, such as prohibiting offensive account names and better enforcing its terms of service.

Twitter didn’t immediately respond to a request for comment.

The ADL report evaluated tweets on subjects ranging from Holocaust denial and anti-Jewish slurs to positive references to anti-Semitic figures, books and podcasts. The ADL also tallied the use of coded words and symbols, such as the triple parenthesis, which is put around names to signal that someone is Jewish.

The study used a dataset of roughly 55,000 tweets, which were screened by a team of researchers for indications of anti-Semitism. Since this is the first report of its kind from the group, there are no earlier figures to compare against. However, the ADL did release a report on the targeting of journalists during the 2016 election that also included Twitter data.

The ADL says that artificial intelligence and algorithms will eventually be effective at identifying hate online, but human input is needed to train such systems. For example, screeners can teach machines when anti-Semitic language might have been used to express opposition to such ideas or in an ironic manner.
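The report doesn’t publish its screening pipeline, but a minimal sketch of the human-in-the-loop approach it describes might look like the following: human screeners supply labeled examples (including ironic or oppositional uses of anti-Semitic language) and a simple classifier learns from them. The labels, example texts and model choice here are illustrative assumptions, not the ADL’s actual system:

```python
# Minimal human-in-the-loop sketch: human screeners label examples and a
# simple text classifier learns from them. Illustrative only; NOT the
# ADL's actual pipeline or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Screener-labeled examples; 1 = anti-Semitic, 0 = not (e.g. ironic
# usage, or quoting hateful language in order to oppose it).
texts = [
    "post repeating an anti-Semitic conspiracy theory",
    "post quoting a slur in order to condemn it",
    "ironic usage that a human screener judged benign",
    "post using a coded anti-Semitic symbol as a slur",
]
labels = [1, 0, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# New tweets get a score; borderline cases go back to human screeners.
p = model.predict_proba(["some new tweet to evaluate"])[0][1]
print(f"P(anti-Semitic) = {p:.2f}")
```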

Such issues aren’t simply hypothetical. The ADL pointed to the huge volume of tweets about anti-Semitism posted during the week of the Charlottesville, Virginia, riots last summer. Though that week saw Twitter’s highest volume of tweets about anti-Semitism for the year, only a small percentage of them were actually anti-Semitic.

The report noted the ADL works with Twitter on the issues of anti-Semitism and bigotry online. Greenblatt said the organization is “pleased that Twitter has already taken significant steps to respond to this challenge.”

Source: ADL tallies up roughly 4 million anti-Semitic tweets in 2017

‘Weaponization’ of free speech prompts talk of a new hate law

One to watch:

The climate for hate speech regulation in Canada appears to be shifting.

Traditional free speech advocates are reconsidering the status quo they helped create, in which hate speech can be pursued only as a Criminal Code charge that requires political approval, and so is rarely prosecuted. There is even talk of resurrecting the defunct and much-maligned ban on internet hate speech, Section 13 of the Canadian Human Rights Act.

The latest example was a lecture this week by Omar Mouallem, an Edmonton journalist and board member of free expression group PEN Canada, in which he argued online racists have “weaponized” free speech against Muslims, and Canada should consider a new anti-hate law to stop them.

Mouallem told a University of Alberta audience that public discourse is “fatally flawed,” and overrun with hate propagandists who traffic in lies and provocations in order to pose as censorship victims.

The far right has “co-opted” the issue of free speech, and their activism is not a principled defence of a Charter value, but “a sly political strategy to divide opponents on the left, humiliate them and cast them as hypocrites and unconstitutional, to clear a way for unconstitutional ideas,” Mouallem said in an advance email interview.

The traditional liberal response of public censure and rebuttal is no longer effective because it just “devolves into a pissing match that goes nowhere and only makes people double down on their opinions,” he said. “Given that Facebook groups and social media are the meeting point for hate groups to organize, and that online hate speech has a great ability to spread wider and faster, I think special regulation is worth considering.”

It is striking to hear that from a board member of PEN Canada, which is devoted to fighting censorship and defending freedom of expression, and was instrumental in the legislative repeal of Section 13, a law in the Canadian Human Rights Act that banned repeated messages, by phone or internet, that were “likely to expose” protected groups to hatred or contempt.

The lecture follows news that the federal Liberal government is openly mulling bringing back Section 13, which was repealed by Parliament in 2014, but later found by courts to be constitutionally valid. It allowed for legal orders banning offenders from engaging in further hate speech, on pain of criminal contempt charges, and provided for fines of $10,000.

It also follows the backtracking of another press freedom group, Canadian Journalists for Free Expression, which launched a petition for Prime Minister Justin Trudeau to “disinvite” U.S. President Donald Trump from a G7 Summit on the grounds that his administration’s attacks on press freedom have harmed American democracy. That petition was deleted soon after it was announced, amid criticism that it hypocritically also violated the principles of free expression.

Even libraries have illustrated the shift. A memorial held in a Toronto library last year for Barbara Kulaszka, a prominent lawyer for Canadian hate propagandists, led the Toronto Public Library to change its room-booking policy, allowing officials to refuse bookings that are “likely to promote, or would have the effect of promoting, discrimination, contempt or hatred of any group.”

Tasleem Thawar, executive director of PEN Canada, said she encourages diverse perspectives on the board. There has been no change to the group’s official position “that an educated, thoughtful, and vibrantly expressive citizenry is the best defence against the spread of hateful ideologies,” she said.

“If the federal government were to propose a new law (against hate speech), we would certainly comment on the specifics and its possible effects,” she said. “However, PEN is also committed to dispelling hatreds, as stated in the PEN International Charter, including on the basis of identity markers like class, race, gender, and nationality. And it is true that hateful, marginalizing and even demonizing speech can chill the freedom of expression of the groups who are being subjected to such public bigotry.”

All this might be evidence that the culture war over Canada’s uniquely balanced approach to hate speech is set to flare up again. Old arguments are being repurposed to fit modern media. Laws that were written in the age of telephone hotlines and printed newspapers are being reconsidered in the context of Twitter, Facebook and Google.

As ever, religion — especially Islam — is at the core of the debate, according to Richard Moon, the University of Windsor law professor who authored an influential 2008 report for the Canadian Human Rights Commission that urged it to stop regulating online hate via Section 13.

In his forthcoming book Putting Faith in Hate: When Religion is the Source or Target of Hate Speech, Moon describes the traditional distinction between speech that attacks a belief, which is typically protected by law, and speech that attacks a group, which can rise to the level of banned hate speech. He argues that our understanding of religion complicates this distinction, because religion is both a personal commitment and a cultural identity. Hate speech, then, often works by falsely attributing an objectionable belief to every member of a cultural group.

“Most contemporary anti-Muslim speech takes this form, presenting Islam as a regressive and violent belief system that is incompatible with liberal democratic values. The implication is that those who identify as Muslims – those who hold such beliefs – are dangerous and should be treated accordingly. Beliefs that may be held by a fringe element in the tradition are falsely attributed to all Muslims,” Moon writes.

Mouallem, who does not identify as Muslim, is a former rapper, freelance writer, and co-author of a book on the Fort McMurray wildfire. He said he does not advocate the return of Section 13 exactly as it was. It often worked, he said, but it is “too tainted.”

Section 13 was a “messy, if not farcical process,” he said, made more so by the “manipulation” of Richard Warman, the lawyer and former Canadian Human Rights Commission staffer who effectively monopolized the law, filing nearly every case and eventually winning them all, sometimes after posing online as a neo-Nazi to gather evidence. It was also “misused,” he said, by Canadian Muslim leaders on the “wishy-washy” case of alleged anti-Islam hate speech in Maclean’s magazine.

But Canada should have some kind of “online clause,” he argued, one that addresses both the “uniqueness of online content” and this current historical moment, in which there is “widespread vilification” of Muslims and “rapid mobilization of extremist groups.”

Now there are “flagrant” examples that would be caught by such a law, he said, such as Ezra Levant’s use of the term “rapefugees.”

“Allowing hate speech to remain in the public sphere actually signals that it’s socially acceptable, which gives licence to perpetuate it, and eventually can make it mainstream,” Mouallem said.

The expression that “sunlight is the best disinfectant,” meaning hate speech is best countered by more and better speech, is “ineffective when you’re dealing with majority tyranny and certain discrimination is widely accepted. This is the unique moment of hate speech in Canada and much of the ‘West’ right now,” he said. “Society has made an exception for Islam.”

Source: ‘Weaponization’ of free speech prompts talk of a new hate law

Ottawa library cancels planned screening of controversial ‘Killing Europe’ doc

Judging by the trailer, cancellation appears to be the right call, as the film crosses the line into hate speech (Mark Steyn on steroids):

The Ottawa Public Library has cancelled this weekend’s screening of a controversial documentary, Killing Europe, after complaints the film was thinly disguised hate speech against Muslims and immigrants.

“I am letting you know that I have been working with the city solicitor about concerns brought forward by the Ottawa district labour council, unions, residents, board members and friends,” Coun. Tim Tierney, who is chairman of the library’s board of directors, said in an email. “I had asked the CEO to review and address the concerns expressed.”

“I can now report that the rental of the room will not take place.”

The documentary was to have been screened Saturday afternoon at the library’s main branch on Metcalfe Street. The screening was to have been hosted by ACT! for Canada, a group dedicated “to speaking out about the clear and present dangers emerging from those who do not embrace Canada’s values …”

Killing Europe, by Danish expatriate Michael Hansen, purports to warn of the dangers of the “Islamification” of Europe.

But even a “30-second Google search” by the library would have revealed it to be hate speech, says human rights lawyer Richard Warman, who was one of the people to complain to the library about the screening.

Screening the film is “in clear violation of the library’s own rental policy prohibiting the use of space for discriminatory purposes,” Warman wrote in an email to the library and its board members, Mayor Jim Watson, and others.

“When I looked at the three-minute trailer, it was clear it was going to be an all-out assault on immigrants and the Muslim community,” Warman said Friday.

“The messages contained even in just the trailer is that ‘immigrants are coming to swamp and devastate Europe and that Muslims are engaged in perpetual massacres of the white populations.’ Obviously, it set off alarms.”

Warman received confirmation the screening had been cancelled in an email Friday morning from library deputy CEO Monique Désormeaux.

Coun. Catherine McKenney, another library board member, said Friday she “wholeheartedly” supported the library’s decision to cancel the screening and promised better discussion in the future about what the library chooses to allow.

But where to draw the line between suppressing free speech and stifling hate speech?

Warman said the screening clearly violated the library’s obligations, stated on its website, to not provide public space for individuals or groups that “are likely to promote discrimination, contempt or hatred to any person on the basis of race, national or ethnic origin, colour, religion, age, sex, marital status, family status, sexual preference, or disability, gratuitous sex and violence or denigration of the human condition.”

“As a human rights lawyer, I’m firmly in the camp of defending freedom of expression under Section 2B of the Charter,” Warman said. “The library board is absolutely right to defend freedom of expression, while at the same time complying with their parallel obligation under Ontario human rights law not to discriminate against people on the basis of race and religion.”

In the case of Killing Europe and ACT! for Canada’s own newsletter, which Warman said includes claims of gang rapes and “grotesque caricatures of pakis, blacks and illegals,” there is “no grey area.”

“This is hate propaganda that is clearly directed toward recent immigrants and members of the Muslim community,” he said.

“The main thing is that we ensure public venues aren’t used as amplifiers of the message of hate-mongers … Public, taxpayer-funded facilities cannot be used to engage in hate propaganda. The library board has the obligation, when we know that these groups will attempt to misuse public facilities, that they engage in a sort of rudimentary 30-second Google check: ‘Who are you again? And what’s the movie you want to show?’ The 30-second Google check would have come up with the answers and set off alarm bells.”

ACT! for Canada did not immediately respond to a request for comment Friday.

via Ottawa library cancels planned screening of controversial ‘Killing Europe’ doc | Ottawa Citizen

What Does Facebook Consider Hate Speech? Take Our Quiz – The New York Times

Gives a good sense of the criteria Facebook uses, which allow more than what I would consider hate speech.

Take the quiz at their website to see the contrast between Facebook’s rulings, your own responses, and those of NYT readers (spoiler alert: Facebook classified half of these as hate speech; I classified five of the six):

Have you ever seen a post on Facebook that you were surprised wasn’t removed as hate speech? Have you flagged a message as offensive or abusive but the social media site deemed it perfectly legitimate?

Users on social media sites often express confusion about why offensive posts are not deleted. Paige Lavender, an editor at HuffPost, recently described her experience learning that a vulgar and threatening message she received on Facebook did not violate the platform’s standards.

Here is a selection of statements based on examples from a Facebook training document and real-world comments found on social media. Most readers will find them offensive. But can you tell which ones would run afoul of Facebook’s rules on hate speech?

Hate speech is one of several types of content that Facebook reviews, in addition to threats and harassment. Facebook defines hate speech as the combination of two elements (a minimal code sketch of the test follows the list):

  1. An attack, such as a degrading generalization or slur.
  2. Targeting a “protected category” of people, including one based on sex, race, ethnicity, religious affiliation, national origin, sexual orientation, gender identity, and serious disability or disease.
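Put together, the definition is a simple conjunction: a post counts as hate speech only if it is both an attack and aimed at a protected category. Here is a minimal sketch of that two-part test; the trait list mirrors the article, while the inputs and logic are hypothetical stand-ins, not Facebook’s actual code:

```python
# Sketch of Facebook's published two-part hate speech test. The trait
# list follows the article; the logic is an illustrative assumption.
PROTECTED_TRAITS = {
    "sex", "race", "ethnicity", "religious affiliation", "national origin",
    "sexual orientation", "gender identity", "serious disability or disease",
}

def is_hate_speech(is_attack, targeted_trait):
    # Both conditions must hold: (1) an attack, (2) a protected target.
    return is_attack and targeted_trait in PROTECTED_TRAITS

print(is_hate_speech(True, "religious affiliation"))  # True
print(is_hate_speech(True, "occupation"))             # False: not protected
print(is_hate_speech(False, "race"))                  # False: not an attack
```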

Facebook’s hate speech guidelines were published in June by ProPublica, an investigative news organization, which is gathering users’ experiences about how the social network handles hate speech.

Danielle Citron, an information privacy expert and professor of law at the University of Maryland, helped The New York Times analyze six deeply insulting statements and determine whether they would be considered hate speech under Facebook’s rules.

  1. “Why do Indians always smell like curry?! They stink!”
  2. “Poor black people should still sit at the back of the bus.”
  3. “White men are assholes.”
  4. “Keep ‘trans’ men out of girls bathrooms!”
  5. “Female sports reporters need to be hit in the head with hockey pucks.”
  6. “I’ll never trust a Muslim immigrant… they’re all thieves and robbers.”

Did any of these answers surprise you? You’re probably not alone.

Ms. Citron said that even thoughtful and robust definitions of hate speech can yield counterintuitive results when enforced without cultural and historic context.

“When you’re trying to get as rulish as possible, you can lose the point of it,” she said. “The spirit behind those rules can get lost.”

A Facebook spokeswoman said that the company expects its thousands of content reviewers to take context into account when making decisions, and that it constantly evolves its policies to keep up with changing cultural nuances.

In response to questions for this piece, Facebook said it had changed its policy to include age as a protected category. While Facebook’s original training document states that content targeting “black children” would not violate its hate speech policy, the company’s spokeswoman said that such attacks would no longer be acceptable.

Should tech companies be able to shut down neo-Nazis? – Recode

Good discussion of some of the issues involved, which IMO lays out the need for some government leadership in setting up guidelines and possibly regulations:

In the aftermath of the white supremacist rally in Charlottesville, Va., where dozens were injured and one counter-protestor was killed, the battle moved online.

The four-year-old neo-Nazi website the Daily Stormer was evicted by web hosts GoDaddy and Google after it disparaged the woman killed in Charlottesville, Heather Heyer. And then web infrastructure company Cloudflare, which had previously been criticized for how it handled reports of abuse by the website, publicly and permanently terminated the Stormer’s account, too, forcing it to the dark web.

But should a tech company have that power? Even Cloudflare’s CEO Matthew Prince, who personally decided to pull the plug, thinks the answer should be “no” in the future.

“I am confident we made the right decision in the short term because we needed to have this conversation,” Prince said on the latest episode of Too Embarrassed to Ask. “We couldn’t have the conversation until we made that determination. But it is the wrong decision in the long term. Infrastructure is never going to be the right place to make these sorts of editorial decisions.”

Interviewed by Recode’s Kara Swisher and The Verge’s Lauren Goode, Prince was joined on the new episode by the executive director of the Electronic Frontier Foundation, Cindy Cohn. Although the two organizations have worked together in the past, Cohn co-authored a public rebuke of Cloudflare’s decision, saying it threatened the “future of free expression.”

“The moment where this is about Nazis, to me, is very late in the conversation,” Cohn said, citing past attempts to shut down political websites. “What they do is they take down the whole website, they can’t just take down the one bad article. The whole Recode website comes down because you guys say something that pisses off some billionaire.”

“These companies, including Matthew’s, have a right to decide who they’re doing business with, but we urge them to be really, really cautious about this,” she added.

You can listen to the new podcast on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts.

Prince and Cohn agreed that part of the long-term solution to controversial speech online — no matter how odious — may be establishing and respecting a set of transparent, principled rules that cross international borders.

“I believe deeply in free speech, but it doesn’t have the same force around the rest of the world,” Prince said. “What does is an idea of due process, that there are a set of rules you should follow, and you should be able to know going into that. I don’t think the tech industry has that set of due processes.”

Cohn noted that there is a process for stopping someone from speaking before they can speak — prior restraint. For most of America’s history, obtaining such an injunction against someone has been intentionally difficult.

“We wouldn’t have a country if people couldn’t voice radical ideas and they had to go through a committee of experts or tech bros,” she said. “If you have to go on bended knee before you get to speak, you’re going to reduce the universe of ideas. Maybe you’ll get some heinous ideas, but you might not get the Nelson Mandelas, either.”

Source: Should tech companies be able to shut down neo-Nazis? – Recode

Delete Hate Speech or Pay Up, Germany Tells Social Media Companies – The New York Times

Will be interesting to see the degree to which this works in making social media companies take more effective action, as well as the means companies use to ‘police’ speech (see the earlier post Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica). Apart from the debate over what can/should be any limits to free speech, there are risks in “outsourcing” this function to the private sector:

Social media companies operating in Germany face fines of as much as $57 million if they do not delete illegal, racist or slanderous comments and posts within 24 hours under a law passed on Friday.

The law reinforces Germany’s position as one of the most aggressive countries in the Western world at forcing companies like Facebook, Google and Twitter to crack down on hate speech and other extremist messaging on their digital platforms.

But the new rules have also raised questions about freedom of expression. Digital and human rights groups, as well as the companies themselves, opposed the law on the grounds that it placed limits on individuals’ right to free expression. Critics also said the legislation shifted the burden of responsibility to the providers from the courts, leading to last-minute changes in its wording.

Technology companies and free speech advocates argue that there is a fine line between policy makers’ views on hate speech and what is considered legitimate freedom of expression, and social networks say they do not want to be forced to censor those who use their services. Silicon Valley companies also deny that they are failing to meet countries’ demands to remove suspected hate speech online.

Still, German authorities pressed ahead with the legislation. Germany witnessed an increase in racist comments and anti-immigrant language after the arrival, since 2015, of more than a million migrants, predominantly from Muslim countries, and Heiko Maas, the justice minister who drew up the draft legislation, said on Friday that it ensured that rules that currently apply offline would be equally enforceable in the digital sphere.

“With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all,” Mr. Maas said. “We are ensuring that everyone can express their opinion freely, without being insulted or threatened.”

“That is not a limitation, but a prerequisite for freedom of expression,” he continued.

The law will take effect in October, less than a month after nationwide elections, and will apply to social media sites with more than two million users in Germany.

It will require companies including Facebook, Twitter and Google, which owns YouTube, to remove any content that is illegal in Germany — such as Nazi symbols or Holocaust denial — within 24 hours of it being brought to their attention.

The law allows the companies up to seven days to decide on content that has been flagged as offensive but that may not be clearly defamatory or inciting violence. Companies that persistently fail to address complaints by taking too long to delete illegal content face fines that start at €5 million, or $5.7 million, and could rise to as much as €50 million.

Every six months, companies will have to publicly report the number of complaints they have received and how they have handled them.

In Germany, which has some of the most stringent anti-hate speech laws in the Western world, a study published this year found that Facebook and Twitter had failed to meet a national target of removing 70 percent of online hate speech within 24 hours of being alerted to its presence.

The report noted that while the two companies eventually erased almost all of the illegal hate speech, Facebook managed to remove only 39 percent within 24 hours, as demanded by the German authorities. Twitter met that deadline in 1 percent of instances. YouTube fared significantly better, removing 90 percent of flagged content within a day of being notified.

Facebook said on Friday that the company shared the German government’s goal of fighting hate speech and had “been working hard” to resolve the issue of illegal content. The company announced in May that it would nearly double, to 7,500, the number of employees worldwide devoted to clearing its site of flagged postings. It was also trying to improve the processes by which users could report problems, a spokesman said.

Twitter declined to comment, while Google did not immediately respond to a request for comment.

The standoff between tech companies and politicians is most acute in Europe, where freedom of expression rights are less comprehensive than in the United States, and where policy makers have often bristled at Silicon Valley’s dominance of people’s digital lives.

But advocacy groups in Europe have raised concerns over the new German law.

Mirko Hohmann and Alexander Pirant of the Global Public Policy Institute in Berlin criticized the legislation as “misguided” for placing too much responsibility for deciding what constitutes unlawful content in the hands of social media providers.

“Setting the rules of the digital public square, including the identification of what is lawful and what is not, should not be left to private companies,” they wrote.

Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messaging online and to prevent “fake news” from circulating. That includes using artificial intelligence to remove potentially extremist material automatically and banning news sites believed to spread fake or misleading reports from making money through the companies’ digital advertising platforms.

Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

For all those interested in free speech and hate speech issues, a really good analysis of how Facebook is grappling with the issue and its definitions of protected groups. I urge all readers to go through the slide show (you need to go to the article to access it), which captures some of the complexities involved:

In the wake of a terrorist attack in London earlier this month, a US congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared US Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed, and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims—those that are “radicalized”—while Delgado’s post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of “fake news”—fabricated stories that circulated on Facebook like “Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement“—and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea, and Western Sahara.

One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children, and white men. It asks: which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence, and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)
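A minimal sketch of this subset rule as described: a group is shielded only when every one of its defining traits is a protected category. This is an illustration of the logic in the training document, not Facebook’s actual implementation:

```python
# Sketch of the "subsets" rule described above: a group is protected
# only if ALL of its defining traits are protected categories.
# Illustrative only; not Facebook's actual code.
PROTECTED = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def group_is_protected(traits):
    return bool(traits) and set(traits) <= PROTECTED

print(group_is_protected({"race", "sex"}))        # white men: True
print(group_is_protected({"sex", "occupation"}))  # female drivers: False
print(group_is_protected({"race", "age"}))        # black children: False
print(group_is_protected({"religious affiliation", "ideology"}))  # "radicalized" Muslims: False
```

(Note that, as the earlier NYT quiz piece reports, Facebook has since added age as a protected category; the sketch reflects the original training document.)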

Facebook has used these rules to train its “content reviewers” to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook’s algorithm is designed to defend all races and genders equally.

But Facebook says its goal is different—to apply consistent standards worldwide. “The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

Facebook’s rules constitute a legal world of their own. They stand in sharp contrast to the United States’ First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company’s hate speech algorithm. But they also differ—for example, in permitting postings that deny the Holocaust—from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump’s posts about his campaign proposal to ban Muslim immigration to the United States violated the company’s written policies against “calls for exclusion” of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump’s statements from its policies at the order of Mark Zuckerberg, the company’s founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook’s decision. Appeals are currently only available to people whose profile, group, or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. “I’ll be the first to say that we’re not perfect every time,” she said.

Facebook is not required by US law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, section 230 of the Telecommunications Act, was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

Source: Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

Hate Speech And The Misnomer Of ‘The Marketplace Of Ideas’ : NPR

Good long read by David Shih on some of the weaknesses in the free speech arguments:

Critical race theorists Richard Delgado and Jean Stefancic addressed this possibility in a 1992 Cornell Law Review article entitled “Images of the Outsider in American Law and Culture: Can Free Expression Remedy Systemic Social Ills.” They coin a term for the erroneous belief that “good” antiracist speech is the best remedy for “bad” racist speech: the “empathic fallacy.” The empathic fallacy is the conviction “that we can somehow control our consciousness despite limitations of time and positionality … and that we can enlarge our sympathies through linguistic means alone.”

In other words, the empathic fallacy leads us to believe that “good” speech begets racial justice and that we will be able to tell the difference between it and racist hate speech because we are distanced, objective arbiters…

In the meantime, racist hate speech flows unabated because of our faith in a flawed metaphor.

The marketplace is further gamed by “dog whistles” — code word replacements for overtly racist speech that still aim to stoke white resentment over the social mobility of people of color. When the sitting attorney general dismisses the ruling of a court because it resides on “an island in the Pacific,” he invents yet another way to signal which groups count in America and which ones don’t. And if a racist idea like this one ever flops in the marketplace, its author simply recalls it by saying he was joking.

A quarter-century ago when Delgado and Stefancic published their theory of the empathic fallacy, they speculated that the infamous Willie Horton ad tipped a presidential election because voters could not view the ad objectively. We now know that racism was the primary motivation for voters who put Donald Trump in the White House. We know that the best ideas of Gold Star father Khizr Khan at the Democratic National Convention were no match for fearmongering rumors about refugees from Syria and immigrants from Mexico. We know that after almost 100 days of Trump’s presidency, only two percent of those who voted for him regret it. This might mean they don’t see his speech as racist or don’t care if it is.

If we argue that racist hate speech must be protected, we have to account for the empathic fallacy.

We can start by admitting that this position is based on the troubling belief that it is one’s right to be hateful — and not on the comforting belief that hate is a catalyst for racial justice in a “marketplace of ideas.” Better than ever, we know how specious that logic is. We can understand that student protesters may not, in fact, long for their First Amendment rights should the tables turn on them. Law professor Charles Lawrence has argued that civil rights activists in the sixties achieved substantive gains only when they exceeded the acceptable bounds of the First Amendment, only when they disrupted “business as usual.”

Racist hate speech has come to emblemize free speech protections because the parties it injures lack social power. Students of color are expected to endure insults to their identities at the same time that celebrities win multi-million dollar defamation settlements and media companies scrupulously guard their intellectual property against plagiarism.

The belief that more speech is the remedy for “bad” speech can be a principled stance. But for the stance to be principled, it must account for why the target of racist hate speech is less deserving of exemption than, say, the millionaire with a reputation to protect from libel, or the community flooded with sexually-explicit material, or the deep state with a dark secret. Some exemptions make good sense. But does an obscene photograph of an adult that “lacks serious literary, artistic, political, or scientific value” (as defined in Miller v. California, the current law of the land regarding obscenity) really do more harm than a lecture promoting white supremacy?

American society fixates on antiracist protest when debating the First Amendment for the same reason it fixates on race when debating affirmative action: because of the perception that people of color are somehow undeserving of special privileges.

Yet it was supporting the rights of people of color that got Desiree Fairooz arrested in January for laughing during the Senate confirmation hearing of then-attorney general nominee Jeff Sessions. This week, the Department of Justice moved forward with her prosecution, along with those of two men who had mocked Sessions with fake Ku Klux Klan robes. In March, the Human Rights Council of the UN published a letter expressing alarm at the number of legislative efforts criminalizing peaceful assembly and expression in the US.

Powerful interests will find their way around the First Amendment to protect the status quo against antiracist protest. Asking student protesters to tolerate racist hate speech is to ask them to trust in free speech laws that have historically exempted the powerful and punished the vulnerable. When it comes to racism, the “marketplace of ideas” is not laissez-faire and never was.

Source: Hate Speech And The Misnomer Of ‘The Marketplace Of Ideas’ : Code Switch : NPR

Montreal mosque facing calls for investigation after imam preaches on anti-Semitic conspiracy theories

Disturbing.

The prayer leader in question has been suspended and his remarks denounced by the NCCM, but local mosque authorities, like any local religious authorities, need to ensure that prayer leaders do not engage in hate speech:

A Montreal mosque where an imam had prayed for Jews to be killed “one by one” is facing fresh calls for an investigation after more videos surfaced online showing anti-Semitic preaching.

The Middle East Media Research Institute released a video on Tuesday of sermons in which an imam at the Al Andalous Islamic Centre conveyed conspiracy theories about Jews, their history and their origins.

Sheikh Wael Al-Ghitawi is shown in the video clips claiming that Jews were “people who slayed the prophets, shed their blood and cursed the Lord,” reported MEMRI, which translated the Friday sermons.

The imam went on to say Jews were the descendants of “Turkish mongols” and had been “punished by Allah,” who made them “wander in the land.” He further said that Jews had no historical ties to Jerusalem or Palestine.

The view conveyed by the imam has typically been used to deny that Jews have a connection to the land of Israel, said Rabbi Reuben Poupko, co-chair of the Quebec branch of the Centre for Israel and Jewish Affairs.

 “This is a bizarre strain of radical propaganda. It appears in the writings of Hamas and other groups like it and claims to debunk Jewish history,” said the rabbi, who said it was “unseemly” to use a religious service to propagate hate.

He said he did not believe such views, as well as the “deeply troubling” earlier calls to violence, were supported by the broader Muslim community “but its presence in this mosque needs to be investigated.”

The videos were posted on YouTube in November 2014. The centre was already in the spotlight over an August 2014 video that showed an imam asking Allah to “destroy the accursed Jews,” and “kill them one by one.”

In a press release last week concerning the August 2014 video, the mosque administration blamed “clumsy and unacceptable phrasing” by a substitute imam, whose wording was “tainted by an abusive generalization.” The mosque could not be reached for comment about the most recent video.

“If you examine the annals of history you will see that the Jews do not have any historical right to Palestine,” Al-Ghitawi said in the latest video. He claimed there was “not a single Jew in Jerusalem and Palestine” for lengthy periods.

“Jerusalem is Arabic and Islamic,” he said at a separate sermon. “It is our land, the land of our fathers and forefathers. We are the people most entitled to it. We will not forsake a single inch of this land.”

On Monday the National Council of Canadian Muslims denounced the “incendiary speech” in the earlier Al-Andalous video, as well as a 2016 sermon at a Toronto mosque about the “filth of the Jews.”

The Muslim Association of Canada, which is affiliated with the Toronto mosque, has apologized and said it had suspended the prayer leader in question and launched an internal investigation into the incident.

Canada sees ‘dramatic’ spike in online hate — here’s what you can do about it

Useful to have this tracking of trends given that police-reported hate crime statistics, while needed and useful, only tell part of the story:

The internet can be a pretty intolerant place, and it may be getting worse.

An analysis of Canada’s online behaviour commissioned by CBC’s Marketplace shows a 600 per cent jump in the past year in how often Canadians use language online that’s racist, Islamophobic, sexist or otherwise intolerant.

“That’s a dramatic increase in the number of people feeling comfortable to make those comments,” James Rubec, content strategist for media marketing company Cision, told Marketplace.

Cision scanned social media, blogs and comments threads between November 2015 and November 2016 for slurs and intolerant phrases like “ban Muslims,” “sieg heil” or “white genocide.” They found that terms related to white supremacy jumped 300 per cent, while terms related to Islamophobia increased 200 per cent.
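Cision hasn’t published its methodology in detail, but the basic approach, scanning text for a fixed list of phrases and comparing counts across periods, can be sketched roughly as follows; the phrase list and corpora here are placeholders:

```python
# Rough sketch of phrase-frequency tracking across two periods, in the
# spirit of the Cision scan; not their actual methodology or phrase list.
import re

PHRASES = ["ban muslims", "sieg heil", "white genocide"]
PATTERN = re.compile("|".join(re.escape(p) for p in PHRASES))

def count_mentions(posts):
    """Total matches of tracked phrases across a corpus of posts."""
    return sum(len(PATTERN.findall(post.lower())) for post in posts)

# Placeholder corpora; in practice these would be scraped social media,
# blog and comment data for each twelve-month window.
period_one = ["...posts from Nov 2014 to Nov 2015..."]
period_two = ["...posts from Nov 2015 to Nov 2016..."]

before, after = count_mentions(period_one), count_mentions(period_two)
if before:
    print(f"Year-over-year change: {100 * (after - before) / before:+.0f}%")
```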

“It might not be that there are more racists in Canada than there used to be, but they feel more emboldened. And maybe that’s because of the larger racist sentiments that are coming out of the United States,” Rubec said.

So when you see hateful speech online, what can you do about it?

Marketplace‘s Asha Tomlinson joined journalist and cultural critic Septembre Anderson and University of Ontario Institute of Technology sociologist Barbara Perry, whose work focuses on hate crimes, to share strategies and tips for confronting intolerance online.

Reach out

If the person making hurtful comments is a friend, message them privately about it. Calling them out publicly can backfire.