Germany Acts to Tame Facebook, Learning From Its Own History of Hate – The New York Times

Good long and interesting read, highlighting a number of the issues and practical aspects involved:

Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: “Everybody without a badge is a potential spy!”

Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week.

They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.

This is a deletion center, one of Facebook’s largest, with more than 1,200 content moderators. They are cleaning up content — from terrorist propaganda to Nazi symbols to child abuse — that violates the law or the company’s community standards.

Germany, home to a tough new online hate speech law, has become a laboratory for one of the most pressing issues for governments today: how and whether to regulate the world’s biggest social network.

Around the world, Facebook and other social networking platforms are facing a backlash over their failures to safeguard privacy, curb disinformation campaigns and limit the digital reach of hate groups.

In India, seven people were beaten to death after a false viral message on the Facebook subsidiary WhatsApp. In Myanmar, violence against the Rohingya minority was fueled, in part, by misinformation spread on Facebook. In the United States, Congress called Mark Zuckerberg, Facebook’s chief executive, to testify about the company’s inability to protect its users’ privacy.

As the world confronts these rising forces, Europe, and Germany in particular, have emerged as the de facto regulators of the industry, exerting influence beyond their own borders. Berlin’s digital crackdown on hate speech, which took effect on Jan. 1, is being closely watched by other countries. And German officials are playing a major role behind one of Europe’s most aggressive moves to rein in technology companies, strict data privacy rules that take effect across the European Union on May 25 and are prompting global changes.

“For them, data is the raw material that makes them money,” said Gerd Billen, secretary of state in Germany’s Ministry of Justice and Consumer Protection. “For us, data protection is a fundamental right that underpins our democratic institutions.”

Germany’s troubled history has placed it on the front line of a modern tug-of-war between democracies and digital platforms.

In the country of the Holocaust, the commitment against hate speech is as fierce as the commitment to free speech. Hitler’s “Mein Kampf” is only available in an annotated version. Swastikas are illegal. Inciting hatred is punishable by up to five years in jail.

But banned posts, pictures and videos have routinely lingered on Facebook and other social media platforms. Now companies that systematically fail to remove “obviously illegal” content within 24 hours face fines of up to 50 million euros.

The deletion center predates the legislation, but its efforts have taken on new urgency. Every day content moderators in Berlin, hired by a third-party firm and working exclusively on Facebook, pore over thousands of posts flagged by users as upsetting or potentially illegal and make a judgment: Ignore, delete or, in particularly tricky cases, “escalate” to a global team of Facebook lawyers with expertise in German regulation.

Some decisions to delete are easy. Posts about Holocaust denial and genocidal rants against particular groups like refugees are obvious ones for taking down.

Others are less so. On Dec. 31, the day before the new law took effect, a far-right lawmaker reacted to an Arabic New Year’s tweet from the Cologne police, accusing them of appeasing “barbaric, Muslim, gang-raping groups of men.”

The request to block a screenshot of the lawmaker’s post wound up in the queue of Nils, a 35-year-old agent in the Berlin deletion center. His judgment was to let it stand. A colleague thought it should come down. Ultimately, the post was sent to lawyers in Dublin, London, Silicon Valley and Hamburg. By the afternoon it had been deleted, prompting a storm of criticism about the new legislation, known here as the “Facebook Law.”

“A lot of stuff is clear-cut,” Nils said. Facebook, citing his safety, did not allow him to give his surname. “But then there is the borderline stuff.”

Complicated cases have raised concerns that the threat of the new rules’ steep fines and 24-hour window for making decisions encourage “over-blocking” by companies, a sort of defensive censorship of content that is not actually illegal.

The far-right Alternative for Germany, a noisy and prolific user of social media, has been quick to proclaim “the end of free speech.” Human rights organizations have warned that the legislation was inspiring authoritarian governments to copy it.

Other people argue that the law simply gives a private company too much authority to decide what constitutes illegal hate speech in a democracy, an argument that Facebook, which favored voluntary guidelines, made against the law.

“It is perfectly appropriate for the German government to set standards,” said Elliot Schrage, Facebook’s vice president of communications and public policy. “But we think it’s a bad idea for the German government to outsource the decision of what is lawful and what is not.”

Richard Allan, Facebook’s vice president for public policy in Europe and the leader of the company’s lobbying effort against the German legislation, put it more simply: “We don’t want to be the arbiters of free speech.”

German officials counter that social media platforms are the arbiters anyway.

It all boils down to one question, said Mr. Billen, who helped draw up the new legislation: “Who is sovereign? Parliament or Facebook?”

Learning From (German) History

When Nils applied for a job at the deletion center, the first question the recruiter asked him was: “Do you know what you will see here?”

Nils has seen it all. Child torture. Mutilations. Suicides. Even murder: He once saw a video of a man cutting a heart out of a living human being.

And then there is hate.

“You see all the ugliness of the world here,” Nils said. “Everyone is against everyone else. Everyone is complaining about that other group. And everyone is saying the same horrible things.”

The issue is deeply personal for Nils. He has a 4-year-old daughter. “I’m also doing this for her,” he said.

The center here is run by Arvato, a German service provider owned by the conglomerate Bertelsmann. The agents have a broad purview, reviewing content from a half-dozen countries. Those with a focus on Germany must know Facebook’s community standards and, as of January, the basics of German hate speech and defamation law.

“Two agents looking at the same post should come up with the same decision,” said Karsten König, who manages Arvato’s partnership with Facebook.

The Berlin center opened with 200 employees in 2015, as Germany was opening its doors to hundreds of thousands of migrants.

Anas Modamani, a Syrian refugee, posed with Chancellor Angela Merkel and posted the image on Facebook. It instantly became a symbol of her decision to allow in hundreds of thousands of migrants.

Soon it also became a symbol of the backlash.

The image showed up in false reports linking Mr. Modamani to terrorist attacks in Brussels and on a Christmas market in Berlin. He sought an injunction against Facebook to stop such posts from being shared but eventually lost.

The arrival of nearly 1.4 million migrants in Germany has tested the country’s resolve to keep a tight lid on hate speech. The law on illegal speech was long established, but enforcement in the digital realm was scattershot before the new legislation.

Posts calling refugees rapists, Neanderthals and scum survived for weeks, according to jugendschutz.net, a publicly funded internet safety organization. Many were never taken down. Researchers at jugendschutz.net reported a tripling in observed hate speech in the second half of 2015.

Mr. Billen, the secretary of state in charge of the new law, was alarmed. In September 2015, he convened executives from Facebook and other social media sites at the justice ministry, a building that was once the epicenter of state propaganda for the Communist East. A task force for fighting hate speech was created. A couple of months later, Facebook and other companies signed a joint declaration, promising to “examine flagged content and block or delete the majority of illegal posts within 24 hours.”

But the problem did not go away. Over the 15 months that followed, independent researchers, hired by the government, twice posed as ordinary users and flagged illegal hate speech. During the tests, they found that Facebook had deleted only 46 percent and 39 percent of the flagged content, respectively.

“They knew that they were a platform for criminal behavior and for calls to commit criminal acts, but they presented themselves to us as a wolf in sheep’s clothing,” said Mr. Billen, a poker-faced civil servant with stern black frames on his glasses.

By March 2017, the German government had lost patience and started drafting legislation. The Network Enforcement Law was born, setting out 21 types of content that are “manifestly illegal” and requiring social media platforms to act quickly.

Officials say early indications suggest the rules have served their purpose. Facebook’s performance on removing illegal hate speech in Germany rose to 100 percent over the past year, according to the latest spot check by the European Union.

Platforms must publish biannual reports on their efforts. The first is expected in July.

At Facebook’s Berlin offices, Mr. Allan acknowledged that under the earlier voluntary agreement, the company had not acted decisively enough at first.

“It was too little and it was too slow,” he said. But, he added, “that has changed.”

He cited another independent report for the European Commission from last summer that showed Facebook was by then removing 80 percent of hate speech posts in Germany.

The reason for the improvement was not German legislation, he said, but a voluntary code of conduct with the European Union. Facebook’s results have improved in all European countries, not just in Germany, Mr. Allan said.

“There was no need for legislation,” he said.

Mr. Billen disagrees.

“They could have prevented the law,” he said. YouTube scored 90 percent in last year’s monitoring exercise. If other platforms had done the same, there would be no law today, he said.

A Regulatory Dilemma

Germany’s hard-line approach to hate speech and data privacy once made it an outlier in Europe. The country’s stance is now more mainstream, an evolution embodied by the justice commissioner in Brussels.

Vera Jourova, the justice commissioner, deleted her Facebook account in 2015 because she could not stand the hate anymore.

“It felt good,” she said about pressing the button. She added: “It felt like taking back control.”

But Ms. Jourova, who grew up behind the Iron Curtain in what is now the Czech Republic, had long been skeptical about governments legislating any aspect of free speech, including hate speech. Her father lost his job after making a disparaging comment about the Soviet invasion in 1968, barring her from going to university until she married and took her husband’s name.

“I lived half my life in the atmosphere driven by Soviet propaganda,” she said. “The golden principle was: If you repeat a lie a hundred times it becomes the truth.”

When Germany started considering a law, she instead preferred a voluntary code of conduct. In 2016, platforms like Facebook promised European users easy reporting tools and committed to removing most illegal posts brought to their attention within 24 hours.

The approach worked well enough, Ms. Jourova said. It was also the quickest way to act because the 28 member states in the European Union differed so much about whether and how to legislate.

But the stance of many governments toward Facebook has hardened since it emerged that the consulting firm Cambridge Analytica had harvested the personal data of up to 87 million users. Representatives of the European Parliament have asked Mr. Zuckerberg to come to Brussels to “clarify issues related to the use of personal data” and he has agreed to come as soon as next week.

Ms. Jourova, whose job is to protect the data of over 500 million Europeans, has hardened her stance as well.

“Our current system relies on trust and this did nothing to improve trust,” she said. “The question now is how do we continue?”

The European Commission is considering German-style legislation for online content related to terrorism, violent extremism and child pornography, including a provision that would impose fines on platforms that did not remove illegal content within an hour of being alerted to it.

Several countries — France, Israel, Italy, and Canada among them — have sent queries to the German government about the impact of the new hate speech law.

And Germany’s influence is evident in Europe’s new privacy regulation, known as the General Data Protection Regulation, or G.D.P.R. The rules give people control over how their information is collected and used.

Inspired in part by German data protection laws written in the 1980s, the regulation has been shaped by a number of prominent Germans. Ms. Jourova’s chief of staff, Renate Nikolay, is German, as is her predecessor’s chief of staff, Martin Selmayr, now the European Commission’s secretary general. The lawmaker in charge of the regulation in the European Parliament is German, too.

“We have built on the German tradition of data protection as a constitutional right and created the most modern piece of regulation of the digital economy,” Ms. Nikolay said.

“To succeed in the long term, companies need the trust of customers,” she said. “Since Cambridge Analytica at the latest, it has become clear that data protection is not just some nutty European idea, but a matter of competitiveness.”

On March 26, Ms. Jourova wrote a letter — by post, not email — to Sheryl Sandberg, Facebook’s chief operating officer.

“Is there a need for stricter rules for platforms like those that exist for traditional media?” she asked.

“Is the data of Europeans affected by the current scandal?” she added, referring to the Cambridge Analytica episode. And, if so, “How do you plan to inform the user about this?”

She demanded a reply within two weeks, and she got one. Some 2.7 million Europeans were affected, Ms. Sandberg wrote.

But she never answered Ms. Jourova’s question on regulation.

“There is now a sense of urgency and the conviction that we are dealing with something very dangerous that may threaten the development of free democracies,” said Ms. Jourova, who is also trying to find ways to clamp down on fake news and disinformation campaigns.

“We want the tech giants to respect and follow our legislation,” she added. “We want them to show social responsibility both on data protection and on hate speech.”

So do many Facebook employees, Mr. Allan, the company executive, said.

“We employ very thoughtful and principled people,” he said. “They work here because they want to make the world a better place, so when an assumption is made that the product they work on is harming people it is impactful.”

“People have felt this criticism very deeply,” he said.

A Visual Onslaught

Nils works eight-hour shifts. On busy days, 1,500 user reports are in his queue. Other days, there are only 300. Some of his colleagues have nightmares about what they see.

Every so often someone breaks down. A mother recently left her desk in tears after watching a video of a child being sexually abused. A young man felt physically sick after seeing a video of a dog being tortured. The agents watch teenagers self-mutilating and girls recounting rape.

They have weekly group sessions with a psychologist and the trauma specialists on standby. In more serious cases, the center teams up with clinics in Berlin.

In the office, which is adorned with Facebook logos, fresh fruit is at the agents’ disposal in a small room where subdued colors and decorative moss growing on the walls are meant to calm fraying nerves.

To decompress, the agents sometimes report each other’s posts, not because they are controversial, but “just for a laugh,” said another agent, the son of a Lebanese refugee and an Arabic speaker who has had to deal with content related to terrorism generally and the Islamic State specifically. By now, he said, images of “weird skin diseases” affected him more than those of a beheading. Nils finds sports injuries like broken bones particularly disturbing.

There is a camaraderie in the office and a real sense of mission: Nils said the agents were proud to “help clean up the hate.”

The definition of hate is constantly evolving.

The agents, who initially take a three-week training course, get frequent refreshers. Their guidelines are continually revised to reflect the evolving culture of hate speech. Events change the meaning of words. New hashtags and online trends must be put in context.

“Slurs can become socialized,” Mr. Allan of Facebook explained.

“Refugee” became a category protected under the broad hate speech rules only in 2015. “Nafri” was a term used by the German police that year to describe North Africans who sexually harassed hundreds of women in Cologne on New Year’s Eve, attacking and, in some cases, raping them. Since then, Nafri has become a popular insult among the far right.

Nils and his colleagues must determine whether hateful content is singling out an ethnic group or individuals.

That was the challenge with a message on Twitter that was later posted to Facebook as a screenshot by Beatrix von Storch, deputy floor leader of the far-right party, AfD.

“What the hell is wrong with this country?” Ms. von Storch wrote on Dec. 31. “Why is an official police account tweeting in Arabic?”

“Do you think that will appease the barbaric murdering Muslim group-raping gangs of men?” she continued.

A user reported the post as a violation of German law, and it landed in Nils’s queue. He initially decided to ignore the request because he felt Ms. von Storch was directing her insults at the men who had sexually assaulted women two years earlier.

Separately, a user reported the post as a violation of community standards. Another agent leaned toward deleting it, taking it as directed at Muslims in general.

They conferred with their “subject matter expert,” who escalated it to a team in Dublin.

For 24 hours, the post kept Facebook lawyers from Silicon Valley to Hamburg busy. The Dublin team decided that the post did not violate community standards but sent it on for legal assessment by outside lawyers hired by Facebook in Germany.

Within hours of news that the German police were opening a criminal investigation into Ms. von Storch over her comments, Facebook restricted access to the post. The user who reported the content was notified that it had been blocked for a violation of section 130 of the German criminal code, incitement to hatred. Ms. von Storch was notified as well.

In the first few days of the year, it looked like the platforms were erring on the side of censorship. On Jan. 2, a day after Ms. von Storch’s post was deleted, the satirical magazine Titanic quipped that she would be its new guest tweeter. Two of the magazine’s subsequent Twitter posts mocking her were deleted. When Titanic published them again, its account was temporarily suspended.

Since then, things have calmed down. And even Mr. Allan conceded: “The law has not materially changed the amount of content that is deleted.”

via Germany Acts to Tame Facebook, Learning From Its Own History of Hate – The New York Times

ADL tallies up roughly 4 million anti-Semitic tweets in 2017

It would be nice to have comparative data with respect to different religions, just as we do for police-reported hate crimes:

At least 4.2 million anti-Semitic tweets were shared or re-shared from roughly 3 million Twitter accounts last year, according to an Anti-Defamation League report released Monday. Most of those accounts are believed to be operated by real people rather than automated software known as bots, the organization, an international NGO that works against anti-Semitism and bigotry, said.

The anti-Semitic accounts constitute less than 1% of Twitter’s roughly 336 million active accounts (about 3 million of 336 million, or roughly 0.9%).

“This new data shows that even with the steps Twitter has taken to remove hate speech and to deal with those accounts disseminating it, users are still spreading a shocking amount of anti-Semitism and using Twitter as a megaphone to harass and intimidate Jews,” said ADL CEO Jonathan Greenblatt in a statement.

The report comes amid growing concern about harassment on social media platforms such as Twitter and Facebook, as well as their roles in spreading fake news. Both companies are trying to curb hatred on their platforms while preserving principles of free speech and expression. Last month, Facebook CEO Mark Zuckerberg was summoned to Washington to testify in front of Congress, in part out of concern over how the social network was used to spread propaganda during the 2016 presidential campaign.

Twitter CEO Jack Dorsey has publicly made harassment on the social network a priority, even soliciting ideas for combatting the problem from the public. In March, Dorsey held a livestream to discuss how to deal with the issue. The company has made changes, such as prohibiting offensive account names or better enforcing its terms of service.

Twitter didn’t immediately respond to a request for comment.

The ADL report evaluated tweets on subjects ranging from Holocaust denial and anti-Jewish slurs to positive references to anti-Semitic figures, books and podcasts. The ADL also tallied the use of coded words and symbols, such as the triple parenthesis, which is put around names to signal someone is Jewish.

The study used a dataset of roughly 55,000 tweets, which were screened by a team of researchers for indications of anti-Semitism. Since this is the first report of its kind from the group, there are no earlier numbers to compare against. The ADL did, however, release a report on the targeting of journalists during the 2016 election that also included Twitter data.

The ADL says that artificial intelligence and algorithms will eventually be effective at identifying hate online, but human input is needed to train such systems. For example, screeners can teach machines when anti-Semitic language might have been used to express opposition to such ideas or in an ironic manner.

Such issues aren’t simply hypothetical. The ADL pointed to the huge volume of tweets about anti-Semitism that were posted during the week of the Charlottesville, Virginia riots last summer. Though Twitter saw the highest volume of tweets about anti-Semitism for the year, only a small percentage were actually anti-Semitic.

The report noted the ADL works with Twitter on the issues of anti-Semitism and bigotry online. Greenblatt said the organization is “pleased that Twitter has already taken significant steps to respond to this challenge.”

Source: ADL tallies up roughly 4 million anti-Semitic tweets in 2017

‘Weaponization’ of free speech prompts talk of a new hate law

One to watch:

The climate for hate speech regulation in Canada appears to be shifting.

Traditional free speech advocates are reconsidering the status quo they helped create, in which hate speech is only a Criminal Code charge that requires political approval, and so is rarely prosecuted. There is even talk of resurrecting the defunct and much maligned ban on internet hate speech, Section 13 of the Canadian Human Rights Act.

The latest example was a lecture this week by Omar Mouallem, an Edmonton journalist and board member of free expression group PEN Canada, in which he argued online racists have “weaponized” free speech against Muslims, and Canada should consider a new anti-hate law to stop them.

Mouallem told a University of Alberta audience that public discourse is “fatally flawed,” and overrun with hate propagandists who traffic in lies and provocations in order to pose as censorship victims.

The far right has “co-opted” the issue of free speech, and their activism is not a principled defence of a Charter value, but “a sly political strategy to divide opponents on the left, humiliate them and cast them as hypocrites and unconstitutional, to clear a way for unconstitutional ideas,” Mouallem said in an advance email interview.

The traditional liberal response of public censure and rebuttal is no longer effective because it just “devolves into a pissing match that goes nowhere and only makes people double down on their opinions,” he said. “Given that Facebook groups and social media are the meeting point for hate groups to organize, and that online hate speech has a great ability to spread wider and faster, I think special regulation is worth considering.”

It is striking to hear that from a board member of PEN Canada, which is devoted to fighting censorship and defending freedom of expression, and was instrumental in the legislative repeal of Section 13, a law in the Canadian Human Rights Act that banned repeated messages, by phone or internet, that were “likely to expose” protected groups to hatred or contempt.

The lecture follows news that the federal Liberal government is openly mulling bringing back Section 13, which was repealed by Parliament in 2014, but later found by courts to be constitutionally valid. It allowed for legal orders banning offenders from engaging in further hate speech, on pain of criminal contempt charges, and provided for fines of $10,000.

It also follows the backtracking of another press freedom group, Canadian Journalists for Free Expression, which launched a petition for Prime Minister Justin Trudeau to “disinvite” U.S. President Donald Trump from a G7 Summit on the grounds that his administration’s attacks on press freedom have harmed American democracy. That petition was deleted soon after it was announced, amid criticism that it hypocritically also violated the principles of free expression.

Even libraries have illustrated the shift. A memorial held in a Toronto library last year for Barbara Kulaszka, a prominent lawyer for Canadian hate propagandists, led the Toronto Public Library to change its room-booking policy, allowing officials to refuse bookings that are “likely to promote, or would have the effect of promoting, discrimination, contempt or hatred of any group.”

Tasleem Thawar, executive director of PEN Canada, said she encourages diverse perspectives on the board. There has been no change to the group’s official position “that an educated, thoughtful, and vibrantly expressive citizenry is the best defence against the spread of hateful ideologies,” she said.

“If the federal government were to propose a new law (against hate speech), we would certainly comment on the specifics and its possible effects,” she said. “However, PEN is also committed to dispelling hatreds, as stated in the PEN International Charter, including on the basis of identity markers like class, race, gender, and nationality. And it is true that hateful, marginalizing and even demonizing speech can chill the freedom of expression of the groups who are being subjected to such public bigotry.”

All this might be evidence that the culture war over Canada’s uniquely balanced approach to hate speech is set to flare up again. Old arguments are being repurposed to fit modern media. Laws that were written in the age of telephone hotlines and printed newspapers are being reconsidered in the context of Twitter, Facebook and Google.

As ever, religion — especially Islam — is at the core of the debate, according to Richard Moon, the University of Windsor law professor who authored an influential 2008 report for the Canadian Human Rights Commission that urged it to stop regulating online hate via Section 13.

In his forthcoming book Putting Faith in Hate: When Religion is the Source or Target of Hate Speech, Moon describes the traditional distinction between speech that attacks a belief, which is typically protected by law, and speech that attacks a group, which can rise to the level of banned hate speech. He argues that our understanding of religion complicates this distinction, because religion is both a personal commitment and a cultural identity. Hate speech, then, often works by falsely attributing an objectionable belief to every member of a cultural group.

“Most contemporary anti-Muslim speech takes this form, presenting Islam as a regressive and violent belief system that is incompatible with liberal democratic values. The implication is that those who identify as Muslims – those who hold such beliefs – are dangerous and should be treated accordingly. Beliefs that may be held by a fringe element in the tradition are falsely attributed to all Muslims,” Moon writes.

Mouallem, who does not identify as Muslim, is a former rapper, freelance writer, and co-author of a book on the Fort McMurray wildfire. He said he does not advocate the return of Section 13 exactly as it was. It often worked, he said, but it is “too tainted.”

Section 13 was a “messy, if not farcical process,” he said, made more so by the “manipulation” of Richard Warman, the lawyer and former Canadian Human Rights Commission staffer who effectively monopolized the law, filing nearly every case and eventually winning them all, sometimes after posing online as a neo-Nazi to gather evidence. It was also “misused,” he said, by Canadian Muslim leaders on the “wishy-washy” case of alleged anti-Islam hate speech in Maclean’s magazine.

But Canada should have some kind of “online clause” that addresses both the “uniqueness of online content” and this current historical moment in which there is “widespread vilification” of Muslims and “rapid mobilization of extremist groups.”

Now there are “flagrant” examples that would be caught by such a law, he said, such as Ezra Levant’s use of the term “rapefugees.”

“Allowing hate speech to remain in the public sphere actually signals that it’s socially acceptable, which gives licence to perpetuate it, and eventually can make it mainstream,” Mouallem said.

The expression that “sunlight is the best disinfectant,” meaning hate speech is best countered by more and better speech, is “ineffective when you’re dealing with majority tyranny and certain discrimination is widely accepted. This is the unique moment of hate speech in Canada and much of the ‘West’ right now,” he said. “Society has made an exception for Islam.”

Source: ‘Weaponization’ of free speech prompts talk of a new hate law

Ottawa library cancels planned screening of controversial ‘Killing Europe’ doc

Having viewed the trailer, I think cancelling the screening was the right call, as the film crosses the border into hate speech (Mark Steyn on steroids):

The Ottawa Public Library has cancelled this weekend’s screening of a controversial documentary, Killing Europe, after complaints the film was thinly disguised hate speech against Muslims and immigrants.

“I am letting you know that I have been working with the city solicitor about concerns brought forward by the Ottawa district labour council, unions, residents, board members and friends,” Coun. Tim Tierney, who is chairman of the library’s board of directors, said in an email. “I had asked the CEO to review and address the concerns expressed.”

“I can now report that the rental of the room will not take place.”

The documentary was to have been screened Saturday afternoon at the library’s main branch on Metcalfe Street. The screening was to have been hosted by the group ACT! for Canada, a group dedicated “to speaking out about the clear and present dangers emerging from those who do not embrace Canada’s values …”

Killing Europe, by Danish expatriate Michael Hansen, purports to warn of the dangers of the “Islamification” of Europe.

But even a “30-second Google search” by the library would have revealed it to be hate speech, says human rights lawyer Richard Warman, who was one of the people to complain to the library about the screening.

Screening the film is “in clear violation of the library’s own rental policy prohibiting the use of space for discriminatory purposes,” Warman wrote in an email to the library and its board members, Mayor Jim Watson, and others.

“When I looked at the three-minute trailer, it was clear it was going to be an all-out assault on immigrants and the Muslim community,” Warman said Friday.

“The message contained even in just the trailer is that ‘immigrants are coming to swamp and devastate Europe and that Muslims are engaged in perpetual massacres of the white populations.’ Obviously, it set off alarms.”

Warman received confirmation the screening had been cancelled in an email Friday morning from library deputy CEO Monique Désormeaux.

Coun. Catherine McKenney, another library board member, said Friday she “wholeheartedly” supported the library’s decision to cancel the screening and promised better discussion in the future about what the library chooses to allow.

But where should the line be drawn between suppressing hate speech and stifling free speech?

Warman said the screening clearly violated the library’s obligations, stated on its website, to not provide public space for individuals or groups that “are likely to promote discrimination, contempt or hatred to any person on the basis of race, national or ethnic origin, colour, religion, age, sex, marital status, family status, sexual preference, or disability, gratuitous sex and violence or denigration of the human condition.”

“As a human rights lawyer, I’m firmly in the camp of defending freedom of expression under Section 2B of the Charter,” Warman said. “The library board is absolutely right to defend freedom of expression, while at the same time complying with their parallel obligation under Ontario human rights law not to discriminate against people on the basis of race and religion.”

In the case of Killing Europe and ACT! for Canada’s own newsletter, which Warman said includes claims of gang rapes and “grotesque caricatures of pakis, blacks and illegals,” there is “no grey area.”

“This is hate propaganda that is clearly directed toward recent immigrants and members of the Muslim community,” he said.

“The main thing is that we ensure public venues aren’t used as amplifiers of the message of hate-mongers … Public, taxpayer-funded facilities cannot be used to engage in hate propaganda. The library board has the obligation, when we know that these groups will attempt to misuse public facilities, that they engage in a sort of rudimentary 30-second Google check: ‘Who are you again? And what’s the movie you want to show?’ The 30-second Google check would have come up with the answers and set off alarm bells.”

ACT! for Canada did not immediately respond to a request for comment Friday.

via Ottawa library cancels planned screening of controversial ‘Killing Europe’ doc | Ottawa Citizen

What Does Facebook Consider Hate Speech? Take Our Quiz – The New York Times

Gives a good sense of the criteria Facebook uses, which allow more than what I would consider hate speech.

Take the quiz at their website to see the contrast between Facebook’s rulings, your own responses and those of NYT readers (spoiler alert: Facebook classified half of these as hate speech; I classified five of the six):

Have you ever seen a post on Facebook that you were surprised wasn’t removed as hate speech? Have you flagged a message as offensive or abusive but the social media site deemed it perfectly legitimate?

Users on social media sites often express confusion about why offensive posts are not deleted. Paige Lavender, an editor at HuffPost, recently described her experience learning that a vulgar and threatening message she received on Facebook did not violate the platform’s standards.

Here are a selection of statements based on examples from a Facebook training document and real-world comments found on social media. Most readers will find them offensive. But can you tell which ones would run afoul of Facebook’s rules on hate speech?

Hate speech is one of several types of content that Facebook reviews, in addition to threats and harassment. Facebook defines hate speech as:

  1. An attack, such as a degrading generalization or slur.
  2. Targeting a “protected category” of people, including one based on sex, race, ethnicity, religious affiliation, national origin, sexual orientation, gender identity, and serious disability or disease.
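
Mechanically, this is a two-part conjunctive test: a post counts as hate speech only if it is both an attack and aimed at a protected category. Here is a minimal sketch of that logic, assuming hypothetical names throughout; it models only the published definition, not Facebook’s actual systems:

```python
# Illustrative sketch only: models the two-part test Facebook has
# described publicly, not the company's real implementation.
# The category names below are assumptions.

PROTECTED_CATEGORIES = {
    "sex", "race", "ethnicity", "religious_affiliation",
    "national_origin", "sexual_orientation", "gender_identity",
    "serious_disability_or_disease",
}

def is_hate_speech(is_attack: bool, targeted_traits: set) -> bool:
    """Both prongs must hold: the post is an attack (a degrading
    generalization or slur) AND it targets a protected category."""
    return is_attack and bool(targeted_traits & PROTECTED_CATEGORIES)

# A slur aimed at a religious group trips both prongs:
print(is_hate_speech(True, {"religious_affiliation"}))  # True
# An attack aimed only at, say, an occupation does not:
print(is_hate_speech(True, {"occupation"}))             # False
```

The conjunction is the point: plenty of posts most readers find offensive fail the second prong and are therefore allowed, which is what the quiz below illustrates.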

Facebook’s hate speech guidelines were published in June by ProPublica, an investigative news organization, which is gathering users’ experiences about how the social network handles hate speech.

Danielle Citron, an information privacy expert and professor of law at the University of Maryland, helped The New York Times analyze six deeply insulting statements and determine whether they would be considered hate speech under Facebook’s rules.

  1. “Why do Indians always smell like curry?! They stink!”
  2. “Poor black people should still sit at the back of the bus.”
  3. “White men are assholes.”
  4. “Keep ‘trans’ men out of girls bathrooms!”
  5. “Female sports reporters need to be hit in the head with hockey pucks.”
  6. “I’ll never trust a Muslim immigrant… they’re all thieves and robbers.”

Did any of these answers surprise you? You’re probably not alone.

Ms. Citron said that even thoughtful and robust definitions of hate speech can yield counterintuitive results when enforced without cultural and historic context.

“When you’re trying to get as rulish as possible, you can lose the point of it,” she said. “The spirit behind those rules can get lost.”

A Facebook spokeswoman said that the company expects its thousands of content reviewers to take context into account when making decisions, and that it constantly evolves its policies to keep up with changing cultural nuances.

In response to questions for this piece, Facebook said it had changed its policy to include age as a protected category. While Facebook’s original training document states that content targeting “black children” would not violate its hate speech policy, the company’s spokeswoman said that such attacks would no longer be acceptable.

Should tech companies be able to shut down neo-Nazis? – Recode

Good discussion of some of the issues involved, which IMO lay out the need for some government leadership in setting up guidelines and possibly regulations:

In the aftermath of the white supremacist rally in Charlottesville, Va., where dozens were injured and one counter-protester was killed, the battle moved online.

The four-year-old neo-Nazi website the Daily Stormer was evicted by web hosts GoDaddy and Google after it disparaged the woman killed in Charlottesville, Heather Heyer. And then web infrastructure company Cloudflare, which had previously been criticized for how it handled reports of abuse by the website, publicly and permanently terminated the Stormer’s account, too, forcing it to the dark web.

But should a tech company have that power? Even Cloudflare’s CEO Matthew Prince, who personally decided to pull the plug, thinks the answer should be “no” in the future.

“I am confident we made the right decision in the short term because we needed to have this conversation,” Prince said on the latest episode of Too Embarrassed to Ask. “We couldn’t have the conversation until we made that determination. But it is the wrong decision in the long term. Infrastructure is never going to be the right place to make these sorts of editorial decisions.”

Interviewed by Recode’s Kara Swisher and The Verge’s Lauren Goode, Prince was joined on the new episode by the executive director of the Electronic Frontier Foundation, Cindy Cohn. Although the two organizations have worked together in the past, Cohn co-authored a public rebuke of Cloudflare’s decision, saying it threatened the “future of free expression.”

“The moment where this is about Nazis, to me, is very late in the conversation,” Cohn said, citing past attempts to shut down political websites. “What they do is they take down the whole website, they can’t just take down the one bad article. The whole Recode website comes down because you guys say something that pisses off some billionaire.”

“These companies, including Matthew’s, have a right to decide who they’re doing business with, but we urge them to be really, really cautious about this,” she added.

You can listen to the new podcast on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts.

Prince and Cohn agreed that part of the long-term solution to controversial speech online — no matter how odious — may be establishing and respecting a set of transparent, principled rules that cross international borders.

“I believe deeply in free speech, but it doesn’t have the same force around the rest of the world,” Prince said. “What does is an idea of due process, that there are a set of rules you should follow, and you should be able to know going into that. I don’t think the tech industry has that set of due processes.”

Cohn noted that there is a legal mechanism for stopping someone from speaking before the words are uttered: prior restraint. For most of America’s history, obtaining such an injunction against someone has been intentionally difficult.

“We wouldn’t have a country if people couldn’t voice radical ideas and they had to go through a committee of experts or tech bros,” she said. “If you have to go on bended knee before you get to speak, you’re going to reduce the universe of ideas. Maybe you’ll get some heinous ideas, but you might not get the Nelson Mandelas, either.”

Source: Should tech companies be able to shut down neo-Nazis? – Recode

Delete Hate Speech or Pay Up, Germany Tells Social Media Companies – The New York Times

Will be interesting to see the degree to which this works in making social media companies take more effective action, as well as the means that companies take to ‘police’ speech (see earlier post Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica). Apart from the debate over what can/should be any limits to free speech, there are risks in “outsourcing” this function to the private sector:

Social media companies operating in Germany face fines of as much as $57 million if they do not delete illegal, racist or slanderous comments and posts within 24 hours under a law passed on Friday.

The law reinforces Germany’s position as one of the most aggressive countries in the Western world at forcing companies like Facebook, Google and Twitter to crack down on hate speech and other extremist messaging on their digital platforms.

But the new rules have also raised questions about freedom of expression. Digital and human rights groups, as well as the companies themselves, opposed the law on the grounds that it placed limits on individuals’ right to free expression. Critics also said the legislation shifted the burden of responsibility to the providers from the courts, leading to last-minute changes in its wording.

Technology companies and free speech advocates argue that there is a fine line between policy makers’ views on hate speech and what is considered legitimate freedom of expression, and social networks say they do not want to be forced to censor those who use their services. Silicon Valley companies also deny that they are failing to meet countries’ demands to remove suspected hate speech online.

Still, German authorities pressed ahead with the legislation. Germany has seen an increase in racist comments and anti-immigrant language since more than a million migrants, predominantly from Muslim countries, began arriving in 2015, and Heiko Maas, the justice minister who drew up the draft legislation, said on Friday that it ensured that rules that currently apply offline would be equally enforceable in the digital sphere.

“With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all,” Mr. Maas said. “We are ensuring that everyone can express their opinion freely, without being insulted or threatened.”

“That is not a limitation, but a prerequisite for freedom of expression,” he continued.

The law will take effect in October, less than a month after nationwide elections, and will apply to social media sites with more than two million users in Germany.

It will require companies including Facebook, Twitter and Google, which owns YouTube, to remove any content that is illegal in Germany — such as Nazi symbols or Holocaust denial — within 24 hours of it being brought to their attention.

The law allows for up to seven days for the companies to decide on content that has been flagged as offensive, but that may not be clearly defamatory or inciting violence. Companies that persistently fail to address complaints by taking too long to delete illegal content face fines that start at 5 million euros, or $5.7 million, and could rise to as much as €50 million.
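
In other words, the statute’s obligations boil down to a removal deadline keyed to how clear-cut the content is, plus fines reserved for systematic failure. A toy sketch of that scheme, with names and structure of my own invention (the law itself is prose, not code):

```python
# Toy model of the NetzDG obligations described above. These names
# and this structure are assumptions, not the statute's wording.

FINE_RANGE_EUR = (5_000_000, 50_000_000)  # persistent failures only

def removal_deadline_hours(manifestly_illegal: bool) -> int:
    """Manifestly illegal content must come down within 24 hours of a
    complaint; content that is flagged but not clearly illegal allows
    up to seven days for a decision."""
    return 24 if manifestly_illegal else 7 * 24

print(removal_deadline_hours(True))   # 24
print(removal_deadline_hours(False))  # 168
```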

Every six months, companies will have to publicly report the number of complaints they have received and how they have handled them.

In Germany, which has some of the most stringent anti-hate speech laws in the Western world, a study published this year found that Facebook and Twitter had failed to meet a national target of removing 70 percent of online hate speech within 24 hours of being alerted to its presence.

The report noted that while the two companies eventually erased almost all of the illegal hate speech, Facebook managed to remove only 39 percent within 24 hours, as demanded by the German authorities. Twitter met that deadline in 1 percent of instances. YouTube fared significantly better, removing 90 percent of flagged content within a day of being notified.

Facebook said on Friday that the company shared the German government’s goal of fighting hate speech and had “been working hard” to resolve the issue of illegal content. The company announced in May that it would nearly double, to 7,500, the number of employees worldwide devoted to clearing its site of flagged postings. It was also trying to improve the processes by which users could report problems, a spokesman said.

Twitter declined to comment, while Google did not immediately respond to a request for comment.

The standoff between tech companies and politicians is most acute in Europe, where freedom of expression rights are less comprehensive than in the United States, and where policy makers have often bristled at Silicon Valley’s dominance of people’s digital lives.

But advocacy groups in Europe have raised concerns over the new German law.

Mirko Hohmann and Alexander Pirant of the Global Public Policy Institute in Berlin criticized the legislation as “misguided” for placing too much responsibility for deciding what constitutes unlawful content in the hands of social media providers.

“Setting the rules of the digital public square, including the identification of what is lawful and what is not, should not be left to private companies,” they wrote.

Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messaging online and to prevent “fake news” from circulating. That includes using artificial intelligence to remove potentially extremist material automatically and banning news sites believed to spread fake or misleading reports from making money through the companies’ digital advertising platforms.

Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

For all those interested in free speech and hate speech issues, a really good analysis of how Facebook is grappling with the issue and its definitions of protected groups. I urge all readers to go through the slide show (you need to go to the article to access it), which captures some of the complexities involved:

In the wake of a terrorist attack in London earlier this month, a US congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared US Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed, and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims—those that are “radicalized”—while Delgado’s post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of “fake news”—fabricated stories that circulated on Facebook like “Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement“—and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea, and Western Sahara.

One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children, and white men. It asks: which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence, and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)

Facebook has used these rules to train its “content reviewers” to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.
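
Read as an algorithm, the rule is an intersection test: a target counts as a protected group only if every trait used to describe it is itself protected, and any unprotected modifier demotes the target to a “subset” that gets broader latitude. A rough sketch of that logic, assuming hypothetical names rather than anything taken from the leaked documents:

```python
# Illustrative reconstruction of the "subset" rule described above.
# Trait classifications and names are my assumptions, not code or
# wording from Facebook's training documents.

PROTECTED_TRAITS = {
    "race", "sex", "gender_identity", "religious_affiliation",
    "national_origin", "ethnicity", "sexual_orientation",
    "serious_disability_or_disease",
}

def target_class(traits: set) -> str:
    """A target is a protected group only if *every* trait describing
    it is protected; one unprotected modifier makes it a 'subset'."""
    if traits and traits <= PROTECTED_TRAITS:
        return "protected group"  # attacks are deleted
    return "subset"               # broader latitude

print(target_class({"race", "sex"}))        # white men -> protected group
print(target_class({"sex", "occupation"}))  # female drivers -> subset
print(target_class({"race", "age"}))        # black children -> subset
# (Per the original slides; Facebook has since said it added age
# as a protected category, as noted in the NYT quiz piece above.)
```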

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook’s algorithm is designed to defend all races and genders equally.

But Facebook says its goal is different—to apply consistent standards worldwide. “The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

Facebook’s rules constitute a legal world of their own. They stand in sharp contrast to the United States’ First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company’s hate speech algorithm. But they also differ—for example, in permitting postings that deny the Holocaust—from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump’s posts about his campaign proposal to ban Muslim immigration to the United States violated the company’s written policies against “calls for exclusion” of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump’s statements from its policies at the order of Mark Zuckerberg, the company’s founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook’s decision. Appeals are currently only available to people whose profile, group, or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. “I’ll be the first to say that we’re not perfect every time,” she said.

Facebook is not required by US law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, Section 230 of the Communications Decency Act (part of the Telecommunications Act of 1996), was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

Source: Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

Hate Speech And The Misnomer Of ‘The Marketplace Of Ideas’ : NPR

Good long read by David Shih on some of the weaknesses in the free speech arguments:

Critical race theorists Richard Delgado and Jean Stefancic addressed this possibility in a 1992 Cornell Law Review article entitled “Images of the Outsider in American Law and Culture: Can Free Expression Remedy Systemic Social Ills.” They coin a term for the erroneous belief that “good” antiracist speech is the best remedy for “bad” racist speech: the “empathic fallacy.” The empathic fallacy is the conviction “that we can somehow control our consciousness despite limitations of time and positionality … and that we can enlarge our sympathies through linguistic means alone.”

In other words, the empathic fallacy leads us to believe that “good” speech begets racial justice and that we will be able to tell the difference between it and racist hate speech because we are distanced, objective arbiters…

In the meantime, racist hate speech flows unabated because of our faith in a flawed metaphor.

The marketplace is further gamed by “dog whistles” — code word replacements for overtly racist speech that still aim to stoke white resentment over the social mobility of people of color. When the sitting attorney general dismisses the ruling of a court because it resides on “an island in the Pacific,” he invents yet another way to signal which groups count in America and which ones don’t. And if a racist idea like this one ever flops in the marketplace, its author simply recalls it by saying he was joking.

A quarter-century ago when Delgado and Stefancic published their theory of the empathic fallacy, they speculated that the infamous Willie Horton ad tipped a presidential election because voters could not view the ad objectively. We now know that racism was the primary motivation for voters who put Donald Trump in the White House. We know that the best ideas of Gold Star father Khizr Khan at the Democratic National Convention were no match for fearmongering rumors about refugees from Syria and immigrants from Mexico. We know that after almost 100 days of Trump’s presidency, only two percent of those who voted for him regret it. This might mean they don’t see his speech as racist or don’t care if it is.

If we argue that racist hate speech must be protected, we have to account for the empathic fallacy.

We can start by admitting that this position is based on the troubling belief that it is one’s right to be hateful — and not on the comforting belief that hate is a catalyst for racial justice in a “marketplace of ideas.” Better than ever, we know how specious that logic is. We can understand that student protesters may not, in fact, long for their First Amendment rights should the tables turn on them. Law professor Charles Lawrence has argued that civil rights activists in the sixties achieved substantive gains only when they exceeded the acceptable bounds of the First Amendment, only when they disrupted “business as usual.”

Racist hate speech has come to emblemize free speech protections because the parties it injures lack social power. Students of color are expected to endure insults to their identities at the same time that celebrities win multi-million dollar defamation settlements and media companies scrupulously guard their intellectual property against plagiarism.

The belief that more speech is the remedy for “bad” speech can be a principled stance. But for the stance to be principled, it must account for why the target of racist hate speech is less deserving of exemption than, say, the millionaire with a reputation to protect from libel, or the community flooded with sexually-explicit material, or the deep state with a dark secret. Some exemptions make good sense. But does an obscene photograph of an adult that “lacks serious literary, artistic, political, or scientific value” (as defined in Miller v. California, the current law of the land regarding obscenity) really do more harm than a lecture promoting white supremacy?

American society fixates on antiracist protest when debating the First Amendment for the same reason it fixates on race when debating affirmative action: because of the perception that people of color are somehow undeserving of special privileges.

Yet it was supporting the rights of people of color that got Desiree Fairooz arrested in January for laughing during the Senate confirmation hearing of then-attorney general nominee Jeff Sessions. This week, the Department of Justice moved forward with her prosecution, along with those of two men who had mocked Sessions with fake Ku Klux Klan robes. In March, the Human Rights Council of the UN published a letter expressing alarm at the number of legislative efforts criminalizing peaceful assembly and expression in the US.

Powerful interests will find their way around the First Amendment to protect the status quo against antiracist protest. Asking student protesters to tolerate racist hate speech is to ask them to trust in free speech laws that have historically exempted the powerful and punished the vulnerable. When it comes to racism, the “marketplace of ideas” is not laissez-faire and never was.

Source: Hate Speech And The Misnomer Of ‘The Marketplace Of Ideas’ : Code Switch : NPR

Montreal mosque facing calls for investigation after imam preaches on anti-Semitic conspiracy theories

Disturbing.

The prayer leader in question has been suspended and his remarks denounced by the NCCM, but local mosque authorities, like any local religious authorities, need to ensure that prayer leaders do not engage in hate speech:

A Montreal mosque where an imam had prayed for Jews to be killed “one by one” is facing fresh calls for an investigation after more videos surfaced online showing anti-Semitic preaching.

The Middle East Media Research Institute released a video on Tuesday of sermons in which an imam at the Al Andalous Islamic Centre conveyed conspiracy theories about Jews, their history and their origins.

Sheikh Wael Al-Ghitawi is shown in the video clips claiming that Jews were “people who slayed the prophets, shed their blood and cursed the Lord,” reported MEMRI, which translated the Friday sermons.

The imam went on to say Jews were the descendants of “Turkish mongols” and had been “punished by Allah,” who made them “wander in the land.” He further said that Jews had no historical ties to Jerusalem or Palestine.

The view conveyed by the imam has typically been used to deny that Jews have a connection to the land of Israel, said Rabbi Reuben Poupko, co-chair of the Quebec branch of the Centre for Israel and Jewish Affairs.

“This is a bizarre strain of radical propaganda. It appears in the writings of Hamas and other groups like it and claims to debunk Jewish history,” said the rabbi, who added that it was “unseemly” to use a religious service to propagate hate.

He said he did not believe such views, as well as the “deeply troubling” earlier calls to violence, were supported by the broader Muslim community “but its presence in this mosque needs to be investigated.”

The videos were posted on YouTube in November 2014. The centre was already in the spotlight over an August 2014 video that showed an imam asking Allah to “destroy the accursed Jews,” and “kill them one by one.”

In a press release last week concerning the August 2014 video, the mosque administration blamed “clumsy and unacceptable phrasing” by a substitute imam, whose wording was “tainted by an abusive generalization.” The mosque could not be reached for comment about the most recent video.

“If you examine the annals of history you will see that the Jews do not have any historical right to Palestine,” Al-Ghitawi said in the latest video. He claimed there was “not a single Jew in Jerusalem and Palestine” for lengthy periods.

“Jerusalem is Arabic and Islamic,” he said at a separate sermon. “It is our land, the land of our fathers and forefathers. We are the people most entitled to it. We will not forsake a single inch of this land.”

On Monday the National Council of Canadian Muslims denounced the “incendiary speech” in the earlier Al-Andalous video, as well as a 2016 sermon at a Toronto mosque about the “filth of the Jews.”

The Muslim Association of Canada, which is affiliated with the Toronto mosque, has apologized and said it had suspended the prayer leader in question and launched an internal investigation into the incident.