Earlier this week Google said it would no longer allow political advertisers to “microtarget” voters based on browsing data or other factors.
Analysts say Facebook has come under increasing pressure to follow suit.
The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.
“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.
What did Baron Cohen say?
Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg, who in October defended his company’s position not to ban political adverts that contain falsehoods.
“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.
“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”
Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.
“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.
The International Grand Committee on Disinformation and Fake News was told that the business model adopted by social networks made “manipulation profitable”.
An interesting account of the social media trail and the actors involved:
Imagine scrolling through Facebook when you come across this headline: “Canada Moves to Ban Christians from Demonstrating in Public Under New Anti-Hate Proposal.” If you think it reads too shocking and absurd to be true, that’s because it is.
But this exact headline appeared atop a story that has been shared more than 16,000 times online since it was published in May, according to social media tool CrowdTangle. The federal government and Justin Trudeau, who is pictured in the story, are not seeking to ban Christians from demonstrating. In fact, the bill the story is based on was introduced in the Ontario legislature, by a Conservative MPP, and never made it past a second reading.
Incorrect and misleading content is common on social media, but it’s not always obvious where it originates. To learn more, CBC News tracked this particular example back through time on social media to uncover where it came from and how it evolved over time.
March 20: Private member’s bill introduced in Ontario
In this case, it all started with a bill. In March, Roman Baber, a freshman member of the Ontario provincial legislature, introduced his first private member’s bill. Had he known how badly the bill would be misconstrued online, he might have chosen something else, he later told CBC News.
“I expected that people would understand what prompted the bill, as a proud member of the Jewish community who’s been subjected to repeated demonstrations at Queen’s Park by certain groups that were clearly promoting hate,” said Baber, Progressive Conservative member for York Centre.
The bill was simple. It sought to ban any demonstrations on the grounds of Queen’s Park, where Ontario’s provincial legislature is located, that promote hate speech or incite violence. Baber said the bill was prompted by previous demonstrations that occurred at the legislature grounds.
“In 2017, we saw a demonstration that called for the shooting of Israelis. We saw a demonstration that called for a bus bombing and murder of innocent civilians,” he said.
The bill went through two readings at Queen’s Park and was punted to the standing committee on justice, where it’s languished since.
March 27: Canadian Jewish News covers story
At first, the bill garnered modest attention online. The Canadian Jewish News ran a straightforward report on the bill that included an interview with Baber shortly after he first introduced it. It was shared only a handful of times.
But a few weeks after the second reading, the bill drew the attention of LifeSiteNews, a socially conservative website. The story was shared 212 times, according to CrowdTangle, including to the Yellow Vests Canada Facebook group.
In its story, LifeSiteNews suggested that a bill banning hate speech might be interpreted to include demonstrations like those that opposed updates to the province’s sex education curriculum.
Baber said this isn’t the case, because hate speech is already defined and interpreted by legal precedent.
“The words ‘hate’ and ‘hate-promoting’ have been defined by the courts repeatedly through common law and is enforced in courts routinely,” Baber said. “So it would be a mistake to suggest that the bill expands the realm of hate speech.”
April 24: The Post Millennial invokes ‘free speech’ argument
But the idea stuck around. A few weeks later, on April 24, the Post Millennial posted a story labelled as news that argued the bill could infringe on free speech. The story was, however, clear that the bill was only in Ontario and had not yet moved beyond a second reading. It was shared over 200 times and drew nearly 400 interactions — likes, shares, comments and reactions — on social media, according to CrowdTangle.
May 6: Powerful emotions evoke response on social media
On May 6, a socially conservative women’s group called Real Women of Canada published a news release on the bill calling it “an attack on free speech.” In the release, the group argues that hate speech isn’t clearly defined in Canadian law, and draws on unrelated examples to claim that Christian demonstrations, in particular, could be targeted.
For example, the group pointed to the case of British Columbia’s Trinity Western University, a Christian post-secondary institution that used to require all students sign a covenant that prohibited sex outside of heterosexual marriage. A legal challenge around the covenant and a potential law school at Trinity Western occurred last year, but it had nothing to do with hate speech.
May 9: LifeSiteNews republishes news release
Though this news release itself was not widely shared, three days later it was republished by LifeSiteNews as an opinion piece. That post did better, drawing 5,500 shares and over 8,000 interactions, according to CrowdTangle. It also embellished the release with a dramatic image and a sensational headline: “Ontario Bill Threatens to Criminalize Christian Speech as ‘Hate.’”
At this point, the nugget of truth has been nearly entirely obscured by several layers of opinion and misrepresentation. For example, the bill doesn’t specifically cite Christian speech, but this headline suggests it does.
These tactics are used to elicit a strong response from readers and encourage them to share, according to Samantha Bradshaw, a researcher on the Computational Propaganda project at Oxford University.
“People like to consume this kind of content because it’s very emotional, and it gets us feeling certain things: anger, frustration, anxiety, fear,” Bradshaw said. “These are all very powerful emotions that get people sharing and consuming content.”
May 11: Big League Politics publishes sensational inaccuracies
That framing on LifeSiteNews caught the attention of a major U.S. publication known for spreading conspiracy theories and misinformation: Big League Politics. On May 11, the site published a story that cited the LifeSiteNews story heavily.
The headline and image make it seem like Trudeau’s government has introduced legislation that would specifically prohibit Christians from demonstrating anywhere in the country, a far cry from the truth.
While the story provides a few facts, such as that the bill was introduced in Ontario, much of it is incorrect. For example, in the lead sentence, the writer claimed the bill would “criminalize public displays by Christians deemed hateful to Muslims, the LGBT community and other victim groups designated by the left.”
The disinformation and alarmist headline proved successful: the Big League Politics version of the story was shared more than 16,000 times, drew more than 26,000 interactions and continued to circulate online for over two weeks.
This evolution is a common occurrence. Disinformation is often based on a nugget of truth that gets buried under layers of emotionally-charged language and opinion. Here, that nugget of truth was a private member’s bill introduced in the Ontario legislature. But that fact was gradually churned through an online network of spin until it was unrecognizable in the final product.
“That is definitely something that we see often: taking little truths and stretching them, misreporting them or implementing commentary and treating someone’s opinion about what happened as news,” Bradshaw said. “The incremental changes that we see in these stories and these narratives is something very typical of normal disinformation campaigns.”
Bradshaw said even though disinformation is only a small portion of the content online, it can have an outsized impact on our attention. With that in mind, she said it’s partly up to readers to think critically about what they’re reading and sharing online.
“At the end of the day, democracy is really hard work,” Bradshaw said. “It’s up to us to put in that time and effort to fact check our information, to look at other sources, to look at the other side of the argument and to weigh and debate and discuss.”
In the face of criticism that Facebook is not doing enough to combat extremist messaging, the company likes to say that its automated systems remove the vast majority of prohibited content glorifying the Islamic State group and al-Qaida before it’s reported.
But a whistleblower’s complaint shows that Facebook itself has inadvertently provided the two extremist groups with a networking and recruitment tool by producing dozens of pages in their names.
The social networking company appears to have made little progress on the issue in the four months since The Associated Press detailed how pages that Facebook auto-generates for businesses are aiding Middle East extremists and white supremacists in the United States.
On Wednesday, U.S. senators on the Committee on Commerce, Science, and Transportation will be questioning representatives from social media companies, including Monika Bickert, who heads Facebook’s efforts to stem extremist messaging.
The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week. The filing obtained by the AP identifies almost 200 auto-generated pages — some for businesses, others for schools or other categories — that directly reference the Islamic State group and dozens more representing al-Qaida and other known groups. One page listed as a “political ideology” is titled “I love Islamic state.” It features an IS logo inside the outlines of Facebook’s famous thumbs-up icon.
In response to a request for comment, a Facebook spokesperson told the AP: “Our priority is detecting and removing content posted by people that violates our policy against dangerous individuals and organizations to stay ahead of bad actors. Auto-generated pages are not like normal Facebook pages as people can’t comment or post on them and we remove any that violate our policies. While we cannot catch every one, we remain vigilant in this effort.”
Facebook has a number of functions that auto-generate pages from content posted by users. The updated complaint scrutinizes one function that is meant to help business networking. It scrapes employment information from users’ pages to create pages for businesses. In this case, it may be helping the extremist groups because it allows users to like the pages, potentially providing a list of sympathizers for recruiters.
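To make that mechanism concrete, here is a minimal sketch of how such auto-generation could work, assuming profiles expose a free-text employer field; the field names, sample data and logic are illustrative assumptions, not Facebook's actual implementation.

```python
# Illustrative sketch only: turning free-text employer fields on user profiles
# into auto-generated "business" pages. Names and data are hypothetical.
from collections import defaultdict

user_profiles = [
    {"user_id": 1, "employer": "Acme Bakery"},
    {"user_id": 2, "employer": "Acme Bakery"},
    {"user_id": 3, "employer": "Example Extremist Org"},  # unvetted input becomes a page too
]

def auto_generate_pages(profiles):
    """Create one stub page per distinct employer string, collecting the users who listed it."""
    pages = defaultdict(set)
    for profile in profiles:
        employer = profile.get("employer")
        if employer:
            # No vetting of the employer name happens here, which is the crux of
            # the complaint: any string, including a group's name, gets a page,
            # and the users who list it are gathered in one place.
            pages[employer].add(profile["user_id"])
    return dict(pages)

for page_name, associated_users in auto_generate_pages(user_profiles).items():
    print(page_name, "-", len(associated_users), "associated users")
```

The point of the sketch is the absence of any filter between user-entered text and page creation, which is what lets a prohibited organization's name end up as a likeable page.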
The new filing also found that users’ pages promoting extremist groups remain easy to find with simple searches using their names. They uncovered one page for “Mohammed Atta” with an iconic photo of one of the al-Qaida adherents, who was a hijacker in the Sept. 11 attacks. The page lists the user’s work as “Al Qaidah” and education as “University Master Bin Laden” and “School Terrorist Afghanistan.”
Facebook has been working to limit the spread of extremist material on its service, so far with mixed success. In March, it expanded its definition of prohibited content to include U.S. white nationalist and white separatist material as well as content from international extremist groups. It says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global extremist groups like IS and al-Qaida.
It also expanded its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate. It’s unclear, though, how well enforcement works if the company is still having trouble ridding its platform of well-known extremist organizations’ supporters.
But as the report shows, plenty of material gets through the cracks — and gets auto-generated.
The AP story in May highlighted the auto-generation problem, but the new content identified in the report suggests that Facebook has not solved it.
The report also says that researchers found that many of the pages referenced in the AP report were removed more than six weeks later, on June 25, the day before Bickert was questioned at another congressional hearing.
The issue was flagged in the initial SEC complaint filed by the center’s executive director, John Kostyack, that alleges the social media company has exaggerated its success combatting extremist messaging.
“Facebook would like us to believe that its magical algorithms are somehow scrubbing its website of extremist content,” Kostyack said. “Yet those very same algorithms are auto-generating pages with titles like ‘I Love Islamic State,’ which are ideal for terrorists to use for networking and recruiting.”
In the summer of 2017, signs that seemed engineered to stoke anti-Muslim sentiment first appeared in a city park in Pitt Meadows, B.C.
“Many Muslims live in this area and dogs are considered filthy in Islam,” said the signs, which included the city’s logo. “Please keep your dogs on a leash and away from the Muslims who live in this community.”
After a spate of media coverage questioning their authenticity — and a statement from Pitt Meadows Mayor John Becker that the city didn’t make them — the signs were discredited and largely forgotten.
But almost two years later, a mix of right-wing American websites, Russian state media, and Canadian Facebook groups have made them go viral again, unleashing hateful comments and claims that Muslims are trying to “colonize” Western society.
The revival of this story shows how false, even discredited claims about Muslims in Canada find an eager audience in Facebook groups and on websites originating on both sides of the border, and how easily misinformation can be recirculated as the federal election approaches.
“Many people who harbour (or have been encouraged to hold) anti-Muslim feelings are looking for information to confirm their view that these people aren’t like them. This story plays into this,” Danah Boyd, a principal researcher at Microsoft and the founder of Data & Society, a non-profit research institute that studies disinformation and media manipulation, wrote in an email.
Boyd said a dubious story like this keeps recirculating “because the underlying fear and hate-oriented sentiment hasn’t faded.”
Daniel Funke, a reporter covering misinformation for the International Fact-Checking Network, said old stories with anti-Muslim aspects also recirculated after the recent fire at the Notre Dame cathedral in Paris.
“Social media users took real newspaper articles out of their original context, often years after they were first published, to falsely claim that the culprits behind the fire were Muslims,” he said. “The same thing has happened with health misinformation, when real news stories about product recalls or disease outbreaks go viral years after they were originally published.”
The signs about dogs first appeared in Hoffman Park in September 2017, and were designed to look official. They carried the logo of the city of Pitt Meadows and that of the Council on American-Islamic Relations (CAIR), a U.S. Muslim advocacy organization.
Media outlets reported on them after an image of one sign was shared online. Many noted that the city logo was falsely used and there was no evidence that actual Muslims were behind the messages.
A representative for CAIR told CBC News in 2017 that his organization had no involvement in the B.C. signs, but he did have an idea about why they were created.
“We see this on occasion where people try to be kind of an agent provocateur and use these kinds of messages to promote hostility towards Muslims and Islam,” Ibrahim Hooper said in an interview with CBC. “Sometimes people use the direct bigoted approach — we see that all too often in America and Canada, unfortunately — but other times they try and be a little more sophisticated or subtle.”
The Muslims of Vancouver Facebook page had a similar view, labelling it a case of “Bigots attempting to incite resentment and hatred towards Muslims.”
After the initial frenzy of articles about the signs, the story died down — until last week, when an American conservative website called The Stream published a story. It cited a 2017 report from CTV Vancouver, without noting that the incident was almost two years old.
“No Dogs: It Offends the Muslims,” read the headline on a story that cited the signs as an example of Muslims not integrating into Western society.
“That sign in the Canadian dog park tells us much that we’d rather not think about. That kind of sign notifies you when your country has been colonized,” John Zmirak wrote.
Zmirak’s post was soon summarized by state-funded Russian website Sputnik, and picked up by American conservative site Red State. Writing in Red State, Elizabeth Vaughn said “Muslims cannot expect Americans or Brits or anybody else to change their ways of life to accommodate them.” Conservative commentator Ann Coulter tweeted the Red State link to her 2.14 million followers, and the story was also cited by the right-wing website WND.
The Stream and Red State did not respond to emailed requests for comment. A spokesperson for Sputnik said its story made it clear to readers that the original incident happened in 2017. “I would like to stress that Sputnik has never mentioned that the flyers in question were created by Muslims, Sputnik just reported facts and indicated the sources,” Dmitry Borschevsky wrote in an email.
Nonetheless, the three stories generated more than 60,000 shares, reactions and comments on Facebook in less than a week. Some of that engagement also came thanks to right-wing Canadian Facebook groups and pages, bringing the dubious tale back to its original Canadian audience.
“Dogs can pick out evil! That’s why Death Cult adherents despise these lil canine truth detectors!” wrote one person in the “Canadians 1st Movement” Facebook group after seeing the Red State link.
“How about no muslims!” wrote one person after the Sputnik story was shared in the Canadian Combat Coalition National Facebook group. Another commenter in the group said he’d prefer to see Muslims “put down” instead of dogs.
On the page of anti-Muslim organization Pegida Canada, one commenter wrote, “I will take any dog over these animals.”
Those reactions were likely intended by whoever created the signs, according to Boyd, and it wasn’t the first incident of this type. In July 2016, flyers appeared in Manchester, England that asked residents to “limit the presence of dogs in the public sphere” out of sensitivity to the area’s “large Muslim community.”
The origin of the flyers was equally dubious, with evidence suggesting the idea may have been part of a campaign hatched on the anonymous message board 4chan. That’s where internet trolls often plan online harassment and disinformation campaigns aimed at generating outrage and media coverage.
“At this point, actors in 4chan have a lot of different motives, but there is no doubt that there are some who hold white nationalist attitudes and espouse racist anti-Muslim views,” Boyd said.
“There are also trolls who relish playing on political antagonisms to increase hostility and polarization. At the end of the day, the motivation doesn’t matter as much as the impact. And the impact is clear: these posters — and the conspiracists who amplify them — help intensify anti-Muslim sentiment in a way that is destructive to democracy.”
This week, unlike YouTube, Facebook decided to keep up a video deliberately and maliciously doctored to make it appear as if Speaker Nancy Pelosi was drunk or perhaps crazy. She was not. She was instead the victim of an obvious dirty trick by a dubious outfit with a Facebook page called Politics WatchDog.
The social media giant deemed the video a hoax and demoted its distribution, but the half-measure clearly didn’t work. The video ran wild across the system.
Facebook’s product policy and counterterrorism executive, Monika Bickert, drew the short straw and had to try to come up with a cogent justification for why Facebook was helping spew ugly political propaganda.
“We think it’s important for people to make their own informed choice for what to believe,” she said in an interview with CNN’s Anderson Cooper. “Our job is to make sure we are getting them accurate information.”
This is ridiculous. The only thing the incident shows is how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the key distributor of news on the planet.
Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.
No other media could get away with spreading anything like this because they lack the immunity protection that Facebook and other tech companies enjoy under Section 230 of the Communications Decency Act. Section 230 was intended to spur innovation and encourage start-ups. Now it’s a shield to protect behemoths from any sensible rules.
Mr. Cooper must be less accustomed than some of us to the way Silicon Valley tortures the concept of free speech until it screams for mercy, because Ms. Bickert’s answer left him looking incredulous.
By conflating censorship with the responsible maintenance of its platforms, and by providing “rules” that are really just capricious decisions by a small coterie of the rich and powerful, Facebook and others have created a free-for-all with no consistent philosophy.
The Chewbacca mom video is sure fun, and so are New York Times articles, because classy journalism looks good on the platform. But the toxic stew of propaganda and fake news that is allowed to pour into the public river without filters? Also A-O.K., in the clearly underdeveloped mind of Facebook chief executive Mark Zuckerberg, who has been — try as he might with great earnestness — guiding his ship into dangerous waters.
Don’t believe me? Listen to what came out of his mouth during a podcast interview with me less than a year ago, a comment that in hindsight makes his non-action against the Pelosi video look completely inevitable. We had been talking about the vile Alex Jones, whom Mr. Zuckerberg had declined to remove from Facebook despite his having violated many of its policies. (This month Facebook finally did bar him from the platform). For some reason, presumably to make a greater point, he shifted the conversation to the Holocaust. It was a mistake, to say the least.
“I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive,” Mr. Zuckerberg said. “But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.”
I was shocked, but I wanted to hear more, so I said briefly: “In the case of Holocaust deniers, they might be, but go ahead.”
Did he ever: “It’s hard to impugn intent and to understand the intent. I just think, as abhorrent as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public figures we respect do too, and I just don’t think that it is the right thing to say, ‘We’re going to take someone off the platform if they get things wrong, even multiple times.’”
Here was the internal dialogue in my head when he uttered this senseless jumble of words: What? What? What? Mr. Zuckerberg’s own pile of dumb mistakes was the same thing as anti-Semitic lies? The same as the calculatedly demented rantings of Mr. Jones? The same as the wily manipulations of Russia’s Internet Research Agency?
It was at that moment that I knew that Facebook was lost. And it’s been wandering ever since from one ethical quandary to the next. From the outside, the company can seem lazy and cynical, out to make money at the expense of just about anything or anyone, including Speaker Pelosi or an informed national electorate. It feels political too, as if its executives are making calculations based on nothing but what will keep the company free from trouble in these deeply partisan times.
And yet Facebook does remove content, such as posts it determines are a threat to public safety or from fake accounts.
Ms. Bickert, whom I have interviewed too and who certainly has made an effort to tame the platform, gamely tried to make this point to Mr. Cooper. “We aren’t in the news business. We’re in the social media business,” she said plaintively, as if that distinction could erase a thousand crimes taking place on the platform every day.
Not making these hard choices won’t work: The many indignities of being a Facebook user are making the platform a worse and worse place to be. So far, that has yet to infect the business itself, which is making money and continues to grow. But without a steadier hand at the wheel, Facebook cannot outrun a simple fact: It’s still Fakebook, and we already know how that story will end. Badly.
A good reminder that technology reflects both the people who develop it and those who use it, and that informed, meaningful conversation and dialogue are hard:
In ancient Egypt there lived a wise king named Thamus. One day he was visited by a clever god called Theuth.
Theuth was an inventor of many useful things: arithmetic and geometry; astronomy and dice. But his greatest discovery, so he believed, “was the use of letters.” And it was this invention that Theuth was most eager to share with King Thamus.
The art of writing, Theuth said, “will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit.”
But Thamus rebuffed him. “O most ingenious Theuth,” he said, “the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them.”
The king continued: “For this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember themselves.”
Written words, Thamus concluded, “give your disciples not truth, but only the semblance of truth; they will be hearers of many things but will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”
Now we learn that the company also sought to cover up the extent of Russian meddling on its platform — while quietly seeding invidious stories against its business rivals and critics like George Soros. Facebook disputes some of the claims made by The Times, but it’s fair to say the company’s reputation currently stands somewhere between that of Philip Morris and Purdue Pharma in the public toxicity department.
To which one can only say: About time.
The story of the wildly exaggerated promises and damaging unintended consequences of technology isn’t exactly a new one. The real marvel is that it constantly seems to surprise us. Why?
Part of the reason is that we tend to forget that technology is only as good as the people who use it. We want it to elevate us; we tend to degrade it. In a better world, Twitter might have been a digital billboard of ideas and conversation ennobling the public square. We’ve turned it into the open cesspool of the American mind. Facebook was supposed to serve as a platform for enhanced human interaction, not a tool for the lonely to burrow more deeply into their own isolation.
It’s also true that Facebook and other Silicon Valley giants have sold themselves not so much as profit-seeking companies but as ideal-pursuing movements. Facebook’s mission is “to make the world more open and connected.” Tesla’s goal is “to accelerate the world’s transition to sustainable energy.” Google’s mantra was “Don’t Be Evil,” at least until it quietly dropped the slogan earlier this year.
But the deeper reason that technology so often disappoints and betrays us is that it promises to make easy things that, by their intrinsic nature, have to be hard.
Tweeting and trolling are easy. Mastering the arts of conversation and measured debate is hard. Texting is easy. Writing a proper letter is hard. Looking stuff up on Google is easy. Knowing what to search for in the first place is hard. Having a thousand friends on Facebook is easy. Maintaining six or seven close adult friendships over the space of many years is hard. Swiping right on Tinder is easy. Finding love — and staying in it — is hard.
That’s what Socrates (or Thamus) means when he deprecates the written word: It gives us an out. It creates the illusion that we can remain informed, and connected, even as we are spared the burdens of attentiveness, presence of mind and memory. That may seem quaint today. But how many of our personal, professional or national problems might be solved if we desisted from depending on shortcuts?
To read The Times’s account of how Facebook dealt with its problems is to be struck by how desperately Mark Zuckerberg and Sheryl Sandberg sought to massage and finesse — with consultants, lobbyists and technological patches — what amounted to a daunting if simple crisis of trust. As with love and grammar, acquiring and maintaining trust is hard. There are no workarounds.
Start over, Facebook. Do the basics. Stop pretending that you’re about transforming the state of the world. Work harder to operate ethically, openly and responsibly. Accept that the work will take time. Log off Facebook for a weekend. Read an ancient book instead.
A good, long and interesting read, highlighting a number of the issues and practical aspects involved:
Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: “Everybody without a badge is a potential spy!”
Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week.
They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.
This is a deletion center, one of Facebook’s largest, with more than 1,200 content moderators. They are cleaning up content — from terrorist propaganda to Nazi symbols to child abuse — that violates the law or the company’s community standards.
Germany, home to a tough new online hate speech law, has become a laboratory for one of the most pressing issues for governments today: how and whether to regulate the world’s biggest social network.
Around the world, Facebook and other social networking platforms are facing a backlash over their failure to safeguard privacy and to curb disinformation campaigns and the digital reach of hate groups.
In India, seven people were beaten to death after a false viral message on the Facebook subsidiary WhatsApp. In Myanmar, violence against the Rohingya minority was fueled, in part, by misinformation spread on Facebook. In the United States, Congress called Mark Zuckerberg, Facebook’s chief executive, to testify about the company’s inability to protect its users’ privacy.
As the world confronts these rising forces, Europe, and Germany in particular, have emerged as the de facto regulators of the industry, exerting influence beyond their own borders. Berlin’s digital crackdown on hate speech, which took effect on Jan. 1, is being closely watched by other countries. And German officials are playing a major role behind one of Europe’s most aggressive moves to rein in technology companies, strict data privacy rules that take effect across the European Union on May 25 and are prompting global changes.
“For them, data is the raw material that makes them money,” said Gerd Billen, secretary of state in Germany’s Ministry of Justice and Consumer Protection. “For us, data protection is a fundamental right that underpins our democratic institutions.”
Germany’s troubled history has placed it on the front line of a modern tug-of-war between democracies and digital platforms.
In the country of the Holocaust, the commitment against hate speech is as fierce as the commitment to free speech. Hitler’s “Mein Kampf” is only available in an annotated version. Swastikas are illegal. Inciting hatred is punishable by up to five years in jail.
But banned posts, pictures and videos have routinely lingered on Facebook and other social media platforms. Now companies that systematically fail to remove “obviously illegal” content within 24 hours face fines of up to 50 million euros.
The deletion center predates the legislation, but its efforts have taken on new urgency. Every day content moderators in Berlin, hired by a third-party firm and working exclusively on Facebook, pore over thousands of posts flagged by users as upsetting or potentially illegal and make a judgment: Ignore, delete or, in particularly tricky cases, “escalate” to a global team of Facebook lawyers with expertise in German regulation.
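The workflow described above amounts to a three-way triage: ignore, delete, or escalate to legal specialists. Purely as an illustration, a toy model of that routing might look like the sketch below; the categories and rules are assumptions for the sake of the example, not Arvato's or Facebook's actual system, which rests on trained human judgment.

```python
# Toy model of the ignore / delete / escalate triage described in the article.
# The labels and rules are hypothetical; real decisions are made by human
# moderators against Facebook's community standards and German law.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    IGNORE = "ignore"
    DELETE = "delete"
    ESCALATE = "escalate"  # forward to the global team of Facebook lawyers

@dataclass
class FlaggedPost:
    text: str
    clearly_violates_standards: bool         # e.g. Holocaust denial, direct incitement
    possibly_illegal_under_german_law: bool  # borderline cases needing legal review

def triage(post: FlaggedPost) -> Decision:
    if post.clearly_violates_standards:
        return Decision.DELETE
    if post.possibly_illegal_under_german_law:
        return Decision.ESCALATE
    return Decision.IGNORE

# A borderline post goes to the escalation queue rather than being deleted or ignored outright.
print(triage(FlaggedPost("borderline political insult", False, True)))
```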
Some decisions to delete are easy. Posts about Holocaust denial and genocidal rants against particular groups like refugees are obvious ones for taking down.
Others are less so. On Dec. 31, the day before the new law took effect, a far-right lawmaker reacted to an Arabic New Year’s tweet from the Cologne police, accusing them of appeasing “barbaric, Muslim, gang-raping groups of men.”
The request to block a screenshot of the lawmaker’s post wound up in the queue of Nils, a 35-year-old agent in the Berlin deletion center. His judgment was to let it stand. A colleague thought it should come down. Ultimately, the post was sent to lawyers in Dublin, London, Silicon Valley and Hamburg. By the afternoon it had been deleted, prompting a storm of criticism about the new legislation, known here as the “Facebook Law.”
“A lot of stuff is clear-cut,” Nils said. Facebook, citing his safety, did not allow him to give his surname. “But then there is the borderline stuff.”
Complicated cases have raised concerns that the threat of the new rules’ steep fines and 24-hour window for making decisions encourage “over-blocking” by companies, a sort of defensive censorship of content that is not actually illegal.
The far-right Alternative for Germany, a noisy and prolific user of social media, has been quick to proclaim “the end of free speech.” Human rights organizations have warned that the legislation was inspiring authoritarian governments to copy it.
Other people argue that the law simply gives a private company too much authority to decide what constitutes illegal hate speech in a democracy, an argument that Facebook, which favored voluntary guidelines, made against the law.
“It is perfectly appropriate for the German government to set standards,” said Elliot Schrage, Facebook’s vice president of communications and public policy. “But we think it’s a bad idea for the German government to outsource the decision of what is lawful and what is not.”
Richard Allan, Facebook’s vice president for public policy in Europe and the leader of the company’s lobbying effort against the German legislation, put it more simply: “We don’t want to be the arbiters of free speech.”
German officials counter that social media platforms are the arbiters anyway.
It all boils down to one question, said Mr. Billen, who helped draw up the new legislation: “Who is sovereign? Parliament or Facebook?”
Learning From (German) History
When Nils applied for a job at the deletion center, the first question the recruiter asked him was: “Do you know what you will see here?”
Nils has seen it all. Child torture. Mutilations. Suicides. Even murder: He once saw a video of a man cutting a heart out of a living human being.
And then there is hate.
“You see all the ugliness of the world here,” Nils said. “Everyone is against everyone else. Everyone is complaining about that other group. And everyone is saying the same horrible things.”
The issue is deeply personal for Nils. He has a 4-year-old daughter. “I’m also doing this for her,” he said.
The center here is run by Arvato, a German service provider owned by the conglomerate Bertelsmann. The agents have a broad purview, reviewing content from a half-dozen countries. Those with a focus on Germany must know Facebook’s community standards and, as of January, the basics of German hate speech and defamation law.
“Two agents looking at the same post should come up with the same decision,” says Karsten König, who manages Arvato’s partnership with Facebook.
The Berlin center opened with 200 employees in 2015, as Germany was opening its doors to hundreds of thousands of migrants.
Anas Modamani, a Syrian refugee, posed with Chancellor Angela Merkel and posted the image on Facebook. It instantly became a symbol of her decision to allow in hundreds of thousands of migrants.
Soon it also became a symbol of the backlash.
The image showed up in false reports linking Mr. Modamani to terrorist attacks in Brussels and on a Christmas market in Berlin. He sought an injunction against Facebook to stop such posts from being shared but eventually lost.
The arrival of nearly 1.4 million migrants in Germany has tested the country’s resolve to keep a tight lid on hate speech. The law on illegal speech was long-established but enforcement in the digital realm was scattershot before the new legislation.
Posts calling refugees rapists, Neanderthals and scum survived for weeks, according to jugendschutz.net, a publicly funded internet safety organization. Many were never taken down. Researchers at jugendschutz.net reported a tripling in observed hate speech in the second half of 2015.
Mr. Billen, the secretary of state in charge of the new law, was alarmed. In September 2015, he convened executives from Facebook and other social media sites at the justice ministry, a building that was once the epicenter of state propaganda for the Communist East. A task force for fighting hate speech was created. A couple of months later, Facebook and other companies signed a joint declaration, promising to “examine flagged content and block or delete the majority of illegal posts within 24 hours.”
But the problem did not go away. Over the 15 months that followed, independent researchers hired by the government twice posed as ordinary users and flagged illegal hate speech. During the tests, they found that Facebook had deleted 46 percent and 39 percent of the flagged content, respectively.
“They knew that they were a platform for criminal behavior and for calls to commit criminal acts, but they presented themselves to us as a wolf in sheep skin,” said Mr. Billen, a poker-faced civil servant with stern black frames on his glasses.
By March 2017, the German government had lost patience and started drafting legislation. The Network Enforcement Law was born, setting out 21 types of content that are “manifestly illegal” and requiring social media platforms to act quickly.
Officials say early indications suggest the rules have served their purpose. Facebook’s performance on removing illegal hate speech in Germany rose to 100 percent over the past year, according to the European Union’s latest spot check.
Platforms must publish biannual reports on their efforts. The first is expected in July.
At Facebook’s Berlin offices, Mr. Allan acknowledged that under the earlier voluntary agreement, the company had not acted decisively enough at first.
“It was too little and it was too slow,” he said. But, he added, “that has changed.”
He cited another independent report for the European Commission from last summer that showed Facebook was by then removing 80 percent of hate speech posts in Germany.
The reason for the improvement was not German legislation, he said, but a voluntary code of conduct with the European Union. Facebook’s results have improved in all European countries, not just in Germany, Mr. Allan said.
“There was no need for legislation,” he said.
Mr. Billen disagrees.
“They could have prevented the law,” he said. YouTube scored 90 percent in last year’s monitoring exercise. If other platforms had done the same, there would be no law today, he said.
A Regulatory Dilemma
Germany’s hard-line approach to hate speech and data privacy once made it an outlier in Europe. The country’s stance is now more mainstream, an evolution seen in the justice commissioner in Brussels.
Vera Jourova, the justice commissioner, deleted her Facebook account in 2015 because she could not stand the hate anymore.
“It felt good,” she said about pressing the button. She added: “It felt like taking back control.”
But Ms. Jourova, who grew up behind the Iron Curtain in what is now the Czech Republic, had long been skeptical about governments legislating any aspect of free speech, including hate speech. Her father lost his job after making a disparaging comment about the Soviet invasion in 1968, barring her from going to university until she married and took her husband’s name.
“I lived half my life in the atmosphere driven by Soviet propaganda,” she said. “The golden principle was: If you repeat a lie a hundred times it becomes the truth.”
When Germany started considering a law, she instead preferred a voluntary code of conduct. In 2016, platforms like Facebook promised European users easy reporting tools and committed to removing most illegal posts brought to their attention within 24 hours.
The approach worked well enough, Ms. Jourova said. It was also the quickest way to act because the 28 member states in the European Union differed so much about whether and how to legislate.
But the stance of many governments toward Facebook has hardened since it emerged that the consulting firm Cambridge Analytica had harvested the personal data of up to 87 million users. Representatives of the European Parliament have asked Mr. Zuckerberg to come to Brussels to “clarify issues related to the use of personal data” and he has agreed to come as soon as next week.
Ms. Jourova, whose job is to protect the data of over 500 million Europeans, has hardened her stance as well.
“Our current system relies on trust and this did nothing to improve trust,” she said. “The question now is how do we continue?”
The European Commission is considering German-style legislation for online content related to terrorism, violent extremism and child pornography, including a provision that would impose fines on platforms that did not remove illegal content within an hour of being alerted to it.
Several countries — France, Israel, Italy, and Canada among them — have sent queries to the German government about the impact of the new hate speech law.
And Germany’s influence is evident in Europe’s new privacy regulation, known as the General Data Protection Regulation, or G.D.P.R. The rules give people control over how their information is collected and used.
Inspired in part by German data protection laws written in the 1980s, the regulation has been shaped by a number of prominent Germans. Ms. Jourova’s chief of staff, Renate Nikolay, is German, as is her predecessor’s chief of staff, Martin Selmayr, now the European Commission’s secretary general. The lawmaker in charge of the regulation in the European Parliament is German, too.
“We have built on the German tradition of data protection as a constitutional right and created the most modern piece of regulation of the digital economy,” Ms. Nikolay said.
“To succeed in the long term, companies need the trust of customers,” she said. “At the latest since Cambridge Analytica, it has become clear that data protection is not just some nutty European idea, but a matter of competitiveness.”
On March 26, Ms. Jourova wrote a letter — by post, not email — to Sheryl Sandberg, Facebook’s chief operating officer.
“Is there a need for stricter rules for platforms like those that exist for traditional media?” she asked.
“Is the data of Europeans affected by the current scandal?” she added, referring to the Cambridge Analytica episode. And, if so, “How do you plan to inform the user about this?”
She demanded a reply within two weeks, and she got one. Some 2.7 million Europeans were affected, Ms. Sandberg wrote.
But she never answered Ms. Jourova’s question on regulation.
“There is now a sense of urgency and the conviction that we are dealing with something very dangerous that may threaten the development of free democracies,” said Ms. Jourova, who is also trying to find ways to clamp down on fake news and disinformation campaigns.
“We want the tech giants to respect and follow our legislation,” she added. “We want them to show social responsibility both on data protection and on hate speech.”
So do many Facebook employees, Mr. Allan, the company executive, said.
“We employ very thoughtful and principled people,” he said. “They work here because they want to make the world a better place, so when an assumption is made that the product they work on is harming people it is impactful.”
“People have felt this criticism very deeply,” he said.
A Visual Onslaught
Nils works eight-hour shifts. On busy days, 1,500 user reports are in his queue. Other days, there are only 300. Some of his colleagues have nightmares about what they see.
Every so often someone breaks down. A mother recently left her desk in tears after watching a video of a child being sexually abused. A young man felt physically sick after seeing a video of a dog being tortured. The agents watch teenagers self-mutilating and girls recounting rape.
They have weekly group sessions with a psychologist and the trauma specialists on standby. In more serious cases, the center teams up with clinics in Berlin.
In the office, which is adorned with Facebook logos, fresh fruit is at the agents’ disposal in a small room where subdued colors and decorative moss growing on the walls are meant to calm fraying nerves.
To decompress, the agents sometimes report each other’s posts, not because they are controversial, but “just for a laugh,” said another agent, the son of a Lebanese refugee and an Arabic-speaker who has had to deal with content related to terrorism generally and the Islamic State specifically. By now, he said, images of “weird skin diseases” affected him more than those of a beheading. Nils finds sports injuries like breaking bones particularly disturbing.
There is a camaraderie in the office and a real sense of mission: Nils said the agents were proud to “help clean up the hate.”
The definition of hate is constantly evolving.
The agents, who initially take a three-week training course, get frequent refreshers. Their guidelines are revised to reflect the evolving nature of hate speech. Events change the meaning of words. New hashtags and online trends must be put in context.
“Slurs can become socialized,” Mr. Allan of Facebook explained.
“Refugee” became a group protected from the broad hate speech rules only in 2015. “Nafri” was a term used by the German police that year to describe North Africans who sexually harassed hundreds of women, attacking and, in some cases, raping them. Since then, Nafri has become a popular insult among the far-right.
Nils and his colleagues must determine whether hateful content is singling out an ethnic group or individuals.
That was the challenge with a message on Twitter that was later posted to Facebook as a screenshot by Beatrix von Storch, deputy floor leader of the far-right party, AfD.
“What the hell is wrong with this country?” Ms. von Storch wrote on Dec. 31. “Why is an official police account tweeting in Arabic?”
“Do you think that will appease the barbaric murdering Muslim group-raping gangs of men?” she continued.
A user reported the post as a violation of German law, and it landed in Nils’s queue. He initially decided to ignore the request because he felt Ms. von Storch was directing her insults at the men who had sexually assaulted women two years earlier.
Separately, a user reported the post as a violation of community standards. Another agent leaned toward deleting it, taking it as directed at Muslims in general.
They conferred with their “subject matter expert,” who escalated it to a team in Dublin.
For 24 hours, the post kept Facebook lawyers from Silicon Valley to Hamburg busy. The Dublin team decided that the post did not violate community standards but sent it on for legal assessment by outside lawyers hired by Facebook in Germany.
Within hours of news that the German police were opening a criminal investigation into Ms. von Storch over her comments, Facebook restricted access to the post. The user who reported the content was notified that it had been blocked for a violation of section 130 of the German criminal code, incitement to hatred. Ms. von Storch was notified as well.
In the first few days of the year, it looked like the platforms were erring on the side of censorship. On Jan. 2, a day after Ms. von Storch’s post was deleted, the satirical magazine Titanic quipped that she would be its new guest tweeter. Two of the magazine’s subsequent Twitter posts mocking her were deleted. When Titanic published them again, its account was temporarily suspended.
Since then, things have calmed down. And even Mr. Allan conceded: “The law has not materially changed the amount of content that is deleted.”
The data mapping showing the confluence between anti-Muslim, anti-immigrant, white nationalist, neo-Confederate and anti-government/militia groups is of interest.
I have always wanted more data on social media networks and what they say about integration, or the lack of it, between different ethnic communities, not just the standard demographic analysis of communities, and hopefully this analysis will stimulate such work. We need to look beyond the ‘physical enclave’ to include virtual ones:
…Researcher and professor of computer science at Elon University, Megan Squire, is conducting a large-scale, ongoing study of hate groups on social media platforms, including Facebook.
“Preliminary results from my research indicate that – not surprisingly – groups with broadly ‘nativist’ ideologies, including anti-Muslim and anti-immigrant views, have significant membership in common,” Squire said.
“But this social network analysis also shows that anti-Muslim ideologies in particular can serve as a bridge between mostly disconnected communities, such as between anti-government and white nationalist communities.”
To construct the graph below, Squire first classified Facebook groups into ideologies using SPLC’s descriptions. She then selected the groups having more than 10 members in common and used social network analysis software to reveal the underlying “co-membership” network.
Each node represents one Facebook group. Lines between the groups indicate that the groups have at least ten members in common. Groups are positioned on the graph according to how similar their membership is. Groups that are closer to the center have more in common with one another. Green nodes, representing anti-Muslim groups, can be seen throughout.
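As a rough illustration of the method described, a co-membership graph of this kind can be built with standard network-analysis tooling; the group names, membership sets and threshold in the sketch below are hypothetical stand-ins, not Squire's data or code.

```python
# Illustrative co-membership graph: groups become nodes, and an edge is drawn
# between two groups when they share at least THRESHOLD members.
from itertools import combinations
import networkx as nx

# Hypothetical membership lists: group name -> set of member IDs.
group_members = {
    "Anti-Muslim Group A": set(range(1, 15)),
    "Anti-Government Group B": set(range(5, 20)),
    "White Nationalist Group C": set(range(100, 110)),
}

THRESHOLD = 10  # minimum number of shared members needed to draw an edge

g = nx.Graph()
g.add_nodes_from(group_members)

for (name_a, members_a), (name_b, members_b) in combinations(group_members.items(), 2):
    shared = len(members_a & members_b)
    if shared >= THRESHOLD:
        g.add_edge(name_a, name_b, weight=shared)  # weight records the overlap size

for a, b, data in g.edges(data=True):
    print(f"{a} <-> {b}: {data['weight']} shared members")
```

A force-directed layout (for example, networkx's spring_layout) would then pull groups with heavily overlapping memberships toward one another, which is essentially the positioning the described graphic conveys.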
Anti-Muslim hate seems to be a unifying cause that hate groups can rally around and engage with on the platform. These findings are part of her ongoing research project studying far-right extremist beliefs through social media.
Is Facebook enabling anti-Muslim sentiment?
According to a 2017 Pew research study, 75 percent of Muslims polled believe there is a lot of discrimination against Muslims in the United States. Americans polled that same year by Pew rated Muslims at just 48 on a “feeling thermometer” where 0 is the coldest and 100 the warmest, the lowest rating among all religions. This disposition is well illustrated on Facebook, a platform from which Nazis and white nationalists are removed while anti-Muslim groups remain.
Many publications have highlighted the double standards that exist in policing various demographics. Like other tech companies, Facebook largely responds when there is a controversy that is reported on by a publication. ProPublica points to complaints made by many users and the Anti-Defamation League about a page called “Jewish Ritual Murder,” which Facebook didn’t respond to until ProPublica wrote about Facebook’s inaction.
Monika Bickert, who leads the global policy management team at Facebook, explained, “The policies do not always lead to perfect outcomes. That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”…
Facebook Inc said on Wednesday it was temporarily disabling the ability of advertisers on its social network to exclude racial groups from the intended audience of ads while it studies how the feature could be used to discriminate.
Facebook’s chief operating officer, Sheryl Sandberg, told African-American U.S. lawmakers in a letter that the company was determined to do better after a news report said Facebook had failed to block discriminatory ads.
The U.S.-based news organization ProPublica reported last week that, as part of an investigation, it had purchased discriminatory housing ads on Facebook and slipped them past the company’s review process, despite claims by Facebook months earlier that it was able to detect and block such ads.
“Until we can better ensure that our tools will not be used inappropriately, we are disabling the option that permits advertisers to exclude multicultural affinity segments from the audience for their ads,” Sandberg wrote in the letter to the Congressional Black Caucus, according to a copy posted online by ProPublica.
Under U.S. law, it is unlawful to publish certain types of ads if they indicate a preference based on race, religion, sex or other protected classifications.
The flaws in their business models keep on becoming more apparent:
If you saw ads on your Facebook feed showing an alternate reality where France and Germany were governed by Sharia law ahead of the 2016 elections, you’re not alone.
Facebook and Google helped advertising company Harris Media run the campaigns for their client, Secure America Now—a conservative, nonprofit advocacy group whose campaign “included a mix of anti-Hillary Clinton and anti-Islam messages,” notes Bloomberg.
According to Bloomberg’s account, Facebook and Google directly collaborated on the campaign, helping “target the ads to more efficiently reach the audiences.” Not only did the two tech giants compete for “millions in ad dollars,” but they also “worked closely” with the group on their ads throughout the 2016 election.
Voters in swing states saw a range of ads, including the faux tourism video that depicted French students being trained to fight for the caliphate, and the Mona Lisa covered in a burqa. Another ad linked Nevada Democratic Senate nominee Catherine Cortez Masto to terrorism, calling on viewers to “stop support of terrorism. Vote against Catherine Cortez Mastro,” and asking them to “vote to protect Nevada.”
Ads were optimized to target specific groups of people that they felt “could be swayed by the anti-refugee message.” And Facebook reportedly used its collaboration with Secure America Now as an opportunity to test new technology as well. Internal reports acquired by Bloomberg show that the ads were viewed millions of times on Facebook and Google.
This case distinguishes itself from that of Russian efforts to influence the 2016 election in that Google and Facebook directly assisted Secure America Now in its targeting of audiences. Of course, the two companies have worked with political groups on their advertising strategies in the past, but the extent and secretive nature of their assistance in this case is uncommon. And the content of the ads themselves reportedly left some Harris employees feeling “uneasy.”
Google and Facebook were not immediately available for comment.