What Does Facebook Consider Hate Speech? Take Our Quiz – The New York Times

Gives a good sense of the criteria Facebook uses, which allow some content that I would consider hate speech.

Take the quiz at the Times website to see the contrast between Facebook’s rulings, your own responses, and those of other NYT readers (spoiler alert: Facebook classified half of these as hate speech; I classified five of the six):

Have you ever seen a post on Facebook that you were surprised wasn’t removed as hate speech? Have you flagged a message as offensive or abusive but the social media site deemed it perfectly legitimate?

Users on social media sites often express confusion about why offensive posts are not deleted. Paige Lavender, an editor at HuffPost, recently described her experience learning that a vulgar and threatening message she received on Facebook did not violate the platform’s standards.

Here is a selection of statements based on examples from a Facebook training document and real-world comments found on social media. Most readers will find them offensive. But can you tell which ones would run afoul of Facebook’s rules on hate speech?

Hate speech is one of several types of content that Facebook reviews, in addition to threats and harassment. Facebook defines hate speech as:

  1. An attack, such as a degrading generalization or slur.
  2. Targeting a “protected category” of people, including one based on sex, race, ethnicity, religious affiliation, national origin, sexual orientation, gender identity, and serious disability or disease.

Facebook’s hate speech guidelines were published in June by ProPublica, an investigative news organization, which is gathering users’ experiences about how the social network handles hate speech.

Danielle Citron, an information privacy expert and professor of law at the University of Maryland, helped The New York Times analyze six deeply insulting statements and determine whether they would be considered hate speech under Facebook’s rules.

  1. “Why do Indians always smell like curry?! They stink!”
  2. “Poor black people should still sit at the back of the bus.”
  3. “White men are assholes.”
  4. “Keep ‘trans’ men out of girls bathrooms!”
  5. “Female sports reporters need to be hit in the head with hockey pucks.”
  6. “I’ll never trust a Muslim immigrant… they’re all thieves and robbers.”

Did any of these answers surprise you? You’re probably not alone.

Ms. Citron said that even thoughtful and robust definitions of hate speech can yield counterintuitive results when enforced without cultural and historic context.

“When you’re trying to get as rulish as possible, you can lose the point of it,” she said. “The spirit behind those rules can get lost.”

A Facebook spokeswoman said that the company expects its thousands of content reviewers to take context into account when making decisions, and that it constantly evolves its policies to keep up with changing cultural nuances.

In response to questions for this piece, Facebook said it had changed its policy to include age as a protected category. While Facebook’s original training document states that content targeting “black children” would not violate its hate speech policy, the company’s spokeswoman said that such attacks would no longer be acceptable.

Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Interesting and revealing thread regarding some of the complexities involved and the degree of awareness of the issues:

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms to determine everything from what you see and don’t see in News Feed, to finding and removing other content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.

  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.

  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.

  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.

  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.

  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML

  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!

  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos

  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

  11. And to be careful of their own biases when making leaps of judgment between facts.

  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased

  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.

  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.

  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad

  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.

  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.

  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. 

Source: Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Facebook’s Frankenstein Moment – The New York Times

A good and sobering analysis of how Facebook and other social media were caught unprepared for the darker side of human nature and their societal impact:

On Wednesday, in response to a ProPublica report that Facebook enabled advertisers to target users with offensive terms like “Jew hater,” Sheryl Sandberg, the company’s chief operating officer, apologized and vowed that the company would adjust its ad-buying tools to prevent similar problems in the future.

As I read her statement, my eyes lingered over one line in particular:

“We never intended or anticipated this functionality being used this way — and that is on us,” Ms. Sandberg wrote.

It was a candid admission that reminded me of a moment in Mary Shelley’s “Frankenstein,” after the scientist Victor Frankenstein realizes that his cobbled-together creature has gone rogue.

“I had been the author of unalterable evils,” he says, “and I lived in daily fear lest the monster whom I had created should perpetrate some new wickedness.”

If I were a Facebook executive, I might feel a Frankensteinian sense of unease these days. The company has been hit with a series of scandals that have bruised its image, enraged its critics and opened up the possibility that in its quest for global dominance, Facebook may have created something it can’t fully control.

Facebook is fighting through a tangled morass of privacy, free-speech and moderation issues with governments all over the world. Congress is investigating reports that Russian operatives used targeted Facebook ads to influence the 2016 presidential election. In Myanmar, activists are accusing Facebook of censoring Rohingya Muslims, who are under attack from the country’s military. In Africa, the social network faces accusations that it helped human traffickers extort victims’ families by leaving up abusive videos.

Few of these issues stem from willful malice on the company’s part. It’s not as if a Facebook engineer in Menlo Park personally greenlighted Russian propaganda, for example. On Thursday, the company said it would release political advertisements bought by Russians for the 2016 election, as well as some information related to the ads, to congressional investigators.

But the troubles do make it clear that Facebook was simply not built to handle problems of this magnitude. It’s a technology company, not an intelligence agency or an international diplomatic corps. Its engineers are in the business of building apps and selling advertising, not determining what constitutes hate speech in Myanmar. And with two billion users, including 1.3 billion who use it every day, moving ever greater amounts of their social and political activity onto Facebook, it’s possible that the company is simply too big to understand all of the harmful ways people might use its products.

“The reality is that if you’re at the helm of a machine that has two billion screaming, whiny humans, it’s basically impossible to predict each and every possible nefarious use case,” said Antonio García Martínez, author of the book “Chaos Monkeys” and a former Facebook advertising executive. “It’s a Whac-a-Mole problem.”

Elliot Schrage, Facebook’s vice president of communications and public policy, said in a statement: “We work very hard to support our millions of advertisers worldwide, but sometimes — rarely — bad actors win. We invest a lot of time, energy and resources to make these rare events extinct, and we’re grateful to our community for calling out where we can do better.”

When Mark Zuckerberg built Facebook in his Harvard dorm room in 2004, nobody could have imagined its becoming a censorship tool for repressive regimes, an arbiter of global speech standards or a vehicle for foreign propagandists.

But as Facebook has grown into the global town square, it has had to adapt to its own influence. Many of its users view the social network as an essential utility, and the company’s decisions — which posts to take down, which ads to allow, which videos to show — can have real life-or-death consequences around the world. The company has outsourced some decisions to complex algorithms, which carries its own risks, but many of the toughest choices Facebook faces are still made by humans.

Even if Mr. Zuckerberg and Ms. Sandberg don’t have personal political aspirations, as has been rumored, they are already leaders of an organization that influences politics all over the world. And there are signs that Facebook is starting to understand its responsibilities. It has hired a slew of counterterrorism experts and is expanding teams of moderators around the world to look for and remove harmful content.

On Thursday, Mr. Zuckerberg said in a video posted on Facebook that the company would take several steps to help protect the integrity of elections, like making political ads more transparent and expanding partnerships with election commissions.

“We will do our part not only to ensure the integrity of free and fair elections around the world, but also to give everyone a voice and to be a force for good in democracy everywhere,” he said.

But there may not be enough guardrails in the world to prevent bad outcomes on Facebook, whose scale is nearly inconceivable. Alex Stamos, Facebook’s security chief, said last month that the company shuts down more than a million user accounts every day for violating Facebook’s community standards. Even if only 1 percent of Facebook’s daily active users misbehaved, it would still mean 13 million rule breakers, about the number of people in Pennsylvania.
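As a quick sanity check on that figure, using the 1.3 billion daily active users cited above:

```latex
0.01 \times 1.3 \times 10^{9} = 1.3 \times 10^{7} = 13\ \text{million}
```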

In addition to challenges of size, Facebook’s corporate culture is one of cheery optimism. That may have suited the company when it was an upstart, but it could hamper its ability to accurately predict risk now that it’s a setting for large-scale global conflicts.

Several current and former employees described Facebook to me as a place where engineers and executives generally assume the best of users, rather than preparing for the worst. Even the company’s mission statement — “Give people the power to build community and bring the world closer together” — implies that people who are given powerful tools will use those tools for socially constructive purposes. Clearly, that is not always the case.

Hiring people with darker views of the world could help Facebook anticipate conflicts and misuse. But pessimism alone won’t fix all of Facebook’s issues. It will need to keep investing heavily in defensive tools, including artificial intelligence and teams of human moderators, to shut down bad actors. It would also be wise to deepen its knowledge of the countries where it operates, hiring more regional experts who understand the nuances of the local political and cultural environment.

Facebook could even take a page from Wall Street’s book, and create a risk department that would watch over its engineering teams, assessing new products and features for potential misuse before launching them to the world.

Now that Facebook is aware of its own influence, the company can’t dodge responsibility for the world it has helped to build. In the future, blaming the monster won’t be enough.

Fearing Anti-Semitic Speech, Facebook Limits Audience Targeting – NY Times

Facebook grapples with its algorithms, business model and social impact:

Facebook has said it will restrict how advertisers target their audiences on the social network after a report said some were able to seek out self-described “Jew haters.”

In a statement dated Thursday, the company also said it would prevent users from indicating what type of ads they would like to see in an attempt to curb hate speech, adding that it had “no place on our platform.”

The moves came in response to a ProPublica investigation that revealed that Facebook’s self-service ad-buying platform allowed advertisers to direct ads to the newsfeeds of about 2,300 users who said they were interested in anti-Semitic subjects.

Reporters from ProPublica tested Facebook advertising categories to see whether they could buy ads aimed at Facebook users who expressed interest in topics like “Jew hater,” “How to burn jews,” and “History of ‘why jews ruin the world.’” The reporters paid $30 to ensure groups affiliated with these anti-Semitic categories saw promoted ProPublica posts in their Facebook news feeds.

Facebook approved the posts within 15 minutes, according to the ProPublica investigation.

The social network said Friday that its community standards “strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes.”

Facebook added that “to help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue.”

The news comes as Facebook faces scrutiny for its role in the 2016 presidential election in the United States. Last week its representatives briefed the Senate and House intelligence committees, which are investigating Russian intervention in the election. The company told congressional investigators that it had identified more than $100,000 worth of ads on hot-button issues directed by a Russian company with links to the Kremlin.

Representatives from Facebook said the company had sold the ads to a pro-Kremlin Russian company that was seeking an audience of voters in the United States during last year’s election campaign.

The ads — about 3,000 of them — focused on divisive topics like gay rights, gun control, race and immigration, and they were linked to 470 fake accounts and pages that it subsequently took down, according to Facebook’s chief security officer.

Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

For all those interested in free speech and hate speech issues, a really good analysis of how Facebook is grappling with the issue and its definitions of protected groups. I urge all readers to go through the slide show (you need to go to the article to access it), which captures some of the complexities involved:

In the wake of a terrorist attack in London earlier this month, a US congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared US Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed, and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims—those that are “radicalized”—while Delgado’s post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of “fake news”—fabricated stories that circulated on Facebook like “Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement”—and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea, and Western Sahara.

One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children, and white men. It asks: which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence, and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)
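To make that mechanic concrete, here is a minimal illustrative sketch of the subset rule as ProPublica describes it. The function name and trait labels are mine, and Facebook’s actual (non-public) implementation, which has since been revised, will differ:

```python
# Illustrative sketch of the leaked training rule, not Facebook's actual code.
# Under the documented rule, a group is "protected" only if every trait
# describing it is itself a protected category; adding any unprotected trait
# (occupation, age at the time, "radicalized", etc.) creates an unprotected subset.

PROTECTED_CATEGORIES = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability or disease",
}

def is_protected_group(traits):
    """Return True if an attack on this group would count as hate speech."""
    return all(trait in PROTECTED_CATEGORIES for trait in traits)

# The examples from the training slide:
print(is_protected_group({"race", "sex"}))                        # white men -> True
print(is_protected_group({"sex", "occupation"}))                  # female drivers -> False
print(is_protected_group({"race", "age"}))                        # black children -> False under the old rules
print(is_protected_group({"religious affiliation", "ideology"}))  # "radicalized" Muslims -> False
```

Note that, per the NYT quiz item above, Facebook has since added age as a protected category, so the “black children” case would now come out differently.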

Facebook has used these rules to train its “content reviewers” to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook’s algorithm is designed to defend all races and genders equally.

But Facebook says its goal is different—to apply consistent standards worldwide. “The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

Facebook’s rules constitute a legal world of their own. They stand in sharp contrast to the United States’ First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company’s hate speech algorithm. But they also differ—for example, in permitting postings that deny the Holocaust—from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump’s posts about his campaign proposal to ban Muslim immigration to the United States violated the company’s written policies against “calls for exclusion” of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump’s statements from its policies at the order of Mark Zuckerberg, the company’s founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook’s decision. Appeals are currently only available to people whose profile, group, or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. “I’ll be the first to say that we’re not perfect every time,” she said.

Facebook is not required by US law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, section 230 of the Telecommunications Act, was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

Source: Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica

Google offers a glimpse into its fight against fake news

It is a challenge to know how much of the issue is actually being addressed without any independent watchdogs:

In the waning months of 2016, two of the world’s biggest tech companies decided they would do their part to curb the spread of hoaxes and misinformation on their platforms — by this point, widely referred to under the umbrella of “fake news.”

Facebook and Google announced they would explicitly ban fake news publishers from using their advertising networks to make money, while Facebook later announced additional efforts to flag and fact-check suspicious news stories in users’ feeds.

How successful have these efforts been? Neither company will say much — but Google, at least, has offered a glimpse.

In a report released today, Google says that its advertising team reviewed 550 sites it suspected of serving misleading content from November to December last year.

Of those 550 sites, Google took action against 340 of them for violating its advertising policies.

“When we say ‘take action’ that basically means, this is a site that historically was working with Google and our AdSense products to show ads, and now we’re no longer allowing our ad systems to support that content,” said Scott Spencer, Google’s director of product management for sustainable ads, in an interview.

Nearly 200 publishers — that is, the site operators themselves — were also removed from Google’s AdSense network permanently, the company said.

Not all of the offenders were caught violating the company’s new policy specifically addressing misrepresentation; some may have run afoul of other existing policies.

In total, Google says, it took down 1.7 billion ads in violation of its policies in 2016.

Questions remain

No additional information is contained within the report — an annual review of bad advertising practices that Google dealt with last year.

In both an interview and a followup email, Google declined to name any of the publishers that had violated its policies or been permanently removed from its network. Nor could Google say how much money it had withheld from publishers of fake news, or how much money some of its highest-grossing offenders made.

Some fake news site operators have boasted of making thousands of dollars a month in revenue from advertising displayed on their sites.

‘I always say the bad guys with algorithms are going to be one step ahead of the good guys with algorithms.’ – Susan Bidel, senior analyst at Forrester Research

The sites reviewed by Google also represent a very brief snapshot in time — the aftermath of the U.S. presidential election — and Spencer was unable to say how previous months in the year might have compared.

“There’s no way to know. We take action against sites when they’re identified and they violate our policies,” Spencer said. “It’s not like I can really extrapolate the number.”

A bigger issue

Companies such as Google are only part of the picture.

“It’s the advertisers’ dollars. It’s their responsibility to spend it wisely,” said Susan Bidel, a senior analyst at Forrester Research who recently co-wrote a report on fake news for marketers and advertisers.

That, however, is easier said than done. Often, advertisers don’t know all of the sites on which their ads run — making it difficult to weed out sites designed to serve misinformation. And even if they are able to maintain a partial list of offending sites, “there’s no blacklist that’s going to be able to keep up with fake news,” Bidel said, when publishers can quickly create new sites.

Source: Google offers a glimpse into its fight against fake news – Technology & Science – CBC News

Facebook’s AI boss: Facebook could fix its filter bubble if it wanted to – Recode

While Zuckerberg is correct that we all have a tendency to tune out other perspectives, the role that Facebook and other social media have in reinforcing that tendency should not be downplayed:

One of the biggest complaints about Facebook — and its all-powerful News Feed algorithm — is that the social network often shows you posts supporting beliefs or ideas you (probably) already have.

Facebook’s feed is personalized, so what you see in your News Feed is a reflection of what you want to see, and people usually want to see arguments and ideas that align with their own.

The term for this, often associated with Facebook, is a “filter bubble,” and people have written books about it. A lot of people have pointed to that bubble, as well as to the proliferation of fake news on Facebook, as playing a major role in last month’s presidential election.

Now the head of Facebook’s artificial intelligence research division, Yann LeCun, says this is a problem Facebook could solve with artificial intelligence.

“We believe this is more of a product question than a technology question,” LeCun told a group of reporters last month when asked if artificial intelligence could solve this filter-bubble phenomenon. “We probably have the technology, it’s just how do you make it work from the product side, not from the technology side.”

A Facebook spokesperson clarified after the interview that the company doesn’t actually have this type of technology just sitting on the shelf. But LeCun seems confident it could be built. So why doesn’t Facebook build it?

“These are questions that go way beyond whether we can develop AI technology that solves the problem,” LeCun continued. “They’re more like trade-offs that I’m not particularly well placed to determine. Like, what is the trade-off between filtering and censorship and free expression and decency and all that stuff, right? So [it’s not a question of if] the technology exists or can be developed, but … does it make sense to deploy it. This is not my department.”

Facebook has long denied that its service creates a filter bubble. It has even published a study defending the diversity of people’s News Feeds. Now LeCun is at the very least acknowledging that a filter bubble does exist, and that Facebook could fix it if it wanted to.

And that’s fascinating because while it certainly seemed like a fixable problem from the outside — Facebook employs some of the smartest machine-learning and language-recognition experts in the world — it once again raises questions around Facebook’s role as a news and information distributor.

Facebook CEO Mark Zuckerberg has long argued that his social network is a platform that leaves what you see (or don’t see) to computer algorithms that use your online activity to rank your feed. Facebook is not a media company making human-powered editorial decisions, he argues. (We disagree.)

But is showing its users a politically balanced News Feed Facebook’s responsibility? Zuckerberg wrote in September that Facebook is already “more diverse than most newspapers or TV stations” and that the filter-bubble issue really isn’t an issue. Here’s what he wrote.

“One of the things we’re really proud of at Facebook is that, whatever your political views, you probably have some friends who are in the other camp. … [News Feed] is not a perfect system. Research shows that we all have psychological bias that makes us tune out information that doesn’t fit with our model of the world. It’s human nature to gravitate towards people who think like we do. But even if the majority of our friends have opinions similar to our own, with News Feed we have easier access to more news sources than we did before.”

So this, right here, explains why Facebook isn’t building the kind of technology that LeCun says it’s capable of building. At least not right now.

There are some benefits to a bubble like this, too, specifically user safety. Unlike Twitter, for example, Facebook’s bubble is heightened by the fact that your posts are usually private, which makes it harder for strangers to comment on them or drag you into conversations you might not want to be part of. The result: Facebook doesn’t have to deal with the level of abuse and harassment that Twitter struggles with.

Plus, Facebook isn’t the only place you’ll find culture bubbles. Here’s “SNL” making fun of a very similar bubble phenomenon that has come to light since election night.

Facebook Runs Up Against German Hate Speech Laws – The New York Times

About time – social media companies also need to be accountable (as do users):

In Germany, more than almost anywhere else in the West, lawmakers, including Chancellor Angela Merkel, are demanding that Facebook go further to police what is said on the social network — a platform that now has 1.8 billion users worldwide. The country’s lawmakers also want other American tech giants to meet similar standards.

The often-heated dispute has raised concerns over maintaining freedom of speech while protecting vulnerable minorities in a country where the legacy of World War II and decades under Communism still resonate.

It is occurring amid mounting criticism of Facebook in the United States after fake news reports were shared widely on the site before the presidential election. Facebook also has been accused of allowing similar false reports to spread during elections elsewhere.

Mr. Zuckerberg has denied that such reports swayed American voters. But lawmakers in the United States, Germany and beyond are pressing Facebook to clamp down on hate speech, fake news and other misinformation shared online, or face new laws, fines or other legal actions.

“Facebook has a certain responsibility to uphold the laws,” said Heiko Maas, the German justice minister. In October, Mr. Maas suggested the company could be held criminally liable for users’ illegal hate speech postings if it does not swiftly remove them.

Facebook rejects claims that it has not responded to the rise in hate speech in Germany and elsewhere, saying it continually updates its community standards to weed out inappropriate posts and comments.

“We’ve done more than any other service at trying to get on top of hate speech on our platform,” Mr. Allen said.

Tussles with German lawmakers are nothing new for Facebook.

It has routinely run afoul of the country’s strict privacy rules. In September, a local regulator blocked WhatsApp, the internet messaging service owned by Facebook, from sharing data from users in Germany with its parent company. The country’s officials also have questioned whether Facebook’s control of users’ digital information could breach antitrust rules, accusations the company denies.

Facebook’s problems with hate speech posts in Germany began in summer 2015 as more than one million refugees began to enter the country.

Their arrival, according to company executives and lawmakers, incited an online backlash from Germans opposed to the swell of people from Syria, Afghanistan and other war-torn countries. The number of hateful posts on Facebook increased sharply.

As such content spread quickly online, senior German politicians appealed directly to Facebook to comply with the country’s laws. Even Ms. Merkel confronted Mr. Zuckerberg in New York in September 2015 about the issue.

In response, Facebook updated its global community standards, which also apply in the United States, to give greater protection to minority groups, primarily to calm German concerns.

Facebook also agreed to work with the government, local charities and other companies to fight online hate speech, and recently started a billboard and television campaign in Germany to answer local fears over how it deals with hate speech and privacy.

Facebook hired a tech company based in Berlin to monitor and delete illegal content, including hate speech, from Germany and elsewhere, working with Facebook’s monitoring staff in Dublin.

“They have gotten better and quicker at handling hate speech,” said Martin Drechsler, managing director of FSM, a nonprofit group that has worked with Facebook on the issue.

Despite these steps, German officials are demanding further action.

Ms. Merkel, who is seeking a fourth term in general elections next year, warned lawmakers last week that hate speech and fake news sites were influencing public opinion, raising the possibility of new regulations.

And Mr. Maas, the justice minister, has repeatedly warned that he will propose legislation if Facebook cannot remove at least 70 percent of online hate speech within 24 hours by early next year. It now removes less than 50 percent, according to a study published in September by a group that monitors hate speech, a proportion that is still significantly higher than those for Twitter and YouTube, the report found.

For Chan-Jo Jun, a lawyer in Würzburg, an hour’s drive from Frankfurt, new laws governing Facebook cannot come soon enough.

Mr. Jun recently filed a complaint with Munich authorities, seeking prosecution of Mr. Zuckerberg and other senior Facebook executives on charges they failed to sufficiently tackle the widespread wave of hate speech in Germany. The company denies the accusations.

While his complaint may be dismissed, Mr. Jun says the roughly 450 hate speech cases that he has collected, more than half of them aimed at refugees, show that Facebook is not complying with German law. Despite its global size, he insists, the company cannot skirt its local responsibilities.

“I know Facebook wants to be seen as a global giant,” Mr. Jun said. “But there’s no way around it. They have to comply with German law.”

Let’s get real. Facebook is not to blame for Trump. – Recode

While I think Williams downplays the role and responsibility of social media (see Social Media’s Globe-Shaking Power – The New York Times), his raising of confirmation bias is valid.

Communications technology is not neutral, and perhaps it is time to reread some of the Canadian classics by Harold Innis (Empire and Communications) and Marshall McLuhan (Understanding Media: The Extensions of Man):

Much of the coverage and outrage has been directed toward social media, its echo chambers, and specifically those of the Facebook platform. While, to be sure, much of the fake or inaccurate news is found and circulated on Facebook, Facebook is not a news outlet; it is a communication medium to be utilized as its users so choose. It is not the job of Facebook’s employees, or its algorithms, to edit or censor the content that is shared; in fact it would be more detrimental to do so. This is for two very good reasons:

One, either human editors or artificial intelligence editors, by removing one item or another, will appear to introduce bias into the system. The group whose content is being removed or edited will feel targeted by the platform and claim, rightly or wrongly, that it is biased against their cause, even if the content is vetted and found to be true or false.

Two, censorship in any form is bad for the national discourse.

So rather than blaming Facebook or other platforms for the trouble in which we find ourselves, let’s give credit where credit is due: The American people.

This comes down to two very important concepts that our society has been turning its back on, in the age of social media: Confirmation bias and epistemology.

Explained by David McRaney, the You Are Not So Smart blogger and author of “You Are Now Less Dumb: How to Conquer Mob Mentality, How to Buy Happiness, and All the Other Ways to Outsmart Yourself,” confirmation bias is the misconception that “your opinions are the result of years of rational, objective analysis,” and that the truth is that “your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.” Or, more precisely: The tendency to process information by looking for, or interpreting, information that is consistent with one’s existing beliefs.

If we find a piece of content that says that Donald Trump is clueless, or that Hillary Clinton belongs in prison, we accept the one because it reinforces our like for one candidate over the other, and discard the negative item as some falsehood generated by the opposing party to discredit your candidate. We don’t care about the information or what it says, as long as it reinforces how we feel.

That brings us to epistemology, “the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity,” a branch of philosophy aptly named from the Greek, meaning “knowledge discourse.” This is a concept that has existed since the 16th century and has very likely been conveniently ignored in political campaigns ever since, perhaps because it’s just easier to believe and propagate than it is to read and validate.

In fact, a recent Pew Research Center survey called the American Trends Panel asked if the public prefers that the news media present facts without interpretation. Overwhelmingly, 59 percent of those posed the question preferred facts without interpretation, and among registered voters, 50 percent of Clinton supporters, and 71 percent of Trump supporters preferred no interpretation. While those numbers may seem incredible, the telling result is that 81 percent of registered voters disagree on what the facts actually are. Aren’t facts just facts? Yes, they are, but our biases and distrust of intellectual sources say otherwise.

Does Facebook create echo chambers on both sides of the political spectrum? No. Facebook and other social media only serve to provide a high-speed amplifier of what already exists in our society, especially to those who enjoy the communal effect of sharing information with others in their personal circles. Facebook does give them a wide and instant audience.

In a 2012 study published in the journal Computers in Human Behavior, computer scientists Chei Sian Lee and Long Ma said, “… we also establish that status seeking has a significant influence on prior content sharing experience indicating that the experiential factor may be a possible mediator between gratifications and news sharing intention.”

Or, in other words, it’s fun to share something and get congratulatory high-fives from your like-minded friends. Facebook does make that activity almost instantaneous. Sharing news, or fake news, and being liked for doing so feels good. Never mind the ramifications on the accuracy of cultural or political discourse.

During his final press conference in Berlin with Angela Merkel, President Obama put this as succinctly as it could possibly be said: “If we are not serious about facts, and what’s true and what’s not . . . if we can’t discriminate between serious arguments and propaganda, then we have problems.”

From Hate Speech To Fake News: The Facebook Content Crisis Facing Mark Zuckerberg : NPR

Another good long read on social media, particularly Facebook, and its need to face up to ethical issues:

Some in Silicon Valley dismiss the criticisms against Facebook as schadenfreude: Just like taxi drivers don’t like Uber, legacy media envies the success of the social platform and enjoys seeing its leadership on the hot seat.

A former employee is not so dismissive and says there is a cultural problem, a stubborn blindness at Facebook and other leading Internet companies like Twitter. The source says: “The hardest problems these companies face aren’t technological. They are ethical, and there’s not as much rigor in how it’s done.”

At a values level, some experts point out, Facebook has to decide if its solution is free speech (the more people post, the more the truth rises), or clear restrictions.

And technically, there’s no shortage of ideas about how to fix the process.

A former employee says speech is so complex, you can’t expect Facebook to arrive at the same decision each and every time; but you can expect a company that consistently ranks among the 10 most valuable on earth, by market cap, to put more thought and resources into its censorship machine.

The source argues Facebook could afford to make content management regional — have decisions come from the same country in which a post occurs.

Speech norms are highly regional. When Facebook first opened its offices in Hyderabad, India, a former employee says, the guidance the reviewers got was to remove sexual content. In a test run, they ended up removing French kissing. Senior management was blown away. The Indian reviewers were doing something Facebook did not expect, but which makes perfect sense for local norms.

Harvard business professor Ben Edelman says Facebook could invest engineering resources into categorizing the posts. “It makes no sense at all,” he says, that when a piece of content is flagged, it goes into one long line. The company could have the algorithm track what flagged content is getting the most circulation and move that up in the queue, he suggests.
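Edelman’s suggestion amounts to replacing that single line with a priority queue keyed on circulation. Here is a minimal hypothetical sketch of the idea; the class names, fields and numbers are my own illustrative assumptions, not anything from Facebook’s actual review pipeline:

```python
# Hypothetical sketch of triaging flagged posts by circulation,
# rather than reviewing them in one long first-come, first-served line.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    sort_key: int                          # negated circulation, so the widest-spread post pops first
    post_id: str = field(compare=False)
    circulation: int = field(compare=False)

class ReviewQueue:
    def __init__(self):
        self._heap = []

    def flag(self, post_id, circulation):
        # heapq is a min-heap, so push the negated circulation as the sort key.
        heapq.heappush(self._heap, FlaggedPost(-circulation, post_id, circulation))

    def next_for_review(self):
        return heapq.heappop(self._heap) if self._heap else None

# Usage: a viral post jumps ahead of earlier, low-circulation flags.
queue = ReviewQueue()
queue.flag("post-a", circulation=120)
queue.flag("post-b", circulation=50_000)
queue.flag("post-c", circulation=3_400)
print(queue.next_for_review().post_id)     # -> "post-b"
```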

Zuckerberg finds himself at the helm of a company that started as a tech company — run by algorithms, free of human judgment, the mythology went. And now he’s just so clearly the CEO of a media company — replete with highly complex rules (What is hate speech anyway?); with double standards (If it’s “news” it stays, if it’s a rant it goes); and with an enforcement mechanism that is set up to fail.

Source: From Hate Speech To Fake News: The Facebook Content Crisis Facing Mark Zuckerberg : All Tech Considered : NPR