Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Interesting and revealing thread regarding some of the complexities involved and the degree of awareness of the issues:

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms to determine everything from what you see and don’t see in News Feed, to finding and removing other content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.

  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.

  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.

  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.

  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.

  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML

  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!

  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos

  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

  11. And to be careful of their own biases when making leaps of judgment between facts.

  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased

  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.

  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.

  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad

  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.

  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.

  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. 

Source: Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode
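
To make tweets 5 through 8 concrete, here is a toy sketch in Python (using scikit-learn) of the feedback loop Stamos warns about. The headlines, outlet names and labels are all invented, and nothing here reflects Facebook's actual systems; the point is simply that if "fake" labels are assigned by source rather than by evidence, the classifier learns the labeler's prior beliefs:

    # Toy sketch of training a "fake news" classifier on biased labels.
    # All data below is invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    headlines = [
        ("senate passes budget after long debate", "outlet_a"),
        ("shocking truth about the budget they hide from you", "outlet_b"),
        ("new study links diet to heart health", "outlet_a"),
        ("doctors hate this shocking budget trick they hide", "outlet_b"),
    ]
    # The labeler's prior belief, not evidence: everything from outlet_b is "fake".
    suspect_outlets = {"outlet_b"}

    texts = [text for text, _ in headlines]
    labels = [1 if outlet in suspect_outlets else 0 for _, outlet in headlines]

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

    # The model now flags anything phrased like outlet_b, true or not.
    probe = ["shocking truth about heart health they hide"]
    print(model.predict_proba(vectorizer.transform(probe))[0, 1])  # likely a high "fake" score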

Should tech companies be able to shut down neo-Nazis? – Recode

Good discussion of some of the issues involved, which IMO lays out the need for some government leadership in setting up guidelines and possibly regulations:

In the aftermath of the white supremacist rally in Charlottesville, Va., where dozens were injured and one counter-protester was killed, the battle moved online.

The four-year-old neo-Nazi website the Daily Stormer was evicted by web hosts GoDaddy and Google after it disparaged the woman killed in Charlottesville, Heather Heyer. And then web infrastructure company Cloudflare, which had previously been criticized for how it handled reports of abuse by the website, publicly and permanently terminated the Stormer’s account, too, forcing it to the dark web.

But should a tech company have that power? Even Cloudflare’s CEO Matthew Prince, who personally decided to pull the plug, thinks the answer should be “no” in the future.

“I am confident we made the right decision in the short term because we needed to have this conversation,” Prince said on the latest episode of Too Embarrassed to Ask. “We couldn’t have the conversation until we made that determination. But it is the wrong decision in the long term. Infrastructure is never going to be the right place to make these sorts of editorial decisions.”

Interviewed by Recode’s Kara Swisher and The Verge’s Lauren Goode, Prince was joined on the new episode by the executive director of the Electronic Frontier Foundation, Cindy Cohn. Although the two organizations have worked together in the past, Cohn co-authored a public rebuke of Cloudflare’s decision, saying it threatened the “future of free expression.”

“The moment where this is about Nazis, to me, is very late in the conversation,” Cohn said, citing past attempts to shut down political websites. “What they do is they take down the whole website, they can’t just take down the one bad article. The whole Recode website comes down because you guys say something that pisses off some billionaire.”

“These companies, including Matthew’s, have a right to decide who they’re doing business with, but we urge them to be really, really cautious about this,” she added.

Prince and Cohn agreed that part of the long-term solution to controversial speech online — no matter how odious — may be establishing and respecting a set of transparent, principled rules that cross international borders.

“I believe deeply in free speech, but it doesn’t have the same force around the rest of the world,” Prince said. “What does is an idea of due process, that there are a set of rules you should follow, and you should be able to know going into that. I don’t think the tech industry has that set of due processes.”

Cohn noted that there is a process for stopping someone from speaking before they can speak — prior restraint. For most of America’s history, obtaining such an injunction against someone has been intentionally difficult.

“We wouldn’t have a country if people couldn’t voice radical ideas and they had to go through a committee of experts or tech bros,” she said. “If you have to go on bended knee before you get to speak, you’re going to reduce the universe of ideas. Maybe you’ll get some heinous ideas, but you might not get the Nelson Mandelas, either.”

Source: Should tech companies be able to shut down neo-Nazis? – Recode

Delete Hate Speech or Pay Up, Germany Tells Social Media Companies – The New York Times

Will be interesting to see the degree to which this works in making social media companies take more effective action, as well as the means companies use to ‘police’ speech (see earlier post Facebook’s secret rules mean that it’s ok to be anti-Islam, but not anti-gay | Ars Technica). Apart from the debate over what limits, if any, can or should be placed on free speech, there are risks in “outsourcing” this function to the private sector:

Social media companies operating in Germany face fines of as much as $57 million if they do not delete illegal, racist or slanderous comments and posts within 24 hours under a law passed on Friday.

The law reinforces Germany’s position as one of the most aggressive countries in the Western world at forcing companies like Facebook, Google and Twitter to crack down on hate speech and other extremist messaging on their digital platforms.

But the new rules have also raised questions about freedom of expression. Digital and human rights groups, as well as the companies themselves, opposed the law on the grounds that it placed limits on individuals’ right to free expression. Critics also said the legislation shifted the burden of responsibility to the providers from the courts, leading to last-minute changes in its wording.

Technology companies and free speech advocates argue that there is a fine line between policy makers’ views on hate speech and what is considered legitimate freedom of expression, and social networks say they do not want to be forced to censor those who use their services. Silicon Valley companies also deny that they are failing to meet countries’ demands to remove suspected hate speech online.

Still, German authorities pressed ahead with the legislation. Germany has witnessed an increase in racist comments and anti-immigrant language since the arrival of more than a million migrants, predominantly from Muslim countries, beginning in 2015. Heiko Maas, the justice minister who drew up the draft legislation, said on Friday that it ensured that rules that currently apply offline would be equally enforceable in the digital sphere.

“With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all,” Mr. Maas said. “We are ensuring that everyone can express their opinion freely, without being insulted or threatened.”

“That is not a limitation, but a prerequisite for freedom of expression,” he continued.

The law will take effect in October, less than a month after nationwide elections, and will apply to social media sites with more than two million users in Germany.

It will require companies including Facebook, Twitter and Google, which owns YouTube, to remove any content that is illegal in Germany — such as Nazi symbols or Holocaust denial — within 24 hours of it being brought to their attention.

The law allows for up to seven days for the companies to decide on content that has been flagged as offensive, but that may not be clearly defamatory or inciting violence. Companies that persistently fail to address complaints by taking too long to delete illegal content face fines that start at €5 million, or $5.7 million, and could rise to as much as €50 million.

Every six months, companies will have to publicly report the number of complaints they have received and how they have handled them.

In Germany, which has some of the most stringent anti-hate speech laws in the Western world, a study published this year found that Facebook and Twitter had failed to meet a national target of removing 70 percent of online hate speech within 24 hours of being alerted to its presence.

The report noted that while the two companies eventually erased almost all of the illegal hate speech, Facebook managed to remove only 39 percent within 24 hours, as demanded by the German authorities. Twitter met that deadline in 1 percent of instances. YouTube fared significantly better, removing 90 percent of flagged content within a day of being notified.

Facebook said on Friday that the company shared the German government’s goal of fighting hate speech and had “been working hard” to resolve the issue of illegal content. The company announced in May that it would nearly double, to 7,500, the number of employees worldwide devoted to clearing its site of flagged postings. It was also trying to improve the processes by which users could report problems, a spokesman said.

Twitter declined to comment, while Google did not immediately respond to a request for comment.

The standoff between tech companies and politicians is most acute in Europe, where freedom of expression rights are less comprehensive than in the United States, and where policy makers have often bristled at Silicon Valley’s dominance of people’s digital lives.

But advocacy groups in Europe have raised concerns over the new German law.

Mirko Hohmann and Alexander Pirant of the Global Public Policy Institute in Berlin criticized the legislation as “misguided” for placing too much responsibility for deciding what constitutes unlawful content in the hands of social media providers.

“Setting the rules of the digital public square, including the identification of what is lawful and what is not, should not be left to private companies,” they wrote.

Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messaging online, and to prevent “fake news” from circulating. That includes using artificial intelligence to remove potentially extremist material automatically and banning news sites believed to spread fake or misleading reports from making money through the companies’ digital advertising platforms.

No simple fix to weed out racial bias in the sharing economy

Two options to combat implicit bias and discrimination: less information (the blind-CV approach) or more information (expanded online reviews). The first has empirical evidence behind it; the second is exploratory at this stage:

One of the underlying flaws of any workplace is the assumption that the cream rises to the top, meaning that the best people get promoted and are given opportunities to shine.

While it’s tempting to be lulled into believing in a meritocracy, years of research on women and minorities in the work force demonstrate this is rarely the case. Fortunately, in most corporate settings, protocols exist to try to weed out discriminatory practices.

The same cannot necessarily be said for the sharing economy. While companies such as Uber and Airbnb boast transparency and even mutual reviews, they remain far from immune to discriminatory practices.

In 2014, Benjamin Edelman and Michael Luca, both associate professors of business administration at Harvard Business School, uncovered that non-black hosts can charge 12 per cent more than black hosts for a similar property. In this new economy, that simply means black hosts earn less for a similar service. This sounds painfully familiar to those who continue to fight this battle in the corporate world – although in this case, it occurs without the watchful eye of a human-resources division.

In the corporate world, companies have moved from focusing on overt to subconscious bias, according to Mr. Edelman and Mr. Luca, but the nature of the bias in the sharing economy remains unclear.

The bias is either statistical, meaning users infer that the property is inferior based on the owner’s profile, or “taste-based,” meaning the decision to rent comes down to user preference. To curb discriminatory practices, the authors recommend concealing basic information, such as photos and names, until a transaction is complete, as on Craigslist.

Reached by e-mail this week, Mr. Edelman stands by that approach.

“Broadly, my instinct is to conceal information that might give rise to discrimination. If we think hosts might reject guests of [a] disfavoured race, let’s not tell hosts the race of a guest when they’re deciding whether to accept. If we think drivers might reject passengers of [a] disfavoured race, again, don’t reveal the race in advance,” he advised.

Mr. Edelman feels that while those really bent on discrimination will continue to do so, other, more casual discriminators will realize it’s too costly.

An Uber driver who notices a passenger’s race only at the pickup point might think to himself that he has already driven about five kilometres. If he cancels, not only will he be without a fare, but Uber might also notice and become suspicious, Mr. Edelman surmised.
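
A minimal sketch of the “less information” approach Mr. Edelman describes: hide identity cues from the counterparty until the transaction is confirmed. The field names here are invented for illustration and not drawn from any platform’s actual API:

    def visible_profile(profile: dict, transaction_confirmed: bool) -> dict:
        """Return the profile as the counterparty would see it, redacting
        identity cues until the transaction is locked in."""
        hidden_until_confirmed = {"name", "photo_url"}
        return {
            key: value if transaction_confirmed or key not in hidden_until_confirmed
            else "hidden"
            for key, value in profile.items()
        }

    guest = {"name": "J. Doe", "photo_url": "...", "rating": 4.9, "trips": 31}
    print(visible_profile(guest, transaction_confirmed=False))
    # {'name': 'hidden', 'photo_url': 'hidden', 'rating': 4.9, 'trips': 31}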

Not everyone agrees that less information is the best route to take to combat discrimination in the sharing economy. In fact, more information may be the fix, according to recent research conducted by Ruomeng Cui, an assistant professor at Indiana University’s Kelley School of Business, Jun Li, an assistant professor at the University of Michigan’s Stephen M. Ross School of Business, and Dennis Zhang, an assistant professor at the John M. Olin Business School at Washington University in St. Louis.

The trio of academics argues that rental decisions on platforms such as Airbnb are based on racial preferences only when not enough information is available. When more information is shared, specifically through peer reviews, discriminatory practices are reduced or even eliminated.

“We recommend platforms take advantage of the online reputation system to fight discrimination. This includes creating and maintaining an easy-to-use online review system, as well as encouraging users to write reviews after transactions. For example, sending multiple e-mail reminders or offering monetary incentives such as discounts or credits, especially for those relatively new users,” Dr. Li said.

“Eventually, sharing-economy platforms have to figure out how to better signal user quality; nevertheless, whatever they do, concealing information will not help,” she added.

Still others, such as Copenhagen-based Sara Green Brodersen, founder and chief executive of Deemly, which launched last October, believe technology itself can offer a solution to incidents of bias in the sharing economy. The company’s mission is to build trust in the sharing economy through social ID verification and reputation software, which enables users to take their reputation with them across platforms. For example, if a user has ratings on Airbnb, they can collate them with their reviews on Upwork.

“Recent studies in this area suggest that ratings and reviews are what creates most trust between peers. [For example] when a user on Airbnb looks at a host, they put the most emphasis on the previous reviews from other guests more than anything else on the profile. Essentially, this means platforms could present anonymous profiles showing only the user’s reputation, but not gender, profile picture, ethnicity, name and age and, in this way, we can avoid the bias which has been presented,” Ms. Brodersen said.
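
As a rough sketch of the portable-reputation idea Ms. Brodersen describes (the field names and review-count weighting are my assumptions, not Deemly’s actual design), ratings from several platforms could be collapsed into a single anonymized score:

    from dataclasses import dataclass

    @dataclass
    class PlatformRating:
        platform: str      # e.g. "Airbnb", "Upwork"
        score: float       # normalized to a 0-5 scale
        n_reviews: int

    def anonymized_reputation(ratings: list) -> dict:
        """Collate ratings across platforms into one review-weighted score,
        exposing only reputation: no name, photo, gender, ethnicity or age."""
        total = sum(r.n_reviews for r in ratings)
        average = sum(r.score * r.n_reviews for r in ratings) / total
        return {"reputation": round(average, 2), "reviews": total}

    print(anonymized_reputation([
        PlatformRating("Airbnb", 4.8, 52),
        PlatformRating("Upwork", 4.5, 17),
    ]))  # {'reputation': 4.73, 'reviews': 69}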

Regardless of the solution, platforms and their users need to recognize that combatting discriminatory practices is their responsibility and that the sharing economy, like the traditional work force, is no meritocracy.

“This issue is not going to be smaller on its own,” Ms. Brodersen warned.

Source: No simple fix to weed out racial bias in the sharing economy – The Globe and Mail

The Culture of Nastiness – The New York Times

Good reflections and commentary by Teddy Wayne:

Social media has normalized casual cruelty, and those who remove the “casual” from that descriptor are simply taking it several repellent steps further than the rest of us. That internet trolls typically behave better in the real world is not, however, solely from fear of public shaming and repercussions, or even that their fundamental humanity is activated in empathetic, face-to-face conversations. It may be that they lack much of a “real world” — a strong sense of community — to begin with, and now have trouble relating to others.

Andrew Reiner, an English professor at Towson University who teaches a seminar called “Mister Rogers 101: Why Civility and Community Still Matter,” attributes much of the decline in civility, especially among younger people, to Americans’ living in relative sequestration. The majority of his students tell him they barely knew their neighbors growing up, corroborating thinkers like Robert Putnam, who in his 2000 book, “Bowling Alone,” argued that civic engagement is diminishing. Consequently, Professor Reiner believes they have little experience in working through conflicts with people with whom they must figure out a way to get along.

“Civility is the idea that you’re not always going to agree but you still have to make it work,” he said. “We fear our ideas clashing with somebody else’s, even when we’re all ultimately pulling for the same thing.”

This leads to a vicious cycle in which the breakdowns of civility and community reinforce one another.

“People think, ‘If I disagree with you, then I have to dislike you, so why should I go to a neighborhood meeting when it’s clear I’m going to disagree with them?’” he said.

Professor Reiner also chalked up some of the devolution of basic courtesy to people’s increasingly digitized existence and engagement with their phones, not one another. For an assignment, he asks his students to experiment with old-fashioned civility by committing random acts of kindness and eating with strangers.

“It’s about trying to get beyond our own insecurities and get past the possibility of rejection, and that never has to happen with our online lives,” he said. “It reintroduces the idea of social risk-taking, which not that long ago was the norm, and learning how to be uncomfortable and relearning the skills of how to talk face to face.”

Though the internet receives the brunt of censure for corroding manners, other elements of popular culture aren’t much more elevated. In my neighborhood subway station a few months ago, two posters near each other for the TV shows “Graves” (about a former president) and “Those Who Can’t” (a comedy about teachers) both featured lightly obscured depictions of the middle finger. After the election, as I looked at the one depicting Nick Nolte in front of an American flag with the presidential seal covering his offending hand, it no longer seemed so shocking that Mr. Trump would soon occupy the Oval Office.

And the putative employer wielding all the power over labor is a trademark of reality TV, where Mr. Trump honed his brand for 14 seasons on NBC and trained us to think of a blustery television personality like him as a regular and revered figure in contemporary America. We have long had game and talent shows, but elimination from them used to be gentler — or, in the case of “The Gong Show,” at least goofier — than being brusquely told, “You’re fired,” “You are the weakest link” or receiving Simon Cowell’s withering exit-interview critiques.

In the dog-eat-dog environments of these programs, cooperation and kindness are readily abandoned for back-stabbing and character assassination. Short-term strategic alliances sometimes form among rivals, but the rules of the games preclude the possibility of something like collective bargaining. Likewise, union membership has drastically shrunk in the private sector over the last four decades. Why sacrifice for another person when there can be just one top chef or model or singer, one bachelorette with the final rose, one survivor — or in your own workplace, one promotion this financial quarter amid a spate of layoffs?

As evinced by the long-running “Real Housewives” series, calm conflict resolution does not make for good ratings. Even cake baking is now a fight to the finish.

Rather than seeking the comfort of known faces with the fictional, loving families and buddies from “The Brady Bunch,” “Cheers” and “Friends,” we now crave the company of supposedly “real” squabbling family members or acquaintances from documentary-style shows, perhaps as consolation for most likely watching them by ourselves, on a small laptop or phone screen.

“We would rather watch families on reality TV than do the hard work of being in a community with our own families,” Professor Reiner said.

Many critics of Mr. Trump have drawn parallels between this era and 1930s Germany. But when it comes to incivility and the 45th president, a more apt epoch may be 1950s America and its Communist witch hunts, specifically a quotation from the lawyer Joseph N. Welch in the 1954 Army-McCarthy hearings. (The chief counsel for Senator Joseph McCarthy was Roy Cohn, who later went on to become a close friend and business associate of Mr. Trump’s.)

“Have you no sense of decency, sir?” Mr. Welch asked Mr. McCarthy after Mr. McCarthy alleged that one of Mr. Welch’s fellow lawyers was a Communist.

It is a question many would like to pose to Mr. Trump — and one we all, nasty sirs and women alike, should be asking ourselves.

Google offers a glimpse into its fight against fake news

Challenge to know how much the issue is being addressed without any independent watchdogs:

In the waning months of 2016, two of the world’s biggest tech companies decided they would do their part to curb the spread of hoaxes and misinformation on their platforms — by this point, widely referred to under the umbrella of “fake news.”

Facebook and Google announced they would explicitly ban fake news publishers from using their advertising networks to make money, while Facebook later announced additional efforts to flag and fact-check suspicious news stories in users’ feeds.

How successful have these efforts been? Neither company will say much — but Google, at least, has offered a glimpse.

In a report released today, Google says that its advertising team reviewed 550 sites it suspected of serving misleading content from November to December last year.

Of those 550 sites, Google took action against 340 of them for violating its advertising policies.

“When we say ‘take action’ that basically means, this is a site that historically was working with Google and our AdSense products to show ads, and now we’re no longer allowing our ad systems to support that content,” said Scott Spencer, Google’s director of product management for sustainable ads, in an interview.

Nearly 200 publishers — that is, the site operators themselves — were also removed from Google’s AdSense network permanently, the company said.

Not all of the offenders were caught violating the company’s new policy specifically addressing misrepresentation; some may have run afoul of other existing policies.

In total, Google says, it took down 1.7 billion ads in violation of its policies in 2016.

Questions remain

No additional information is contained within the report — an annual review of bad advertising practices that Google dealt with last year.

In both an interview and a followup email, Google declined to name any of the publishers that had violated its policies or been permanently removed from its network. Nor could Google say how much money it had withheld from publishers of fake news, or how much money some of its highest-grossing offenders made.

Some fake news site operators have boasted of making thousands of dollars a month in revenue from advertising displayed on their sites.

‘I always say the bad guys with algorithms are going to be one step ahead of the good guys with algorithms.’ – Susan Bidel, senior analyst at Forrester Research

The sites reviewed by Google also represent a very brief snapshot in time — the aftermath of the U.S. presidential election — and Spencer was unable to say how previous months in the year might have compared.

“There’s no way to know. We take action against sites when they’re identified and they violate our policies,” Spencer said. “It’s not like I can really extrapolate the number.”

A bigger issue

Companies such as Google are only part of the picture.

“It’s the advertisers’ dollars. It’s their responsibility to spend it wisely,” said Susan Bidel, a senior analyst at Forrester Research who recently co-wrote a report on fake news for marketers and advertisers.

That, however, is easier said than done. Often, advertisers don’t know all of the sites on which their ads run — making it difficult to weed out sites designed to serve misinformation. And even if they are able to maintain a partial list of offending sites, “there’s no blacklist that’s going to be able to keep up with fake news,” Bidel said, when publishers can quickly create new sites.

Source: Google offers a glimpse into its fight against fake news – Technology & Science – CBC News

Canada sees ‘dramatic’ spike in online hate — here’s what you can do about it

Useful to have this tracking of trends given that police-reported hate crime statistics, while needed and useful, only tell part of the story:

The internet can be a pretty intolerant place, and it may be getting worse.

An analysis of Canada’s online behaviour commissioned by CBC’s Marketplace shows a 600 per cent jump in the past year in how often Canadians use language online that’s racist, Islamophobic, sexist or otherwise intolerant.

“That’s a dramatic increase in the number of people feeling comfortable to make those comments,” James Rubec, content strategist for media marketing company Cision, told Marketplace.

Cision scanned social media, blogs and comment threads between November 2015 and November 2016 for slurs and intolerant phrases like “ban Muslims,” “sieg heil” or “white genocide.” They found that terms related to white supremacy jumped 300 per cent, while terms related to Islamophobia increased 200 per cent.
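
Cision has not published its pipeline, but the analysis as described reduces to counting matches for a fixed phrase list over two year-long windows and comparing the totals. A rough Python sketch, with invented placeholder posts standing in for scraped data:

    import re

    # Example terms from the article; a real watchlist would be far larger.
    WATCHLIST = ["ban muslims", "sieg heil", "white genocide"]
    PATTERN = re.compile("|".join(re.escape(term) for term in WATCHLIST))

    def count_matches(posts):
        """Count watchlist hits across a collection of posts."""
        return sum(len(PATTERN.findall(post.lower())) for post in posts)

    posts_2014_15 = ["nice weather today", "ban muslims now"]           # invented
    posts_2015_16 = ["ban muslims", "white genocide is real", "hello"]  # invented

    before, after = count_matches(posts_2014_15), count_matches(posts_2015_16)
    if before:
        print(f"year-over-year change: {100 * (after - before) / before:+.0f}%")  # +100%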

“It might not be that there are more racists in Canada than there used to be, but they feel more emboldened. And maybe that’s because of the larger racist sentiments that are coming out of the United States,” Rubec said.

So when you see hateful speech online, what can you do about it?

Marketplace‘s Asha Tomlinson joined journalist and cultural critic Septembre Anderson and University of Ontario Institute of Technology sociologist Barbara Perry, whose work focuses on hate crimes, to share strategies and tips for confronting intolerance online.

Reach out

If the person making hurtful comments is a friend, message them privately about it. Calling them out publicly can backfire.

Facebook Runs Up Against German Hate Speech Laws – The New York Times

About time – social media companies also need to be accountable (as do users):

In Germany, more than almost anywhere else in the West, lawmakers, including Chancellor Angela Merkel, are demanding that Facebook go further to police what is said on the social network — a platform that now has 1.8 billion users worldwide. The country’s lawmakers also want other American tech giants to meet similar standards.

The often-heated dispute has raised concerns over maintaining freedom of speech while protecting vulnerable minorities in a country where the legacy of World War II and decades under Communism still resonate.

It is occurring amid mounting criticism of Facebook in the United States after fake news reports were shared widely on the site before the presidential election. Facebook also has been accused of allowing similar false reports to spread during elections elsewhere.

Mr. Zuckerberg has denied that such reports swayed American voters. But lawmakers in the United States, Germany and beyond are pressing Facebook to clamp down on hate speech, fake news and other misinformation shared online, or face new laws, fines or other legal actions.

“Facebook has a certain responsibility to uphold the laws,” said Heiko Maas, the German justice minister. In October, Mr. Maas suggested the company could be held criminally liable for users’ illegal hate speech postings if it does not swiftly remove them.

Facebook rejects claims that it has not responded to the rise in hate speech in Germany and elsewhere, saying it continually updates its community standards to weed out inappropriate posts and comments.

“We’ve done more than any other service at trying to get on top of hate speech on our platform,” Mr. Allen said.

Tussles with German lawmakers are nothing new for Facebook.

It has routinely run afoul of the country’s strict privacy rules. In September, a local regulator blocked WhatsApp, the internet messaging service owned by Facebook, from sharing data from users in Germany with its parent company. The country’s officials also have questioned whether Facebook’s control of users’ digital information could breach antitrust rules, accusations the company denies.

Facebook’s problems with hate speech posts in Germany began in summer 2015 as more than one million refugees began to enter the country.

Their arrival, according to company executives and lawmakers, incited an online backlash from Germans opposed to the swell of people from Syria, Afghanistan and other war-torn countries. The number of hateful posts on Facebook increased sharply.

As such content spread quickly online, senior German politicians appealed directly to Facebook to comply with the country’s laws. Even Ms. Merkel confronted Mr. Zuckerberg in New York in September 2015 about the issue.

In response, Facebook updated its global community standards, which also apply in the United States, to give greater protection to minority groups, primarily to calm German concerns.

Facebook also agreed to work with the government, local charities and other companies to fight online hate speech, and recently started a billboard and television campaign in Germany to answer local fears over how it deals with hate speech and privacy.

Facebook hired a tech company based in Berlin to monitor and delete illegal content, including hate speech, from Germany and elsewhere, working with Facebook’s monitoring staff in Dublin.

“They have gotten better and quicker at handling hate speech,” said Martin Drechsler, managing director of FSM, a nonprofit group that has worked with Facebook on the issue.

Despite these steps, German officials are demanding further action.

Ms. Merkel, who is seeking a fourth term in general elections next year, warned lawmakers last week that hate speech and fake news sites were influencing public opinion, raising the possibility of new regulations.

And Mr. Maas, the justice minister, has repeatedly warned that he will propose legislation if Facebook cannot remove at least 70 percent of online hate speech within 24 hours by early next year. It now removes less than 50 percent, according to a study published in September by a group that monitors hate speech, a proportion that is still significantly higher than those for Twitter and YouTube.

For Chan-Jo Jun, a lawyer in Würzburg, an hour’s drive from Frankfurt, new laws governing Facebook cannot come soon enough.

Mr. Jun recently filed a complaint with Munich authorities, seeking prosecution of Mr. Zuckerberg and other senior Facebook executives on charges they failed to sufficiently tackle the widespread wave of hate speech in Germany. The company denies the accusations.

While his complaint may be dismissed, Mr. Jun says the roughly 450 hate speech cases that he has collected, more than half of them aimed at refugees, show that Facebook is not complying with German law. Despite its global size, he insists, the company cannot skirt its local responsibilities.

“I know Facebook wants to be seen as a global giant,” Mr. Jun said. “But there’s no way around it. They have to comply with German law.”

Let’s get real. Facebook is not to blame for Trump. – Recode

While I think Williams downplays the role and responsibility of social media (see Social Media’s Globe-Shaking Power – The New York Times), his point about confirmation bias is valid.

Communications technology is not neutral, and perhaps it is time to reread some of the Canadian classics by Harold Innis (Empire and Communications) and Marshall McLuhan (Understanding Media: The Extensions of Man):

Much of the coverage and outrage has been directed toward social media, its echo chambers, and specifically those of the Facebook platform. While, to be sure, much of the fake or inaccurate news is found and circulated on Facebook, Facebook is not a news outlet; it is a communication medium to be utilized as its users so choose. It is not the job of Facebook’s employees, or its algorithms, to edit or censor the content that is shared; in fact it would be more detrimental to do so. This is for two very good reasons:

One, editors, whether human or artificial intelligence, will appear to introduce bias into the system by removing one item or another. The group whose content is being removed or edited will feel targeted by the platform and claim, rightly or wrongly, that it is biased against their cause, even if the content is vetted and found to be true or false.

Two, censorship in any form is bad for the national discourse.

So rather than blaming Facebook or other platforms for the trouble in which we find ourselves, let’s give credit where credit is due: The American people.

This comes down to two very important concepts that our society has been turning its back on in the age of social media: confirmation bias and epistemology.

Explained by David McRaney, the You Are Not So Smart blogger and author of “You Are Now Less Dumb: How to Conquer Mob Mentality, How to Buy Happiness, and All the Other Ways to Outsmart Yourself,” confirmation bias is the misconception that “your opinions are the result of years of rational, objective analysis,” and that the truth is that “your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.” Or, more precisely: The tendency to process information by looking for, or interpreting, information that is consistent with one’s existing beliefs.

If we find a piece of content that says that Donald Trump is clueless, or that Hillary Clinton belongs in prison, we accept the one that reinforces our preference for one candidate over the other, and discard the negative item as some falsehood generated by the opposing party to discredit our candidate. We don’t care about the information or what it says, as long as it reinforces how we feel.

That brings us to epistemology, “the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity,” a branch of philosophy aptly named from the Greek, meaning “knowledge discourse.” This is a concept that has existed since the 16th century and has very likely been conveniently ignored in political campaigns ever since, perhaps because it’s just easier to believe and propagate than it is to read and validate.

In fact, a recent Pew Research Center survey, the American Trends Panel, asked whether the public prefers that the news media present facts without interpretation. Overwhelmingly, 59 percent of those asked preferred facts without interpretation; among registered voters, 50 percent of Clinton supporters and 71 percent of Trump supporters preferred no interpretation. While those numbers may seem incredible, the telling result is that 81 percent of registered voters disagree on what the facts actually are. Aren’t facts just facts? Yes, they are, but our biases and distrust of intellectual sources say otherwise.

Does Facebook create echo chambers on both sides of the political spectrum? No. Facebook and other social media only serve as a high-speed amplifier of what already exists in our society, especially for those who enjoy the communal effect of sharing information with others in their personal circles. Facebook does give them a wide and instant audience.

In a 2012 study published in the journal Computers in Human Behavior, researchers Chei Sian Lee and Long Ma said, “… we also establish that status seeking has a significant influence on prior content sharing experience, indicating that the experiential factor may be a possible mediator between gratifications and news sharing intention.”

Or, in other words, it’s fun to share something and get congratulatory high-fives from your like-minded friends. Facebook does make that activity almost instantaneous. Sharing news, or fake news, and being liked for doing so feels good. Never mind the ramifications on the accuracy of cultural or political discourse.

During his final press conference in Berlin with Angela Merkel, President Obama put this as succinctly as it could possibly be said: “If we are not serious about facts, and what’s true and what’s not . . . if we can’t discriminate between serious arguments and propaganda, then we have problems.”

From Hate Speech To Fake News: The Facebook Content Crisis Facing Mark Zuckerberg : NPR

Another good long read on social media, particularly Facebook, and its need to face up to ethical issues:

Some in Silicon Valley dismiss the criticisms against Facebook as schadenfreude: Just like taxi drivers don’t like Uber, legacy media envies the success of the social platform and enjoys seeing its leadership on the hot seat.

A former employee is not so dismissive and says there is a cultural problem, a stubborn blindness at Facebook and other leading Internet companies like Twitter. The source says: “The hardest problems these companies face aren’t technological. They are ethical, and there’s not as much rigor in how it’s done.”

At a values level, some experts point out, Facebook has to decide if its solution is free speech (the more people post, the more the truth rises), or clear restrictions.

And technically, there’s no shortage of ideas about how to fix the process.

A former employee says speech is so complex, you can’t expect Facebook to arrive at the same decision each and every time; but you can expect a company that consistently ranks among the 10 most valuable on earth, by market cap, to put more thought and resources into its censorship machine.

The source argues Facebook could afford to make content management regional — have decisions come from the same country in which a post occurs.

Speech norms are highly regional. When Facebook first opened its offices in Hyderabad, India, a former employee says, the guidance the reviewers got was to remove sexual content. In a test run, they ended up removing French kissing. Senior management was blown away. The Indian reviewers were doing something Facebook did not expect, but which makes perfect sense for local norms.

Harvard Business School professor Ben Edelman says Facebook could invest engineering resources in categorizing the posts. “It makes no sense at all,” he says, that when a piece of content is flagged, it goes into one long line. The company could have the algorithm track which flagged content is getting the most circulation and move that up in the queue, he suggests.
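
Mr. Edelman’s suggestion is essentially to replace the first-in, first-out line with a priority queue keyed on circulation. A minimal sketch of the idea, with invented post data:

    import heapq

    review_queue = []  # Python's heapq is a min-heap, so share counts are negated

    def flag_post(post_id, shares):
        """Queue a flagged post, prioritized by how widely it is circulating."""
        heapq.heappush(review_queue, (-shares, post_id))

    def next_for_review():
        """Return the most widely circulated flagged post."""
        _, post_id = heapq.heappop(review_queue)
        return post_id

    flag_post("p1", shares=120)
    flag_post("p2", shares=98_000)  # spreading fast, so it jumps the line
    flag_post("p3", shares=4_500)
    print(next_for_review())  # "p2"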

Zuckerberg finds himself at the helm of a company that started as a tech company — run by algorithms, free of human judgment, the mythology went. And now he’s just so clearly the CEO of a media company — replete with highly complex rules (What is hate speech anyway?); with double standards (If it’s “news” it stays, if it’s a rant it goes); and with an enforcement mechanism that is set up to fail.

Source: From Hate Speech To Fake News: The Facebook Content Crisis Facing Mark Zuckerberg : All Tech Considered : NPR