Martin: It’s not the economy, stupid. It’s the media

Good column, if depressing:

A major change in the communications system, Canadian media guru Marshall McLuhan opined long ago, “is bound to cause a great readjustment of all the social patterns, the educational patterns, the sources and conditions of political power, [and] public opinion patterns.”

Given the vast changes that have marked the digital age, McLuhan can hardly be accused of overstatement. The online world is corrosively altering social and political “patterns,” to use the McLuhan term, and destabilizing democracies.

Using internet platforms, fringe groups and hate generators have multiplied exponentially and contributed to an erosion of trust in public institutions. They’ve prompted violent threats against public officials, driven the United States into two warring silos, and cast a seldom-seen pall of negativity over the public square.

The contamination of the dialogue is such that even agreement on what constitutes basic truths has come to be tenuous. Talk of a post-truth America is no joke. Canada isn’t there yet, but give the disinformation amplifiers more scope and we soon might be.

Bill Clinton’s campaign strategist, James Carville, may have had it right back in the 1990s when he famously declared why voters had soured on then-president George H.W. Bush: “It’s the economy, stupid.” But not today. Now it’s the media, stupid. It’s the upheaval in the communications system. A media landscape gone rogue.

Economic woes get regulated. Not so the convulsions in our information ecosphere. We have no idea how to harness the hailstorm. Few efforts are being made. Calls for regulation are greeted by a great hue and cry over potential freedom-of-speech transgressions.

So broadly have media influence and power expanded that a cable network has become the avatar of the Republican Party. Donald Trump has maintained support from the GOP because he has what Richard Nixon didn’t: a kowtowing TV network and, until he was blocked, a Twitter following of 90 million users.

Social media platforms, like an upstart rival sports league, have served to delegitimize, if not disenfranchise, traditional media, magnifying public distrust. There is still a lot of high-quality journalism around, including at this awards-dominating newspaper. But traditional media no longer set the tenor of the national discussion and help shape a national consensus as in times past. Enfeeble a society’s credible news and information anchors, replace them with flotsam and you get, as per the United States, a country increasingly adrift.

The trajectory of media decline is worth recalling. From having just two or three television networks in Canada and the U.S. that aired news only for an hour or so a day, we have expanded to around-the-clock cable networks. News couldn’t fill that much airtime so opinion did – heaps of it. Hours of tirades filled the airwaves from reactionaries like Rush Limbaugh. Then the internet took hold, along with the invasion of unfiltered social media, awash in vitriol.

And so the chaff now overwhelms the wheat.

Mainstream media got in on the act, lowering their standards, contributing to the debasement of the dialogue by running ad hominem insults on comment boards from readers who hide behind pseudonyms. As I’ve noted before, that’s not freedom of speech. That’s fraud speech.

The crisis in our information complex is glaring, but it isn’t being addressed. Mainstream media, while demanding transparency everywhere else, rarely applies this standard to itself. Despite its exponential growth in importance, the media industry gets only a small fraction of the scrutiny that other powerful institutions do.

Big issues go largely unexamined in Canadian media. We rarely take a critical look at the unfettered rise of advocacy journalism, the impact of the disappearance of local newspapers or media ownership monopolies. There are precious few media columnists in this country. There is no overarching media institute to address the problems.

Conservative leadership candidate Pierre Poilievre’s big idea is to deprive us of one of our longest-standing national institutions. He would gut the CBC, defund it practically out of existence. At his rallies, he’s cheered on lustily for the promise, an indication of the low regard held by many in the population toward the mainstream media.

Any kind of media-reform drive always runs up against the freedom of speech barrier. The Trudeau government has passed Bill C-10, but it was diluted and will have little regulatory impact. A Commission on Democratic Expression, whose membership included former Supreme Court Justice Beverley McLachlin, has recommended regulatory reforms to curb social media’s impact. But it didn’t receive anywhere near the attention it deserved.

There’s a vacuum. Ways to regulate the destabilizing forces in the new communications paradigm must be found; ways that leave no possibility of control by political partisans. Such ways are possible and, given the ravages of the new media age, imperative.

Source: It’s not the economy, stupid. It’s the media

May: Never tweet. Social media is complicating the age-old neutrality of the public service

Easier in my time, when the major worry was a leaked document appearing in the press. Safer never to tweet on public policy issues and debates while in government, as tweets can give the political level the perception that the public service is not neutral and impartial.

The public service did give the impression of not being impartial at times during the Harper government:

Social media is a part of life that is increasingly treacherous for Canada’s public servants, who may need better guidance to navigate their public and private lives online.

The blurring of that line was on display during the so-called freedom convoy protest that paralyzed downtown Ottawa. Some public servants took to social media to oppose or support the protest, sometimes with funds. Other public servants criticized colleagues who backed the protest as well as government mishandling of the nearly month-long blockade.

The storm of often anonymous allegations of misbehaviour on social media underlined an absence of transparency in the government agencies responsible for the ethical behaviour of bureaucrats. Neither the Treasury Board Secretariat nor the Office of the Public Sector Integrity Commissioner was willing or able to say whether any investigation or other action has been taken against any public servant.

On Reddit, members of public servant forums questioned the loyalty of federal workers who donated money to a convoy with an underlying mission to overthrow the government. Public servants on Twitter chided anyone who may have used government email to send a donation, accusing them of ethical breaches. One suggested any of them with secret security clearances or higher should face a loyalty interview from CSIS, the Canadian Security Intelligence Service.

Some demanded they be investigated or have security clearances revoked. Others called for dismissal. One senior bureaucrat told Policy Options public servants should be dismissed if they funded anything to do with removing the elected government to which they pledged loyalty.

Meanwhile, eyebrows were raised when Artur Wilczynski, an assistant deputy minister for diversity and inclusion at the Communications Security Establishment, tweeted a stinging criticism of Ottawa police’s handling of the protest. As a rule, senior bureaucrats, especially from such a top-secret department, keep such opinions to themselves. The CSE called Wilczynski’s criticism a personal opinion, noting it would be inappropriate for the CSE to comment on matters that don’t fall within its mandate.

It’s unclear whether any public servants are being investigated or disciplined for an ethical breach – or an illegal act.

Public servants typically have a lot of latitude to engage in political activities before risking an ethical breach. That changed when the Emergencies Act was invoked, making a peaceful protest an illegal occupation.

The Treasury Board Secretariat, the public service’s employer, knows some public servants supported the protesters, a spokesperson said. But it is unaware of whether any were warned or disciplined by their departments for any public support online or offline.

“We do not collect information about complaints or disciplinary actions against employees,” the Treasury Board said in an email.

Social media users suggested at least a dozen public servants went to the Office of the Public Sector Integrity Commissioner to report the possibility that a handful of bureaucrats were on a leaked list of convoy donors that was exposed when hackers took down the crowdfunding website GiveSendGo. The office investigates wrongdoing that could pose serious threats to the integrity of the public service.

Commissioner Joe Friday refused to say whether he has received or is investigating any complaints. His office sees a spike in inquiries and disclosures when hot-button public issues dominate the news, he said.

Social media is here to stay. But how public servants can use it while balancing their duty of loyalty to government with their right to free speech and political activity remains an open question.

Public servants have rules for behavior at work and during off-hours, though the line between on and off the clock has increasingly blurred after two years of working at home. The rules come from the Public Service Employment Act, the Values and Ethics Code and the codes of conduct for each department.

But some argue there’s a grey zone now that partisan politics and political activities have moved online.

Jared Wesley, an associate professor of political science at the University of Alberta, said governments have not done a good job updating their ethics protocols, standards of practice and codes of conduct to manage social media. They amount to deputy ministers offering a rule of thumb: “if your boss wouldn’t like it, don’t post it,” he said.

Carleton University’s Amanda Clarke and employment lawyer Benjamin Piper examined the gap in guidance in a paper, A Legal Framework to Govern Online Political Expression by Public Servants. Clarke, a digital and public management expert and associate professor, said this uncertainty about the rules cuts two ways.

“What we can learn from this incident is that there is already a grey area and it’s dangerous for public servants who are not equipped with sufficient guidance,” said Clarke.

“There are two outcomes. One: they over-censor and unnecessarily give up their rights to political participation …. The second is they go to the other extreme and abandon their obligation to be neutral, which can put them into dangerous positions, personally and professionally and, at the larger democratic level, undermine the public service’s credibility.”

In fact, public servants believe impartiality is important, a recent survey shows, and 97 per cent steer clear of political activities beyond voting. Eighty-nine per cent believe expressing views on social media can affect their impartiality or the perception of their impartiality. But it found only about 70 per cent of managers felt capable of providing guidance to workers on engaging in such activities.

Clarke argues the modernization of public service must address how public servants reconcile their online lives with their professional duties.

“You can’t expect public servants not to have online political lives. This is where politics unfolds today. So, anybody who is trying to say that is the solution is missing the reality of how we engage in politics today.”

Nearly 40 years ago, the Supreme Court’s landmark Fraser ruling confirmed public servants’ political rights – with some restrictions. They depend on factors such as one’s rank or level of influence in the public service; the visibility of the political activity; the relationship between the subject matter and the public servant’s work; and whether they can be identified as public servants.

David Zussman, who long held the Jarislowsky Chair in Public Management at the University of Ottawa, said the rules should be the same whether a public servant pens an op-ed, a letter to the editor or a tweet.

“Public servants should be able to make personal decisions about who they support, but the overriding consideration is keeping the public service neutral and apolitical.”

Shortcomings of existing rules, however, were revealed in the 2015 election, when an Environment Canada scientist, Tony Turner, was suspended for writing and performing a protest song called “Harperman” that went viral on YouTube.

His union, the Professional Institute of the Public Service of Canada, argued he had violated no restrictions: he wasn’t an executive, his job was tracking migratory birds, he wrote the song on his own time, used no government resources and there was nothing in the video or on his website to indicate he was a public servant. He hadn’t produced the video or posted it to YouTube.

About the same time, a Justice Department memo surfaced, warning: “you are a public servant 24/7,” anything posted is public and there is no privacy on the Internet. Unions feared public servants could be prevented from using social media, a basic part of life.

Twitter, Facebook, LinkedIn and YouTube have complicated the rules for public servants posting an opinion, signing an online petition or making a crowdsourced donation, Clarke and Piper argue.

Social media can amplify opinions in public debate and indiscriminate liking, sharing, or re-posting can ramp up visibility more than expected.  Assessments of whether a public servant crossed the line have to consider whether they used privacy settings, pseudonyms or identified as public servants.

Clarke and Piper question whether public servants who never mention their jobs should be punished if they are outed as government employees in a data breach – like those who donated to the convoy protest. What about a friend taking a screenshot of a private email you sent criticizing government and sending it to others or posting it online?

The Internet makes it easy to identify people, Piper said. Public servants who avoid disclosing their employer on their personal social media accounts can be identified using Google, LinkedIn or the government’s own employee directory.

So back to the convoy protest. Before the emergency order, would public servants have unwittingly crossed the line by supporting the protest or donating money to it?

The protest opposed vaccine mandates and pandemic restrictions, though the blockade also became home to a mix of grievances. Many supporters signed a memorandum of understanding by one of the organizing groups calling for the Governor-General and Senate to form a new government with the protestors.

“It’s hard for me to see how a private donation by someone who has a job that has nothing to do with vaccine mandates or the trucker protest could attract discipline. That would be a really aggressive application of discipline by the government,” said Piper.

But Wesley argues that the convoy was known from the start as a seditionist organization and anyone who gave money to the original GoFundMe account should have seen the attached MOU. It was later withdrawn.

“Most public servants sign an oath to the Queen and should have recognized that signing or donating money to that movement was an abrogation of your oath,” he said. “I think a re-examination of who they are, who they work for and implications of donating to a cause that would have upended Canada’s system of constitutional monarchy is definitely worth a conversation with that individual.”

Perhaps part of the problem is that the traditional bargain of loyalty and impartiality between politicians and public servants is coming unglued.

The duty of loyalty is shifting. The stability and job security that once attracted new recruits for lifelong careers in government aren’t important for many young workers, who like remote work and expect to work for many employers.

A recent study found half of the politicians surveyed don’t really want an impartial public service. Brendan Boyd, assistant professor at MacEwan University, suggests they prefer a bureaucracy that enthusiastically defends its policies rather than simply implements and explains them. However, 85 per cent of the politicians say that outside of work hours, public servants should be impartial.

“There will be further test cases, and how we define a duty of loyalty is going to either be confirmed or adapted or changed,” said Friday.

“But public servants are still allowed to communicate, hold or express views as a means of expression. And the pace at which the views, thoughts and opinions are expressed is so phenomenal that I think it fundamentally changes the playing field.”

Source: Never tweet. Social media is complicating the age-old neutrality of the public service

New report details how autocrats use the internet to harass and suppress activists in Canada

Thousands of miles away from her homeland in Syria, she organized protests and ran social media pages in Canada in support of opposition forces fighting President Bashar al-Assad’s regime.

Then anonymous complaints started rolling in and prompted Facebook to shut down her group page. Trolls left “nasty and dirty” comments on social media and created fake profiles with her photos, she said, while a Gmail administrator alerted her that “a state sponsor” was trying to hack her account.

“The Assad regime was functioning through this network of thugs that they call Shabeeha. Inside of Syria, those thugs would be physically beating up people and terrorizing them,” said the 42-year-old Toronto woman.

“Then they were also very much online, so they terrorized people online as well.”

As diaspora communities are increasingly relying on social media and other online platforms to pursue advocacy work, authoritarian states are trying to exert their will over overseas dissidents through what’s dubbed “digital transnational repression,” said a new study released Tuesday.

“States that engage in transnational repression use a variety of methods to silence, persecute, control, coerce, or otherwise intimidate their nationals abroad into refraining from transnational political or social activities that may undermine or threaten the state and power within its border,” said the report by the Citizen Lab at University of Toronto’s Munk School of Global Affairs.

“Thus, nationals of these states who reside abroad are still limited in how they can exercise ‘their rights, liberties, and voice’ and remain subject to state authoritarianism even after leaving their country of origin.”

Being a country of immigrants — particularly refugees seeking protection from persecution — Canada is vulnerable to this kind of digital attack, amid the advancement of surveillance technology and rising authoritarianism around the globe, said the report’s authors.

“There is this misassumption that once people arrive in Canada from authoritarian countries, they are safe. We need to redefine what safety is,” said Noura Al-Jizawi, one of the report’s co-authors.

“This is not only affecting the day-to-day life of these people, but it’s also affecting the civic rights, their freedom of speech or their freedom of assembly of an entire community that’s beyond the individuals who are being targeted.”

A team of researchers interviewed 18 individuals, all of whom resided in Canada and had moved or fled to Canada from 11 different places, including Syria, Saudi Arabia, Yemen, Tibet, Hong Kong, China, Rwanda, Iran, Afghanistan, East Turkestan, and Balochistan.

The participants shared their experiences of being intimidated for the advocacy work they conducted in Canada, as well as the impacts of such threats — allegedly from these foreign states and their supporters — on their well-being and the diaspora communities they come from.

“Their main concern besides their privacy and the privacy of their family is the friends and colleagues back home. If the government targets their devices digitally, they would reveal the underground and hidden network of activists,” said Al-Jizawi.

“Many of them mention that they try to avoid the communities from their country of origin because they can’t feel safe connecting with these people.”

Many of the participants in the study said they have reached out for assistance to authorities such as the Canadian Security Intelligence Service but were disappointed.

“The responses were generally like, we can’t help you or this isn’t a crime and there’s nothing actionable here. In one case, they suggested to the person to hire a private detective,” noted Siena Anstis, another co-author of the study.

“Law enforcement is probably not that well equipped or trained to understand the broader context within which this is happening. The way that they handle these cases is quite dismissive.”

The anonymous Syrian-Canadian political activist who participated in the study said victims of transnational repression will stop reporting to Canadian officials if nothing comes out of their complaints.

“Every day we’re becoming more and more digital, which makes us more vulnerable to digital attacks and digital privacy issues. I hope our government will start thinking about how to protect us from this emerging threat that we never had to worry about before,” said the woman, who came here from Aleppo as a 7-year-old and has stopped her political activities to free Syria.

“If someone like me who is extremely outspoken and very difficult to stifle felt a little bit overwhelmed by all of it, you can imagine other people who recently came from Syria and still have a lot of ties there. I know a lot of people that will not open their mouth publicly because they’re scared what will happen.”

The report urges Ottawa to create a dedicated government agency to support victims and conduct research to better understand the scale and impact of these activities on the exercise of Canadian human rights. It also recommends establishing federal policies for the sale of surveillance technologies to authoritarian states and for guiding how social media platforms can better protect victims from digital attacks.

“It might seem at this stage it’s only happening to some communities in Canada and it doesn’t matter,” said Anstis. “But collectively it’s our human rights that are being eroded. It’s our capacity to engage in, affirm and protect against human rights and democracy. That space for dialogue is really reducing.”

Source: New report details how autocrats use the internet to harass and suppress activists in Canada

U.S. accounts drive Canadian convoy protest chatter

Of note. While recent concerns have understandably focussed on Chinese and Russian government interference, we likely need to pay more attention to the threats from next door, along with the pernicious threats via Facebook and Twitter:

Known U.S.-based sources of misleading information have driven a majority of Facebook and Twitter posts about the Canadian COVID-19 vaccine mandate protest, per German Marshall Fund data shared exclusively with Axios.

Driving the news: Ottawa’s “Freedom Convoy” has ballooned into a disruptive political protest against Prime Minister Justin Trudeau and inspired support among right-wing and anti-vaccine mandate groups in the U.S.

Why it matters: Trending stories about the protest appear to be driven by a small number of voices as top-performing accounts with huge followings are using the protest to drive engagement and inflame emotions with another hot-button issue.

  • “They can flood the zone — making something news and distorting what appears to be popular,” said Karen Kornbluh, senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund. 

What they’re saying: “The three pages receiving the most interactions on [convoy protest] posts — Ben Shapiro, Newsmax and Breitbart — are American,” Kornbluh said. Other pages with the most action on convoy-related posts include Fox News, Dan Bongino and Franklin Graham.

  • “These major online voices with their bullhorns determine what the algorithm promotes because the algorithm senses it is engaging,” she said.
  • Using a platform’s design to orchestrate anti-government action mirrors how the “Stop the Steal” groups worked around the Jan. 6 Capitol riot, with a few users quickly racking up massive followings, Kornbluh said.

By the numbers: Per German Marshall Fund data, from Jan. 22, when the protests began, to Feb. 12, there were 14,667 posts on Facebook pages about the Canadian protests, getting 19.3 million interactions (including likes, comments and shares).

  • For context: The Beijing Olympics had 20.9 million interactions in that same time period.
  • On Twitter, from Feb. 3 to Feb. 13, tweets about the protests have been favorited at least 4.1 million times and retweeted at least 1.1 million times.
  • Pro-convoy videos on YouTube have racked up 47 million views, with Fox News’ YouTube page getting 29.6 million views on related videos.

The big picture: New research published in the Atlantic finds that most public activity on Facebook comes from a “tiny, hyperactive group of abusive users.”

  • Since user engagement remains the most important factor in Facebook’s weighting of content recommendations, the researchers write, the most abusive users will wield the most influence over the online conversation.
  • “Overall, we observed 52 million users active on these U.S. pages and public groups, less than a quarter of Facebook’s claimed user base in the country,” the researchers write. “Among this publicly active minority of users, the top 1 percent of accounts were responsible for 35 percent of all observed interactions; the top 3 percent were responsible for 52 percent. Many users, it seems, rarely, if ever, interact with public groups or pages.”

Meanwhile, foreign meddling is further confusing the narrative around the trucker protest.

  • NBC News reported that overseas content mills in Vietnam, Bangladesh, Romania and other countries are powering Facebook groups promoting American versions of the trucker convoys. Facebook took many of the pages down.
  • A report from Grid News found a Bangladeshi digital marketing firm was behind two of the largest Facebook groups related to the Canadian Freedom Convoy before being removed from the platform.
  • Grid News reported earlier that Facebook groups supporting the Canadian convoy were being administered by a hacked Facebook account belonging to a Missouri woman.

Source: U.S. accounts drive Canadian convoy protest chatter

Canada is sleepwalking into bed with Big Tech, as politicos float between firms and public office

Sort of inevitable, unfortunately:

Canadians have been served a familiar dish of election promises aimed at taking on the American web giants. But our governments have demonstrated a knack for aggressive procrastination on this file.

A new initiative is providing a glimpse into Canada’s revolving door with Big Tech, and as the clock ticks on the Liberal government’s hundred-day promise to enact legislation, Canadians have 22 reasons to start asking tough questions.

The Regulatory Capture Lab — a collaboration between FRIENDS (formerly Friends of Canadian Broadcasting), the Centre for Digital Rights and McMaster University’s Master of Public Policy in Digital Society Program — is shedding light on a carousel of unconstrained career moves between public policy teams at Big Tech firms and federal public offices.

Canadians should review this new resource and see for themselves the creeping links between the most powerful companies on earth and the institutions responsible for reining them in. 

And they’d be wise to look soon. According to the Liberal government, a wave of tech-oriented policy is in formation, from updating the Broadcasting Act to forcing tech firms to pay for journalism that appears on their platforms.

But our work raises vital questions about all these proposals: are Canadians’ interests being served through these pieces of legislation? Has a slow creep of influence over public office put Big Tech in the driver’s seat? These promises of regulation have been around for years, so, why is it taking so long to get on with it?

Cosy relations between Big Tech and those in public office in Canada have bubbled to the surface before, most notably through the work of Kevin Chan, the man for Meta (Facebook) in Canada. In 2020, the Star exposed Chan’s efforts to recruit senior analysts from within Canadian Heritage, the department leading the efforts to regulate social media giants, to work at Facebook.

It doesn’t stop there. A 2021 story from The Logic revealed the scope of Chan’s enthusiasm in advancing the interests of his employer. Under Chan’s skilful direction, Facebook has managed to get its tendrils of influence into everything — government offices, universities, even media outlets. And in so many instances, Chan has found willing participants across the aisle who offer up glowing statements about strategic partnerships with Facebook.

Facebook isn’t alone in the revolving door. For some politicos, moving between Big Tech and public office appears to be the norm, in both directions. Big Tech public policy teams are filled with people who have worked in Liberal and Conservative offices, the PMO, Heritage and Finance ministries, the Office of the Privacy Commissioner, and more.

Conversely, some current senior public office holders are former Big Tech employees. Amazon, Google, Netflix, Huawei, Microsoft and Palantir are all connected through a revolving door with government. And this doesn’t even begin to cover Big Tech’s soft-power activities in Canada, from academic partnerships and deals with journalism outlets (including this one) to shared initiatives with government to save democracy. The connections are vast and deep.

So, why has tech regulation taken so long? Armed with the knowledge that so many of Canada’s brightest public policy minds are moving between the offices of Big Tech and the halls of power in Ottawa, Canadians should be forgiven for jumping to conclusions. Or, maybe it’s just that simple? 

That these employment moves are taking place in both directions is hardly surprising. But the fact that so little attention has been paid to this phenomenon is deeply troubling. And how can this power be held to account when our journalism outlets are left with little choice but to partner with Big Tech?

The Regulatory Capture Lab has pried open the window on this situation, but others must jump in. It’s time for Canadians to start asking tough questions. FRIENDS is ready to get the answers.

Source: https://www.thestar.com/opinion/contributors/2022/01/17/canada-is-sleepwalking-into-bed-with-big-tech-as-politicos-float-between-firms-and-public-office.html

Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

More on Facebook and Zuckerberg’s failure to act against mis- and dis-information:

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform since the pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others—including tackling “manufactured virality”—were early concepts that employees were seeking approval from Zuckerberg to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation views came from “manufactured virality” pages, which drew less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited​​—content like the altered Pelosi video—the more likely the link in the post was to be clicked by users. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Since the content with more engagement was placed more highly in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying that they had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Source: Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

Facebook’s language gaps weaken screening of hate, terrorism

Any number of good articles on the “Facebook papers” and its unethical and dangerous business practices:

As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were reviewed by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.

But when it comes to Arabic content moderation, the company said, “We still have more work to do. … We conduct research to better understand this complexity and identify how we can improve.”

In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.

Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.

In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.

In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.

The Moroccan colloquial Arabic, for instance, includes French and Berber words, and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Quran. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.

Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.

Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.

“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.

In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”

“We know our systems are not perfect,” it added.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.

Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.

Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”

Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.

“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.

Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.

“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.

In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.

When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.

When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.

In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.

Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.

Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77% of the time, one document said.

“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90% of the time, a report said.

Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.

“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”

Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.

“The stereotype that Arabic is one entity is a major problem,” said Enam al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.

Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.

Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.

During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.

In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

“The repetition of false positives creates a huge drain of resources,” it said.

In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It also called for improvements to Facebook’s broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.

Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.

But the company also lamented that “there is not one clear mitigation strategy.”

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”

Source: Facebook’s language gaps weaken screening of hate, terrorism

How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Good in-depth article:

Facebook’s civic-integrity team was always different from all the other teams that the social media company employed to combat misinformation and hate speech. For starters, every team member subscribed to an informal oath, vowing to “serve the people’s interest first, not Facebook’s.”

The “civic oath,” according to five former employees, charged team members to understand Facebook’s impact on the world, keep people safe and defuse angry polarization. Samidh Chakrabarti, the team’s leader, regularly referred to this oath—which has not been previously reported—as a set of guiding principles behind the team’s work, according to the sources.

Chakrabarti’s team was effective in fixing some of the problems endemic to the platform, former employees and Facebook itself have said.

But, just a month after the 2020 U.S. election, Facebook dissolved the civic-integrity team, and Chakrabarti took a leave of absence. Facebook said employees were assigned to other teams to help share the group’s experience across the company. But for many of the Facebook employees who had worked on the team, including a veteran product manager from Iowa named Frances Haugen, the message was clear: Facebook no longer wanted to concentrate power in a team whose priority was to put people ahead of profits.

Five weeks later, supporters of Donald Trump stormed the U.S. Capitol—after some of them organized on Facebook and used the platform to spread the lie that the election had been stolen. The civic-integrity team’s dissolution made it harder for the platform to respond effectively to Jan. 6, one former team member, who left Facebook this year, told TIME. “A lot of people left the company. The teams that did remain had significantly less power to implement change, and that loss of focus was a pretty big deal,” said the person. “Facebook did take its eye off the ball in dissolving the team, in terms of being able to actually respond to what happened on Jan. 6.” The former employee, along with several others TIME interviewed, spoke on the condition of anonymity, for fear that being named would ruin their career.

Enter Frances Haugen

Haugen revealed her identity on Oct. 3 as the whistle-blower behind the most significant leak of internal research in the company’s 17-year history. In a bombshell testimony to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security two days later, Haugen said the civic-integrity team’s dissolution was the final event in a long series that convinced her of the need to blow the whistle. “I think the moment which I realized we needed to get help from the outside—that the only way these problems would be solved is by solving them together, not solving them alone—was when civic-integrity was dissolved following the 2020 election,” she said. “It really felt like a betrayal of the promises Facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community.”

In a statement provided to TIME, Facebook’s vice president for integrity Guy Rosen denied the civic-integrity team had been disbanded. “We did not disband Civic Integrity,” Rosen said. “We integrated it into a larger Central Integrity team so that the incredible work pioneered for elections could be applied even further, for example, across health-related issues. Their work continues to this day.” (Facebook did not make Rosen available for an interview for this story.)

[Image caption: The defining values of the civic-integrity team, as described in a 2016 presentation given by Samidh Chakrabarti and Winter Mason at the Impacts of Civic Technology Conference. Civic-integrity team members were expected to adhere to this list of values, which was referred to internally as the “civic oath”.]

Haugen left the company in May. Before she departed, she trawled Facebook’s internal employee forum for documents posted by integrity researchers about their work. Much of the research was not related to her job, but was accessible to all Facebook employees. What she found surprised her.

Some of the documents detailed an internal study that found that Instagram, its photo-sharing app, made 32% of teen girls feel worse about their bodies. Others showed how a change to Facebook’s algorithm in 2018, touted as a way to increase “meaningful social interactions” on the platform, actually incentivized divisive posts and misinformation. They also revealed that Facebook spends almost all of its budget for keeping the platform safe only on English-language content. In September, the Wall Street Journal published a damning series of articles based on some of the documents that Haugen had leaked to the paper. Haugen also gave copies of the documents to Congress and the Securities and Exchange Commission (SEC).

The documents, Haugen testified Oct. 5, “prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages.” She told Senators that the failings revealed by the documents were all linked by one deep, underlying truth about how the company operates. “This is not simply a matter of certain social media users being angry or unstable, or about one side being radicalized against the other; it is about Facebook choosing to grow at all costs, becoming an almost trillion-dollar company by buying its profits with our safety,” she said.

Facebook’s focus on increasing user engagement, which ultimately drives ad revenue and staves off competition, she argued, may keep users coming back to the site day after day—but also systematically boosts content that is polarizing, misinformative and angry, and which can send users down dark rabbit holes of political extremism or, in the case of teen girls, body dysmorphia and eating disorders. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people,” Haugen said. (In 2020, the company reported $29 billion in net income—up 58% from a year earlier. This year, it briefly surpassed $1 trillion in total market value, though Haugen’s leaks have since knocked the company down to around $940 billion.)

Asked if executives adhered to the same set of values as the civic-integrity team, including putting the public’s interests before Facebook’s, a company spokesperson told TIME it was “safe to say everyone at Facebook is committed to understanding our impact, keeping people safe and reducing polarization.”

In the same week that an unrelated systems outage took Facebook’s services offline for hours and revealed just how much the world relies on the company’s suite of products—including WhatsApp and Instagram—the revelations sparked a new round of national soul-searching. It led some to question how one company can have such a profound impact on both democracy and the mental health of hundreds of millions of people. Haugen’s documents are the basis for at least eight new SEC investigations into the company for potentially misleading its investors. And they have prompted senior lawmakers from both parties to call for stringent new regulations.

Haugen urged Congress to pass laws that would make Facebook and other social media platforms legally liable for decisions about how they choose to rank content in users’ feeds, and force companies to make their internal data available to independent researchers. She also urged lawmakers to find ways to loosen CEO Mark Zuckerberg’s iron grip on Facebook; he controls more than half of voting shares on its board, meaning he can veto any proposals for change from within. “I came forward at great personal risk because I believe we still have time to act,” Haugen told lawmakers. “But we must act now.”

Potentially even more worryingly for Facebook, other experts it hired to keep the platform safe, now alienated by the company’s actions, are growing increasingly critical of their former employer. They experienced first hand Facebook’s unwillingness to change, and they know where the bodies are buried. Now, on the outside, some of them are still honoring their pledge to put the public’s interests ahead of Facebook’s.

Inside Facebook’s civic-integrity team

Chakrabarti, the head of the civic-integrity team, was hired by Facebook in 2015 from Google, where he had worked on improving how the search engine communicated information about lawmakers and elections to its users. A polymath described by one person who worked under him as a “Renaissance man,” Chakrabarti holds master’s degrees from MIT, Oxford and Cambridge, in artificial intelligence engineering, modern history and public policy, respectively, according to his LinkedIn profile.

Although he was not in charge of Facebook’s company-wide “integrity” efforts (led by Rosen), Chakrabarti, who did not respond to requests to comment for this article, was widely seen by employees as the spiritual leader of the push to make sure the platform had a positive influence on democracy and user safety, according to multiple former employees. “He was a very inspirational figure to us, and he really embodied those values [enshrined in the civic oath] and took them quite seriously,” a former member of the team told TIME. “The team prioritized societal good over Facebook good. It was a team that really cared about the ways to address societal problems first and foremost. It was not a team that was dedicated to contributing to Facebook’s bottom line.”

Chakrabarti began work on the team by questioning how Facebook could encourage people to be more engaged with their elected representatives on the platform, several of his former team members said. An early move was to suggest tweaks to Facebook’s “more pages you may like” feature that the team hoped might make users feel more like they could have an impact on politics.

After the chaos of the 2016 election, which prompted Zuckerberg himself to admit that Facebook didn’t do enough to stop misinformation, the team evolved. It moved into Facebook’s wider “integrity” product group, which employs thousands of researchers and engineers to focus on fixing Facebook’s problems of misinformation, hate speech, foreign interference and harassment. It changed its name from “civic engagement” to “civic integrity,” and began tackling the platform’s most difficult problems head-on.

Shortly before the midterm elections in 2018, Chakrabarti gave a talk at a conference in which he said he had “never been told to sacrifice people’s safety in order to chase a profit.” His team was hard at work making sure the midterm elections did not suffer the same failures as in 2016, in an effort that was generally seen as a success, both inside the company and externally. “To see the way that the company has mobilized to make this happen has made me feel very good about what we’re doing here,” Chakrabarti told reporters at the time. But behind closed doors, integrity employees on Chakrabarti’s team and others were increasingly getting into disagreements with Facebook leadership, former employees said. It was the beginning of the process that would eventually motivate Haugen to blow the whistle.

In 2019, the year Haugen joined the company, researchers on the civic-integrity team proposed ending the use of an approved list of thousands of political accounts that were exempt from Facebook’s fact-checking program, according to tech news site The Information. Their research had found that the exemptions worsened the site’s misinformation problem because users were more likely to believe false information if it were shared by a politician. But Facebook executives rejected the proposal.

The pattern repeated time and time again, as proposals to tweak the platform to down-rank misinformation or abuse were rejected or watered down by executives concerned with engagement or worried that changes might disproportionately impact one political party more than another, according to multiple reports in the press and several former employees. One cynical joke among members of the civic-integrity team was that they spent 10% of their time coding and the other 90% arguing that the code they wrote should be allowed to run, one former employee told TIME. “You write code that does exactly what it’s supposed to do, and then you had to argue with execs who didn’t want to think about integrity, had no training in it and were mad that you were hurting their product, so they shut you down,” the person said.

Sometimes the civic-integrity team would also come into conflict with Facebook’s policy teams, which share the dual role of setting the rules of the platform while also lobbying politicians on Facebook’s behalf. “I found many times that there were tensions [in meetings] because the civic-integrity team was like, ‘We’re operating off this oath; this is our mission and our goal,’” says Katie Harbath, a long-serving public-policy director at the company’s Washington, D.C., office who quit in March 2021. “And then you get into decisionmaking meetings, and all of a sudden things are going another way, because the rest of the company and leadership are not basing their decisions off those principles.”

Harbath admitted not always seeing eye to eye with Chakrabarti on matters of company policy, but praised his character. “Samidh is a man of integrity, to use the word,” she told TIME. “I personally saw times when he was like, ‘How can I run an integrity team if I’m not upholding integrity as a person?’”

Years before the 2020 election, research by integrity teams had shown Facebook’s group recommendations feature was radicalizing users by driving them toward polarizing political groups, according to the Journal. The company declined integrity teams’ requests to turn off the feature, BuzzFeed News reported. Then, just weeks before the vote, Facebook executives changed their minds and agreed to freeze political group recommendations. The company also tweaked its News Feed to make it less likely that users would see content that algorithms flagged as potential misinformation, part of temporary emergency “break glass” measures designed by integrity teams in the run-up to the vote. “Facebook changed those safety defaults in the run-up to the election because they knew they were dangerous,” Haugen testified to Senators on Tuesday. But they didn’t keep those safety measures in place long, she added. “Because they wanted that growth back, they wanted the acceleration on the platform back after the election, they returned to their original defaults. And the fact that they had to break the glass on Jan. 6, and turn them back on, I think that’s deeply problematic.”

In a statement, Facebook spokesperson Tom Reynolds rejected the idea that the company’s actions contributed to the events of Jan. 6. “In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platforms signals and information from our ongoing, regular engagement with law enforcement,” he said. “When those signals changed, so did the measures. It is wrong to claim that these steps were the reason for Jan. 6—the measures we did need remained in place through February, and some like not recommending new, civic or political groups remain in place to this day. These were all part of a much longer and larger strategy to protect the election on our platform—and we are proud of that work.”

Soon after the civic-integrity team was dissolved in December 2020, Chakrabarti took a leave of absence from Facebook. In August, he announced he was leaving for good. Other employees who had spent years working on platform-safety issues had begun leaving, too. In her testimony, Haugen said that several of her colleagues from civic integrity left Facebook in the same six-week period as her, after losing faith in the company’s pledge to spread their influence around the company. “Six months after the reorganization, we had clearly lost faith that those changes were coming,” she said.

After Haugen’s Senate testimony, Facebook’s director of policy communications Lena Pietsch suggested that Haugen’s criticisms were invalid because she “worked at the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives—and testified more than six times to not working on the subject matter in question.” On Twitter, Chakrabarti said he was not supportive of company leaks but spoke out in support of the points Haugen raised at the hearing. “I was there for over 6 years, had numerous direct reports, and led many decision meetings with C-level execs, and I find the perspectives shared on the need for algorithmic regulation, research transparency, and independent oversight to be entirely valid for debate,” he wrote. “The public deserves better.”

Can Facebook’s latest moves protect the company?

Two months after disbanding the civic-integrity team, Facebook announced a sharp directional shift: it would begin testing ways to reduce the amount of political content in users’ News Feeds altogether. In August, the company said early testing of such a change among a small percentage of U.S. users was successful, and that it would expand the tests to several other countries. Facebook declined to provide TIME with further information about how its proposed down-ranking system for political content would work.

Many former employees who worked on integrity issues at the company are skeptical of the idea. “You’re saying that you’re going to define for people what political content is, and what it isn’t,” James Barnes, a former product manager on the civic-integrity team, said in an interview. “I cannot even begin to imagine all of the downstream consequences that nobody understands from doing that.”

Another former civic-integrity team member said that designing algorithms that could detect any political content in all the languages and countries in the world—and keeping those algorithms updated to accurately map the shifting tides of political debate—would be a task that even Facebook does not have the resources to achieve fairly and equitably. Attempting to do so would almost certainly result in some content deemed political being demoted while other posts thrived, the former employee cautioned. It could also incentivize certain groups to try to game those algorithms by talking about politics in nonpolitical language, creating an arms race for engagement that would privilege the actors with enough resources to work out how to win, the same person added.

When Zuckerberg was hauled to testify in front of lawmakers after the Cambridge Analytica data scandal in 2018, Senators were roundly mocked on social media for asking basic questions such as how Facebook makes money if its services are free to users. (“Senator, we run ads” was Zuckerberg’s reply.) In 2021, that dynamic has changed. “The questions asked are a lot more informed,” says Sophie Zhang, a former Facebook employee who was fired in 2020 after she criticized Facebook for turning a blind eye to platform manipulation by political actors around the world.

“The sentiment is increasingly bipartisan” in Congress, Zhang adds. In the past, Facebook hearings have been used by lawmakers to grandstand on polarizing subjects like whether social media platforms are censoring conservatives, but this week they were united in their condemnation of the company. “Facebook has to stop covering up what it knows, and must change its practices, but there has to be government accountability because Facebook can no longer be trusted,” Senator Richard Blumenthal of Connecticut, chair of the Subcommittee on Consumer Protection, told TIME ahead of the hearing. His Republican counterpart Marsha Blackburn agreed, saying during the hearing that regulation was coming “sooner rather than later” and that lawmakers were “close to bipartisan agreement.”

As Facebook reels from the revelations of the past few days, it already appears to be reassessing product decisions. It has begun conducting reputational reviews of new products to assess whether the company could be criticized or its features could negatively affect children, the Journal reported Wednesday. It last week paused its Instagram Kids product amid the furor.

Whatever the future direction of Facebook, it is clear that discontent has been brewing internally. Haugen’s document leak and testimony have already sparked calls for stricter regulation and improved the quality of public debate about social media’s influence. In a post addressing Facebook staff on Wednesday, Zuckerberg put the onus on lawmakers to update Internet regulations, particularly relating to “elections, harmful content, privacy and competition.” But the real drivers of change may be current and former employees, who have a better understanding of the inner workings of the company than anyone—and the most potential to damage the business.

Source: How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Why Silicon Valley’s Optimization Mindset Sets Us Up for Failure

Of interest. Depends on how one views optimization and what one considers to be the objectives. Engineers and programmers tend to have a relatively narrow focus and thus blind spots to social and public goods:

In 2013 a Silicon Valley software engineer decided that food is an inconvenience—a pain point in a busy life. Buying food, preparing it, and cleaning up afterwards struck him as an inefficient way to feed himself. And so was born the idea of Soylent, Rob Rhinehart’s meal replacement powder, described on its website as an International Complete Nutrition Platform. Soylent is the logical result of an engineer’s approach to the “problem” of feeding oneself with food: there must be a more optimal solution.

It’s not hard to sense the trouble with this crushingly instrumental approach to nutrition.

Soylent may optimize meeting one’s daily nutritional needs with minimal cost and time investment. But for most people, food is not just a delivery mechanism for one’s nutritional requirements. It brings gustatory pleasure. It provides for social connection. It sustains and transmits cultural identity. A world in which Soylent spells the end of food also spells the degradation of these values.

Maybe you don’t care about Soylent; it’s just another product in the marketplace that no one is required to buy. If tech workers want to economize on time spent grocery shopping or a busy person faces the choice between grabbing an unhealthy meal at a fast-food joint or bringing along some Soylent, why should anyone complain? In fact, it’s a welcome alternative for some people.

But the story of Soylent is powerful because it reveals the optimization mindset of the technologist. And problems arise when this mindset begins to dominate—when the technologies begin to scale and become universal and unavoidable.

That mindset is inculcated early in the training of technologists. When developing an algorithm, computer science courses often define the goal as providing an optimal solution to a computationally-specified problem. And when you look at the world through this mindset, it’s not just computational inefficiencies that annoy. Eventually, it becomes a defining orientation to life as well. As one of our colleagues at Stanford tells students, everything in life is an optimization problem.

The desire to optimize can favor some values over others. And the choice of which values to favor, and which to sacrifice, are made by the optimizers who then impose those values on the rest of us when their creations reach great scale. For example, consider that Facebook’s decisions about how content gets moderated or who loses their accounts are the rules of expression for more than three billion people on the platform; Google’s choices about what web pages to index determine what information most users of the internet get in response to searches. The small and anomalous group of human beings at these companies create, tweak, and optimize technology based on their notions of how it ought to be better. Their vision and their values about technology are remaking our individual lives and societies. As a result, the problems with the optimization mindset have become our problems, too.

A focus on optimization can lead technologists to believe that increasing efficiency is inherently a good thing. There’s something tempting about this view. Given a choice between doing something efficiently or inefficiently, who would choose the slower, more wasteful, more energy-intensive path?

Yet a moment’s reflection reveals other ways of approaching problems. We put speed bumps onto roads near schools to protect children; judges encourage juries to take ample time to deliberate before rendering a verdict; the media holds off on calling an election until all the polls have closed. It’s also obvious that the efficient pursuit of a malicious goal—such as deliberately harming or misinforming people—makes the world worse, not better. The quest to make something more efficient is not an inherently good thing. Everything depends on the goal.

Technologists with a single-minded focus on efficiency frequently take for granted that the goals they pursue are worth pursuing. But, in the context of Big Tech, that would have us believe that boosting screen time, increasing click-through rates on ads, promoting purchases of an algorithmically-recommended item, and profit-maximizing are the ultimate outcomes we care about.

The problem here is that goals such as connecting people, increasing human flourishing, or promoting freedom, equality, and democracy are not goals that are computationally tractable. Technologists are always on the lookout for quantifiable metrics. Measurable inputs to a model are their lifeblood, and the need to quantify produces a bias toward measuring things that are easy to quantify. But simple metrics can take us further away from the important goals we really care about, which may require multiple or more complicated metrics or, more fundamentally, may not lend themselves to straightforward quantification. This results in technologists frequently substituting what is measurable for what is meaningful. Or as the old saying goes, “Not everything that counts can be counted, and not everything that can be counted counts.”

There is no shortage of examples of the “bad proxy” phenomenon, but perhaps one of the most illustrative is an episode in Facebook’s history. Facebook Vice President Andrew Bosworth revealed in an internal memo in 2016 how the company pursued growth in the number of people on the platform as the one and only relevant metric for their larger mission of giving people the power to build community and bring the world closer together. “The natural state of the world,” he wrote, “is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.” To accomplish their mission of connecting people, Facebook simplified the task to growing their ever-more connected userbase. As Bosworth noted: “The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.” But what happens when “connecting people” comes with potential violations of user privacy, greater circulation of hate speech and misinformation, or political polarization that tears at the fabric of our democracy?

The optimization mindset is also prone to the “success disaster.” The issue here is not that the technologist has failed in accomplishing something, but rather that their success in solving for one objective has wide-ranging consequences for other things we care about. The realm of worthy ends is vast, and when it comes to world-changing technologies that have implications for fairness, privacy, national security, justice, human autonomy, freedom of expression, and democracy, it’s fair to assume that values conflict in many circumstances. Solutions aren’t so clear cut and inevitably involve trade-offs among competing values. This is where the optimization mindset can fail us.

Think for example of the amazing technological advances in agriculture. Factory farming has dramatically increased agricultural productivity. Where it once took 55 days to raise a chicken before slaughter, it now takes 35, and an estimated 50 billion are killed every year–more than five million killed every hour of every day of the year. But the success of factory farming has generated terrible consequences for the environment (increases in methane gases that contribute to climate change), our individual health (greater meat consumption is correlated with heart disease), and public health (greater likelihood of transmission of viruses from animals to humans that could cause a pandemic).

Success disasters abound in Big Tech as well. Facebook, YouTube, and Twitter have succeeded in connecting billions of people in a social network, but now that they have created a digital civic square, they have to grapple with the conflict between freedom of expression and the reality of misinformation and hate speech.

The bottom line is that technology is an explicit amplifier. It requires us to be explicit about the values we want to promote and how we trade off between them, because those values are encoded in some way into the objective functions that are optimized. And it is an amplifier because it can often allow for the execution of a policy far more efficiently than humans can. For example, with current technology we could produce vehicles that automatically issue speeding tickets whenever the driver exceeded the speed limit—and could issue a warrant for the driver’s arrest after they had enough speeding tickets. Such a vehicle would provide extreme efficiency in upholding speed limits. However, this amplification of safety would infringe on the competing values of autonomy (to make our own choices about safe driving speeds and the urgency of a given trip) or privacy (not to have our driving constantly surveilled).
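
(To make the authors’ point about objective functions concrete, here is a minimal, hypothetical sketch, not drawn from the book or from any company’s code. The feature names and weights are invented; the takeaway is that whoever chooses the weights is choosing which values the system amplifies.)

```python
# Illustrative sketch of "values encoded in an objective function".
# Feature names and weights are hypothetical, chosen only to show how an
# optimizer's choice of weights embeds value trade-offs.

from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # e.g. expected clicks or comments
    predicted_harm: float        # e.g. a model's misinformation/toxicity score

def score(post: Post, w_engagement: float = 1.0, w_harm: float = 0.0) -> float:
    """Ranking score: higher means the post is shown to more people.

    With w_harm = 0 the system optimizes engagement alone; raising w_harm
    trades some engagement away to demote likely-harmful content. Picking
    the weights is a value judgment, not a technical necessity.
    """
    return w_engagement * post.predicted_engagement - w_harm * post.predicted_harm

ranked = sorted(
    [Post(0.9, 0.8), Post(0.6, 0.1)],
    key=lambda p: score(p, w_engagement=1.0, w_harm=0.0),
    reverse=True,
)
# With w_harm = 0, the divisive-but-engaging post ranks first;
# raising w_harm reorders the feed.
```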

Several years ago, one of us received an invitation to a small dinner. Founders, venture capitalists, researchers at a secretive tech lab, and two professors assembled in the private dining room of a four-star hotel in Silicon Valley. The host—one of the most prominent names in technology—thanked everyone for coming and reminded us of the topic we’d been invited to discuss: “What if a new state were created to maximize science and tech progress powered by commercial models—what would that run like? Utopia? Dystopia?”

The conversation progressed, with enthusiasm around the table for the establishment of a small nation-state dedicated to optimizing the progress of science and technology. Rob raised his hand to speak. “I’m just wondering, would this state be a democracy? What’s the governance structure here?” The response was quick: “Democracy? No. To optimize for science, we need a beneficent technocrat in charge. Democracy is too slow, and it holds science back.”

Adapted from Chapter 1 of System Error: Where Big Tech Went Wrong and How We Can Reboot, published on September 7 by HarperCollins

Source: https://time.com/6096754/silicon-valley-optimization-mindset/

A ‘safe space for racists’: antisemitism report criticises social media giants

Sigh….

There has been a serious and systemic failure to tackle antisemitism across the five biggest social media platforms, resulting in a “safe space for racists”, according to a new report.

Facebook, Twitter, Instagram, YouTube and TikTok failed to act on 84% of posts spreading anti-Jewish hatred and propaganda reported via the platforms’ official complaints system.

Researchers from the Center for Countering Digital Hate (CCDH), a UK/US non-profit organisation, flagged hundreds of antisemitic posts over a six-week period earlier this year. The posts, including Nazi, neo-Nazi and white supremacist content, received up to 7.3 million impressions.

Although each of the 714 posts clearly violated the platforms’ policies, fewer than one in six were removed or had the associated accounts deleted after being pointed out to moderators.

The report found that the platforms are particularly poor at acting on antisemitic conspiracy theories, including tropes about “Jewish puppeteers”, the Rothschild family and George Soros, as well as misinformation connecting Jewish people to the pandemic. Holocaust denial was also often left unchecked, with 80% of posts denying or downplaying the murder of 6 million Jews receiving no enforcement action whatsoever.

Facebook was the worst offender, acting on just 10.9% of posts, despite introducing tougher guidelines on antisemitic content last year. In November 2020, the company updated its hate speech policy to ban content that denies or distorts the Holocaust.

However, a post promoting a viral article that claimed the Holocaust was a hoax, accompanied by a falsified image of the gates of Auschwitz bearing a white supremacist meme, was not removed after researchers reported it to moderators. Instead, it was labelled as false information, which CCDH say contributed to it reaching hundreds of thousands of users. Statistics from Facebook’s own analytics tool show the article received nearly a quarter of a million likes, shares and comments across the platform.

Twitter also showed a poor rate of enforcement action, removing just 11% of posts or accounts and failing to act on hashtags such as #holohoax (often used by Holocaust deniers) or #JewWorldOrder (used to promote anti-Jewish global conspiracies). Instagram also failed to act on antisemitic hashtags, as well as videos inciting violence towards Jewish people.

YouTube acted on 21% of reported posts, while Instagram and TikTok acted on around 18%. On TikTok, an app popular with teenagers, antisemitism frequently takes the form of racist abuse sent directly to Jewish users as comments on their videos.

The report, entitled Failure to Protect, found that the platform did not act in three out of four cases of antisemitic comments sent to Jewish users. When TikTok did act, it more frequently removed individual comments instead of banning the users who sent them, barring accounts that sent direct antisemitic abuse in just 5% of cases.

Forty-one videos identified by researchers as containing hateful content, which have racked up a total of 3.5m views over an average of six years, remain on YouTube.

The report recommends financial penalties to incentivise better moderation, with improved training and support. Platforms should also remove groups dedicated to antisemitism and ban accounts that send racist abuse directly to users.

Imran Ahmed, CEO of CCDH, said the research showed that online abuse is not about algorithms or automation, as the tech companies allowed “bigots to keep their accounts open and their hate to remain online”, even after alerting human moderators.

He said that social media, which he described as “how we connect as a society”, has become a “safe space for racists” to normalise “hateful rhetoric without fear of consequences”. “This is why social media is increasingly unsafe for Jewish people, just as it is becoming for women, Black people, Muslims, LGBT people and many other groups,” he added.

Ahmed said the test of the government’s online safety bill, first drafted in 2019 and introduced to parliament in May, is whether platforms can be made to enforce their own rules or face consequences themselves.

“While we have made progress in fighting antisemitism on Facebook, our work is never done,” said Dani Lever, a Facebook spokesperson. Lever told the New York Times that the prevalence of hate speech on the platform was decreasing, and she said that “given the alarming rise in antisemitism around the world, we have and will continue to take significant action through our policies”.

A Twitter spokesperson said the company condemned antisemitism and was working to make the platform a safer place for online engagement. “We recognise that there’s more to do, and we’ll continue to listen and integrate stakeholders’ feedback in these ongoing efforts,” the spokesperson said.

TikTok said in a statement that it condemns antisemitism and does not tolerate hate speech, and proactively removes accounts and content that violate its policies. “We are adamant about continually improving how we protect our community,” the company said.

YouTube said in a statement that it had “made significant progress” in removing hate speech over the last few years. “This work is ongoing and we appreciate this feedback,” said a YouTube spokesperson.

Instagram, which is owned by Facebook, did not respond to a request for comment before publication.

Source: A ‘safe space for racists’: antisemitism report criticises social media giants