Le Devoir Éditorial | La vraie nature de Zuckerberg

Well stated:

While Los Angeles County counts the dead from horrific wildfires, president-elect Donald Trump is spreading falsehoods as fast as the flames. On his Truth Social network on January 8, he accused California's governor, Democrat Gavin Newsom, of being responsible for water supply problems because of his refusal "to sign the water restoration declaration put before him, which would have allowed access to millions of litres of water from excess rain and snowmelt from the North."

A simple but rigorous fact-check by PolitiFact, the Poynter Institute's team, showed that the "water restoration declaration" simply does not exist, and that it was the water storage structures, not the methods of collecting water at the source, that caused the supply problems. For the president-elect, telling conscious, calculated lies to discredit an opponent has become as natural as breathing. It is therefore deeply troubling to learn that the CEO of Meta Platforms Inc. (Facebook, Instagram, WhatsApp, Threads), Mark Zuckerberg, is bowing to the reign of disinformation and banishing the era of fact-checking.

In a video posted to his network on January 7, the 40-year-old former Harvard student, whose fortune is estimated at more than US$200 billion, said he wanted to "go back to the roots" of Facebook, created in 2004, and give the people back their voice. Concretely, he announced the end of fact-checking by a team of verifiers in favour of community notes, in the style of the X network, where citizens react according to their knowledge, preconceptions and partisan intentions. Ironically, the fact-checking program Facebook launched in 2016, hailed around the world, was meant to counter the flood of fake news spawned by Republican candidate Donald Trump's campaign. This is not Zuckerberg's first about-face, but this one could be devastating.

In a long confessional interview given on Friday to polemicist and Trump supporter Joe Rogan (one of the most listened-to podcast hosts in the world), Zuckerberg explained that he erred in giving "ideologically biased" fact-checkers the mandate to validate the truthfulness of ideas posted by Facebook's users, of whom there are said to be 3.2 billion every month worldwide, no small number. "We're going to get rid of a series of restrictions on immigration and gender issues," he said, making no secret of his exasperation with woke currents, which seem to him to take up too much space.

The CEO continues his delusion: deeply uncomfortable with being "one of the people who decide what is true or false in the world," he prefers to end the "censorship" and advocates healthy self-regulation. Yet the disastrous experience of the X network, under the helm of another despot of disinformation, Elon Musk, has shown where a network gangrened by trolls and manipulators ends up. With community notes, truth does not come out the winner.

Zuckerberg speaks of censorship, but what the fact-checkers did had nothing to do with the outright exclusion of posts that strayed from the truth; rather, it amounted to reducing their reach. Facebook is a beast that feeds on engagement, the source of its staggering profits. The decision caused shock around the world, and a group such as the IFCN (International Fact-Checking Network) immediately denounced Zuckerberg's premise that fact-checkers are ideologically biased and therefore censors.

For now the news concerns only the United States, but Mark Zuckerberg has promised to extend the measure elsewhere. The situation is grave: have we forgotten a tragic misstep like the one that occurred in Myanmar in 2017? A devastating report published in 2022 by Amnesty International showed that "Facebook's algorithmic systems amplified the spread of harmful anti-Rohingya content in Myanmar." Thousands of Rohingya were thus "killed, tortured, raped and displaced." With Facebook as an echo chamber, virtual violence spilled over into the real world.

The new rules on hateful conduct laid down by Facebook prohibit targeting mental characteristics to insult people, but, quite outrageously, they will turn a blind eye to such allegations of mental illness or abnormality when they are based on gender or sexual orientation, "given political and religious discourse about transgenderism and homosexuality." The LGBTQ+ community is furious and worried, with good reason. This, then, is the true nature of Zuckerberg, who, under the catch-all cover of free expression, could fuel waves of hate and intolerance on his social networks.

Source: Éditorial | La vraie nature de Zuckerberg

Lisée | Mauvaise influence

Foreign interference from south of the border:

…. In the political Wild West that the Internet has become, Perez writes, and "given the extraordinary online propensity of far-right extremists of all kinds, this card plays into the hands of politicians like Poilievre. The power of this 'secret weapon' is enormous." This, the Liberal activist believes, is the "main threat" to Canadian democracy. He is absolutely right.

One can already wonder how Elon Musk will react, he who has so far invested some hundred million dollars in Trump's election, not counting the influence he personally wields through his 167 million followers. Now that he has acquired a taste for partisan politics, why would he deprive himself of helping bring to power a man, Pierre Poilievre, who has opposed every initiative to regulate the web giants?

If the Trumpists lose the American election, wouldn't a massive intervention in the Canadian election be a consolation prize for them? And if they win, why would Trump hold back, not only from encouraging his supporters to get involved, but from pulling some of the governmental levers at his disposal to help tip the balance? Outclassed, China and India might as well pack up and go home.

Source: Chronique | Mauvaise influence

Hate Speech’s Rise on Twitter Is Unprecedented, Researchers Find

Of note. Likely to get worse:

Before Elon Musk bought Twitter, slurs against Black Americans showed up on the social media service an average of 1,282 times a day. After the billionaire became Twitter’s owner, they jumped to 3,876 times a day.

Slurs against gay men appeared on Twitter 2,506 times a day on average before Mr. Musk took over. Afterward, their use rose to 3,964 times a day.

And antisemitic posts referring to Jews or Judaism soared more than 61 percent in the two weeks after Mr. Musk acquired the site.

These findings — from the Center for Countering Digital Hate, the Anti-Defamation League and other groups that study online platforms — provide the most comprehensive picture to date of how conversations on Twitter have changed since Mr. Musk completed his $44 billion deal for the company in late October. While the numbers are relatively small, researchers said the increases were atypically high.

The shift in speech is just the tip of a set of changes on the service under Mr. Musk. Accounts that Twitter used to regularly remove — such as those that identify as part of the Islamic State, which were banned after the U.S. government classified ISIS as a terror group — have come roaring back. Accounts associated with QAnon, a vast far-right conspiracy theory, have paid for and received verified status on Twitter, giving them a sheen of legitimacy.

These changes are alarming, researchers said, adding that they had never seen such a sharp increase in hate speech, problematic content and formerly banned accounts in such a short period on a mainstream social media platform.

“Elon Musk sent up the Bat Signal to every kind of racist, misogynist and homophobe that Twitter was open for business,” said Imran Ahmed, the chief executive of the Center for Countering Digital Hate. “They have reacted accordingly.”

Mr. Musk, who did not respond to a request for comment, has been vocal about being a “free speech absolutist” who believes in unfettered discussions online. He has moved swiftly to overhaul Twitter’s practices, allowing former President Donald J. Trump — who was barred for tweets that could incite violence — to return. Last week, Mr. Musk proposed a widespread amnesty for accounts that Twitter’s previous leadership had suspended. And on Tuesday, he ended enforcement of a policy against Covid misinformation.

But Mr. Musk has denied claims that hate speech has increased on Twitter under his watch. Last month, he tweeted a downward-trending graph that he said showed that “hate speech impressions” had dropped by a third since he took over. He did not provide underlying numbers or details of how he was measuring hate speech.

On Thursday, Mr. Musk said the account of Kanye West, which was restricted for a spell in October because of an antisemitic tweet, would be suspended indefinitely after the rapper, known as Ye, tweeted an image of a swastika inside the Star of David. On Friday, Mr. Musk said Twitter would publish “hate speech impressions” every week and agreed with a tweet that said hate speech spiked last week because of Ye’s antisemitic posts.

Changes in Twitter’s content not only have societal implications but also affect the company’s bottom line. Advertisers, which provide about 90 percent of Twitter’s revenue, have reduced their spending on the platform as they wait to see how it will fare under Mr. Musk. Some have said they are concerned that the quality of discussions on the platform will suffer.

On Wednesday, Twitter sought to reassure advertisers about its commitment to online safety. “Brand safety is only possible when human safety is the top priority,” the company wrote in a blog post. “All of this remains true today.”

The appeal to advertisers coincided with a meeting between Mr. Musk and Thierry Breton, the digital chief of the European Union, in which they discussed content moderation and regulation, according to an E.U. spokesman. Mr. Breton has pressed Mr. Musk to comply with the Digital Services Act, a European law that requires social platforms to reduce online harm or face fines and other penalties.

Mr. Breton plans to visit Twitter’s San Francisco headquarters early next year to perform a “stress test” of its ability to moderate content and combat disinformation, the spokesman said.

On Twitter itself, researchers said the increase in hate speech, antisemitic posts and other troubling content had begun before Mr. Musk loosened the service’s content rules. That suggested that a further surge could be coming, they said.

If that happens, it’s unclear whether Mr. Musk will have policies in place to deal with problematic speech or, even if he does, whether Twitter has the employees to keep up with moderation. Mr. Musk laid off, fired or accepted the resignations of more than half the company’s staff last month, including those who worked to remove harassment, foreign interference and disinformation from the service. Yoel Roth, Twitter’s head of trust and safety, was among those who quit.

The Anti-Defamation League, which files regular reports of antisemitic tweets to Twitter and keeps track of which posts are removed, said the company had gone from taking action on 60 percent of the tweets it reported to only 30 percent.

“We have advised Musk that Twitter should not just keep the policies it has had in place for years, it should dedicate resources to those policies,” said Yael Eisenstat, a vice president at the Anti-Defamation League, who met with Mr. Musk last month. She said he did not appear interested in taking the advice of civil rights groups and other organizations.

“His actions to date show that he is not committed to a transparent process where he incorporates the best practices we have learned from civil society groups,” Ms. Eisenstat said. “Instead he has emboldened racists, homophobes and antisemites.”

The lack of action extends to new accounts affiliated with terror groups and others that Twitter previously banned. In the first 12 days after Mr. Musk assumed control, 450 accounts associated with ISIS were created, up 69 percent from the previous 12 days, according to the Institute for Strategic Dialogue, a think tank that studies online platforms.

Other social media companies are also increasingly concerned about how content is being moderated on Twitter.

When Meta, which owns Facebook and Instagram, found accounts associated with Russian and Chinese state-backed influence campaigns on its platforms last month, it tried to alert Twitter, said two members of Meta’s security team, who asked not to be named because they were not authorized to speak publicly. The two companies often communicated on these issues, since foreign influence campaigns typically linked fake accounts on Facebook to Twitter.

But this time was different. The emails to their counterparts at Twitter bounced or went unanswered, the Meta employees said, in a sign that those workers may have been fired.

Source: Hate Speech’s Rise on Twitter Is Unprecedented, Researchers Find

How Social Media Amplifies Misinformation More Than Information

Not surprising but useful study:

It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.

Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

“We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous posts that were not flagged from the same accounts. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.
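
The excerpt does not publish the institute's exact formula, but the comparison it describes lends itself to a simple ratio. The sketch below is an illustration under that assumption only: it treats an account's amplification factor as the median engagement on its fact-checked false posts divided by the median engagement on its earlier, unflagged posts. The function name and sample numbers are invented, not the institute's code.

```python
# Minimal sketch, assuming the "amplification factor" is a ratio of
# engagement on flagged-false posts to an account's usual engagement.
from statistics import median

def amplification_factor(flagged_engagements, baseline_engagements):
    """Median engagement on fact-checked false posts, divided by the
    median engagement on the same account's earlier, unflagged posts."""
    baseline = median(baseline_engagements)
    if baseline == 0:
        return float("inf")
    return median(flagged_engagements) / baseline

# Hypothetical account: five earlier posts vs. one post flagged as false.
baseline = [120, 95, 180, 150, 110]   # interactions on unflagged posts
flagged = [540]                       # interactions on the flagged post
print(round(amplification_factor(flagged, baseline), 2))  # -> 4.5
```

On the institute's reasoning, a platform whose design makes resharing nearly frictionless would tend to push this ratio higher than one where sharing takes more steps.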

Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.

Facebook’s amplification factor of video content alone is closer to TikTok’s, the institute found. That’s because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.

Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content.

“Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”

Source: How Social Media Amplifies Misinformation More Than Information

Martin: It’s not the economy, stupid. It’s the media

Good column, if depressing:

A major change in the communications system, Canadian media guru Marshall McLuhan opined long ago, “is bound to cause a great readjustment of all the social patterns, the educational patterns, the sources and conditions of political power, [and] public opinion patterns.”

Given the vast changes that have marked the digital age, McLuhan can hardly be accused of overstatement. The online world is corrosively altering social and political “patterns,” to use the McLuhan term, and destabilizing democracies.

Using internet platforms, fringe groups and hate generators have multiplied exponentially and contributed to an erosion of trust in public institutions. They’ve prompted violent threats against public officials, driven the United States into two warring silos, and cast a seldom-seen pall of negativity over the public square.

The contamination of the dialogue is such that even agreement on what constitutes basic truths has come to be tenuous. Talk of a post-truth America is no joke. Canada isn’t there yet, but give the disinformation amplifiers more scope and we soon might be.

Bill Clinton’s campaign strategist, James Carville, may have had it right back in the 1990s when he famously declared why voters had soured on then-president George H.W. Bush: “It’s the economy, stupid.” But not today. Now it’s the media, stupid. It’s the upheaval in the communications system. A media landscape gone rogue.

Economic woes get regulated. Not so the convulsions in our information ecosphere. We have no idea how to harness the hailstorm. Few efforts are being made. Calls for regulation are greeted by a great hue and cry over potential freedom-of-speech transgressions.

So broadly have media influence and power expanded that a cable network has become the avatar of the Republican Party. Donald Trump has maintained support from the GOP because he has what Richard Nixon didn’t: a kowtowing TV network and, until he was blocked, a Twitter following of 90 million users.

Social media platforms, like an upstart rival sports league, have served to delegitimize, if not disenfranchise, traditional media, magnifying public distrust. There is still a lot of high-quality journalism around, including at this awards-dominating newspaper. But traditional media no longer set the tenor of the national discussion and help shape a national consensus as in times past. Enfeeble a society’s credible news and information anchors, replace them with flotsam and you get, as per the United States, a country increasingly adrift.

The trajectory of media decline is worth recalling. From having just two or three television networks in Canada and the U.S. that aired news only for an hour or so a day, we have expanded to around-the-clock cable networks. News couldn’t fill that much airtime so opinion did – heaps of it. Hours of tirades filled the airwaves from reactionaries like Rush Limbaugh. Then the internet took hold, along with the invasion of unfiltered social media, awash in vitriol.

And so the chaff now overwhelms the wheat.

Mainstream media got in on the act, lowering their standards, contributing to the debasement of the dialogue by running ad hominem insults on comment boards from readers who hide behind pseudonyms. As I’ve noted before, that’s not freedom of speech. That’s fraud speech.

The crisis in our information complex is glaring, but it isn’t being addressed. Mainstream media, while demanding transparency everywhere else, rarely applies this standard to itself. Despite its exponential growth in importance, the media industry gets only a small fraction of the scrutiny that other powerful institutions do.

Big issues go largely unexamined in Canadian media. We rarely take a critical look at the unfettered rise of advocacy journalism, the impact of the disappearance of local newspapers or media ownership monopolies. There are precious few media columnists in this country. There is no overarching media institute to address the problems.

Conservative leadership candidate Pierre Poilievre’s big idea is to deprive us of one of our longest-standing national institutions. He would gut the CBC, defund it practically out of existence. At his rallies, he’s cheered on lustily for the promise, an indication of the low regard held by many in the population toward the mainstream media.

Any kind of media-reform drive always runs up against the freedom of speech barrier. The Trudeau government has passed Bill C-10, but it was diluted and will have little regulatory impact. A Commission on Democratic Expression, whose membership included former Supreme Court Justice Beverley McLachlin, has recommended regulatory reforms to curb social media’s impact. But it didn’t receive anywhere near the attention it deserved.

There’s a vacuum. Ways to regulate the destabilizing forces in the new communications paradigm must be found; ways that leave no possibility of control by political partisans. Such ways are possible and, given the ravages of the new media age, imperative.

Source: It’s not the economy, stupid. It’s the media

May: Never tweet. Social media is complicating the age-old neutrality of the public service

Easier in my time, when the major worry was a leaked document appearing in the press. Safer never to tweet on public policy issues and debates while in government, since tweets can create the perception at the political level that the public service is not neutral and impartial.

The public service did, at times, give the impression of not being impartial during the Harper government:

Social media is a part of life that is increasingly treacherous for Canada’s public servants, who may need better guidance to navigate their public and private lives online.

The blurring of that line was on display during the so-called freedom convoy protest that paralyzed downtown Ottawa. Some public servants took to social media to oppose or support the protest, sometimes with funds. Other public servants criticized colleagues who backed the protest as well as government mishandling of the nearly month-long blockade.

The storm of often anonymous allegations of misbehaviour on social media underlined an absence of transparency in the government agencies responsible for the ethical behaviour of bureaucrats. Neither the Treasury Board Secretariat nor the Office of the Public Sector Integrity Commissioner was willing or able to say if any investigation or other action has been taken against any public servant.

On Reddit, members of public servant forums questioned the loyalty of federal workers who donated money to a convoy with an underlying mission to overthrow the government. Public servants on Twitter chided anyone who may have used government email to send a donation, accusing them of ethical breaches. One suggested any of them with secret security clearances or higher should face a loyalty interview from CSIS, the Canadian Security Intelligence Service.

Some demanded they be investigated or have security clearances revoked. Others called for dismissal. One senior bureaucrat told Policy Options public servants should be dismissed if they funded anything to do with removing the elected government to which they pledged loyalty.

Meanwhile, eyebrows were raised when Artur Wilczynski, an assistant deputy minister for diversity and inclusion at the Communications Security Establishment, tweeted a stinging criticism of Ottawa police’s handling of the protest. As a rule, senior bureaucrats, especially from such a top-secret department, keep such opinions to themselves. The CSE called Wilczynski’s criticism a personal opinion, noting it would be inappropriate for the CSE to comment on matters that don’t fall within its mandate.

It’s unclear whether any public servants are being investigated or disciplined for an ethical breach – or an illegal act.

Public servants typically have a lot of latitude to engage in political activities before risking an ethical breach. That changed when the Emergencies Act was invoked, making a peaceful protest an illegal occupation.

The Treasury Board Secretariat, the public service’s employer, knows some public servants supported the protesters, a spokesperson said. But it is unaware of whether any were warned or disciplined by their departments for any public support online or offline.

“We do not collect information about complaints or disciplinary actions against employees,” the Treasury Board said in an email.

Social media users suggested at least a dozen public servants went to the Office of the Public Sector Integrity Commissioner to report the possibility that a handful of bureaucrats were on a leaked list of convoy donors, exposed when hackers took down the crowdfunding website GiveSendGo. The commissioner's office investigates wrongdoing that could pose serious threats to the integrity of the public service.

Commissioner Joe Friday refused to say whether he has received or is investigating any complaints. His office sees a spike in inquiries and disclosures when hot-button public issues dominate the news, he said.

Social media is here to stay. But how public servants use social media to balance their duty of loyalty to government with their right to free speech and engage in political activity seems to be an open question.

Public servants have rules for behaviour at work and during off-hours, though the line between on and off the clock has increasingly blurred after two years of working at home. The rules come from the Public Service Employment Act, the Values and Ethics Code and the codes of conduct for each department.

But some argue there’s a grey zone now that partisan politics and political activities have moved online.

Jared Wesley, an associate professor of political science at the University of Alberta, said governments have not done a good job updating their ethics protocols, standards of practice and codes of conduct to manage social media. They amount to deputy ministers offering a rule of thumb: “if your boss wouldn’t like it, don’t post it,” he said.

Carleton University’s Amanda Clarke and employment lawyer Benjamin Piper examined the gap in guidance in a paper, A Legal Framework to Govern Online Political Expression by Public Servants. Clarke, a digital and public management expert and associate professor, said this uncertainty about the rules cuts two ways.

“What we can learn from this incident is that there is already a grey area and it’s dangerous for public servants who are not equipped with sufficient guidance,” said Clarke.

“There are two outcomes. One: they over-censor and unnecessarily give up their rights to political participation …. The second is they go to the other extreme and abandon their obligation to be neutral, which can put them into dangerous positions, personally and professionally and, at the larger democratic level, undermine the public service’s credibility.”

In fact, public servants believe impartiality is important, a recent survey shows, and 97 per cent steer clear of political activities beyond voting. Eighty-nine per cent believe expressing views on social media can affect their impartiality or the perception of their impartiality. But it found only about 70 per cent of managers felt capable of providing guidance to workers on engaging in such activities.

Clarke argues the modernization of public service must address how public servants reconcile their online lives with their professional duties.

“You can’t expect public servants not to have online political lives. This is where politics unfolds today. So, anybody who is trying to say that is the solution is missing the reality of how we engage in politics today.”

More than 35 years ago, the Supreme Court’s landmark Fraser ruling confirmed public servants’ political rights – with some restrictions. They depend on factors such as one’s rank or level of influence in the public service; the visibility of the political activity; the relationship between the subject matter and the public servant’s work; and whether they can be identified as public servants.

David Zussman, who long held the Jarislowsky Chair in Public Management at the University of Ottawa, said the rules should be the same whether a public servant pens an op-ed, a letter to the editor or a tweet.

“Public servants should be able to make personal decisions about who they support, but the overriding consideration is keeping the public service neutral and apolitical.”

Shortcomings of existing rules, however, were revealed in the 2015 election, when an Environment Canada scientist, Tony Turner, was suspended for writing and performing a protest song called “Harperman” that went viral on YouTube.

His union, the Professional Institute of the Public Service of Canada, argued he had violated no restrictions: he wasn’t an executive, his job was tracking migratory birds, he wrote the song on his own time, used no government resources and there was nothing in the video or on his website to indicate he was a public servant. He hadn’t produced the video or posted it to YouTube.

About the same time, a Justice Department memo surfaced, warning: “you are a public servant 24/7,” anything posted is public and there is no privacy on the Internet. Unions feared public servants could be prevented from using social media, a basic part of life.

Twitter, Facebook, LinkedIn and YouTube have complicated the rules for public servants posting an opinion, signing an online petition or making a crowdsourced donation, Clarke and Piper argue.

Social media can amplify opinions in public debate, and indiscriminate liking, sharing or re-posting can ramp up visibility more than expected. Assessments of whether a public servant crossed the line have to consider whether they used privacy settings or pseudonyms, or identified themselves as public servants.

Clarke and Piper question whether public servants who never mention their jobs should be punished if they are outed as government employees in a data breach – like those who donated to the convoy protest. What about a friend taking a screenshot of a private email you sent criticizing government, sending it to others or posting it online?

The Internet makes it easy to identify people, Piper said. Public servants who avoid disclosing their employer on their personal social media accounts can be identified using Google, LinkedIn or the government’s own employee directory.

So back to the convoy protest. Before the emergency order, would public servants have unwittingly crossed the line by supporting the protest or donating money to it?

The protest opposed vaccines and pandemic restrictions, though the blockade also became home to a mix of grievances. Many supporters signed a memorandum of understanding, drafted by one of the organizing groups, calling for the Governor General and Senate to form a new government with the protesters.

“It’s hard for me to see how a private donation by someone who has a job that has nothing to do with vaccine mandates or the trucker protest could attract discipline. That would be a really aggressive application of discipline by the government,” said Piper.

But Wesley argues that the convoy was known from the start as a seditionist organization and anyone who gave money to the original GoFundMe account should have seen the attached MOU. It was later withdrawn.

“Most public servants sign an oath to the Queen and should have recognized that signing or donating money to that movement was an abrogation of your oath,” he said. “I think a re-examination of who they are, who they work for and implications of donating to a cause that would have upended Canada’s system of constitutional monarchy is definitely worth a conversation with that individual.”

Perhaps part of the problem is that the traditional bargain of loyalty and impartiality between politicians and public servants is coming unglued.

The duty of loyalty is shifting. The stability and job security that once attracted new recruits for lifelong careers in government aren’t important for many young workers, who like remote work and expect to work for many employers.

A recent study found half of the politicians surveyed don’t really want an impartial public service. Brendan Boyd, assistant professor at MacEwan University, suggests they prefer a bureaucracy that enthusiastically defends its policies rather than simply implements and explains them. However, 85 per cent of the politicians say that outside of work hours, public servants should be impartial.

“There will be further test cases, and how we define a duty of loyalty is going to either be confirmed or adapted or changed,” said Friday.

“But public servants are still allowed to communicate, hold or express views as a means of expression. And the pace at which the views, thoughts and opinions are expressed is so phenomenal that I think it fundamentally changes the playing field.”

Source: Never tweet. Social media is complicating the age-old neutrality of the public service

New report details how autocrats use the internet to harass and suppress activists in Canada

Thousands of miles away from her homeland in Syria, she organized protests and ran social media pages in Canada in support of opposition forces fighting President Bashar al-Assad’s regime.

Then anonymous complaints started rolling in and prompted Facebook to shut down her group page. Trolls left “nasty and dirty” comments on social media and created fake profiles with her photos, she said, while a Gmail administrator alerted her that “a state sponsor” was trying to hack her account.

“The Assad regime was functioning through this network of thugs that they call Shabeeha. Inside of Syria, those thugs would be physically beating up people and terrorizing them,” said the 42-year-old Toronto woman.

“Then they were also very much online, so they terrorized people online as well.”

As diaspora communities are increasingly relying on social media and other online platforms to pursue advocacy work, authoritarian states are trying to exert their will over overseas dissidents through what’s dubbed “digital transnational repression,” said a new study released Tuesday.

“States that engage in transnational repression use a variety of methods to silence, persecute, control, coerce, or otherwise intimidate their nationals abroad into refraining from transnational political or social activities that may undermine or threaten the state and power within its border,” said the report by the Citizen Lab at University of Toronto’s Munk School of Global Affairs.

“Thus, nationals of these states who reside abroad are still limited in how they can exercise ‘their rights, liberties, and voice’ and remain subject to state authoritarianism even after leaving their country of origin.”

Being a country of immigrants — particularly refugees seeking protection from persecution — Canada is vulnerable to this kind of digital attack, amid the advancement of surveillance technology and rising authoritarianism around the globe, said the report’s authors.

“There is this misassumption that once people arrive in Canada from authoritarian countries, they are safe. We need to redefine what safety is,” said Noura Al-Jizawi, one of the report’s co-authors.

“This is not only affecting the day-to-day life of these people, but it’s also affecting the civic rights, their freedom of speech or their freedom of assembly of an entire community that’s beyond the individuals who are being targeted.”

A team of researchers interviewed 18 individuals, all of whom resided in Canada and had moved or fled to Canada from 11 different places, including Syria, Saudi Arabia, Yemen, Tibet, Hong Kong, China, Rwanda, Iran, Afghanistan, East Turkestan, and Balochistan.

The participants shared their experiences of being intimidated for the advocacy work they conducted in Canada, as well as the impacts of such threats — allegedly from these foreign states and their supporters — on their well-being and the diaspora communities they come from.

“Their main concern besides their privacy and the privacy of their family is the friends and colleagues back home. If the government targets their devices digitally, they would reveal the underground and hidden network of activists,” said Al-Jizawi.

“Many of them mention that they try to avoid the communities from their country of origin because they can’t feel safe connecting with these people.”

Many of the participants in the study said they have reached out for assistance to authorities such as the Canadian Security Intelligence Service but were disappointed.

“The responses were generally like, we can’t help you or this isn’t a crime and there’s nothing actionable here. In one case, they suggested to the person to hire a private detective,” noted Siena Anstis, another co-author of the study.

“Law enforcement is probably not that well equipped or trained to understand the broader context within which this is happening. The way that they handle these cases is quite dismissive.”

The anonymous Syrian-Canadian political activist who participated in the study said victims of transnational repression will stop reporting to Canadian officials if nothing comes out of their complaints.

“Every day we’re becoming more and more digital, which makes us more vulnerable to digital attacks and digital privacy issues. I hope our government will start thinking about how to protect us from this emerging threat that we never had to worry about before,” said the woman, who came here from Aleppo as a 7-year-old and has stopped her political activities to free Syria.

“If someone like me who is extremely outspoken and very difficult to stifle felt a little bit overwhelmed by all of it, you can imagine other people who recently came from Syria and still have a lot of ties there. I know a lot of people that will not open their mouth publicly because they’re scared what will happen.”

The report urges Ottawa to create a dedicated government agency to support victims and conduct research to better understand the scale and impact of these activities on the exercise of Canadian human rights. It also recommends establishing federal policies for the sale of surveillance technologies to authoritarian states and for guiding how social media platforms can better protect victims from digital attacks.

“It might seem at this stage it’s only happening to some communities in Canada and it doesn’t matter,” said Anstis. “But collectively it’s our human rights that are being eroded. It’s our capacity to engage in, affirm and protect against human rights and democracy. That space for dialogue is really reducing.”

Source: New report details how autocrats use the internet to harass and suppress activists in Canada

U.S. accounts drive Canadian convoy protest chatter

Of note. While recent concerns have understandably focussed on Chinese and Russian government interference, we likely need to pay more attention to the threats from next door, along with the pernicious threats via Facebook and Twitter:

Known U.S.-based sources of misleading information have driven a majority of Facebook and Twitter posts about the Canadian COVID-19 vaccine mandate protest, per German Marshall Fund data shared exclusively with Axios.

Driving the news: Ottawa’s “Freedom Convoy” has ballooned into a disruptive political protest against Prime Minister Justin Trudeau and inspired support among right-wing and anti-vaccine mandate groups in the U.S.

Why it matters: Trending stories about the protest appear to be driven by a small number of voices as top-performing accounts with huge followings are using the protest to drive engagement and inflame emotions with another hot-button issue.

  • “They can flood the zone — making something news and distorting what appears to be popular,” said Karen Kornbluh, senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund. 

What they’re saying: “The three pages receiving the most interactions on [convoy protest] posts — Ben Shapiro, Newsmax and Breitbart — are American,” Kornbluh said. Other pages with the most action on convoy-related posts include Fox News, Dan Bongino and Franklin Graham.

  • “These major online voices with their bullhorns determine what the algorithm promotes because the algorithm senses it is engaging,” she said.
  • Using a platform’s design to orchestrate anti-government action mirrors how the “Stop the Steal” groups worked around the Jan. 6 Capitol riot, with a few users quickly racking up massive followings, Kornbluh said.

By the numbers: Per German Marshall Fund data, from Jan. 22, when the protests began, to Feb. 12, there were 14,667 posts on Facebook pages about the Canadian protests, getting 19.3 million interactions (including likes, comments and shares).

  • For context: The Beijing Olympics had 20.9 million interactions in that same time period.
  • On Twitter, from Feb. 3 to Feb. 13, tweets about the protests have been favorited at least 4.1 million times and retweeted at least 1.1 million times.
  • Pro-convoy videos on YouTube have racked up 47 million views, with Fox News’ YouTube page getting 29.6 million views on related videos.

The big picture: New research published in The Atlantic finds that most public activity on Facebook comes from a “tiny, hyperactive group of abusive users.”

  • Since user engagement remains the most important factor in Facebook’s weighting of content recommendations, the researchers write, the most abusive users will wield the most influence over the online conversation.
  • “Overall, we observed 52 million users active on these U.S. pages and public groups, less than a quarter of Facebook’s claimed user base in the country,” the researchers write. “Among this publicly active minority of users, the top 1 percent of accounts were responsible for 35 percent of all observed interactions; the top 3 percent were responsible for 52 percent. Many users, it seems, rarely, if ever, interact with public groups or pages.”
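
Concentration figures like those are a property of heavy-tailed activity distributions, and the statistic itself is easy to reproduce. The sketch below uses synthetic data (the Pareto shape parameter and account count are invented, not the researchers' dataset) to compute what share of interactions the most active accounts generate.

```python
# Toy reproduction of the "top X% of accounts drive Y% of interactions"
# statistic, on an invented heavy-tailed activity distribution.
import numpy as np

rng = np.random.default_rng(0)
interactions = rng.pareto(1.2, size=100_000) + 1  # per-account activity

def top_share(counts: np.ndarray, fraction: float) -> float:
    """Share of all interactions produced by the top `fraction` of accounts."""
    ordered = np.sort(counts)[::-1]
    k = max(1, int(len(ordered) * fraction))
    return float(ordered[:k].sum() / ordered.sum())

print(f"top 1% of accounts: {top_share(interactions, 0.01):.0%} of interactions")
print(f"top 3% of accounts: {top_share(interactions, 0.03):.0%} of interactions")
```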

Meanwhile, foreign meddling is further confusing the narrative around the trucker protest.

  • NBC News reported that overseas content mills in Vietnam, Bangladesh, Romania and other countries are powering Facebook groups promoting American versions of the trucker convoys. Facebook took many of the pages down.
  • A report from Grid News found a Bangladeshi digital marketing firm was behind two of the largest Facebook groups related to the Canadian Freedom Convoy before being removed from the platform.
  • Grid News reported earlier that Facebook groups supporting the Canadian convoy were being administered by a hacked Facebook account belonging to a Missouri woman.

Source: U.S. accounts drive Canadian convoy protest chatter

Canada is sleepwalking into bed with Big Tech, as politicos float between firms and public office

Sort of inevitable, unfortunately:

Canadians have been served a familiar dish of election promises aimed at taking on the American web giants. But our governments have demonstrated a knack for aggressive procrastination on this file.

A new initiative is providing a glimpse into Canada’s revolving door with Big Tech, and as the clock ticks on the Liberal government’s hundred-day promise to enact legislation, Canadians have 22 reasons to start asking tough questions.

The Regulatory Capture Lab — a collaboration between FRIENDS (formerly Friends of Canadian Broadcasting), the Centre for Digital Rights and McMaster University’s Master of Public Policy in Digital Society Program — is shedding light on a carousel of unconstrained career moves between public policy teams at Big Tech firms and federal public offices.

Canadians should review this new resource and see for themselves the creeping links between the most powerful companies on earth and the institutions responsible for reining them in. 

And they’d be wise to look soon. According to the Liberal government, a wave of tech-oriented policy is in formation, from updating the Broadcasting Act to forcing tech firms to pay for journalism that appears on their platforms.

But our work raises vital questions about all these proposals: are Canadians’ interests being served through these pieces of legislation? Has a slow creep of influence over public office put Big Tech in the driver’s seat? These promises of regulation have been around for years, so, why is it taking so long to get on with it?

Cosy relations between Big Tech and those in public office in Canada have bubbled to the surface before, most notably through the work of Kevin Chan, the man for Meta (Facebook) in Canada. In 2020, the Star exposed Chan’s efforts to recruit senior analysts from within Canadian Heritage, the department leading the efforts to regulate social media giants, to work at Facebook.

It doesn’t stop there. A 2021 story from The Logic revealed the scope of Chan’s enthusiasm in advancing the interests of his employer. Under Chan’s skilful direction, Facebook has managed to get its tendrils of influence into everything — government offices, universities, even media outlets. And in so many instances, Chan has found willing participants across the aisle who offer up glowing statements about strategic partnerships with Facebook.

Facebook isn’t alone in the revolving door. For some politicos, moving between Big Tech and public office appears to be the norm, in both directions. Big Tech public policy teams are filled with people who have worked in Liberal and Conservative offices, the PMO, Heritage and Finance ministries, the Office of the Privacy Commissioner, and more.

Conversely, some current senior public office holders are former Big Tech employees. Amazon, Google, Netflix, Huawei, Microsoft and Palantir are all connected through a revolving door with government. And this doesn’t even begin to cover Big Tech’s soft-power activities in Canada, from academic partnerships, deals with journalism outlets (including this one), and even shared initiatives with government to save democracy. The connections are vast and deep.

So, why has tech regulation taken so long? Armed with the knowledge that so many of Canada’s brightest public policy minds are moving between the offices of Big Tech and the halls of power in Ottawa, Canadians should be forgiven for jumping to conclusions. Or, maybe it’s just that simple? 

That these employment moves are taking place in both directions is hardly surprising. But the fact that so little attention has been paid to this phenomenon is deeply troubling. And how can this power be held to account when our journalism outlets are left with little choice but to partner with Big Tech?

The Regulatory Capture Lab has pried open the window on this situation, but others must jump in. It’s time for Canadians to start asking tough questions. FRIENDS is ready to get the answers.

Source: https://www.thestar.com/opinion/contributors/2022/01/17/canada-is-sleepwalking-into-bed-with-big-tech-as-politicos-float-between-firms-and-public-office.html

Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

More on Facebook and Zuckerberg’s failure to act against mis- and dis-information:

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform since the pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.
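
The documents summarized here do not describe Facebook's actual tooling, but the plan's core idea, identifying pages built on reposted viral content and reducing their distribution, can be sketched generically. In the sketch below, the exact-match hashing, the threshold, and every name are assumptions for illustration; a production system would use perceptual hashes and far richer signals.

```python
# Generic sketch of flagging "manufactured virality": pages whose posts
# are mostly re-uploads of content that already went viral elsewhere.
# The hash choice, threshold, and names are invented for illustration.
import hashlib

KNOWN_VIRAL_FINGERPRINTS: set[str] = set()  # previously viral media, by hash
REPOST_THRESHOLD = 0.5                      # hypothetical down-ranking cutoff

def fingerprint(content: bytes) -> str:
    # Real systems use perceptual hashes that survive re-encoding and
    # cropping; a plain SHA-256 stands in for that idea here.
    return hashlib.sha256(content).hexdigest()

def repost_ratio(page_posts: list[bytes]) -> float:
    """Fraction of a page's posts that match previously viral content."""
    if not page_posts:
        return 0.0
    hits = sum(fingerprint(p) in KNOWN_VIRAL_FINGERPRINTS for p in page_posts)
    return hits / len(page_posts)

def should_downrank(page_posts: list[bytes]) -> bool:
    # Pages past the threshold would get less News Feed distribution,
    # mirroring the down-ranking plan described above.
    return repost_ratio(page_posts) >= REPOST_THRESHOLD
```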

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others—including tackling “manufactured virality”—were early concepts that employees were seeking approval from Zuckerberg to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited—content like the altered Pelosi video—the more likely the link in the post was to be clicked by users. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Since the content with more engagement was placed more highly in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying that they had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Source: Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show