How To Govern Chinese Apps Without Discrimination Against Asian Diaspora Communities

Interesting long read, written from an Australian perspective but applicable more broadly including in Canada:

Last week, Christopher Wray, Director of the U.S. Federal Bureau of Investigation (FBI), told lawmakers that the bureau has national security concerns about TikTok, the popular app that is owned by the Chinese firm ByteDance. “Under Chinese law, Chinese companies are required to essentially — and I’m going to shorthand here — basically do whatever the Chinese government wants them to do in terms of sharing information or serving as a tool of the Chinese government,” Wray said in the House Homeland Security Committee hearing. “That’s plenty of reason by itself to be extremely concerned.”

Similar fears have been expressed by officials in other democratic governments. Concerns about rising Chinese influence have been increasingly conveyed through the lens of how technology is developed, governed and distributed. As democratic governments enter a new phase of engagement with China that balances national security worries with needed cooperation, understanding how we might govern Chinese apps in a way that squares such concerns with the needs and interests of Asian diaspora communities is paramount. Australia offers a case study in the potential pitfalls, and a possible path forward.

Introduction

The May 2022 Australian election was a moment of vindication for many. Nine years of conservative rule gave way to a wave of independents seeking climate action, record Indigenous representation and the most diverse Parliament Australia has ever seen. With nearly one in five Australians having Asian ancestry, this election was a particular turning point for political representation: the Asian-Australians elected in 2022 account for half of all the Asian-Australians ever elected to Parliament.

This election, however, came at a point of intense alienation for Asian-Australians. The wave of anti-Asian sentiment over the course of the COVID-19 pandemic meant increased discrimination, hate crime and attacks on the community. Rather than being repudiated by the political establishment, this wave of hate, combined with a historic low point in Australia’s geopolitical relationship with China and increasingly hawkish sentiment, has translated into our own brand of Down Under McCarthyism.

Like many other countries, Australia has been grappling with the societal impacts of social media for the past few years. At the coalface of this debate are elections, where issues such as misinformation, foreign interference and content moderation become both more apparent and more important. How we navigate these issues becomes more complex when the focus turns to non-Western social media platforms – namely those that originate from China. But calls to ban or boycott these platforms would achieve the exact opposite of their intended aim of protecting democracy, as huge proportions of the Australian population would be excluded from our political processes.

In our attempts to rein in ‘foreign’ Big Tech, how might we balance our national security anxieties and interests with the new opportunities for engagement these platforms have given us? It is crucial to separate real concerns over security and the integrity of Australian elections and political discourse from the bigotry and discrimination that has long targeted Asian-Australians.

The Asian Diaspora in Australia

Whilst multiculturalism is regularly touted nowadays as a fundamental national value, the exclusion of Asians from Australian society has deep historical roots.

Prior to the ‘establishment’ of modern Australia, the influx of Chinese migrants during the gold rush meant that distrust of and violence against non-white communities were prevalent. After Federation, one of the first pieces of legislation passed by the newly formed government was designed specifically to limit non-British immigration, representing the formal start of the White Australia Policy. This policy had a profound impact on Australia’s demographics, decreasing the proportion of Asians from 1.25% of the population at Federation to only 0.21% by the end of World War II. Following WWII, successive governments began dismantling the policy until its full abolition by the government of then Prime Minister Gough Whitlam with the passage of the Racial Discrimination Act 1975. From 1978, Australia became the second country in the world (after Canada) to implement a national multiculturalism policy; since then, its value to society has been made manifest.

Ancestry            Proportion (%)   Total
Chinese             5.47             1,390,639
Indian              3.08             783,953
Filipino            1.61             408,842
Vietnamese          1.32             334,785
Nepalese            0.54             138,463
Asian Australians   17.4
(Source: ABS Census 2021)

These policy reforms paved the way for waves of migration from Asia. From refugees fleeing crises in Vietnam and Cambodia to skilled migrants from India and people escaping political turmoil in the Philippines and China, Australia became a primary destination for many in the region. Whilst this has led many in the political establishment to label Australia ‘the most successful multicultural nation on Earth’, various voices still see rising diversity as a threat to the national identity.

As of 2021, nearly half of all Australians have at least one overseas-born parent. The overseas-born share of the population has risen from 18% at the first census in 1911 to 30% some 111 years later, with recent arrivals coming predominantly from Asian countries. But even with these numbers, migrant communities still have a long way to go in realizing their place in business, civic and political leadership in Australian society.

Asian-Australians in Politics and Leadership

The legacy of exclusion resulted in severe under-representation of Asian-Australians in politics. Prior to the recent election, 96% of Australian lawmakers were white, trailing behind similarly multicultural liberal democracies such as the United States, the United Kingdom, Canada and New Zealand.

(Chart: Representation among elected officials in core Anglosphere democracies. Source: BBC)

This lack of representation of Asian-Australians extends far beyond politics into all areas of leadership in Australian society, and is known as the ‘bamboo ceiling’. But whilst the private sector also limits Asian-Australian progression, the issue is particularly pronounced in the public service.

Chinese-Australians in particular are broadly under-represented, and increasingly so in the more ‘sensitive’ departments such as ONI (intelligence), Defence and DFAT (foreign affairs), as opposed to Education or Treasury. One of the main reasons for this is the lengthy period associated with obtaining security clearances – on average six months longer for Chinese-Australians. Greater scrutiny of China links is not just an Australian phenomenon: in the US between 2010 and 2019, applicants with familial or financial links to China were nearly twice as likely to have their security clearance denied; prior to this period, the denial rate was similar to that for links to other countries.

Holding an Election Amidst a Tense Trade War 

Three elections ago, relations between China and Australia were much better than they are currently. Amid lofty optimism off the back of a newly finalized free trade agreement, Chinese Paramount Leader and Communist Party (CCP) General Secretary Xi Jinping addressed a joint sitting of the Australian Parliament and toured the country – a visit that would be unthinkable in today’s climate. Instead, years of simmering tension were catalyzed when Australia called for an independent investigation into the origins of COVID-19, and the ensuing petty trade war (no lobster and wine!) sent relations between the two countries plummeting.

The 2022 election saw the Scott Morrison government double down on hardline national security and anti-China rhetoric, at a point where a wave of anti-Asian hate saw more than 8 in 10 Asian-Australians reporting at least one instance of discrimination. From labeling Richard Marles, the future Minister of Defence, a Manchurian candidate to billboards from right-wing campaigning groups associating Xi Jinping with the Labor Party, this all-out offensive proved costly, swinging almost all electorates where more than 10% of residents have Chinese ancestry to Labor.

While the conservatives’ hawkish positions, which at times bordered on racial vilification, were clearly a miscalculation, behind them lie real concerns regarding foreign interference and electoral integrity more broadly. How social media platforms impact elections, society and democracy has been one of the most topical policy conversations of the past few years, and as more non-Western social media platforms gain popularity, there is an even greater need to understand the nuances of platform governance while avoiding the pitfalls of reactionary solutions (i.e. let’s ban it!).

What is WeChat?

At the eye of the storm is a Chinese app called WeChat. Developed by Tencent (one of China’s main technology companies), WeChat is the most popular online platform amongst Chinese migrants, and as of 2020 had around 700,000 daily active users in Australia. Far from being just a social media platform, WeChat also has messaging, calling, mobile payments and ecommerce functions, making it a ‘one-stop shop’ for everything online. For Chinese-Australians this app is the public square: WeChat is the dominant source of both Chinese-language (at 86%) and English-language (at 63%) news for this group.

WeChat has a ‘one app, two systems’ approach, where the international version of WeChat is subjected to less severe censorship and data governance obligations than its Chinese counterpart (called Weixin). The version of the app depends on the phone number used to register, meaning that many Chinese visitors, students and business travelers remain under the governance framework of Weixin even outside of China’s borders. In September 2021, Tencent updated its terms of service to reassure international users about the separation between the two systems (and also in response to evolving data storage and localization legislation in China). It gave users the choice to switch their registration over to a non-Chinese number; however, with migration taking as long as 10 days and resulting in decreased functionality, many Australians chose to keep their Weixin accounts. Additionally, WeChat Official Accounts (WOAs), which give accounts functionality akin to a Facebook page and are the preferred choice for politicos, still require registration with a Chinese number.
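To make the mechanics concrete, the ‘one app, two systems’ split can be pictured as a routing decision made at registration time. The sketch below is purely illustrative – the field names and rules are assumptions drawn from the description above, not Tencent’s actual implementation:

```python
# Illustrative sketch of the 'one app, two systems' split described above.
# Field names and rules are assumptions, not Tencent's implementation.
from dataclasses import dataclass

@dataclass
class Account:
    phone_country: str         # country code of the registration number
    is_official_account: bool  # WeChat Official Account (WOA)?

def governance_regime(account: Account) -> str:
    """Return which terms of service notionally govern an account."""
    # WOAs still require registration with a Chinese number, so they
    # always fall under the stricter Weixin framework.
    if account.is_official_account:
        return "Weixin (mainland rules: stricter censorship, Chinese data law)"
    # Otherwise the registration number decides the regime, which is why
    # visitors and migrants who signed up with a +86 number stay under
    # Weixin rules even outside China's borders.
    if account.phone_country == "+86":
        return "Weixin (mainland rules)"
    return "WeChat (international rules: lighter obligations)"

print(governance_regime(Account("+61", False)))  # Australian number -> WeChat
print(governance_regime(Account("+86", False)))  # mainland number   -> Weixin
```

The practical upshot, as noted above, is that an account’s governance regime follows its registration, not its owner’s current location.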

The Witch Hunt on Chinese Technology

Since mid-2020, from India’s TikTok ban to investigations of Huawei, global scrutiny of Chinese technology firms has been at an all-time high. Accusations range from surveillance to censorship to foreign interference, reflecting the decline in relations and trust between China and much of the rest of the world. Some of these accusations are well founded, while others are less so.

Privacy and Surveillance 

Concerns over the data practices of Chinese apps, backed up by evidence uncovered by journalists, have become so well documented that they can largely be taken as fact. Right after the election, leaked audio from TikTok in the US revealed that user data had been frequently accessed from China. In Australia, a report released around the same time calls into question where data from the app is actually processed, and the risk this poses for security and privacy. A review into the data-harvesting practices of WeChat and other Chinese apps was announced by the Home Affairs Minister shortly after the election.

Censorship 

As opposed to Weixin, where a sophisticated system of direct algorithmic censorship ensures CCP control over the online environment, WeChat’s censorship regime is more indirect.

Firstly, it’s well documented that many users of the app self-censor, avoiding ‘sensitive’ topics around international relations, human rights and COVID-19. This could be a contributing factor to why Chinese-Australians rarely share their views online about politics and government – particularly about China.

Further, opaque platform policies (although not dissimilar to other social media platforms) mean that content moderation and censorship decisions are held entirely within the company. From activists to artists, even foreign-registered WeChat accounts have posts and messages actively censored if they touch too closely on sensitive issues.

Finally, many Australian users and politicians who register with a Chinese number to gain access to increased functionality are subjected to the stricter content rules of Weixin. Even then-Prime Minister Scott Morrison had a post removed that criticized a Chinese government official for publishing a doctored image of an Australian soldier holding a knife to an Afghan child (in response to the release of a report alleging war crimes by the Australian military). A note on WeChat said that the post could not be viewed as it ‘violated company regulations’.

Misinformation and Foreign Interference 

Even though many within the Australian establishment have expressed concern about China’s ability to influence public opinion, proof that it has succeeded in impacting public discourse has been limited. Definitively proving the efficacy of state-sponsored disinformation campaigns is extremely challenging, as the networks of astroturfing, proxies and shadow organizations used to achieve these goals are intended to stay hidden, but incidents overseas reveal the potential risk to Australian democracy. The 2021 Canadian election saw significant disinformation campaigns against Kenny Chiu, an outspoken Hong Kong-Canadian Conservative member of the Canadian Parliament and critic of the Chinese regime, which contributed to him losing his seat. Chiu had faced significant (and falsified) opposition to proposed legislation intended to bring in more transparency requirements.

In reality, the majority of misinformation and disinformation spread on WeChat comes from domestic Australian actors. From claims that Labor will fund school programs to ‘turn students gay’ or that ‘refugees are flooding in and taking your wealth away’ to misinformation on how to vote, such posts are mostly forwarded between private groups. The confluence of platform design that facilitates the formation of these ‘communities of trust’ and the segregated nature of these online spaces leaves WeChat very susceptible to information disorder.

Paradoxically, while Chinese disinformation campaigns tend to go after more conservative candidates (due to a higher likelihood of them being China hawks), domestic misinformation tends to target more left-leaning politicians (due to the Chinese diaspora being more likely to engage with socially conservative and economic narratives).

WeChat Use Becomes a Dogwhistle for Patriotism 

Early in the election period, Prime Minister Scott Morrison was rocked by a scandal: his WeChat account was sold to a company based in Fuzhou and renamed, costing him the ability to reach 76,000 subscribers. It’s important to note that account transfers are completely allowed on the app. While foreign politicians are not allowed WOAs, they can still obtain one through registration services that pair foreign accounts with Chinese numbers – a tactic used by many politicians, as these accounts allow for more desirable campaigning features (such as push alerts and the ability to broadcast).

As the news broke, many in the Australian political and media elite quickly jumped into accusation mode. From allegations of hacking to CCP interference, it was galvanizing to security and political folk alike – time to ditch WeChat. Senator Paterson, a libertarian who chaired the Parliament’s intelligence and security committee, said that the takeover was ‘very likely’ sanctioned by the CCP and amounted to foreign interference, joining the chorus of pundits calling for a ban on WeChat.

Even Gladys Liu, the first Chinese-Australian woman elected to the House of Representatives, was quick to renounce WeChat. This is despite the fact that she had expertly used WeChat on two separate occasions to win seats for the Liberal Party – once for her predecessor and once for herself. Even as other members of her party continued to push ads on WeChat, Liu’s precautionary move to publicly display nationalistic loyalty harks back both to her experiences years before, when her previous links to overseas Chinese organisations were used to insinuate ties to the CCP, and to the persistent ‘otherness’ Asian-Australians face throughout society. It followed another instance where, in a Senate inquiry on diaspora experiences, a Liberal Senator demanded that three Chinese-Australians unequivocally condemn the CCP – a demand many criticized as racially targeted. For Liu, struggling to hold onto a marginal seat where a quarter of the population speaks Chinese, the decision not to use WeChat was costly.

The Difficult Task of Platform Governance

The aftermath of the 2016 US election, from which evidence emerged of Russian interference via social media, firmly established ‘reining in Big Tech’ as a common policy goal in many democracies. A few years on, translating this call into tangible action has revealed hard decisions, seemingly intractable tensions and systemic inertia. What the Australian experience shows is that when the regulatory conversation shifts to non-Western (i.e. non-American) digital platforms, an additional pitfall must be navigated: parsing through minority alienation and political posturing.

The increasing securitization of ‘Chinese influence’ within Australian policy discourse since 2017 mirrors our increasing frustration around social media regulation. While WeChat poses unique challenges from a security and geopolitical lens, separating rational policy concerns from irrational and bigoted fears will enable a more nuanced and holistic approach towards platform governance.

For instance:

  • On susceptibility to foreign interference – Chinese-Australians trust news shared on WOAs less than any other source, including Australian news outlets
  • On distrusting firms’ intentions – whilst there is evidence that WeChat’s purported assurances around transparency, privacy, accountability and safety are disingenuous, the Facebook Files and other whistleblower disclosures have shown that this hypocrisy also occurs elsewhere
  • On censorship – content moderation decisions, whether on Facebook or WeChat, happen at the discretion of these firms and their ‘Community Guidelines’. Whilst there have been some efforts to add a layer of independent governance (most notably Facebook’s Oversight Board), key questions remain: how should these quasi-independent transnational governance initiatives fit into our existing state-centric governance model, and will privately-led governance initiatives ever manage to account for the public interest? Would a ‘Tencent Oversight Board’ be received with the same level of legitimacy? And how might these initiatives be constructed and integrated in a way that ensures buy-in?

What this illustrates is that while security concerns around WeChat are a consideration, many of the issues WeChat poses are fundamental platform governance policy problems.

A path forward

As our new MPs make their way to Canberra, the responsibility of regulating social media now falls to them. But what the pandemic made clear is that Chinese platforms are a lifesaving communications channel for the Asian-Australian community. Acquiescing to hawkish calls to ‘boycott’ them is not only overly simplistic, but will serve to further alienate huge sections of the Australian public.

Instead, legislators must work towards doubling down on engagement and creating the rules and systems to ensure that this engagement is safe and trustworthy. And whilst it’s a task that won’t be featured in a sensationalized Murdoch hit piece, it will do more to enhance Australian democracy than any media firestorm will.  Here are three key recommendations to achieve these goals:

  1. Shift investment towards digital-forward diverse media to combat misinformation

As one of the first countries in the world to establish a public broadcaster catering specifically to culturally diverse communities, Australia has a legacy of diverse communication. Today, Australia has a diverse media market, but there remains a clear skew towards traditional forms such as print and radio. Even as online media outfits begin to proliferate, many of them originate from migrant students sympathetic towards China’s positioning on various issues. Unfamiliarity with WeChat amongst Australian media and business outlets has left this digital public square without a counterbalance.

Language     Publications/Print   Radio   TV   Online
Chinese      80                   many    1    50
Indian       50                   36      2    11
Filipino     5                    30      1    4
Vietnamese   16                   14      2    5
Cultural media market in Australia. (Source: Leba – Australia’s largest advertising agency for culturally and linguistically diverse media)

Facilitating plurality within this environment is a complex and active task, and governments should employ multiple levers to increase diversity and representation, particularly within digitally-native media operations. This should include:

  • Incentivising traditional media outlets to establish a presence on foreign-language platforms – including bilingual publication
  • Incentivising the diversification of newsrooms, and ensuring that journalistic standards are upheld
  • Active funding of new digital media startups that represent diverse and contextual viewpoints

This is particularly important as second-generation communities, who are more digitally literate and have completely different experiences and identities than their migrant parents, become more visible in Australian society. Subtle Asian Traits, one of the most prominent Facebook groups providing a forum for the unique experiences of the Asian diaspora, has nearly 2 million members and was started by a group of Chinese-Australian high school students in Melbourne. Continuing to invest in increasing the diversity of culturally specific media across a wide range of channels is the best way to combat the unique risks around misinformation and information disorder facing minority communities.

  2. Establish avenues to compel platform engagement in governance processes to combat distrust of foreign social media companies

A holistic platform governance regime should combine:

  • Domestic action that combines a multistakeholder approach with equipping independent regulators with the appropriate powers to ensure proper transparency, oversight and accountability, and
  • International engagement so that legislation, processes and structures are built through consensus and alignment with international norms

Domestically, hard levers such as mandating researcher access, requiring company and algorithmic audits by independent bodies, and ‘truth in political advertising’ legislation could be considered. In many of the key platform governance policy debates, the focus has been mainly on Meta and Google – however, efforts to understand, engage and cooperate with alternative platforms must be included to ensure that our regulatory regime applies to all actors.

Internationally, as key geographies seek to establish their spheres of influence (via the EU’s Digital Services Act, the UK’s Online Safety Bill, or U.S. President Joe Biden’s principles for tech accountability), ensuring that consensus is achieved will be a significant challenge – particularly as more and more non-American social media platforms accumulate larger and larger user bases. It will require diverse coalitions, novel governance frameworks and new institutions. Working towards this new digital compact will require Western democracies to broaden the tent, engage in good faith and center pragmatism while balancing liberal values – a task that is only possible through dialogue.

  3. Continue using alternative platforms to increase the political participation of diverse communities to combat alienation

Ultimately, alternative social media platforms are an unparalleled way for minority communities to obtain information and realize their democratic rights. This isn’t limited to WeChat: platforms such as Zalo (Vietnam), Line (Japan), KakaoTalk (Korea) and WhatsApp all have unique usage patterns amongst various diasporic communities in Australia, even if their dynamics are less researched. What is clear is that even with the risks, WeChat not only enables greater political participation but also facilitates public service delivery, information sharing and civic engagement.

Political parties, candidates and advocacy organizations should take a considered approach that assesses and mitigates risks without abandoning a valuable communication channel. This may include:

  • Establishing an internal policy on foreign-owned social media platform usage
  • Reporting violations to the Australian Electoral Commission or eSafety Commissioner
  • Keeping a transparent public register of WeChat ads and paid posts during an election period (a minimal sketch of such a register follows this list)
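On that last recommendation, a public register need not be elaborate. The sketch below is a hypothetical minimal format – the fields and CSV layout are assumptions for illustration, not an existing Australian Electoral Commission (or other) standard:

```python
# Hypothetical sketch of a public register of WeChat ads and paid posts.
# The fields and CSV format are illustrative assumptions, not a standard.
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AdRecord:
    published: date    # date the ad or paid post went live
    platform: str      # e.g. "WeChat Official Account"
    account: str       # account that carried the content
    language: str      # e.g. "zh-Hans", "en"
    spend_aud: float   # amount paid, in Australian dollars
    summary: str       # plain-language description of the content

def write_register(records: list[AdRecord], path: str = "ad_register.csv") -> None:
    """Publish the register as a CSV that journalists and researchers can audit."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(AdRecord.__dataclass_fields__))
        writer.writeheader()
        for record in records:
            writer.writerow(asdict(record))

write_register([
    AdRecord(date(2022, 5, 1), "WeChat Official Account", "ExampleCampaignWOA",
             "zh-Hans", 450.00, "Paid boost of an early-voting how-to post"),
])
```

Publishing even this much each election period would let outside observers audit paid political content on platforms that lack native ad libraries.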

Conclusion

In her maiden speech to Parliament in 1996, Pauline Hanson warned that Australia was at risk of being ‘swamped by Asians’. Even though these comments were made by a fringe far-right politician, they have become emblematic of how Asian-Australians are viewed as ‘perpetual foreigners’.

More than a quarter of a century on from those vitriolic remarks, lawmakers heading into the next Australian election must not succumb to making the New Red Scare a political tactic. Social media platforms, with their unparalleled ability to connect and engage communities, present a unique opportunity for minority communities to add their part to the Australian story. Driving engagement with the Asian-Australian community via the channels they use, whilst tackling the real platform governance issues, will ensure that Australia’s democracy is strengthened – and could offer an example to other democracies struggling with similar issues.

Matt Nguyen is the Policy Lead for Digital Governance and Rights at the Tony Blair Institute, where he leads work on the future of news, platform governance and digital rights.

Source: How To Govern Chinese Apps Without Discrimination Against Asian Diaspora Communities

Klein: I Didn’t Want It to Be True, but the Medium Really Is the Message

Good long read on the impact of social media, harking back to McLuhan (and Innis) on how the medium and means of communications affects society:

It’s been revealing watching Marc Andreessen, the co-founder of the browsers Mosaic and Netscape and of A16Z, a venture capital firm, incessantly tweet memes about how everyone online is obsessed with “the current thing.” Andreessen sits on the board of Meta and his firm is helping finance Elon Musk’s proposed acquisition of Twitter. He is central to the media platforms that algorithmically obsess the world with the same small collection of topics and have flattened the frictions of place and time that, in past eras, made the news in Omaha markedly different from the news in Ojai. He and his firm have been relentless in hyping crypto, which turns the “current thing” dynamics of the social web into frothing, speculative asset markets.

Behind his argument is a view of human nature, and how it does, or doesn’t, interact with technology. In an interview with Tyler Cowen, Andreessen suggests that Twitter is like “a giant X-ray machine”:

You’ve got this phenomenon, which is just fascinating, where you have all of these public figures, all of these people in positions of authority – in a lot of cases, great authority – the leading legal theorists of our time, leading politicians, all these businesspeople. And they tweet, and all of a sudden, it’s like, “Oh, that’s who you actually are.”

But is it? I don’t even think this is true for Andreessen, who strikes me as very different off Twitter than on. There is no stable, unchanging self. People are capable of cruelty and altruism, farsightedness and myopia. We are who we are, in this moment, in this context, mediated in these ways. It is an abdication of responsibility for technologists to pretend that the technologies they make have no say in who we become. Where he sees an X-ray, I see a mold.

Over the past decade, the narrative has turned against Silicon Valley. Puff pieces have become hit jobs, and the visionaries inventing our future have been recast as the Machiavellians undermining our present. My frustration with these narratives, both then and now, is that they focus on people and companies, not technologies. I suspect that is because American culture remains deeply uncomfortable with technological critique. There is something akin to an immune system against it: You get called a Luddite, an alarmist. “In this sense, all Americans are Marxists,” Postman wrote, “for we believe nothing if not that history is moving us toward some preordained paradise and that technology is the force behind that movement.”

I think that’s true, but it coexists with an opposite truth: Americans are capitalists, and we believe nothing if not that if a choice is freely made, that grants it a presumption against critique. That is one reason it’s so hard to talk about how we are changed by the mediums we use. That conversation, on some level, demands value judgments. This was on my mind recently, when I heard Jonathan Haidt, a social psychologist who’s been collecting data on how social media harms teenagers, say, bluntly, “People talk about how to tweak it — oh, let’s hide the like counters. Well, Instagram tried — but let me say this very clearly: There is no way, no tweak, no architectural change that will make it OK for teenage girls to post photos of themselves, while they’re going through puberty, for strangers or others to rate publicly.”

What struck me about Haidt’s comment is how rarely I hear anything structured that way. He’s arguing three things. First, that the way Instagram works is changing how teenagers think. It is supercharging their need for approval of how they look and what they say and what they’re doing, making it both always available and never enough. Second, that it is the fault of the platform — that it is intrinsic to how Instagram is designed, not just to how it is used. And third, that it’s bad. That even if many people use it and enjoy it and make it through the gantlet just fine, it’s still bad. It is a mold we should not want our children to pass through.

Or take Twitter. As a medium, Twitter nudges its users toward ideas that can survive without context, that can travel legibly in under 280 characters. It encourages a constant awareness of what everyone else is discussing. It makes the measure of conversational success not just how others react and respond but how much response there is. It, too, is a mold, and it has acted with particular force on some of our most powerful industries — media and politics and technology. These are industries I know well, and I do not think it has changed them, or the people in them (myself included), for the better.

But what would? I’ve found myself going back to a wise, indescribable book that Jenny Odell, a visual artist, published in 2019. In “How to Do Nothing: Resisting the Attention Economy,” Odell suggests that any theory of media must first start with a theory of attention. “One thing I have learned about attention is that certain forms of it are contagious,” she writes.

When you spend enough time with someone who pays close attention to something (if you were hanging out with me, it would be birds), you inevitably start to pay attention to some of the same things. I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.

I think Odell frames both the question and the stakes correctly. Attention is contagious. What forms of it, as individuals and as a society, do we want to cultivate? What kinds of mediums would that cultivation require?

This is anything but an argument against technology, were such a thing even coherent. It’s an argument for taking technology as seriously as it deserves to be taken, for recognizing, as McLuhan’s friend and colleague John M. Culkin put it, “we shape our tools, and thereafter, they shape us.”

There is an optimism in that, a reminder of our own agency. And there are questions posed, ones we should spend much more time and energy trying to answer: How do we want to be shaped? Who do we want to become?

Source: I Didn’t Want It to Be True, but the Medium Really Is the Message

MacDougall: Let’s dump Trump’s accomplices: social media and cable news

Hard not to agree:

Now that Donald Trump has been fired by (enough of) the American people, it’s time to think about how to bin his accomplices: cable news and social media.

The Trump Era has been exhausting and the lion’s share of that exhaustion stems from our grossly expanded information economy. What used to come to us in dollops of papers and broadcasts is now streamed non-stop across all hours of the day on too many platforms to count. But there can be too much of a good thing. A glass of water quenches your thirst; a firehose knocks you over and leaves you drenched. It’s time to turn off the tap.

Whatever the intention at their points of creation, cable news and social media have flown a long way off course. Watching CNN or Fox News during (and after) the Presidential election was to subject yourself to a marathon of preachy monologues/inquisitions interspersed with furious nine-person panels, in which various partisans were invited to bark at each other, not listen to an argument or concede a point. It was a stark reminder of how far our public sphere has degraded.

But it’s actually worse than that. Cable news has also sought to make stars out of journalists, but journalism isn’t meant to be celebrity entertainment. It’s supposed to serve a nobler purpose. It’s the work that’s meant to be important, not the author. What’s more, inviting reporters on to discuss or opine on the news of the day makes them active participants, not impartial observers. What news value is there, for example, in having CNN’s Anderson Cooper call the President of the United States of America an “obese turtle”? Is it any wonder that trust in the news is at record lows?

And if that wasn’t bad enough, social media then picks up the baton to make everything worse. Instead of bringing hidden expertise to bear on conversations, social media makes everyone an ‘expert’ on everything, no matter how little they know about the subject. Even worse, the loudest and most extreme takes get the most attention. As study after study has shown, social media encourages people to indulge their emotions, not to apply logic or reason. These channels encourage us to huddle amongst like-minded people and then help us radicalize. They make enemies of citizens instead of encouraging a common understanding.

That’s why the sooner we get our politics and news off 24/7 platforms, the better. If the past four years of Trumpism have taught us anything, it’s that our brains simply cannot handle the volume of information they’ve been receiving. Seeing so much means we retain little of actual value. And it’s not just politics that suffers from this consumption pattern. Our recall with music, for example, was much stronger when we had to buy physical albums than it is now when we can stream literally anything for a few bucks a month. Everything now goes in one ear (or eyeball) and out the other.

It turns out quality content isn’t a gas; it doesn’t expand to fill the available space. If anything, whatever quality exists in our news environment now gets choked by the amateur fumes polluting our screens and feeds. Using quality to compete for attention in the 24/7 information economy is to lose the battle before it starts. Everybody is more interested in the outrage. A better approach would be to evacuate the pitch and find a new place to play, somewhere it has a chance of being noticed.

Pulling news content off social media would be a risk, yes, but it’s less of a risk than hoping the current information environment will improve. The news can either die on its terms or someone else’s, and right now social media companies and cable news programmers are incentivized toward virality and outrage, not analysis or introspection. More importantly, their current output is cheap, unlike quality journalism. They do not, as presently constructed, serve a civic good. We wouldn’t miss them when they’re gone.

Of course, we can’t actually bin cable news and social media. For one, the purveyors of cable news and social media make too much money doing it. They won’t stop. But we can make the choice to stop watching and clicking.

It would help if media outlets took the first step of not seeding the outrage machine with the lifeblood of their content. It would also help if they forbade their reporters from appearing on cable shows. We have enough data now to know that social media and cable news aren’t gateways to serious news consumption; they’re pathways to polarization and misinformation. They are platforms for the already convinced. More pertinently, they’re not serious money makers for news organizations. Media organizations need to make their content scarce, not ubiquitous. It’s time to put up paywalls and demand money for quality.

And now that we’re all properly exhausted, people might be open to a return to the subscription model. I know my mood has improved significantly since I prioritized one paper in the morning to the exclusion of all others. And while I might miss some stories because of it, I trust in the quality of my morning read to know that I won’t be out of too many important loops.

As strange as it seems after years of the firehose, we’ll have to consume less to understand more.

Andrew MacDougall is a director at Trafalgar Strategy, and a former Head of Communications to Prime Minister Stephen Harper

Source: Let’s dump Trump’s accomplices: social media and cable news

Stopping Online Vitriol at the Roots: With the election upon us, we’re awash in misleading and angry information. Here’s what we can do.

Some useful pointers, not just applicable to the USA post-election:

America, it’s one day before a pivotal election, and we’re awash in a lot of garbage information and online vitriol. It comes from strangers on the internet, scammers in our text messages, disreputable news organizations and even our friends and family.

Whitney Phillips, an assistant professor in the department of communication and rhetorical studies at Syracuse University and an author on polluted information, says that all of this is making our brains go haywire.

With the U.S. election ginning up misleading information and the nonstop political discussions online wearing many of us out, I spoke to her about how we can individually and collectively fight back. Here are edited excerpts from our discussion:

You’ve written that angry conversations online and misleading information essentially short circuits our brains. How?

When our brains are overloaded, and we’re confronted constantly with upsetting or confusing information, it sends us into a state in which we’re less capable of processing information. We say things we probably shouldn’t. People get retweet-happy. It’s not productive, even when people have good intentions and think they’re helping.

How do we stop that process?

I’ve been researching how mindfulness meditation processes can help us navigate this information hellscape. When you see or read something that triggers that emotional reaction, take a moment to breathe and try to establish some emotional space. It doesn’t mean you shouldn’t say the critical thing you’re thinking, but you should first reflect on the most constructive thing to do next.

But we don’t tend to think that we’re the ones acting irresponsibly or irrationally. We think the people who disagree with us are irrational and irresponsible.

Most people think if they’re not setting out to do damage or don’t have hate in their hearts, then they don’t have to consider what they do. But even if we aren’t vicious ourselves, we’re still fundamentally a part of what information spreads and how.

We all affect the ecology around us. Bad actors like high-profile influencers can scar the land, but everyone else does, too. The more information pollution there is in the landscape, the less functional our democracy is. If people feel that everything is terrible and everyone lies, they don’t want to engage in civic discourse.

This imposes a lot of personal responsibility on a problem that is much bigger than us as individuals.

Yes, individual solutions are not enough. We all can make better choices, but that means nothing if we’re not also thinking about structural, systemic reasons that we’re forced to confront bad information in the first place.

What are those structural forces? What can be done to make the information environment better at the structural level?

For us to understand how bad information travels we have to think about all the forces that contributed to it — decisions made by the internet platforms, broader capitalist forces, local and national influences. And it includes you. All of them feed into each other.

Part of the problem is that people haven’t understood how information travels, or how the recommendation algorithms of social media companies influence what we see online. If people understand, they can imagine a different world and they can fight to change the system. [A toy illustration of such a ranking algorithm follows this interview.]

I’m tempted to unplug the internet and go live in a cave. Should I?

We need to find a way to balance between evacuating from the hurricane and running toward the hurricane. If we only evacuate, we’re not doing our part as citizens, and we force people on the informational front lines to bear that burden. If we only run toward the storm, we’ll burn out.
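On Phillips’ point about recommendation algorithms: a useful mental model is a feed that ranks posts by predicted engagement. The toy scorer below is an assumption-laden sketch – the weights and fields are invented for illustration, not any platform’s actual ranking system – but it shows why high-arousal content tends to surface first:

```python
# Toy feed-ranking scorer: the weights and fields are invented for
# illustration and are not any platform's actual algorithm.
posts = [
    {"text": "Local council meeting recap", "likes": 12, "shares": 1, "comments": 3},
    {"text": "OUTRAGEOUS election take!!", "likes": 480, "shares": 300, "comments": 650},
]

def engagement_score(post: dict) -> int:
    # Assumed weights: shares and comments (high-arousal reactions)
    # count far more than passive likes, so divisive content rises.
    return post["likes"] + 5 * post["shares"] + 8 * post["comments"]

# Rank the feed the way an engagement-optimizing recommender might.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6}  {post['text']}")
```

Because shares and comments are weighted heavily, the outraged post outranks the civic one by orders of magnitude – the structural dynamic Phillips describes.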

Source: https://www.nytimes.com/2020/11/02/technology/stopping-election-misinformation.html

Liberal, Conservative MPs join international task force to curb anti-Semitism online

Of note. Not an easy task, and one whose outcome should aim to be applicable also to other forms of hate, whether anti-Black, anti-Muslim, anti-Indigenous, anti-LGBTQ or directed at other minorities:

Two members of Parliament are joining forces with legislators in four other countries in an international effort to force web giants to curb the proliferation of anti-Semitic content online.

Liberal MP Anthony Housefather and Conservative MP Marty Morantz are part of a new task force that includes politicians from Australia, Israel, the United Kingdom, and the United States.

A report out of the U.K. this summer said online incidents of anti-Semitism were on the rise in that country, driven by conspiracy theories about Jews being responsible for the COVID-19 pandemic.

In Canada, advocacy group B’nai Brith has said anti-Semitic incidents are up overall, with an 11 per cent rise in online harassment that often advocates genocide.

But how different countries measure and define the problem is a barrier to convincing web companies to address it, said Housefather.

The point of the task force is to get like-minded countries to agree on how to define the problem, how to solve it, convince their respective legislatures to pass similar laws and then collectively pressure the companies to act, he said.

“If we can come up with something that’s common to everybody, it will make life much easier for the providers to co-operate with us,” he said.

The task force is getting underway just as the federal Liberals promised in last week’s throne speech to take more action to curb online hate as part of an effort to address systemic racism.

Housefather said the task force’s initial work predates that pledge but he hopes it can support the government’s own efforts.

Social media companies have been under sustained pressure to do more to address online hate, and give users better tools for reporting instances of it.

Earlier this year, Twitter began flagging some tweets from U.S. President Donald Trump for violating its policies, saying they included threats of harm against an identifiable group.

But, both Housefather and Morantz said, Twitter does nothing when the Iranian leader Ayatollah Ali Khamenei puts out tweets calling for the destruction of Israel or uses violent or potentially racist language to describe the state.

Twitter said earlier this year in response to criticism over that approach that Khamenei’s remarks amounted to “foreign policy sabre-rattling.”

Morantz said while the task force is focused on anti-Semitism, the work also applies more broadly.

“Hate against one group online is really a concern to all groups,” Morantz said.

“We need to emphasize that if we can’t protect one minority we can’t protect any of them.”

The Liberal government has repeatedly pledged to do more to combat hate speech online.

During the last election, they promised new regulations for social media platforms, including a requirement that they remove “illegal content, including hate speech, within 24 hours or face significant penalties.”

Critics, including conservative media outlets like True North and The Rebel, have accused the Liberals of wanting to crack down on free speech.

Morantz said a distinction must be made between free speech and that which breaks existing criminal laws. The focus needs to be on the latter, he said.

However, he refused to comment on a fellow Tory MP who recently circulated a message on Twitter that was criticized for using an anti-Semitic trope.

In August, B.C. MP Kerry-Lynne Findlay shared a video of Finance Minister Chrystia Freeland and liberal philanthropist George Soros, saying Canadians ought to be alarmed by their “closeness.”

Soros is often linked to anti-Semitic conspiracy theories. Findlay later took down the tweet and apologized for sharing content from what she described as a source that promotes “hateful conspiracy theories.”

Housefather was among those who spoke out about Findlay’s tweet. He said he accepts her apology, but the incident highlights the issue.

“People often innocently retweet something without understanding the implications of it,” he said.

What needs to happen is for social media platforms to step up and figure out a way to flag the content, he said.

“If the media platform lets them know that, they can make a conscious decision whether or not they want to retweet it, knowing that it’s been flagged as being anti-Semitic content or other types of racist, misogynistic, etc., content.”

Source: Liberal, Conservative MPs join international task force to curb anti-Semitism online

Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Harsh but accurate:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

The news in the last weeks highlighted both the good and bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement for addressing an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated for violent tactics. The bad: waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture war issues, rather than public health guidance that would enable the economy to reopen safely.

The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge that they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” If challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of the proposed solution on business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms continue to be successful at framing the issues in a favorable light.

When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away, even as new ones develop. For example, banning Alex Jones removed conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.

The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.

We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.

Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why that will not happen: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion use one of its four large platforms. They do not disclose the number of messages posted each day, but it is likely to be in the hundreds of millions, if not a billion or more, just on Facebook. Substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.

The second hurdle is latency, which describes the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but humans can take minutes or days. This means a large number of messages will circulate for some time before eventually being removed. Harm will occur in that interval. It is tempting to imagine that AI can solve everything, but that is a long way off. AI systems are built on data sets from older systems, and they are not yet capable of interpreting nuanced content like hate speech.
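To make the scale and latency hurdles concrete, here is a back-of-the-envelope calculation. Every input below is an assumed round number chosen for illustration, not a platform disclosure:

```python
# Back-of-the-envelope look at why scale and latency defeat moderation.
# Every input is an assumed round number, not platform data.

posts_per_day = 500_000_000   # assume half a billion posts per day
harmful_rate = 0.001          # assume 0.1% of posts are harmful
detection_rate = 0.95         # assume moderation catches 95% of them

harmful_posts = posts_per_day * harmful_rate
missed = harmful_posts * (1 - detection_rate)
print(f"Harmful posts per day:   {harmful_posts:>12,.0f}")
print(f"Missed at 95% detection: {missed:>12,.0f}")

# Latency: even posts that are caught circulate before removal.
removal_delay_hours = 4       # assume review averages four hours
views_per_post_per_hour = 50  # assume modest pre-removal reach
views_before_removal = (harmful_posts * detection_rate
                        * removal_delay_hours * views_per_post_per_hour)
print(f"Views before takedown:   {views_before_removal:>12,.0f}")
```

Even under these generous assumptions, tens of thousands of harmful posts slip through every day, and the posts that are caught still accumulate tens of millions of views before takedown.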

The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms. Every year, they shift an ever larger percentage of their spending to them, from which platforms derive massive profits and wealth. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.

Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use the platform in ways that would normally be prohibited. For example, when journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, Facebook changed its terms of service, rather than pulling the ads. In addition, Facebook chose not to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.

Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithmic amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on First Amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.

We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.

Source: Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

Social Media Giants Support Racial Justice. Their Products Undermine It. Shows of support from Facebook, Twitter and YouTube don’t address the way those platforms have been weaponized by racists and partisan provocateurs.

Of note. “Thoughts and prayers” rather than action:

Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, denouncing racial bias and proclaiming that “black lives matter.” Mr. Zuckerberg, Facebook’s chief executive, also announced that the company would donate $10 million to racial justice organizations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the chief executive, pledged $3 million to an anti-racism organization started by Colin Kaepernick, the former N.F.L. quarterback.

YouTube joined the protests, too. Susan Wojcicki, its chief executive, wrote in a blog post that “we believe Black lives matter and we all need to do more to dismantle systemic racism.” YouTube also announced it would start a $100 million fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponized by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalized. In Tahrir Square in Cairo, Ferguson, Mo., and Baltimore, activists used Twitter and Facebook to organize demonstrations and get their messages out.

But in recent years, a right-wing reactionary movement has turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Mr. Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by the right-wing activist Candace Owens. In the video, Ms. Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Mr. Floyd as a “horrible human being.” Her monologue, which was shared by right-wing media outlets — and which several people told me they had seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Ms. Owens is a serial offender, known for spreading misinformation and stirring up partisan rancor. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly four million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and the Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favorable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. On Thursday, it announced it would spend $200 million to support black-owned businesses and organizations, and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Mr. Dorsey’s trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said: “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a four-year-old “social experiment” by the viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Mr. Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a predominantly black neighborhood.

A YouTube spokeswoman, Andrea Faville, said that Mr. Saladino’s video had received fewer than 5 percent of its views this year, and that it was not being widely recommended by the company’s algorithms. Mr. Saladino recently reposted the video to Facebook, where it has gotten several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Mr. Floyd’s killing, or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organized in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for the police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Mr. Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should bar people like Mr. Saladino and Ms. Owens for criticizing Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet, and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 percent of the people who joined extremist groups on the platform did so because Facebook’s recommendations algorithms steered them there. Facebook could have responded to those findings by shutting off group recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right “boogaloo” movement, which wants to destabilize society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, Calif. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, the president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Mr. Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Mr. Zuckerberg, Mr. Dorsey and Ms. Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organized hate groups. They could dismantle the recommendations systems that give provocateurs and cranks free attention, or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the way some platforms still do, tends to amplify misinformation and outrage-bait.) They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversation.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of color or their own conscience — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Mr. Robinson, the civil rights organizer. A few weeks ago, he chatted with Mr. Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterward, he said he thought that while Mr. Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean ‘Black Lives Matter’ when they have systems that put black people at risk,” he said.

Source: Social Media Giants Support Racial Justice. Their Products Undermine It.
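
The “viral ceiling” Roose describes is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of the difference between pure engagement ranking, which the article argues tends to reward outrage-bait, and a capped variant for posts on sensitive topics. The Post fields, score weights, topic flag and threshold are all invented for illustration; no platform’s actual ranking system is represented here.

```python
# Hypothetical sketch only: the Post type, weights, flag and threshold are
# invented to illustrate the idea, not drawn from any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    shares: int
    comments: int
    reactions: int
    sensitive_topic: bool  # e.g. set by a classifier or a curated topic list

def engagement_score(p: Post) -> float:
    """Pure engagement ranking: more interaction means higher placement.
    This is the dynamic the article says rewards outrage-bait."""
    return p.shares * 3.0 + p.comments * 2.0 + p.reactions * 1.0

VIRAL_CEILING = 10_000.0  # arbitrary cap for sensitive-topic posts

def capped_score(p: Post) -> float:
    """'Viral ceiling' variant: a sensitive-topic post stops climbing once
    it hits the cap, so it cannot crowd out everything else in the feed."""
    score = engagement_score(p)
    return min(score, VIRAL_CEILING) if p.sensitive_topic else score

posts = [
    Post("outrage-bait", shares=40_000, comments=15_000, reactions=90_000,
         sensitive_topic=True),
    Post("local-news", shares=2_000, comments=900, reactions=12_000,
         sensitive_topic=False),
]

# Under engagement_score the provocateur wins by an order of magnitude;
# under capped_score the ordinary post ranks first.
for p in sorted(posts, key=capped_score, reverse=True):
    print(p.id, capped_score(p))
```

In this toy setup the provocative post dominates under pure engagement ranking but falls below the ordinary post once the cap applies, which is the intuition behind making it harder for trolls to hijack the conversation.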

Spy agency says Canadians are targets of foreign influence campaigns

More on foreign influence and interference:

Canadians are more exposed to “influence” operations than ever before, according to an internal assessment from the country’s electronic spy agency.

A 2018 memo from the Communications Security Establishment (CSE) warned that the rise of “web technology” like social media, along with Canadians’ changing habits for consuming media, makes the population much more likely to encounter efforts by foreign powers to shape domestic political opinion.

“These new systems have generated unintended threats to the democratic process, as they deprive the public of accurate information, informed political commentary and the means to identify and ignore fraudulent information,” reads the memo, classified as Canadian Eyes Only.

“Foreign states have harnessed the new online influence systems to undertake influence activities against Western democratic processes, and they use cyber capabilities to enhance their influence activities through, for example, cyber espionage.”

“Foreign states steal and release information, modify or make information more compelling and distracting, create fraudulent or distorted ‘news,’ or amplify fringe and sometimes noxious opinions.”

The memo was prepared as Canada’s intelligence agencies were engaged in an exercise to protect the 2019 federal election from foreign interference.

Elections across the democratic world — the United States, France, the United Kingdom, Germany and the European Union — have in recent years been the targets of misinformation and cyberespionage campaigns from hostile countries.

There is no evidence that Canada’s recent federal election was the target of sophisticated cyber espionage or misinformation campaigns.

But another document prepared by CSE makes clear that Canadian politicians have already been targeted by foreign “influence” campaigns.

An undated slide deck prepared by the CSE suggested “sources linked to Russia popularized (then Global Affairs Minister Chrystia) Freeland’s family history” and targeted Defence Minister Harjit Sajjan’s appearance and turban in Russian-language media outlets in the Balkans.

The agency appears to be referring to stories, reported by mainstream Canadian news outlets, suggesting that Freeland’s grandfather edited a Nazi-associated newspaper in occupied Poland.

The stories were “very likely intended to cause personal reputational damage in order to discredit the Government (of) Canada’s ongoing diplomatic and military support for Ukraine, to delegitimize Canada’s decision to enact the Justice for Victims of Corrupt Foreign Officials Act, and the 2018 expulsion of several Russian diplomats,” the documents, first reported by Global News, state.

The attacks against Sajjan, meanwhile, were “almost certainly” intended to discredit the NATO presence in Latvia, where Canadian forces are deployed as part of a NATO mission to deter Russian expansion after the invasion of Crimea.

“Since Canada’s deployment to Latvia, subtle and overtly racist comments pertaining to … Sajjan’s appearance, particularly his turban, have consistently appeared across Russian-language media in the Baltic region,” the documents read.

“Even ostensibly professional news sources are not above such descriptions. When … Sajjan attended a conference in Latvia in October 2017, he was described by Vesti.lv as ‘a large swarthy man in a big black turban.’”

Compared to some of the attacks on Western democracies, those two influence campaigns were minor in scale and impact. But the intelligence agency suggested that more and more countries are turning to cyber capabilities to further their own goals at the expense of other nations. And CSE’s analysis suggests they’re willing to play the long game.

“In the longer-term, influence activities, both cyber and human, are likely to challenge the transparency and independence of the decision-making process, reduce public trust (and) confidence in institutions, and push policy in directions inimical to Canadian interests,” the documents, released under access to information law, read.

“Many European states and some private companies have begun to develop countermeasures to malicious activities aimed at democratic processes, including increasing public understanding and resilience. However, little has been done to create robust, institutionalized multilateral responses.”

Parliament’s new national security review committee has completed a review of foreign espionage activities in Canada and submitted it to Prime Minister Justin Trudeau. The classified report detailing its findings is expected to be released early in 2020, once the House of Commons resumes sitting.

Source: Spy agency says Canadians are targets of foreign influence campaigns

Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

For the record:

British comedian Sacha Baron Cohen has said if Facebook had existed in the 1930s it would have allowed Hitler a platform for his anti-Semitic beliefs.

The Ali G star singled out the social media company in a speech in New York.

He also criticised Google, Twitter and YouTube for pushing “absurdities to billions of people”.

Social media giants and internet companies are under growing pressure to curb the spread of misinformation around political campaigns.

Twitter announced in late October that it would ban all political advertising globally from 22 November.

Earlier this week Google said it would not allow political advertisers to target voters using “microtargeting” based on browsing data or other factors.

Analysts say Facebook has come under increasing pressure to follow suit.

The company said in a statement that Baron Cohen had misrepresented its policies and that hate speech was banned on its platforms.

“We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook,” it added.

What did Baron Cohen say?

Addressing the Anti-Defamation League’s Never is Now summit, Baron Cohen took aim at Facebook boss Mark Zuckerberg, who in October defended his company’s position not to ban political adverts that contain falsehoods.

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie. And they’ll even help you micro-target those lies to their users for maximum effect,” he said.

“Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

Baron Cohen said it was time “for a fundamental rethink of social media and how it spreads hate, conspiracies and lies”. He also questioned Mr Zuckerberg’s characterisation of Facebook as a bastion of “free expression”.

“I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims,” he added.

Earlier this month, an international group of lawmakers called for targeted political adverts on social media to be suspended until they are properly regulated.

The International Grand Committee on Disinformation and “Fake News” was told that the business model adopted by social networks made “manipulation profitable”.

A BBC investigation into political ads for next month’s UK election suggested they were being targeted towards key constituencies and certain age groups.

Source: Sacha Baron Cohen: Facebook would have let Hitler buy anti-Semitic ads

Why are the U.S. immigration norms being tightened?

US immigration checking of social media noted in Indian media (a reminder to us all to be more mindful when on social media):

The story so far: On May 31, 2019, the U.S. Department of State introduced a change in online visa forms for immigrant (form DS-260) and non-immigrant (form DS-160) visas, requiring applicants to register their social media handles over a five-year period. The newly released DS-160 and DS-260 forms ask, “Do you have a social media presence?” A drop-down menu provides a list of some 20 options, including Facebook, Instagram, Sina Weibo and Twitter. There is also a “NONE” option. Applicants are required to list their handles alone and not passwords. All sites will soon be listable, according to an administration official who spoke to The Hill, a Washington DC-based news outlet. The policy does not cover those eligible for the visa waiver programme and those applying for diplomatic visas and certain categories of official visas.

How did it come about?

The policy is part of U.S. President Donald Trump’s intent to conduct “extreme vetting” of foreigners seeking admission into the U.S. In March 2017, Mr. Trump issued an Executive Order asking the administration to implement a programme that “shall include the development of a uniform baseline for screening and vetting standards and procedures for all immigrant programs.”

In September 2017, the Department of Homeland Security started including “social media handles, aliases, associated identifiable information, and search results” information in the files it keeps on each immigrant. The notice regarding this policy said those impacted would include Green Card holders and naturalised citizens. In March 2018, the State Department proposed a similar policy, but for all visa applicants — this is the policy now in effect. Earlier, only certain visa applicants identified for extra screening were required to provide such information. Asking visa applicants to volunteer social media history started during the Obama administration, which was criticised for not catching Tashfeen Malik, one of those who carried out a mass shooting in San Bernardino, California, in 2015. Malik had come to the U.S. on a K-1 fiancée visa, and had exchanged social media messages about jihad prior to her admission to the U.S.

How will it impact India?

Most Indians applying for U.S. visas will be covered by this policy. Over 955,000 non-immigrant visas (excluding A and G visas) and some 28,000 immigrant visas were issued to Indians in fiscal year 2018. So at least 10 lakh (one million) Indians — and these are just those whose visa applications were successful, not all applicants — will be directly impacted by the policy.

What lies ahead?

The new policy is expected to impact 14 million travellers to the U.S. and 700,000 immigrants worldwide, according to the administration’s prior estimates. In some individual cases it is possible that the visa policy achieves what it is (ostensibly) supposed to — allowing the gathering of social media information that results in the denial of a visa to an applicant who genuinely presents a security threat. However, the bluntness of the policy and its vast scope raise serious concerns around civil liberties, including questions of arbitrariness, mass surveillance, privacy, and the stifling of free speech.

First, it is not unusual for an individual to not recall all their social media handles over a five-year period. Consequently, even if acting in good faith, it is entirely possible for individuals to provide an incomplete social media history. This could give consular officers grounds for denying a visa.

Second, there is a significant degree of discretion involved in determining what constitutes a visa-disqualifying social media post and this could stifle free speech. For instance, is criticising the President of the United States or posting memes about him (there are plenty of those on social media these days) grounds for visa denial? What about media professionals? Is criticising U.S. foreign policy ground for not granting someone a visa?

Third, one can expect processing delays with visas as the social media information of applicants is checked. It is possible that individuals impacted by the policy will bring cases against the U.S. government on grounds of privacy or on grounds of visa delays. The strength of these cases depends on a number of factors, including whether they are brought by Green Card holders and naturalised citizens (who were impacted by the September 2017 policy, not the May 31 one) or non-immigrants. The courts could also examine whether the U.S. government’s policy has discriminatory intent.

Source: Why are the U.S. immigration norms being tightened?