Fears of election meddling on social media were overblown, say researchers

Hype versus reality (perhaps Canada isn't important enough…). The hype ran through both mainstream and ethnic media:

Now that the election is over and researchers have combed through the data collected, their conclusion is clear: there was more talk about foreign trolls during the campaign than there was evidence of their activities.

Although there were a few confirmed cases of attempts to deceive Canadians online, three large research teams devoted to detecting co-ordinated influence campaigns on social media report they found little to worry about.

In fact, there were more news reports about malicious activity during the campaign than traces of it.

“We didn’t see high levels of effective disinformation campaigns. We didn’t see evidence of effective bot networks in any of the major platforms. Yet, we saw a lot of coverage of these things,” said Derek Ruths, a professor of computer science at McGill University in Montreal.

He monitored social media for foreign meddling during the campaign and, as part of the Digital Democracy Project, scoured the web for signs of disinformation campaigns.

Threat of foreign influence was hyped

“The vast majority of news stories about disinformation overstated the results and represented them as far more conclusive than they were. It was the case everywhere, with all media,” he said.

It’s a view mirrored by the Ryerson Social Media Lab, which also monitored social media during the campaign.

“Fears of foreign and domestic interference were overblown,” Philip Mai, co-director of the Social Media Lab, told CBC News.

A major focus of monitoring efforts during the campaign was Twitter, a platform favoured by politicians, journalists and partisans of all stripes. It’s where a lot of political exchanges take place, and it’s an easy target for automated influence campaigns.

“Our preliminary analysis of the [Twitter hashtag] #cdnpoli suggests that only about one per cent of accounts that used that hashtag earlier in the election cycle can be classified as likely to be bots,” said Mai.

The word “likely” is key. Any social media analyst will tell you that detecting bona fide automated accounts that exist solely to spread a message far and wide is incredibly difficult.
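That hedging is easy to appreciate once you see what bot detection actually involves. Here is a minimal sketch of a heuristic bot scorer, assuming a simplified account record; the features and thresholds are invented for illustration, and real classifiers combine hundreds of signals and still only output a probability, never proof.

```python
# Toy bot-likelihood scorer. Features and thresholds are invented for
# illustration only; real systems combine hundreds of signals and
# still can only say "likely", never "certainly".
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    has_default_avatar: bool
    account_age_days: int

def bot_likelihood(a: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 50:            # unusually high posting rate
        score += 0.35
    if a.has_default_avatar:             # no profile customization
        score += 0.15
    if a.account_age_days < 30:          # very new account
        score += 0.25
    if a.following > 0 and a.followers / a.following < 0.05:
        score += 0.25                    # follows many, followed by few
    return min(score, 1.0)

suspect = Account(tweets_per_day=120, followers=12, following=900,
                  has_default_avatar=True, account_age_days=10)
print(bot_likelihood(suspect))  # 1.0: "likely" a bot, still not proof
```

Note how every rule can misfire: a new, hyperactive account with a default avatar may simply be an enthusiastic human, which is why researchers report percentages of accounts "classified as likely to be bots" rather than counts of bots.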

#TrudeauMustGo and other frenzies

A few times during the campaign, independent researchers found signs that certain conversations on Twitter were being amplified by accounts that appeared to be foreign. For example, the popular hashtag #TrudeauMustGo was tweeted and retweeted in large numbers by users who had the word “MAGA” in their user descriptions.

But this doesn’t mean those users were part of a foreign campaign, Ruths said.

“It’s very hard to prove that those MAGA accounts aren’t Canadian,” he said. “How can you prove who’s Canadian online? What does a Canadian look like on Twitter?”

Few Canadians use Twitter for news. According to the Digital News Report from the Reuters Institute for the Study of Journalism, only 11 per cent of Canadians got their news on Twitter in 2019, down slightly from 12 per cent the year before.

Twitter’s most avid users tend to be politicians, journalists and highly engaged partisans.

Fenwick McKelvey, an assistant professor at Montreal’s Concordia University who researches social media platforms, said he feels journalists overestimate Twitter’s ability to take the pulse of the voting public.

“Twitter is an elite medium used by journalists and politicians more than everyday Canadians,” McKelvey told CBC News. “Twitter is a very specific public. Not a proxy for public opinion.”

In fact, most Canadians — 57 per cent — told a 2018 survey by the Social Media Lab that they have never shared political opinions on any social media platform.

Tweets for elites

For an idea of just how elitist Twitter can be, take a look at who is driving its political conversations. For some of the major hashtags during the election — like #cdnpoli, #defundCBC and the recently popular #wexit — only a fraction of users post original content. The rest just retweet.

And the users who get the most retweets, the biggest influencers, represent an even tinier sliver of Twitter users, according to data from the University of Toronto’s Citizen Lab, another outfit that monitored disinformation during the campaign.

“What we thought was a horizontal democratic space is dominated by less than two per cent of accounts,” said Gabrielle Lim, a fellow at the Citizen Lab.

“We need to take everything with a grain of salt when looking at Twitter. Doing data analysis is easy, but we’re bad at contextualizing what it means,” Lim said.
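The underlying tallies are straightforward to reproduce. Below is a minimal sketch of the kind of analysis that yields figures like "less than two per cent of accounts", assuming simplified tweet records; the field names are mine, not Twitter's API.

```python
# Sketch: what share of accounts in a hashtag post original content,
# and how concentrated retweet attention is. Records are illustrative;
# the field names are assumptions, not Twitter's API schema.
from collections import Counter

tweets = [
    {"author": "a1", "retweet_of": None},  # original tweet
    {"author": "a2", "retweet_of": "a1"},  # retweet of a1
    {"author": "a3", "retweet_of": "a1"},
    {"author": "a4", "retweet_of": "a1"},
]

authors = {t["author"] for t in tweets}
originators = {t["author"] for t in tweets if t["retweet_of"] is None}
print(f"{len(originators) / len(authors):.0%} of accounts post originals")

retweeted = Counter(t["retweet_of"] for t in tweets if t["retweet_of"])
print("most-retweeted accounts:", retweeted.most_common(1))
```

As Lim cautions, the counting is the easy part; deciding what a skewed distribution of attention actually means is where analyses tend to go wrong.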

So why this focus on Twitter if it’s such a small and unrepresentative medium for Canadians? Because it’s easy to study. Unless a user sets an account to private, everything posted on Twitter is public and fairly easy to access.

On the other hand, more popular social networks like Facebook make it much harder to harvest user content at scale. A lot of misinformation may also be shared in closed channels like private Facebook groups and WhatsApp groups, which are nearly impossible for outsiders to access.

But even taking into account those larger social media audiences, the evidence shows that Canadians are getting their news from a variety of sources, Lim noted.

Although the threat posed by online disinformation to Canadian democracy was overblown in the context of the 2019 campaign, Ruths said he still believes it was important to be alert, just as it’s important to go to the dentist even if no cavities are found.

And he suggests that journalists looking for evidence of bot activity apply the same level of rigour as the people doing the research.

“We saw a lot of well-intentioned reporting,” he said. “But finding suspected accounts is not the same as finding bots. Saying that MAGA accounts don’t look like Canadians’ doesn’t mean they’re not.”

Source: Fears of election meddling on social media were overblown, say researchers

Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows

Nice to see this kind of social media analysis, but the reliance on negative attacks by both major parties is depressing:

The Conservatives lead other major federal parties in the amount of negative attacks on Twitter and in press releases this campaign, but at the midpoint of a close race the Liberals are increasingly turning negative, an analysis by CBC News shows.

CBC News analyzed more than 1,800 press releases and tweets from official party and party leader accounts since the start of the campaign. We categorized them as either positive, negative, both positive and negative or neutral. (See methodology below.)

Overall, the Conservatives have put out the highest volume of negative communications to date, the analysis revealed. The party tends to put out communications attacking the Liberals about as often as it promotes its own policies.

That doesn’t mean the Conservatives were the only party to go negative early on. At the outset of the campaign, the Liberals went after Conservative Leader Andrew Scheer on Twitter for his 2005 stance on same-sex marriage and other Conservative candidates for anti-abortion views or past social media missteps.

But almost half (47 per cent) of Conservative communications have been negative or partly negative. The share of negative messages is 37 per cent for the NDP, 26 per cent for the Liberals, 18 per cent for the Greens and 13 per cent for the Bloc Québécois, which has run the most positive campaign.

Liberals, NDP step up attacks

While the Conservatives have been consistently negative since the start of the campaign, other parties have become markedly more so in the last two weeks.

The uptick in attacks appears to be driven by two factors: the climate marches across the country on Sept. 20 and 27 and the French-language debate hosted by TVA on Oct. 2.

The NDP and Greens took aim at the Liberals’ environmental record around the time of the climate marches. It was also during the last week of September that the Liberals announced a number of environmental policies they would enact if re-elected, which were promptly criticized by the NDP and Greens.

The tone of Liberal communications turned markedly critical during the TVA French-language debate on Oct. 2. This was the first debate Liberal Leader Justin Trudeau took part in, and the Liberal war room put out press releases and tweets countering statements made during the debate by Scheer and the Bloc Québécois’ Yves-François Blanchet.

The TVA debate also marked the first instance during the campaign of the Liberals targeting a party other than the Conservatives with critical tweets and press releases. The party took the Bloc leader to task over his environmental record, among other things.

Liberals the target of most attacks

The Liberals were the target of more than two-thirds (70 per cent) of negative or partly negative communications.

The Conservatives have yet to target a party other than the Liberals with a critical press release or tweet.

The Liberals also have been the primary target of the NDP and, to a lesser extent, the Greens.

While these two parties may be closer ideologically to the Liberals than to the Conservatives, the NDP and Greens are focused on stopping progressive voters from rallying around the Liberals. University of B.C. political science professor Gerald Baier said this reflects a coordination problem on the centre-left.

“The NDP and Greens, I think, would presumably prefer the Liberal record on the environment to what the Conservatives would do, but at the same time their main points are against the existing government,” he said.

The lack of Liberal attacks on the NDP and the Greens is telling, Baier said.

“It suggests that they know that their path to a majority to some degree is to appeal to some of those NDP and Green voters,” he said.

It also could be because the Liberals may need the support of those parties to govern in a minority Parliament, Baier added.

NDP and Green attacks against the Liberals have focused largely on the environment, while the Conservatives have zeroed in on themes of accountability, taxes and spending.

Environment, taxes the two biggest themes

Much of the parties’ official communication focuses on the campaign trail, specific candidates and where the party leaders are.

The two policy exceptions are the environment — a popular subject for all parties except the Conservatives — and tax policy, on which the Conservatives have focused. Affordability and housing are also common themes.

Methodology

CBC News analyzed every press release and tweet from official party and party leader accounts since the start of the campaign. We categorized each communication as positive (if the focus was promoting a party’s own policies or candidates), negative (if the focus was criticizing another party), both positive and negative (if the communication was equally split between the two) or neutral (leader itineraries, event announcements). We also kept track of the topics of communications and who, if anyone, was targeted.

We did not include retweets and treated identical tweets in English and French as one communication.

To keep the project’s scope manageable, the methodology excludes other platforms such as Facebook, YouTube, radio, television and print ads.
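For the curious, here is a minimal sketch of how a hand-coded dataset like this could be tallied to produce figures such as the 47 per cent above. The records and labels are illustrative, assuming each communication has already been categorized by a human coder as the methodology describes.

```python
# Tallying hand-coded communications per the methodology: count a
# message as negative if it was coded "negative" or "both".
# Party codes and records are illustrative, not CBC's actual data.
from collections import Counter

comms = [
    {"party": "CPC", "tone": "negative"},
    {"party": "CPC", "tone": "positive"},
    {"party": "LPC", "tone": "both"},     # equally positive and negative
    {"party": "LPC", "tone": "neutral"},  # e.g. a leader's itinerary
]

totals, negatives = Counter(), Counter()
for c in comms:
    totals[c["party"]] += 1
    if c["tone"] in ("negative", "both"):  # "negative or partly negative"
        negatives[c["party"]] += 1

for party, total in totals.items():
    share = negatives[party] / total
    print(f"{party}: {share:.0%} negative or partly negative")
```

The labour-intensive step, of course, is the human coding itself; the arithmetic on top of it is trivial.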

Source: Liberals step up attacks with 2 weeks left, but Conservative campaign most negative, data shows

Canada banning Christian demonstrations? How a private member’s bill sprouted fake news

Interesting account of the social media trail and actors:

Imagine scrolling through Facebook when you come across this headline: “Canada Moves to Ban Christians from Demonstrating in Public Under New Anti-Hate Proposal.” If it sounds too shocking and absurd to be true, that’s because it is.

But this exact headline appeared atop a story that has been shared more than 16,000 times online since it was published in May, according to the social media tool CrowdTangle. The federal government and Justin Trudeau, who is pictured in the story, are not seeking to ban Christians from demonstrating. In fact, the bill the story is based on was introduced in the Ontario legislature by a Conservative MPP and never made it past second reading.

Incorrect and misleading content is common on social media, but it’s not always obvious where it originates. To learn more, CBC News tracked this particular example back through time on social media to uncover where it came from and how it evolved over time.

March 20: Private member’s bill introduced in Ontario

In this case, it all started with a bill. In March, Roman Baber, a freshman member of the Ontario legislature, introduced his very first private member’s bill. Had he known how badly the bill would be misconstrued online, he might have chosen something else, he later told CBC News.

“I expected that people would understand what prompted the bill, as a proud member of the Jewish community who’s been subjected to repeated demonstrations at Queen’s Park by certain groups that were clearly promoting hate,” said Baber, Progressive Conservative member for York Centre.

The bill was simple. It sought to ban any demonstrations on the grounds of Queen’s Park, where Ontario’s provincial legislature is located, that promote hate speech or incite violence. Baber said the bill was prompted by previous demonstrations that occurred at the legislature grounds.

“In 2017, we saw a demonstration that called for the shooting of Israelis. We saw a demonstration that called for a bus bombing and murder of innocent civilians,” he said.

The bill went through two readings at Queen’s Park and was punted to the standing committee on justice, where it’s languished since.

March 27: Canadian Jewish News covers story

At first, the bill garnered modest attention online. The Canadian Jewish News ran a straightforward report on the bill that included an interview with Baber shortly after he first introduced it. It was shared only a handful of times.

But a few weeks after the second reading, the bill drew the attention of LifeSiteNews, a socially conservative website. The story was shared 212 times, according to CrowdTangle, including to the Yellow Vests Canada Facebook group.

In its story, LifeSiteNews suggested that a bill banning hate speech might be interpreted to include demonstrations like those that opposed updates to the province’s sex education curriculum.

Baber said this isn’t the case, because hate speech is already defined and interpreted by legal precedent.

“The words ‘hate’ and ‘hate-promoting’ have been defined by the courts repeatedly through common law and is enforced in courts routinely,” Baber said. “So it would be a mistake to suggest that the bill expands the realm of hate speech.”

April 24: The Post Millennial invokes ‘free speech’ argument 

But the idea stuck around. A few weeks later, on April 24, the Post Millennial posted a story labelled as news that argued the bill could infringe on free speech. The story was, however, clear that the bill was only in Ontario and had not yet moved beyond a second reading. It was shared over 200 times and drew nearly 400 interactions — likes, shares, comments and reactions — on social media, according to CrowdTangle.

May 6: Powerful emotions evoke response on social media 

On May 6, a socially conservative women’s group called Real Women of Canada published a news release on the bill calling it “an attack on free speech.” In the release, the group argues that hate speech isn’t clearly defined in Canadian law, and draws on unrelated examples to claim that Christian demonstrations, in particular, could be targeted.

For example, the group pointed to the case of British Columbia’s Trinity Western University, a Christian post-secondary institution that used to require all students to sign a covenant prohibiting sex outside of heterosexual marriage. A legal battle over the covenant and a proposed law school at Trinity Western concluded last year, but it had nothing to do with hate speech.

May 9: LifeSiteNews republishes news release

Though this news release itself was not widely shared, three days later it was republished by LifeSiteNews as an opinion piece. That post did better, drawing 5,500 shares and over 8,000 interactions, according to CrowdTangle. It also embellished the release with a dramatic image and a sensational headline: “Ontario Bill Threatens to Criminalize Christian Speech as ‘Hate.’”

At this point, the nugget of truth has been nearly entirely obscured by several layers of opinion and misrepresentation. For example, the bill doesn’t specifically cite Christian speech, but this headline suggests it does.

These tactics are used to elicit a strong response from readers and encourage them to share, according to Samantha Bradshaw, a researcher on the Computational Propaganda project at Oxford University.

“People like to consume this kind of content because it’s very emotional, and it gets us feeling certain things: anger, frustration, anxiety, fear,” Bradshaw said. “These are all very powerful emotions that get people sharing and consuming content.”

May 11: Big League Politics publishes sensational inaccuracies 

That framing on LifeSiteNews caught the attention of a major U.S. publication known for spreading conspiracy theories and misinformation: Big League Politics. On May 11, the site published a story that cited the LifeSiteNews story heavily.

The headline and image make it seem like Trudeau’s government has introduced legislation that would specifically prohibit Christians from demonstrating anywhere in the country, a far cry from the truth.

While the story provides a few facts, like the fact the bill was introduced in Ontario, much of it is incorrect. For example, in the lead sentence, the writer claimed the bill would “criminalize public displays by Christians deemed hateful to Muslims, the LGBT community and other victim groups designated by the left.”

The disinformation and alarmist headline proved successful: the Big League Politics version of the story was shared more than 16,000 times, drew more than 26,000 interactions and continued to circulate online for over two weeks.

This evolution is a common occurrence. Disinformation is often based on a nugget of truth that gets buried under layers of emotionally charged language and opinion. Here, that nugget of truth was a private member’s bill introduced in the Ontario legislature. But that fact was gradually churned through an online network of spin until it was unrecognizable in the final product.

“That is definitely something that we see often: taking little truths and stretching them, misreporting them or implementing commentary and treating someone’s opinion about what happened as news,” Bradshaw said. “The incremental changes that we see in these stories and these narratives is something very typical of normal disinformation campaigns.”

Bradshaw said even though disinformation is only a small portion of the content online, it can have an outsized impact on our attention. With that in mind, she said it’s partly up to readers to think critically about what they’re reading and sharing online.

“At the end of the day, democracy is really hard work,” Bradshaw said. “It’s up to us to put in that time and effort to fact check our information, to look at other sources, to look at the other side of the argument and to weigh and debate and discuss.”

Source: Canada banning Christian demonstrations? How a private member’s bill sprouted fake news

A Hunt for Ways to Combat Online Radicalization – The New York Times

Interesting approach, applicable to extremists and radicals, whether on right, left or other:

Law enforcement officials, technology companies and lawmakers have long tried to limit what they call the “radicalization” of young people over the internet.

The term has often been used to describe a specific kind of radicalization — that of young Muslim men who are inspired to take violent action by the online messages of Islamist groups like the Islamic State. But as it turns out, it isn’t just violent jihadists who benefit from the internet’s power to radicalize young people from afar.

White supremacists are just as adept at it. Where the pre-internet Ku Klux Klan grew primarily from personal connections and word of mouth, today’s white supremacist groups have figured out a way to expertly use the internet to recruit and coordinate among a huge pool of potential racists. That became clear two weeks ago with the riots in Charlottesville, Va., which became a kind of watershed event for internet-addled racists.

“It was very important for them to coordinate and become visible in public space,” said Joan Donovan, a scholar of media manipulation and right-wing extremism at Data & Society, an online research institute. “This was an attempt to say, ‘Let’s come out; let’s meet each other. Let’s build camaraderie, and let’s show people who we are.’”

Ms. Donovan and others who study how the internet shapes extremism said that even though Islamists and white nationalists have different views and motivations, there are broad similarities in how the two operate online — including how they spread their message, recruit and organize offline actions. The similarities suggest a kind of blueprint for a response — efforts that may work for limiting the reach of jihadists may also work for white supremacists, and vice versa.

In fact, that’s the battle plan. Several research groups in the United States and Europe now see the white supremacist and jihadi threats as two faces of the same coin. They’re working on methods to fight both, together — and slowly, they have come up with ideas for limiting how these groups recruit new members to their cause.

Their ideas are grounded in a few truths about how extremist groups operate online, and how potential recruits respond. After speaking to many researchers, I compiled this rough guide for combating online radicalization.

Recognize the internet as an extremist breeding ground.

The first step in combating online extremism is kind of obvious: It is to recognize the extremists as a threat.

For the Islamic State, that began to happen in the last few years. After a string of attacks in Europe and the United States by people who had been indoctrinated in the swamp of online extremism, politicians demanded action. In response, Google, Facebook, Microsoft and other online giants began identifying extremist content and systematically removing it from their services, and have since escalated their efforts.

When it comes to fighting white supremacists, though, much of the tech industry has long been on the sidelines. This laxity has helped create a monster. In many ways, researchers said, white supremacists are even more sophisticated than jihadists in their use of the internet.

The earliest white nationalist sites date back to the founding era of the web. For instance, Stormfront.org, a pioneering hate site, was started as a bulletin board in 1990. White supremacist groups have also been proficient at spreading their messages using the memes, language and style that pervade internet subcultures. Beyond setting up sites of their own, they have more recently managed to spread their ideology to online groups that were once largely apolitical, like gaming and sci-fi groups.

And they’ve grown huge. “The white nationalist scene online in America is phenomenally larger than the jihadists’ audience, which tends to operate under the radar,” said Vidhya Ramalingam, the co-founder of Moonshot CVE, a London-based start-up that works with internet companies to combat violent extremism. “It’s just a stunning difference between the audience size.”

After the horror of Charlottesville, internet companies began banning and blocking content posted by right-wing extremist groups. So far their efforts have been hasty and reactive, but Ms. Ramalingam sees it as the start of a wider effort.

“It’s really an unprecedented moment where social media and tech companies are recognizing that their platforms have become spaces where these groups can grow, and have been often unpoliced,” she said. “They’re really kind of waking up to this and taking some action.”

Engage directly with potential recruits.

If tech companies are finally taking action to prevent radicalization, is it the right kind of action? Extremism researchers said that blocking certain content may work to temporarily disrupt groups, but may eventually drive them further underground, far from the reach of potential saviors.

A more lasting plan involves directly intervening in the process of radicalization. Consider The Redirect Method, an anti-extremism project created by Jigsaw, a think tank founded by Google. The plan began with intensive field research. After interviews with many former jihadists, white supremacists and other violent extremists, Jigsaw discovered several important personality traits that may abet radicalization.

One factor is a skepticism of mainstream media. Whether on the far right or ISIS, people who are susceptible to extremist ideologies tend to dismiss outlets like The New York Times or the BBC, and they often go in search of alternative theories online.

Another key issue is timing. There’s a brief window between initial interest in an extremist ideology and a decision to join the cause — and after recruits make that decision, they are often beyond the reach of outsiders. For instance, Jigsaw found that when jihadists began planning their trips to Syria to join ISIS, they had fallen too far down the rabbit hole and dismissed any new information presented to them.

Jigsaw put these findings to use in an innovative way. It curated a series of videos showing what life is truly like under the Islamic State in Syria and Iraq. The videos, which weren’t filmed by news outlets, offered a credible counterpoint to the fantasies peddled by the group — they show people queuing up for bread, fighters brutally punishing civilians, and women and children being mistreated.

Then, to make sure potential recruits saw the videos at the right time in their recruitment process, Jigsaw used one of Google’s most effective technologies: ad targeting. In the same way that a pair of shoes you looked up last week follows you around the internet, Jigsaw’s counterterrorism videos were pushed to likely recruits.
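In concrete terms, the Redirect Method amounts to a mapping from high-risk search behaviour to curated counter-narrative content. Here is a conceptual sketch of that targeting step; the keyword list and playlist names are placeholders of my own, not Jigsaw's actual data.

```python
# Conceptual sketch of Redirect-style targeting: when a search query
# matches a high-risk phrase, serve an ad pointing to a curated
# counter-narrative playlist. Phrases and playlist IDs are placeholders.
from typing import Optional

RISK_KEYWORDS = {
    "join the caliphate": "playlist_daily_life_under_isis",
    "isis recruitment": "playlist_defector_testimony",
}

def ad_for_query(query: str) -> Optional[str]:
    """Return a counter-narrative playlist to serve as an ad, if any."""
    q = query.lower()
    for phrase, playlist in RISK_KEYWORDS.items():
        if phrase in q:
            return playlist
    return None

print(ad_for_query("How to join the caliphate"))
# -> playlist_daily_life_under_isis
```

The hard parts, per the researchers, are not the lookup but the field work: knowing which phrases signal genuine risk, and having credible content to redirect to.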

Jigsaw can’t say for sure if the project worked, but it found that people spent lots of time watching the videos, which suggested they were of great interest, and perhaps dissuaded some from extremism.

Moonshot CVE, which worked with Jigsaw on the Redirect project, put together several similar efforts to engage with both jihadists and white supremacist groups. It has embedded undercover social workers in extremist forums who discreetly message potential recruits to dissuade them. And lately it’s been using targeted ads to offer mental health counseling to those who might be radicalized.

“We’ve seen that it’s really effective to go beyond ideology,” Ms. Ramalingam said. “When you offer them some information about their lives, they’re disproportionately likely to interact with it.”

What happens online isn’t all that matters in the process of radicalization. The offline world obviously matters too. Dylann Roof — the white supremacist who murdered nine people at a historically African-American church in Charleston, S.C., in 2015 — was radicalized online. But as a new profile in GQ Magazine makes clear, there was much more to his crime than the internet, including his mental state and a racist upbringing.

Still, just about every hate crime and terrorist attack these days is planned or in some way coordinated online. Ridding the world of all of the factors that drive young men to commit heinous acts isn’t possible. But disrupting the online radicalization machine? With enough work, that may just be possible.

How to Know What Donald Trump Really Cares About: Look at What He’s Insulting – The New York Times

This is a truly remarkable analysis of social media and Donald Trump, rich in data and beautifully charted by Kevin Quealy and Jasmine Lee.

Well worth reading, both in terms of the specifics as well as a more general illustration of social media analysis:

Donald J. Trump’s tweets can be confounding for journalists and his political opponents. Many see them as a master class in diversion, shifting attention to minutiae – “Hamilton” and flag-burning, to name two recent examples – and away from his conflicts of interest and proposed policies. Our readers aren’t quite sure what to make of them, either.

For better or worse, I’ve developed a deep expertise in what he has tweeted about in the last two years. Over the last 11 months, my colleague Jasmine C. Lee and I have read, tagged and sorted more than 14,000 tweets. We’ve found that about one in every nine was an insult of some kind.

This work, mundane as it sometimes is, has helped reveal a clear pattern – one that has not changed much in the weeks since Mr. Trump’s victory.

First, Mr. Trump likes to identify a couple of chief enemies and attack them until they are no longer threatening enough to interest him. He hurls insults at these foils relentlessly, for sustained periods – weeks or months. Jeb Bush, Marco Rubio, Ted Cruz and Hillary Clinton have all held Mr. Trump’s attention in this way; nearly one in every three insults in the last two years has been directed at them.
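The arithmetic behind "one in nine" and "nearly one in three" is simple once the tweets are tagged. A minimal sketch, assuming a hand-labeled dataset where each tweet carries a list of insult targets (the records below are invented):

```python
# Computing insult shares from hand-tagged tweets. Records are
# invented; the real dataset is the authors' 14,000+ tagged tweets.
tagged = [
    {"text": "...", "insult_targets": []},
    {"text": "...", "insult_targets": ["Hillary Clinton"]},
    {"text": "...", "insult_targets": ["CNN", "The New York Times"]},
]

insults = [t for t in tagged if t["insult_targets"]]
print(f"insult share: {len(insults) / len(tagged):.0%}")

chief_foils = {"Jeb Bush", "Marco Rubio", "Ted Cruz", "Hillary Clinton"}
at_foils = [t for t in insults if chief_foils & set(t["insult_targets"])]
print(f"aimed at chief foils: {len(at_foils) / len(insults):.0%}")
```

As with the CBC analysis above, the tagging is the mundane, labour-intensive part; the ratios fall out of a few lines of counting.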

If Mr. Trump continues to use Twitter as president the way he did as a candidate, we may see new chief antagonists: probably Democratic leaders, perhaps Republican leaders in Congress and possibly even foreign countries and their leaders. For now, the news media – like CNN and The New York Times – is starting to fill that role. The chart at the top of this page illustrates this story very clearly.

That’s not to say that the media is necessarily becoming his next full-time target. Rather, it suggests that one has not yet presented itself. The chart below, which plots the total number of insults per day, shows how these targets have come and gone in absolute terms. An increasing number of insults are indeed being directed at the media, but, for now, those insults are still at relatively normal levels.

[Chart: insults per day]

Second, there’s a nearly constant stream of insults in the background directed at a wider range of subjects. These insults can be a response to a news event, unfavorable media coverage or criticism, or they can simply be a random thought. These subjects receive short bursts of attention, and inevitably Mr. Trump turns to other things in a day or two. Mr. Trump’s brief feuds with Macy’s, Elizabeth Warren, John McCain and The New Hampshire Union Leader fit this bucket well. The election has not changed this pattern either.

Facebook’s AI boss: Facebook could fix its filter bubble if it wanted to – Recode

While Zuckerberg is correct that we all have a tendency to tune out other perspectives, the role that Facebook and other social media have in reinforcing that tendency should not be downplayed:

One of the biggest complaints about Facebook — and its all-powerful News Feed algorithm — is that the social network often shows you posts supporting beliefs or ideas you (probably) already have.

Facebook’s feed is personalized, so what you see in your News Feed is a reflection of what you want to see, and people usually want to see arguments and ideas that align with their own.

The term for this, often associated with Facebook, is a “filter bubble,” and people have written books about it. A lot of people have pointed to that bubble, as well as to the proliferation of fake news on Facebook, as playing a major role in last month’s presidential election.

Now the head of Facebook’s artificial intelligence research division, Yann LeCun, says this is a problem Facebook could solve with artificial intelligence.

“We believe this is more of a product question than a technology question,” LeCun told a group of reporters last month when asked if artificial intelligence could solve this filter-bubble phenomenon. “We probably have the technology, it’s just how do you make it work from the product side, not from the technology side.”

A Facebook spokesperson clarified after the interview that the company doesn’t actually have this type of technology just sitting on the shelf. But LeCun seems confident it could be built. So why doesn’t Facebook build it?

“These are questions that go way beyond whether we can develop AI technology that solves the problem,” LeCun continued. “They’re more like trade-offs that I’m not particularly well placed to determine. Like, what is the trade-off between filtering and censorship and free expression and decency and all that stuff, right? So [it’s not a question of if] the technology exists or can be developed, but … does it make sense to deploy it. This is not my department.”
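To make LeCun's "product question" concrete, here is a toy illustration of the kind of diversity-aware re-ranking he suggests could be built. Everything here is hypothetical: the stance labels, the engagement scores and the penalty scheme are mine, not Facebook's.

```python
# Toy feed re-ranker: instead of sorting purely by predicted
# engagement, penalize stances already over-represented in the feed.
# Entirely hypothetical; not Facebook's News Feed algorithm.
posts = [
    {"id": 1, "engagement": 0.9, "stance": "left"},
    {"id": 2, "engagement": 0.8, "stance": "left"},
    {"id": 3, "engagement": 0.5, "stance": "right"},
]

def rerank(posts, diversity_weight=0.4):
    feed, seen = [], {"left": 0, "right": 0}
    pool = list(posts)
    while pool:
        # a post's score drops as its stance accumulates in the feed
        best = max(pool, key=lambda p: p["engagement"]
                   - diversity_weight * seen[p["stance"]])
        pool.remove(best)
        seen[best["stance"]] += 1
        feed.append(best)
    return feed

print([p["id"] for p in rerank(posts)])  # [1, 3, 2]: one-sided run broken up
```

The sketch also makes LeCun's trade-off visible: raise diversity_weight and you are effectively down-ranking content people want to see, which is a product and policy decision rather than a technical one.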

Facebook has long denied that its service creates a filter bubble. It has even published a study defending the diversity of peoples’ News Feeds. Now LeCun is at the very least acknowledging that a filter bubble does exist, and that Facebook could fix it if it wanted to.

And that’s fascinating because while it certainly seemed like a fixable problem from the outside — Facebook employs some of the smartest machine-learning and language-recognition experts in the world — it once again raises questions around Facebook’s role as a news and information distributor.

Facebook CEO Mark Zuckerberg has long argued that his social network is a platform that leaves what you see (or don’t see) to computer algorithms that use your online activity to rank your feed. Facebook is not a media company making human-powered editorial decisions, he argues. (We disagree.)

But is showing its users a politically balanced News Feed Facebook’s responsibility? Zuckerberg wrote in September that Facebook is already “more diverse than most newspapers or TV stations” and that the filter-bubble issue really isn’t an issue. Here’s what he wrote.

“One of the things we’re really proud of at Facebook is that, whatever your political views, you probably have some friends who are in the other camp. … [News Feed] is not a perfect system. Research shows that we all have psychological bias that makes us tune out information that doesn’t fit with our model of the world. It’s human nature to gravitate towards people who think like we do. But even if the majority of our friends have opinions similar to our own, with News Feed we have easier access to more news sources than we did before.”

So this, right here, explains why Facebook isn’t building the kind of technology that LeCun says it’s capable of building. At least not right now.

There are some benefits to a bubble like this, too, specifically user safety. Unlike Twitter, for example, Facebook’s bubble is heightened by the fact that your posts are usually private, which makes it harder for strangers to comment on them or drag you into conversations you might not want to be part of. The result: Facebook doesn’t have to deal with the level of abuse and harassment that Twitter struggles with.

Plus, Facebook isn’t the only place you’ll find culture bubbles. Here’s “SNL” making fun of a very similar bubble phenomenon that has come to light since election night.

Research based on social media data can contain hidden biases that ‘misrepresent real world,’ critics say

Good article on some of the limits in using social media for research, as compared to IRL (In Real Life):

One is ensuring a representative sample, a problem that is sometimes, but not always, solved by ever greater numbers. Another is that few studies try to “disentangle the human from the platform,” to distinguish the user’s motives from what the media are enabling and encouraging him to do.

Another is that data can be distorted by processes not designed primarily for research. Google, for example, stores only the search terms used after auto-completion, not the text the user actually typed. Yet another is simply that many social media platforms are heavily populated by non-human bots, which mimic the behaviour of real people.

Even the cultural preference in academia for “positive results” can conceal the prevalence of null findings, the authors write.

“The biases and issues highlighted above will not affect all research in the same way,” the authors write. “[But] they share in common the need for increased awareness of what is actually being analyzed when working with social media data.”

Source: Research based on social media data can contain hidden biases that ‘misrepresent real world,’ critics say