Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Whether by humans or AI, not an easy thing to do consistently and appropriately:

Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violates their rules, now that tech workers have been sent home to slow the spread of the coronavirus.

But that shift could mean more mistakes — some posts or videos that should be taken down might stay up, and others might be incorrectly removed. It comes at a time when the volume of content the platforms have to review is skyrocketing, as they clamp down on misinformation about the pandemic.

Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.

“We’re seeing that play out in real time at a scale that I think a lot of the companies probably didn’t expect at all,” said Graham Brookie, director and managing editor of the Atlantic Council’s Digital Forensic Research Lab.

Facebook CEO Mark Zuckerberg told reporters that automated review of some content means “we may be a little less effective in the near term while we’re adjusting to this.”

Twitter and YouTube are also sounding notes of caution about the shift to automated moderation.

“While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” Twitter said in a blog post. It added that no accounts will be permanently suspended based only on the actions of the automated systems.

YouTube said its automated systems “are not always as accurate or granular in their analysis of content as human reviewers.” It warned that more content may be removed, “including some videos that may not violate policies.” And, it added, it will take longer to review appeals of removed videos.
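As a rough illustration of the trade-off the companies are describing, here is a minimal sketch in Python. The thresholds, function names and labels are invented for illustration, not any platform’s real pipeline; the point it shows is the one being made above: a fully automated decision is taken only when the model is very confident, everything else waits for a person, and nothing is permanently actioned on the machine signal alone.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Placeholder for a trained policy-violation classifier.
    Returns a probability-like score between 0 and 1."""
    return 0.0  # hypothetical; a real system would run an ML model here

REMOVE_THRESHOLD = 0.95   # act automatically only when very confident
REVIEW_THRESHOLD = 0.60   # otherwise hold the case for a human

def moderate(post: Post) -> str:
    score = violation_score(post)
    if score >= REMOVE_THRESHOLD:
        # Automated removal, but always appealable to a human reviewer;
        # no permanent suspension rests on this signal alone.
        return "remove_pending_appeal"
    if score >= REVIEW_THRESHOLD:
        # Uncertain cases are queued; with fewer reviewers available,
        # this queue is where delays and mistakes pile up.
        return "queue_for_human_review"
    return "allow"
```

Lowering REMOVE_THRESHOLD to compensate for a smaller review workforce is roughly the trade-off Zuckerberg describes: more gets actioned automatically, and more of it is wrong.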

Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their sites and apps for material that breaks their rules, from spam and nudity to hate speech and violence. Many moderators are not full-time employees of the companies, but contractors who work for staffing firms.

Now those workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.

For the most sensitive categories, including suicide, self-injury, child exploitation and terrorism, Facebook says it’s shifting work from contractors to full-time employees — and is ramping up the number of people working on those areas.

There are also increased demands for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.

The World Health Organization calls the situation an infodemic: an overabundance of information, both true and false, that makes it hard to find trustworthy sources.

The tech companies “are dealing with more information with less staff,” Brookie said. “Which is why you’ve seen these decisions to move to more automated systems. Because frankly, there’s not enough people to look at the amount of information that’s ongoing.”

That makes the platforms’ decisions right now even more important, he said. “I think that we should all rely on more moderation rather than less moderation, in order to make sure that the vast majority of people are connecting with objective, science-based facts.”

Some Facebook users raised alarm that automated review was already causing problems.

When they tried to post links to mainstream news sources like The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.

Facebook said the posts were erroneously flagged as spam due to a glitch in its automated spam filter.

Zuckerberg denied the problem was related to shifting content moderation from humans to computers.

“This is a completely separate system on spam,” he said. “This is not about any kind of near-term change, this was just a technical error.”
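For what it’s worth, a link-spam filter can misfire in exactly this way without any change to moderation staffing. The sketch below is purely hypothetical (invented domains, scores and failure mode; Facebook has not described the actual glitch), but it shows how a single bad default or configuration change can sweep up mainstream news links.

```python
# Hypothetical sketch: domains, scores and the bug shown are invented for
# illustration; they do not reflect Facebook's actual spam system.
DOMAIN_REPUTATION = {
    "theatlantic.com": 0.02,      # low spam likelihood
    "buzzfeed.com": 0.03,
    "cheap-pills.example": 0.98,  # high spam likelihood
}

SPAM_THRESHOLD = 0.90
DEFAULT_SCORE = 0.50              # score for domains missing from the table

def should_block(domain: str, threshold: float = SPAM_THRESHOLD) -> bool:
    return DOMAIN_REPUTATION.get(domain, DEFAULT_SCORE) >= threshold

# Normal behaviour: mainstream outlets pass, obvious spam is blocked.
assert not should_block("theatlantic.com")
assert should_block("cheap-pills.example")

# A bad deploy that mangles the threshold (or drops the reputation table)
# suddenly flags everything above it, legitimate news links included.
assert should_block("theatlantic.com", threshold=0.01)
```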

Source: Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Pedophiles, anti-vaxxers, homophobes: YouTube’s algorithm caters to them all

Denise Balkissoon on the business models driving some of the hate:

Social-media platforms appear to be having an amorality contest, and this week it was YouTube’s turn to shrug at the harm that it’s caused.

On Monday, the New York Times reported that the platform failed to protect children from people who sexualize them, even though it has known about the problem for months. When prompted with a search for erotic videos, YouTube’s recommendation algorithm is still serving up images of increasingly young children doing things that should be innocuous, such as playing in swimsuits or doing gymnastics.

The next day, Vox journalist Carlos Maza received a reply to his complaints about being targeted by a YouTube vlogger who he said had spent years aiming homophobic, racist and hateful insults at him. The vlogger has almost 4 million subscribers, some of whom allegedly targeted Mr. Maza across multiple platforms and in his personal inbox with death threats and threats to release his personal information online.

Even so, replied YouTube, “while we found language that was clearly hurtful, the videos as posted don’t violate our policies.” Which is confusing, since those policies advise users not to post content that “makes hurtful and negative personal comments/videos about another person” or that “incites others to harass or threaten individuals on or off YouTube.”

Every major social-media platform – Twitter, Facebook, Reddit – has played a part in creating this age of disinformation and extremism. But unlike the other platforms, YouTube shares the ad money it makes with content creators: Tech journalist Julia Carrie Wong argues that it’s effectively their employer, whether it accepts that title or not. That means the platform is directly delivering rewards to its creators, including those who propagate prejudice, creepiness and lies. In fact, it even helps them spread their message.

Some inside the company have tried to solve the issue. In April, Bloomberg published a story for which it interviewed “scores of employees” who said they had long known that the site’s recommendation algorithm was leading people toward “false, incendiary and toxic content.”

But senior executives, including chief executive officer Susan Wojcicki, seem to be so focused on the advertising money that YouTube’s audience brings in that they ignore the well-being of those same users. They dismissed these warnings, along with suggestions of how to counter the problem. The site’s growth depends on “engagement,” after all – the raw amount of time people stare at the screen. And what keeps them there is a recommendation engine that pushes out increasingly extreme or explicit content.

At the 2018 South by Southwest conference, Bloomberg reported, Ms. Wojcicki defended the problematic content YouTube hosts by comparing the platform to a library. “There have always been controversies, if you look back at libraries,” she said.

But YouTube isn’t a bookshelf. It’s a billion-dollar bookseller, promoting some of the hundreds of millions of stories in its possession over others. Its algorithm doesn’t ignore, or even bury, the factless ramblings of vaccine-science deniers (including at least one in Montreal, a city now seeing an uptick in measles cases). No, it lifts them out of its infinite catalogue and thrusts them out into the world, with the book cover facing out and an “Audience Favourite” sticker slapped on the front.

Revelations of this kind of social-media irresponsibility now lead, reliably, to a certain kind of reaction: the patchwork, flip-flopping, half-measure responses that platforms think will fool us into believing they care. After learning that pedophiles were using comment sections to try to goad children into exploiting themselves, YouTube took comments off of some, but not all, videos featuring children. When Mr. Maza’s situation led to a huge outcry, YouTube “demonetized” the vlogger in question, cutting off his access to ad revenue without a clear explanation of why it was changing its decision, or when and how the revenue might be reinstated. The criticism continues, as do the company’s inadequate solutions; now YouTube is demonetizing or removing creators it deems extremist entirely, interfering with documentary makers and researchers in the process, and putting itself at risk of being criticized for interfering with free speech.

Free speech is a political issue. Free amplification, though, is a business decision that YouTube is actively making. Which is why the one response that insiders, observers and experts have long advocated continues to be ignored: designing a new, more ethical recommendation algorithm that doesn’t reward repugnant behaviour.

Doing so would reduce traffic, and therefore revenue, for creators, a spokesperson told the Times this week. Somehow, though, she didn’t get around to pointing out that the bulk of that money ends up with YouTube.

Source: Pedophiles, anti-vaxxers, homophobes: YouTube’s algorithm caters to them all: Denise Balkissoon

How YouTube Built a Radicalization Machine for the Far-Right

Good long read on how YouTube’s algorithms work to drive people towards more extremism:

For David Sherratt, like so many teenagers, far-right radicalization began with video game tutorials on YouTube. He was 15 years old and loosely liberal, mostly interested in “Call of Duty” clips. Then YouTube’s recommendations led him elsewhere.

“As I kept watching, I started seeing things like the online atheist community,” Sherratt said, “which then became a gateway to the atheism community’s civil war over feminism.” Due to a large subculture of YouTube atheists who opposed feminism, “I think I fell down that rabbit hole a lot quicker,” he said.

During that four-year trip down the rabbit hole, the teenager made headlines for his involvement in the men’s rights movement, a fringe ideology whose adherents believe men are oppressed by women, and which he no longer supports. He made videos with a prominent YouTuber now beloved by the far right.

He attended a screening of a documentary on the “men’s rights” movement, and hung out with other YouTubers afterward, where he met a young man who seemed “a bit off,” Sherratt said. Still, he didn’t think much of it, and ended up posing for a group picture with the man and other YouTubers. Some of Sherratt’s friends even struck up a rapport with the man online afterward, which prompted Sherratt to check out his YouTube channel.

What he found soured his outlook on the documentary screening. The young man’s channel was full of Holocaust denial content.

“I’d met a neo-Nazi and didn’t even know it,” Sherratt said.

The encounter was part of his disenchantment with the far-right political world he’d slowly entered toward the end of his childhood.

“I think one of the real things that made it so difficult to get out and realize how radicalized I’d become in certain areas was the fact that in a lot of ways, far-right people make themselves sound less far-right; more moderate or more left-wing,” Sherratt said.

Sherratt wasn’t alone. YouTube has become a quiet powerhouse of political radicalization in recent years, powered by an algorithm that a former employee says suggests increasingly fringe content. And far-right YouTubers have learned to exploit that algorithm and land their videos high in the recommendations on less extreme videos. The Daily Beast spoke to three men whose YouTube habits pushed them down a far-right path and who have since logged out of hate.

Fringe by Design

YouTube has a massive viewership, with nearly 2 billion monthly logged-in users, many of them young. The site is more popular among teenagers than Facebook and Twitter. A 2018 Pew study found that 85 percent of U.S. teens used YouTube, making it by far the most popular online platform for the under-20 set. (Facebook and Twitter, which have faced regulatory ire for extremist content, are used by 51 percent and 32 percent of teens, respectively.)

Launched in 2005, YouTube was quickly acquired by Google. The tech giant set about trying to maximize profits by keeping users watching videos. The company hired engineers to craft an algorithm that would recommend new videos before a user had finished watching their current video.

Former YouTube engineer Guillaume Chaslot was hired to a team that designed the algorithm in 2010.

“People think it’s suggesting the most relevant, this thing that’s very specialized for you. That’s not the case,” Chaslot told The Daily Beast, adding that the algorithm “optimizes for watch-time,” not for relevance.

“The goal of the algorithm is really to keep you online the longest,” he said.

That fixation on watch-time can be banal or dangerous, said Becca Lewis, a researcher with the technology research nonprofit Data & Society. “In terms of YouTube’s business model and attempts to keep users engaged on their content, it makes sense what we’re seeing the algorithms do,” Lewis said. “That algorithmic behavior is great if you’re looking for makeup artists and you watch one person’s content and want a bunch of other people’s advice on how to do your eye shadow. But it becomes a lot more problematic when you’re talking about political and extremist content.”

Chaslot said it was apparent to him then that the algorithm could help reinforce fringe beliefs.

“I realized really fast that YouTube’s recommendation was putting people into filter bubbles,” Chaslot said. “There was no way out. If a person was into Flat Earth conspiracies, it was bad for watch-time to recommend anti-Flat Earth videos, so it won’t even recommend them.”
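Chaslot’s filter-bubble point is easy to make concrete. Below is a toy recommender in Python; the watch-time model and the data are invented for illustration and are not YouTube’s actual system. Because candidates are ranked purely by predicted watch time for this particular viewer, a debunking video the model expects the viewer to click away from simply never surfaces.

```python
from typing import Dict, List

def predicted_watch_minutes(history: List[str], candidate: Dict) -> float:
    """Stand-in for a learned watch-time model: crude topic affinity
    between the viewer's history and the candidate video."""
    affinity = sum(1 for topic in history if topic == candidate["topic"])
    return candidate["avg_minutes"] * (1 + affinity)

def recommend(history: List[str], candidates: List[Dict], k: int = 1) -> List[Dict]:
    # Rank purely by expected watch time; nothing here rewards accuracy,
    # diversity or counter-speech.
    ranked = sorted(candidates,
                    key=lambda c: predicted_watch_minutes(history, c),
                    reverse=True)
    return ranked[:k]

candidates = [
    {"title": "Flat Earth PROOF compilation", "topic": "flat_earth", "avg_minutes": 22},
    {"title": "Why the Earth is round (debunk)", "topic": "debunk", "avg_minutes": 9},
    {"title": "Cute cat video", "topic": "cats", "avg_minutes": 3},
]

# A viewer two videos deep into flat-Earth content gets more of the same;
# the debunk loses on predicted watch time and is never shown.
print(recommend(["flat_earth", "flat_earth"], candidates))
```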

Lewis and other researchers have noted that recommended videos often tend toward the fringes. Writing for The New York Times, sociologist Zeynep Tufekci observed that after she watched videos of Donald Trump, YouTube recommended videos “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Matt, a former right-winger who asked to withhold his name, was personally trapped in such a filter bubble.

For instance, he described watching a video of Bill Maher and Ben Affleck discussing Islam, and then being recommended a more extreme video about Islam by Infowars employee and conspiracy theorist Paul Joseph Watson. That video led to the next video, and the next.

“Delve into [Watson’s] channel and start finding his anti-immigration stuff which often in turn leads people to become more sympathetic to ethno-nationalist politics,” Matt said.

“This sort of indirectly sent me down a path to moving way more to the right politically as it led me to discover other people with similar far-right views.”

Now 20, Matt has since exited the ideology and built an anonymous internet presence where he argues with his ex-brethren on the right.

“I think YouTube certainly played a role in my shift to the right because through the recommendations I got,” he said, “it led me to discover other content that was very much right of center, and this only got progressively worse over time, leading me to discover more sinister content.”

Such opposition to feminism and racial equality movements is part of a YouTube movement that describes itself as “anti-social justice.”

Andrew, who also asked to withhold his last name, is a former white supremacist who has since renounced the movement. These days, he blogs about topics the far right views as anathema: racial justice, gender equality, and, one of his personal passions, the furry community. But an interest in video games and online culture was a constant over his past decade of ideological evolution. When Andrew was 20, he said, he became sympathetic to white nationalism after ingesting the movement’s talking points on an unrelated forum.

Gaming culture on YouTube turned him further down the far-right path. In 2014, a coalition of trolls and right-wingers launched Gamergate, a harassment campaign against people they viewed as trying to advance feminist or “social justice” causes in video games. The movement had a large presence on YouTube, where it convinced some gamers (particularly young men) that their video games were under attack.

“It manufactured a threat to something people put an inordinate amount of value on,” Andrew said. “‘SJWs’ [social justice warriors] were never a threat to video games. But if people could be made to believe they were,” then they were susceptible to further, wilder claims about these new enemies on the left.

Matt described the YouTube-fed feelings of loss as a means of radicalizing young men.

“I think the anti-SJW stuff appeals to young white guys who feel like they’re losing their status for lack of a better term,” he said. “They see that minorities are advocating for their own rights and this makes them uncomfortable so they try and fight against it.”

While in the far-right community, Andrew saw anti-feminist content act as a gateway to more extreme videos.

“The false idea that social justice causes have some sort of nefarious ulterior motive, that they’re distorting the truth somehow” can help open viewers to more extreme causes, he said. “Once you’ve gotten someone to believe that, you can actually go all the way to white supremacy fairly quickly.”

Lewis identified the community as one of several radicalization pathways “that can start from a mainstream conservative perspective: not overtly racist or sexist, but focused on criticizing feminism, focusing on criticizing Black Lives Matter. From there it’s really easy to access content that’s overtly racist and overtly sexist.”

Chaslot, the former YouTube engineer, said he suggested the company let users opt out of the recommendation algorithm, but claims Google was not interested.

Google’s chief executive officer, Sundar Pichai, paid lip service to the problem during a congressional hearing last week. When questioned about a particularly noxious conspiracy theory about Hillary Clinton that appears high in searches for unrelated videos, the CEO made no promise to act.

“It’s an area we acknowledge there’s more work to be done, and we’ll definitely continue doing that,” Pichai said. “But I want to acknowledge there is more work to be done. With our growth comes more responsibility. And we are committed to doing better as we invest more in this area.”

But while YouTube mulls a solution, people are getting hurt.

Hard Right Turn

On Dec. 4, 2016, Edgar Welch fired an AR-15 rifle in a popular Washington, D.C. pizza restaurant. Welch believed Democrats were conducting child sex-trafficking through the pizzeria basement, a conspiracy theory called “Pizzagate.”

Like many modern conspiracy theories, Pizzagate proliferated on YouTube and those videos appeared to influence Welch, who sent them to others. Three days before the shooting, Welch texted a friend about the conspiracy. “Watch ‘PIZZAGATE: The bigger Picture’ on YouTube,” he wrote.

Other YouTube-fed conspiracy theories have similarly resulted in threats of gun violence. A man who was heavily involved in conspiracy theory communities on YouTube allegedly threatened a massacre at YouTube headquarters this summer, after he came to believe a different conspiracy theory about video censorship. Another man who believed the YouTube-fueled QAnon theory led an armed standoff at the Hoover Dam in June. A neo-Nazi arrested with a trove of guns last week ran a YouTube channel where he talked about killing Jewish people.

Religious extremists have also found a home on YouTube. From March to June 2018, people uploaded 1,348 ISIS videos to the platform, according to a study by the Counter Extremism Project. YouTube deleted 76 percent of those videos within two hours of their uploads, but most accounts still remained online. The radical Muslim-American cleric Anwar al-Awlaki radicalized multiple would-be terrorists and his sermons were popular on YouTube.

Less explicitly violent actors can also radicalize viewers by exploiting YouTube’s algorithm.

“YouTubers are extremely savvy at informal SEO [search engine optimization],” Lewis of Data & Society said. “They’ll tag their content with certain keywords they suspect people may be searching for.”

Chaslot described a popular YouTube title format that plays well with the algorithm, as well as to viewers’ emotions. “Keywords like ‘A Destroys B’ or ‘A Humiliates B’” can “exploit the algorithm and human vulnerabilities.” Conservative videos, like those featuring right-wing personality Ben Shapiro or Proud Boys founder Gavin McInnes, often employ that format.

Some fringe users try to proliferate their views by making them appear in the search results for less-extreme videos.

“A moderate user will have certain talking points,” Sherratt said. “But the radical ones, because they’re always trying to infiltrate, and leech subscribers and viewers off those more moderate positions, they’ll put in all the exact same tags, but with a few more. So it won’t just be ‘migrant crisis’ and ‘Islam,’ it’ll be ‘migrant crisis,’ ‘Islam,’ and ‘death of the West.’”

“You could be watching the more moderate videos and the extreme videos will be in that [recommendation] box because there isn’t any concept within the anti-social justice sphere that the far right aren’t willing to use as a tool to co-opt that sphere.”
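Sherratt’s description of tag piggybacking maps onto a very simple mechanism. The toy related-video matcher below (Python, with invented tag-overlap scoring rather than YouTube’s real logic) shows why copying a moderate video’s tags, plus a few extra, reliably lands the extreme video in its recommendation box.

```python
def tag_overlap(a: set, b: set) -> int:
    return len(a & b)

def related_videos(current_tags: set, catalogue: dict, k: int = 2) -> list:
    # Rank other videos by how many tags they share with the one being watched.
    return sorted(catalogue,
                  key=lambda title: tag_overlap(current_tags, catalogue[title]),
                  reverse=True)[:k]

catalogue = {
    "Another moderate migrant-crisis video": {"migrant crisis", "islam"},
    "Extreme: the death of the West": {"migrant crisis", "islam", "europe",
                                       "death of the west"},
    "Unrelated: pasta recipe": {"cooking", "pasta"},
}

# Tags of the moderate video currently being watched.
watching = {"migrant crisis", "islam", "europe"}

# The extreme video copies all of the moderate tags and adds its own,
# so it tops the overlap ranking and shows up in the sidebar.
print(related_videos(watching, catalogue))
```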

Vulnerable Viewership

Young people, particularly those without fully formed political beliefs, can be easily influenced by extreme videos that appear in their recommendations. “YouTube appeals to such a young demographic,” Lewis said. “Young people are more susceptible to having their political ideals shaped. That’s the time in your life when you’re figuring out who you are and what your politics are.”

But YouTube hasn’t received the same attention as Facebook and Twitter, which are more popular with adults. During Pichai’s Tuesday congressional testimony, Congress members found time to ask the Google CEO about iPhones (a product Google does not manufacture), but asked few questions about extremist content.

Pichai’s testimony came two days after PewDiePie, YouTube’s most popular user, recommended a channel that posts white nationalist and anti-Semitic videos. PewDiePie (real name Felix Kjellberg) has more than 75 million subscribers, many of whom are young people. Kjellberg has previously been accused of bigotry, after he posted at least nine videos featuring anti-Semitic or Nazi imagery. In a January 2017 stunt, he hired people to hold a “death to all Jews” sign on camera.

Some popular YouTubers in the less-extreme anti-social justice community became more overtly sexist and racist in late 2016 and early 2017, a trend some viewers might not notice.

“The rhetoric did start shifting way further right and the Overton Window was moving,” Sherratt said. “One minute it was ‘we’re liberals and we just think these social justice types are too extreme or going too far in their tactics’ and then six months later it turned into ‘progressivism is an evil ideology.’”

One of Matt’s favorite YouTube channels “started off as a tech channel that didn’t like feminists and now he makes videos where almost everything is a Marxist conspiracy to him,” he said.

In some cases, YouTube videos can supplant a person’s previous information sources. Conspiracy YouTubers often discourage viewers from watching or reading other news sources, Chaslot has previously noted. The trend is good for conspiracy theorists and YouTube’s bottom line; viewers become more convinced of conspiracy theories and consume more advertisements on YouTube.

The problem extends to young YouTube viewers, who might follow their favorite channel religiously, but not read more conventional news outlets.

“It’s where people are getting their information about the world and about politics,” Lewis said. “Sometimes instead of going to traditional news sources, people are just watching the content of an influencer they like, who happens to have certain political opinions. Kids may be getting a very different experience from YouTube than their parents expect, whether it’s extremist or not. I think YouTube has the power to shape people’s ideologies more than people give it credit for.”

Some activists have called on YouTube to ban extreme videos. The company often counters that it is difficult to screen the hundreds of hours of video reportedly uploaded every minute. Even Chaslot said he’s skeptical of bans’ efficacy.

“You can ban again and again, but they’ll change the discourse. They’re very good at staying under the line of acceptable,” he said. He pointed to videos that call for Democratic donor George Soros and other prominent Democrats to be “the first lowered to hell.” “The video explained why they don’t deserve to live, and doesn’t explicitly say to kill them,” so it skirts the rules against violent content.

At the same time, “it leads to a kind of terrorist mentality” and shows up in recommendations.

“Wherever you put the line, people will find a way to be on the other side of it,” Chaslot said.

“It’s not a content moderation issue, it’s an algorithm issue.”
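One way to read that last distinction: removal and amplification can be separate decisions driven by the same signal. The sketch below is an assumption-laden illustration (invented thresholds, not YouTube policy) of how a borderline video can stay up, respecting the removal line, while still being kept out of recommendations, which is the lever Chaslot is pointing at.

```python
REMOVAL_THRESHOLD = 0.95        # only clear policy violations are taken down
RECOMMENDATION_CUTOFF = 0.60    # borderline content stays up but is not amplified

def decide(violation_score: float) -> dict:
    return {
        "removed": violation_score >= REMOVAL_THRESHOLD,
        "eligible_for_recommendation": violation_score < RECOMMENDATION_CUTOFF,
    }

# A video that "stays under the line" of removal (score 0.80) is not deleted,
# but under this scheme it is also never pushed into anyone's sidebar.
print(decide(0.80))  # {'removed': False, 'eligible_for_recommendation': False}
```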

Source: How YouTube Built a Radicalization Machine for the Far-Right

Inside YouTube’s Far-Right Radicalization Factory

Interesting study, symptomatic of the problems with social media companies:

YouTube is a readymade radicalization network for the far right, a new study finds.

The Google-owned video platform recently banned conspiracy outlet InfoWars and its founder Alex Jones for hate speech. But another unofficial network of fringe channels is pulling YouTubers down the rabbit hole of extremism, said the Tuesday report from research group Data & Society.

The study tracked 65 YouTubers—some of them openly alt-right or white nationalist, others who claim to be simply libertarians, and most of whom have voiced anti-progressive views—as they collaborated across YouTube channels. The result, the study found, is an ecosystem in which a person searching for video game reviews can quickly find themselves watching a four-hour conversation with white nationalist Richard Spencer.

Becca Lewis, the researcher behind the report, calls the group the Alternative Influence Network. Its members include racists like Spencer, Gamergate figureheads like Carl Benjamin (who goes by ‘Sargon of Akkad’), and talk-show hosts like Joe Rogan, who promotes guests from fringe ideologies. Not all people in the group express far-right political views themselves, but will platform guests who do. Combined, the 65 YouTubers account for millions of YouTube followers, who can find themselves clicking through a series of increasingly radical-right videos.

Take Rogan, a comedian and self-described libertarian whose 3.5 million subscribers recently witnessed him host a bizarre interview with Tesla founder Elon Musk. While Rogan might not express extreme views, his guests often tend to be more fringe. Last year, he hosted Benjamin, the anti-feminist who gained a large following for his harassment campaigns during Gamergate.

Rogan’s interview with Benjamin, which has nearly 2 million views, describes Benjamin as an “Anti-Identitarian liberal YouTuber.” It’s a misleading title for Rogan fans who might go on to view Benjamin’s work.

Benjamin, in turn, has also claimed not to support the alt-right. Like other less explicitly racist members of the network, he’s hyped his “not racist” cred by promoting livestreamed “debates” (a favorite term in these circles) with white supremacists.

But the line between “debate” and collaboration can be indistinct, as Lewis noted in her study. She pointed to one such debate between Benjamin and Spencer, which was moderated by white nationalist creep Jean-Francois Gariepy, and which briefly became the world’s top trending live video on YouTube, with more than 10,000 live viewers.

“In his video with [Richard] Spencer, Benjamin was presumably debating against scientific racism, a stance he frequently echoes,” Lewis wrote in her study. “However, by participating in the debate, he was building a shared audience—and thus, a symbiotic relationship—with white nationalists. In fact, Benjamin has become a frequent guest on channels that host such ‘debates,’ which often function as group entertainment as much as genuine disagreements.”

Debates are often better measures of rhetorical skill than they are of an idea’s merits. A well-spoken idiot might stand a good chance against a shy expert in a televised argument. When they disagreed during the four-hour livestream, Spencer, a more practiced speaker, mopped the floor with Benjamin. The debate earned Spencer new followers, some of whom appear to have been lured in by the other YouTubers’ thinly-disguised bigotry.

“I’ve never really listened to Spencer speak before,” one commenter wrote. “But it is immediately apparent that he’s on a whole different level.”

And Benjamin has been willing to collaborate with further-right YouTubers when the circumstances benefited him.

“In many ways, we do have similar objectives,” he told the openly racist YouTuber Millennial Woes in one video cited in the study. “We have the same enemies, right? I mean, you guys hate the SJWs, I hate the SJWs. I want to see the complete destruction of social justice. . . . If the alt-right took the place of the SJWs, I would have a lot less to fear.”

“Some of the more mainstream conservatives or libertarians are able to have it both ways,” Lewis told The Daily Beast on Tuesday. “They can say they reject the alt-right … but at the same time, there’s a lot of nudging and winking.”

Her report cited other instances of this phenomenon, including self-identified “classical liberal” YouTuber Dave Rubin, who promotes anti-progressive views on his talk show, where he hosts more extreme personalities, ostensibly for debate. But the debates can skew friendly. The study pointed to a conversation in which Rubin allowed far-right YouTuber Stefan Molyneux to make junk science claims unchecked. A description for the video encouraged viewers to do their own research, but provided links to Molyneux’s own content.

“It gives a generally unchallenged platform for that white nationalist and their ideas,” Lewis said on Tuesday.

YouTube’s algorithms can sometimes reward fringe content. Researcher Zeynep Tufekci previously highlighted the phenomenon when she noted that, after she watched footage of Donald Trump rallies, YouTube began recommending an increasingly radical series of white supremacist and conspiracy videos.

Lewis said YouTubers have learned to leverage the site’s algorithms, frontloading their videos with terms like “liberal” and “intersectional” in a bid to “hijack” search results that would typically be dominated by the left.
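Lewis’s point about “hijacking” search terms can also be pictured with a toy ranker (Python; the scoring is an invented stand-in, not YouTube search). If ranking leans heavily on keyword matches in titles and descriptions, a video that front-loads terms like “liberal” and “intersectional” surfaces for those queries regardless of its actual stance.

```python
# Toy search ranker (invented scoring): rank videos for a query by how many
# query words appear in their metadata. Nothing here checks whether a video's
# viewpoint matches what the searcher expects.
def search(query: str, videos: dict, k: int = 2) -> list:
    terms = set(query.lower().split())
    def score(metadata: str) -> int:
        return len(terms & set(metadata.lower().split()))
    return sorted(videos, key=lambda title: score(videos[title]), reverse=True)[:k]

videos = {
    "Intersectional feminism, explained":
        "liberal intersectional feminism explainer",
    "DESTROYING liberal intersectional feminism":
        "liberal intersectional feminism sjw owned debunked",
}

# A search for left-leaning terms surfaces the hostile video just as readily,
# because it front-loads the same keywords.
print(search("liberal intersectional feminism", videos))
```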

YouTube, which is built to keep users watching videos, might be a perfect recruiting platform for fringe movements, which want followers to remain similarly engaged.

“One way scholars of social movements often talk about recruitment is in terms of the capacity of the movement to bring in new recruits and then retain them,” Joan Donovan, a research lead at Data & Society, said on Tuesday. “Social media is optimized for engagement, which is both recruitment of an audience and retention of that audience. These groups often use the tools of analytics to make sure they continue to grow their networks.”

Source: Inside YouTube’s Far-Right Radicalization Factory

YouTube, the Great Radicalizer – The New York Times

Good article on how social media reinforces echo chambers and tends towards more extreme views:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.
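Both Mr. Chaslot’s tracking program and Mr. Albright’s “seeding” experiment boil down to the same procedure: follow the recommendation lists outward from a starting point and count what recurs. Here is a minimal sketch in Python; get_recommendations() is a placeholder for whatever scraping or API call actually supplies the “up next” list, and everything else is an assumption for illustration.

```python
from collections import Counter, deque
from typing import Callable, Iterable, List

def crawl_recommendations(seeds: Iterable[str],
                          get_recommendations: Callable[[str], List[str]],
                          depth: int = 3) -> Counter:
    """Follow recommendation lists outward from seed videos, counting how
    often each video ID is recommended along the way."""
    counts: Counter = Counter()
    frontier = deque((video_id, 0) for video_id in seeds)
    seen = set(seeds)
    while frontier:
        video_id, d = frontier.popleft()
        if d >= depth:
            continue
        for rec in get_recommendations(video_id):
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                frontier.append((rec, d + 1))
    return counts

# Usage sketch: seed one crawl with pro-Clinton videos and another with
# pro-Trump videos (or a "crisis actor" search result), then compare which
# videos dominate each Counter; the most common entries are the ones the
# algorithm pushes hardest from that starting point.
```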

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

via YouTube, the Great Radicalizer – The New York Times