The language gives it away: How an algorithm can help us detect fake news

Interesting:

Have you ever read something online and shared it among your networks, only to find out it was false?

As a software engineer and computational linguist who spends most of her work and even leisure hours in front of a computer screen, I am concerned about what I read online. In the age of social media, many of us consume unreliable news sources. We’re exposed to a wild flow of information in our social networks — especially if we spend a lot of time scanning our friends’ random posts on Twitter and Facebook.

My colleagues and I at the Discourse Processing Lab at Simon Fraser University have conducted research on the linguistic characteristics of fake news.

The effects of fake news

A study in the United Kingdom found that about two-thirds of the adults surveyed regularly read news on Facebook, and that half of those had at some point initially believed a fake news story. Another study, conducted by researchers at the Massachusetts Institute of Technology, focused on the cognitive aspects of exposure to fake news and found that, on average, news readers believe a false news headline at least 20 percent of the time.

False stories now spread 10 times faster than real news, and the problem of fake news seriously threatens our society.

For example, during the 2016 election in the United States, an astounding number of U.S. citizens believed and shared a patently false conspiracy claiming that Hillary Clinton was connected to a human trafficking ring run out of a pizza restaurant. The owner of the restaurant received death threats, and one believer showed up at the restaurant with a gun. This — and a number of other fake news stories distributed during the election season — had an undeniable impact on people’s votes.

 

It’s often difficult to find the origin of a story after partisan groups, social media bots and friends of friends have shared it thousands of times. Fact-checking websites such as Snopes and Buzzfeed can only address a small portion of the most popular rumors.

The technology behind the internet and social media has enabled this spread of misinformation; maybe it’s time to ask what this technology has to offer in addressing the problem.

In an interview, Hillary Clinton discusses ‘Pizzagate’ and the problem of fake news online.

Giveaways in writing style

Recent advances in machine learning have made it possible for computers to instantaneously complete tasks that would have taken humans much longer. For example, there are computer programs that help police identify the faces of criminal suspects in a matter of seconds. This kind of artificial intelligence trains algorithms to classify, detect and make decisions.

When machine learning is applied to natural language processing, it is possible to build text classification systems that recognize one type of text from another.

During the past few years, natural language processing scientists have become more active in building algorithms to detect misinformation; this helps us to understand the characteristics of fake news and develop technology to help readers.

One approach finds relevant sources of information, assigns each source a credibility score and then integrates them to confirm or debunk a given claim. This approach is heavily dependent on tracking down the original source of news and scoring its credibility based on a variety of factors.
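As a rough illustration of this first approach (a sketch of the general idea, not any particular published system), the snippet below aggregates the verdicts of several sources, weighted by hypothetical credibility scores, into an overall confirm-or-debunk score.

```python
# Minimal sketch of credibility-weighted claim checking (illustrative only).
# The sources, credibility scores and verdicts below are invented examples.

def aggregate_verdicts(evidence):
    """evidence: list of (credibility, verdict) pairs, where credibility is in
    [0, 1] and verdict is +1 (supports the claim) or -1 (refutes it).
    Returns a score in [-1, 1]; positive means the claim looks confirmed."""
    total_weight = sum(cred for cred, _ in evidence)
    if total_weight == 0:
        return 0.0
    return sum(cred * verdict for cred, verdict in evidence) / total_weight

evidence = [
    (0.9, -1),   # a highly credible fact-checker refutes the claim
    (0.8, -1),   # an established newspaper also refutes it
    (0.2, +1),   # a low-credibility blog supports it
]

score = aggregate_verdicts(evidence)
print("confirmed" if score > 0 else "debunked", round(score, 2))
```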

A second approach examines the writing style of a news article rather than its origin. The linguistic characteristics of a written piece can tell us a lot about the authors and their motives. For example, specific words and phrases tend to occur more frequently in a deceptive text compared to one written honestly.

Spotting fake news

Our research identifies linguistic characteristics to detect fake news using machine learning and natural language processing technology. Our analysis of a large collection of fact-checked news articles on a variety of topics shows that, on average, fake news articles use more expressions that are common in hate speech, as well as words related to sex, death and anxiety. Genuine news, on the other hand, contains a larger proportion of words related to work (business) and money (economy).
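One simple way to operationalize observations like these is to count how often words from different lexical categories appear in a text. Analyses of this kind typically rely on large validated lexicons (such as LIWC); the tiny word lists in the sketch below are invented stand-ins, shown only to make the idea concrete.

```python
# Toy word-category profiler (illustrative; the word lists are invented
# stand-ins for the much larger lexicons used in real stylometric analysis).
import re
from collections import Counter

LEXICON = {
    "anxiety": {"fear", "worried", "panic", "threat"},
    "death":   {"dead", "kill", "die", "fatal"},
    "work":    {"business", "job", "market", "employ"},
    "money":   {"economy", "dollar", "profit", "tax"},
}

def category_profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for category, vocab in LEXICON.items():
            if w in vocab:
                counts[category] += 1
    total = max(len(words), 1)
    # Return the proportion of words falling in each category.
    return {cat: counts[cat] / total for cat in LEXICON}

print(category_profile("Markets rallied as the new tax plan boosted business confidence."))
```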

This suggests that a stylistic approach combined with machine learning might be useful in detecting suspicious news.

Our fake news detector is built based on linguistic characteristics extracted from a large body of news articles. It takes a piece of text and shows how similar it is to the fake news and real news items that it has seen before. (Try it out!)

The main challenge, however, is to build a system that can handle the vast variety of news topics and the rapid turnover of headlines online. Computer algorithms learn from samples, and if those samples are not sufficiently representative of online news, the model’s predictions will not be reliable.

One option is to have human experts collect and label a large quantity of fake and real news articles. This data enables a machine-learning algorithm to find features that keep recurring in each collection, regardless of topic. Ultimately, the algorithm will be able to distinguish with confidence between previously unseen real and fake news articles.
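A minimal sketch of that supervised setup, using scikit-learn’s TF-IDF features and logistic regression (one reasonable choice; the article does not say which model the lab actually uses), with a tiny invented dataset so the example runs end to end:

```python
# Sketch of a supervised fake/real news text classifier (assumes scikit-learn
# is installed; the toy dataset below is invented purely for illustration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Shocking secret plot will destroy the country, insiders claim",
    "You will not believe what this politician is hiding from you",
    "Miracle cure banned by doctors, share before it is deleted",
    "Anonymous post claims global conspiracy behind the news",
    "Central bank holds interest rates steady amid slowing growth",
    "City council approves budget for new public transit line",
    "Company reports quarterly earnings in line with forecasts",
    "Researchers publish peer-reviewed study on crop yields",
]
labels = ["fake", "fake", "fake", "fake", "real", "real", "real", "real"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# For a new article, show how similar it looks to the fake vs. real examples.
probs = model.predict_proba(["Leaked memo reveals secret plan to rig election"])
print(dict(zip(model.classes_, probs[0].round(2))))
```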

Source: The language gives it away: How an algorithm can help us detect fake news

How YouTube Built a Radicalization Machine for the Far-Right

Good long read on how YouTube’s algorithms work to drive people towards more extremism:

For David Sherratt, like so many teenagers, far-right radicalization began with video game tutorials on YouTube. He was 15 years old and loosely liberal, mostly interested in “Call of Duty” clips. Then YouTube’s recommendations led him elsewhere.

“As I kept watching, I started seeing things like the online atheist community,” Sherratt said, “which then became a gateway to the atheism community’s civil war over feminism.” Due to a large subculture of YouTube atheists who opposed feminism, “I think I fell down that rabbit hole a lot quicker,” he said.

During that four-year trip down the rabbit hole, the teenager made headlines for his involvement in the men’s rights movement, a fringe ideology which believes men are oppressed by women, and which he no longer supports. He made videos with a prominent YouTuber now beloved by the far right.

He attended a screening of a documentary on the “men’s rights” movement, and hung out with other YouTubers afterward, where he met a young man who seemed “a bit off,” Sherratt said. Still, he didn’t think much of it, and ended up posing for a group picture with the man and other YouTubers. Some of Sherratt’s friends even struck up a rapport with the man online afterward, which prompted Sherratt to check out his YouTube channel.

What he found soured his outlook on the documentary screening. The young man’s channel was full of Holocaust denial content.

“I’d met a neo-Nazi and didn’t even know it,” Sherratt said.

The encounter was part of his disenchantment with the far-right political world he’d slowly entered toward the end of his childhood.

“I think one of the real things that made it so difficult to get out and realize how radicalized I’d become in certain areas was the fact that in a lot of ways, far-right people make themselves sound less far-right; more moderate or more left-wing,” Sherratt said.

Sherratt wasn’t alone. YouTube has become a quiet powerhouse of political radicalization in recent years, powered by an algorithm that a former employee says suggests increasingly fringe content. And far-right YouTubers have learned to exploit that algorithm and land their videos high in the recommendations on less extreme videos. The Daily Beast spoke to three men whose YouTube habits pushed them down a far-right path and who have since logged out of hate.

Fringe by Design

YouTube has a massive viewership, with nearly 2 billion daily users, many of them young. The site is more popular among teenagers than Facebook and Twitter. A 2018 Pew study found that 85 percent of U.S. teens used YouTube, making it by far the most popular online platform for the under-20 set. (Facebook and Twitter, which have faced regulatory ire for extremist content, are popular among a respective 51 and 32 percent of teens.)

Launched in 2005, YouTube was quickly acquired by Google. The tech giant set about trying to maximize profits by keeping users watching videos. The company hired engineers to craft an algorithm that would recommend new videos before a user had finished watching their current video.

Former YouTube engineer Guillaume Chaslot was hired to a team that designed the algorithm in 2010.

“People think it’s suggesting the most relevant, this thing that’s very specialized for you. That’s not the case,” Chaslot told The Daily Beast, adding that the algorithm “optimizes for watch-time,” not for relevance.

“The goal of the algorithm is really to keep you online the longest,” he said.

That fixation on watch-time can be banal or dangerous, said Becca Lewis, a researcher with the technology research nonprofit Data & Society. “In terms of YouTube’s business model and attempts to keep users engaged on their content, it makes sense what we’re seeing the algorithms do,” Lewis said. “That algorithmic behavior is great if you’re looking for makeup artists and you watch one person’s content and want a bunch of other people’s advice on how to do your eye shadow. But it becomes a lot more problematic when you’re talking about political and extremist content.”

Chaslot said it was apparent to him then that the algorithm could help reinforce fringe beliefs.

“I realized really fast that YouTube’s recommendation was putting people into filter bubbles,” Chaslot said. “There was no way out. If a person was into Flat Earth conspiracies, it was bad for watch-time to recommend anti-Flat Earth videos, so it won’t even recommend them.”
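Chaslot’s description amounts to an objective function: rank candidate videos by expected watch time rather than by relevance or accuracy. The toy ranker below, with invented numbers, illustrates why a short debunking video loses to a long conspiracy video under that objective. It is a sketch of the idea, not YouTube’s actual system.

```python
# Toy recommender illustrating watch-time optimization (not YouTube's code).
# Each candidate has a predicted click probability and a predicted watch
# duration in minutes; all numbers are invented for illustration.
candidates = [
    {"title": "Flat Earth PROOF part 7",  "p_click": 0.30, "minutes": 18.0},
    {"title": "Why the Earth is round",   "p_click": 0.25, "minutes": 3.0},
    {"title": "Unrelated cooking video",  "p_click": 0.10, "minutes": 6.0},
]

def expected_watch_time(video):
    return video["p_click"] * video["minutes"]

# Ranking purely by expected watch time keeps the viewer inside the bubble:
for video in sorted(candidates, key=expected_watch_time, reverse=True):
    print(f'{expected_watch_time(video):4.1f}  {video["title"]}')
```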

Lewis and other researchers have noted that recommended videos often tend toward the fringes. Writing for The New York Times, sociologist Zeynep Tufekci observed that videos of Donald Trump led to recommended videos “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Matt, a former right-winger who asked to withhold his name, was personally trapped in such a filter bubble.

For instance, he described watching a video of Bill Maher and Ben Affleck discussing Islam, and seeing recommended a more extreme video about Islam by Infowars employee and conspiracy theorist Paul Joseph Watson. That video led to the next video, and the next.

“Delve into [Watson’s] channel and start finding his anti-immigration stuff which often in turn leads people to become more sympathetic to ethno-nationalist politics,” Matt said.

“This sort of indirectly sent me down a path to moving way more to the right politically as it led me to discover other people with similar far-right views.”

Now 20, Matt has since exited the ideology and built an anonymous internet presence where he argues with his ex-brethren on the right.

“I think YouTube certainly played a role in my shift to the right because through the recommendations I got,” he said, “it led me to discover other content that was very much right of center, and this only got progressively worse over time, leading me to discover more sinister content.”

This opposition to feminism and racial equality movements is part of a YouTube movement that describes itself as “anti-social justice.”

Andrew, who also asked to withhold his last name, is a former white supremacist who has since renounced the movement. These days, he blogs about topics the far right views as anathema: racial justice, gender equality, and, one of his personal passions, the furry community. But an interest in video games and online culture was a constant over his past decade of ideological evolution. When Andrew was 20, he said, he became sympathetic to white nationalism after ingesting the movement’s talking points on an unrelated forum.

Gaming culture on YouTube turned him further down the far-right path. In 2014, a coalition of trolls and right-wingers launched Gamergate, a harassment campaign against people they viewed as trying to advance feminist or “social justice” causes in video games. The movement had a large presence on YouTube, where it convinced some gamers (particularly young men) that their video games were under attack.

“It manufactured a threat to something people put an inordinate amount of value on,” Andrew said. “‘SJWs’ [social justice warriors] were never a threat to video games. But if people could be made to believe they were,” then they were susceptible to further, wilder claims about these new enemies on the left.

Matt described the YouTube-fed feelings of loss as a means of radicalizing young men.

“I think the anti-SJW stuff appeals to young white guys who feel like they’re losing their status for lack of a better term,” he said. “They see that minorities are advocating for their own rights and this makes them uncomfortable so they try and fight against it.”

While in the far-right community, Andrew saw anti-feminist content act as a gateway to more extreme videos.

“The false idea that social justice causes have some sort of nefarious ulterior motive, that they’re distorting the truth somehow” can help open viewers to more extreme causes, he said. “Once you’ve gotten someone to believe that, you can actually go all the way to white supremacy fairly quickly.”

Lewis identified the community as one of several radicalization pathways “that can start from a mainstream conservative perspective: not overtly racist or sexist, but focused on criticizing feminism, focusing on criticizing Black Lives Matter. From there it’s really easy to access content that’s overtly racist and overtly sexist.”

Chaslot, the former YouTube engineer, said he suggested the company let users opt out of the recommendation algorithm, but claims Google was not interested.

Google’s chief executive officer, Sundar Pichai, paid lip service to the problem during a congressional hearing last week. When questioned about a particularly noxious conspiracy theory about Hillary Clinton that appears high in searches for unrelated videos, the CEO made no promise to act.

“It’s an area we acknowledge there’s more work to be done, and we’ll definitely continue doing that,” Pichai said. “But I want to acknowledge there is more work to be done. With our growth comes more responsibility. And we are committed to doing better as we invest more in this area.”

But while YouTube mulls a solution, people are getting hurt.

Hard Right Turn

On Dec. 4, 2016, Edgar Welch fired an AR-15 rifle in a popular Washington, D.C. pizza restaurant. Welch believed Democrats were conducting child sex-trafficking through the pizzeria basement, a conspiracy theory called “Pizzagate.”

Like many modern conspiracy theories, Pizzagate proliferated on YouTube and those videos appeared to influence Welch, who sent them to others. Three days before the shooting, Welch texted a friend about the conspiracy. “Watch ‘PIZZAGATE: The bigger Picture’ on YouTube,” he wrote.

Other YouTube-fed conspiracy theories have similarly resulted in threats of gun violence. A man who was heavily involved in conspiracy theory communities on YouTube allegedly threatened a massacre at YouTube headquarters this summer, after he came to believe a different conspiracy theory about video censorship. Another man who believed the YouTube-fueled QAnon theory led an armed standoff at the Hoover Dam in June. A neo-Nazi arrested with a trove of guns last week ran a YouTube channel where he talked about killing Jewish people.

Religious extremists have also found a home on YouTube. From March to June 2018, people uploaded 1,348 ISIS videos to the platform, according to a study by the Counter Extremism Project. YouTube deleted 76 percent of those videos within two hours of their uploads, but most accounts still remained online. The radical Muslim-American cleric Anwar al-Awlaki radicalized multiple would-be terrorists and his sermons were popular on YouTube.

Less explicitly violent actors can also radicalize viewers by exploiting YouTube’s algorithm.

“YouTubers are extremely savvy at informal SEO [search engine optimization],” Lewis of Data & Society said. “They’ll tag their content with certain keywords they suspect people may be searching for.”

Chaslot described a popular YouTube title format that plays well with the algorithm, as well as to viewers’ emotions. “Keywords like ‘A Destroys B’ or ‘A Humiliates B’” can “exploit the algorithm and human vulnerabilities.” Conservative videos, like those featuring right-wing personality Ben Shapiro or Proud Boys founder Gavin McInnes, often employ that format.

Some fringe users try to proliferate their views by making them appear in the search results for less-extreme videos.

“A moderate user will have certain talking points,” Sherratt said. “But the radical ones, because they’re always trying to infiltrate, and leech subscribers and viewers off those more moderate positions, they’ll put in all the exact same tags, but with a few more. So it won’t just be ‘migrant crisis’ and ‘Islam,’ it’ll be ‘migrant crisis,’ ‘Islam,’ and ‘death of the West.’”

“You could be watching the more moderate videos and the extreme videos will be in that [recommendation] box because there isn’t any concept within the anti-social justice sphere that the far right aren’t willing to use as a tool to co-opt that sphere.”
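The tagging tactic Sherratt describes can be pictured as a simple similarity calculation. In the sketch below (invented tags and a naive Jaccard measure, not any platform’s real algorithm), an extreme video that copies a moderate video’s tags and adds a few of its own still scores as highly “related” to it.

```python
# Toy illustration of tag-based co-option (invented tags, naive Jaccard similarity).
moderate_video = {"migrant crisis", "islam", "europe", "news"}
copycat_extreme = {"migrant crisis", "islam", "europe", "news", "death of the west"}
unrelated_video = {"makeup", "tutorial", "eyeshadow"}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# The copycat inherits nearly all of the moderate video's similarity score,
# so a tag-sensitive "related videos" list would happily surface it.
print(round(jaccard(moderate_video, copycat_extreme), 2))   # ~0.8
print(round(jaccard(moderate_video, unrelated_video), 2))   # 0.0
```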

Vulnerable Viewership

Young people, particularly those without fully formed political beliefs, can be easily influenced by extreme videos that appear in their recommendations. “YouTube appeals to such a young demographic,” Lewis said. “Young people are more susceptible to having their political ideals shaped. That’s the time in your life when you’re figuring out who you are and what your politics are.”

But YouTube hasn’t received the same attention as Facebook and Twitter, which are more popular with adults. During Pichai’s Tuesday congressional testimony, Congress members found time to ask the Google CEO about iPhones (a product Google does not manufacture), but asked few questions about extremist content.

Pichai’s testimony came two days after PewDiePie, YouTube’s most popular user, recommended a channel that posts white nationalist and anti-Semitic videos. PewDiePie (real name Felix Kjellberg) has more than 75 million subscribers, many of whom are young people. Kjellberg has previously been accused of bigotry, after he posted at least nine videos featuring anti-Semitic or Nazi imagery. In a January 2017 stunt, he hired people to hold a “death to all Jews” sign on camera.

Some popular YouTubers in the less-extreme anti-social justice community became more overtly sexist and racist in late 2016 and early 2017, a trend some viewers might not notice.

“The rhetoric did start shifting way further right and the Overton Window was moving,” Sherratt said. “One minute it was ‘we’re liberals and we just think these social justice types are too extreme or going too far in their tactics’ and then six months later it turned into ‘progressivism is an evil ideology.’”

One of Matt’s favorite YouTube channels “started off as a tech channel that didn’t like feminists and now he makes videos where almost everything is a Marxist conspiracy to him,” he said.

In some cases, YouTube videos can supplant a person’s previous information sources. Conspiracy YouTubers often discourage viewers from watching or reading other news sources, Chaslot has previously noted. The trend is good for conspiracy theorists and YouTube’s bottom line; viewers become more convinced of conspiracy theories and consume more advertisements on YouTube.

The problem extends to young YouTube viewers, who might follow their favorite channel religiously, but not read more conventional news outlets.

“It’s where people are getting their information about the world and about politics,” Lewis said. “Sometimes instead of going to traditional news sources, people are just watching the content of an influencer they like, who happens to have certain political opinions. Kids may be getting a very different experience from YouTube than their parents expect, whether it’s extremist or not. I think YouTube has the power to shape people’s ideologies more than people give it credit for.”

Some activists have called on YouTube to ban extreme videos. The company often counters that it is difficult to screen the hundreds of hours of video reportedly uploaded each minute. Even Chaslot said he’s skeptical of the efficacy of bans.

“You can ban again and again, but they’ll change the discourse. They’re very good at staying under the line of acceptable,” he said. He pointed to videos that call for Democratic donor George Soros and other prominent Democrats to be “‘the first lowered to hell.’” “The video explained why they don’t deserve to live, and doesn’t explicitly say to kill them,” so it skirts the rules against violent content.

At the same time “it leads to a kind of terrorist mentality” and shows up in recommendations.

“Wherever you put the line, people will find a way to be on the other side of it,” Chaslot said.

“It’s not a content moderation issue, it’s an algorithm issue.”

Source: How YouTube Built a Radicalization Machine for the Far-Right

Amazon’s Algorithm Has an Anti-Semitism Problem

As we are also seeing in the ongoing Facebook scandals, tech hasn’t managed to include ethical and moral considerations in its business models and algorithms:

In the wake of the Pittsburgh massacre, I invited my followers on Twitter to send me their inquiries about anti-Semitism. The response was overwhelming, and ran the gamut from questions about specific aspects of the prejudice to requests for advice on how to help fight it. One reader asked for more information about the Rothschild family, the Jewish banking dynasty that is a favorite bogeyman of anti-Semites and is typically used as a stand-in for the Jewish conspiracy that purportedly controls world affairs. He explained that some in his circle of friends regularly make bigoted remarks about the Rothschilds and their vast power and he wanted to set them straight. I’d written a report about the Rothschilds in school years ago, but figured there was probably better, more up-to-date material out there. So like anyone else, I went to Amazon.com and plugged in “history of rothschilds.” To my surprise, this is what I got:

[Screenshot: Amazon’s search results for “history of rothschilds”]
Rather than direct me to serious scholarship on the Rothschilds, like historian Niall Ferguson’s multivolume history on the family, Amazon first recommended blatantly bigoted content.

This isn’t the only instance of Amazon’s algorithm feeding intellectually bankrupt content to intellectually curious readers regarding fraught subjects. A search for “who did 9/11” yields this book as the #1 search result:

[Screenshot: Amazon’s top search result for “who did 9/11”]

As the book’s own blurb notes, its author, Nick Kollerstrom, is a “longtime member of Britain’s 9/11 truth group.” Among other conspiracies, the book contains an entire chapter entitled “9/11 and Zion” which blames the attack on the Jews. (Kollerstrom also happens to be a Holocaust denier who infamously declared, “Let us hope the schoolchildren visitors are properly taught about the elegant swimming pool at Auschwitz, built by the inmates, who would sunbathe there on Saturday and Sunday afternoons while watching the water polo matches.”)

Similarly, if one searches for “Jews and the slave trade,” the second, fourth, and fifth results are not scholarship on the subject, but notoriously anti-Semitic publications from Louis Farrakhan’s Nation of Islam. Farrakhan has worked for years to mainstream the baseless anti-Semitic conspiracy theory that Jews were behind the African slave trade.

(All searches above were done logged out from Amazon while incognito on Chrome, to ensure that the search results were the default ones, and not influenced by the specific user or their past search history.)

This isn’t Amazon’s first run-in with anti-Semitism concerns. Back in 2000, the company came under fire for stocking The Protocols of the Elders of Zion, one of the most influential anti-Semitic tracts in history. At the time, while firmly distancing itself from the bigoted content, Amazon insisted that it would not remove it from the catalog, because the company does not censor books. Today, copies of The Protocols on Amazon carry a cautionary message from the Anti-Defamation League and the following disclaimer from the company:

As a bookseller, Amazon strongly believes that providing open access to written speech, no matter how hateful or ugly, is one of the most important things we do. And because we think the best remedy for offensive speech is more speech, we also make available to readers the ability to make their own voices heard and express their views about this and all our titles in reviews and ratings.

It’s a reasonable defense. But it does not cover Amazon’s algorithm prioritizing the bigoted books over the legitimate ones.

The problem here is not that Amazon sells anti-Semitic material. The problem here is not that Amazon is trying to be anti-Semitic. It’s that the company is ignorant of anti-Semitic ideas, and so has not trained its algorithm to discount them. If a human librarian were asked about the Rothschilds, 9/11, or Jews and the slave trade, they would know how to distinguish between conspiratorial rantings and genuine documentation. They would also likely be aware of the anti-Semitic canards swirling around the subjects, and would steer interested readers away from them. Amazon’s vaunted search engine, perfectly tuned to maximize sales and the user’s shopping experience, has no such cultural competency.

Beyond the moral failure, these results also represent a straightforward professional failure. When a person searches for “history of rothschilds,” they are looking for historical information on the family, not a book featuring a shadowy figure squeezing blood out of a globe. A bookselling algorithm that feeds readers misinformation is a broken bookselling algorithm.

If big tech companies like Amazon, Facebook, and Google want to get serious about combating online hate and misinformation, they need to start developing cultural competency on bigotry—and fast. They need not just coding experts working on their algorithms, but anti-hate experts who can flag conspiratorial currents. After all, it’s impossible for computers to identify a prejudice if they don’t know what it looks like. It’s about time we started teaching them.
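One deliberately simplified way to picture that advice in code is a reranker that demotes results matching expert-curated flag lists. The sketch below is hypothetical; the titles, scores, flag terms and penalty are invented, and this is not Amazon’s system.

```python
# Hypothetical reranking sketch: demote search results flagged by anti-hate
# experts (titles, relevance scores and flag terms below are invented).
EXPERT_FLAGS = {"conspiracy", "zionist plot", "truther"}   # curated by experts

results = [
    {"title": "A scholarly history of the Rothschild family", "relevance": 0.82, "flags": set()},
    {"title": "The Rothschild conspiracy exposed",            "relevance": 0.90, "flags": {"conspiracy"}},
]

def rank_score(item, penalty=0.5):
    # Each expert flag halves the item's effective score (an arbitrary choice).
    return item["relevance"] * (penalty ** len(item["flags"]))

for item in sorted(results, key=rank_score, reverse=True):
    print(round(rank_score(item), 2), item["title"])
```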

Source: Amazon’s Algorithm Has an Anti-Semitism Problem

Responsibly deploying AI in the immigration process

Some good practical suggestions. While AI has the potential for greater consistency in decision-making, great care needs to be taken in development, testing and implementation to avoid bias and to identify cases where decisions need to be reviewed:

In April, the federal government sent a request for information to industry to determine where artificial intelligence (AI) could be used in the immigration system for legal research, prediction and trend analysis. The type of AI to be employed here is machine learning: developing algorithms through analysis of wide swaths of data to make predictions within a particular context. The current backlog of immigration applications leaves much room for solutions that could improve the efficiency of case processing, but Canadians should be concerned about the vulnerability of the groups targeted in this pilot project and how the use of these technologies might lead to human rights violations.

An algorithmic mistake that holds up a bank loan is frustrating enough, but in immigration screening a miscalculation could have devastating consequences. The potential for error is especially concerning because of the nature of the two application categories the government has selected for the pilot project: requests for consideration on humanitarian and compassionate grounds, and applications for Pre-Removal Risk Assessment. In the former category of cases, officials consider an applicant’s connections with Canada and the best interests of any children involved. In the latter category, a decision must be made about the danger that would confront the applicant if they were returned to their home country. In some of these cases, assessing whether someone holds political opinions for which they would be persecuted could be a crucial component. Given how challenging it is for current algorithmic methods to extract meaning and intent from human statements, it is unlikely that AI could be trusted to make such a judgment reliably. An error here could lead to someone being sent back to imprisonment or torture.

Moreover, if an inadequately designed algorithm results in decisions that infringe upon rights or amplify discrimination, people in these categories could have less capacity than other applicants to respond with a legal challenge. They may face financial constraints if they’re fleeing a dangerous regime, as well as cultural and language barriers.

An algorithmic mistake that holds up a bank loan is frustrating enough, but in immigration screening a miscalculation could have devastating consequences.

Because of the complexity of these decisions and the stakes involved, the government must think carefully about which parts of the screening process can be automated. Decision-makers need to take extreme care to ensure that machine learning techniques are employed ethically and with respect for human rights. We have several recommendations for how this can be done.

First, we suggest that the federal government take some best practices from the European Union’s General Data Protection Regulation (GDPR). The GDPR has expanded individual rights with regard to the collection and processing of personal data. Article 22 guarantees the right to challenge the automated decisions of algorithms, including the right to have a human review the decision. The Canadian government should consider a similar expansion of rights for individuals whose immigration applications are decided by, or informed by, the use of automated methods. In addition, it must ensure that the vulnerable groups being targeted are able to exercise those rights.

Second, the government must think carefully about what kinds of transparency are needed, for whom, and how greater transparency might create new risks. The immigration process is already complex and opaque, and with added automation, it may become more difficult to verify that these important decisions are being made in fair and thorough ways. The government’s request for information asks for input from industry on ensuring sufficient transparency so that AI decisions can be audited. In the context of immigration screening, we argue that a spectrum of transparency is needed because there are multiple parties with different interests and rights to information.

If the government were to reveal to everyone exactly how these algorithms work, there could be adverse consequences. A fully transparent AI decision process would open doors for people who want to exploit the system, including human traffickers. They could game the algorithm, for example, by observing the keywords and phrases that the AI system flags as markers of acceptability and inserting those words into immigration applications. Job seekers already do something similar, by using keywords strategically to get a resumé in front of human eyes. One possible mechanism for oversight in the case of immigration would be a neutral regulatory body that would be given the full details of how the algorithm operates but would reveal only case-specific details to the applicants and partial details to other relevant stakeholders.
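The gaming risk is easy to see with a toy example. Suppose, hypothetically, that a screening model simply counted “approved” keywords and that those keywords became public; the sketch below shows how an applicant, or a trafficker coaching one, could inflate a score by stuffing them in. The keywords and scoring rule are entirely invented.

```python
# Toy illustration of why publishing a screening model's keywords invites gaming.
# The "approved" keywords and the scoring rule are entirely hypothetical.
APPROVED_KEYWORDS = {"skilled trade", "family ties", "community volunteer"}

def naive_score(application_text):
    text = application_text.lower()
    return sum(1 for kw in APPROVED_KEYWORDS if kw in text)

honest = "Applicant describes employment history and family circumstances."
gamed = ("Applicant is a community volunteer with family ties in Canada "
         "and experience in a skilled trade.")

print(naive_score(honest))  # 0
print(naive_score(gamed))   # 3 -- keyword stuffing inflates the score
```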

Finally, the government needs to get broader input when designing this proposed use of AI. Requesting solutions from industry alone will deliver only part of the story. The government should also draw on expertise from the country’s three leading AI research institutes in Edmonton, Montreal and Toronto, as well as two new ones focused specifically on AI ethics: the University of Toronto’s Ethics of AI Lab and the Montreal AI Ethics Institute. Another group whose input should be included is the immigration applicants themselves. Developers and policy-makers have a responsibility to understand the context for which they are developing solutions. By bringing these perspectives into their design process, they can help bridge empathy gaps. An example of how users’ first-hand knowledge of a process can yield helpful tools is the recently launched chatbot Destin, which was designed by immigrants to help guide applicants through the Canadian immigration process.

The application of AI to immigration screening is promising: applications could be processed faster, with less human bias and at lower cost. But care must be taken with implementation. Canada has been taking a considered and strategic approach to the use of AI, as evidenced by the Pan-Canadian Artificial Intelligence Strategy, a major investment by the federal government that includes a focus on developing global thought leadership on the ethical and societal implications of advances in AI. We encourage the government to continue to pursue this thoughtful approach and an emphasis on human rights to guide the use of AI in immigration.

Source: Responsibly deploying AI in the immigration process

Here’s the Conversation We Really Need to Have About Bias at Google

Ongoing issue of bias in algorithms:

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is real potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for “black girls,” that offered “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” or that returned pictures of black people for searches of “gorilla.”

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

So, let’s have it.

Google’s most important decisions are secret

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store. It is the world’s dominant internet advertising company, and through that business, it also shapes the market for digital news.

Google’s power alone is not damning. The important question is how it manages that power, and what checks we have on it. That’s where critics say it falls down.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there was even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Why did it do that? Google’s explanation was not at all comforting: The phrase “taught himself calculus” is a lot more popular online than “taught herself calculus,” so Google’s computers assumed that it was correct. In other words, a longstanding structural bias in society was replicated on the web, which was reflected in Google’s algorithm, which then hung out live online for who knows how long, unknown to anyone at Google, subtly undermining every female English major who wanted to teach herself calculus.
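The mechanism is easy to reproduce in a few lines: a corrector that simply prefers the more frequent phrase will faithfully echo whatever imbalance exists in its training text. The counts below are invented for illustration, and this is not Google’s actual spelling system.

```python
# Toy frequency-based "correction" (the corpus counts are invented).
phrase_counts = {
    "taught himself calculus": 9200,   # hypothetical web-corpus counts
    "taught herself calculus": 1100,
}

def suggest(query, alternatives):
    # Return whichever candidate is most frequent in the corpus, even if it
    # differs from what the user actually typed.
    return max([query, *alternatives], key=lambda p: phrase_counts.get(p, 0))

print(suggest("taught herself calculus", ["taught himself calculus"]))
# -> "taught himself calculus": the corpus imbalance becomes the "correction".
```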

Eventually, this error was fixed. But how many other such errors are hidden in Google? We have no idea.

Google says it understands these worries, and often addresses them. In 2016, some people noticed that it listed a Holocaust-denial site as a top result for the search “Did the Holocaust happen?” That started a large effort at the company to address hate speech and misinformation online. The effort, Dr. Nayak said, shows that “when we see real-world biases making results worse than they should be, we try to get to the heart of the problem.”

Google has escaped recent scrutiny

Yet it is not just these unintended biases that we should be worried about. Researchers point to other issues: Google’s algorithms favor recency and activity, which is why they are so often vulnerable to being manipulated in favor of misinformation and rumor in the aftermath of major news events. (Google says it is working on addressing misinformation.)

Some of Google’s rivals charge that the company favors its own properties in its search results over those of third-party sites — for instance, how it highlights Google’s local reviews instead of Yelp’s in response to local search queries.

Regulators in Europe have already fined Google for this sort of search bias. In 2012, the F.T.C.’s antitrust investigators found credible evidence of unfair search practices at Google. The F.T.C.’s commissioners, however, voted unanimously against bringing charges. Google denies any wrongdoing.

The danger for Google is that Mr. Trump’s charges, however misinformed, create an opening to discuss these legitimate issues.

On Thursday, Senator Orrin Hatch, Republican of Utah, called for the F.T.C. to reopen its Google investigation. There is likely more to come. For the last few years, Facebook has weathered much of society’s skepticism regarding big tech. Now, it may be Google’s time in the spotlight.

Source: Here’s the Conversation We Really Need to Have About Bias at …

YouTube, the Great Radicalizer – The New York Times

Good article on how social media reinforces echo chambers and tends towards more extreme views:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
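A program of that general shape can be sketched as a breadth-first crawl over a recommendation graph, tallying how often each video is surfaced. This is not Mr. Chaslot’s actual code; `get_recommendations` is a hypothetical helper (for example, a scraper or API wrapper you would have to supply yourself).

```python
# Sketch of tracking which videos a recommender surfaces most often, in the
# spirit of Chaslot's experiment. `get_recommendations(video_id)` is a
# hypothetical helper that returns a list of recommended video IDs.
from collections import Counter, deque

def crawl_recommendations(seed_ids, get_recommendations, depth=3, per_video=5):
    counts = Counter()
    frontier = deque((vid, 0) for vid in seed_ids)
    seen = set(seed_ids)
    while frontier:
        video_id, d = frontier.popleft()
        if d >= depth:
            continue
        for rec in get_recommendations(video_id)[:per_video]:
            counts[rec] += 1                 # tally every recommendation shown
            if rec not in seen:
                seen.add(rec)
                frontier.append((rec, d + 1))
    return counts.most_common(10)            # the most-recommended videos

# Usage sketch: crawl_recommendations(["pro_clinton_seed", "pro_trump_seed"], my_scraper)
```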

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

via YouTube, the Great Radicalizer – The New York Times

Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds

Not surprising that computer programs and their algorithms can incorporate existing biases, as appears to be the case here:

A computer program used to calculate people’s risk of committing crimes is less accurate and more racist than random humans assigned to the same task, a new Dartmouth study finds.

Before they’re sentenced, people who commit crimes in some U.S. states are required to take a 137-question quiz. The questions, which range from queries about a person’s criminal history, to their parents’ substance use, to “do you feel discouraged at times?” are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Using a proprietary algorithm, COMPAS is meant to crunch the numbers on a person’s life, determine their risk for reoffending, and help a judge determine a sentence based on that risk assessment.

Rather than making objective decisions, COMPAS actually plays up racial biases in the criminal justice system, activists allege. And a study released last week from Dartmouth researchers found that random, untrained people on the internet could make more accurate predictions about a person’s criminal future than the expensive software could.

COMPAS is privately held software, and its algorithms are a trade secret. Its conclusions baffle some of the people it evaluates. Take Eric Loomis, a Wisconsin man arrested in 2013, who pled guilty to attempting to flee a police officer, and no contest to driving a vehicle without its owner’s permission.

While neither offense was violent, COMPAS assessed Loomis’s history and reported him as having “a high risk of violence, high risk of recidivism, high pretrial risk.” Loomis was sentenced to six years in prison based on the finding.

COMPAS came to its conclusion through its 137-question quiz, which asks questions about the person’s criminal history, family history, social life, and opinions. The questionnaire does not ask a person’s race. But the questions — including those about parents’ arrest history, neighborhood crime, and a person’s economic stability — appear unfavorably biased against black defendants, who are disproportionately impoverished or incarcerated in the U.S.

A 2016 ProPublica investigation analyzed the software’s results across 7,000 cases in Broward County, Florida, and found that COMPAS often overestimated a person’s risk for committing future crimes. These incorrect assessments nearly doubled among black defendants, who frequently received higher risk ratings than white defendants who had committed more serious crimes.

But COMPAS isn’t just frequently wrong, the new Dartmouth study found: random humans can do a better job, with less information.

The Dartmouth research group hired 462 participants through Mechanical Turk, a crowdsourcing platform. The participants, who had no background or training in criminal justice, were given a brief description of a real criminal’s age and sex, as well as the crime they committed and their previous criminal history. The person’s race was not given.

“Do you think this person will commit another crime within 2 years?” the researchers asked participants.

The untrained group correctly predicted whether a person would commit another crime with 68.2 percent accuracy for black defendants and 67.6 percent accuracy for white defendants. That’s slightly better than COMPAS, which reports 64.9 percent accuracy for black defendants and 65.7 percent accuracy for white defendants.
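Per-group accuracy figures like these are straightforward to compute once you have predictions and outcomes side by side. A minimal sketch, using invented records rather than the Dartmouth or ProPublica data:

```python
# How per-group accuracy figures like those above are computed (toy data).
def accuracy_by_group(records):
    """records: list of dicts with 'group', 'predicted' and 'reoffended' keys."""
    stats = {}
    for r in records:
        correct, total = stats.get(r["group"], (0, 0))
        stats[r["group"]] = (correct + (r["predicted"] == r["reoffended"]), total + 1)
    return {g: round(c / t, 3) for g, (c, t) in stats.items()}

# Invented example records, not the real study data:
records = [
    {"group": "black", "predicted": True,  "reoffended": True},
    {"group": "black", "predicted": True,  "reoffended": False},
    {"group": "white", "predicted": False, "reoffended": False},
    {"group": "white", "predicted": True,  "reoffended": True},
]
print(accuracy_by_group(records))   # e.g. {'black': 0.5, 'white': 1.0}
```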

In a statement, COMPAS’s parent company Equivant argued that the Dartmouth findings were actually good.

“Instead of being a criticism of the COMPAS assessment, [the study] actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability and matches the increasingly accepted AUC standard of 0.70 for well-designed risk assessment tools used in criminal justice,” Equivant said in the statement.

What it didn’t add was that the humans who had slightly outperformed COMPAS were untrained — whereas COMPAS is a massively expensive and secretive program.

In 2015, Wisconsin signed a contract with COMPAS for $1,765,334, documents obtained by the Electronic Privacy Information Center reveal. The largest chunk of the cash — $776,475 — went to licensing and maintenance fees for the software company. By contrast, the Dartmouth researchers paid each study participant $1 for completing the task, and a $5 bonus if they answered correctly more than 65 percent of the time.

And for all that money, defendants still aren’t sure COMPAS is doing its job.

After COMPAS helped sentence him to six years in prison, Loomis attempted to overturn the ruling, claiming the ruling by algorithm violated his right to due process. The secretive nature of the software meant it could not be trusted, he claimed.

His bid failed last summer when the U.S. Supreme Court refused to take up his case, allowing the COMPAS-based sentence to remain.

Instead of throwing himself at the mercy of the court, Loomis was at the mercy of the machine.

He might have had better luck at the hands of random internet users.

Source: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds

Diversity must be the driver of artificial intelligence: Kriti Sharma

Agree. Those creating the algorithms and related technology need to be both more diverse and more mindful of the assumptions baked into their analysis and work:

The question of what to do about biases and inequalities in the technology industry is not a new one. The number of women working in science, technology, engineering and mathematics (STEM) fields has always been disproportionately lower than the number of men. What may be more perplexing is why it is getting worse.

It’s 2017, and yet, according to a review by the American Association of University Women (AAUW) of more than 380 studies from academic journals, corporations and government sources, there is a major employment gap for women in computing and engineering.

North America, as home to leading centres of innovation and technology, is one of the worst offenders. A report from the Equal Employment Opportunity Commission (EEOC) found “the high-tech industry employed far fewer African-Americans, Hispanics, and women, relative to Caucasians, Asian-Americans, and men.”

However, as an executive working on the front line of technology, focusing specifically on artificial intelligence (AI), I’m one of many hoping to turn the tables.

This issue isn’t only confined to new product innovation. It’s also apparent in other aspects of the technology ecosystem – including venture capital. As The Globe highlighted, Ontario-based MaRS Data Catalyst published research on women’s participation in venture capital and found that “only 12.5 per cent of investment roles at VC firms were held by women. It could find just eight women who were partners in those firms, compared with 93 male partners.”

The Canadian government, for its part, is trying to address this issue head on and at all levels. Two years ago, Prime Minister Justin Trudeau campaigned on, and then fulfilled, the promise of having a cabinet with an equal ratio of women to men – a first in Canada’s history. When asked about the outcome from this decision at the recent Fortune Most Powerful Women Summit, he said, “It has led to a better level of decision-making than we could ever have imagined.”

Despite this push, disparities are still apparent in developed countries like Canada, where "women earn 11 per cent less than men in comparable positions within a year of completing a PhD in science, technology, engineering or mathematics, according to an analysis of 1,200 U.S. grads."

AI is the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, when you use Alexa or Siri, or your iPhone predicts your next word in a text message – that’s AI in action.

Many in the industry, myself included, strongly believe that AI should reflect the diversity of its users, and are working to minimize biases found in AI solutions. This should drive more impartial human interactions with technology (and with each other) to combat things like bias in the workplace.

The democratization of technology we are experiencing with AI is great. It’s helping to reduce time-to-market, it’s deepening the talent pool, and it’s helping businesses of all sizes cost-effectively gain access to the most modern technology. The challenge is that only a handful of large organizations are currently developing the AI fundamentals that all businesses can use. Considering this, we must take a step back and ensure the work happening is ethical.

AI is like a great big mirror. It reflects what it sees. And currently, the groups designing AI are not as diverse as we need them to be. While AI has the potential to bring everyone services that are currently available only to some, we need to make sure we’re moving ahead in a way that reflects our purpose: to achieve diversity and equality. AI is greatly influenced by human design choices, so we must be aware of the humans behind the technology who are curating it.

At a point when AI is poised to revolutionize our lives, the tech community has a responsibility to develop AI that is accountable and fit for purpose. For this reason, Sage created Five Core Principles for developing AI for business.

At the end of the day, AI’s biggest problem is a social one – not a technology one. But through diversity in its creation, AI will enable better-informed conversations between businesses and their customers.

If we can train humans to treat software better, hopefully, this will drive humans to treat humans better.

via Diversity must be the driver of artificial intelligence – The Globe and Mail

Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Interesting and revealing thread regarding some of the complexities involved and the degree of awareness of the issues:

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms to determine everything from what you see and don’t see in News Feed, to finding and removing other content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.

  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.

  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.

  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.

  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.

  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML

  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!

  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos

  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

  11. And to be careful of their own biases when making leaps of judgment between facts.

  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased

  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.

  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.

  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad

  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.

  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.

  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. 

Source: Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode
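Tweets 6 through 8 compress a real methodological point: a machine-learning classifier does not learn what is true, it learns whatever pattern best separates the labels it was trained on, so a biased labeller produces a biased "fake news" detector. Here is a minimal sketch of that feedback loop, using scikit-learn and invented headlines and labels (nothing below comes from Facebook or any real dataset):

```python
# Minimal sketch of how label bias propagates: the "fake" labels below were
# (hypothetically) assigned by a rater who distrusts one topic, so the model
# learns the topic, not the truthfulness. All examples are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "pipeline project approved after review",    # labeled real
    "city opens new transit line",               # labeled real
    "study links pipeline to spill risk",        # labeled fake by a biased rater
    "activists say pipeline review was rushed",  # labeled fake by a biased rater
]
labels = ["real", "real", "fake", "fake"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, labels)

# A perfectly factual headline that happens to mention the distrusted topic:
print(model.predict(["regulator publishes pipeline safety report"]))
# The model tends to call this "fake" because "pipeline" co-occurred with the
# biased labels during training, not because it checked any facts.
```

Swap in a rater with a different worldview and the same pipeline learns a different "truth", which is exactly the Ministry-of-Truth risk Stamos is gesturing at.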

Trudeau devalues citizenship: Gordon Chong

Over-the-top criticism and fear-mongering by Gordon Chong:

When Paul Martin Sr. introduced the bill in the House of Commons that became the Canadian Citizenship Act on Jan. 1, 1947, he said: “For the national unity of Canada and for the future and greatness of the country, it is felt to be of the utmost importance that all of us, new Canadians or old, … have a consciousness of a common purpose and common interests as Canadians, that all of us are able to say with pride and with meaning ‘I am a Canadian citizen.’”

Despite new acts in 1977 and 2002, as well as more recent legislation, those foundational words should be forever etched in our minds.

Subsequent revisions have vacillated between weakening and strengthening the requirements for granting citizenship.

The Harper Conservatives strengthened the value of Canadian citizenship in 2014 by increasing residency and language requirements with Bill C-24, the Strengthening Canadian Citizenship Act.

Applicants aged 14 to 64 were required to meet language and knowledge tests.

Permanent residents also had to have lived in Canada for four of the six years prior to applying for citizenship.

The Liberals’ Bill C-6, an Act to Amend the Citizenship Act, proposes to reduce the knowledge and language requirements (so that they affect only applicants aged 18 to 54) and to reduce the residency requirement to three of the previous five years.

Bill C-6 also proposes to repeal the power to revoke the Canadian citizenship of criminals such as those convicted of terrorism.

As a citizenship court judge for several years in the ’90s, I can assure doubters that acquiring citizenship was relatively easy, especially for seniors over 65 with a translator.

Skilled professional translators have difficulty capturing the nuances between languages. It is not uncommon, for example, to see significant errors and omissions in the Chinese-language media when reporters rush to meet deadlines.

Obviously, without a comprehensive grasp of English, it is impossible to meaningfully participate in Canadian life.

Meanwhile, our federal government is frivolously throwing open our doors to potential terrorists and providing fertile conditions for the cultivation of home-grown terrorists by indirectly subsidizing the self-segregation and ghettoization of newcomers, further balkanizing Canada.

The cavalier Trudeau Liberals, peddling their snake oil political potions, are nothing more than pale, itinerant imitations of the Liberal giants of Canada’s past, shamefully repudiating their predecessors for immediate, short-term gratification.

These privileged high-flying Liberal salesmen with colossal carbon footprints should be summarily fired, solely for seriously devaluing Canadian citizenship!

Source: Trudeau devalues citizenship | CHONG | Columnists | Opinion | Toronto Sun