Here’s the Conversation We Really Need to Have About Bias at Google

Ongoing issue of bias in algorithms:

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is real potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for “black girls,” that offered “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” or that returned pictures of black people for searches of “gorilla.”

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

So, let’s have it.

Google’s most important decisions are secret

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store. It is the world’s dominant internet advertising company, and through that business, it also shapes the market for digital news.

Google’s power alone is not damning. The important question is how it manages that power, and what checks we have on it. That’s where critics say it falls down.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there were even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, as in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Why did it do that? Google’s explanation was not at all comforting: The phrase “taught himself calculus” is a lot more popular online than “taught herself calculus,” so Google’s computers assumed that it was correct. In other words, a longstanding structural bias in society was replicated on the web, which was reflected in Google’s algorithm, which then hung out live online for who knows how long, unknown to anyone at Google, subtly undermining every female English major who wanted to teach herself calculus.

Eventually, this error was fixed. But how many other such errors are hidden in Google? We have no idea.
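
To make the mechanism concrete, here is a minimal Python sketch of a purely frequency-driven “did you mean” feature. It is not Google’s spelling system, and the phrase counts are invented; the point is only that a corpus skew, by itself, is enough to produce the gendered suggestion described above.

```python
# A minimal sketch (not Google's spelling system) of frequency-driven
# correction. The phrase counts below are invented for illustration.

web_phrase_counts = {
    "taught himself calculus": 120_000,   # hypothetical corpus frequency
    "taught herself calculus": 4_000,     # hypothetical corpus frequency
}

def suggest_correction(query):
    """Suggest a near-identical phrase that is more common in the corpus."""
    for candidate, count in web_phrase_counts.items():
        differs_by_one_word = len(set(candidate.split()) ^ set(query.split())) == 2
        if candidate != query and differs_by_one_word and count > web_phrase_counts.get(query, 0):
            return candidate
    return None

print(suggest_correction("taught herself calculus"))
# -> "taught himself calculus": the corpus skew, not any rule about gender,
#    drives the suggestion.
```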

Google says it understands these worries, and often addresses them. In 2016, some people noticed that it listed a Holocaust-denial site as a top result for the search “Did the Holocaust happen?” That started a large effort at the company to address hate speech and misinformation online. The effort, Dr. Nayak said, shows that “when we see real-world biases making results worse than they should be, we try to get to the heart of the problem.”

Google has escaped recent scrutiny

Yet it is not just these unintended biases that we should be worried about. Researchers point to other issues: Google’s algorithms favor recency and activity, which is why they are so often vulnerable to being manipulated in favor of misinformation and rumor in the aftermath of major news events. (Google says it is working on addressing misinformation.)

Some of Google’s rivals charge that the company favors its own properties in its search results over those of third-party sites — for instance, how it highlights Google’s local reviews instead of Yelp’s in response to local search queries.

Regulators in Europe have already fined Google for this sort of search bias. In 2012, the F.T.C.’s antitrust investigators found credible evidence of unfair search practices at Google. The F.T.C.’s commissioners, however, voted unanimously against bringing charges. Google denies any wrongdoing.

The danger for Google is that Mr. Trump’s charges, however misinformed, create an opening to discuss these legitimate issues.

On Thursday, Senator Orrin Hatch, Republican of Utah, called for the F.T.C. to reopen its Google investigation. There is likely more to come. For the last few years, Facebook has weathered much of society’s skepticism regarding big tech. Now, it may be Google’s time in the spotlight.

Source: Here’s the Conversation We Really Need to Have About Bias at …

YouTube, the Great Radicalizer – The New York Times

Good article on how social media reinforces echo chambers and tends towards more extreme views:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
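
As a rough illustration of that dynamic, here is a toy Python sketch of engagement-driven ranking. It is not YouTube’s recommender, and the titles and predicted watch times are invented; it only shows how optimizing a single attention metric can, as a side effect, keep pushing the most extreme items to the top.

```python
# A toy sketch of engagement-driven ranking (not YouTube's actual algorithm).
# Each candidate carries a hypothetical predicted watch time; the ranker
# simply sorts by it. If sensational content tends to hold attention longer,
# optimizing watch time alone will keep pushing it up the list.

candidates = [
    {"title": "Mainstream campaign report", "pred_watch_min": 3.1},
    {"title": "Heated partisan commentary", "pred_watch_min": 6.4},
    {"title": "Conspiracy 'deep dive'",     "pred_watch_min": 9.8},
]

def rank_for_watch_time(videos):
    """Order recommendations purely by predicted watch time."""
    return sorted(videos, key=lambda v: v["pred_watch_min"], reverse=True)

for v in rank_for_watch_time(candidates):
    print(f'{v["pred_watch_min"]:>4.1f} min  {v["title"]}')
# The objective never mentions extremism; the drift is a side effect of
# maximizing a single engagement metric.
```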

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

via YouTube, the Great Radicalizer – The New York Times

Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds

Not surprising that computer programs and their algorithms can incorporate existing biases, as appears to be the case here:

A computer program used to calculate people’s risk of committing crimes is less accurate and more racist than random humans assigned to the same task, a new Dartmouth study finds.

Before they’re sentenced, people who commit crimes in some U.S. states are required to take a 137-question quiz. The questions, which range from queries about a person’s criminal history, to their parents’ substance use, to “do you feel discouraged at times?” are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Using a proprietary algorithm, COMPAS is meant to crunch the numbers on a person’s life, determine their risk for reoffending, and help a judge determine a sentence based on that risk assessment.
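
For illustration only, here is a sketch of the general shape of such a questionnaire-based score. COMPAS’s actual formula and weights are a trade secret, so everything below is invented; the point is that proxy questions can carry a racial signal even though race is never asked.

```python
# Purely illustrative sketch of a questionnaire-based risk score.
# COMPAS's real formula is proprietary; the features and weights below are
# invented to show the general shape of such a model, not its contents.

FEATURE_WEIGHTS = {                  # hypothetical weights
    "prior_arrests":        0.30,
    "parent_ever_arrested": 0.20,
    "neighborhood_crime":   0.25,    # proxy features like these can correlate
    "feels_discouraged":    0.10,    # strongly with race even though race
    "unstable_employment":  0.15,    # itself is never asked
}

def risk_score(answers):
    """Weighted sum of questionnaire answers (each normalized to 0-1), scaled to 0-10."""
    raw = sum(FEATURE_WEIGHTS[k] * answers.get(k, 0) for k in FEATURE_WEIGHTS)
    return round(10 * raw / sum(FEATURE_WEIGHTS.values()), 1)

print(risk_score({"prior_arrests": 1, "neighborhood_crime": 1}))  # 5.5
```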

Rather than making objective decisions, COMPAS actually plays up racial biases in the criminal justice system, activists allege. And a study released last week from Dartmouth researchers found that random, untrained people on the internet could make more accurate predictions about a person’s criminal future than the expensive software could.

Because COMPAS is privately held software, its algorithms are a trade secret. Its conclusions baffle some of the people it evaluates. Take Eric Loomis, a Wisconsin man arrested in 2013, who pled guilty to attempting to flee a police officer, and no contest to driving a vehicle without its owner’s permission.

While neither offense was violent, COMPAS assessed Loomis’s history and reported him as having “a high risk of violence, high risk of recidivism, high pretrial risk.” Loomis was sentenced to six years in prison based on the finding.

COMPAS came to its conclusion through its 137-question quiz, which asks questions about the person’s criminal history, family history, social life, and opinions. The questionnaire does not ask a person’s race. But the questions — including those about parents’ arrest history, neighborhood crime, and a person’s economic stability — appear unfavorably biased against black defendants, who are disproportionately impoverished or incarcerated in the U.S.

A 2016 ProPublica investigation analyzed the software’s results across 7,000 cases in Broward County, Florida, and found that COMPAS often overestimated a person’s risk for committing future crimes. These incorrect assessments nearly doubled among black defendants, who frequently received higher risk ratings than white defendants who had committed more serious crimes.

But COMPAS isn’t just frequently wrong, the new Dartmouth study found: random humans can do a better job, with less information.

The Dartmouth research group hired 462 participants through Mechanical Turk, a crowdsourcing platform. The participants, who had no background or training in criminal justice, were given a brief description of a real criminal’s age and sex, as well as the crime they committed and their previous criminal history. The person’s race was not given.

“Do you think this person will commit another crime within 2 years?” the researchers asked participants.

The untrained group correctly predicted whether a person would commit another crime with 68.2 percent accuracy for black defendants and 67.6 percent accuracy for white defendants. That’s slightly better than COMPAS, which reports 64.9 percent accuracy for black defendants and 65.7 percent accuracy for white defendants.

In a statement, COMPAS’s parent company Equivant argued that the Dartmouth findings were actually good.

“Instead of being a criticism of the COMPAS assessment, [the study] actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability and matches the increasingly accepted AUC standard of 0.70 for well-designed risk assessment tools used in criminal justice,” Equivant said in the statement.

What it didn’t add was that the humans who had slightly outperformed COMPAS were untrained — whereas COMPAS is a massively expensive and secretive program.
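
For readers unfamiliar with the AUC figure cited in that statement, here is a minimal sketch computed on invented scores and outcomes. AUC is the probability that a randomly chosen person who did reoffend is scored higher than one who did not; it says nothing about how errors are distributed across racial groups, which is exactly the issue ProPublica raised.

```python
# A minimal sketch of the AUC metric cited in the company's statement,
# computed on invented data.

from sklearn.metrics import roc_auc_score

reoffended = [1, 1, 1, 0, 0, 0, 0, 1]                  # 1 = reoffended within two years
scores     = [0.8, 0.6, 0.4, 0.3, 0.5, 0.2, 0.1, 0.7]  # model's risk scores

print(round(roc_auc_score(reoffended, scores), 2))
# An AUC of 0.70 means the tool ranks a true reoffender above a
# non-reoffender about 70 percent of the time; it is silent on whether
# false positives fall more heavily on one racial group.
```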

In 2015, Wisconsin signed a contract with COMPAS for $1,765,334, documents obtained by the Electronic Privacy Information Center reveal. The largest chunk of the cash — $776,475 — went to licensing and maintenance fees for the software company. By contrast, the Dartmouth researchers paid each study participant $1 for completing the task, and a $5 bonus if they answered correctly more than 65 percent of the time.

And for all that money, defendants still aren’t sure COMPAS is doing its job.

After COMPAS helped sentence him to six years in prison, Loomis attempted to overturn the ruling, claiming the ruling by algorithm violated his right to due process. The secretive nature of the software meant it could not be trusted, he claimed.

His bid failed last summer when the U.S. Supreme Court refused to take up his case, allowing the COMPAS-based sentence to remain.

Instead of throwing himself at the mercy of the court, Loomis was at the mercy of the machine.

He might have had better luck at the hands of random internet users.

Source: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds

Diversity must be the driver of artificial intelligence: Kriti Sharma

Agree. Those creating the algorithms and related technology need to be both more diverse and more mindful of the assumptions baked into their analysis and work:

The question over what to do about biases and inequalities in the technology industry is not a new one. The number of women working in science, technology, engineering and mathematics (STEM) fields has always been disproportionately lower than the number of men. What may be more perplexing is why it is getting worse.

It’s 2017, and yet, according to a review by the American Association of University Women (AAUW) of more than 380 studies from academic journals, corporations and government sources, there is a major employment gap for women in computing and engineering.

North America, as home to leading centres of innovation and technology, is one of the worst offenders. A report from the Equal Employment Opportunity Commission (EEOC) found “the high-tech industry employed far fewer African-Americans, Hispanics, and women, relative to Caucasians, Asian-Americans, and men.”

However, as an executive working on the front line of technology, focusing specifically on artificial intelligence (AI), I’m one of many hoping to turn the tables.

This issue isn’t only confined to new product innovation. It’s also apparent in other aspects of the technology ecosystem – including venture capital. As The Globe highlighted, Ontario-based MaRS Data Catalyst published research on women’s participation in venture capital and found that “only 12.5 per cent of investment roles at VC firms were held by women. It could find just eight women who were partners in those firms, compared with 93 male partners.”

The Canadian government, for its part, is trying to address this issue head on and at all levels. Two years ago, Prime Minister Justin Trudeau campaigned on, and then fulfilled, the promise of having a cabinet with an equal ratio of women to men – a first in Canada’s history. When asked about the outcome from this decision at the recent Fortune Most Powerful Women Summit, he said, “It has led to a better level of decision-making than we could ever have imagined.”

Despite this push, disparities are still apparent in developed countries like Canada, where “women earn 11 per cent less than men in comparable positions within a year of completing a PhD in a science, technology, engineering or mathematics field, according to an analysis of 1,200 U.S. grads.”

AI is the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, when you use Alexa or Siri, or your iPhone predicts your next word in a text message – that’s AI in action.

Many in the industry, myself included, strongly believe that AI should reflect the diversity of its users, and are working to minimize biases found in AI solutions. This should drive more impartial human interactions with technology (and with each other) to combat things like bias in the workplace.

The democratization of technology we are experiencing with AI is great. It’s helping to reduce time-to-market, it’s deepening the talent pool, and it’s helping businesses of all sizes cost-effectively gain access to the most modern of technology. The challenge is that only a few large organizations are currently developing the AI fundamentals that all businesses can use. Considering this, we must take a step back and ensure the work happening is ethical.

AI is like a great big mirror. It reflects what it sees. And currently, the groups designing AI are not as diverse as we need them to be. While AI has the potential to bring services to everyone that are currently only available to some, we need to make sure we’re moving ahead in a way that reflects our purpose – to achieve diversity and equality. AI can be greatly influenced by human-designed choices, so we must be aware of the humans behind the technology curating it.

At a point when AI is poised to revolutionize our lives, the tech community has a responsibility to develop AI that is accountable and fit for purpose. For this reason, Sage created Five Core Principles for developing AI for business.

At the end of the day, AI’s biggest problem is a social one – not a technology one. But through diversity in its creation, AI will enable better-informed conversations between businesses and their customers.

If we can train humans to treat software better, hopefully, this will drive humans to treat humans better.

via Diversity must be the driver of artificial intelligence – The Globe and Mail

Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Interesting and revealing thread regarding some of the complexities involved and the degree of awareness of the issues:

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms to determine everything from what you see and don’t see in News Feed, to finding and removing other content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.

  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.

  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.

  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.

  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.

  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML

  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!

  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos

  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

  11. And to be careful of their own biases when making leaps of judgment between facts.

  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased

  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.

  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.

  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad

  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.

  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.

  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. 

Source: Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms – Recode

Trudeau devalues citizenship: Gordon Chong

Over the top criticism and fear-mongering by Gordon Chong:

When Paul Martin Sr. introduced the bill in the House of Commons that became the Canadian Citizenship Act on Jan. 1, 1947, he said: “For the national unity of Canada and for the future and greatness of the country, it is felt to be of the utmost importance that all of us, new Canadians or old, … have a consciousness of a common purpose and common interests as Canadians, that all of us are able to say with pride and with meaning ‘I am a Canadian citizen.’”

Despite new acts in 1977 and 2002, as well as more recent legislation, those foundational words should be forever etched in our minds.

Subsequent revisions have vacillated between weakening and strengthening the requirements for granting citizenship.

The Harper Conservatives strengthened the value of Canadian citizenship in 2014 by increasing residency and language requirements with Bill C-24, the Strengthening Canadian Citizenship Act.

Applicants aged 14 to 64 were required to meet language and knowledge tests.

Permanent residents also had to have lived in Canada for four of the six years prior to applying for citizenship.

The Liberals’ Bill C-6, an Act to Amend the Citizenship Act, proposes to reduce knowledge and language requirements (so that they apply only to applicants aged 18 to 54) and reduce residency requirements to three of the previous five years.

Bill C-6 also proposes to repeal the right to revoke Canadian citizenship of criminals such as those convicted of terrorism.

As a citizenship court judge for several years in the ’90s, I can assure doubters that acquiring citizenship was relatively easy, especially for seniors over 65 with a translator.

Skilled professional translators have difficulty capturing the nuances between languages. It is not uncommon, for example, to see significant errors and omissions in the Chinese-language media when reporters rush to meet deadlines.

Obviously, without a comprehensive grasp of English, it is impossible to meaningfully participate in Canadian life.

Meanwhile, our federal government is frivolously throwing open our doors to potential terrorists and providing fertile conditions for the cultivation of home-grown terrorists by indirectly subsidizing the self-segregation and ghettoization of newcomers, further balkanizing Canada.

The cavalier Trudeau Liberals, peddling their snake oil political potions, are nothing more than pale, itinerant imitations of the Liberal giants of Canada’s past, shamefully repudiating their predecessors for immediate, short-term gratification.

These privileged high-flying Liberal salesmen with colossal carbon footprints should be summarily fired, solely for seriously devaluing Canadian citizenship!

Source: Trudeau devalues citizenship | CHONG | Columnists | Opinion | Toronto Sun

Artificial Intelligence’s White Guy Problem – The New York Times

Valid concerns regarding who designs the algorithms and how to eliminate or at least minimize bias.

Perhaps the algorithms and the people who write them should take the Implicit Association Test?

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

Take a small example from last year: Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.

But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
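
A toy experiment makes the point. The sketch below uses invented synthetic data rather than any real system: one classifier is trained on data dominated by group A, with group B barely represented and following a slightly different pattern, and accuracy is then measured on each group separately.

```python
# A toy demonstration on invented, synthetic data (no real vendor's system)
# of how a training set skewed toward one group yields worse accuracy for
# the group that is barely represented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic features for one group; the label rule depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > 5 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(60, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

for name, shift in [("A", 0.0), ("B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, round(model.score(X_test, y_test), 2))
# Accuracy is typically markedly lower for the underrepresented group B,
# because the single learned decision boundary fits group A's pattern.
```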

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Police departments across the United States are also deploying data-driven risk-assessment tools in “predictive policing” crime prevention efforts. In many cities, including New York, Los Angeles, Chicago and Miami, software analyses of large sets of historical crime data are used to forecast where crime hot spots are most likely to emerge; the police are then directed to those areas.

At the very least, this software risks perpetuating an already vicious cycle, in which the police increase their presence in the same places they are already policing (or overpolicing), thus ensuring that more arrests come from those areas. In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less. Predictive programs are only as good as the data they are trained on, and that data has a complex history.

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems. Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.

Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

While machine-learning technology can offer unexpected insights and new forms of convenience, we must address the current implications for communities that have less power, for those who aren’t dominant in elite Silicon Valley circles.

Currently the loudest voices debating the potential dangers of superintelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator.

But for those who already face marginalization or bias, the threats are here.

Source: Artificial Intelligence’s White Guy Problem – The New York Times

Facebook’s Bias Is Built-In, and Bears Watching – The New York Times

One of the more perceptive articles I have seen on the recent Facebook controversy and the overall issues regarding the lack of neutrality in algorithms:

The question isn’t whether Facebook has outsize power to shape the world — of course it does, and of course you should worry about that power. If it wanted to, Facebook could try to sway elections, favor certain policies, or just make you feel a certain way about the world, as it once proved it could do in an experiment devised to measure how emotions spread online.

There is no evidence Facebook is doing anything so alarming now. The danger is nevertheless real. The biggest worry is that Facebook doesn’t seem to recognize its own power, and doesn’t think of itself as a news organization with a well-developed sense of institutional ethics and responsibility, or even a potential for bias. Neither does its audience, which might believe that Facebook is immune to bias because it is run by computers.

That myth should die. It’s true that beyond the Trending box, most of the stories Facebook presents to you are selected by its algorithms, but those algorithms are as infused with bias as any other human editorial decision.

“Algorithms equal editors,” said Robyn Caplan, a research analyst at Data & Society, a research group that studies digital communications systems. “With Facebook, humans are never not involved. Humans are in every step of the process — in terms of what we’re clicking on, who’s shifting the algorithms behind the scenes, what kind of user testing is being done, and the initial training data provided by humans.”

Everything you see on Facebook is therefore the product of these people’s expertise and considered judgment, as well as their conscious and unconscious biases, quite apart from any possible malfeasance or corruption. It’s often hard to know which, because Facebook’s editorial sensibilities are secret. So are its personalities: Most of the engineers, designers and others who decide what people see on Facebook will remain forever unknown to its audience.

Facebook also has an unmistakable corporate ethos and point of view. The company is staffed mostly by wealthy coastal Americans who tend to support Democrats, and it is wholly controlled by a young billionaire who has expressed policy preferences that many people find objectionable. Mr. Zuckerberg is for free trade, more open immigration and for a certain controversial brand of education reform. Instead of “building walls,” he supports a “connected world and a global community.”

You could argue that none of this is unusual. Many large media outlets are powerful, somewhat opaque, operated for profit, and controlled by wealthy people who aren’t shy about their policy agendas — Bloomberg News, The Washington Post, Fox News and The New York Times, to name a few.

But there are some reasons to be even more wary of Facebook’s bias. One is institutional. Many mainstream outlets have a rigorous set of rules and norms about what’s acceptable and what’s not in the news business.

“The New York Times contains within it a long history of ethics and the role that media is supposed to be playing in democracies and the public,” Ms. Caplan said. “These technology companies have not been engaged in that conversation.”

According to a statement from Tom Stocky, who is in charge of the trending topics list, Facebook has policies “for the review team to ensure consistency and neutrality” of the items that appear in the trending list.

But Facebook declined to discuss whether any editorial guidelines governed its algorithms, including the system that determines what people see in News Feed. Those algorithms could have profound implications for society. For instance, one persistent worry about algorithmically selected news is that it might reinforce people’s previously held points of view. If News Feed shows news that we’re each likely to Like, it could trap us into echo chambers and contribute to rising political polarization. In a study last year, Facebook’s scientists asserted the echo chamber effect was muted.
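
A stylized simulation shows how that trap can form. The sketch below is not Facebook’s News Feed; the affinities, the feedback step and the noise level are all invented, and it only illustrates how ranking by predicted engagement can compound a small initial leaning.

```python
# A stylized sketch (not Facebook's News Feed) of ranking by predicted
# engagement. The starting affinities, feedback step and noise level are
# all invented for illustration.

import random
random.seed(1)

affinity = {"left": 0.55, "right": 0.45}        # hypothetical starting taste

def predicted_engagement(viewpoint):
    """Noisy estimate of how likely the user is to Like a story."""
    return affinity[viewpoint] + random.gauss(0, 0.15)

for round_ in range(5):
    inventory = ["left", "right"] * 50           # the available pool stays balanced
    feed = sorted(inventory, key=predicted_engagement, reverse=True)[:10]
    for story in feed:                           # engagement feeds back into taste
        affinity[story] += 0.02
    total = sum(affinity.values())
    affinity = {k: v / total for k, v in affinity.items()}
    print(f"round {round_}: {feed.count('left')}/10 'left' stories shown")
# The tilt typically grows round over round even though the pool of
# available stories never changes.
```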

But when Facebook changes its algorithm — which it does routinely — does it have guidelines to make sure the changes aren’t furthering an echo chamber? Or that the changes aren’t inadvertently favoring one candidate or ideology over another? In other words, are Facebook’s engineering decisions subject to ethical review? Nobody knows.

Source: Facebook’s Bias Is Built-In, and Bears Watching – The New York Times

Using Algorithms to Determine Character – The New York Times

Good piece on the increasing use of algorithms in granting loans and in the workplace, and the potential for (and limits to) reducing bias:

Mr. Merrill, who also has a Ph.D. in psychology (from Princeton, in case Mr. Gu wants to lend him money), thinks that data-driven analysis of personality is ultimately fairer than standard measures.

“We’re always judging people in all sorts of ways, but without data we do it with a selection bias,” he said. “We base it on stuff we know about people, but that usually means favoring people who are most like ourselves.” Familiarity is a crude form of risk management, since we know what to expect. But that doesn’t make it fair.

Character (though it is usually called something more neutral-sounding) is now judged by many other algorithms. Workday, a company offering cloud-based personnel software, has released a product that looks at 45 employee performance factors, including how long a person has held a position and how well the person has done. It predicts whether a person is likely to quit and suggests appropriate things, like a new job or a transfer, that could make this kind of person stay.

It also characterizes managers as “rainmakers” or “terminators,” depending on how well they hold talent. Inside Workday, the company has analyzed its own sales force to see what makes for success. The top indicator is tenacity.

“We all have biases about how we hire and promote,” said Dan Beck, Workday’s head of technology strategy. “If you can leverage data to overcome that, great.”

People studying these traits will be encouraged to adopt them, he said, since “if you know there is a pattern of success, why wouldn’t you adopt it?”

In a sense, it’s no different from the way people read the biographies of high achievers, looking for clues for what they need to do differently to succeed. It’s just at a much larger scale, based on observing everybody.

There are reasons to think that data-based character judgments are more reasonable. Jure Leskovec, a professor of computer science at Stanford, is finishing up a study comparing the predictions of data analysis against those of judges at bail hearings, who have just a few minutes to size up prisoners and decide if they could be risks to society. Early results indicate that data-driven analysis is 30 percent better at predicting crime, Mr. Leskovec said.

“Algorithms aren’t subjective,” he said. “Bias comes from people.”

That is only true to a point: Algorithms do not fall from the sky. Algorithms are written by human beings. Even if the facts aren’t biased, design can be, and we could end up with a flawed belief that math is always truth.

Upstart’s Mr. Gu, who said he had perfect SAT scores but dropped out of Yale, wouldn’t have qualified for an Upstart loan using his own initial algorithms. He has since changed the design, and he said he is aware of the responsibility of the work ahead.

“Every time we find a signal, we have to ask ourselves, ‘Would we feel comfortable telling someone this was why they were rejected?’ ” he said.

Using Algorithms to Determine Character – The New York Times.

When Algorithms Discriminate – The New York Times

Given that people have biases, not surprising that the algorithms created reflect some of these biases:

Algorithms, which are a series of instructions written by programmers, are often described as a black box; it is hard to know why websites produce certain results. Often, algorithms and online results simply reflect people’s attitudes and behavior. Machine learning algorithms learn and evolve based on what people do online. The autocomplete feature on Google and Bing is an example. A recent Google search for “Are transgender,” for instance, suggested, “Are transgenders going to hell.”

“Even if they are not designed with the intent of discriminating against those groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination,” said David Oppenheimer, who teaches discrimination law at the University of California, Berkeley.

But there are laws that prohibit discrimination against certain groups, despite any biases people might have. Take the example of Google ads for high-paying jobs showing up for men and not women. Targeting ads is legal. Discriminating on the basis of gender is not.

The Carnegie Mellon researchers who did that study built a tool to simulate Google users that started with no search history and then visited employment websites. Later, on a third-party news site, Google showed an ad for a career coaching service advertising “$200k+” executive positions 1,852 times to men and 318 times to women.

The reason for the difference is unclear. It could have been that the advertiser requested that the ads be targeted toward men, or that the algorithm determined that men were more likely to click on the ads.

Google declined to say how the ad showed up, but said in a statement, “Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed.”

Anupam Datta, one of the researchers, said, “Given the big gender pay gap we’ve had between males and females, this type of targeting helps to perpetuate it.”

It would be impossible for humans to oversee every decision an algorithm makes. But companies can regularly run simulations to test the results of their algorithms. Mr. Datta suggested that algorithms “be designed from scratch to be aware of values and not discriminate.”
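
Here is a minimal sketch of what such a simulation audit might look like. The impression counts reuse the 1,852 and 318 figures reported above, but the number of simulated visits per group is an assumption made purely for illustration, and the chi-square test is just one reasonable choice of check.

```python
# A minimal sketch of such a simulation audit. The impression counts reuse
# the 1,852 / 318 figures reported above; the number of simulated visits per
# group is an assumption made only for illustration.

from scipy.stats import chi2_contingency

visits   = {"male": 10_000, "female": 10_000}    # assumed simulated visits
ad_shown = {"male": 1_852, "female": 318}        # impressions from the study

table = [[ad_shown[g], visits[g] - ad_shown[g]] for g in ("male", "female")]
chi2, p, _, _ = chi2_contingency(table)

for g in ("male", "female"):
    print(f"{g}: high-paying-job ad shown on {ad_shown[g] / visits[g]:.1%} of visits")
print(f"p-value for 'no difference between groups': {p:.2e}")
# A gap this large with a vanishingly small p-value flags the pipeline for
# investigation; it does not, by itself, say whether the advertiser's
# targeting or the learning algorithm produced it.
```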

“The question of determining which kinds of biases we don’t want to tolerate is a policy one,” said Deirdre Mulligan, who studies these issues at the University of California, Berkeley School of Information. “It requires a lot of care and thinking about the ways we compose these technical systems.”

Silicon Valley, however, is known for pushing out new products without necessarily considering the societal or ethical implications. “There’s a huge rush to innovate,” Ms. Mulligan said, “a desire to release early and often — and then do cleanup.”

When Algorithms Discriminate – The New York Times.