Why Google’s newest AI team is setting up in Canada – Recode

The Canadian advantage includes immigration and related policies:

DeepMind, Google’s London-based artificial intelligence research branch, is launching a team at the University of Alberta in Canada.

Why there? Two reasons come to mind:

1. Canada has a history of AI research

DeepMind chose the university partly for proximity to the broader AI research community in Canada.

A number of leading AI researchers in Silicon Valley hail from Canada, where they plugged away at deep learning, a complex automated process of data analysis, during a period when that technology — now popular at major tech companies — was considered by the larger computer science community to be a dead end.

Plus, almost a dozen DeepMind staff came from the university, according to a blog post by DeepMind co-founder and CEO Demis Hassabis announcing the new lab. An Alberta PhD and a former postdoc from the school played key roles in one of DeepMind’s hallmark accomplishments: getting its AlphaGo software to beat the human world champion at the Chinese strategy game Go.

“Our hope is that this collaboration will help turbocharge Edmonton’s growth as a technology and research hub,” wrote Hassabis, “attracting even more world-class AI researchers to the region and helping to keep them there too.”

2. The Canadian government is friendlier to AI research than the U.S.

Political realities also make Canada a particularly attractive place for Google to expand its AI efforts.

The Canadian government has demonstrated a willingness to invest in artificial intelligence, committing about $100 million (C$125 million) in its 2017 budget to develop the AI industry in the country.

This is in contrast to the U.S., where President Donald Trump’s 2018 budget request includes drastic cuts to medical and scientific research, including an 11 percent or $776 million cut to the National Science Foundation.

Another contrast to the U.S. is in immigration policies. Canada doesn’t have an equivalent of the U.S. travel ban, which restricts travel for immigrants and refugees from Iran, Libya, Somalia, Sudan, Syria and Yemen. That ban makes it more difficult for tech and academic talent to enter the United States.

Something interesting: One of the three researchers leading the team, Dr. Patrick M. Pilarski, is part of the university’s Department of Medicine. Google won’t comment on whether Pilarski’s medical background will play a role in his machine learning work for DeepMind, but Google is working on ways to integrate AI for health care as part of its cloud offering.

Source: Why Google’s newest AI team is setting up in Canada – Recode

The head of Google’s Brain team is more worried about the lack of diversity in artificial intelligence than an AI apocalypse – Recode

The next frontier of diversity?

As some would have it, robots are poised to take over the world in about 3 … 2 … 1 …

But one machine-learning expert — who is, after all, in a position to know — thinks that’s not the biggest issue facing artificial intelligence. In fact, it’s not an issue at all.

“I am personally not worried about an AI apocalypse, as I consider that a completely made-up fear,” Jeff Dean, a senior fellow at Google, wrote during a Reddit AMA on Aug. 11. “I am concerned about the lack of diversity in the AI research community and in computer science more generally.” (Emphasis his.)

Ding, ding, ding. The issue that the tech industry is trying to maneuver its way around, for better or worse, is the same issue that can stunt the progress of “humanistic thinking” in the development of artificial intelligence, according to Dean.

For the optimists in the audience, Google Brain wants to improve lives, Dean wrote. And how can you improve lives without people with diverse perspectives and backgrounds helping to build and develop the technology you hope will effect positive change? (Answer: You can’t.)

“One of the things I really like about our Brain Residency program is that the residents bring a wide range of backgrounds, areas of expertise (e.g. we have physicists, mathematicians, biologists, neuroscientists, electrical engineers, as well as computer scientists), and other kinds of diversity to our research efforts,” Dean wrote.

“In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary.”

Source: The head of Google’s Brain team is more worried about the lack of diversity in artificial intelligence than an AI apocalypse – Recode

Artificial Intelligence’s White Guy Problem – The New York Times

Valid concerns regarding who designs the algorithms and how to eliminate or at least minimize bias.

Perhaps the algorithms and the people who write them should take the Implicit Association Test?

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

Take a small example from last year: Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.

But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
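
To see why skewed data yields skewed recognition, consider a minimal sketch: a classifier trained almost entirely on one group learns that group’s structure and performs near chance on the other. The synthetic two-feature setup, the 95/5 split, and every number below are illustrative assumptions, not anything from the article.

```python
# Minimal sketch: a model trained on data dominated by one group
# learns that group's structure and underperforms on the other.
# Synthetic stand-in for the face-recognition example; all numbers illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, label_axis):
    """Each group's label depends on a different feature axis."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_axis] > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- the skew is the point.
Xa, ya = make_group(950, label_axis=0)
Xb, yb = make_group(50, label_axis=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal the gap.
for name, axis in [("group A", 0), ("group B", 1)]:
    Xt, yt = make_group(2000, label_axis=axis)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
# Typical output: group A well above 0.9, group B near 0.5 (chance).
```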

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.
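
Disparities like the ones ProPublica reported can be audited from outcomes alone, with no access to the proprietary formula: all you need is each defendant’s group, risk label, and actual outcome. A minimal sketch, with entirely invented records:

```python
# Sketch of the kind of error-rate audit ProPublica ran: compare
# false positive rates across groups from outcome data alone.
# The records below are invented for illustration, not the study's data.
records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", True,  False),
    ("white", False, False), ("white", True,  True),
    ("white", False, True),  ("white", False, False),
]

def false_positive_rate(group):
    # FPR: share of the group's NON-reoffenders who were flagged high risk
    no_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in no_reoffend if r[1]]
    return len(flagged) / len(no_reoffend)

for g in ("black", "white"):
    print(g, "FPR:", round(false_positive_rate(g), 2))
# A gap in false positive rates can coexist with equal overall accuracy,
# which is why the choice of metric to audit matters so much.
```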

Police departments across the United States are also deploying data-driven risk-assessment tools in “predictive policing” crime prevention efforts. In many cities, including New York, Los Angeles, Chicago and Miami, software analyses of large sets of historical crime data are used to forecast where crime hot spots are most likely to emerge; the police are then directed to those areas.

At the very least, this software risks perpetuating an already vicious cycle, in which the police increase their presence in the same places they are already policing (or overpolicing), thus ensuring that more arrests come from those areas. In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less. Predictive programs are only as good as the data they are trained on, and that data has a complex history.
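
That cycle is easy to make concrete with a toy simulation. The sketch below assumes, purely for illustration, that two neighborhoods have identical true crime rates and that patrols are allocated in proportion to past recorded arrests; every number is invented.

```python
# Toy feedback loop: two neighborhoods with IDENTICAL true crime rates,
# but patrols follow past arrest records, and arrests are only recorded
# where patrols go. All numbers are invented for illustration.
import random
random.seed(1)

TRUE_CRIME_RATE = 0.3          # identical in both neighborhoods
arrests = {"A": 6, "B": 4}     # a small initial imbalance in the records

for year in range(10):
    total = arrests["A"] + arrests["B"]
    # allocate 100 patrols in proportion to the historical record
    patrols = {h: round(100 * arrests[h] / total) for h in arrests}
    for hood, n in patrols.items():
        arrests[hood] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(n))

print(arrests)
# The initial 6:4 imbalance persists (and widens in absolute terms)
# because the records never get a chance to correct toward the true 50/50.
```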

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems. Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but the episode reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.
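
Audits like the CMU study work from the outside: run pools of otherwise-identical automated browsing agents that differ in one attribute, tally what each pool is shown, and test whether the difference could be chance. A rough sketch with invented counts that merely mirror the direction of the finding:

```python
# Black-box ad audit sketch: compare how often a high-paying-job ad was
# shown to two pools of simulated agents differing only in declared gender.
# The impression counts are invented for illustration, not the study's data.
from scipy.stats import chi2_contingency

#                 shown  not shown
impressions = [[  1852,   8148],   # agents declared male
               [   318,   9682]]   # agents declared female

chi2, p, dof, _ = chi2_contingency(impressions)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
# A tiny p-value says the disparity is real; it cannot say WHY it arose:
# advertiser targeting, bidding dynamics, or the ad algorithm itself.
```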

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.

Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

While machine-learning technology can offer unexpected insights and new forms of convenience, we must address the current implications for communities that have less power, for those who aren’t dominant in elite Silicon Valley circles.

Currently the loudest voices debating the potential dangers of superintelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator.

But for those who already face marginalization or bias, the threats are here.

Source: Artificial Intelligence’s White Guy Problem – The New York Times