Machines Will Handle More Than Half of Workplace Tasks by 2025, WEF Report Says

The question is: what kinds of jobs, and will the impact truly be “positive”?

Organizers of the Davos forum say in a new report that machines are increasingly moving in on jobs done by people, projecting that more than half of all workplace tasks will be carried out by machines by 2025.

The World Economic Forum also predicts the loss of some 75 million jobs worldwide by 2022, but says 133 million new jobs will be created.

The WEF said Monday: “Despite bringing widespread disruption, the advent of machines, robots and algorithms could actually have a positive impact on human employment.”

The “Future of Jobs 2018” report, the second of its kind, is based on a survey of executives representing 15 million employees in 20 economies.

The WEF said challenges for employers include reskilling workers, enabling remote employment and building safety nets for workers.

Source: Machines Will Handle More Than Half of Workplace Tasks by 2025, a Report Says

How we stopped worrying and learned to love robots

Still looking for someone to translate the expected AI impact into what it means for immigration levels and skills:

As Bob Magee, chairman of the Woodbridge Group, walked us through his foam-manufacturing facility just north of Toronto, a familiar story emerged. Automation for this company isn’t a simple calculation of substituting one machine for one worker. Rather, it is one of many incremental steps in a process of continuous improvement that requires engaged employees at each and every step.

At the Woodbridge Group, automation is beneficial for the firm and workers alike. It contributes to improved competitiveness — a necessary precondition for jobs — while making existing jobs easier, more efficient and, from our observations, more enjoyable. Where workers were once required to lift and place heavy sheets of foam, these tasks are now done by machines. Workers are free to do what they do best: oversee processes, ensure quality and work as a team to make the plant more efficient.

Despite many stories like these, concerns over automation decimating the workforce and leaving millions unemployed persist. Is automation driving us toward a jobless future or a more productive and prosperous economy for firms and workers alike? To better understand what’s happening and what’s coming in Ontario, Ontario’s Ministry of Economic Development and Growth and Ministry of Advanced Education and Skills Development commissioned the Brookfield Institute to take a closer look. Our in-depth analysis included systematic reviews of existing literature and data, interviews with over 50 people representing labour, business and developers of technology, and a two-phase citizen engagement process in communities across the province involving roughly 300 individuals. Our work and findings were overseen and reviewed by an expert advisory panel of 14 people with technology, academic and industry expertise.

The extent to which communities and workers are impacted by automation depends on the behaviour of firms — that is, whether they invest in automation technologies. This decision is influenced by a myriad of internal and external factors, including domestic and international competition, changing consumer preferences and the need to maintain output as workers age and retire. The ultimate goal of automation is always to improve productivity, product quality and overall competitiveness.

Given Ontario firms’ track record on technology adoption, large-scale disruption is likely not around the corner. Just as many factors influence tech adoption, others impede it. These include cost barriers and risk aversion, the difficulties associated with integrating new technology in existing legacy systems and — surprisingly — shortages of workers with the skills to properly implement and maintain technology.

For many firms in Ontario these barriers significantly inhibit technological adoption. The gap in information and communications technology (ICT) investment between Ontario and the US is substantial and has grown in recent years. In 2015, Ontario firms’ annual ICT investment was 2.39 percent as a share of GDP, versus 3.15 percent in the US and 2.16 percent for Canada as a whole. This disparity puts a damper on predictions of an imminent automation-driven jobless future.

If Ontario firms continue to lag when it comes to tech adoption, the associated decline in competitiveness could spell disaster for them and their workers.

In the Canadian manufacturing sector (to which Ontario manufacturers contributed roughly 47 percent of output in 2016), firms’ ICT investment per worker was 57 percent of that of their US counterparts, as of 2013. Despite this lower rate of investment in technology, Ontario experienced sharper declines in employment (5.5 percent from 2001 to 2011) than both the US (4.2 percent) and Germany (4 percent) — jurisdictions with higher rates of technology adoption. This suggests that while automation has enabled many manufacturers to produce more goods with fewer people, low rates of technology adoption may also be a concern for workers.

Without skilled workers, automation simply would not be possible. They are needed at each and every step, to identify inefficiencies and to integrate and oversee technology.

When new technologies are adopted, the impact on workers is a function of how those specific technologies affect business activities, what new skills are needed as a result and whether these new skills are present in the firm’s existing workforce and in the broader labour market.

Automation can help firms retain existing jobs, albeit with different skill requirements. In some instances, employees can be redeployed, often to more interesting, productive and safe work. Automation can also help existing firms expand and new businesses form. Historically, automation has created more jobs than it eliminates, in the long run.

In Ontario’s finance and insurance sector, for example, automation has contributed to improved efficiency, yet employment continues to rise. Between 2002 and 2016, the number of workers required to generate $1 million in output declined from 5.9 to 5.2, but employment expanded by 35 percent, or 85,350 workers. But automation has also contributed to significant shifts in skill requirements, increasing demand for both soft and technical skills, including those related to client experience, sales, and project and risk management, as well as software development and data analysis. This shift is perhaps best exemplified by the impact of the ATM on bank tellers, whose numbers actually increased after ATMs were introduced.
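A quick back-of-the-envelope check, sketched here in Python using only the figures quoted above, shows how productivity gains and employment growth can coexist: if fewer workers are needed per $1 million of output while headcount still rises 35 percent, total output must have grown even faster.

```python
# Back-of-the-envelope check of the Ontario finance/insurance figures quoted above.
# The inputs come from the article; everything derived from them is illustrative.

workers_per_million_2002 = 5.9   # workers needed to generate $1M of output in 2002
workers_per_million_2016 = 5.2   # same measure in 2016
employment_growth = 0.35         # employment expanded by 35 percent over the period
added_workers = 85_350           # reported absolute increase in workers

# Employment levels implied by the reported 35 percent / 85,350-worker increase.
employment_2002 = added_workers / employment_growth          # roughly 244,000 workers
employment_2016 = employment_2002 * (1 + employment_growth)  # roughly 329,000 workers

# Output in $M is employment divided by workers needed per $1M of output.
output_2002 = employment_2002 / workers_per_million_2002
output_2016 = employment_2016 / workers_per_million_2016

print(f"Implied output growth: {output_2016 / output_2002 - 1:.0%}")  # about 53%
```

In other words, the sector produced roughly half again as much output over the period, which is how a roughly 13 percent gain in output per worker and a 35 percent gain in employment fit together.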

Automation can eliminate certain kinds of job tasks and sometimes whole occupations. When new jobs are created, they often require different skill sets and frequently emerge in industries and regions different than those where jobs might have been lost. If workers are unable to move, to acquire new skills to adapt or to change jobs, they may experience a prolonged adjustment period of underemployment or unemployment. This in turn can depress local labour markets and exacerbate the inequitable distribution of wealth among individuals and across regions.

For workers and firms to be successful, Ontario must overcome barriers and embrace automation with an intensity comparable to that of our international peers. This process will require a skilled workforce able to support technological adoption. For many workers, the benefits are clear: jobs will be retained and may even get better. But an increased pace of automation could leave others behind. We need to ensure that workers have the skills and opportunities to adapt to and even drive automation. This will require more than incremental changes, and our public and private sectors will need to rethink and better coordinate existing programs geared toward promoting technological adoption and delivering skills training.

Source: How we stopped worrying and learned to love robots

The case against transparency in government AI

Useful note of caution regarding the risks of manipulation as Facebook’s experience indicates:

Governments are becoming increasingly aware of the potential for incorporating artificial intelligence (AI) and machine-assisted decision-making into their operations. The business case is compelling; AI has the ability to dramatically improve government service delivery, citizen satisfaction and government efficiency more generally. At the same time, there are ample cases demonstrating the potential for AI to fall short of public expectations. With governments continuing to lag far behind the rest of society in the adoption of digital technology, a successful incorporation of AI into government is far from being a foregone conclusion.

AI itself is essentially the product of combining exponential increases in the availability of data with exponential increases in computing power that can sort that data. Together, the result is software that makes judgments or predictions that are remarkably perceptive to the point of even feeling “intelligent.” Yet, while the outputs of AI might feel intelligent, it’s perhaps more accurate to say that these decisions are actually just very highly informed by software and big data. AI can make decisions and observations, but in fact AI lacks what we would call judgment and certainly does not have the capacity to make decisions guided by human morality. It is only able to report or make decisions based on the training data it is fed, and thus it will perpetuate flaws in data if they exist.
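A minimal synthetic sketch of that point, using hypothetical data rather than any real government system, shows how a model fit to flawed historical decisions simply learns to repeat them: the bias in the training data reappears, almost mechanically, in the model’s outputs.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (a protected attribute)
skill = rng.normal(0, 1, n)        # the thing the decision is supposed to be about
noise = rng.normal(0, 0.5, n)

# Historical decisions quietly held group B to a higher bar.
approved = (skill + noise > np.where(group == 1, 0.8, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# At identical skill, the learned model gives group B a lower approval probability;
# it has faithfully absorbed the flaw in the data it was fed.
same_skill = 0.5
print("Group A:", model.predict_proba([[0, same_skill]])[0, 1])
print("Group B:", model.predict_proba([[1, same_skill]])[0, 1])
```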

As a result of these technical limitations, AI can have a dark side, especially when it is incorporated in public affairs. Since AI decision-making is easily prone to propagating the biases of others, AI risks making clumsy decisions and offering inappropriate recommendations. In the US, where AI has been partly incorporated into the justice system, on several occasions it has been found to propagate racial profiling, discriminatory policing and harsher sentences for minorities. In other cases, the adoption of AI decision-making in staffing has recommended inferior positions and pay rates for women based solely on gender.

In light of such cases, there have been calls to drop AI from government decision-making or to mandate heavy transparency requirements for AI code and decision-making architecture to compensate for AI’s obvious shortcomings. The need to improve AI’s decision-making processes is clearly warranted in the pursuit of fairness and objectivity, yet accomplishing this in practice will pose many new challenges for government. For one, trying to rid AI of any bias will open up new sites for political conflict in spaces that were previously technocratic and insulated from debates about values. Such questions will spur a conversation about the conditions under which humans have a duty to interfere with, or reverse, decisions made by an AI.

Of course, discussions about how best to incorporate society’s values and expectations into AI systems need to occur, but these discussions also need to be subject to reasonable limits. If every conceivable instance of AI decision-making in government is easily accessible for dispute, revision and probing for values, then the effectiveness of such AI systems will quickly decline and the adoption of desperately needed AI capacity in government will grind to a halt. In this sense, the cautious incorporation of AI in government that is occurring today represents both a grand opportunity for government modernization and also a huge potential risk should the process go poorly. The decision-making process of AI used in government can and should be discussed, but with a keen recognition that these are delicate circumstances.

The clear need for reasonable limits on openness is heading into conflict with the emerging zeitgeist at the commanding heights of public administration which favour ever more transparency in government. To be sure, governments are often notorious for their caution in sharing information, yet at an official level and in the broad principles recently espoused at the centre of government (particularly the Treasury Board), there has been an increasing emphasis on transparency and openness. Leaving aside how this meshes with the culture of the public service, at a policy and institutional level there is a growing reflex to automatically impose significant transparency requirements on new initiatives wherever possible. In general terms, this development is long overdue and a new standard that should be applauded.

Yet in the more specific terms related to AI decision-making in government, this otherwise welcome reflex to ever greater transparency and openness could be dangerously misplaced. Or at least, it may well be too much too soon for AI. AI decision-making depends on the software’s ability to sift through data which it analyzes to identify patterns and ultimately arrive at decisions. In an effort to make AI decisions more accountable to the public, over-zealous transparency can also offer those with malign intent a pathway to tainting the crucial data for informing decisions and inserting bias in the AI. Tainted data makes for flawed decisions, or as the shorthand goes, “garbage in, garbage out.” Full transparency about AI operational details, including what data “go in”, may well represent an inappropriate exposure to potential tampering. All data ultimately have a public source and if the exact collection points for these data are easily known, all the easier for wily individuals to feed tainted data to an AI decision system and bias the system.

A real-world case in point would be “TayTweets”, an artificially intelligent Twitter handle developed by Microsoft and released onto the Twitterverse to playfully interact with the public at large. TayTweets was known in advance to use the Tweets directed at its handle as its data source, which would permit it to “learn” language and ideas. The philosophy was that TayTweets would “learn” by communicating with everyday people through their Tweets, and the result would be a beautiful thing. Unfortunately, under this completely open approach, it did not take long for people to figure out how to manipulate the data that TayTweets would receive and use this to rig its responses. TayTweets had to be taken down within only 24 hours of its launch, when it began to voice disturbing, and even odious, opinions.
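Mechanically, this failure mode is simple. A toy sketch, entirely hypothetical and far cruder than anything Microsoft built, shows how any learner that updates from a publicly writable intake point can be dragged wherever a coordinated flood of tainted submissions wants it to go:

```python
# Toy illustration of "garbage in, garbage out" when the data intake is public:
# a naive online learner that averages user feedback can be steered to any value
# by whoever knows where, and how, the data goes in. Entirely hypothetical.

class NaiveOnlineScore:
    """Keeps a running average 'quality score' for an item from public ratings."""
    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def submit(self, rating: float) -> None:
        self.total += rating
        self.count += 1

    @property
    def score(self) -> float:
        return self.total / self.count if self.count else 0.0

service = NaiveOnlineScore()

# 1,000 genuine ratings clustered around 4 out of 5.
for _ in range(1_000):
    service.submit(4.0)
print(f"Before tampering: {service.score:.2f}")  # 4.00

# An attacker who knows exactly where the data comes from floods the same endpoint.
for _ in range(5_000):
    service.submit(0.0)
print(f"After tampering:  {service.score:.2f}")  # 0.67
```

Real systems are less naive than a running average, but the underlying lesson is the same: the more precisely outsiders know what data goes in and how it is used, the easier the system is to steer.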

Presumably AI-enabled or assisted decision-making processes in government would be much more cautious in the wake of TayTweets, but it would not take long for this kind of vandalism to occur if government AI adhered to a strict regime of openness about its processes. Perhaps more importantly, a failure like TayTweets would be hugely consequential for a government and its legitimacy. Would any government suffering from a “TayTweets”-like incident continue to take the necessary risks with technological adoption that would ultimately permit it to modernize and stay relevant in the digital age? Perhaps not. A balance indeed needs to be struck but, given the high risk and potential for harm that would come with government AI failures, that balance should err on the side of caution.

Information about AI processes should always be accessible in principle to those that have a serious concern, but it should not be so readily accessible as to be a source of catastrophic impediment to the operations of government. Being open about AI decisions will be an important part of ensuring that government remains accountable in the 21st century, yet it is wrong-headed to assume that successful accountability will be accomplished for AI processes under the same paradigm that has been designed to govern traditional human decision-making processes. The principle of transparency remains a cornerstone of good governance, but we are not yet at the point of truly understanding what transparency looks like for AI-enhanced government. Assuming that we already are is a recipe for trouble.

Source: The case against transparency in government AI

Implicit bias, Starbucks and AI

Some of the more interesting articles on these issues over the past few weeks:

Take the horribly complex and difficult task of hiring new employees, make it less transparent, more confusing and remove all accountability. Sound good to you? Of course it does not, but that’s the path many employers are taking by adopting artificial intelligence in the hiring process.

Companies across the nation are now using some rudimentary artificial intelligence, or AI, systems to screen out applicants before interviews commence and for the interviews themselves. As a Guardian article from March explained, many of these companies are having people interview in front of a camera that is connected to AI that analyzes their facial expressions, their voice and more. One of the top recruiting companies doing this, Hirevue, has large customers like Hilton and Unilever. Their AI scores people using thousands of data points and compares them to the scores of the best current employees.

But that can be unintentionally problematic. As Recode pointed out, because most programmers are white men, these AI systems are often trained using white male faces and male voices. That can lead to misperceptions of black faces or female voices, which can lead to the AI making negative judgments about those people. The results could trend sexist or racist, but the employer using this AI would be able to shift the blame to a supposedly neutral technology.

Other companies have people do their first interview with an AI chatbot. One popular AI that does this is called Mya, which promises a 70 percent decrease in hiring time. Any number of questions these chatbots could ask could be proxies for race, gender or other factors.

An algorithm that judges resumes or powers a chatbot might factor in how far away someone lives from the office, which may have to do with historically racist housing laws. In that case, the black applicant who lives in the predominantly black neighborhood far away from the office gets rejected. Xerox actually encountered that exact problem years ago.
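A hypothetical sketch of that proxy effect, built on synthetic data rather than any real hiring system, shows why simply deleting the protected attribute does not help: a correlated feature such as commute distance carries much of the same signal.

```python
# Synthetic illustration of a proxy variable. Race/gender never enters the model,
# yet a correlated feature (commute distance) reproduces the disparity anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
minority = rng.integers(0, 2, n)

# Segregated housing patterns: minority applicants live farther from the office on average.
commute_km = rng.normal(np.where(minority == 1, 30, 10), 5, n)

# Past hiring decisions quietly penalized long commutes (and, through them, minority applicants).
hired = (rng.normal(0, 1, n) - 0.05 * commute_km > -1.0).astype(int)

# Train on commute distance alone; the protected attribute is never a feature.
model = LogisticRegression().fit(commute_km.reshape(-1, 1), hired)
predictions = model.predict(commute_km.reshape(-1, 1))

print("Predicted hire rate, non-minority applicants:", round(predictions[minority == 0].mean(), 2))
print("Predicted hire rate, minority applicants:    ", round(predictions[minority == 1].mean(), 2))
# The gap persists because commute distance acts as a stand-in for group membership.
```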

“You can fire a racist HR person, you might not ever find out your AI has been producing racist or sexist results.”

“If you use data that reflects existing and historical bias, and you ask a mathematical tool to make predictions based on that data, the predictions will reflect that bias,” Rachel Goodman, a staff attorney at the ACLU’s Racial Justice Program, told The Daily Beast. It’s nearly impossible to make an algorithm that won’t produce some kind of bias, because almost every data point can be connected to another factor like someone’s race or gender. We’ve seen this happening when algorithms are used to determine prison sentences and parole in our justice system.

Source: Your Next Job Interview Could Be with a Racist Bot

Starbucks implicit bias training:

On whether people can learn about their implicit bias and retrain their brains to see others differently, McGill Johnson says it’s possible, but not when it’s done over a short period of time.

“It’s taken centuries for our brains to create these negative schemas about particular groups of people that have been marginalized in society,” she says. “And so it will take a really concerted, intentional effort to develop the counter-stereotypes that are required to move them out of our brains and replace them with others.”

At the workshops she runs, McGill Johnson says she starts with the idea that most people believe they are fundamentally fair and believe in the equality of all races and genders.

It’s when behaviors arise such as those that led to the arrest of the two men in Philadelphia that people can’t account for the disparity between what they say they believe and how they react.

McGill Johnson says that raises the question that maybe the way people practice fairness is flawed.

“We’ve been taught to be colorblind. We’ve been taught that we can be objective when it comes to evaluating people, and the science suggests that sometimes our values aren’t sufficient for us to actually practice those pieces because our brains see race very quickly,” she says.

And how brains process race and other identifiers is shaped by just about everything a person has experienced, watched or read.

“We develop, derive bias from just seeing certain pairings of words together over time. And those bits of information help us navigate our unconscious processes,” she says.

This means that in order to address people’s implicit bias, a lot of fundamental processes in the brain have to be changed.

While Starbucks is addressing a flaw in the company’s previous training, McGill Johnson says it will take more than one afternoon to completely address implicit bias.

“I think at best it will spark curiosity and an awareness that biases do not make us bad people — they actually make us human — but that we do have a capacity to override them,” she says. “And it’s really important for us to build in systems and practices that help us do that.”

Source: A Lesson In How To Overcome Implicit Bias 

Let me be clear. I believe that most Americans today really don’t consciously subscribe to racism or most other overtly bigoted beliefs. Even still, African Americans are incarcerated at five times the rate of whites and receive harsher sentences for the same crimes. Women earn less than what men earn for the same work. LGBT youth are disproportionately more likely to be homeless. Unemployment rates for black and Latino Americans are almost double those for whites. If we’re no longer, by and large, overtly bigoted, how do these blatant injustices persist?

A large body of research assigns some of the blame to our unconscious biases—or, as the academic community calls them, “implicit biases”—the attitudes and misperceptions that are baked into our minds due to systemic racism and pervasive stereotyping across society. As products of a sexist society, we all have a bias in favor of men and masculinity and against women and femininity. As products of a racist society, we all have a bias for white people and against people of color. As products of a classist society, we all have a bias for rich people and against poor people. And so on. We don’t consciously hold these beliefs; they’re like deep-down reflexes we’ve habituated to over time. They’re encoded in our brains and, in turn, they play out in ways that then reinforce society-wide bias.

Source: ‘Implicit Bias’ Is Very Real and It Infects Every One of Us: Sally Kohn 

Should justice be delivered by AI?

Interesting discussion. My understanding is that more legal research is already being done by AI, and my expectation is that more routine legal work will increasingly follow.

For government, the obvious question is with respect to administrative decisions such as immigration, citizenship, social security etc in routine cases. As the article notes, AI would likely be more consistent than humans, but the algorithms would need to be carefully reviewed given possible programmer biases:

It is conventional wisdom, repeated by authoritative voices such as the former chief justice of Canada Beverley McLachlin, that Canadians face an access-to-justice (A2J) crisis. While artificial intelligence (AI) and algorithm-assisted automated decision-making could play a role in ameliorating the crisis, the contemporary consensus holds that the risks posed by AI mean its use in the justice system should be curtailed. The view is that the types of decisions that have historically been made by judges and state-sanctioned tribunals should be reserved exclusively to human adjudicators, or at the very least be subject to human oversight, although this would limit the advantages of speed and lowered cost that AI might deliver.

But we should be wary of prematurely precluding a role for AI in addressing at least some elements of the A2J crisis. Before we concede that robust deployment of AI in the civil and criminal justice systems is to be avoided, we need to take the public’s views into account. What they have to say may lead us to very different conclusions from those reached by lawyers, judges and scholars.

Though the prospect of walking into a courtroom and being confronted by a robot judge remains the stuff of science fiction, we have entered an era in which informed commentators confidently predict that the foreseeable future will include autonomous artificial intelligences passing bar exams, getting licensed to practice law and, in the words of Matthew and Jean-Gabriel Castel in their 2016 article “The Impact of Artificial Intelligence on Canadian Law and the Legal Profession,” “perform[ing] most of the routine or ‘dull’ work done by justices of the peace, small claims courts and administrative boards and tribunals.” Hundreds of thousands of Canadians are affected by such work every year.

Influential voices in the AI conversation have strongly cautioned against AI being used in legal proceedings. Where the matter has been addressed by governments, such as in the EU’s General Data Protection Regulation or France’s Loi informatique et libertés, that precautionary approach has been rendered as a right for there always to be a “human in the loop”: decisions that affect legal rights are prohibited from being made solely by means of the automated processing of data.

Concerns about the accountability of AI — both generally and specifically in the context of legal decisions — should not be lightly dismissed. There are significant and potentially deleterious implications to replacing human adjudicators with AI. The risks posed by the deployment of AI in the delivery of legal services include nontransparency and concerns about where to locate liability for harms, as well as various forms of bias latent in the data relied on, in the way that algorithms interact with those data and in the way that users interact with the algorithm. Having AI replace human adjudicators may not even be technically possible: some observers such as Frank Pasquale and Eric L. Talley have taken pains to point out that there is an irreducible complexity, dynamism and nonlinearity to law, legal reasoning and moral judgment, which means these matters may not lend themselves to automation.

Real as those technological constraints may be at the moment, they also may be real only for the moment. Furthermore, while these constraints may apply to some (or even many) instances of adjudication, they don’t — or likely won’t — continue to apply to all of them. Law’s complexity runs along many axes, including applying to many areas of human endeavour and impacting many different aspects of our lives. This requires us to be careful not to treat all interactions with the justice system as equivalent for purposes of AI policy. We might use algorithms to expeditiously resolve, for example, consumer protection complaints or breach of contract disputes, but not matters relating to child custody or criminal offences.

Whether and when we deploy AI in the civil and criminal justice systems are questions that should be answered only after taking into account the views of the people who would be subject to those decisions. The answer to the question of judicial AI doesn’t belong to judges or lawyers, or at least not only to them — it belongs, in large part, to the public. Maintaining public confidence in the institution of the judiciary is a paramount concern for any liberal democratic society. If the courts are creaking under the strain of too many demands, if resolutions to disputes are hobbled by lengthy delays and exorbitant costs, we should be open to the possibility of using AI and algorithms to optimize judicial resources. If and to the extent we can preserve or enhance confidence in the administration of justice through the use of AI, policy-makers should be prepared to do so.

We can reframe the issue as an inquiry into what people look for from judicial decision-making processes. What are the criteria that lead people who are subject to justice system decisions to conclude that the process was “fair” or “just”? As Jay Thornton has noted, scholars in the social psychology of procedural justice, such as Gerald Leventhal and Tom Tyler, have done empirical work that provides exactly this insight into people’s subjective views. People want their justice system to feature such characteristics as consistency, accuracy, correctability, bias suppression, representativeness and ethicality. In Tyler’s formulation, people want a chance to present their side of the story and have it be considered; they want to be assured of the neutrality and trustworthiness of the decision-maker; and they want to be treated in a respectful and dignified manner.

It is not obvious that judicial AI fails to meet those criteria — it is almost certainly the case that on some of the relevant measures, such as consistency, judicial AI might fare better than human adjudicators. (Research has indicated, for example, that judges render more punitive decisions the longer they go without a meal — in other words, a hungry judge is a harsher judge. Whatever else might be said about robot judges, they won’t get hungry.) When deciding between human adjudication and AI adjudication, we should also attend to the question of whether existing human-driven processes are performing adequately on the criteria identified by Leventhal and Tyler. That is not a theoretical inquiry but an empirical one: it should be assessed by reference to the subjective satisfaction of the parties who are involved in those processes.

There may be certain types or categories of judicial decisions that people would prefer be performed by AI if so doing would result in faster and cheaper decisions. We must also take fully into account the fact that we already calibrate adjudicative processes for solemnity, procedural rigour and cost to reflect conventional views of what kinds of claims or disputes “matter” and to what extent they do so. For example, the rules of evidence that apply in “regular” courts are significantly relaxed (or even obviated) in courts designated as “small claims” (which often aren’t so small: in Ontario, Small Claims Court applies to disputes under $25,000). Some tribunals that make important decisions about the legal rights of parties — such as the Ontario Landlord and Tenant Board — do not require their adjudicators to have a law degree. We have been prepared to adjust judicial processes in an effort to make them more efficient, and where technology has been used to improve processes and facilitate dispute resolution, as has been the case with British Columbia’s online Civil Resolution Tribunal, the results appear to have been salutary. The use of AI in the judicial process should be viewed as a point farther down the road on that same journey.

The criminal and civil justice systems do not exist to provide jobs for judges or lawyers. They exist to deliver justice. If justice can be delivered by AI more quickly, at less cost and with no diminishment in public confidence, then the possibilities of judicial AI should be explored and implemented. It may ultimately be the case that confidence in the administration of justice would be compromised by the use of AI — but that is an empirical question, to be determined in consultation with the public. The questions of confidence in the justice system, and of whether to facilitate and deliver justice by means of AI (including the development of a taxonomy of the types of decisions that can or should be made using AI), can only be fully answered by those in whom that confidence resides: the public.

via Should justice be delivered by AI?

Get ready: A massive automation shift is coming for your job

Still waiting for some of the entities proposing increased immigration (e.g., Barton Commission, Century Initiative) to factor this into their thinking. The Conference Board has at least acknowledged the issue:

The robots are coming to take our jobs and Canada must do a lot more to deal with it.

That’s not the prediction of a doomsday prophet, but of the world’s leading business consultant, the managing director of global firm McKinsey & Co. and chair of the Canadian government’s Advisory Council on Economic Growth, Dominic Barton.

Okay, admittedly Mr. Barton didn’t exactly say the robots are taking over the planet. But he is warning that automation – robots, driverless cars, artificial intelligence, technological transformation – will disrupt millions of Canadian jobs, not far in the future, but in the next dozen years.

Put another way: If you are 30 or 35 now, there’s a good chance that not just your job, but the kind of job you do, will be eliminated – at the most inopportune time of life, when you are 40 to 55, perhaps with a mortgage and kids.

The council that Mr. Barton heads is calling for a national “re-skilling” effort that would cost $15-billion a year – per year – to help Canadians cope. He doesn’t think all that money can come from government, but he thinks it’s going to have to come from somewhere.

“The scale of the change is so significant. What are we doing to really get at that?” Mr. Barton said over the phone from Melbourne, Australia. “We’re talking a really big issue.”

This issue is a massive sleeper test for the government. It’s a test for all governments, really, but in this country it’s a test of ambition for Justin Trudeau’s Liberal government. It could well be the biggest societal issue of our time. Finance Minister Bill Morneau’s next budget will be delivered in less than two weeks. Will it even begin to reflect the scope of the issue?

To be fair, Mr. Morneau’s last budget talked a lot about job training, and it put some modest sums into it. Mr. Morneau, who ran a human-resources firm, was talking about these issues before he was elected as an MP. But there isn’t yet a government response from Ottawa that hints at the scale of Mr. Barton’s warning.

He is talking about vast change, soon. There are driverless cars now, he noted. That makes it easy to see the prospect of truck drivers thrown out of work en masse. (The courier firm FedEx has hinted its driverless vehicle plans aren’t so far away; the company has 400,000 employees.)

It’s not just truck drivers or factory workers who could see their jobs washed away by technological change. It includes knowledge workers, such as well-paid wealth managers who could find their current jobs automated. The Advisory Council estimated 10 to 12 per cent of Canadian workers could see their jobs disrupted by technology by 2030. “That’s two million people,” he noted. Mr. Barton thinks the estimate is conservative.

That’s different from when a company goes bankrupt or a plant closes, and laid-off workers go look for the same job at another company. Technological change will wipe out occupations. People will need to do new kinds of work, and they will need new skills. Technology might also create millions of jobs, but if Canadians don’t have the skills, a lot of those jobs might go to the United States or China or Sweden.

If you’ve watched the way voters in the United States and elsewhere have responded to disruptions of well-paying manufacturing jobs and good job opportunities, how it has fuelled divisive politics, an anti-trade backlash, and anti-immigrant nativism, just imagine how society could be roiled by two million middle-aged Canadians looking for work without much idea how they’re going to start over.

The Advisory Council argued that it has to be met with a major revamp of job training and lifelong education and a $15-billion injection of resources.

It’s an enormous sum, about three-quarters of the cost of the military. It’s too much for federal and provincial governments to pay alone, he argues, but business will have to be given incentives to do more education and training. Individuals, even those who feel squeezed saving for retirement, will have to save for lifelong learning, perhaps with tax-sheltered learning accounts. They won’t have a choice, he believes, “because it’s coming.”

The advisory council was appointed by the Liberals, and Mr. Barton has the ear of Mr. Trudeau and his inner circle. The Liberal government has adopted a lot of the council’s recommendations, to varying degrees, in its strategy to foster economic growth. But Mr. Barton noted the one with the biggest estimated impact is that massive re-skilling initiative. So far, governments are not working on anything like that scale to face up to the impact of automation, but they will have to sooner or later. It’s coming.

via Get ready: A massive automation shift is coming for your job – The Globe and Mail

Facial Recognition Is Accurate, if You’re a White Guy – The New York Times

The built-in biases and limitations of facial recognition and the issues it raises:

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 percent of the time.

But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.

Color Matters in Computer Vision

Facial recognition algorithms made by Microsoft, IBM and Face++ were more likely to misidentify the gender of black women than white men.

Gender was misidentified in up to 1 percent of lighter-skinned males in a set of 385 photos.

Gender was misidentified in up to 7 percent of lighter-skinned females in a set of 296 photos.

Gender was misidentified in up to 12 percent of darker-skinned males in a set of 318 photos.

Gender was misidentified in 35 percent of darker-skinned females in a set of 271 photos.

In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women.

One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead.

Today, facial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with face identification and other A.I. technology as an ingredient in automated decisions with higher stakes like hiring and lending.

Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.

Facial recognition technology is lightly regulated so far.

“This is the right time to be addressing how these A.I. systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.

Until now, there was anecdotal evidence of computer vision miscues, and occasionally in ways that suggested discrimination. In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as “gorillas.”

Sorelle Friedler, a computer scientist at Haverford College and a reviewing editor on Ms. Buolamwini’s research paper, said experts had long suspected that facial recognition software performed differently on different populations.

“But this is the first work I’m aware of that shows that empirically,” Ms. Friedler said.

Ms. Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition firsthand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. She figured it was a flaw that would surely be fixed before long.

But a few years later, after joining the M.I.T. Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognize hers as a face.

By then, face recognition software was increasingly moving out of the lab and into the mainstream.

“O.K., this is serious,” she recalled deciding then. “Time to do something.”

So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair.

Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.

In her newly published paper, which will be presented at a conference this month, Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by classifying how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing.

To test the commercial systems, Ms. Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents.

The African and Nordic faces were scored according to a six-point labeling system used by dermatologists to classify skin types. The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.
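The headline numbers above come from exactly this kind of disaggregated evaluation. A minimal sketch of the bookkeeping, using a handful of made-up toy predictions rather than her actual data, shows the idea: report error rates per subgroup instead of one overall accuracy figure.

```python
# Toy sketch of disaggregated evaluation: error rates broken out per subgroup
# rather than averaged into a single accuracy number. Records are invented.
from collections import defaultdict

# Each record: (true gender, predicted gender, subgroup label)
results = [
    ("F", "M", "darker female"),  ("F", "F", "darker female"),  ("F", "M", "darker female"),
    ("M", "M", "darker male"),    ("M", "M", "darker male"),    ("M", "F", "darker male"),
    ("F", "F", "lighter female"), ("F", "F", "lighter female"), ("F", "M", "lighter female"),
    ("M", "M", "lighter male"),   ("M", "M", "lighter male"),   ("M", "M", "lighter male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for truth, predicted, subgroup in results:
    totals[subgroup] += 1
    if truth != predicted:
        errors[subgroup] += 1

for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup:>14}: {rate:.0%} misclassified ({errors[subgroup]}/{totals[subgroup]})")
```

A system can look excellent on the aggregate number while failing badly on the smallest, least-represented subgroup, which is why this breakdown matters.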

Ms. Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.

Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”

Ms. Buolamwini’s co-author on her paper is Timnit Gebru, who described her role as an adviser. Ms. Gebru is a scientist at Microsoft Research, working on its Fairness Accountability Transparency and Ethics in A.I. group.

Megvii, whose Face++ software is widely used for identification in online payment and ride-sharing services in China, did not reply to several requests for comment, Ms. Buolamwini said.

Ms. Buolamwini is releasing her data set for others to use and build upon. She describes her research as “a starting point, very much a first step” toward solutions.

Ms. Buolamwini is taking further steps in the technical community and beyond. She is working with the Institute of Electrical and Electronics Engineers, a large professional organization in computing, to set up a group to create standards for accountability and transparency in facial analysis software.

She meets regularly with other academics, public policy groups and philanthropies that are concerned about the impact of artificial intelligence. Darren Walker, president of the Ford Foundation, said that the new technology could be a “platform for opportunity,” but that it would not happen if it replicated and amplified bias and discrimination of the past.

“There is a battle going on for fairness, inclusion and justice in the digital world,” Mr. Walker said.

Part of the challenge, scientists say, is that there is so little diversity within the A.I. community.

“We’d have a lot more introspection and accountability in the field of A.I. if we had more people like Joy,” said Cathy O’Neil, a data scientist and author of “Weapons of Math Destruction.”

Technology, Ms. Buolamwini said, should be more attuned to the people who use it and the people it’s used on.

“You can’t have ethical A.I. that’s not inclusive,” she said. “And whoever is creating the technology is setting the standards.”

via Facial Recognition Is Accurate, if You’re a White Guy – The New York Times

Diversity must be the driver of artificial intelligence: Kriti Sharma

Agree. Those creating the algorithms and related technology need to be both more diverse and more mindful of the assumptions baked into their analysis and work:

The question of what to do about biases and inequalities in the technology industry is not a new one. The number of women working in science, technology, engineering and mathematics (STEM) fields has always been disproportionately lower than the number of men. What may be more perplexing is: why is it getting worse?

It’s 2017, and yet, according to a review by the American Association of University Women (AAUW) of more than 380 studies from academic journals, corporations and government sources, there is a major employment gap for women in computing and engineering.

North America, as home to leading centres of innovation and technology, is one of the worst offenders. A report from the Equal Employment Opportunity Commission (EEOC) found “the high-tech industry employed far fewer African-Americans, Hispanics, and women, relative to Caucasians, Asian-Americans, and men.”

However, as an executive working on the front line of technology, focusing specifically on artificial intelligence (AI), I’m one of many hoping to turn the tables.

This issue isn’t only confined to new product innovation. It’s also apparent in other aspects of the technology ecosystem – including venture capital. As The Globe highlighted, Ontario-based MaRS Data Catalyst published research on women’s participation in venture capital and found that “only 12.5 per cent of investment roles at VC firms were held by women. It could find just eight women who were partners in those firms, compared with 93 male partners.”

The Canadian government, for its part, is trying to address this issue head on and at all levels. Two years ago, Prime Minister Justin Trudeau campaigned on, and then fulfilled, the promise of having a cabinet with an equal ratio of women to men – a first in Canada’s history. When asked about the outcome from this decision at the recent Fortune Most Powerful Women Summit, he said, “It has led to a better level of decision-making than we could ever have imagined.”

Despite this push, disparities in developed countries like Canada are still apparent, where “women earn 11 per cent less than men in comparable positions within a year of completing a PhD in science, technology, engineering or mathematics, according to an analysis of 1,200 U.S. grads.”

AI is the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, when you use Alexa or Siri, or your iPhone predicts your next word in a text message – that’s AI in action.

Many in the industry, myself included, strongly believe that AI should reflect the diversity of its users, and are working to minimize biases found in AI solutions. This should drive more impartial human interactions with technology (and with each other) to combat things like bias in the workplace.

The democratization of technology we are experiencing with AI is great. It’s helping to reduce time-to-market, it’s deepening the talent pool, and it’s helping businesses of all sizes cost-effectively gain access to the most modern of technology. The challenge is that a few large organizations are currently developing the AI fundamentals that all businesses can use. Considering this, we must take a step back and ensure the work happening is ethical.

AI is like a great big mirror. It reflects what it sees. And currently, the groups designing AI are not as diverse as we need them to be. While AI has the potential to bring services to everyone that are currently only available to some, we need to make sure we’re moving ahead in a way that reflects our purpose – to achieve diversity and equality. AI can be greatly influenced by human-designed choices, so we must be aware of the humans behind the technology curating it.

At a point when AI is poised to revolutionize our lives, the tech community has a responsibility to develop AI that is accountable and fit for purpose. For this reason, Sage created Five Core Principles for developing AI for business.

At the end of the day, AI’s biggest problem is a social one – not a technology one. But through diversity in its creation, AI will enable better-informed conversations between businesses and their customers.

If we can train humans to treat software better, hopefully, this will drive humans to treat humans better.

via Diversity must be the driver of artificial intelligence – The Globe and Mail

Why Google’s newest AI team is setting up in Canada – Recode

The Canadian advantage includes immigration and related policies:

DeepMind, Google’s London-based artificial intelligence research branch, is launching a team at the University of Alberta in Canada.

Why there? Two reasons come to mind:

1. Canada has a history of AI research

DeepMind is launching a team at the university partly for proximity to the broader AI research community in Canada.

A number of leading AI researchers in Silicon Valley hail from Canada, where they plugged away at deep learning, a complex automated process of data analysis, during a period when that technology — now popular at major tech companies — was considered by the larger computer science community to be a dead end.

Plus, almost a dozen DeepMind staff came from the university, according to a blog post by DeepMind co-founder and CEO Demis Hassabis announcing the new lab. An Alberta PhD and a former postdoc from the school played key roles in one of DeepMind’s hallmark accomplishments: getting its AlphaGo software to beat the human world champion at the Chinese strategy game Go.

“Our hope is that this collaboration will help turbocharge Edmonton’s growth as a technology and research hub,” wrote Hassabis, “attracting even more world-class AI researchers to the region and helping to keep them there too.”

2. The Canadian government is friendlier to AI research than the U.S.

Political realities also make Canada a particularly attractive place for Google to expand its AI efforts.

The Canadian government has demonstrated a willingness to invest in artificial intelligence, committing about $100 million ($125 million in Canadian currency) in its 2017 budget to develop the AI industry in the country.

This is in contrast to the U.S., where President Donald Trump’s 2018 budget request includes drastic cuts to medical and scientific research, including an 11 percent or $776 million cut to the National Science Foundation.

Another contrast to the U.S. is in immigration policies. Canada doesn’t have an equivalent of the U.S. travel ban, which restricts travel for immigrants and refugees from Iran, Libya, Somalia, Sudan, Syria and Yemen. In the U.S., the ban makes it more difficult for tech and academic talent to enter the country.

Something interesting: One of the three researchers leading the team, Dr. Patrick M. Pilarski, is part of the university’s Department of Medicine. Google won’t comment on whether Pilarski’s medical background will play a role in his machine learning work for DeepMind, but Google is working on ways to integrate AI for health care as part of its cloud offering.

Source: Why Google’s newest AI team is setting up in Canada – Recode

The head of Google’s Brain team is more worried about the lack of diversity in artificial intelligence than an AI apocalypse – Recode

The next frontier of diversity?

As some would have it, robots are poised to take over the world in about 3 … 2 … 1 …

But one machine-learning expert — who is, after all, in a position to know — thinks that’s not the biggest issue facing artificial intelligence. In fact, it’s not an issue at all.

“I am personally not worried about an AI apocalypse, as I consider that a completely made-up fear,” Jeff Dean, a senior fellow at Google, wrote during a Reddit AMA on Aug. 11. “I am concerned about the lack of diversity in the AI research community and in computer science more generally.” (Emphasis his.)

Ding, ding, ding. The issue that the tech industry is trying to maneuver its way around, for better or worse, is the same issue that can stunt the progress of “humanistic thinking” in the development of artificial intelligence, according to Dean.

For the optimists in the audience, Google Brain wants to improve lives, Dean wrote. And how can you improve lives without people with diverse perspectives and backgrounds helping to build and develop the technology you hope will drive positive change? (Answer: You can’t.)

“One of the things I really like about our Brain Residency program is that the residents bring a wide range of backgrounds, areas of expertise (e.g. we have physicists, mathematicians, biologists, neuroscientists, electrical engineers, as well as computer scientists), and other kinds of diversity to our research efforts,” Dean wrote.

“In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary.”

Source: The head of Google’s Brain team is more worried about the lack of diversity in artificial intelligence than an AI apocalypse – Recode