What happens when artificial intelligence comes to Ottawa

More on the note of caution about government adoption of AI for decision-making (Ottawa’s use of AI in immigration system has profound implications for human rights):

There is a notion that the choices a computer algorithm makes on our behalf are neutral and somehow more reliable than our notoriously faulty human decision-making.

But, as a new report presented on Parliament Hill Wednesday points out, artificial intelligence isn’t pristine, absolute wisdom downloaded from the clouds. Rather, it’s shaped by the ideas and priorities of the human beings who build it and by the database of examples those architects feed into the machine’s “brain” to help it “learn” and build rules on which to operate.

Much like a child is a product of her family environment—what her parents teach her, what they read to her and show her of the world—artificial intelligence sees the world through the lens we provide for it. This new report, entitled “Bots at the Gate,” contemplates how decisions rendered by artificial intelligence (AI) in Canada’s immigration and refugee systems could impact the human rights, safety and privacy of people who are by definition among the most vulnerable and least able to advocate for themselves.

The report says the federal government has been “experimenting” with AI in limited immigration and refugee applications since at least 2014, including with “predictive analytics” meant to automate certain activities normally conducted by immigration officials. “The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others,” the document warns. “These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Citing ample evidence of how biased and confused—how human—artificial intelligence can be, the report from the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy makes the case for a very deliberate sort of caution.

The authors mention how a search engine coughs up ads for criminal record checks when presented with a name it associates with a black identity. A woman searching for jobs sees lower-paying opportunities than a man doing the same search. Image recognition software matches a photo of a woman with another of a kitchen. An app store suggests a sex offender search as related to a dating app for gay men.

“You have this huge dataset, you just feed it into the algorithm and trust it to pick out the patterns,” says Cynthia Khoo, a research fellow at the Citizen Lab and a lawyer specializing in technology. “If that dataset is based on a pre-existing set of human decisions, and human decisions are also faulty and biased—if humans have been traditionally racist, for example, or biased in other ways—then that pattern will simply get embedded into the algorithm and it will say, ‘This is the pattern. This is what they want, so I’m going to keep replicating that.’”
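
Khoo’s point can be sketched in code. The example below is a toy invented purely for illustration — a frequency-based “model” trained on hypothetical past decisions, not any system actually used by the government:

```python
from collections import Counter

# Hypothetical historical decisions: the "group" attribute is irrelevant
# to merit, but past human reviewers rejected group B far more often.
# A naive model learns that pattern as if it were signal.
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20 +
    [("B", "approve")] * 30 + [("B", "reject")] * 70
)

def train_majority_rule(records):
    """'Learn' the most common past outcome for each group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_rule(history)
print(model)  # {'A': 'approve', 'B': 'reject'} -- the old bias is now the rule
```

Nothing in the code mentions the groups’ characteristics; the discrimination comes entirely from the training data, exactly as Khoo describes.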

Immigration, Refugees and Citizenship Canada says the department launched two pilot projects in 2018 using computer analytics to identify straightforward and routine Temporary Resident Visa applications from China and India for faster processing. “The use of computer analytics is not intended to replace people,” the department said. “It is another tool to support officers and others in managing our ever-increasing volume of applications. Officers will always remain central to IRCC’s processing.”

This week, the report’s authors made the rounds on the Hill, presenting their findings and concerns to policy-makers. “It does now sound like it’s a measured approach,” says Petra Molnar, a lawyer and technology and human rights researcher with the IHRP. “Which is great.”

Other countries offer cautionary tales rather than best practices. “The algorithm that was used [to determine] whether or not someone was detained at the U.S.-Mexico border was actually set to detain everyone and used as a corroboration for the extension of the detention practices of the Trump administration,” says Molnar.

And in 2016, the U.K. government revoked the visas of 36,000 foreign students after automated voice analysis of their English language equivalency exams suggested they may have cheated and sent someone else to the exam in their place. When the automated voice analysis was compared to human analysis, however, it was found to be wrong over 20 per cent of the time—meaning the U.K. may have ejected 7,000 foreign students who had done nothing wrong.

The European Union’s General Data Protection Regulation, which came into force in May 2018, is on the other hand held up as the gold standard, enshrining such concepts as “the right to an explanation”: the legal certainty that if your data was processed by an automated tool, you have the right to know how it was done.

Immigration and refugee decisions are both opaque and highly discretionary even when rendered by human beings, argues Molnar, pointing out that two different immigration officers may look at the same file and reach different decisions. The report argues that lack of transparency reaches a different level when you introduce AI into the equation, outlining three distinct reasons.

First, automated decision-making solutions are often created by outside entities that sell them to government agencies, so the source code, training data and other information would be proprietary and hidden from public view.

Second, full disclosure of the guts of these programs might be a bad idea anyway because it could allow people to “game” the system.

“Third, as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers,” the authors explain. “In these cases, even when all aspects of a system are reviewable and superficially ‘transparent,’ the precise rationale for a given output may remain uninterpretable and unexplainable.” Many of these systems end up inscrutable black boxes that could spit out determinations on the futures of vulnerable people, the report argues.

Her group aims to use a “carrot-and-stick approach,” Khoo says, urging the federal government to make Canada a world leader on this in both a human rights and high-tech context. It’s a message that may find a receptive audience with a government that has been eager to make both halves of that equation central to its brand at home and abroad.

But they’ll have to move fast: If AI is currently in a nascent state in policy decisions that shape real people’s lives, it’s growing fast and won’t stay there for long.

“This is happening everywhere,” Khoo says.

Source: What happens when artificial intelligence comes to Ottawa

Ottawa’s use of AI in immigration system has profound implications for human rights

Good discussion of the main issues and the need for care and accountability frameworks in the development of AI and its algorithms.

The authors also note that “Human decision-making is also riddled with bias and error” (unfortunately, we don’t have any analysis of these systems comparable to Sean Rehaag’s work on IRB and Federal Court immigration-related decisions – Getting refugee decisions appealed in court ‘the luck of the draw,’ study shows):

How would you feel if an algorithm made a decision about your application for a Canadian work permit, or determined how much money you can bring in as an investor? What if it decided whether your marriage is “genuine?” Or if it trawled through your Tweets or Facebook posts to determine if you are “suspicious” and therefore a “risk,” without ever revealing any of the categories it used to make this decision?

While seemingly futuristic, these types of questions will soon be put to everyone who interacts with Canada’s immigration system.

A report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights.

We know that Canada has been experimenting with automated decision-making in its immigration determination process since at least 2014. These automated techniques support the evaluation of immigrant and visitor applications such as Express Entry for Permanent Residence. Recent announcements signal an expansion of these technologies to a variety of applications and immigration decisions in the coming years.

Exploring new technologies and innovations is exciting and necessary, particularly in an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes. However, without proper oversight and accountability mechanisms, the use of AI threatens to turn that system into a laboratory for high-risk experiments.

The system is already opaque. The ramifications of using AI in immigration and refugee decisions are far-reaching. Vulnerable and under-resourced communities such as those without citizenship often have access to less-robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may serve only to exacerbate these disparities and can result in severe rights violations, such as discrimination and threats to life and liberty.

Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion, or travel patterns, and thus entrench bias in the technology. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies. This could lead to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, such as the right to have a fair and impartial decision maker and being able to appeal your decision. These rights are internationally protected by instruments that Canada has ratified, such as the United Nations Convention on the Status of Refugees, and the International Covenant on Economic, Social and Cultural Rights, among others. These rights are also protected by the Canadian Charter of Rights and Freedoms and accompanying provincial human rights legislation.

At this point, there are more questions than answers.

If an algorithm makes a decision about your fate, can it be considered fair and impartial if it relies on biased data that is not made public? What happens to your data during the course of these decisions, and can it be shared with other departments, or even with the government of your country, potentially putting you at risk? The use of AI has already been criticized in the predictive policing context, where algorithms linked race with the likelihood of re-offending, linked women with lower-paying jobs, or purported to discern sexual orientation from photos.

Given the already limited safeguards and procedural justice protections in immigration and refugee decisions, the use of discriminatory and biased algorithms has profound ramifications for a person’s safety, life, liberty, security and mobility. Before exploring how these technologies will be used, we need to create a framework for transparency and accountability that addresses bias and error in automated decision-making.

Our report recommends that Ottawa establish an independent, arm’s-length body with the power to engage in all aspects of oversight and to review all automated decision-making systems used by the federal government, publishing all current and future uses of AI by the government. We advocate for the creation of a task force that brings together key government stakeholders, alongside academia and civil society, to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.

Without these frameworks and mechanisms, we risk creating a system that – while innovative and efficient – could ultimately result in human rights violations. Canada is exploring the use of this technology in high-risk contexts within an accountability vacuum. Human decision-making is also riddled with bias and error, and AI may in fact have positive impacts in terms of fairness and efficiency. We need a new framework of accountability that builds on the safeguards and review processes we have in place for the frailties in human decision-making. AI is not inherently objective or immune to bias and must be implemented only after a broad and critical look at the very real impacts these technologies will have on human lives.

Source: Ottawa’s use of AI in immigration system has profound implications for human rights

Machines Will Handle More Than Half of Workplace Tasks by 2025, WEF Report Says

The question is: what kinds of jobs, and will the impact truly be “positive”?

Organizers of the Davos forum say in a new report that machines are increasingly moving in on jobs done by people, projecting that more than half of all workplace tasks will be carried out by machines by 2025.

The World Economic Forum also predicts the loss of some 75 million jobs worldwide by 2022, but says 133 million new jobs will be created.

The WEF said Monday: “Despite bringing widespread disruption, the advent of machines, robots and algorithms could actually have a positive impact on human employment.”

The “Future of Jobs 2018” report, the second of its kind, is based on a survey of executives representing 15 million employees in 20 economies.

The WEF said challenges for employers include reskilling workers, enabling remote employment and building safety nets for workers.

Source: Machines Will Handle More Than Half of Workplace Tasks by 2025, a Report Says

How we stopped worrying and learned to love robots

Still looking for someone to translate the expected AI impact into what it means for immigration levels and skills:

As Bob Magee, chairman of the Woodbridge Group, walked us through his foam-manufacturing facility just north of Toronto, a familiar story emerged. Automation for this company isn’t a simple calculation of substituting one machine for one worker. Rather, it is one of many incremental steps in a process of continuous improvement that requires engaged employees at each and every step.

At the Woodbridge Group, automation is beneficial for the firm and workers alike. It contributes to improved competitiveness — a necessary precondition for jobs — while making existing jobs easier, more efficient and, from our observations, more enjoyable. Where workers were once required to lift and place heavy sheets of foam, these tasks are now done by machines. Workers are free to do what they do best: oversee processes, ensure quality and work as a team to make the plant more efficient.

Despite many stories like these, concerns over automation decimating the workforce and leaving millions unemployed persist. Is automation driving us toward a jobless future or a more productive and prosperous economy for firms and workers alike? To better understand what’s happening and what’s coming in Ontario, Ontario’s Ministry of Economic Development and Growth and Ministry of Advanced Education and Skills Development commissioned the Brookfield Institute to take a closer look. Our in-depth analysis included systematic reviews of existing literature and data, interviews with over 50 people representing labour, business and developers of technology, and a two-phase citizen engagement process in communities across the province involving roughly 300 individuals. Our work and findings were overseen and reviewed by an expert advisory panel of 14 people with technology, academic and industry expertise.

The extent to which communities and workers are impacted by automation depends on the behaviour of firms — that is, whether they invest in automation technologies. This decision is influenced by a myriad of internal and external factors, including domestic and international competition, changing consumer preferences and the need to maintain output as workers age and retire. The ultimate goal of automation is always to improve productivity, product quality and overall competitiveness.

Given Ontario firms’ track record on technology adoption, large-scale disruption is likely not around the corner. Just as many factors influence tech adoption, others impede it. These include cost barriers and risk aversion, the difficulties associated with integrating new technology into existing legacy systems and — surprisingly — shortages of workers with the skills to properly implement and maintain technology.

For many firms in Ontario these barriers significantly inhibit technological adoption. The gap in information and communications technology (ICT) investment between Ontario and the US is substantial and has grown in recent years. In 2015, Ontario firms’ annual ICT investment was 2.39 percent as a share of GDP, versus 3.15 percent in the US and 2.16 percent for Canada as a whole. This disparity puts a damper on predictions of an imminent automation-driven jobless future.

If Ontario firms continue to lag when it comes to tech adoption, the associated decline in competitiveness could spell disaster for them and their workers.

In the Canadian manufacturing sector (to which Ontario manufacturers contributed roughly 47 percent of output in 2016), firms’ ICT investment per worker was 57 percent of that of their US counterparts, as of 2013. Despite this lower rate of investment in technology, Ontario experienced sharper declines in employment (5.5 percent from 2001 to 2011) than both the US (4.2 percent) and Germany (4 percent) — jurisdictions with higher rates of technology adoption. This suggests that while automation has enabled many manufacturers to produce more goods with fewer people, low rates of technology adoption may also be a concern for workers.

Without skilled workers, automation simply would not be possible. They are needed at each and every step, to identify inefficiencies and to integrate and oversee technology.

When new technologies are adopted, the impact on workers is a function of how those specific technologies affect business activities, what new skills are needed as a result and whether these new skills are present in the firm’s existing workforce and in the broader labour market.

Automation can help firms retain existing jobs, albeit with different skill requirements. In some instances, employees can be redeployed, often to more interesting, productive and safe work. Automation can also help existing firms expand and new businesses form. Historically, automation has created more jobs than it eliminates, in the long run.

In Ontario’s finance and insurance sector, for example, automation has contributed to improved efficiency, yet employment continues to rise. Between 2002 and 2016, the number of workers required to generate $1 million in output declined from 5.9 to 5.2, but employment expanded by 35 percent, or 85,350 workers. But automation has also contributed to significant shifts in skill requirements, increasing demand for both soft and technical skills, including those related to client experience, sales, and project and risk management, as well as software development and data analysis. This shift is perhaps best exemplified by the impact of the ATM on bank tellers, whose numbers actually increased after ATMs were introduced.
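
The arithmetic behind those finance-sector figures can be checked directly. The 35 percent, the 85,350 workers and the 5.9 → 5.2 ratio come from the text; the baseline and 2016 employment levels are derived here, not stated in the article:

```python
# Figures from the text: employment rose 35% (85,350 workers) between
# 2002 and 2016, while workers needed per $1M of output fell 5.9 -> 5.2.
growth_rate = 0.35
added_workers = 85_350

baseline_2002 = added_workers / growth_rate      # implied 2002 employment
employment_2016 = baseline_2002 + added_workers

workers_per_million_2002 = 5.9
workers_per_million_2016 = 5.2
productivity_change = 1 - workers_per_million_2016 / workers_per_million_2002

print(round(baseline_2002))          # ~243,857 workers in 2002 (derived)
print(round(employment_2016))        # ~329,207 workers in 2016 (derived)
print(f"{productivity_change:.0%}")  # ~12% fewer workers per $1M of output
```

In other words, roughly a 12 percent productivity gain coexisted with a 35 percent employment gain — the sector grew faster than automation displaced work.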

Automation can eliminate certain kinds of job tasks and sometimes whole occupations. When new jobs are created, they often require different skill sets and frequently emerge in industries and regions different than those where jobs might have been lost. If workers are unable to move, to acquire new skills to adapt or to change jobs, they may experience a prolonged adjustment period of underemployment or unemployment. This in turn can depress local labour markets and exacerbate the inequitable distribution of wealth among individuals and across regions.

For workers and firms to be successful, Ontario must overcome barriers and embrace automation with an intensity comparable to that of our international peers. This process will require a skilled workforce able to support technological adoption. For many workers, the benefits are clear: jobs will be retained and may even get better. But an increased pace of automation could leave others behind. We need to ensure that workers have the skills and opportunities to adapt to and even drive automation. This will require more than incremental changes, and our public and private sectors will need to rethink and better coordinate existing programs geared toward promoting technological adoption and delivering skills training.

Source: How we stopped worrying and learned to love robots

The case against transparency in government AI

Useful note of caution regarding the risks of manipulation as Facebook’s experience indicates:

Governments are becoming increasingly aware of the potential for incorporating artificial intelligence (AI) and machine-assisted decision-making into their operations. The business case is compelling; AI has the ability to dramatically improve government service delivery, citizen satisfaction and government efficiency more generally. At the same time, there are ample cases demonstrating the potential for AI to fall short of public expectations. With governments continuing to lag far behind the rest of society in the adoption of digital technology, a successful incorporation of AI into government is far from being a foregone conclusion.

AI itself is essentially the product of combining exponential increases in the availability of data with exponential increases in computing power that can sort that data. Together, the result is software that makes judgments or predictions that are remarkably perceptive to the point of even feeling “intelligent.” Yet, while the outputs of AI might feel intelligent, it’s perhaps more accurate to say that these decisions are actually just very highly informed by software and big data. AI can make decisions and observations, but in fact AI lacks what we would call judgment and certainly does not have the capacity to make decisions guided by human morality. It is only able to report or make decisions based on the training data it is fed, and thus it will perpetuate flaws in data if they exist.

As a result of these technical limitations, AI can have a dark side, especially when it is incorporated in public affairs. Since AI decision-making is easily prone to propagating the biases of others, AI risks making clumsy decisions and offering inappropriate recommendations. In the US, where AI has been partly incorporated into the justice system, on several occasions it has been found to propagate racial profiling, discriminatory policing and harsher sentences for minorities. In other cases, the adoption of AI decision-making in staffing has recommended inferior positions and pay rates for women based solely on gender.

In light of such cases, there have been calls to drop AI from government decision-making or to mandate heavy transparency requirements for AI code and decision-making architecture to compensate for AI’s obvious shortcomings. The need to improve AI’s decision-making processes is clearly warranted in the pursuit of fairness and objectivity, yet accomplishing this in practice will pose many new challenges for government. For one, trying to rid AI of any bias will open up new sites for political conflict in spaces that were previously technocratic and insulated from debates about values. Such questions will spur a conversation about the conditions under which humans have a duty to interfere with, or reverse, decisions made by an AI.

Of course, discussions about how best to incorporate society’s values and expectations into AI systems need to occur, but these discussions also need to be subject to reasonable limits. If every conceivable instance of AI decision-making in government is easily accessible for dispute, revision and probing for values, then the effectiveness of such AI systems will quickly decline and the adoption of desperately needed AI capacity in government will grind to a halt. In this sense, the cautious incorporation of AI in government that is occurring today represents both a grand opportunity for government modernization and also a huge potential risk should the process go poorly. The decision-making process of AI used in government can and should be discussed, but with a keen recognition that these are delicate circumstances.

The clear need for reasonable limits on openness is heading into conflict with the emerging zeitgeist at the commanding heights of public administration, which favours ever more transparency in government. To be sure, governments are often notorious for their caution in sharing information, yet at an official level and in the broad principles recently espoused at the centre of government (particularly the Treasury Board), there has been an increasing emphasis on transparency and openness. Leaving aside how this meshes with the culture of the public service, at a policy and institutional level there is a growing reflex to automatically impose significant transparency requirements on new initiatives wherever possible. In general terms, this development is long overdue and a new standard that should be applauded.

Yet in the more specific terms related to AI decision-making in government, this otherwise welcome reflex to ever greater transparency and openness could be dangerously misplaced. Or at least, it may well be too much too soon for AI. AI decision-making depends on the software’s ability to sift through data which it analyzes to identify patterns and ultimately arrive at decisions. In an effort to make AI decisions more accountable to the public, over-zealous transparency can also offer those with malign intent a pathway to tainting the crucial data for informing decisions and inserting bias in the AI. Tainted data makes for flawed decisions, or as the shorthand goes, “garbage in, garbage out.” Full transparency about AI operational details, including what data “go in”, may well represent an inappropriate exposure to potential tampering. All data ultimately have a public source and if the exact collection points for these data are easily known, all the easier for wily individuals to feed tainted data to an AI decision system and bias the system.
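
The “garbage in, garbage out” risk can be sketched with a toy example. Everything here is invented for illustration — a decision system that averages publicly submitted reports, which anyone who knows the collection point can flood with extreme values:

```python
# Toy illustration of data poisoning: a system that averages publicly
# submitted reports can be skewed by an attacker who knows where the
# data comes from. The function, data and threshold are all invented.
def risk_score(reports):
    """Mean of submitted reports; the system flags a subject above 0.5."""
    return sum(reports) / len(reports)

honest = [0.2, 0.3, 0.1, 0.25, 0.15]   # organic signal: low risk
print(risk_score(honest) > 0.5)        # False -- not flagged

# An attacker who knows the collection point floods it with max scores.
poisoned = honest + [1.0] * 20
print(risk_score(poisoned) > 0.5)      # True -- now flagged
```

A real government system would be more defended than a raw mean, but the structural point stands: publishing exactly where and how data is ingested lowers the cost of this attack.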

A real-world case in point would be “TayTweets”, an artificially intelligent Twitter handle developed by Microsoft and released onto the Twitterverse to playfully interact with the public at large. TayTweets was known in advance to use the Tweets directed at its handle as its data source, which would permit it to “learn” language and ideas. The philosophy was that TayTweets would “learn” by communicating with everyday people through their Tweets, and the result would be a beautiful thing. Unfortunately, under this completely open approach, it did not take long for people to figure out how to manipulate the data that TayTweets would receive and use this to rig its responses. TayTweets had to be taken down within only 24 hours of its launch, when it began to voice disturbing, and even odious, opinions.

Presumably AI-enabled or assisted decision-making processes in government would be much more cautious in the wake of TayTweets, but it would not take long for this kind of vandalism to occur if government AI adhered to a strict regime of openness about its processes. Perhaps more importantly, a failure like TayTweets would be hugely consequential for a government and its legitimacy. Would any government suffering from a “TayTweets”-like incident continue to take the necessary risks with technological adoption that would ultimately permit it to modernize and stay relevant in the digital age? Perhaps not. A balance indeed needs to be struck but, given the high risk and potential for harm that would come with government AI failures, that balance should err on the side of caution.

Information about AI processes should always be accessible in principle to those who have a serious concern, but it should not be so readily accessible as to be a source of catastrophic impediment to the operations of government. Being open about AI decisions will be an important part of ensuring that government remains accountable in the 21st century, yet it is wrong-headed to assume that successful accountability will be accomplished for AI processes under the same paradigm that has been designed to govern traditional human decision-making processes. The principle of transparency remains a cornerstone of good governance, but we are not yet at the point of truly understanding what transparency looks like for AI-enhanced government. Assuming that we already are is a recipe for trouble.

Source: The case against transparency in government AI

Implicit bias, Starbucks and AI

Some of the more interesting articles on these issues over the past few weeks:

Take the horribly complex and difficult task of hiring new employees, make it less transparent, more confusing and remove all accountability. Sound good to you? Of course it does not, but that’s the path many employers are taking by adopting artificial intelligence in the hiring process.

Companies across the nation are now using some rudimentary artificial intelligence, or AI, systems to screen out applicants before interviews commence and for the interviews themselves. As a Guardian article from March explained, many of these companies have people interview in front of a camera connected to AI that analyzes their facial expressions, their voice and more. One of the top recruiting companies doing this, Hirevue, has large customers like Hilton and Unilever. Its AI scores people using thousands of data points and compares those scores to the scores of the best current employees.

But that can be unintentionally problematic. As Recode pointed out, because most programmers are white men, these AI systems are often trained using white male faces and male voices. That can lead to misperceptions of black faces or female voices, which in turn can lead to the AI making negative judgments about those people. The results could trend sexist or racist, but the employer using the AI would be able to shift the blame to a supposedly neutral technology.

Other companies have people do their first interview with an AI chatbot. One popular AI that does this is called Mya, which promises a 70 percent decrease in hiring time. Any number of questions these chatbots could ask could be proxies for race, gender or other factors.

An algorithm that judges resumes or powers a chatbot might factor in how far away someone lives from the office, which may have to do with historically racist housing laws. In that case, the black applicant who lives in the predominantly black neighborhood far away from the office gets rejected. Xerox actually encountered that exact problem years ago.
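
The proxy effect is easy to demonstrate. In the toy example below (all names, distances and the 20 km cutoff are invented), the screening rule never sees race, yet because commute distance correlates with historically segregated housing, filtering on distance reproduces the disparity anyway:

```python
# Toy proxy-variable effect: a "race-blind" screen on commute distance
# still discriminates when distance correlates with race. All data is
# invented for illustration.
applicants = [
    {"name": "A1", "distance_km": 8,  "group": "white"},
    {"name": "A2", "distance_km": 12, "group": "white"},
    {"name": "A3", "distance_km": 35, "group": "black"},
    {"name": "A4", "distance_km": 40, "group": "black"},
]

def passes_screen(applicant, max_distance_km=20):
    """A rule that only looks at commute distance, never at race."""
    return applicant["distance_km"] <= max_distance_km

accepted = [a["group"] for a in applicants if passes_screen(a)]
print(accepted)  # ['white', 'white'] -- the proxy did the discriminating
```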

“You can fire a racist HR person, you might not ever find out your AI has been producing racist or sexist results.”

“If you use data that reflects existing and historical bias, and you ask a mathematical tool to make predictions based on that data, the predictions will reflect that bias,” Rachel Goodman, a staff attorney at the ACLU’s Racial Justice Program, told The Daily Beast. It’s nearly impossible to make an algorithm that won’t produce some kind of bias, because almost every data point can be connected to another factor like someone’s race or gender. We’ve seen this happening when algorithms are used to determine prison sentences and parole in our justice system.

Source: Your Next Job Interview Could Be with a Racist Bot

Starbucks implicit bias training:

On whether people can learn about their implicit bias and retrain their brains to see others differently, McGill Johnson says it’s possible, but not when it’s done over a short period of time.

“It’s taken centuries for our brains to create these negative schemas about particular groups of people that have been marginalized in society,” she says. “And so it will take a really concerted, intentional effort to develop the counter-stereotypes that are required to move them out of our brains and replace them with others.”

At the workshops she runs, McGill Johnson says she starts with the idea that most people believe that they are fundamentally fair and believe in the egalitarian treatment of all races and genders.

It’s when behaviors arise such as those that led to the arrest of the two men in Philadelphia that people can’t account for the disparity between what they say they believe and how they react.

McGill Johnson says that raises the question that maybe the way people practice fairness is flawed.

“We’ve been taught to be colorblind. We’ve been taught that we can be objective when it comes to evaluating people, and the science suggests that sometimes our values aren’t sufficient for us to actually practice those pieces because our brains see race very quickly,” she says.

And what contributes to how brains process race and other identifiers is based on just about every other experience a person has had, watched or read.

“We develop, derive bias from just seeing certain pairings of words together over time. And those bits of information help us navigate our unconscious processes,” she says.

This means that in order to address people’s implicit bias, a lot of fundamental processes in the brain have to be changed.

While Starbucks is addressing a flaw in the company’s previous training, McGill Johnson says it will take more than one afternoon to completely address implicit bias.

“I think at best it will spark curiosity and an awareness that biases do not make us bad people — they actually make us human — but that we do have a capacity to override them,” she says. “And it’s really important for us to build in systems and practices that help us do that.”

Source: A Lesson In How To Overcome Implicit Bias 

Let me be clear. I believe that most Americans today really don’t consciously subscribe to racism or most other overtly bigoted beliefs. Even still, African Americans are incarcerated at five times the rate of whites and receive harsher sentences for the same crimes. Women earn less than what men earn for the same work. LGBT youth are disproportionately more likely to be homeless. Unemployment rates for black and Latino Americans are almost double those for whites. If we’re no longer, by and large, overtly bigoted, how do these blatant injustices persist?

A large body of research assigns some of the blame to our unconscious biases—or, as the academic community calls them, “implicit biases”—the attitudes and misperceptions that are baked into our minds due to systemic racism and pervasive stereotyping across society. As products of a sexist society, we all have a bias in favor of men and masculinity and against women and femininity. As products of a racist society, we all have a bias for white people and against people of color. As products of a classist society, we all have a bias for rich people and against poor people. And so on. We don’t consciously hold these beliefs; they’re like deep-down reflexes we’ve habituated to over time. They’re encoded in our brains and, in turn, they play out in ways that then reinforce society-wide bias.

Source: ‘Implicit Bias’ Is Very Real and It Infects Every One of Us: Sally Kohn 

Should justice be delivered by AI?

Interesting discussion. My understanding is that more legal research is being done by AI and my expectation is that more routine legal work could increasingly be done by AI.

For government, the obvious question is with respect to administrative decisions in routine cases, such as immigration, citizenship and social security. As the article notes, AI would likely be more consistent than humans, but the algorithms would need to be carefully reviewed given possible programmer biases:

It is conventional wisdom, repeated by authoritative voices such as the former chief justice of Canada Beverley McLachlin, that Canadians face an access-to-justice (A2J) crisis. While artificial intelligence (AI) and algorithm-assisted automated decision-making could play a role in ameliorating the crisis, the contemporary consensus holds that the risks posed by AI mean its use in the justice system should be curtailed. The view is that the types of decisions that have historically been made by judges and state-sanctioned tribunals should be reserved exclusively to human adjudicators, or at the very least be subject to human oversight, although this would limit the advantages of speed and lowered cost that AI might deliver.

But we should be wary of prematurely precluding a role for AI in addressing at least some elements of the A2J crisis. Before we concede that robust deployment of AI in the civil and criminal justice systems is to be avoided, we need to take the public’s views into account. What they have to say may lead us to very different conclusions from those reached by lawyers, judges and scholars.

Though the prospect of walking into a courtroom and being confronted by a robot judge remains the stuff of science fiction, we have entered an era in which informed commentators confidently predict that the foreseeable future will include autonomous artificial intelligences passing bar exams, getting licensed to practice law and, in the words of Matthew and Jean-Gabriel Castel in their 2016 article “The Impact of Artificial Intelligence on Canadian Law and the Legal Profession,” “perform[ing] most of the routine or ‘dull’ work done by justices of the peace, small claims courts and administrative boards and tribunals.” Hundreds of thousands of Canadians are affected by such work every year.

Influential voices in the AI conversation have strongly cautioned against AI being used in legal proceedings. Where the matter has been addressed by governments, such as in the EU’s General Data Protection Regulation or France’s Loi informatique et libertés, that precautionary approach has been rendered as a right for there always to be a “human in the loop”: decisions that affect legal rights are prohibited from being made solely by means of the automated processing of data.

Concerns about the accountability of AI — both generally and specifically in the context of legal decisions — should not be lightly dismissed. There are significant and potentially deleterious implications to replacing human adjudicators with AI. The risks posed by the deployment of AI in the delivery of legal services include nontransparency and concerns about where to locate liability for harms, as well as various forms of bias latent in the data relied on, in the way that algorithms interact with those data and in the way that users interact with the algorithm. Having AI replace human adjudicators may not even be technically possible: some observers such as Frank Pasquale and Eric L. Talley have taken pains to point out that there is an irreducible complexity, dynamism and nonlinearity to law, legal reasoning and moral judgment, which means these matters may not lend themselves to automation.

Real as those technological constraints may be at the moment, they also may be real only for the moment. Furthermore, while these constraints may apply to some (or even many) instances of adjudication, they don’t — or likely won’t — continue to apply to all of them. Law’s complexity runs along many axes, including applying to many areas of human endeavour and impacting many different aspects of our lives. This requires us to be careful not to treat all interactions with the justice system as equivalent for purposes of AI policy. We might use algorithms to expeditiously resolve, for example, consumer protection complaints or breach of contract disputes, but not matters relating to child custody or criminal offences.

Whether and when we deploy AI in the civil and criminal justice systems are questions that should be answered only after taking into account the views of the people who would be subject to those decisions. The answer to the question of judicial AI doesn’t belong to judges or lawyers, or at least not only to them — it belongs, in large part, to the public. Maintaining public confidence in the institution of the judiciary is a paramount concern for any liberal democratic society. If the courts are creaking under the strain of too many demands, if resolutions to disputes are hobbled by lengthy delays and exorbitant costs, we should be open to the possibility of using AI and algorithms to optimize judicial resources. If and to the extent we can preserve or enhance confidence in the administration of justice through the use of AI, policy-makers should be prepared to do so.

We can reframe the issue as an inquiry into what people look for from judicial decision-making processes. What are the criteria that lead people who are subject to justice system decisions to conclude that the process was “fair” or “just”? As Jay Thornton has noted, scholars in the social psychology of procedural justice, such as Gerald Leventhal and Tom Tyler, have done empirical work that provides exactly this insight into people’s subjective views. People want their justice system to feature such characteristics as consistency, accuracy, correctability, bias suppression, representativeness and ethicality. In Tyler’s formulation, people want a chance to present their side of the story and have it be considered; they want to be assured of the neutrality and trustworthiness of the decision-maker; and they want to be treated in a respectful and dignified manner.

It is not obvious that judicial AI fails to meet those criteria — it is almost certainly the case that on some of the relevant measures, such as consistency, judicial AI might fare better than human adjudicators. (Research has indicated, for example, that judges render more punitive decisions the longer they go without a meal — in other words, a hungry judge is a harsher judge. Whatever else might be said about robot judges, they won’t get hungry.) When deciding between human adjudication and AI adjudication, we should also attend to the question of whether existing human-driven processes are performing adequately on the criteria identified by Leventhal and Tyler. That is not a theoretical inquiry but an empirical one: it should be assessed by reference to the subjective satisfaction of the parties who are involved in those processes.

There may be certain types or categories of judicial decisions that people would prefer be performed by AI if so doing would result in faster and cheaper decisions. We must also take fully into account the fact that we already calibrate adjudicative processes for solemnity, procedural rigour and cost to reflect conventional views of what kinds of claims or disputes “matter” and to what extent they do so. For example, the rules of evidence that apply in “regular” courts are significantly relaxed (or even obviated) in courts designated as “small claims” (which often aren’t so small: in Ontario, Small Claims Court applies to disputes under $25,000). Some tribunals that make important decisions about the legal rights of parties — such as the Ontario Landlord and Tenant Board — do not require their adjudicators to have a law degree. We have been prepared to adjust judicial processes in an effort to make them more efficient, and where technology has been used to improve processes and facilitate dispute resolution, as has been the case with British Columbia’s online Civil Resolution Tribunal, the results appear to have been salutary. The use of AI in the judicial process should be viewed as a point farther down the road on that same journey.

The criminal and civil justice systems do not exist to provide jobs for judges or lawyers. They exist to deliver justice. If justice can be delivered by AI more quickly, at less cost and with no diminishment in public confidence, then the possibilities of judicial AI should be explored and implemented. It may ultimately be the case that confidence in the administration of justice would be compromised by the use of AI — but that is an empirical question, to be determined in consultation with the public. The questions of confidence in the justice system, and of whether to facilitate and deliver justice by means of AI (including the development of a taxonomy of the types of decisions that can or should be made using AI), can only be fully answered by those in whom that confidence resides: the public.

via Should justice be delivered by AI?

Get ready: A massive automation shift is coming for your job

Still waiting for some of the entities proposing increased immigration (e.g., Barton Commission, Century Initiative) to factor this into their thinking. The Conference Board has at least acknowledged the issue:

The robots are coming to take our jobs and Canada must do a lot more to deal with it.

That’s not the prediction of a doomsday prophet, but of the world’s leading business consultant, the managing director of global firm McKinsey & Co. and chair of the Canadian government’s Advisory Council on Economic Growth, Dominic Barton.

Okay, admittedly Mr. Barton didn’t exactly say the robots are taking over the planet. But he is warning that automation – robots, driverless cars, artificial intelligence, technological transformation – will disrupt millions of Canadian jobs, not far in the future, but in the next dozen years.

Put another way: If you are 30 or 35 now, there’s a good chance that not just your job, but the kind of job you do, will be eliminated – at the most inopportune time of life, when you are 40 to 55, perhaps with a mortgage and kids.

The council that Mr. Barton heads is calling for a national “re-skilling” effort that would cost $15-billion a year – per year – to help Canadians cope. He doesn’t think all that money can come from government, but he thinks it’s going to have to come from somewhere.

“The scale of the change is so significant. What are we doing to really get at that?” Mr. Barton said over the phone from Melbourne, Australia. “We’re talking a really big issue.”

This issue is a massive sleeper test for the government. It’s a test for all governments, really, but in this country it’s a test of ambition for Justin Trudeau’s Liberal government. It could well be the biggest societal issue of our time. Finance Minister Bill Morneau’s next budget will be delivered in less than two weeks. Will it even begin to reflect the scope of the issue?

To be fair, Mr. Morneau’s last budget talked a lot about job training, and it put some modest sums into it. Mr. Morneau, who ran a human-resources firm, was talking about these issues before he was elected as an MP. But there isn’t yet a government response from Ottawa that hints at the scale of Mr. Barton’s warning.

He is talking about vast change, soon. There are driverless cars now, he noted. That makes it easy to see the prospect of truck drivers thrown out of work en masse. (The courier firm FedEx has hinted its driverless vehicle plans aren’t so far away; the company has 400,000 employees.)

It’s not just truck drivers or factory workers who could see their jobs washed away by technological change. It includes knowledge workers, such as well-paid wealth managers who could find their current jobs automated. The Advisory Council estimated 10 to 12 per cent of Canadian workers could see their jobs disrupted by technology by 2030. “That’s two million people,” he noted. Mr. Barton thinks the estimate is conservative.

That’s different from when a company goes bankrupt or a plant closes, and laid-off workers go look for the same job at another company. Technological change will wipe out occupations. People will need to do new kinds of work, and they will need new skills. Technology might also create millions of jobs, but if Canadians don’t have the skills, a lot of those jobs might go to the United States or China or Sweden.

If you’ve watched the way voters in the United States and elsewhere have responded to disruptions of well-paying manufacturing jobs and good job opportunities, how it has fuelled divisive politics, an anti-trade backlash, and anti-immigrant nativism, just imagine how society could be roiled by two million middle-aged Canadians looking for work without much idea how they’re going to start over.

The Advisory Council argued that it has to be met with a major revamp of job training and lifelong education and a $15-billion injection of resources.

It’s an enormous sum, about three-quarters of the cost of the military. It’s too much for federal and provincial governments to pay alone, he argues, but business will have to be given incentives to do more education and training. Individuals, even those who feel squeezed saving for retirement, will have to save for lifelong learning, perhaps with tax-sheltered learning accounts. They won’t have a choice, he believes, “because it’s coming.”

The advisory council was appointed by the Liberals, and Mr. Barton has the ear of Mr. Trudeau and his inner circle. The Liberal government has adopted a lot of the council’s recommendations, to varying degrees, in its strategy to foster economic growth. But Mr. Barton noted the one with the biggest estimated impact is that massive re-skilling initiative. So far, governments aren’t working on that scale to face up to the impact of automation, but they will have to sooner or later. It’s coming.

via Get ready: A massive automation shift is coming for your job – The Globe and Mail

Facial Recognition Is Accurate, if You’re a White Guy – The New York Times

The built-in biases and limitations of facial recognition and the issues it raises:

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 percent of the time.

But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.

Color Matters in Computer Vision

Facial recognition algorithms made by Microsoft, IBM and Face++ were more likely to misidentify the gender of black women than white men.

Gender was misidentified in up to 1 percent of lighter-skinned males in a set of 385 photos.

Gender was misidentified in up to 7 percent of lighter-skinned females in a set of 296 photos.

Gender was misidentified in up to 12 percent of darker-skinned males in a set of 318 photos.

Gender was misidentified in 35 percent of darker-skinned females in a set of 271 photos.

In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women.
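The study's method can be sketched as an intersectional audit: instead of reporting one overall accuracy figure, errors are broken out by skin tone and gender together. The records below are invented to show the mechanics, not the study's data.

```python
from collections import defaultdict

# Each record is (skin tone, gender, prediction was correct?).
# Invented data: the system looks decent overall but fails badly
# on one intersection of attributes.
records = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

def error_rates_by_subgroup(records):
    """Fraction of wrong predictions for each (skin tone, gender) pair."""
    totals, errors = defaultdict(int), defaultdict(int)
    for skin, gender, correct in records:
        totals[(skin, gender)] += 1
        if not correct:
            errors[(skin, gender)] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates_by_subgroup(records)
print(rates[("lighter", "male")])   # 0.0
print(rates[("darker", "female")])  # 1.0, invisible in a single overall average
```

The overall error rate here is 3/8, which hides the fact that every darker-skinned woman in the set was misclassified; that is exactly the gap the per-subgroup breakdown surfaces.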

One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead.

Today, facial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with face identification and other A.I. technology as an ingredient in automated decisions with higher stakes like hiring and lending.

Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.

Facial recognition technology is lightly regulated so far.

“This is the right time to be addressing how these A.I. systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.

Until now, there was anecdotal evidence of computer vision miscues, and occasionally in ways that suggested discrimination. In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as “gorillas.”

Sorelle Friedler, a computer scientist at Haverford College and a reviewing editor on Ms. Buolamwini’s research paper, said experts had long suspected that facial recognition software performed differently on different populations.

“But this is the first work I’m aware of that shows that empirically,” Ms. Friedler said.

Ms. Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition firsthand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. She figured it was a flaw that would surely be fixed before long.

But a few years later, after joining the M.I.T. Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognize hers as a face.

By then, face recognition software was increasingly moving out of the lab and into the mainstream.

“O.K., this is serious,” she recalled deciding then. “Time to do something.”

So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair.

Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.

In her newly published paper, which will be presented at a conference this month, Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by classifying how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing.

To test the commercial systems, Ms. Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents.

The African and Nordic faces were scored according to a six-point labeling system used by dermatologists to classify skin types. The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.

Ms. Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.

Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”

Ms. Buolamwini’s co-author on her paper is Timnit Gebru, who described her role as an adviser. Ms. Gebru is a scientist at Microsoft Research, working on its Fairness Accountability Transparency and Ethics in A.I. group.

Megvii, whose Face++ software is widely used for identification in online payment and ride-sharing services in China, did not reply to several requests for comment, Ms. Buolamwini said.

Ms. Buolamwini is releasing her data set for others to use and build upon. She describes her research as “a starting point, very much a first step” toward solutions.

Ms. Buolamwini is taking further steps in the technical community and beyond. She is working with the Institute of Electrical and Electronics Engineers, a large professional organization in computing, to set up a group to create standards for accountability and transparency in facial analysis software.

She meets regularly with other academics, public policy groups and philanthropies that are concerned about the impact of artificial intelligence. Darren Walker, president of the Ford Foundation, said that the new technology could be a “platform for opportunity,” but that it would not happen if it replicated and amplified bias and discrimination of the past.

“There is a battle going on for fairness, inclusion and justice in the digital world,” Mr. Walker said.

Part of the challenge, scientists say, is that there is so little diversity within the A.I. community.

“We’d have a lot more introspection and accountability in the field of A.I. if we had more people like Joy,” said Cathy O’Neil, a data scientist and author of “Weapons of Math Destruction.”

Technology, Ms. Buolamwini said, should be more attuned to the people who use it and the people it’s used on.

“You can’t have ethical A.I. that’s not inclusive,” she said. “And whoever is creating the technology is setting the standards.”

via Facial Recognition Is Accurate, if You’re a White Guy – The New York Times

Diversity must be the driver of artificial intelligence: Kriti Sharma

Agree. Those creating the algorithms and related technology need to be both more diverse and more mindful of the assumptions baked into their analysis and work:

The question of what to do about biases and inequalities in the technology industry is not a new one. The number of women working in science, technology, engineering and mathematics (STEM) fields has always been disproportionately lower than the number of men. What may be more perplexing is why it is getting worse.

It’s 2017, and yet according to the American Association of University Women (AAUW) in a review of more than 380 studies from academic journals, corporations, and government sources, there is a major employment gap for women in computing and engineering.

North America, as home to leading centres of innovation and technology, is one of the worst offenders. A report from the Equal Employment Opportunity Commission (EEOC) found “the high-tech industry employed far fewer African-Americans, Hispanics, and women, relative to Caucasians, Asian-Americans, and men.”

However, as an executive working on the front line of technology, focusing specifically on artificial intelligence (AI), I’m one of many hoping to turn the tables.

This issue isn’t only confined to new product innovation. It’s also apparent in other aspects of the technology ecosystem – including venture capital. As The Globe highlighted, Ontario-based MaRS Data Catalyst published research on women’s participation in venture capital and found that “only 12.5 per cent of investment roles at VC firms were held by women. It could find just eight women who were partners in those firms, compared with 93 male partners.”

The Canadian government, for its part, is trying to address this issue head on and at all levels. Two years ago, Prime Minister Justin Trudeau campaigned on, and then fulfilled, the promise of having a cabinet with an equal ratio of women to men – a first in Canada’s history. When asked about the outcome from this decision at the recent Fortune Most Powerful Women Summit, he said, “It has led to a better level of decision-making than we could ever have imagined.”

Despite this push, disparities in developed countries like Canada are still apparent where “women earn 11 per cent less than men in comparable positions within a year of completing a PhD in science, technology, engineering or mathematics, according to an analysis of 1,200 U.S. grads.”

AI is the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, when you use Alexa or Siri, or your iPhone predicts your next word in a text message – that’s AI in action.
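The "predicts your next word" example can be sketched with the simplest possible model, a bigram counter; real keyboards use far larger models, but the idea of learning predictions from example text is the same.

```python
from collections import Counter, defaultdict

# Minimal next-word prediction: count which word most often follows
# each word in some training text, then predict the most common follower.

def train_bigrams(text):
    followers = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    options = followers.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # cat ("the" precedes "cat" twice, "mat" once)
```

Even this tiny model shows why training data matters: the prediction is nothing more than a reflection of what the text it learned from happened to contain.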

Many in the industry, myself included, strongly believe that AI should reflect the diversity of its users, and are working to minimize biases found in AI solutions. This should drive more impartial human interactions with technology (and with each other) to combat things like bias in the workplace.

The democratization of technology we are experiencing with AI is great. It’s helping to reduce time-to-market, it’s deepening the talent pool, and it’s helping businesses of all sizes cost-effectively gain access to the most modern technology. The challenge is that there are a few large organizations currently developing the AI fundamentals that all businesses can use. Considering this, we must take a step back and ensure the work happening is ethical.

AI is like a great big mirror. It reflects what it sees. And currently, the groups designing AI are not as diverse as we need them to be. While AI has the potential to bring services to everyone that are currently only available to some, we need to make sure we’re moving ahead in a way that reflects our purpose – to achieve diversity and equality. AI can be greatly influenced by human-designed choices, so we must be aware of the humans behind the technology curating it.

At a point when AI is poised to revolutionize our lives, the tech community has a responsibility to develop AI that is accountable and fit for purpose. For this reason, Sage created Five Core Principles to developing AI for business.

At the end of the day, AI’s biggest problem is a social one – not a technology one. But through diversity in its creation, AI will enable better-informed conversations between businesses and their customers.

If we can train humans to treat software better, hopefully, this will drive humans to treat humans better.

via Diversity must be the driver of artificial intelligence – The Globe and Mail