What happens when artificial intelligence comes to Ottawa

More on the note of caution about government adoption of AI for decision-making (Ottawa’s use of AI in immigration system has profound implications for human rights):

There is a notion that the choices a computer algorithm makes on our behalf are neutral and somehow more reliable than our notoriously faulty human decision-making.

But, as a new report presented on Parliament Hill Wednesday points out, artificial intelligence isn’t pristine, absolute wisdom downloaded from the clouds. Rather, it’s shaped by the ideas and priorities of the human beings who build it and by the database of examples those architects feed into the machine’s “brain” to help it “learn” and build rules on which to operate.

Much like a child is a product of her family environment—what her parents teach her, what they read to her and show her of the world—artificial intelligence sees the world through the lens we provide for it. This new report, entitled “Bots at the Gate,” contemplates how decisions rendered by artificial intelligence (AI) in Canada’s immigration and refugee systems could impact the human rights, safety and privacy of people who are by definition among the most vulnerable and least able to advocate for themselves.

The report says the federal government has been “experimenting” with AI in limited immigration and refugee applications since at least 2014, including with “predictive analytics” meant to automate certain activities normally conducted by immigration officials. “The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others,” the document warns. “These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Citing ample evidence of how biased and confused—how human—artificial intelligence can be, the report from the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy makes the case for a very deliberate sort of caution.

The authors mention how a search engine coughs up ads for criminal record checks when presented with a name it associates with a black identity. A woman searching for jobs sees lower-paying opportunities than a man doing the same search. Image recognition software matches a photo of a woman with another of a kitchen. An app store suggests a sex offender search as related to a dating app for gay men.

“You have this huge dataset, you just feed it into the algorithm and trust it to pick out the patterns,” says Cynthia Khoo, a research fellow at the Citizen Lab and a lawyer specializing in technology. “If that dataset is based on a pre-existing set of human decisions, and human decisions are also faulty and biased—if humans have been traditionally racist, for example, or biased in other ways—then that pattern will simply get embedded into the algorithm and it will say, ‘This is the pattern. This is what they want, so I’m going to keep replicating that.’”
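
Khoo’s point can be illustrated with a minimal sketch. Everything in it is hypothetical (synthetic data, made-up feature names, scikit-learn assumed as the toolkit); it is not how IRCC’s systems work, only a demonstration of the mechanism she describes: a model trained on historical decisions that penalized one group will reproduce that penalty for otherwise identical applicants.

```python
# Purely illustrative sketch (synthetic data, hypothetical feature names):
# a model trained on biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

merit = rng.normal(size=n)          # an applicant "merit" score
group = rng.integers(0, 2, size=n)  # 0 or 1: two demographic groups

# Historical human decisions: approval tracked merit, but group-1 applicants
# were systematically penalized -- the pre-existing human bias.
past_approved = (merit - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, past_approved)

# Two new applicants, identical in every respect except group membership.
applicants = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(applicants)[:, 1])
# The group-1 applicant receives a noticeably lower approval probability:
# the old pattern is now embedded in the algorithm.
```

Nothing in the sketch tells the model to discriminate; it simply finds the pattern in the past decisions it was given and, as Khoo puts it, keeps replicating it.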

Immigration, Refugees and Citizenship Canada says the department launched two pilot projects in 2018 using computer analytics to identify straightforward and routine Temporary Resident Visa applications from China and India for faster processing. “The use of computer analytics is not intended to replace people,” the department said. “It is another tool to support officers and others in managing our ever-increasing volume of applications. Officers will always remain central to IRCC’s processing.”

This week, the report’s authors made the rounds on the Hill, presenting their findings and concerns to policy-makers. “It does now sound like it’s a measured approach,” says Petra Molnar, a lawyer and technology and human rights researcher with the IHRP. “Which is great.”

Other countries offer cautionary tales rather than best practices. “The algorithm that was used [to determine] whether or not someone was detained at the U.S.-Mexico border was actually set to detain everyone and used as a corroboration for the extension of the detention practices of the Trump administration,” says Molnar.

And in 2016, the U.K. government revoked the visas of 36,000 foreign students after automated voice analysis of their English language equivalency exams suggested they may have cheated and sent someone else to the exam in their place. When the automated voice analysis was compared to human analysis, however, it was found to be wrong over 20 per cent of the time—meaning the U.K. may have ejected 7,000 foreign students who had done nothing wrong.

The European Union’s General Data Protection Regulation, which took effect in May 2018, on the other hand, is held up as the gold standard, enshrining concepts such as “the right to an explanation,” or the legal certainty that if your data was processed by an automated tool, you have the right to know how it was done.

Immigration and refugee decisions are both opaque and highly discretionary even when rendered by human beings, argues Molnar, pointing out that two different immigration officers may look at the same file and reach different decisions. The report argues that lack of transparency reaches a different level when you introduce AI into the equation, outlining three distinct reasons.

First, automated decision-making systems are often created by outside vendors that sell them to government agencies, so the source code, training data and other information are typically proprietary and hidden from public view.

Second, full disclosure of the guts of these programs might be a bad idea anyway because it could allow people to “game” the system.

“Third, as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers,” the authors explain. “In these cases, even when all aspects of a system are reviewable and superficially ‘transparent,’ the precise rationale for a given output may remain uninterpretable and unexplainable.” Many of these systems end up inscrutable black boxes that could spit out determinations on the futures of vulnerable people, the report argues.

Her group aims to use a “carrot-and-stick approach,” Khoo says, urging the federal government to make Canada a world leader on this in both a human rights and high-tech context. It’s a message that may find a receptive audience with a government that has been eager to make both halves of that equation central to its brand at home and abroad.

But they’ll have to move fast: If AI is currently in a nascent state in policy decisions that shape real people’s lives, it’s growing fast and won’t stay there for long.

“This is happening everywhere,” Khoo says.

Source: What happens when artificial intelligence comes to Ottawa

Ottawa’s use of AI in immigration system has profound implications for human rights

Good discussion of the main issues and the need for care and accountability frameworks in the development of AI and its algorithms.

The authors also note that “Human decision-making is also riddled with bias and error” (unfortunately, we don’t have any analysis comparable to Sean Rehaag’s with respect to IRB and Federal Court immigration-related decisions – Getting refugee decisions appealed in court ‘the luck of the draw,’ study shows):

How would you feel if an algorithm made a decision about your application for a Canadian work permit, or determined how much money you can bring in as an investor? What if it decided whether your marriage is “genuine?” Or if it trawled through your Tweets or Facebook posts to determine if you are “suspicious” and therefore a “risk,” without ever revealing any of the categories it used to make this decision?

While seemingly futuristic, these types of questions will soon be put to everyone who interacts with Canada’s immigration system.

A report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights.

We know that Canada has been experimenting with automated decision-making as part of the immigration determination process since at least 2014. These automated techniques support the evaluation of immigrant and visitor applications such as Express Entry for Permanent Residence. Recent announcements signal an expansion of these technologies to a wider variety of applications and immigration decisions in the coming years.

Exploring new technologies and innovations is exciting and necessary, particularly when used in an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes. However, without proper oversight, mechanisms and accountability measures, the use of AI threatens to create a laboratory for high-risk experiments.

The system is already opaque. The ramifications of using AI in immigration and refugee decisions are far-reaching. Vulnerable and under-resourced communities such as those without citizenship often have access to less-robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may serve only to exacerbate these disparities and can result in severe rights violations, such as discrimination and threats to life and liberty.

Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion or travel patterns, and thus entrench bias in the technology. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies. This could lead to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, and due process and procedural fairness issues, such as the right to a fair and impartial decision-maker and the right to appeal a decision. These rights are internationally protected by instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Economic, Social and Cultural Rights, among others. They are also protected by the Canadian Charter of Rights and Freedoms and by provincial human rights legislation.

At this point, there are more questions than answers.

If an algorithm makes a decision about your fate, can it be considered fair and impartial if it relies on biased data that is not made public? What happens to your data during the course of these decisions, and can it be shared with other departments, or even with the government of your country of origin, potentially putting you at risk? The use of AI has already been criticized in the predictive policing context, where algorithms have linked race with the likelihood of re-offending, shown women lower-paying job opportunities, or purported to discern sexual orientation from photos.

Given the already limited safeguards and procedural justice protections in immigration and refugee decisions, the use of discriminatory and biased algorithms has profound ramifications for a person’s safety, life, liberty, security and mobility. Before exploring how these technologies will be used, we need to create a framework for transparency and accountability that addresses bias and error in automated decision-making.

Our report recommends that Ottawa establish an independent, arm’s-length body with the power to oversee and review all automated decision-making systems used by the federal government, and that all current and future uses of AI by the government be made public. We also advocate for a task force that brings together key government stakeholders, alongside academia and civil society, to better understand the current and prospective impacts of automated decision systems on human rights and the public interest more broadly.

Without these frameworks and mechanisms, we risk creating a system that – while innovative and efficient – could ultimately result in human rights violations. Canada is exploring the use of this technology in high-risk contexts within an accountability vacuum. Human decision-making is also riddled with bias and error, and AI may in fact have positive impacts in terms of fairness and efficiency. We need a new framework of accountability that builds on the safeguards and review processes we have in place for the frailties in human decision-making. AI is not inherently objective or immune to bias and must be implemented only after a broad and critical look at the very real impacts these technologies will have on human lives.

Source: Ottawa’s use of AI in immigration system has profound implications for human rights