Ottawa’s use of AI in immigration system has profound implications for human rights

Good discussion of the main issues and the need for care and accountability frameworks in the development of AI and its algorithms.

The authors also note that “Human decision-making is also riddled with bias and error” (unfortunately, we have no comparable analysis to Sean Rehaag’s work on IRB and Federal Court immigration-related decisions – Getting refugee decisions appealed in court ‘the luck of the draw,’ study shows):

How would you feel if an algorithm made a decision about your application for a Canadian work permit, or determined how much money you can bring in as an investor? What if it decided whether your marriage is “genuine?” Or if it trawled through your Tweets or Facebook posts to determine if you are “suspicious” and therefore a “risk,” without ever revealing any of the categories it used to make this decision?

While seemingly futuristic, these types of questions will soon be put to everyone who interacts with Canada’s immigration system.

A report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights.

We know that Canada has been experimenting with automated decision-making in the immigration determination process since at least 2014. These automated techniques support the evaluation of immigrant and visitor applications such as Express Entry for Permanent Residence. Recent announcements signal an expansion of these technologies to a variety of applications and immigration decisions in the coming years.

Exploring new technologies and innovations is exciting and necessary, particularly when used in an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes. However, without proper oversight mechanisms and accountability measures, the use of AI threatens to create a laboratory for high-risk experiments.

The system is already opaque. The ramifications of using AI in immigration and refugee decisions are far-reaching. Vulnerable and under-resourced communities such as those without citizenship often have access to less-robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may serve only to exacerbate these disparities and can result in severe rights violations, such as discrimination and threats to life and liberty.

Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion, or travel patterns, and thus entrench bias in the technology. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies. This could lead to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, and due process and procedural fairness issues, such as the right to a fair and impartial decision maker and the right to appeal a decision. These rights are protected internationally by instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Economic, Social and Cultural Rights, among others. They are also protected by the Canadian Charter of Rights and Freedoms and by provincial human rights legislation.

At this point, there are more questions than answers.

If an algorithm makes a decision about your fate, can it be considered fair and impartial if it relies on biased data that is not made public? What happens to your data during the course of these decisions, and can it be shared with other departments, or even with the government of your country, potentially putting you at risk? The use of AI has already been criticized in the predictive policing context, where algorithms have linked race with the likelihood of re-offending, linked women with lower-paying jobs, or purported to discern sexual orientation from photos.

Given the already limited safeguards and procedural justice protections in immigration and refugee decisions, the use of discriminatory and biased algorithms has profound ramifications for a person’s safety, life, liberty, security, and mobility. Before exploring how these technologies will be used, we need to create a framework for transparency and accountability that addresses bias and error in automated decision making.

Our report recommends that Ottawa establish an independent, arm’s-length body with the power to engage in all aspects of oversight, to review all automated decision-making systems used by the federal government, and to publish all current and future uses of AI by the government. We advocate for the creation of a task force that brings together key government stakeholders, alongside academia and civil society, to better understand the current and prospective impacts of automated decision systems on human rights and the public interest more broadly.

Without these frameworks and mechanisms, we risk creating a system that – while innovative and efficient – could ultimately result in human rights violations. Canada is exploring the use of this technology in high-risk contexts within an accountability vacuum. Human decision-making is also riddled with bias and error, and AI may in fact have positive impacts in terms of fairness and efficiency. We need a new framework of accountability that builds on the safeguards and review processes we have in place for the frailties in human decision-making. AI is not inherently objective or immune to bias and must be implemented only after a broad and critical look at the very real impacts these technologies will have on human lives.

Source: Ottawa’s use of AI in immigration system has profound implications for human rights

Responsibly deploying AI in the immigration process

Some good practical suggestions. While AI has the potential for greater consistency in decision-making, great care needs to be taken in development, testing and implementation to avoid bias and to identify cases where decisions need to be reviewed:

In April, the federal government sent a request for information to industry to determine where artificial intelligence (AI) could be used in the immigration system for legal research, prediction and trend analysis. The type of AI to be employed here is machine learning: developing algorithms through analysis of wide swaths of data to make predictions within a particular context. The current backlog of immigration applications leaves much room for solutions that could improve the efficiency of case processing, but Canadians should be concerned about the vulnerability of the groups targeted in this pilot project and how the use of these technologies might lead to human rights violations.
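
To make the “machine learning” shorthand concrete, here is a minimal, purely illustrative sketch of the general technique the article describes: a model fitted to historical application outcomes and used to score a new file. The data, field names and choice of library (scikit-learn) are my own assumptions for illustration, not details of any government system.

```python
# Minimal sketch (not any real system): fit a model to past application
# outcomes and use it to score a new file. All fields and data are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: one row per past application,
# with the officer's final decision as the label the model learns from.
history = pd.DataFrame({
    "days_to_submit_documents": [12, 45, 7, 60, 20, 3],
    "previous_visa_refusals":   [0, 2, 0, 1, 0, 3],
    "complete_file":            [1, 0, 1, 0, 1, 0],
    "approved":                 [1, 0, 1, 0, 1, 0],
})

features = ["days_to_submit_documents", "previous_visa_refusals", "complete_file"]
model = LogisticRegression().fit(history[features], history["approved"])

# Score a new, unseen application: the output is a probability, not a judgment.
new_application = pd.DataFrame(
    [{"days_to_submit_documents": 30, "previous_visa_refusals": 1, "complete_file": 1}]
)
print(model.predict_proba(new_application[features])[0, 1])
```

The point of the sketch is that such a model simply reproduces the patterns in the historical decisions it was trained on, including whatever bias those decisions contain.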

An algorithmic mistake that holds up a bank loan is frustrating enough, but in immigration screening a miscalculation could have devastating consequences. The potential for error is especially concerning because of the nature of the two application categories the government has selected for the pilot project: requests for consideration on humanitarian and compassionate grounds, and applications for Pre-Removal Risk Assessment. In the former category of cases, officials consider an applicant’s connections with Canada and the best interests of any children involved. In the latter category, a decision must be made about the danger that would confront the applicant if they were returned to their home country. In some of these cases, assessing whether someone holds political opinions for which they would be persecuted could be a crucial component. Given how challenging it is for current algorithmic methods to extract meaning and intent from human statements, it is unlikely that AI could be trusted to make such a judgment reliably. An error here could lead to someone being sent back to imprisonment or torture.

Moreover, if an inadequately designed algorithm results in decisions that infringe upon rights or amplify discrimination, people in these categories could have less capacity than other applicants to respond with a legal challenge. They may face financial constraints if they’re fleeing a dangerous regime, as well as cultural and language barriers.

Because of the complexity of these decisions and the stakes involved, the government must think carefully about which parts of the screening process can be automated. Decision-makers need to take extreme care to ensure that machine learning techniques are employed ethically and with respect for human rights. We have several recommendations for how this can be done.

First, we suggest that the federal government take some best practices from the European Union’s General Data Protection Regulation (GDPR). The GDPR has expanded individual rights with regard to the collection and processing of personal data. Article 22 guarantees the right to challenge the automated decisions of algorithms, including the right to have a human review the decision. The Canadian government should consider a similar expansion of rights for individuals whose immigration applications are decided by, or informed by, the use of automated methods. In addition, it must ensure that the vulnerable groups being targeted are able to exercise those rights.

Second, the government must think carefully about what kinds of transparency are needed, for whom, and how greater transparency might create new risks. The immigration process is already complex and opaque, and with added automation, it may become more difficult to verify that these important decisions are being made in fair and thorough ways. The government’s request for information asks for input from industry on ensuring sufficient transparency so that AI decisions can be audited. In the context of immigration screening, we argue that a spectrum of transparency is needed because there are multiple parties with different interests and rights to information.

If the government were to reveal to everyone exactly how these algorithms work, there could be adverse consequences. A fully transparent AI decision process would open doors for people who want to exploit the system, including human traffickers. They could game the algorithm, for example, by observing the keywords and phrases that the AI system flags as markers of acceptability and inserting those words into immigration applications. Job seekers already do something similar, by using keywords strategically to get a resumé in front of human eyes. One possible mechanism for oversight in the case of immigration would be a neutral regulatory body that would be given the full details of how the algorithm operates but would reveal only case-specific details to the applicants and partial details to other relevant stakeholders.
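
As a hypothetical illustration of this gaming risk, consider a toy keyword-based scorer (invented for this example; no real screening system is implied): once the favoured phrases are public, an applicant can inflate the score without changing the substance of the file.

```python
# Toy illustration of why full transparency of a keyword-based scorer invites
# gaming. The keyword list is assumed purely for demonstration.
FAVOURABLE_KEYWORDS = {"skilled trade", "job offer", "family ties"}

def keyword_score(application_text: str) -> int:
    """Count how many 'markers of acceptability' appear in the text."""
    text = application_text.lower()
    return sum(1 for phrase in FAVOURABLE_KEYWORDS if phrase in text)

honest = "Self-employed applicant, applying without sponsorship."
gamed = honest + " Skilled trade background, job offer pending, strong family ties."

print(keyword_score(honest))  # 0
print(keyword_score(gamed))   # 3 -- same applicant, inflated score
```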

Finally, the government needs to get broader input when designing this proposed use of AI. Requesting solutions from industry alone will deliver only part of the story. The government should also draw on expertise from the country’s three leading AI research institutes in Edmonton, Montreal and Toronto, as well as two new ones focused specifically on AI ethics: the University of Toronto’s Ethics of AI Lab and the Montreal AI Ethics Institute. Another group whose input should be included is the immigration applicants themselves. Developers and policy-makers have a responsibility to understand the context for which they are developing solutions. By bringing these perspectives into their design process, they can help bridge empathy gaps. An example of how users’ first-hand knowledge of a process can yield helpful tools is the recently launched chatbot Destin, which was designed by immigrants to help guide applicants through the Canadian immigration process.

The application of AI to immigration screening is promising: applications could be processed faster, with less human bias and at lower cost. But care must be taken with implementation. Canada has been taking a considered and strategic approach to the use of AI, as evidenced by the Pan-Canadian Artificial Intelligence Strategy, a major investment by the federal government that includes a focus on developing global thought leadership on the ethical and societal implications of advances in AI. We encourage the government to continue to pursue this thoughtful approach and an emphasis on human rights to guide the use of AI in immigration.

Source: Responsibly deploying AI in the immigration process

USA: Data Points to Wide Gap in Asylum Approval Rates at Nation’s Immigration Courts

Similar to what Sean Rehaag’s research has shown in Canada with the IRB:

Years of data from immigration courts around the United States, compiled by the Transactional Records Access Clearinghouse (TRAC) at Syracuse University, show that whether or not a person seeking asylum is granted that request depends more on where they live and appear before an immigration court judge than it does on the facts of the case.

The NBC Bay Area Investigative Unit closely tracked the asylum results at every U.S. immigration court over the past three years and found wide variation in asylum approval rates depending upon the court; in some instances the rate varies by as much as 75 percentage points.

From 2016 through the first part of 2018, immigration courts in Los Angeles and San Francisco consistently ranked in the nation’s top 15 courts for the share of asylum requests granted. Phoenix, Philadelphia, San Antonio, New York and Boston were also in the top 15 each year. Court data show each of those courts grants asylum requests more than 50 percent of the time.

On the opposite end of the spectrum, U.S. immigration courts that are far less likely to approve asylum petitions include Atlanta; Lumpkin, Georgia; Charlotte; Dallas; and Houston. In some of those courts, asylum is granted around 20 percent of the time. In other jurisdictions, such as the court in Lumpkin, judges grant asylum only 10 percent of the time.
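
For readers curious about the underlying arithmetic, the disparity TRAC reports boils down to comparing per-court grant rates computed from case-level outcomes. A minimal sketch, using invented numbers rather than TRAC’s actual data:

```python
# Hypothetical sketch of the per-court analysis: compute asylum grant rates
# by court from individual case outcomes and measure the spread.
# Court names are real; the case data below are invented.
import pandas as pd

cases = pd.DataFrame({
    "court":   ["San Francisco"] * 10 + ["Atlanta"] * 10,
    "granted": [1, 1, 1, 0, 1, 1, 0, 1, 1, 0,   # 7 of 10 granted
                0, 0, 1, 0, 0, 0, 0, 1, 0, 0],  # 2 of 10 granted
})

grant_rates = cases.groupby("court")["granted"].mean().sort_values()
spread_in_points = (grant_rates.max() - grant_rates.min()) * 100

print(grant_rates)        # grant rate per court
print(spread_in_points)   # gap between courts, in percentage points
```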

This disparity has led many observers—from academic researchers, to judges, to the very lawyers appearing before the immigration court judges—to worry that political beliefs could be getting in the way of justice in America’s immigration courts.

“There’s something going on that is very, very troubling,” said Karen Musalo, director of the Center for Gender and Refugee Studies at the UC Hastings College of the Law in San Francisco. Musalo has spent years studying these inconsistencies in asylum outcomes.

“I think there are a number of factors that contribute to these disparities. They have to do with both the selection process for the individual judges and what their backgrounds are and whether or not they’re qualified (to serve as judges),” Musalo said.

“It has to do with a politicization of the selection process. It has to do with a lack of independence of these immigration judges,” she added.

U.S. Immigration Court judges are not part of the independent judiciary but, rather, are appointed and work for the Executive Office for Immigration Review (EOIR), which is an arm of the U.S. Department of Justice.

Congress’s own Government Accountability Office twice issued reports—in 2008 and 2016—that pointed out “significant variation” in asylum cases. In its reports, the GAO called on Congress to fix the problem.

But so far, nothing has happened.

…Paul Wickham Schmidt, a retired U.S. immigration judge, says the nation’s immigration court disparity is so wide that it can be explained only by personal bias creeping into judges’ decisions.

“If I were an immigrant, I’d rather be in California than in Atlanta, Georgia, any day,” he said. “Clearly, the attitudes of the judges and how they feel about asylum law has quite a bit to do with it,” Schmidt added.

The EOIR, the agency in charge of the immigration courts, declined NBC Bay Area’s request for an interview on the subject. Kathryn Mattingly, an EOIR spokesperson, emailed a statement:

“Regarding your reference to TRAC data, please note that we do not comment on third-party analysis of EOIR data because the method another party may use to analyze the raw data may be different from the analytical techniques EOIR uses. When looking at this issue, it is important to note that each asylum case is unique, with its own set of facts, evidentiary factors, and circumstances. Asylum cases typically include complex legal and factual issues and EOIR immigration judges and Board of Immigration Appeals members review each one on a case-by-case basis, taking into consideration every factor allowable by law. It is also important to note that immigration courts in detained facilities typically have lower asylum grant rates because detained aliens with criminal convictions are not eligible for many forms of relief from removal. That all said, EOIR takes seriously any claims of unjustified and significant anomalies in immigration judge decision-making and takes steps to evaluate disparities in immigration adjudications. In addition, EOIR monitors immigration judge performance through an official performance work plan and evaluation process, as well as daily supervision of the courts by assistant chief immigration judges.”

Source: Data Points to Wide Gap in Asylum Approval Rates at Nation’s Immigration Courts

Canadian immigration applications could soon be assessed by computers

Makes sense that IRCC is exploring this as a means to improve efficiency and processing times, along with greater consistency of decision-making. Richard Kurland, Vic Satzewich and I comment on some of the advantages, as well as cautions about the proposed approach:

Ottawa is quietly working on a plan to use computers to assess immigration applications and make some of the decisions currently made by immigration officers, the Star has learned.

Since 2014, the Immigration Department has been developing what’s known as a “predictive analytics” system, which would evaluate applications in a way that’s similar to the work performed by officials today.

The plan — part of the government’s modernization of a system plagued by backlogs and delays — is to use the technology to identify the merits of an immigration application, spot potential red flags for fraud and weigh all these factors to recommend whether an applicant should be accepted or refused.

At the moment, the focus of the project is on building processes that would distinguish between high-risk and low-risk applications, immigration officials said.

“Predictive analytics models are built by analyzing thousands of past applications and their outcomes. This allows the computer to ‘learn’ by detecting patterns in the data, in a manner analogous to how officers learn through the experience of processing applications,” department spokesperson Lindsay Wemp said in an email.

“The goal is to improve client service and increase operational efficiency by reducing processing times while strengthening program integrity.”

The project was approved by the former Conservative government cabinet in February 2013. Wemp said there is no firm timeline on when automated decisions might be a viable option.

“To ensure the accuracy of decisions, models undergo extensive testing prior to being used. Once in service, quality assurance will be performed continually to make sure that model predictions are accurate,” she explained.

 “The novelty of the technology and the importance of getting it right make it imperative that we do not rush this project.”
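
The continual quality assurance the department describes is not detailed in the article; one plausible (and purely assumed) form would be to compare the model’s recommendations against officers’ final decisions on an ongoing basis and pause automation when agreement drops. A minimal sketch:

```python
# Assumed sketch of ongoing quality assurance (not the department's actual
# process): periodically compare model predictions with officers' final
# decisions and flag any drop in agreement for human review.
from typing import Sequence

def agreement_rate(model_predictions: Sequence[int], officer_decisions: Sequence[int]) -> float:
    """Fraction of cases where the model and the officer reached the same outcome."""
    matches = sum(p == d for p, d in zip(model_predictions, officer_decisions))
    return matches / len(officer_decisions)

def quality_check(model_predictions, officer_decisions, threshold=0.9):
    """Flag the model for review if agreement falls below a chosen threshold."""
    rate = agreement_rate(model_predictions, officer_decisions)
    if rate < threshold:
        print(f"Agreement {rate:.0%} below {threshold:.0%}: pause automation and review the model.")
    else:
        print(f"Agreement {rate:.0%}: within tolerance.")

# Invented monthly batch of outcomes (1 = approve, 0 = refuse).
quality_check([1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 0, 0, 1, 1, 1])
```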

With the proliferation of artificial intelligence in people’s day-to-day lives, from IBM’s Watson (the supercomputer that defeated Jeopardy! champions) to Google’s self-driving cars, immigration experts said they were not surprised by the move toward automation.

“This is the greatest change in immigration processing since the Internet. What requires weeks if not months to process would only take days with the new system. There are going to be cascades of savings in time and money,” said immigration lawyer and policy analyst Richard Kurland.

“A lot of countries have used predictive analytics as a tool but not for immigration processing. Canada Revenue Agency also uses the techniques to identify red flags. It uses artificial intelligence. It is decision-making by machines. The dividends of this exercise are huge.”

The Immigration Department’s Wemp, however, said the department’s plans shouldn’t be classified as artificial intelligence because a predictive model cannot exercise judgment in the same way as a human and officers will always remain central to the process.

“We would be able to catch abnormalities in real time, which would then help us to identify fraud and threats more quickly,” noted Wemp. “Some immigration programs are better suited for predictive analytics than others. The department envisions a phased approach, one program at a time.”

Calling the government’s move evolution rather than revolution, Andrew Griffith, a retired director general of the Immigration Department, said applying the technology to immigration processing is a big deal for the public mostly because of border security concerns.

For Griffith, however, the bigger worry is what algorithms officials use to codify the computer system.

“The more you can bring the government to the 21st century, the better. But we should be using the tools intelligently and efficiently. The challenge is not to embed biases into the system and create extra barriers for applicants,” said Griffith, adding that an oversight body is warranted to constantly monitor the automated decisions.

McMaster University professor Vic Satzewich, author of a research project on decision-making by immigration officers, agreed.

“My concern is how they are going to operationalize the different factors officers use in weighing their decisions and program a decision-making algorithm. There are no real formulas they use,” said Satzewich.

“Two officers look at an application in different ways. A machine is going to miss things that an officer picks up. I still feel better to have an officer look at every immigration file. Public confidence is important. If Canadians lose confidence in the system, their support for immigration goes down.”

Immigration officials said no job losses are anticipated, as officers who are no longer working on low-risk applications would be reassigned to “higher value-added activities” such as reducing backlogs elsewhere.

Wemp would not reveal how much was budgeted for the project, but said the department has received “modest amounts of funding” to develop proofs of concept by a small team of analysts, and no major information technology investments have been made.

Source: Canadian immigration applications could soon be assessed by computers | Toronto Star

Decision-Making: Refugee claim acceptance in Canada appears to be ‘luck of the draw’ despite reforms, analysis shows

Interesting from a decision-making perspective.

Reading this reminded me of some of Daniel Kahneman’s similar work showing considerable variability in decision-making, even depending on the time of day. It is a reminder of the difficulty of ensuring consistent decision-making: people being people, automatic thinking, which reflects our experiences and perceptions, is often as influential as more deliberative thinking. There are no easy solutions, but regular analysis of decisions and feedback may help:

There are legitimate reasons why decisions by some adjudicators lean in one direction, such as adjudicators specializing in claimants from a certain region. (Someone hearing cases from Syria will have a higher acceptance rate than someone hearing claims from France.) Some members hear more expedited cases, which are typically urgent claims with specific aggravating or mitigating facts.

“My view is that even when you try to control for those sorts of differences, a very large difference in acceptance rates still exists,” said Mr. Rehaag. “You get into the more idiosyncratic elements of individual identity.”

These may reflect the politics of the adjudicator or impressions about a country. If adjudicators have been on a relaxing holiday in a country, they may be less likely to accept that a claimant faces horrors there.

Source: Refugee claim acceptance in Canada appears to be ‘luck of the draw’ despite reforms, analysis shows | National Post