How Canada is using AI to catch immigration fraud — and why some say it’s a problem
2023/10/13
While I understand the worries, I also find that they are overwrought, given that the only way to manage large numbers is through AI and related IT tools.
And as Kahneman’s exhaustive survey of automated vs human systems in Noise indicates, automated systems deliver greater consistency than solely human systems.
So by all means, IRCC has to make every effort to ensure no untoward bias or discrimination is embedded in these systems, and to ensure that the inherent discrimination in any immigration or citizenship process (who gets in and who doesn’t) is evidence-based and aligned with policy objectives:
Canada is using a new artificial intelligence tool to screen would-be international students and visitors — raising questions about what role AI should be playing in determining who gets into the country.
Immigration officials say the tool improves their ability to figure out who may be trying to game Canada’s system, and insist that, at the end of the day, it’s human beings making the final decisions.
Experts, however, say we already know that AI can reinforce very human biases. One expert, in fact, said he expects some legitimate applicants to get rejected as a result.
Rolled out officially in January, the little-known Integrity Trends Analysis Tool (ITAT) — formerly called Lighthouse or Watertower — has mined the data set of 1.4 million study-permit applications and 2.9 million visitor applications.
What it’s searching for are clues of “risk and fraud patterns” — a combination of elements that, together, may be cause for additional scrutiny on a given file.
Officials say that, among study-permit applications alone, they have already identified more than 800 “unique risk patterns.”
Through ongoing updates based on fresh data, the AI-driven apparatus not only analyses these risk patterns but also flags incoming applications that match them.
It produces reports to assist officers in Immigration Risk Assessment Units, who determine whether an application warrants further scrutiny.
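IRCC has not published ITAT’s design, so the mechanics are opaque; but the description above (applications flagged when they match combinations of characteristics seen in previously refused files) amounts to rule-based pattern matching. A minimal, purely illustrative sketch, in which every field name and pattern is hypothetical:

```python
# Illustrative sketch only: ITAT's real features and patterns are not public,
# so the attributes and patterns below are invented for demonstration.

def find_matches(applications, risk_patterns):
    """Flag applications whose attributes contain every element of any
    known risk pattern (a set of co-occurring characteristics)."""
    flagged = []
    for app in applications:
        attrs = set(app["attributes"])
        hits = [p for p in risk_patterns if p <= attrs]  # pattern is a subset
        if hits:
            flagged.append({"id": app["id"], "matched_patterns": hits})
    return flagged

# Hypothetical patterns mined from previously refused applications.
risk_patterns = [
    frozenset({"unverified_school", "third_party_payer"}),
    frozenset({"prior_overstay"}),
]

applications = [
    {"id": "A1", "attributes": ["unverified_school", "third_party_payer", "first_visit"]},
    {"id": "A2", "attributes": ["verified_school"]},
]

report = find_matches(applications, risk_patterns)
```

On this model, the output is a report for a human risk-assessment officer rather than a decision, which matches the department’s insistence that officers retain the final call.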
“Maintaining public confidence in how our immigration system is managed is of paramount importance,” Immigration Department spokesperson Jeffrey MacDonald told the Star in an email.
“The use of ITAT has effectively allowed us to improve the way we manage risk by using technology to examine risk with a globalized lens.”
Helping with a big caseload
Each year, Canada receives millions of immigration applications — for temporary and permanent residence, as well as for citizenship — and the number has continued to grow.
The Immigration Department says the total number of decisions it renders per year increased from 4.1 million in 2018 to 5.2 million last year, with the overwhelming majority of applicants trying to obtain temporary-resident status as students, foreign workers and visitors; last year temporary-resident applications accounted for 80 per cent of the decisions the department rendered.
During the pandemic, the department was overwhelmed by skyrocketing backlogs in every single program, which spurred Ottawa to go on a hiring spree and fast-track its modernization to tackle the rising inventory of applications.
Enter: a new tool
ITAT, which was developed in-house and first piloted in the summer of 2020, is the latest instrument in the department’s tool box, one that goes beyond performing simple administrative tasks, such as responding to online inquiries, to more sophisticated functions, like detecting fraud.
MacDonald said ITAT can readily find connections across application records in immigration databases, which may include reports and dossiers produced by the Canada Border Services Agency or other law enforcement bodies. The tool, he said, helps officials identify applications that share characteristics with previously refused applications.
He said that in order to protect the integrity of the immigration system and investigative techniques, he could not disclose details of the risk patterns that are used to assess applications.
However, MacDonald stressed that “every effort is taken to ensure risk patterns do not create actual or perceived bias as it relates to Charter-protected factors, such as gender, age, race or religion.
“These are reviewed carefully before weekly reports are distributed to risk assessment units.”
A government report about ITAT released last year did make reference to the “adverse characteristics” monitored for in an application, such as inadmissibility findings (e.g. criminality and misrepresentation) and other records of immigration violations, such as overstaying or working without authorization.
The report said that in the past, risk assessment units conducted random sampling of applications to detect fraud through various techniques, including phone calls, site visits or in-person interviews. The results of the verification activity are shared with processing officers whether or not fraudulent information is found.
The report suggested the new tool is meant to assist these investigations. MacDonald emphasized that ITAT does not recommend or make decisions on applications, and the final decisions on applications still rest with the processing officers.
Unintended influence?
However, that doesn’t mean the use of the tool won’t influence an officer’s decision-making, said Ebrahim Bagheri, director of the Natural Sciences and Engineering Research Council of Canada’s collaborative program on responsible AI development. He said he expects human staff to wrongfully flag and reject applicants out of deference to ITAT.
Bagheri, who specializes in information retrieval, social media analytics and software and knowledge engineering, said humans tend to heed such programs too much: “You’re inclined to agree with the AI at the unconscious level, thinking — again unconsciously — there may have been things that you may have missed and the machine, which is quite rigorous, has picked up.”
While the shift toward automation, AI-assisted and data-driven decision-making is part of a global trend, immigration lawyer Mario Bellissimo says the technology is not advanced enough yet for immigration processing.
“Most of the experts are pretty much saying, relying on automated statistical tools to make decisions or to predict risk is a bad idea,” said Bellissimo, who takes a personal interest in studying the use of AI in Canadian immigration.
“AI is required to achieve precision, scale and personalization. But the tools aren’t there yet to do that without discrimination.”
The shortcomings of AI
A history of multiple marriages might be a red flag to AI, suggesting a marriage of convenience, Bellissimo said. But what such an assessment could omit are the particular facts: that the person’s first spouse had passed away, for instance, or that the second ran away because it was a forced marriage.
“You need to know what the paradigm and what the data set is. Is it all based on the Middle East, Africa? Are there different rules?” asked Bellissimo.
“To build public confidence in data, you need external audits. You need a data scientist and a couple of immigration practitioners to basically validate (it). That’s not being done now and it’s a problem.”
Bagheri said AI can reinforce its own findings when they are acted on, generating new data of rejections and approvals that conform to its earlier conclusions.
“Let’s think about an AI system that’s telling you who’s the risk to come to Canada. It flags a certain set of applications. The officers will look at it. They will decide on the side of caution. They flag it. That goes back to the system,” he said.
“And you just think that you’re becoming more accurate where they’re just intensifying the biases.”
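The feedback loop Bagheri describes can be made concrete with a toy simulation. Nothing here reflects ITAT’s actual design; the groups, rates and counts are invented purely to illustrate the mechanism of a model re-trained on its own refusals.

```python
# Toy simulation of a self-reinforcing flagging loop. All numbers are
# invented; this only illustrates the mechanism, not any real system.

TRUE_FRAUD_RATE = 0.05                      # identical ground truth for both groups
refusals = {"group_x": 12, "group_y": 5}    # group_x carries a historical bias
totals = {"group_x": 100, "group_y": 100}

def estimated_risk(group):
    """The model's learned refusal rate, re-estimated from its own outcomes."""
    return refusals[group] / totals[group]

for _round in range(20):                    # 20 cycles of flag -> refuse -> retrain
    for group in refusals:
        flagged = int(100 * estimated_risk(group))  # model flags at its learned rate
        genuine = int(100 * TRUE_FRAUD_RATE)        # files that truly warrant refusal
        # Deferential officers "decide on the side of caution" and refuse
        # every flagged file, so the flags themselves become new refusals.
        refusals[group] += max(genuine, flagged)
        totals[group] += 100

# group_y's estimate tracks the true 5% rate, while group_x's inflated
# historical rate persists indefinitely: the loop confirms its own bias.
```

The point of the sketch is that the system never “corrects” group_x back toward the true fraud rate, because every deferential refusal re-enters the training data as apparent evidence of risk.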
Bellissimo said immigration officials have been doing a poor job in communicating to the public about the tool: “There is such a worry about threat actors that they’re putting so much behind the curtain (and) the public generally has no confidence in this use.”
Bagheri said immigration officials should limit their use of AI tools to optimizing resources and administering the department’s processes, such as using bots to answer emails, screen eligibility and triage applications, freeing up officers for the actual risk assessment and decision-making.
“I think the decisions on who we welcome should be based on compassion and a welcoming approach, rather than a profiling approach,” he said.
Source: How Canada is using AI to catch immigration fraud — and why some say it’s a problem
