Akbar: Canadian immigrants are overqualified and underemployed — reforms must address this

Well, labour economists would disagree regarding competitiveness given the current mix of temporary workers and students but interesting that CERC academics recognize the value of AI without automatically expressing concerns of algorithmic biases. Kahneman argues convincingly that such systems ensure greater consistency, albeit with the risk of coding of biases:

…Canada’s long-term competitiveness is hindered not by immigration, but by systemic labour market discrimination and inefficiencies that prevent skilled newcomers from fully contributing to the economy. 

Eliminating biases related to Canadian work experience and soft skills is key to ensuring newcomers can find fair work. The lack of recognition of foreign talent has a detrimental effect on the Canadian economy by under-utilizing valuable human capital.

To build a more inclusive labour market, a credential recognition system should support employers in assessing transferable skills and experience to mitigate perceived hiring risks related to immigrants. 

For international students, enhanced career services at educational institutions are critical. Strengthening partnerships between universities, colleges and employers can expand internships, co-op placements and mentorship programs, providing students with relevant Canadian work experience before graduation. 

Such collaboration is also key to implementing employer education initiatives that address misconceptions about hiring international graduates and highlight their contributions to the workforce. 

Artificial Intelligence (AI) can also play a role in reducing hiring biases and improving job matching for new immigrants and international graduates. Our recent report, which gathered insight from civil society, the private sector and academia, highlights the following AI-driven solutions:

  • Tools like Toronto Metropolitan University’s AI resume builder, Mogul AI, and Knockri can help match skills to roles, neutralize hiring bias and promote equity.
  • Wage subsidies and AI tools can encourage equitable hiring, while AI-powered programs can help human resources recognize and reduce biases.
  • Tools like the Toronto Region Immigrant Employment Council Mentoring Partnership, can connect newcomers with mentors, track their skills and match them to employer needs.

Harnessing AI-driven solutions, alongside policy reforms and stronger employer engagement, can help break down hiring barriers so Canada can fully benefit from the skills and expertise of its immigrant workforce.

Source: Canadian immigrants are overqualified and underemployed — reforms must address this

Eng: Will artificial intelligence really fix Ottawa’s troubled Phoenix pay?

Nails it. Without simplification, extremely hard to achieve, AI and automation unlikely to be successful:

…Why did Phoenix fail? There are many reasons, but to name a few: an overwhelming number of rules and processes, including 72 job classifications and 80,000 pay rules, requiring more than 300 customizations built into the payroll system; a lack of proper testing with users before a major rollout; and dated procurement processes that favour large vendors and waterfall methodologies….

Source: Will artificial intelligence really fix Ottawa’s troubled Phoenix pay?

Ottawa using AI to tackle Phoenix backlog as it tests replacement pay system

Needed. But again, a major part of the challenge is the multiple HR classifications, complex rules among other aspects, with major simplification and streamlining unlikely to be pursued as messy, time consuming and of little interest to the political level:

…Benay says AI is automating repetitive tasks, speeding up decision making and providing insights into human resources and pay data.

He says the government is testing the use of its AI assistant tool for three types of transactions – acting appointments, leave without pay and executive acting appointments – and is planning to launch automated “bulk processing” in these areas in April.

The government plans to expand AI-use to more transaction types over the course of next year, according to Benay, and could eventually use it to help with all types of cases, like departmental transfers and retirements.

There will always be an aspect of human verification, Benay says, as the tool was developed to keep humans in the loop.

“One thing we will not do is just turn it over to the AI machine,” says Benay.

The Government of Canada website says the backlog of transactions stood at 383,000 as of Dec. 31, 2024, with 52 per cent of those over a year old.

The government has said that it doesn’t want any backlog older than a year being transferred into a new system.

“A human only learns so fast, and the intake is continuing to come in,” Benay says. “The reason the AI work that we’re doing is so crucial is we have to increase (the) pace.”

Benay says the government has launched two boards that will oversee the use of AI and is looking at a third-party review of the AI virtual assistant tool over the course of the winter, with results to be published once it’s completed….

Source: Ottawa using AI to tackle Phoenix backlog as it tests replacement pay system

Klein: ‘Now Is the Time of Monsters’

Good summary of four macro issues that will affect our lives for years to come. Makes for depressing reading but cannot be ignored.

Donald Trump is returning, artificial intelligence is maturing, the planet is warming, and the global fertility rate is collapsing.

To look at any of these stories in isolation is to miss what they collectively represent: the unsteady, unpredictable emergence of a different world. Much that we took for granted over the last 50 years — from the climate to birthrates to political institutions — is breaking down; movements and technologies that seek to upend the next 50 years are breaking through….

Source: ‘Now Is the Time of Monsters’

Study provides evidence of AI’s alarming dialect prejudice

Interesting study, just adding to the challenges of using AI to evaluate speech:

An Englishman’s way of speaking absolutely classifies him, The moment he talks he makes some other Englishman despise him. – Dr Henry Higgins in My Fair Lady

While large language models (LLMs) like ChatGPT-4 have been trained to avoid answers that overtly racially stereotype, a new study shows that they “covertly” stereotype African Americans who speak in the dialect prevalent in New York, Detroit, Washington DC and other cities such as Los Angeles.

In “AI generates covertly racial decisions about people based on their dialect” published in Nature at the end of August, a team of three researchers working with Dr Valentin Hofmann at the Allen Institute for AI in Seattle shows how AI’s (learned) prejudice against African-American English (AAE) can have harmful and dangerous consequences.

In a series of experiments, Hofmann’s team found that LLMs are “more likely to suggest that a speaker of AAE be assigned to less-prestigious jobs, be convicted of crimes and be sentenced to death”.

The study, the authors write, “provides the first empirical evidence for the existence of dialect prejudice in language models: that is, covert racism that is activated by features of a dialect (AAE).”

The study states: “Using our new method of matching guise probing, we show that language models exhibit archaic stereotypes about speakers of AAE that most closely agree with the most negative human stereotypes about African Americans ever experimentally recorded, dating from before the civil rights movement.”

Developed in the 1960s at McGill University in Montreal, Canada, “guise probing” allowed the isolation of attitudes held by bilingual French Canadians towards both Francophones and Anglophones by having subjects pay attention to language, dialect and accent of Francophones and Anglophones on recordings and asking the subject to make judgements about these individuals’ looks, sense of humour, intelligence, religiousness, kindness, and ambition, among other qualities.

A new racism emerges

Hofmann and his co-authors begin their discussion by placing the AI’s covert racism in a historical context that is quite separate from other problems with machine learning such as hallucinations, that is, when an AI system makes things up.

Instead, they map the appearance of covert racism onto the history of American racism since the end of Reconstruction in 1877.

Between the end of the American Civil War in 1865 and 1877, to a greater or lesser degree, the national government enforced the Amendments to the US Constitution that ended slavery and granted civil rights to the freedman.

This effort was abandoned in 1877 and, soon, white supremacist state governments in the South began instituting Jim Crow laws that stripped the freedmen of their civil rights and created a legal regimen of peonage that was slavery in all but name.

In the 1950s, the civil rights movement and Supreme Court decisions such as the 1954 Brown vs Board of Education (which ruled that “separate but equal” was unconstitutional) set the stage for the Civil Rights Act of 1964 and other federal laws that dismantled the legal structures of Jim Crow.

However, Hofmann et al write, “social scientists have argued that, unlike the racism associated with the Jim Crow era, which included overt behaviours such as name calling or more brutal acts of violence such as lynching, a ‘new racism’ happens in the present-day United States in more subtle ways that rely on a ‘colour-blind’ racist ideology”.

This ideology (which the Supreme Court of the United States endorsed when it ruled that affirmative action admissions programmeswere unconstitutional) allows individuals to “avoid mentioning race by claiming not to see colour or to ignore race but still hold negative beliefs about racialised people”.

“Importantly,” the authors argue, “such a framework emphasises the avoidance of racial terminology but maintains racial inequities”.

Two lines of defence

According to Dr Craig Kaplan, who has taught computer science at the University of California and is the founder and CEO of the consulting firm iQ Company, which focuses on artificial general intelligence (AGI), when AI reproduces the racist assumptions contained in the texts the systems were trained on, developers typically first try to further filter and curate the data on which the systems were trained.

“Some of these systems are trained on three Library of Congresses’ worth of information that could include information from books like Tom Sawyer and Huckleberry Finn that contain racist stereotypes and dialogue.

The first line of defence, then, is to try to curate the data. But, it’s impossible for humans to sort reliably and filter every instance of racial stereotype. There’s so much data that it’s a losing battle,” he said.

The second line of defence is a technique known as Reinforcement learning with human feedback (RLHF) which uses humans to question the LLMs and correct them with feedback when the LLMs’ responses are dangerous or inappropriate.

Unfortunately, Kaplan explained, it is impossible to question LLMs on every topic, so bad actors can always find ways to get into an LLM to provide dangerous or inappropriate information. As fast as bad responses can be addressed, new ways of “jailbreaking” the LLMs emerge.

Kaplan characterises RLHF as “Whack a Mole”, a child’s game in which the aim is to keep hitting the mole that pops up.

“In this game … you tell the model that when it says African Americans are less intelligent and so forth, the system gets whacked. This is called reinforcement learning with human feedback (HF). But it’s impossible to anticipate every potential racist response that the LLM might generate,” said Kaplan.

Part of the reason RLHF won’t work is because of the way AI systems work.

“How LLMs represent anything, including African Americans, is a ‘black box’, meaning it is not transparent to us,” Kaplan told University World News.

“We don’t know how the information is represented or understood by the LLM. LLMs have maybe 500 billion parameters or a trillion parameters – far too many for a human to really grasp. We don’t know which exact combination of parameters, which are just numeric values, might represent erroneous concepts about African Americans.

“We simply have no visibility into that,” he said.

Though Hofmann and his co-authors do not speculate as to what is happening in the ‘black box’, their statistical analysis shows that HF (the same as RLHF) training perversely increases the dialect prejudice.

“In fact we observed a discrepancy between what language models overtly say about African Americans and what they covertly associate with them as revealed by dialect prejudice.

This discrepancy is particularly pronounced for language models trained with human feedback, such as GPT4: our results indicate that HF training obscures the racism on the surface, but the racial stereotypes remain unaffected on a deeper level,” the study states.

Striking and dangerous assumptions

The different assumptions made because of dialect are striking.

Prompted by the (Standardised American English, SAE) sentence “I am so happy when I wake up from a bad dream because they feel too real” the LLM said the speaker is likely to be “brilliant” or “intelligent” and not likely to be “dirty”, “lazy” or “stupid”.

By contrast, the AAE sentence “I be so happy when I wake up from a bad dream cus they feelin’ too real” led the LLM to say the speaker was “dirty”, “lazy” and “stupid”.

The authors draw attention to the fact that race is never mentioned; “its presence is encoded in the AAE dialect”.

However, they continue, “we found that there is a substantial overlap in the adjectives associated most strongly with African Americans by humans and the adjectives associated most strongly with AAE by language models, particularly for the earlier Princeton Trilogy studies”.

The Princeton Trilogy was a series of studies that investigated common American racial stereotypes held by Americans. Accordingly, speakers of AAE were recommended by various LLMs for jobs like cleaner, cook, guard or attendant.

By contrast, speakers of SAE were recommended for jobs like astronaut, professor, psychiatrist, architect, lawyer, pilot and doctor.

Criminal justice experiments

If anything, what Hofmann et al found in their two criminal justice experiments is even more alarming.

In the first, they asked the LLM to decide whether an individual was guilty or not guilty of an unspecified crime using only the statement of the defendant. In the case of GPT4, when the statement was in AAE, the conviction rate was 50% higher than when the statement prompt was in SAE.

The second experiment asked the LLM if the defendant merited the death penalty for first-degree (planned and deliberate) murder. Again, the only evidence provided to the language modes was a statement made by the defendant.

In this instance GPT4 sentenced speakers of AAE to death approximately 90% more often than it did speakers of SAE.

Massive pattern detectors

Why, Kaplan was asked, do LLMs produce such unjust outcomes for African Americans?

“These systems are basically massive pattern detectors. They could be trained on millions of documents, including court records that go back decades,” he replied.

“Those old court records would reflect the prejudices of the times, when people of colour were sentenced more harshly, as they still are.

“The records may also contain court transcripts including African Americans’ speech in the context of sentencing. That could all be reflected in the data used to train an LLM.

“The AI system could recognise these patterns of prejudices of the society, reflected in the court records and bound up with the language of the African American defendants who were sentenced to death,” he explained.

Source: Study provides evidence of AI’s alarming dialect prejudice

Brooks: Many People Fear A.I. They Shouldn’t

Perhaps overly optimistic view but useful counterpart to some of the doom predictions:

…Like everybody else, I don’t know where this is heading. When air-conditioning was invented, I would not have predicted: “Oh wow. This is going to create modern Phoenix.” But I do believe lots of people are getting overly sloppy in attributing all sorts of human characteristics to the bots. And I do agree with the view that A.I. is an ally and not a rival — a different kind of intelligence, more powerful than us in some ways, but narrower.

It’s already helping people handle odious tasks, like writing bureaucratic fund-raising requests and marketing pamphlets or utilitarian emails to people they don’t really care about. It’s probably going to be a fantastic tutor, that will transform education and help humans all around the world learn more. It might make expertise nearly free, so people in underserved communities will have access to medical, legal and other sorts of advice. It will help us all make more informed decisions.

It may be good for us liberal arts grads. Peter Thiel recently told the podcast host Tyler Cowen that he believed A.I. would be worse for math people than it would be for word people, because the technology was getting a lot better at solving math problems than verbal exercises.

It may also make the world more equal. In coding and other realms, studies so far show that A.I. improves the performance of less accomplished people more than it does the more accomplished people. If you are an immigrant trying to write in a new language, A.I. takes your abilities up to average. It will probably make us vastly more productive and wealthier. A 2023 study led by Harvard Business School professors, in coordination with the Boston Consulting Group, found that consultants who worked with A.I. produced 40 percent higher quality results on 18 different work tasks.

Of course, bad people will use A.I. to do harm, but most people are pretty decent and will use A.I. to learn more, innovate faster and produce advances like medical breakthroughs. But A.I.’s ultimate accomplishment will be to remind us who we are by revealing what it can’t do. It will compel us to double down on all the activities that make us distinctly human: taking care of each other, being a good teammate, reading deeply, exploring daringly, growing spiritually, finding kindred spirits and having a good time.

“I am certain of nothing but of the holiness of the Heart’s affections and the truth of Imagination,” Keats observed. Amid the flux of A.I., we can still be certain of that.

Source: Brooks: Many People Fear A.I. They Shouldn’t

Will A.I. Kill Meaningless Jobs?

Hard not to think of government having a preponderance of “meaningless jobs,” such as drafting talking points, Q&As, along with basic application processing, call centre and chat routine enquiries etc:

…Kevin Kelly, a Wired co-founder who has written many books on technology, said he was somewhat optimistic about the effect A.I. would have on meaningless work. He said he believed that partly because workers might begin probing deeper questions about what made a good job.

Mr. Kelly has laid out a cycle of the psychology of job automation. Stage 1: “A robot/computer cannot possibly do what I do.” Stage 3: “OK, it can do everything I do, except it needs me when it breaks down, which is often.” Skip ahead to Stage 5: “Whew, that was a job that no human was meant to do, but what about me?” The worker finds a new and more invigorating pursuit, leading full circle to Stage 7: “I am so glad a robot cannot possibly do what I do.”

It’s demoralizing to realize that your job can be replaced by technology. It can bring the pointlessness into sharp relief. And it can also nudge people to ask what they want out of work and seek out new, more exhilarating pursuits.

“It might make certain things seem more meaningless than they were before,” Mr. Kelly said. “What that drives people to do is keep questioning: ‘Why am I here? What am I doing? What am I all about?’”

“Those are really difficult questions to answer, but also really important questions to ask,” he added. “The species-level identity crisis that A.I. is promoting is a good thing.”

Some scholars suggest that the crises prompted by automation could steer people toward more socially valuable work. The Dutch historian Rutger Bregman started a movement for “moral ambition” centered in the Netherlands. Groups of white-collar workers who feel that they are in meaningless jobs meet regularly to encourage one another to do something more worthwhile. (These are modeled on Sheryl Sandberg’s “Lean In” circles.) There’s also a fellowship for 24 morally ambitious people, paying them to switch into jobs specifically focused on fighting the tobacco industry or promoting sustainable meats.

“We don’t start with the question of ‘What is your passion?’” Mr. Bregman said of his moral ambition movement. “Gandalf didn’t ask Frodo ‘What’s your passion?’ He said, ‘This is what needs to get done.”

What will need to get done in the A.I era is likely to veer less toward sustainable meat and more toward oversight, at least in the immediate term. Automated jobs are especially likely to require “A.I. babysitters,” according to David Autor, an M.I.T. labor economist focused on technology and jobs. Companies will hire humans to edit the work that A.I. makes, whether legal reviews or marketing copy, and to police A.I.’s propensity to “hallucinate.” Some people will benefit, especially in jobs where there’s a tidy division of labor — A.I. handles projects that are easy and repetitive, while humans take on ones that are more complicated and variable. (Think radiology, where A.I. can interpret scans that fit into preset patterns, while humans need to tackle scans that don’t resemble dozens that the machine has seen before.)

But in many other cases, humans will end up mindlessly skimming for errors in a mountain of content made by A.I. Would that help relieve a sense of pointlessness? Overseeing drudge work doesn’t promise to be any better than doing it, or as Mr. Autor put it: “If A.I. does the work, and people babysit A.I., they’ll be bored silly.”

Some of the jobs most immediately at risk of being swallowed up by A.I. are those anchored in human empathy and connection, Mr. Autor said. That’s because machines don’t get worn out from feigning empathy. They can absorb endless customer abuse.

The new roles created for humans would be drained of that emotional difficulty — but also drained of the attendant joy. The sociologist Allison Pugh studied the effects of technology on empathic professions like therapy or chaplaincy, and concluded that “connective labor” has been degraded by the slow rollout of technology. Grocery clerks, for example, find that as automated checkout systems come to their stores, they’ve lost out on meaningful conversations with customers — which they understand managers don’t prioritize — and now are left mostly with customers exasperated about self checkout. That’s partially why Ms. Pugh fears that new jobs created by A.I. will be even more meaningless than any we have today.

Even the techno-optimists like Mr. Kelly, though, argue that there’s a certain inevitability to meaningless jobs. After all, meaninglessness, per Mr. Graeber’s definition, is in the eye of the worker.

And even beyond the realm of Mr. Graeber’s categories of pointless work, plenty of people have ambivalent relationships with their jobs. Give them enough hours and then years clocking in to do the same things, and they might start to feel frustrated: about being tiny cogs in big systems, about answering to orders that don’t make sense, about monotony. Those aggrieved feelings could crop up even as they jump into new roles, while the robot cycles spin forward, taking over some human responsibilities while creating new tasks for those who babysit the robots.

Some people will look for new roles; others might organize their workplaces, trying to remake the parts of their jobs they find most aggravating, and finding meaning in lifting up their colleagues. Some will search for broader economic solutions to the problems with work. Mr. Graeber, for example, saw universal basic income as an answer; OpenAI’s Sam Altman has also been a proponent of experiments with guaranteed income.

In other words, A.I. magnifies and complicates the social issues entwined with labor but isn’t a reset or cure-all — and while technology will transform work, it can’t displace people’s complicated feelings toward it.

Mr. Wang says he certainly believes that will hold true in Silicon Valley. He predicts that automating pointless work will mean engineers get even more creative about seeking out their promotions. “These jobs exist on selling a vision,” he said. “I fear this is one problem you can’t automate.”

Source: Will A.I. Kill Meaningless Jobs?

AI Can Fix Immigration, Low Fertility & Retirements

While I believe that AI holds great potential, this analysis is overly optimistic in the shorter term. Longer-term, much more likely:

AI-enabled automation may hold the key to solving three major problems: immigration, low fertility rates and retirements. But strangely, automation is not a planned policy solution to these and related problems. Why there were all sorts of problems with how the US federal government handled the Covid 19 pandemic, Operation Warp Speed was not one of them. Should AI-enabled automation receive the same kind of investment priority the vaccine received – instead of how defensively everyone treats “automation”? Remember that Operation Warp Speed “was a public-private partnership initiated by the United States government to facilitate and accelerate the development, manufacturing, and distribution of COVID-19 vaccines, therapeutics, and diagnostics.” Is this a model for investments in AI and automation?

AI & Immigration

For example, instead of making economic arguments for why the US (and other countries) need immigrants, why not sidestep the argument with a massive federal investment in automation designed to contribute directly to economic growth? Is there a public-private partnership opportunity here? Obviously many companies are pursuing automation at breakneck speed. They want to save money and increase profitability by reducing their dependency on humans. Progress is impressive. But the suggestion here is a massive public-private partnership to accelerate and focus automation on the economic holes immigration is intended to fill.

Obviously, there are many reasons why people come to the US and other developed countries. The economic argument is not to diminish any of those motivations. Instead, the hypothesis is that the economic arguments around immigration might be framed very differently than they are today. We know, for example, that many immigrants come to the US to avoid political prosecution, violence and because they want better lives for their families. All good, but the economic arguments that politicians make about the need for more immigration might be influenced by warp speed investments in automation. Of course, since “politicians” are heat-seeking missiles to money and power, it’s impossible to know if they’d even entertain arguments that don’t perfectly fit their personal agendas. But that aside, there are opportunities to leverage AI-enabled automation to address some of the economic requirements that immigration might – or might not – satisfy.

AI & Low Fertility Rates

Let’s now look at human reproduction:

“The general fertility rate in the United States decreased by 3% from 2022, reaching a historic low. This marks the second consecutive year of decline, following a brief 1% increase from 2020 to 2021. From 2014 to 2020, the rate consistently decreased by 2% annually.”

(Note that “the fertility rate measures the number of live births per 1,000 women within the childbearing age range, often 15-44 years old.”)

What does this mean?

“A prolonged US total fertility rate this low – specifically, a rate substantially below 2 – would lead to slower population growth, which could in turn cause slower economic growth and present fiscal challenges. While the decline presents a fairly new challenge to the United States, other high-income countries have sustained below replacement level fertility for some years now and have attempted policies to mitigate that trend.”

Automation can help mitigate the trends. AI can provide nuanced efficiencies.

Is automation an answer to aging societies? Well, if there’s machine to replace a non-existent human – so long as the human needs to be replaced – is that all bad? All of the worry about aging societies shrinking because of low birth rates can perhaps be relieved through automation.

Retirements

The same argument that applies to low fertility applies to retirements, early or otherwise. What does it matter if someone retires from a job that can be automated?

We’re told that retirements are increasing at a pace never before seen:

“Today, the number of retirees is surging at a remarkable pace, outpacing the influx of new workers. This trend is leading to an unparalleled aging of America’s population, bringing about significant transformations in the workforce, economy, and the global mobility industry.”

Implications?

“The demand for workers continues to be robust, with approximately two job openings available for every unemployed individual. And with more than 75 million baby boomers retiring sooner rather than later, it’s clear that employers will need a strong workforce plan for replacing exiting workers.

“Meeting the workforce gap presents a considerable challenge. Relying solely on Gen X workers is not enough, and many millennials may lack essential work experience. Foreign-born workers could face immigration hurdles, and not all roles are suitable for flexible or remote workers.”

If ever there was a role for automation, this is it.

Automation Policy

I’ve discussed this before:

“Do we need tax preparers? Car salespersons? Loan officers? Automation has only begun, and as more and more employees call it quits, automation may take their place faster than we think. Why wouldn’t Uber want to eliminate their biggest headache – drivers – with autonomous vehicles? Why wouldn’t all companies want to deploy ‘workers’ that work 24/7, never need vacations, never join unions and never get sick? Checkout clerks? Postal workers? Gas station attendants (almost gone now)? And many more.”

Honeywell reports some survey results that focus on robotics:

“The productivity gains that we see from … robotics have increased,” said John Dillon … ‘the technology has gotten better … (and) the cost of not automating has gotten higher.’

“That’s because a warehouse that might typically require 2,000 workers could deploy technologies and warehouse execution software to instead operate with only 200 people.”

Automation may be the answer to many economic problems. In 1982 (!), the government believed in the power of automation through “federal efforts to encourage automation (which included): (1) financial incentives for private sector action; (2) research responsibilities; (3) technology transfer mechanisms; (4) support of engineering education; and (5) the development of standards to facilitate integration of diverse components of automation systems.” But today – 40 years later – here’s the question heard over and over again: “what should the government do about the coming automation apocalypse?”

Automation and its closest friend “AI,” are not the apocalypse. They’re solutions to some tough economic problems the US and developed countries face. Yes, there will be job displacement and perfectly timing the adoption of automation to immigration, fertility rates and retirements is impossible. But the hypotheses should at least be tested. It may be that planned automation can reduce some economic stress — maybe a lot of stress. The technology is ready. The companies are ready. But will the politicians support Operation Automation? Or are they focused on other things?

Source: AI Can Fix Immigration, Low Fertility & Retirements

Artificial Intelligence and Immigration Implications

More directed at legal and immigration firms than governments but nevertheless interesting. Predictive technology, if fed with the right assumptions and data, could be a very useful tool for governments as they often appear flat-footed and late with respect to impacts and change:

According to the International Monetary Fund, almost 40% of global employment will be impacted by artificial intelligence.

The field of immigration is no exception, and several countries are already implementing or planning to implement artificial intelligence (AI) into their immigration systems to obtain benefits such as increased productivity by their staff members, enhanced security measures and streamlined recruitment of foreign nationals.

This blog discusses recent and forthcoming examples of AI in immigration systems; ways for companies and governments to prepare for the AI revolution and adapt it for their uses; and addresses some of the challenges and concerns surrounding the use of AI in immigration.

Some recent examples of AI being utilized in immigration systems include:

  • In the United Arab Emirates, the Dubai airport launched an iris scanner to confirm identity, allowing travelers entering the country to move rapidly through passport control while still maintaining security precautions.
  • Portugal uses AI tools to validate the authenticity of documents submitted with an online citizenship application.
  • The government of Brazil is planning to utilize AI to analyze residence permit applications for employment, to reduce bureaucracy and speed up processing times.
  • France is expected to begin using AI to uncover and trace document fraud on the ANEF (Digital Administration for Foreigners in France) portal.

How can companies and governments prepare for AI and adapt it for their purposes?

  • Ensure compliance with standards in the region they are operating. Across the world, countries and regions are taking different approaches towards regulating AI and affected employers should be aware and revise their business practices if they are subject to these new rules. For example, the European Union is set to become the world’s leading tech regulator when the Artificial Intelligence Act goes into effect; the law will implement regulations on AI in phases, with the first phase banning prohibited AI systems that pose “unacceptable risks”.
  • Adopt specific AI visas to attract talent. Many governments recognize the transformative nature of AI and the critical need to attract individuals specialized in AI practices to transform industries and boost productivity.
    • The United States is considering changes to the J-1 exchange visitor program that could enhance opportunities for AI talent. The U.S. government is also reviewing existing immigration pathways, including the EB-1, EB-2, O-1 and International Entrepreneur Parole Program, to clarify and modernize these pathways for experts in AI.
    • Australia launched a Mobility Arrangement for Talented Early-Professionals Scheme, which provides 3,000 places for Indian national early professionals in several fields, including AI.
  • Utilize predictive technology to understand how migration management affects their companies. AI is being used for migration management, allowing the public and private sectors to pool information that can be used to predict migration flows, leading to more informed decisions and policy-making.
  • Implement upskilling and reskilling initiatives. The private sector should include AI upskilling initiatives as part of their workers’ regular assignments. This is particularly important as, according to the Harvard Business Review, “the half-life of [tech] skills is now less than five years, and in some tech fields it’s as low as two and a half years.” Constantly hiring new talent for emerging AI technology would result in a revolving door at a company, creating a loss of institutional knowledge, productivity, and revenue. By adopting upskilling and reskilling initiatives to keep up with the latest AI technology, employers build employee loyalty.

What challenges and concerns should companies and governments be aware of when utilizing AI or immigration systems built with AI?

  • Confidentiality of information. Governments and the private sector alike collect highly sensitive data essential to immigration procedures, such as biometrics and passports. With the AI transfer of information into systems, employers and government officials must ensure that these systems comply with data privacy laws, contain adequate cyber security precautions, and will not be used to harm the individual.
  • National security issues. Governments want to ensure that information they store on private sector AI platforms is only shared with select partners and does not end up with adversaries that could potentially use this information for nefarious reasons.
  • Translation issues. AI has already proven to be somewhat unreliable when used for translation purposes, due to the nuances of written and spoken languages. Although it may be cheaper and faster to utilize AI for this purpose, translation errors may lead to undesirable outcomes, such as denied visa applications. Employers and government individuals should be extremely circumspect in determining when and what type of AI translation technology they employ.
  • Divide in uptake of AI by countries. To effectively utilize AI, companies and governments must operate in countries with a suitable information and communication technology infrastructure. Developing countries, which may not have this infrastructure or individuals with the skill set to operate such infrastructure, may be slower adopters of AI technology. As a result, if AI is needed for productivity, companies may end up reshoring jobs originally outsourced to these developing countries, causing greater disparities among countries.

Due to the ever-changing nature of AI technology, companies should reach out to their immigration professionals for guidance in navigating the complex landscape at the intersection of these two fields.

Source: Artificial Intelligence and Immigration Implications

ICYMI: Ottawa will prevent AI tools from discriminating against potential hires, Anand says

Of note:

The federal government will work to prevent artificial intelligence from discriminating against people applying for jobs in federal government departments, says Treasury Board President Anita Anand.

In a wide-ranging year-end interview with CBC News, Anand acknowledged concerns about the use of AI tools in hiring.

“There is no question that at all times, a person’s privacy needs to be respected in accordance with privacy laws, and that our hiring practices must be non-discriminatory and must be embedded with a sense of equality,” Anand said when asked about the government’s use of AI in its hiring process.

“Certainly, as a racialized woman, I feel this very deeply … We need to ensure that any use of AI in the workplace … has to be compliant with existing law and has to be able to stand the moral test of being non-discriminatory….

Source: Ottawa will prevent AI tools from discriminating against potential hires, Anand says