Automatic for the people [immigration focus]

Of note. Given volumes and resources, there is no realistic alternative, but care is needed to eliminate biases (either pro or con):

If there was ever any doubt the federal government would use automation to help it make its administrative decisions, Budget 2022 has put that to rest. In it, Ottawa pledges to change the Citizenship Act to allow for the automated and machine-assisted processing of a growing number of immigration-related applications.

In truth, Immigration, Refugees and Citizenship Canada has been looking at analytics systems to automate its activities and help it assess immigrant applications for close to a decade. The government also telegraphed its intention back in 2019, when it issued a Directive on Automated Decision-Making (DADM), which aims to build safeguards and transparency around its use.

“[T]he reference to enable automated and machine-assisted processing for citizenship applications is mentioned in the budget to ensure that in the future, IRCC will have the authority to proceed with our ambition to create a more integrated, modernized and centralized working environment,” said Aidan Stickland, spokesperson for Immigration, Refugees and Citizenship Minister Sean Fraser, in an emailed reply.

“This would enable us to streamline the application process for citizenship with the introduction of e-applications in order to help speed up application processing globally and reduce backlogs,” Stickland added. “Details are currently being formalized.”

But to live a life of ambition requires taking risks. So the DADM comes with an algorithmic impact assessment tool. According to Teresa Scassa, a law professor at the University of Ottawa, it creates obligations for any government department or agency that plans to adopt automated decision-making, whether for decisions made entirely by a system or for systems that make recommendations. It is a risk-based framework for determining the obligations to be placed on the department or agency.

“The citizenship and immigration context is one where what they’re looking at is that external client,” Scassa says. “It does create this governance framework for those types of projects.”

Scassa says that the higher the risk of impact on a person’s rights, or on the environment, the more obligations are placed on the department or agency using the system, such as requirements for peer review and for monitoring outputs to ensure the system remains consistent with its objectives and does not demonstrate improper bias.

“It governs things like what kind of notice should be given,” Scassa says. “If it’s very low-risk, it might be a very general notice, like something on a web page. If it’s high risk, it will be a specific notice to the individual that automated decision-making is in use. Depending on where the project is in the risk framework, there is a sliding scale of obligations to ensure that individuals are protected from adverse impacts.”
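
To make the sliding scale concrete, here is a minimal sketch of how the obligations Scassa describes might be mapped to assessed impact levels. The level names, notice wording and requirement flags are illustrative assumptions drawn from her description, not the DADM's actual schedule.

```python
# A minimal sketch of a DADM-style sliding scale: the higher the assessed
# impact, the heavier the obligations. Level names, notice wording and
# requirement flags are illustrative assumptions, not the directive's text.

OBLIGATIONS_BY_IMPACT = {
    "low": {
        "notice": "general notice, e.g. a statement on a web page",
        "peer_review": False,
        "human_in_the_loop": False,
    },
    "moderate": {
        "notice": "plain-language notice given at the point of decision",
        "peer_review": True,
        "human_in_the_loop": False,
    },
    "high": {
        "notice": "specific notice to the individual that automated "
                  "decision-making is in use",
        "peer_review": True,
        "human_in_the_loop": True,
    },
}

def required_safeguards(impact_level: str) -> dict:
    """Look up the obligations attached to a project's assessed impact level."""
    return OBLIGATIONS_BY_IMPACT[impact_level]

print(required_safeguards("high")["notice"])
```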

Scassa suspects that IRCC may use automated decision-making to determine if someone qualifies for citizenship, which can mean different things.

It could be a triage system, for example, drawing information from applications before using AI to determine which applicants clearly qualify for citizenship. “Everything else [would fall] into a different basket where it needs to be reviewed by an officer,” Scassa says.

Such a system would be relatively low-risk, as any automated decisions would be positive for the applicant, while all other files would go to a human for review, which would speed up overall processing times.
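
As an illustration only, the triage pattern Scassa describes could look something like the sketch below, where the system issues only positive decisions automatically and routes everything else to an officer. The qualifying criteria and field names are placeholders, not IRCC's actual rules.

```python
# A sketch of a positive-only triage rule for citizenship applications: the
# automated step can only grant, never refuse; anything unclear goes to an
# officer. Criteria and thresholds are placeholders, not IRCC rules.

def triage_citizenship_application(app: dict) -> str:
    clearly_qualifies = (
        app.get("days_physically_present", 0) >= 1095  # illustrative threshold
        and app.get("language_evidence", False)
        and app.get("tax_filings_complete", False)
        and not app.get("prohibitions", False)
    )
    if clearly_qualifies:
        return "grant (automated positive decision)"
    return "refer to an officer for review"

print(triage_citizenship_application({
    "days_physically_present": 1200,
    "language_evidence": True,
    "tax_filings_complete": True,
    "prohibitions": False,
}))
```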

“That may be less problematic than a system that makes all of the decisions, and people have to figure out why they got rejected, and you have to ask how transparent is the algorithm, and what are your rights to have the decision reviewed,” Scassa adds. “There is the question of how it will be designed, and how impactful the AI tool will be on individuals. On the other hand, a triage system like this could have automation bias where files get flagged. Maybe the human reviewing them approaches them with a particular mindset because they haven’t been considered to be automatically accepted. The automation bias may make the human less likely to approve them.”

Scassa notes that the Open Government platform shows an algorithmic impact assessment for a tool developed for spousal analytics, a form of triage tool, which gives a sense of what kinds of tools the department is contemplating.

Scassa also notes that under the Citizenship Act, a provision allows for the delegation of the minister’s powers to any person authorized in writing. She suspects that the proposed legislative change could be to specifically allow some of the decisions to be made on a fully-automated basis.

When it comes to reviewing decisions, the DADM and its risk framework appear to apply administrative law principles, including procedural fairness protections.

Paul Daly, also a law professor at the University of Ottawa, adds that the administrative law principles apply regardless of whether this type of automated decision-making has been authorized in the statute.

“It’s a common concern for officials using sophisticated machine-learning technology to want legal authority,” Daly says. “Really, that’s only one part of the picture. There’s a whole body of legal principles from administrative law, the Charter, and the [DADM] that have to be complied with when you start to actually use the systems.”

Lex Gill, a fellow at Citizen Lab, co-authored a report called “Bots at the Gate,” which looks at the human rights impacts of automated decision-making in Canada’s citizenship and immigration system. She acknowledges there are serious backlogs within the immigration system. But she cautions that faster isn’t always better, particularly when the error rates associated with AI disproportionately affect certain groups who are already treated unfairly.

“Sometimes we adopt technologies that will allow us to believe that we are doing something more scientific, methodical or fair, when really what we are doing is reproducing the status quo, but faster and with less transparency,” Gill says. “That is always my concern when we talk about automating these kinds of administrative processes.”

Gill notes there is a spectrum of technologies available for automated and machine-assisted processing, some of which are not problematic, while others are worrying and raise human rights issues. Still, it is hard to know what we may be dealing with without more information from the minister.

“When we talk about using automated or machine-assisted technology to do things like risk scoring, that’s an area where we know that it’s highly discretionary,” Gill says. “There is an entire universe of academic study that demonstrates that those technologies tend to replicate existing forms of bias and discrimination and profiling that already exists within administrative systems.”

Gill says that these systems tend to learn from existing practices. The result tends to exacerbate discriminatory outcomes and makes them more difficult to challenge, because a veneer of perceived scientific or technical neutrality is layered on top of a system that has already demonstrated bias.

“When the government is imagining adopting these kinds of technologies, is it imagining doing that in a way that is enhancing transparency, accountability, and reviewability of decisions?” asks Gill. “Efficiency is clearly an important goal, but the rule of law, accountability and control of administrative discretion also require friction—they require a certain degree of scrutiny, the ability to slow things down, the ability to review things, and the ability to understand why and how a decision was made.”

Gill says that unless these new technologies come with oversight, review and transparency mechanisms, she worries that they will take a system that is already discretionary and opaque, one with the power to change the direction of a person’s life, and render it even more so.

“If you’re going to start adopting these kinds of technologies, you need to do it in a way that maximally protects a person’s Charter rights, and which honours the seriousness of the decisions at stake,” Gill says. “Don’t start with decisions that engage the liberty interests of a person. Start with things like whether or not this student visa application is missing a supporting document.”

Source: Automatic for the people

Sarantakis: Taking data seriously: A call to public administrators

Important flagging of the importance of data for governments and how governments increasingly lag the private sector in their collection, analysis and use of data and AI to understand citizen needs.

However, it is striking that a senior official would make the case without acknowledging the challenge of doing so for the public sector, given that each time the government does, significant criticism follows, whether for IRCC’s use of the Chinook system, Statistics Canada’s use of anonymized credit card information to understand consumer spending, or PHAC’s collection of anonymized COVID phone data.

Perhaps a second piece on this harder issue?

It is said that the first step in overcoming a problem is first admitting its existence. So, here goes: Contemporary public administration is data-challenged.

This would have been an implausible statement to utter, historically. After all, public administrators as individuals know how important data is to public policy formulation and program delivery. Public administration has proved its worth over time through record-keeping and the creation and use of data — recording, ordering, sorting and tabulating counts of people, forests, geography, geology, tanks, guns and things like the production of butter.

Indeed, the two great and insatiable needs of the early state, as formulated by Yale scholar James C. Scott, were taxation and conscription. Without revenues and the capacity to pay for the defence of sovereignty, states are not durable. In turn, without public administrators recording, ordering, sorting and tabulating data, the state does not endure.

Historically, public administration has been on the cutting edge of data. Entities seeking data often went to various state organs and state registries. The public service apparatus of the state knew, even in a state formed explicitly to curb government involvement in the daily affairs of its citizens.

But something dramatic has happened. The administrative state – that part of government that continues regardless of whether elections yield majorities or minorities that are red, blue, orange, green, or purple – is no longer on the cutting edge of data. Yes, the state still knows, but now it often knows only after the fact, while private-sector entities know now. Even more powerfully, with predictive analytics, sophisticated private entities increasingly know before.

How can we understand this switch? How can we understand public administration losing its historical position of relative data supremacy? To do that, we need to detour from public administration for a moment and veer into the private-sector economy. What we find gives us important clues to our mystery vis-à-vis data and public administration.

The factors of production 

Since Adam Smith, we have understood three core factors of production: land, labour and capital. There are others that have competed to be added to this list. Channeling Peter Drucker, some have argued for “management” – those who direct resources. Others have argued for “entrepreneurs” – those who combine resources in new and innovative ways. But Smith’s formulation has proven remarkably durable for more than two centuries.

If Smith were to return and look at some of the most valuable and dynamic corporations of our era – the digital giants Google, Meta (formerly Facebook), Amazon, Apple, Spotify and others – he would likely be mystified. Yes, he would see some land. Yes, he would see some labour. But nowhere near enough to justify the heady heights – and incredible influence and power – of the digital giants. Finally, he would also see some capital. But remarkably, that capital would largely be a by-product of “production,” and not a driver of production.

In the most valuable and powerful entities on earth in his own era, Smith would have seen people – lots and lots of labour. He would have seen land. He would have seen capital in the form of constructed ships, tools, and extracted and then refined natural resources. He would have seen stuff – tangible things that he could touch.

But the contemporary Adam Smith would see negligible amounts of people and land in today’s largest companies. Certainly nothing approaching their value, status or their power. These companies, perhaps most surprisingly of all, “consume” relatively little capital.

So if you are generating enormous profits but not drawing heavily on the “factors of production” … something does not add up. What is going on?

Brains? Computers? Digital? Algorithms? Cloud computing?

Yes, yes, yes, yes, yes, and lots more.

But fundamentally, what is going on now is the fourth factor of production.

Data.

Data as differentiator 

Data has now become the most valuable commodity on earth. Data stocks are more valuable than natural resources. Data is more valuable than manufacturing facilities; more valuable than land; more valuable than labour. Data – the new oil? Oil should be so lucky.

Why?

Data is now the differentiator. Data is now the value-add. As computers, software, micro-processing power, storage, cloud computing and algorithms all become (or all trend toward) commodity status, it is the quantity and quality of data that will transform the mediocre into the successful.

A commodity is an interchangeable and undistinguished part. Where I buy a barrel of oil or a bar of gold or a truckload of gravel or road salt is overwhelmingly just price-contingent. The lowest price wins. To avoid becoming a commodity – valued only for how cheaply you can deliver something – you need more and better data than the competition. Increasingly, if you are data-deficient, you will not be competitive or sustainable as an entity.

Put another way, Company A and Company B already compete based on the quantity and the quality of their data. This will also increasingly be true in the coming years for Country A and Country B. Countries have competed forever for oil and gas and timber and nickel. Now they are also adding “quantity and quality of data” to that list of competitions.

Spotify is a data company that deals in music. Netflix is a data company that deals in entertainment. Tesla is a data company on wheels. Google is a data company that deals in information. Amazon is a data company that provides many things – same with Instagram, same with Facebook.

Computing, computation, communication, software, digital distribution – all are, or are rapidly becoming, commodities. Algorithms still have differentiating value, but as advances in artificial intelligence continue, these too will invariably trend toward commodity status. What really adds value in production, increasingly, is the quality and quantity of data.

Data and public administration

What does all this have to do with public administration? At first glance, perhaps nothing. But on closer examination, a great deal.

The digital giants became digital giants because they understood – before others – the enormous value of enormous quantities of data. They understood – like the early state understood the power of knowing the quantity and location of trees and people and minerals – that data is power.

As Shoshana Zuboff expertly describes in The Age of Surveillance Capitalism, data becomes the nexus of power. But the power of data in the contemporary age isn’t about counting trees and people, it is rather about the “instrumentalization of behavior for the purposes of modification, prediction, monetization, and control.”

Contemporary public administration, which traces its very heritage back to data, is far less sophisticated in data today than the digital giants. Data is not utilized for public good applications anywhere near the degree to which data is utilized for commercial gain.

Over time, that will harm us all because the public-good realm will have less access to rich data than the private profit realm. Over time, that will make public administration a dinosaur. We need to better understand the power and application of data.

Public administration and real-time actionable data

States often revert to using blunt policy instruments because public administrations do not have the granularity of data – in real time – that is available to the digital giants. When you don’t have real-time actionable data, you estimate. You ask people to apply. You create programs with criteria instead of directly applying funding to public policy objectives.

That worked in a world where real-time actionable data either did not exist or was enormously expensive to actualize. But that is not today’s world. The percentage of the economy migrating online grows every day, and the online economy has grown much faster than the analog economy in recent years. But something else is happening, too. With the internet of things (IoT), our toasters and our refrigerators and our lightbulbs and our ventilation systems and our water treatment plants and our garage doors and our pacemakers are all migrating online. The enormous oceans of data we have today will, in a few very short years, look like little trickles of water once the IoT takes hold in earnest.

Public administration is already behind. Imagine what happens when the volume of data being generated every moment of every day by billions of connected things across the globe increases at an even faster rate.

Does public administration understand the power of data? Do we understand how to use it to serve public policy goals? Do we understand how to regulate it for the public good? Do we have the systems in place to capture data? Do we have the systems in place to safeguard data? Do we have the systems in place to safeguard against its misuse by non-state actors?

These are the many questions facing public administration today. The faster we get the answers, the better public administrators will be able to serve their political decision-makers and their state populations.

Time is not our friend on these questions.

Source: Taking data seriously: A call to public administrators

‘Racism plays a role in immigration decisions,’ House Immigration Committee hears

While it is always important to recognize that bias and discrimination can influence decisions, different acceptance rates can also reflect other factors, including that misrepresentation may be more prevalent in some regions than others.

Training guides and materials need to provide illustrations and examples. Meurrens is one of the few lawyers who regularly looks at the data, but his challenge to the training-guide statement “Kids in India are not back-packers as they are in Canada” is odd, given that the data likely confirms that statement.

Moreover, the call for more transparency, welcome and needed, may provide opportunities for the more unscrupulous to “game the system.”

“Kids in India are not back-packers as they are in Canada” reads a note appended to a slide in a presentation used to train Canadian immigration officials in mid-2019.

Immigration lawyer Steven Meurrens said he received a copy of the presentation through a recent access to information request; the presentation, used in a training session by Immigration, Refugees and Citizenship Canada (IRCC) officials, is dated April 2019 and titled “India [Temporary Resident Visa]s: A quick introduction.” He shared the full results of the request with The Hill Times.

The slides, which detail the reasons why Indians may apply for a Temporary Resident Visa (TRV) and what officials should look for in applications, have notes appended to them, as if they were speaking notes for the person giving the presentation. On one slide detailing potential reasons for travel to Canada, the notes read: “Kids in India are not back-packers as they are in Canada.”

In an interview, Meurrens spoke to an apparent double standard for Indian people looking to travel to Canada.

“It drives me nuts, because I’ve often thought that, as a Canadian, a broke university student, I could hop on a plane, go anywhere, apply for visas, and no one would be like, ‘That’s not what Canadians do,’” Meurrens said, adding that he’s representing people from India who did in fact intend to come to Canada to backpack through the country.

A screenshot of the page wherein an IRCC presentation notes that ‘Kids in India are not back-packers as they are in Canada.’ Image courtesy of IRCC

“To learn that people are trained specifically that Indian people don’t backpack” was “over the top,” he said. It reminded him of another instance of generalizations made within IRCC about different nationalities, when an ATIP he received in 2015 showed that training materials within the department stated that a Chinese person marrying a non-Chinese person was a likely indicator of marriage fraud.

At the time, the department said that document was more than five years old, and no longer in use.

“[I’d like us] to get to a state where someone’s country of origin doesn’t dictate the level of procedural fairness that they’ll get and how they’re assessed,” he said.

Systemic racism within Immigration, Refugees and Citizenship Canada (IRCC) is not new; evidence of it was uncovered through what is colloquially known as the Pollara report. This report, conducted by Pollara Strategic Insights and released in 2021, was the result of focus groups conducted with IRCC employees to better understand “current experiences of racism within the department.”

The report found that within the department, the phrase “the dirty 30” was widely used to refer to certain African nations, and that Nigerians in particular were stereotyped as “particularly corrupt or untrustworthy.”

As the House Immigration Committee heard last week, there remains much work to be done to combat systemic racism within IRCC.

On March 22, the House Committee on Immigration and Citizenship began its study on differential outcomes in immigration decisions at IRCC, and Immigration Minister Sean Fraser (Central Nova, N.S.) appeared at the committee on March 24. Other issues brought up by witnesses included a lack of transparency from the department, as well as concerns of systemic racism and bias being embedded in any artificial intelligence (AI) the department uses to assess applications.

From students in Nigeria being subjected to English-language proficiency tests when they hail from an English-speaking country, to the differential treatment of some groups of refugees versus others, to which groups are eligible for resettlement support and which are not, the committee heard several examples of differential treatment of potential immigrants to Canada due to systemic racism and bias within IRCC.

“I know it’s very uncomfortable raising the issue of racism,” said Dr. Gideon Christian, president of the African Scholars Initiative and an assistant professor of AI and law at the University of Calgary.

“But the fact is that we need to call racism for what it is—as uncomfortable as it might be. … Yes, this is a clear case of racism. And we should call it that. We should actually be having conversations around this problem with a clear framework as to how to address it,” he said.

According to Christian, Nigerian students looking to come to Canada to study through the Nigerian Study Express program are subjected to an English-language proficiency test, despite the fact that the official language in Nigeria is English, that English is the language used in all official academic institutions there, and that academic institutions in Canada do not require a language test from Nigerian students for their admission.

A spokesperson for IRCC said the department does not single out Nigeria in its requirement for a language test.

“IRCC is committed to a fair and non-discriminatory application process,” reads the written statement.

“While language testing is not a requirement to be eligible for a study permit, individual visa offices may require them as part of their review of whether the applicant is a bona fide student. This includes many applicants from English-speaking countries, including a large number from India and Pakistan, two nations where English is widely taught and top countries for international students in Canada.”

“Nigeria is not singled out by the requirement of language tests for the Nigeria Student Express initiative,” the spokesperson said.

Systemic racism embedded in AI

Christian, who has spent the last three years researching algorithmic racism, expressed concern that the “advanced analytics” IRCC uses to triage its immigration applications—including the Microsoft Excel-based software system called Chinook—has systemic racism and bias embedded within it.

“IRCC has in its possession a great deal of historical data that can enable it to train AI and automate its visa application processes,” Christian told the committee. As revealed by the Pollara report, systemic bias, racism and discrimination do account for differential treatment of immigration applications, particularly when it comes to study visa refusals for those applying from Sub-Saharan Africa, he said.

“External [studies] of IRCC—especially the Pollara report—have revealed systemic bias, racism and discrimination in IRCC processing of immigration applications. Inevitably, this historical data [in the possession] of IRCC is tainted by the same systemic bias, racism and discrimination. Now the problem is that the use of these tainted data to train any AI algorithm will inevitably result in algorithmic racism. Racist AI, making immigration decisions,” he said.

The Pollara report echoed these concerns in a section that laid out a few ways processes and procedures adopted for expediency’s sake “have taken on discriminatory undertones.” This included “concern that increased automation of processing will embed racially discriminatory practices in a way that will be harder to see over time.”
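
A toy example of the dynamic Christian and the Pollara report describe: a model "trained" on historical decisions simply learns and reproduces whatever refusal pattern sits in that record. The data below is fabricated purely for illustration and does not reflect real IRCC statistics.

```python
# A toy illustration of bias inherited from historical decisions: a "model"
# that learns per-country refusal rates from past files will reproduce those
# rates for new applicants, whatever their individual merits. All records
# below are fabricated for illustration only.
from collections import defaultdict

historical_decisions = [
    {"country": "A", "refused": True},
    {"country": "A", "refused": True},
    {"country": "A", "refused": False},
    {"country": "B", "refused": False},
    {"country": "B", "refused": False},
    {"country": "B", "refused": True},
]

counts = defaultdict(lambda: [0, 0])  # country -> [refusals, total]
for rec in historical_decisions:
    counts[rec["country"]][0] += int(rec["refused"])
    counts[rec["country"]][1] += 1

def predicted_refusal_risk(country: str) -> float:
    refused, total = counts[country]
    return refused / total if total else 0.0

# Applicants from country A inherit the historical pattern regardless of merit.
print(predicted_refusal_risk("A"), predicted_refusal_risk("B"))
```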

Meurrens, who also appeared at committee on March 22, said a lack of transparency from the government impedes the public’s ability to assess whether it is indeed making progress on the issue of addressing systemic racism or not.

He said he’d like to see the department publish Access to Information results pertaining to internal manuals, visa office specific training guides, and other similar documents as downloadable PDFs on its website, pointing out this is how the provincial government of B.C. releases its ATIP responses. He also said he thinks IRCC should publish “detailed explanations and reports of how its artificial intelligence triaging and new processing tools work in practice.”

“Almost everything public today [about the AI programs] has been obtained through access to information results that are heavily redacted and which I don’t believe present the whole picture,” he said.

Whether the concerns were actually reflected in the AI itself, Meurrens said, could not be known without more transparency from the department.

“In the absence of increased transparency, concerns like this are only growing,” he said.

Fraser: racism is a ‘sickness’

On Thursday, Fraser told the committee that he agrees that racism is a problem within the department, calling it a “sickness in our society.”

“There are examples of racism not just in one department but across different levels of government. It’s a sickness in our society that limits the productivity of human beings who want to fully participate in our communities. IRCC is not immune from that social phenomenon that hampers our success as a nation, and we have to do everything we can to eradicate racism, not just from our department,” he said.

Fraser said there is “zero tolerance for racism, discrimination, or harassment of any kind,” but acknowledged those problems do exist within the department.

The minister pointed towards the anti-racism task force which was created in 2020 and “guides the department’s strategy to eliminate racism and applies an anti-racism lens” to the department’s work. He also said IRCC has been “actively reviewing its human resource systems so that Indigenous, Black, racialized peoples and persons with disabilities are better represented across IRCC at every level.”

Fraser also referenced a three-year anti-racism strategy for the department, which includes plans to implement mandatory bias training, anti-racist work and training objectives, and trauma coaching sessions for Black employees and managers to recognize the impacts of racism on mental health, among other things.

“It’s not lost on me that there have been certain very serious issues that have pertained to IRCC,” he said.

These measures are different from the ones witnesses and opposition MPs are calling for, however.

NDP MP Jenny Kwan (Vancouver East, B.C.) said her top priority on this topic is to convince the government to put an independent ombudsperson in place whose job it would be to assess IRCC policies and the application of said policies as they relate to differential treatment, systemic racism, and gender biases.

“Let’s dig deep. Have an officer of the House do this work completely independent from the government,” she said in an interview with The Hill Times.

At the March 22 meeting, Kwan asked all six witnesses to state for the record if they agreed that the government should put such an ombudsperson in place. All six witnesses agreed.

Kwan questioned the ability of the department to conduct its own internal reviews.

“As the minister said [at committee], he’s undertaking a variety of measures to address these issues and to see how they can rectify it. … But how deeply is it embedded? And if it’s done internally, then how independent is it?” she wondered.

Fraser said the implementation of an ombudsperson was something he would consider after reading the committee’s report.

Conservative MP Jasraj Singh Hallan (Calgary Forest Lawn, Alta.), his party’s immigration critic and the vice-chair of the committee, agreed with Meurrens’ calls for increased transparency. “We need more evidence that the government is serious about this,” he said in an interview.

Hallan also said he wants to see consequences for those within the department who participated in the racism documented by the Pollara report.

“[Fraser] should start by approaching those employees of IRCC that made these complaints from that Pollara report and find out who is making these remarks. Reprimand them, fire them if they need to be,” he said.

Source: ‘Racism plays a role in immigration decisions,’ House Immigration Committee hears

Helping A.I. to Learn About Indigenous Cultures

Interesting:

In September 2021, Native American technology students in high school and college gathered at a conference in Phoenix and were asked to create photo tags — word associations, essentially — for a series of images.

One image showed ceremonial sage in a seashell; another, a black-and-white photograph circa 1884, showed hundreds of Native American children lined up in uniform outside the Carlisle Indian Industrial School, one of the most prominent boarding schools run by the American government during the 19th and 20th centuries.

For the ceremonial sage, the students chose the words “sweetgrass,” “sage,” “sacred,” “medicine,” “protection” and “prayers.” They gave the photo of the boarding school tags with a different tone: “genocide,” “tragedy,” “cultural elimination,” “resiliency” and “Native children.”

The exercise was for the workshop Teaching Heritage to Artificial Intelligence Through Storytelling at the annual conference for the American Indian Science and Engineering Society. The students were creating metadata that could train a photo recognition algorithm to understand the cultural meaning of an image.
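
The metadata the students produced can be thought of as annotation records pairing each image with community-chosen tags, which can then be compared against a generic model's output. The sketch below uses the examples reported in the article; the file names and record structure are illustrative assumptions.

```python
# A sketch of annotation records for culturally informed training data: each
# image paired with community-chosen tags, alongside the tags a generic
# image-recognition app returned. Tags come from the article; file names and
# record structure are illustrative assumptions.

annotations = [
    {
        "image": "ceremonial_sage_in_seashell.jpg",
        "community_tags": ["sweetgrass", "sage", "sacred", "medicine",
                           "protection", "prayers"],
        "generic_app_tags": ["plant", "ice cream", "dessert"],
    },
    {
        "image": "carlisle_boarding_school_1884.jpg",
        "community_tags": ["genocide", "tragedy", "cultural elimination",
                           "resiliency", "Native children"],
        "generic_app_tags": ["human", "crowd", "audience", "smile"],
    },
]

# The (lack of) overlap shows how little cultural meaning the generic model captures.
for record in annotations:
    overlap = set(record["community_tags"]) & set(record["generic_app_tags"])
    print(record["image"], "shared tags:", overlap or "none")
```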

The workshop presenters — Chamisa Edmo, a technologist and citizen of the Navajo Nation, who is also Blackfeet and Shoshone-Bannock; Tracy Monteith, a senior Microsoft engineer and member of the Eastern Band of Cherokee Indians; and the journalist Davar Ardalan — then compared these answers with those produced by a major image recognition app.

For the ceremonial sage, the app’s top tag was “plant,” but other tags included “ice cream” and “dessert.” The app tagged the school image with “human,” “crowd,” “audience” and “smile” — the last a particularly odd descriptor, given that few of the children are smiling.

The image recognition app botched its task, Mr. Monteith said, because it didn’t have proper training data. Ms. Edmo explained that tagging results are often “outlandish” and “offensive,” recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. Ardalan noted as an example, because of the abundance of data on the topic.

As Mr. Monteith put it, A.I. is only as good as the data it is fed. And data on cultures that have long been marginalized, like Native ones, are simply not at the levels they need to be. “Clearly, there’s a bias represented,” he said.

The workshop was the initiative of Intelligent Voices of Wisdom, or IVOW, a tech start-up that Ms. Ardalan, an executive producer of audio at National Geographic, founded to preserve culture through A.I. and to counter those biases.

“The internet is not representative of the entire population, and when people are represented, it may not be accurate because of stereotypes and hate speech,” said Percy Liang, an associate professor of computer science at Stanford University and director of the school’s Center for Research on Foundation Models.

To counter this tendency, Ms. Ardalan, who is an Iranian American of Bakhtiari and Kurdish descent, wants IVOW to develop tools to create “cultural engines” for underrepresented groups so they can generate, and take ownership of, their data. “The cultural engine cannot be a data scientist in Philadelphia trying to create data sets for a tribe in Arizona,” she said.

More representative, accurate data is beneficial not only to the groups it represents, but also to A.I. systems at large, said W. Victor H. Yarlott, an A.I. researcher at Florida International University, a member of the Crow Tribe of Montana and an IVOW collaborator.

“Lacking this knowledge just makes your system worse,” he said. “You’re not really representing human intelligence or human knowledge unless your system can handle it from a broad range of cultures.”

The participation of Indigenous people in the project was critical. Mr. Monteith, who led the effort to enter the Cherokee writing system into Microsoft Office, said he’s been working on building trust for technology, and more recently A.I., in his Native communities for decades. “I knew without me doing this that we would be in a worse spot in terms of literacy, and our culture,” he said.

The team at IVOW, along with a group of volunteer collaborators and advisers, has been developing proofs of concept for these cultural engines — smart data sets that can feed more inclusive A.I. tools, including chatbots and image recognition apps.

One such tool is IVOW’s Indigenous Knowledge Graph, or IKG, a cultural engine in early development that is focused on storytelling about Indigenous recipes and culinary practices. After meeting the IVOW team in 2018, Mr. Yarlott pitched the IKG, a sort of visualization of a data set, to capture Indigenous knowledge.

“You know in dramas, you see the person trying to unravel a mystery and they have the corkboard and the little notes and the string between them?” Mr. Yarlott said. “That’s basically what the IKG is, but for cultural knowledge.”

The first step was to gather the data. The team chose a culinary focus because it is a part of life that all people share. They collected recipes and related stories from both the public domain and team members.

Mr. Monteith chose to enter the story of the Three Sisters stew, a recipe created from symbiotic crops (corn, beans and squash) that he said is known among Indigenous peoples wherever those ingredients grow. The story of the Three Sisters, he said, is not only a recipe but a way to teach sustainability practices, such as the preservation of water. “It’s just a great metaphor for what we need to do as a society and as a people across the world,” Mr. Monteith said.

Using Neo4j, a graph database management system, the team broke the recipes down into components (title, ingredients, instructions and related stories) and tagged them with information, like the tribe of origin or whether the recipe was contemporary or historical, or had roots in folklore. This data set was then entered into Dialogflow, a natural language processing platform, so it could be fed into a chatbot — in this case, Sina Storyteller, the Siri-like conversational agent designed by IVOW. Currently, anyone can interact with the early version through Google Assistant.
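
As a rough sketch of that pipeline, the snippet below decomposes one recipe into the components described (title, ingredients, related story, tribe of origin, era) and loads it into a graph database. The node labels, property names and Cypher schema are illustrative assumptions, not IVOW's actual data model.

```python
# A minimal sketch of decomposing one recipe for a knowledge graph like the IKG.
# Node labels, property names and the Cypher schema are illustrative
# assumptions, not IVOW's actual data model.

recipe = {
    "title": "Three Sisters Stew",
    "ingredients": ["corn", "beans", "squash"],
    "instructions": "Simmer the corn, beans and squash together until tender.",
    "related_story": "The Three Sisters teach sustainable, symbiotic planting.",
    "tribe_of_origin": "Eastern Band of Cherokee Indians",  # illustrative tag
    "era": "contemporary",  # or "historical"
    "folklore_roots": True,
}

# Hypothetical Cypher that loads one recipe and its tags into a graph database
# such as Neo4j; parameter names match the dict above.
CYPHER = """
MERGE (r:Recipe {title: $title, era: $era, folklore_roots: $folklore_roots})
MERGE (t:Tribe {name: $tribe_of_origin})
MERGE (r)-[:ORIGINATES_FROM]->(t)
MERGE (s:Story {text: $related_story})
MERGE (r)-[:HAS_STORY]->(s)
WITH r
UNWIND $ingredients AS ing
  MERGE (i:Ingredient {name: ing})
  MERGE (r)-[:USES]->(i)
"""

if __name__ == "__main__":
    # Requires `pip install neo4j` and a running database; the connection
    # details below are placeholders.
    from neo4j import GraphDatabase
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        session.run(CYPHER, **recipe)
    driver.close()
```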

The tools and techniques to create the IKG were designed to be basic enough that anyone, not just those with a background in computer science, could use them. And IKG uses only information that is widely available or that the team had permission to use from their own tribes, bands and nations.

There are challenges, though. The process is labor intensive and expensive; IVOW is a self-funded enterprise, and the work of the collaborators is voluntary.

“It’s a little bit of a chicken and an egg problem because you need the data to really build a big system that demonstrates value,” Mr. Yarlott said. “But to get all the data, you need money, which only really starts to come when people realize that there’s substantial value here.”

Mr. Liang said that while this kind of “artisanal” data is important, it is difficult to scale, and that more emphasis should be placed on improving foundation models — models that are trained on large-scale data sets.

For years, computer scientists have warned Ms. Ardalan that cultivating this sort of data is a tedious process. She doesn’t disagree, which is why she says the time to start is now.

“The future is going to be these cultural engines that communities create that are relevant to their heritage,” she said, adding that the notion that A.I. will be all-encompassing is wrong. “Machines cannot replace humans. They can only be there with us around the campfire and inform us.”

Source: Helping A.I. to Learn About Indigenous Cultures

Automation could make 12 million jobs redundant. Here’s who’s most at risk

Although a European study, it is likely more broadly applicable to Canada and other countries, something that advocates of current and higher levels of immigration to Canada understate or ignore:

Up to a third of job roles in Europe could be made redundant by automation over the next 20 years as companies battle to increase productivity and fill skills gaps created by an ageing population, according to Forrester. 

The tech analyst’s latest Future of Jobs Forecast estimates that as many as 12 million jobs could be lost to automation across Europe by 2040, primarily impacting workers in industries such as retail, food services, and leisure and hospitality.  

Mid-skill labour jobs that consist of simple, routine tasks are most at risk from automation, the report said. These roles make up 38% of the workforce in Germany, 34% of the workforce in France, and 31% of the workforce in the UK. 

In total, 49 million jobs in ‘Europe-5’ (France, Germany, Italy, Spain, and the UK) could potentially be automated, according to Forrester. This jeopardizes casual work, such as zero-hour contracts in the UK, and low-paid, part-time jobs where workers hold “little bargaining power”. 

A combination of pressures is prompting businesses to ramp up their investments in automation, particularly in countries where industry, construction and agriculture are big business. 

While small and medium enterprises (SMEs) with up to 50 workers capture two-thirds of European employment, their productivity lags that of larger corporations, according to Forrester. In manufacturing, for example, ‘microenterprises’ are 40% less productive than large companies. 

A five-year study of robot adoption at French manufacturing firms found that robots lowered production overheads by reducing labor costs by between 4% and 6%. 

Business leaders also see automation technology as a means of filling the gaps created by Europe’s ageing population, which Forrester describes as “a demographic time bomb.” By 2050, Europe will have 30 million fewer people of working age than in 2020, the analyst said. 

Productivity lost to the pandemic is seeing organizations look to machine processes to recoup efficiency, while industries that were already using automation to grow their revenues have invested even more heavily in the technology to increase service delivery and mitigate pandemic restrictions. 

“Lost productivity due to COVID-19 is forcing companies globally to automate manual processes and improve remote work,” said Michael O’Grady, principal forecast analyst at Forrester. 

“European organisations are also in a particularly strong position to embrace automation because of Europe’s declining working-age population and the high number of routine, low-skilled jobs that can be easily automated.” 

While many low-skilled and routine roles face being replaced by machine processes, nine million new jobs are forecast to be created in Europe by 2040 in emerging sectors like green energy and smart cities, Forrester said.

This means that, all told, only three million jobs will truly be ‘lost’ to automation by 2040 – the caveat being that people who lose jobs may not find new ones.

Business leaders outside of Europe are also exploring the role of automation in bridging skills shortages and speeding up processes in the enterprise. 

Polling of 500 C-suite executives and senior management personnel by automation platform UiPath found that 78% were likely to increase their investment in automation tools to help them address labor shortages. Business leaders are turning to automation because they are struggling to find new talent (74%). 

At the same time, 85% of survey respondents said incorporating automation and automation training into their organization would help them attract new talent and hold onto existing staff. Meanwhile, leaders said automation was already helping them to save time (71%), improve productivity (63%) and save money (59%). 

Academic forecasts of jobs that could be lost to automation vary wildly. The European Parliament’s 2021 ‘Digital automation and the future of work’ report found that estimates varied from as little as 9% to almost half (47%). 

“Machine-learning experts often drive this uncertainty,” said Forrester. 

“They imagine future computer capabilities without understanding enterprise technology adoption constraints and the cultural barriers within an organization that resist change.” 

Source: https://www.zdnet.com/article/automation-could-make-12-million-jobs-redundant-heres-whos-most-at-risk/

Canada is refusing more study permits. Is new AI technology to blame?

Given the high volumes (from which immigration lawyers and consultants benefit), expanded use of technology and templates is inevitable and necessary, although thorough review and safeguards are also needed.

An alternate narrative: given reporting on abuse and exploitation of international students and of the program itself (The reality of life in Canada for international students), perhaps a system generating more refusals has merit:

Soheil Moghadam applied twice for a study permit for a postgraduate program in Canada, only to be refused with an explanation that read like a templated answer.

The immigration officer was “not satisfied that you will leave Canada at the end of your stay,” he was told.

After a third failed attempt, Moghadam, who already has a master’s degree in electronics engineering from Iran, challenged the refusal in court and the case was settled. He’s now studying energy management at the New York Institute of Technology in Vancouver.

His Canadian lawyer, Zeynab Ziaie, said that in the past couple of years, she has noticed a growing number of study permit refusals like Moghadam’s. The internal notes made by officers reveal only generic analyses based on cookie-cutter language and often have nothing to do with the particular evidence presented by the applicant.

“We’re seeing a lot of people that previously would have been accepted or have really what we consider as complete files with lots of evidence of financial support, lots of ties to their home country. These kinds of files are just being refused,” said Ziaie, who added that she has seen more than 100 of these refusals in her practice in the past two years.

At the centre of these concerns is a Microsoft Excel-based system called Chinook.

Its existence came to light during a court case involving Abigail Ocran, a woman from Ghana who was refused a study permit by the Immigration Department.

Government lawyers in that case filed an affidavit by Andie Daponte, director of international-network optimization and modernization, who detailed the working and application of Chinook.

That affidavit has created a buzz among those practising immigration law, who see the new system — the department’s transition to artificial intelligence — as a potential threat to quality decision making, and its arrival as the harbinger of more troubling AI technology that could transform how immigration decisions are made in this country.

All eyes are now on the pending decision of the Ocran case to see if and how the court will weigh in on the use of Chinook. 


Chinook was implemented in March 2018 to help the Immigration Department handle an exponential growth in cases within its existing, and antiquated, Global Case Management System (GCMS).

Between 2011 and 2019, before everything slowed down during the pandemic, the number of visitor visa applications skyrocketed by 109 per cent, with the caseload of applications for overseas work permits and study permits up by 147 per cent and 222 per cent, respectively.

In 2019 alone, Daponte said in his affidavit, Canada received almost 2.2 million applications from prospective visitors, in addition to 366,000 from people looking to work here and 431,500 from would-be international students.

Meanwhile, the department’s 17-year-old GCMS system, which requires officers to open multiple screens to download different information pertaining to an application, has not caught up. Each time decision-makers move from screen to screen they must wait for the system to load, causing significant delays in processing, especially in countries with limited network bandwidth.

Chinook was developed in-house and implemented “to enhance efficiency and consistency, and to reduce processing times,” Daponte said.

As a result, he said, migration offices have generally seen an increase of between five per cent and 35 per cent in the number of applications they have been able to process.

Here’s how Chinook works: an applicant’s information is extracted from the old system and populated in a spreadsheet, with each cell on the same row filled with data from that one applicant — such as name, age, purpose of visit, date of receipt of the application and previous travel history.

Each spreadsheet contains content from multiple applicants and is assigned to an officer to enable them to use “batch processes.”

After the assessment of an application is done, the officer will click on the decision column to prompt a pop-up window to record the decision, along with a notes generator if they’re giving reasons in the case of a refusal.

(An officer can refuse or approve an application, and sometimes hold it for further information.)

When done, decision-makers click a button labelled “Action List,” which organizes data for ease of transfer into the old system. It presents the decision, reasons for refusal if applicable, and any “risk indicators” or “local word flags” for each application.

The spreadsheets are deleted daily after the data transfer, to address privacy concerns.
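
For illustration, the sketch below mimics the batch-spreadsheet workflow Daponte describes: one row per applicant, a decision column, a notes generator that assembles refusal reasons from standard language, and an "Action List" that gathers decisions and flag words for transfer back to GCMS. All field names, template text and flag words are assumptions made for the example, not IRCC's actual implementation.

```python
# A minimal sketch of a Chinook-like batch workflow: one row per applicant,
# a decision column, a notes generator built from standard language, and an
# "Action List" for transfer back to the case system. Field names, templates
# and flag words are illustrative assumptions only.
from dataclasses import dataclass, field

REFUSAL_TEMPLATES = {
    "ties": "I am not satisfied that you will leave Canada at the end of your stay.",
    "funds": "I am not satisfied that you have sufficient funds for your stay.",
}

LOCAL_FLAG_WORDS = {"wedding", "funeral"}  # time-sensitive travel gets priority

@dataclass
class ApplicantRow:
    name: str
    age: int
    purpose_of_visit: str
    travel_history: str
    decision: str = ""  # "approved", "refused", or "hold"
    refusal_reasons: list = field(default_factory=list)

def generate_notes(selected_keys):
    """Assemble refusal notes from standard language (the 'notes generator')."""
    return " ".join(REFUSAL_TEMPLATES[k] for k in selected_keys)

def action_list(rows):
    """Organize decisions, reasons and flags for transfer back to the case system."""
    out = []
    for r in rows:
        flags = sorted(w for w in LOCAL_FLAG_WORDS if w in r.purpose_of_visit.lower())
        out.append({"name": r.name, "decision": r.decision,
                    "reasons": generate_notes(r.refusal_reasons), "flags": flags})
    return out

# Example batch of two applications processed in one spreadsheet.
batch = [
    ApplicantRow("A. Applicant", 28, "attend a family wedding", "prior US visa"),
    ApplicantRow("B. Applicant", 39, "postgraduate study", "none"),
]
batch[0].decision = "approved"
batch[1].decision, batch[1].refusal_reasons = "refused", ["ties"]
print(action_list(batch))
```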

While working on the spreadsheet, said Daponte, decision-makers continue to have access to paper applications or electronic documents and GCMS if needed.

“Chinook was built to save decision-makers time in querying GCMS for application information and to allow for the review of multiple applications,” Daponte noted.

However, critics are concerned that the way the system is set up may be guiding the officers toward certain conclusions, giving them the option of not reviewing all the material presented in each case, and that it effectively shields much of the decision making from real scrutiny.

According to Daponte’s court affidavit, the notes generator presents standard language that immigration officers may select, review and modify to fit the circumstances of an application in preparing reasons for refusal. The function is there to “assist them in the creation of reasons.”

Ziaie believes that explains the templated reasons for refusals she’s been seeing.

“These officers are looking at a spreadsheet of potentially 100 different applicants. And those names don’t mean anything to the officers. You could mix up rows. You could easily make errors,” said the Toronto lawyer.

“There’s no way to go back and check that because these decisions end up with very similar notes that are generated right when they’re refused. So my concern is about accountability. Every time we have a decision, it has to make sense. We don’t know if they make mistakes.”

That’s why she and other lawyers worry the surge of study permit refusals is linked to the implementation of Chinook. 

In fact, that question was put to Daponte during the cross-examination in the Ocran case by the Ghanaian student’s lawyer, Edos Omorotionmwan.

Immigration data obtained by Omorotionmwan showed the refusal rate for study permit applications had gone from 31 per cent in 2016 to 34 per cent in 2018, the year Chinook was launched. The trend continued in 2019, to 40 per cent, and reached 53 per cent last year.

“Is there a system within the Chinook software requiring some oversight function where there is some other person to review what a visa officer has come up with before that decision is handed over to the applicants?” asked Omorotionmwan.

“Within Chinook, no,” replied Daponte, who also said there’s no mechanism within this platform to track if an officer has reviewed all the support documents and information pertaining to an applicant’s file in the GCMS data.


“This idea of using portals and technology to speed up the way things are done is the reality of the future,” said Vancouver-based immigration lawyer Will Tao, who has tracked the uses of Chinook and blogged about it.

“My concern as an advocate is: who did this reality negatively impact and what systems does it continue to uphold?”

Tao said the way the rows of personal information are selected and set out in the Chinook spreadsheet “disincentivizes” officers from going into the actual application materials and support documents, out of convenience.

“And then the officers are supposed to use those notes generators to justify their reasoning and not go into some of the details that you would like to see to reflect that they actually reviewed the facts of the case. The biggest problem I have is that this system has had very limited oversight,” he said.

“It makes it easier to refuse because you don’t have to look at all the facts. You don’t have to go through a deep, thoughtful analysis. You have a refusal notes generator that you can apply without having read the detailed study plans and financial documents.”

He points to Chinook’s built-in function that flags “risk factors” — such as an applicant’s occupation and intended employer’s information — for inconsistency in an application, as well as “local flag words” to triage and ensure priority processing of time-sensitive applications to attend a wedding or a funeral.

Those very same flag words used in the spreadsheet can also be misused to mark a particular group of applicants based on their personal profiles and pick them out for refusals, said Tao.

In 2019, in a case involving the revocation of the citizenship of the Canadian-born sons of two Russian spies, the Supreme Court of Canada made a landmark ruling that helps guide judges in reviewing the decisions of immigration officials.

In the unanimous judgment, Canada’s highest court ruled it would be “unacceptable for an administrative decision maker to provide an affected party formal reasons that fail to justify its decision, but nevertheless expect that its decision would be upheld on the basis of internal records that were not available to that party.”

Tao said he’s closely watching how the Ocran decision is going to shed light on the application of Chinook in the wake of that Supreme Court of Canada ruling over the reasonableness standard.

“Obviously, a lot of these applications have critical points that they get refused on and with the reasons being template and standard, it’s hard for reviewers to understand how that came to be,” he said.

In a response to the Star’s inquiry about the concerns raised about Chinook, the Immigration Department said the tool is simply to streamline the administrative steps that would otherwise be required in the processing of applications to improve efficiency.

“Decision makers are required to review all applications and render their decisions based on the information presented before them,” said spokesperson Nancy Caron.

“Chinook does not fundamentally change the way applications are processed, and it is always the officer that gives the rationale for the decisions and not the Chinook tool.”

For immigration lawyer Mario Bellissimo, Chinook is another step in the Immigration Department’s move toward digitalization and modernization.

Ottawa has been using machine learning technology since 2018 to triage temporary resident visa applications from China and India, using a “set of rules derived from thousands of past officer decisions” then deployed by the technology to classify applications into high, medium and low complexity.

Cases identified as low complexity and low risk automatically receive positive eligibility decisions, allowing officers to review these files exclusively on the basis of admissibility. This enables officers to spend more time scrutinizing the more complex files.
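
A minimal sketch of that triage approach might look like the following: rules distilled from past decisions sort applications into complexity tiers, and only low-complexity files receive an automatic positive eligibility finding, with officers reviewing admissibility and everything else. The rules, fields and thresholds shown are assumptions for illustration, not the department's actual model.

```python
# A sketch of rule-based triage into complexity tiers, where only low-complexity
# files get an automatic positive eligibility finding and the officer reviews
# admissibility alone. Rules and fields are illustrative assumptions.

def triage(application: dict) -> str:
    """Return 'low', 'medium', or 'high' complexity for a visa application."""
    score = 0
    if application.get("previous_refusal"):
        score += 2
    if not application.get("proof_of_funds"):
        score += 2
    if application.get("prior_travel_to_canada"):
        score -= 1
    if score <= 0:
        return "low"
    return "medium" if score <= 2 else "high"

def route(application: dict) -> dict:
    tier = triage(application)
    if tier == "low":
        # Automatic positive eligibility; an officer still reviews admissibility.
        return {"eligibility": "positive (automated)",
                "officer_review": "admissibility only"}
    return {"eligibility": "pending", "officer_review": "full file"}

print(route({"previous_refusal": False, "proof_of_funds": True,
             "prior_travel_to_canada": True}))
```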

Chinook, said Bellissimo, has gone beyond triage. He contends it facilitates the decision-making process by officers.

The use of templated responses from the notes generator makes the refusal reasons “devoid of meaning,” he noted.

“Eventually, do you see age discriminators put into place for study permits when anyone over the age of 30 is all automatically streamed to a different tier because they are less likely bona fide students? This is the type of stuff we need to know,” Bellissimo explained.

“When they’re just pulling standard refusal reasons and just slapping it in, then those decisions become more difficult to understand and more difficult to challenge. Who made the decision? Was technology used? And that becomes a problem.”

He said immigration officials need to be accountable and transparent to applicants about the use of these technologies before they are rolled out, not after they become an issue.

Petra Molnar, a Canadian expert specializing in migration and technology, said automated decision-making and artificial intelligence tools are difficult to scrutinize because they are often very opaque, including how they are developed and deployed and what review mechanisms, if any, exist once they are in use.

“Decisions in the immigration and refugee context have lifelong and life-altering ramifications. People have the right to know what types of tools are being used against them and how they work, so that we can meaningfully challenge these types of systems.”

Ziaie, the lawyer, said she understands the tremendous pressure on front-line immigration officers, but if charging a higher application fee — a study permit application now costs $150 — can help improve the service and quality of decisions, then that should be implemented.

“They should allocate a fair amount of that revenue toward trying to hire more people, train their officers better and give them more time to review the files so they actually do get a better success rate,” she said. “By that, I mean fewer files going to Federal Court.”

As a study permit applicant, Moghadam said it’s frustrating not to understand how an immigration officer reaches a refusal decision because so much is at stake for the applicant.

It took him two extra years to finally obtain his study permit and pursue an education in Canada, to say nothing of the additional application fees and hefty legal costs.

“Your life is put on hold and your future is uncertain,” said the 39-year-old, who had a decade of work experience in engineering for both Iranian and international companies.

“There’s the time, the costs, the stress and the anxiety.”

Source: https://www.thestar.com/news/canada/2021/11/15/canada-is-refusing-more-study-permits-is-new-ai-technology-to-blame.html

Rise of the Robots Speeds Up in Pandemic With U.S. Labor Scarce

Of note to Canadian policy makers as well, given that this trend will cross the border and needs to be taken into account in immigration policy:

American workers are hoping that the tight pandemic labor market will translate into better pay. It might just mean robots take their jobs instead.

Labor shortages and rising wages are pushing U.S. business to invest in automation. A recent Federal Reserve survey of chief financial officers found that at firms with difficulty hiring, one-third are implementing or exploring automation to replace workers. In earnings calls over the past month, executives from a range of businesses confirmed the trend.

Domino’s Pizza Inc. is “putting in place equipment and technology that reduce the amount of labor that is required to produce our dough balls,” said Chief Executive Officer Ritch Allison.

Mark Coffey, a group vice president at Hormel Foods Corp., said the maker of Spam spread and Skippy peanut butter is “ramping up our investments in automation” because of the “tight labor supply.”

The mechanizing of mundane tasks has been underway for generations. It’s made remarkable progress in the past decade: The number of industrial robots installed in the world’s factories more than doubled in that time, to about 3 million. Automation has been spreading into service businesses too.

The U.S. has lagged behind other economies, especially Asian ones, but the pandemic might trigger some catching up. With some 10.4 million open positions as of August, and record numbers of Americans quitting their jobs, the difficulty of finding staff is adding new incentives.

Ametek Inc. makes automation equipment for industrial firms, such as motion trackers used in settings ranging from steel and lumber mills to packaging systems. Chief Executive Officer David A. Zapico says that part of the company is “firing on all cylinders.” That’s because “people want to remove labor from the processes,” he said on an earnings call. “In some places, you can’t hire labor.”

Unions have long seen automation as a threat. At U.S. ports, which lag their global peers in technology and are currently at the center of a major supply-chain crisis, the International Longshoremen’s Association has vowed to fight it.

Companies that say they want to automate “have one goal in mind: to eliminate your job, and put more money in their pockets,” ILA President Harold Daggett said in a video message to a June conference. “We’re going to fight this for 100 years.”

Some economists have warned that automation could make America’s income and wealth gaps worse.

“If it continues, labor demand will grow slowly, inequality will increase, and the prospects for many low-education workers will not be very good,” says Daron Acemoglu, a professor at the Massachusetts Institute of Technology, who testified Wednesday at a Senate hearing on the issue.

That’s not an inevitable outcome, Acemoglu says: Scientific knowhow could be used “to develop technologies that are more complementary to workers.” But, with research largely dominated by a handful of giant firms that spend the most money on it, “this is not the direction the technology is going currently.”

Knightscope makes security robots that look a bit like R2-D2 from Star Wars, and can patrol sites such as factory perimeters. The company says it’s attracting new clients who are having trouble hiring workers to keep watch. Its robots cost from $3.50 to $7.50 an hour, according to Chief Client Officer Stacy Stephens, and can be installed a month after signing a contract.

One new customer is the Los Angeles International Airport, one of the busiest in the U.S. Soon, Knightscope robots will be monitoring some of its parking lots.

They are “supplementing what we have in place and are not replacing any human services,” said Heath Montgomery, the airport’s director of public relations. “It’s another way we are providing exceptional guest experiences.”

Source: Rise of the Robots Speeds Up in Pandemic With U.S. Labor Scarce

AI’s anti-Muslim bias problem

Of note (and unfortunately, not all that surprising):

Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”

Which word would you add? “Bar,” maybe?

It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.”

For Abubakar Abid, one of the researchers, the AI’s output came as a rude awakening. “We were just trying to see if it could tell jokes,” he recounted to me. “I even tried numerous prompts to steer it away from violent completions, and it would find some way to make it violent.”

Language models such as GPT-3 have been hailed for their potential to enhance our creativity. Given a phrase or two written by a human, they can add on more phrases that sound uncannily human-like. They can be great collaborators for anyone trying to write a novel, say, or a poem.
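For those who want to see how straightforward this kind of probing is, here is a minimal sketch. GPT-3 itself is available only through OpenAI’s hosted API, so the sketch uses the openly downloadable GPT-2 model via the Hugging Face transformers library as a stand-in; its outputs will differ from what the Stanford researchers observed, and the prompt is simply the one quoted above.

# Minimal sketch of probing an open text-generation model with a fixed prompt.
# GPT-2 is used as a freely available stand-in for GPT-3; results will differ.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Two Muslims walked into a"
completions = generator(prompt, max_length=25,
                        num_return_sequences=5, do_sample=True)

for c in completions:
    # Each item contains the prompt plus the model's continuation.
    print(c["generated_text"])

Repeating a prompt like this many times and tallying the kinds of completions it produces is one simple way researchers surface the associations a model has absorbed from its training data.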

Source: AI’s anti-Muslim bias problem

Misattributed blame? Attitudes toward globalization in the age of automation

Interesting study and findings:

Many, especially low-skilled workers, blame globalization for their economic woes. Robots and machines, which have led to job market polarization, rising income inequality, and labor displacement, are often viewed much more forgivingly. This paper argues that citizens have a tendency to misattribute blame for economic dislocations toward immigrants and workers abroad, while discounting the effects of technology. Using the 2016 American National Elections Studies, a nationally representative survey, I show that workers facing higher risks of automation are more likely to oppose free trade agreements and favor immigration restrictions, even controlling for standard explanations for these attitudes. Although pocket-book concerns do influence attitudes toward globalization, this study calls into question the standard assumption that individuals understand and can correctly identify the sources of their economic anxieties. Accelerated automation may have intensified attempts to resist globalization.
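For the statistically inclined, the analysis described in the abstract amounts to a regression of globalization attitudes on a measure of automation exposure, with the usual demographic controls. Below is a hedged sketch of that kind of model; the data file and variable names are hypothetical placeholders, not the actual ANES 2016 codings or the author’s specification.

# Illustrative sketch only: logistic regression of trade attitudes on
# occupational automation risk with demographic controls. File and column
# names are hypothetical, not the real ANES 2016 variables.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("anes_2016_subset.csv")  # hypothetical prepared extract

model = smf.logit(
    "opposes_free_trade ~ automation_risk + education + income + age + union_member",
    data=df,
).fit()

print(model.summary())  # a positive automation_risk coefficient would match the paper's finding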

Source: https://www.cambridge.org/core/journals/political-science-research-and-methods/article/misattributed-blame-attitudes-toward-globalization-in-the-age-of-automation/29B08295CEAC4A4A89991E064D0284FF

Facebook Apologizes After Its AI Labels Black Men As ‘Primates’

Ouch!

Facebook issued an apology after its artificial intelligence software asked users watching a video featuring Black men if they wanted to see more “videos about primates.” The social media giant has since disabled the topic recommendation feature and says it’s investigating the cause of the error, but the video had been online for more than a year.

A Facebook spokesperson told The New York Times, which first reported the story on Friday, that the automated prompt was an “unacceptable error” and apologized to anyone who came across the offensive suggestion.

The video, uploaded by the Daily Mail on June 27, 2020, documented an encounter between a white man and a group of Black men who were celebrating a birthday. The clip captures the white man allegedly calling 911 to report that he is “being harassed by a bunch of Black men,” before cutting to an unrelated video that showed police officers arresting a Black tenant at his own home.

Former Facebook employee Darci Groves tweeted about the error on Thursday after a friend clued her in on the misidentification. She shared a screenshot of the video that captured Facebook’s “Keep seeing videos about Primates?” message.

“This ‘keep seeing’ prompt is unacceptable, @Facebook,” she wrote. “And despite the video being more than a year old, a friend got this prompt yesterday. Friends at [Facebook], please escalate. This is egregious.”

This is not Facebook’s first time in the spotlight for major technical errors. Last year, Chinese President Xi Jinping’s name appeared as “Mr. S***hole” on its platform when translated from Burmese to English. The translation hiccup seemed to be Facebook-specific, and didn’t occur on Google, Reuters had reported.

However, in 2015, Google’s image recognition software classified photos of Black people as “gorillas.” Google apologized and removed the labels gorilla, chimp, chimpanzee and monkey, words that remained censored more than two years later, Wired reported.

Facebook could not be reached for comment.

Source: Facebook Apologizes After Its AI Labels Black Men As ‘Primates’