Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Interesting and significant study. A comparable study of automated decision-making systems that have been successful in minimizing injustice and inequality would be helpful, as would recognition that automated systems can improve decision consistency, as Kahneman and others demonstrated in Noise.

As these systems continue to grow to manage the increasing number of decisions required, greater care in their design and impacts will of course be necessary. But it is a mistake to assume that all such systems are worse than human decision-making:

In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms. Alston was worried for good reason. Research shows that ADS can be used in ways that discriminate, exacerbate inequality, infringe upon rights, sort people into different social groups, wrongly limit access to services and intensify surveillance.

For example, families have been bankrupted and forced into crises after being falsely accused of benefit fraud. 

Researchers have identified how facial recognition systems and risk assessment tools are more likely to wrongly identify people with darker skin tones and women. These systems have already led to wrongful arrests and misinformed sentencing decisions.

Often, people only learn that they have been affected by an ADS application when one of two things happen: after things go wrong, as was the case with the A-levels scandal in the United Kingdom; or when controversies are made public, as was the case with uses of facial recognition technology in Canada and the United States.

Automated problems

Greater transparency, responsibility, accountability and public involvement in the design and use of ADS are important to protect people’s rights and privacy. There are three main reasons for this: 

  1. these systems can cause a lot of harm;
  2. they are being introduced faster than necessary protections can be implemented; and
  3. there is a lack of opportunity for those affected to make democratic decisions about whether they should be used and, if so, how.

Our latest research project, Automating Public Services: Learning from Cancelled Systems, provides findings aimed at helping prevent harm and contribute to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies. 

Researching the factors and rationales leading to cancellation of ADS systems helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.

Reasons for cancellation

There are a range of factors that influence decisions to cancel the uses of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action. 

Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.

A table showing the factors influencing the decision to cancel an ADS system.
There are a range of factors that influence decisions to cancel the uses of ADS. (Data Justice Lab), Author provided

Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.

In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step to greater transparency. Some countries, such as the U.K., have stated plans to do so, while other countries, like Canada, have yet to move in this direction.

Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.

Accountability and transparency

ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation. 

This engagement should include the option for community members to decide areas where they do not want ADS to be used. Examples of good government practice can include taking the time to ensure independent expert reviews and impact assessments that focus on equality and human rights are carried out. 

A list of recommendations for governments using ADS.
Governments can take several different approaches to implement ADS systems in a more accountable manner. (Data Justice Lab), Author provided

We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and redress wrong assessments. 

There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible. 

Government agencies need to have the resources and skills to thoroughly examine how they procure ADS systems.

A politics of care

All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights. 

Key questions need to be asked about how the use of ADS creates blind spots: scoring and sorting systems increase the distance between administrators and the people they are meant to serve, and can oversimplify, infer guilt, wrongly target and stereotype people through categorization and quantification.

Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation and being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community review.

Source: Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Automating Public Services: Learning from Cancelled Systems

Harris: The future of malicious artificial intelligence applications is here

More on some of the more fundamental risks of AI:

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Russian state-owned facility. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. Presidential election. The multimillion dollar operation would reach 120 million people through Facebook alone. 

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia’s “troll factory” — the Russian Internet Research Agency — continues to fuel online radicalization and extremism. Thanks in no small part to their efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI — and despite some positive consequences, it’s also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers created a text generation system called GPT-3. GPT-3 can produce text that’s indistinguishable from human writing — including viral articles, tweets, and other social media posts. GPT-3 was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that AI researchers could follow to radically accelerate AI progress, and build much more capable, humanlike systems. 

But it also opened a Pandora’s box of malicious AI applications. 

Text-generating AIs — or “language models” — can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and be leveraged to create automated bot armies and convincing fake news articles. 

This isn’t a distant future concern: it’s happening already. As early as 2020, Chinese efforts to interfere with Taiwan’s national election involved “the instant distribution of artificial-intelligence-generated fake news to social media platforms.”

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures based on any text prompt, have become reality this year for the first time. As AI-generated content becomes better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write better phishing emails than humans, and have code-writing capabilities that outperform human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises, rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can’t invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia. 

Our defence and security agencies could follow the lead of the U.K.’s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We’ve got a lot to rethink, and now is the time to get started.

Source: The future of malicious artificial intelligence applications is here

Trudel: Intelligence artificielle discriminatoire

Somewhat shallow analysis, as the only area in which IRCC is using AI is visitor visas, not international students or other categories (unless that has changed). So Trudel’s argument may be based on a false understanding.

While concerns regarding AI are legitimate and need to be addressed, bias and noise are common to human decision making.

And differences in outcomes don’t necessarily reflect bias and discrimination, but they do signal potential issues:

International francophone students are being treated in a way that bears all the hallmarks of systemic discrimination. Africans, especially francophones, receive a disproportionate share of refusals of permits to stay in Canada for study purposes. The artificial intelligence (AI) systems used by federal immigration authorities have been blamed for these systemic biases.

MP Alexis Brunelle-Duceppe noted this month that “francophone universities top the list […] for the number of study applications refused. It is not the universities themselves that refuse them, but the federal government. For example, applications from international students were refused at a rate of 79 per cent at the Université du Québec à Trois-Rivières and 58 per cent at the Université du Québec à Chicoutimi. As for McGill University, […] the figure is 9 per cent.”

In February, the vice-rector of the University of Ottawa, Sanni Yaya, observed that “over the past few years, many permit applications processed by Immigration, Refugees and Citizenship Canada have been refused for reasons that are often incomprehensible, and have taken abnormally long to process.” Yet these are students with scholarships guaranteed by their institution and strong files. The vice-rector rightly asks whether there is an implicit bias on the part of the officer responsible for their assessment, convinced that they do not intend to leave Canada once their study permit has expired.

In short, there is a body of evidence pointing to the conclusion that the computerized decision-support tools used by federal authorities amplify systemic discrimination against francophone students from Africa.

Flawed tools

This fiasco should alert us to the biases amplified by AI tools. Everyone is affected, because these technologies are an integral part of daily life. Phones with facial recognition, home assistants and even “smart” vacuum cleaners, not to mention the systems embedded in many vehicles, all run on AI.

Professor Karine Gentelet and student Lily-Cannelle Mathieu explain, in an article published on the website of the Observatoire international sur les impacts sociétaux de l’IA et du numérique, that AI technologies, although often presented as neutral, are shaped by the social environment from which they emerge. They tend to reproduce and even amplify prejudices and inequitable power relations.

The researchers note that several studies have shown that, when not adequately regulated, these technologies exclude racialized populations, overrepresent them within social categories deemed “problematic,” or perform poorly when applied to racialized individuals. They can accentuate discriminatory tendencies in a range of decision-making processes, such as policing, medical diagnoses, court decisions, hiring, school admissions and even the setting of mortgage rates.

A necessary law

Last June, the federal Minister of Innovation, Science and Industry introduced Bill C-27 to regulate the use of artificial intelligence technologies. The bill would impose transparency and accountability obligations on companies that make significant use of AI technologies.

The bill proposes to prohibit certain conduct involving AI systems that can cause serious harm to individuals. It contains provisions to hold companies that rely on these technologies accountable. The law would ensure appropriate governance and oversight of AI systems in order to prevent physical or psychological harm, or economic loss, to individuals.

It also seeks to prevent biased output that draws an unjustified adverse distinction on one or more of the grounds of discrimination prohibited by human rights legislation. Users of AI technologies would be required to assess and mitigate the risks inherent in their systems. The bill would establish transparency obligations for systems with the potential to significantly affect individuals. Those who make AI systems available would be required to publish clear explanations of how they operate, as well as of the decisions, recommendations or predictions they make.

The discriminatory treatment experienced by many students from francophone African countries illustrates the systemic biases that must be identified, analyzed and eliminated. It is a reminder that deploying AI technologies carries a significant risk of reproducing the problematic tendencies of existing decision-making processes. To address such risks, we need legislation imposing strong transparency and accountability requirements on both companies and public authorities. Above all, we must let go of the myth of the supposed “neutrality” of these technical tools.

Source: Intelligence artificielle discriminatoire

Roose: We Need to Talk About How Good A.I. Is Getting

Of note. The world is going to become more complex, and the potential for AI in many fields will continue to grow, with these tools and programs increasingly able to replace, at least in part, professionals including government workers:

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.

OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.

Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”

DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)

What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
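To make the “diffusion” idea a bit more concrete, here is a minimal, self-contained sketch of the iterative-refinement loop it describes. It is only a toy: the “denoiser” below simply nudges pixels toward a fixed target array, whereas a real system like DALL-E 2 uses a trained neural network conditioned on the text prompt to predict the clean image at each step.

    # Toy sketch of diffusion-style sampling: start from random pixels and refine
    # them step by step. A real model would *predict* the clean image from the
    # noisy one and the text prompt; here the prediction is just a fixed target.
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.random((8, 8))      # stand-in for "the image the prompt describes"
    x = rng.normal(size=(8, 8))      # step 0: a random series of pixels

    steps = 50
    for t in range(steps):
        predicted_clean = target                     # placeholder for the model's prediction
        x = x + 0.1 * (predicted_clean - x)          # move a little toward the prediction
        x = x + rng.normal(scale=0.05 * (1 - t / steps), size=x.shape)  # shrinking noise

    print("mean absolute error vs. target:", round(float(np.abs(x - target).mean()), 3))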

DALL-E 2 got a lot of attention when it was announced this year, and rightfully so. It’s an impressive piece of technology with big implications for anyone who makes a living working with images — illustrators, graphic designers, photographers and so on. It also raises important questions about what all of this A.I.-generated art will be used for, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.

But art is not the only area where artificial intelligence has been making major strides.

Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.

Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results.

But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible.

Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about.

But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

Or look at what’s happening with A.I.-generated text.

Only a few years ago, A.I. chatbots struggled even with rudimentary conversations — to say nothing of more difficult language-based tasks.

But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Then there’s Google’s LaMDA, an A.I. model that made headlines a couple of months ago when Blake Lemoine, a senior Google engineer, was fired after claiming that it had become sentient.

Google disputed Mr. Lemoine’s claims, and lots of A.I. researchers have quibbled with his conclusions. But take out the sentience part, and a weaker version of his argument — that LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows.

In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.

“It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”

In the past, A.I. progress was mostly obvious only to insiders who kept up with the latest research papers and conference presentations. But recently, Mr. Clark said, even laypeople can sense the difference.

“You used to look at A.I.-generated language and say, ‘Wow, it kind of wrote a sentence,’” Mr. Clark said. “And now you’re looking at stuff that’s A.I.-generated and saying, ‘This is really funny, I’m enjoying reading this,’ or ‘I had no idea this was even generated by A.I.’”

There is still plenty of bad, broken A.I. out there, from racist chatbots to faulty automated driving systems that result in crashes and injury. And even when A.I. improves quickly, it often takes a while to filter down into products and services that people actually use. An A.I. breakthrough at Google or OpenAI today doesn’t mean that your Roomba will be able to write novels tomorrow.

But the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036.

But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

“A.I. systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ms. Cotra told me. “People should take more seriously that A.I. could change things soon, and that could be really scary.”

There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.

There are also tech optimists who believe that A.I. progress is accelerating, and who want it to accelerate faster. Speeding A.I.’s rate of improvement, they believe, will give us new tools to cure diseases, colonize space and avert ecological disaster.

I’m not asking you to take a side in this debate. All I’m saying is: You should be paying closer attention to the real, tangible developments that are fueling it.

After all, A.I. that works doesn’t stay in a lab. It gets built into the social media apps we use every day, in the form of Facebook feed-ranking algorithms, YouTube recommendations and TikTok “For You” pages. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use A.I. to determine who’s eligible for loans, and police departments use it to investigate crimes.

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

It’s a cliché, in the A.I. world, to say things like “we need to have a societal conversation about A.I. risk.” There are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future.

What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.

I think three things could help here.

First, regulators and politicians need to get up to speed.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier.

We’ve seen a few efforts to close the gap — Stanford’s Institute for Human-Centered Artificial Intelligence recently held a three-day “A.I. boot camp” for congressional staff members, for example — but we need more politicians and regulators to take an interest in the technology. (And I don’t mean that they need to start stoking fears of an A.I. apocalypse, Andrew Yang-style. Even reading a book like Brian Christian’s “The Alignment Problem” or understanding a few basic details about how a model like GPT-3 works would represent enormous progress.)

Otherwise, we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings.

Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks. Right now, many of the biggest A.I. models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate P.R. or buried in inscrutable scientific papers.

Downplaying A.I. risks to avoid backlash may be a smart short-term strategy, but tech companies won’t survive long term if they’re seen as having a hidden A.I. agenda that’s at odds with the public interest. And if these companies won’t open up voluntarily, A.I. engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.

In a broad sense, most people think about A.I. narrowly as it relates to us — Will it take my job? Is it better or worse than me at Skill X or Task Y? — rather than trying to understand all of the ways A.I. is evolving, and what that might mean for our future.

I’ll do my part, by writing about A.I. in all its complexity and weirdness without resorting to hyperbole or Hollywood tropes. But we all need to start adjusting our mental models to make space for the new, incredible machines in our midst.

Source: We Need to Talk About How Good A.I. Is Getting

How VR and AI could revolutionize language training for newcomers

Innovative:

Adla Hitou is shamelessly showcasing her stellar work experience as she tries to convince the interviewer to hire her.

The Syrian newcomer answers every question that’s tossed her way.

“Give me that $3-trillion job,” she says, before bursting into laughter.

The assertive and fun-loving Hitou undertaking this mock job interview in virtual reality seems like an entirely different person from the timid mother of two who normally only whispers to others in her real-world classroom.

“I don’t feel nervous speaking in English in the virtual world because I just disappear. When I talk, I’m not afraid to make mistakes anymore. I just feel more confident,” says the 51-year-old Mississaugan, a former pharmacist who resettled in Canada in 2018 via Lebanon.

Alexander called the project’s use of VR in language learning “groundbreaking,” adding the technology appears to help participants overcome their self-consciousness about communicating in their second language.

“At this point, VR is a great equalizer. When students are using VR, they seem to feel this freedom to want to be able to speak. I think part of it is, when they’re in the VR world of it, they’re less concerned about themselves and about making mistakes, where they are actually being represented by a cartoon character.”

On this sweltering Saturday, instructor Anthony Faulkner and volunteers prepared the four female and three male adult learners on how to tackle questions at a job interview.

At first glance, there’s nothing atypical about the drill, a part of the English as a Second Language curriculum to help adult immigrants learn the language for their successful integration in their adopted country.

They talked about how to make a formal introduction of themselves, highlight their accomplishments and think on their feet when faced with the unexpected.

Sitting in a circle in the lab, Hitou — in a soft and gentle voice — told classmates in the physical world about her training as a pharmacist, her work experience as a project manager with the World Health Organization’s food and vaccination programs in Syria, and the civil war that forced her exile.

“I’m good at communication. I can take your ideas, relate the information and make a good presentation,” she said, adding personal details: “I love volunteering and do handcraft. I’ve made crochets and sold them at bazaars to raise money for charities. I am a bad seller but I have a kind heart.”

After the in-class session, with help of a team of volunteers, the participants were invited to put on their headsets, lift the hand-held controllers and enter Faulkner’s virtual office resembling an executive suite — wood panelling, rows of bookshelves and a bronze chandelier.

In an instant, Hitou, in her white hijab and blue one-piece dress, transformed into an avatar in the virtual world, revealing her long silver hair and sporting a black business suit.

“How do you deal with failures and mistakes?” asked Faulkner, standing in the middle of the classroom and moving his hand-held controllers in the air to make his avatar do a “hands-open, palms-up” gesture.

Caught by the surprise question, Hitou, behind her headset and facing a whiteboard in the physical world, confidently replied: “We are all humans. We have to learn from our mistakes and understand why we made the mistakes and failed. I do not give up.”

“This is so much more fun than learning English from books and notes in the traditional classroom,” she said later. “Now, I remember every word and thing that I see and learn in the virtual world when I go back to the real world.”

The virtual job interview is just one of many thematic VR experiences covered over the eight-week course by the research team that developed the customized scenarios with help from ENGAGE, a virtual platform that simulates the way people interact in the physical world for multi-user events, collaboration, training and education. One scene includes a dinner party at a virtual highrise loft; another involves the planning of a Canadian road trip where each newcomer is assigned to research a part of the country before going to the different booths in a virtual conference hall to make a presentation to their peers about these places and activities.

Instead of just viewing some Canadian landmarks on television as peers in a regular class might, the VR participants can take virtual tours of Niagara Falls, Toronto’s Eaton Centre and St. Lawrence Market through the VR 360 videos on YouTube VR.

Oakville’s Manar Mustafa, a computer engineer who fled war in Syria and came here in 2016, said she attended a regular English program at a newcomer settlement service agency but nothing can compare to VR learning.

“This is a perfect experience. I never used VR but everything feels so real in the VR world. I have not visited the Niagara Falls but now I have. It was right in front of me in the classroom and I didn’t even get wet,” the 36-year-old mother of four said with a chuckle.

“Initially, I felt dizzy (with motion sickness), but now I really love it. I feel very comfortable with it.” 

Her classmate, Afghan journalist Abdul Mujib Ebrahimi, who only arrived in Canada last November, said he hopes to quickly translate what he has learned from the virtual world to the real world.

“The VR experience pushes you in a real situation. You are a character and you use your imagination to communicate with others. If I don’t know a word or how to say something, I just explain it in a different way for people to understand me,” said the 27-year-old from Badakhshan. 

“I’m still trying to learn English and work with English, but I am more confident when I talk to real people.” 

Alexander cautions that it’s too early to determine the effectiveness of VR in language learning.

“We have to be careful of the novelty effect of VR. Our students are enjoying the experience maybe because they’re trying VR for the first time and they can tell their friends and family about it,” he explained.

The VR class this summer is part of a series of projects that also include participants in traditional classrooms and artificial-intelligence-assisted learning in front of a computer. The AI session will be launched in winter.

There are about 80 newcomers waiting to get in the program, according to Marwa Khobieh, executive director of the Syrian Canadian Foundation, which has been running a joint English tutoring program for newcomers with volunteers from U of T since 2017.

She concedes that VR language learning is expensive, with the required infrastructure and equipment in its infancy, but if it succeeds, there’s a huge potential to use it to accelerate the learning and ultimately the integration of new immigrants.

“If we’re able to help newcomers learn and improve their language skills in two years instead of four, it’s worth the investment,” noted Khobieh. “Technology is our future and this can change the future of language training for newcomers.”

Source: How VR and AI could revolutionize language training for newcomers

‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

There are legitimate concerns about AI bias (a problem individual decision-makers also have), but we also need to address “noise,” the variability in the decisions people make on comparable cases:

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, it rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or – in the case of lenders – assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
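The postcode example can be made concrete with a small synthetic sketch. Everything below (the feature names and numbers) is invented for illustration: the model is never shown the protected attribute, yet a correlated proxy lets it reproduce the historical disparity.

    # Synthetic illustration of proxy discrimination: the model never sees "group",
    # but "postcode" is correlated with it, so approval rates still diverge.
    # All names and numbers are made up for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20_000
    group = rng.integers(0, 2, n)                       # protected attribute (withheld from the model)
    postcode = (group + (rng.random(n) < 0.2)) % 2      # proxy strongly correlated with group
    income = rng.normal(50 + 5 * (group == 0), 10, n)   # historical income gap

    # Historical lending outcomes already encode a disadvantage for group 1.
    repaid = (income + 8 * (group == 0) + rng.normal(0, 5, n)) > 55

    X = np.column_stack([income, postcode])             # no protected attribute among the inputs
    approve = LogisticRegression().fit(X, repaid).predict(X)

    for g in (0, 1):
        print(f"group {g}: approval rate {approve[group == g].mean():.1%}")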

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva, and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting and controlling for discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.
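In code, the guarantee Matovski describes amounts to a counterfactual check: feed the model the same applicant twice, changing only the protected attribute, and require the decision to be identical. The sketch below is a generic illustration with hypothetical names, not causaLens’s actual method.

    # Generic counterfactual-consistency check (illustrative, hypothetical names):
    # flip only the protected attribute and require the decision to stay the same.
    import numpy as np

    def counterfactual_consistency(model, X, protected_col, values=(0, 1)):
        """Fraction of rows whose decision is unchanged when the protected
        attribute is swapped between the two given values."""
        X_a, X_b = X.copy(), X.copy()
        X_a[:, protected_col] = values[0]
        X_b[:, protected_col] = values[1]
        return float(np.mean(model.predict(X_a) == model.predict(X_b)))

    # Usage, assuming a fitted classifier `clf` and a feature matrix whose column 3
    # holds the protected attribute:
    #   score = counterfactual_consistency(clf, X, protected_col=3)
    #   print(f"{score:.1%} of decisions are invariant to the protected attribute")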

While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”

Source: ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Could robots take your job? How automation is changing the future of work

A reminder that immigration levels and mix should factor in trends in automation. Current high levels do not do so, nor do relaxed requirements for Temporary Foreign Workers. Neither approach will improve productivity and GDP per capita, nor will these approaches encourage Canadian firms to invest more in technology and automation:

A precursor to our automated future sits inconspicuously off Baldwin Street in Toronto’s busy Kensington Market.

The RC Coffee Robo Cafe, which juts out slightly from the brick wall by the sidewalk, bills itself as Canada’s first robotic café.

Unlike a vending machine that dispenses coffee brewed in hand-filled urns, the robotic barista makes each cup of coffee, espresso, latte and more to order, ready in just a few moments.

For Jasmine Arnold, visiting Toronto from Providence, R.I., the iced matcha prepared at RC Coffee topped drinks dispensed by a vending machine and was on par with coffee served at a chain.

While the drink went down smooth, she told Global News the experience was unique if a little jarring.

“I have mixed feelings about a robot, from a jobs perspective,” she said, expressing some discomfort about what this means for the prospects of human baristas.

After trying his own robo-poured beverage, Arnold’s partner Eric echoed her sentiments but noted that with the pandemic changing our expectations of what work can be done from where, it seemed to align with recent shifts in work.

“I think this is kind of where we’re going as a society,” he said.

Workforce shifts driven by a tight labour market and the COVID-19 pandemic are opening the door to a faster adoption of automated solutions, but at least one expert is warning that Canada might not be prepared for how quickly robotic workers are set to transform the economy.

Robots in demand in tight labour markets

Statistics Canada said Friday that though Canada shed some 31,000 jobs in July, the country’s unemployment rate remained at its lowest ever at 4.9 per cent last month. The labour market is even hotter in the U.S., with unemployment falling to 3.5 per cent in July.

This tight North American job market is driving up interest in automated solutions, says Brad Ford, vice-president of sales for KioCafé in Canada, the company that operates RC Coffee.

The company had just one RC Coffee kiosk in Toronto in the fall of 2020, which it had launched as an “experiment,” he recalls. But in the past two and a half years, it’s scaled up to five locations across the Greater Toronto Area with three more on the way.

Most storefront locations are in high-traffic neighbourhoods, but there’s also a standalone RC Coffee kiosk in the Toronto General Hospital.

Hospitals, universities and airports have been among Kio Cafe’s most interested customers, Ford says, as these locations have been unable to staff their coffee shops quickly enough to accommodate the surge in demand from the pandemic recovery.

“People have been knocking at our door trying to buy the equipment from us, especially in the U.S., where they just cannot get the staff to open up the locations,” he says.

Companies in other sectors are also increasingly embracing automation. Beyond just installing self-checkout systems, grocers like Loblaw and Sobeys are turning to robotics to speed up fulfilment. The company announced plans in June to open an automated distribution centre in the GTA by early 2024.

The Association for Advancing Automation said that U.S. workplace orders for robots were up 40 per cent in the first quarter of 2022. That followed a record 2021 that saw a 28 per cent jump in orders fueled by non-automotive sectors.

Pandemic accelerated automated future

While it was “coincidental” that RC Coffee offered a touch-free experience just as the pandemic was getting underway, Ford notes this has also been an in-demand upside.

The pace of automation has only been accelerated by the COVID-19 pandemic, says Dan Ciuriak, senior fellow with the Center for International Governance Innovation in Waterloo, Ont.

He points to the Beijing Winter Olympics (held in early 2022), when China ramped up development of contactless services to reduce opportunities for COVID transmission, as a hint at what our post-pandemic realities might become.

Looking at hospitals specifically, Ciuriak says there’s an opportunity to automate work beyond just the food court.

Amid a widely reported health-care staffing shortage, more than one in five Canadian nurses worked paid overtime shifts in July, Statistics Canada reported Friday. Some 11.2 per cent of nurses were meanwhile off sick for part of the week when the labour force survey was conducted.

Ciuriak says that there’s an opportunity for increasingly intelligent robots to support or even replace some nursing jobs as Canada’s ageing population threatens to overwhelm an already stressed health system.

“That is going to be a great boon and will enable us to actually get through this demographic transition,” he says.

This is largely what futurists — Ciuriak included, he notes — had long expected our automated future to look like: robots working side-by-side with humans, streamlining simple tasks and making us more productive.

But developments in artificial intelligence are seeing more powerful chips accelerate the pace of automation, he says. Each time a machine surpasses a human in a knowledge-based field, such as Google’s DeepMind AI mastering chess, Ciuriak says we should consider the implications for work we long assumed was solely meant for humans.

“You’re seeing just tremendous scaling up of the power of these networks. And that is being reflected in how many artificial intelligence systems are breaking through human benchmarks. This is now a regular phenomenon,” he says.

“We’re at the dawn of a new era, and that’s going to have massive implications for the labour market.”

Service-sector jobs at risk

The services sector in particular is ripe for disruption, Ciuriak says, and it’s not just entry-level positions at risk.

He argues, for example, that skills a person might gain from years of investment and studying toward a law degree could be largely replicated — and mass-produced — on a computer chip within the next decade.

When these services, typically constrained by human limits, become scaled up through automation, the implications for income generation and distribution will be immense. The owners of these machines would become new centres for wealth concentration, he argues, warranting a shift in thinking about how we tax the products of this work.

“We are embarking into a new type of economy that we’re not prepared to regulate or manage,” Ciuriak says.

While he doesn’t believe that RC Coffee Robo Cafes will ever replace the traditional barista or communal feel of the local coffee shop, Ford does acknowledge some “front-line” jobs could be at risk in our automated future.

He argues, however, that the machines themselves are “job creators.” Each cafe requires an extensive development and maintenance team behind them, and the machines themselves require the same material inputs as your typical Starbucks or Tim Hortons.

By enabling more coffee shop locations to open today rather than shuttering due to staff shortages, Ford argues that java producers are able to keep their businesses running and maintain employment throughout the coffee supply chain.

“The more that we can roll these things out and get great coffee out there, I think it’s great for everybody.”

Source: Could robots take your job? How automation is changing the future of work

Bailey: Harnessing the Best of Automation While Minimizing the Downside Risks

A good summary of some of the issues. Arguably, the USA is ahead of us given our (over) reliance on immigration, both permanent and temporary, to address labour shortages rather than developing and implementing technologies:

The pandemic and economic disruptions have accelerated the adoption of automation technologies that will introduce important benefits to businesses and consumers but may also create disruptions for many workers and communities. Policymakers and leaders can take steps now to help navigate these disruptive changes.

Automation covers a broad range of technologies and advances in artificial intelligence (AI) and robotics that are deployed in novel ways to increase productivity or expand business capabilities. The Federal Reserve’s most recent Beige Book included observations from several districts noting that companies facing labor shortages were turning to automation as a solution. A McKinsey survey of 800 business executives found that 85 percent were accelerating their digitization and automation as a result of COVID-19. Companies across North America also spent a record $2 billion for almost 40,000 robots in 2021.

These new technologies are increasingly being deployed in a wide range of economic sectors. For example, in agriculture, drones such as the Agras MG-1 can provide precision irrigation for over 6,000 square meters of farmland in just under 10 minutes. John Deere is piloting autonomously driving tractors that can plow fields and plant crops with minimum human interaction. The autonomous robot created by Carbon Robotics can kill 100,000 weeds per hour, leading to increased crop yields, and reduce the use of pesticides by using nothing but lasers. 

These and other innovations will bring numerous benefits to businesses and consumers alike, but the transition could be disruptive to workers and communities. Policymakers should consider several actions to help harness the best of automation while minimizing the downside risks.

Community Dynamism. Policymakers and community leaders have a broad array of community development tools in their toolboxes, including Opportunity Zones, New Markets Tax Credits, and Coronavirus State and Local Fiscal Recovery Funds. But they should first take a step back and consider how to create the conditions for dynamism, which AEI’s Ryan Streeter notes is “a culture rooted in a taste for discovery and betterment [that] can shape—indeed, has shaped—our institutions and policies, from how we structure patents to how we tax capital investments.” It offers a conceptual way to think through, structure, and orient all the existing policies and projects aimed at strengthening communities.

Boost Research and Development. The US must continue investing and expanding research and development on emerging technologies, including AI, to power the next generation of smart technologies, robotics, and drones. Addressing the computer chip shortage is critical, including bolstering domestic manufacturing capabilities. Various proposals being considered in the Bipartisan Innovation Act will advance this important work.

Invest in Human Capital. Automation is eroding jobs further up the skills ladder, which is raising the skill level for every new job while creating entirely new lines of work. Boston Consulting Group and the Burning Glass Institute analyzed more than 15 million job postings to understand how skill requests changed from 2016 to 2021. They found an acceleration in the pace of change. Nearly three-quarters of jobs changed more from 2019 through 2021 (with a compound annual growth rate of 22 percent) than they did from 2016 through 2018 (19 percent). The main driver was found to be technology, which redefined jobs sometimes radically and sometimes more subtly. The US should strengthen its entire skills pipeline to ensure individuals have the skills these jobs require. Community college programs will need to align with these new trends and employer needs. Companies should also explore apprenticeships to provide work-based learning opportunities for individuals transitioning careers. Skilled immigration, through ideas such as Heartland Visas, can also bolster the human capital available to communities.
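For readers unfamiliar with the metric cited above, a compound annual growth rate simply annualizes total change over a period; the index values in the sketch below are made up purely to show the arithmetic.

    # Compound annual growth rate (CAGR): the annualized rate implied by start and
    # end values over a number of years. The figures below are illustrative only.
    def cagr(start_value, end_value, years):
        return (end_value / start_value) ** (1 / years) - 1

    # e.g. a skill-change index rising from 100 to roughly 149 over two years
    # corresponds to about a 22 per cent compound annual growth rate:
    print(f"{cagr(100, 148.8, 2):.1%}")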

Broadband Build-Out. State and community leaders must begin preparing their broadband plans to make the most of the $65 billion in new broadband funding available through the Infrastructure Investment and Jobs Act. Connectivity enables smart devices and AI systems to talk to and coordinate with one another. It allows scaling of critical services, including telehealth and online job training. Leaders must begin developing their plans and priorities now to ensure projects support broader economic and community needs, prevent overbuilding, and ensure the funds build out future-proof infrastructure to underserved communities.

Regulatory Sandboxes. Policymakers should create regulatory sandboxes that invite experimentation with new technologies and automated systems. These are a win-win because they give policymakers the chance to better understand the new technologies they are responsible for regulating, while providing entrepreneurs and investors with clearer regulatory pathways and guardrails toward which they can develop. North Carolina launched a FinTech Regulatory Sandbox that allows pilot projects to test emerging technologies and business models, including technologies that would otherwise be illegal under existing regulations. Arizona created a regulatory pathway for safely developing and testing autonomous and connected vehicle technologies. These flexible regulatory environments can accelerate innovation and lead to smarter policies and regulations that protect consumers.

AI and automation will introduce important benefits to communities, businesses, and society. Policymakers and community leaders have important roles in helping to accelerate the use of these technologies while minimizing the disruption they pose for different communities.

Source: Harnessing the Best of Automation While Minimizing the Downside Risks

Automatic for the people [immigration focus]

Of note. Given the volumes and resources involved, there is no realistic alternative, but care is needed to eliminate biases (whether pro or con):

If there was ever any doubt the federal government would use automation to help it make its administrative decisions, Budget 2022 has put that to rest. In it, Ottawa pledges to change the Citizenship Act to allow for the automated and machine-assisted processing of a growing number of immigration-related applications.

In truth, Immigration, Refugees and Citizenship Canada has been looking at analytics systems to automate its activities and help it assess immigrant applications for close to a decade. The government also telegraphed its intention back in 2019, when it issued a Directive on Automated Decision-Making (DADM), which aims to build safeguards and transparency around its use.

“[T]he reference to enable automated and machine-assisted processing for citizenship applications is mentioned in the budget to ensure that in the future, IRCC will have the authority to proceed with our ambition to create a more integrated, modernized and centralized working environment,” said Aidan Stickland, spokesperson for Immigration, Refugees and Citizenship Minister Sean Fraser, in an emailed reply.

“This would enable us to streamline the application process for citizenship with the introduction of e-applications in order to help speed up application processing globally and reduce backlogs,” Stickland added. “Details are currently being formalized.”

But to live a life of ambition requires taking risks. So the DADM comes with an algorithmic impact assessment tool. According to Teresa Scassa, a law professor at the University of Ottawa, it creates obligations for any government department or agency that plans to adopt automated decision-making, either in whole or as a system that makes recommendations. It is a risk-based framework for determining the obligations to be placed on the department or agency.

“The citizenship and immigration context is one where what they’re looking at is that external client,” Scassa says. “It does create this governance framework for those types of projects.”

Scassa says that the higher the risk of impact on a person’s rights or on the environment, the more obligations are placed on the department or agency using the system, such as requirements for peer review and for monitoring outputs to ensure it remains consistent with its objectives and does not demonstrate improper bias.

“It governs things like what kind of notice should be given,” Scassa says. “If it’s very low-risk, it might be a very general notice, like something on a web page. If it’s high risk, it will be a specific notice to the individual that automated decision-making is in use. Depending on where the project is in the risk framework, there is a sliding scale of obligations to ensure that individuals are protected from adverse impacts.”
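To make the sliding scale concrete, here is a minimal sketch in Python, assuming a toy set of four impact levels; the level names and the specific obligations shown are illustrative assumptions, not the DADM’s actual schedule.

from dataclasses import dataclass

@dataclass
class Obligations:
    notice: str               # how affected people must be told that automation is in use
    peer_review: bool         # whether an independent review is required
    human_in_loop: bool       # whether a human must make or confirm the final decision
    ongoing_monitoring: bool  # whether outputs must be monitored for drift or improper bias

# Higher assessed impact level -> heavier obligations (the "sliding scale").
# Levels and requirements below are illustrative assumptions only.
OBLIGATIONS_BY_LEVEL = {
    1: Obligations("general notice, e.g. a statement on a web page", False, False, False),
    2: Obligations("general notice plus a published impact assessment", False, False, True),
    3: Obligations("specific notice to the affected individual", True, True, True),
    4: Obligations("specific notice and a meaningful explanation to the individual", True, True, True),
}

def obligations_for(impact_level: int) -> Obligations:
    """Look up the obligations attached to an assessed impact level."""
    return OBLIGATIONS_BY_LEVEL[impact_level]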

Scassa suspects that IRCC may use automated decision-making to determine if someone qualifies for citizenship, which can mean different things.

It could be a triage system, for example, drawing information from applications before using AI to determine which applicants clearly qualify for citizenship. “Everything else [would fall] into a different basket where it needs to be reviewed by an officer,” Scassa says.

Such a system would be relatively low-risk, as any automated decision would be a positive one for the applicant, while all other files would go to a human for review, which would speed up overall processing times.

“That may be less problematic than a system that makes all of the decisions, and people have to figure out why they got rejected, and you have to ask how transparent is the algorithm, and what are your rights to have the decision reviewed,” Scassa adds. “There is the question of how it will be designed, and how impactful the AI tool will be on individuals. On the other hand, a triage system like this could have automation bias where files get flagged. Maybe the human reviewing them approaches them with a particular mindset because they haven’t been considered to be automatically accepted. The automation bias may make the human less likely to approve them.”
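As a rough illustration of the triage design Scassa describes, the sketch below assumes a hypothetical scoring model and threshold; the only decision the automation is allowed to make is a positive one, and every other file is routed to a human reviewer.

from typing import Callable

# Assumed threshold for illustration; in practice it would have to be set,
# justified and monitored as part of the algorithmic impact assessment.
AUTO_APPROVE_THRESHOLD = 0.98

def triage(application: dict, score_fn: Callable[[dict], float]) -> str:
    """Return 'auto_approve' or 'human_review'; never an automated rejection."""
    confidence = score_fn(application)  # model's estimate that the applicant clearly qualifies
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    # To limit the automation-bias risk Scassa raises, the file is passed to the
    # reviewer without the score, so "not auto-approved" is not read as a signal
    # against the applicant.
    return "human_review"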

Scassa notes that the Open Government platform shows an algorithmic impact assessment for a tool developed for spousal analytics, a form of triage tool, which gives a sense of what kinds of tools the department is contemplating.

Scassa also notes that under the Citizenship Act, a provision allows for the delegation of the minister’s powers to any person authorized in writing. She suspects that the proposed legislative change could be to specifically allow some of the decisions to be made on a fully-automated basis.

When it comes to reviewing decisions, the DADM and its risk framework appear to apply administrative law principles, including procedural fairness protections.

Paul Daly, also a law professor at the University of Ottawa, adds that the administrative law principles apply regardless of whether this type of automated decision-making has been authorized in the statute.

“It’s a common concern for officials using sophisticated machine-learning technology to want legal authority,” Daly says. “Really, that’s only one part of the picture. There’s a whole body of legal principles from administrative law, the Charter, and the [DADM] that have to be complied with when you start to actually use the systems.”

Lex Gill, a fellow at Citizen Lab, co-authored a report called “Bots at the Gate,” which looks at the human rights impacts of automated decision-making in Canada’s citizenship and immigration system. She acknowledges there are serious backlogs within the immigration system. But she cautions that faster isn’t always better, particularly when the error rates associated with AI disproportionately affect certain groups who are already treated unfairly.

“Sometimes we adopt technologies that will allow us to believe that we are doing something more scientific, methodical or fair, when really what we are doing is reproducing the status quo, but faster and with less transparency,” Gill says. “That is always my concern when we talk about automating these kinds of administrative processes.”

Gill notes there is a spectrum of technologies available for automated and machine-assisted processing, some of which are not problematic, while others are worrying and raise human rights issues. Still, it is hard to know what we may be dealing with without more information from the minister.

“When we talk about using automated or machine-assisted technology to do things like risk scoring, that’s an area where we know that it’s highly discretionary,” Gill says. “There is an entire universe of academic study that demonstrates that those technologies tend to replicate existing forms of bias and discrimination and profiling that already exists within administrative systems.”

Gill says that these systems tend to learn from existing practices. The result tends to exacerbate discriminatory outcomes and to make them more difficult to challenge, because a layer of perceived scientific or technical neutrality is added on top of a system that has already demonstrated bias.

“When the government is imagining adopting these kinds of technologies, is it imagining doing that in a way that is enhancing transparency, accountability, and reviewability of decisions?” asks Gill. “Efficiency is clearly an important goal, but the rule of law, accountability and control of administrative discretion also require friction—they require a certain degree of scrutiny, the ability to slow things down, the ability to review things, and the ability to understand why and how a decision was made.”

Gill says that unless these new technologies come with oversight, review and transparency mechanisms, she worries that they will take a system that is already discretionary, opaque, and has the ability to change the direction of a person’s life, and render it even more so.

“If you’re going to start adopting these kinds of technologies, you need to do it in a way that maximally protects a person’s Charter rights, and which honours the seriousness of the decisions at stake,” Gill says. “Don’t start with decisions that engage the liberty interests of a person. Start with things like whether or not this student visa application is missing a supporting document.”

Source: Automatic for the people

Sarantakis: Taking data seriously: A call to public administrators

Important flagging of how critical data is for governments and of how governments increasingly lag the private sector in the collection, analysis and use of data and AI to understand citizen needs.

However, it is striking that a senior official would make the case without acknowledging the challenge of doing so for the public sector, given that each time the government does, significant criticism follows, whether for IRCC’s use of the Chinook system, Statistics Canada’s use of anonymized credit card information to understand consumer spending, or PHAC’s collection of anonymized COVID phone data.

Perhaps a second piece on this harder issue?

It is said that the first step in overcoming a problem is first admitting its existence. So, here goes: Contemporary public administration is data-challenged.

This would have been an implausible statement to utter, historically. After all, public administrators as individuals know how important data is to public policy formulation and program delivery. Public administration has proved its worth over time through record-keeping and through creating and using data: recording, ordering, sorting and tabulating counts of people, forests, geography, geology, tanks, guns and things like the production of butter.

Indeed, the two great and insatiable needs of the early state, as formulated by Yale scholar James C. Scott, were taxation and conscription. Without revenues and the capacity to pay to defend sovereignty, states are not durable. In turn, without public administrators recording, ordering, sorting and tabulating data, the state does not endure.

Historically, public administration has been on the cutting edge of data. Entities often went to various state organs and state registries for data. The public service apparatus of the state knew, even in the state formed explicitly to curb government involvement in the daily affairs of its citizens.

But something dramatic has happened. The administrative state – that part of government that continues regardless of whether elections yield majorities or minorities that are red, blue, orange, green, or purple – is no longer on the cutting edge of data. Yes, the state still knows, but often it now knows only after the fact, while private-sector entities know now. Even more powerfully, with predictive analytics, sophisticated private entities increasingly know before.

How can we understand this switch? How can we understand public administration losing its historical position of relative data supremacy? To do that, we need to detour from public administration for a moment and veer into the private-sector economy. What we find gives us important clues to our mystery vis-à-vis data and public administration.

The factors of production 

Since Adam Smith, we have understood three core factors of production: land, labour and capital. There are others that have competed to be added to this list. Channeling Peter Drucker, some have argued for “management” – those who direct resources. Others have argued for “entrepreneurs” – those who combine resources in new and innovative ways. But Smith’s formulation has proven remarkably durable for more than two centuries.

If Smith were to return and look at some of the most valuable and dynamic corporations of our era – the digital giants Google, Meta (formerly Facebook), Amazon, Apple, Spotify and others – he would likely be mystified. Yes, he would see some land. Yes, he would see some labour. But nowhere near enough to justify the heady heights – and incredible influence and power – of the digital giants. Finally, he would also see some capital. But remarkably, that capital would largely be a by-product of “production,” and not a driver of production.

Seeing the most valuable and powerful entities on earth during his era, Smith would have seen people – lots and lots of labour. He would have seen land. He would have seen capital in the form of constructed ships, and tools, and extracted then refined natural resources. He would have seen stuff – tangible things that he could touch.

But the contemporary Adam Smith would see negligible amounts of people and land in today’s largest companies. Certainly nothing approaching their value, status or power. These companies, perhaps most surprisingly of all, “consume” relatively little capital.

So if you are generating enormous profits but not drawing heavily on the “factors of production” … something does not add up. What is going on?

Brains? Computers? Digital? Algorithms? Cloud computing?

Yes, yes, yes, yes, yes, and lots more.

But fundamentally, what is going on now is the fourth factor of production.

Data.

Data as differentiator 

Data has now become the most valuable commodity on earth. Data stocks are more valuable than natural resources. Data is more valuable than manufacturing facilities; more valuable than land; more valuable than labour. Data – the new oil? Oil should be so lucky.

Why?

Data is now the differentiator. Data is now the value-add. As computers, software, micro-processing power, storage, cloud computing and algorithms all become (or all trend toward) commodity status, it is the quantity and quality of data that will transform the mediocre into the successful.

A commodity is an interchangeable and undistinguished part. Where I buy a barrel of oil or a bar of gold or a truckload of gravel or road salt is overwhelmingly just price-contingent. The lowest price wins. To avoid becoming a commodity yourself – valued only for how cheaply you can deliver something – you need more and better data than the competition. Increasingly, if you are data-deficient, you will not be competitive or sustainable as an entity.

Put another way, Company A and Company B already compete based on the quantity and the quality of their data. This will also increasingly be true in the coming years for Country A and Country B. Countries have competed forever for oil and gas and timber and nickel. Now they are also adding “quantity and quality of data” to that list of competitions.

Spotify is a data company that deals in music. Netflix is a data company that deals in entertainment. Tesla is a data company on wheels. Google is a data company that deals in information. Amazon is a data company that provides many things – same with Instagram, same with Facebook.

Computing, computation, communication, software, digital distribution – all are, or are rapidly becoming, commodities. Algorithms still have differentiating value, but as advances in artificial intelligence continue, they too will invariably trend toward commodity status. What increasingly adds real value in production is the quality and quantity of data.

Data and public administration

What does all this have to do with public administration? At first glance, perhaps nothing. But on closer examination, a great deal.

The digital giants became digital giants because they understood – before others – the enormous value of enormous quantities of data. They understood – like the early state understood the power of knowing the quantity and location of trees and people and minerals – that data is power.

As Shoshana Zuboff expertly describes in The Age of Surveillance Capitalism, data becomes the nexus of power. But the power of data in the contemporary age isn’t about counting trees and people, it is rather about the “instrumentalization of behavior for the purposes of modification, prediction, monetization, and control.”

Contemporary public administration, which traces its very heritage back to data, is far less sophisticated in data today than the digital giants. Data is not utilized for public good applications anywhere near the degree to which data is utilized for commercial gain.

Over time, that will harm us all because the public-good realm will have less access to rich data than the private profit realm. Over time, that will make public administration a dinosaur. We need to better understand the power and application of data.

Public administration and real-time actionable data

States often revert to using blunt policy instruments because public administrations do not have the granularity of data – in real time – that is available to the digital giants. When you don’t have real-time actionable data, you estimate. You ask people to apply. You create programs with criteria instead of directly applying funding to public policy objectives.

That worked for a world in which real-time actionable data either did not exist or was enormously expensive to actualize. But that is not today’s world. The percentage of the economy migrating online is growing every day, and the online economy has grown much faster than the analog economy in recent years. But something else is happening, too. With the internet of things (IoT), our toasters and our refrigerators and our lightbulbs and our ventilation systems and our water treatment plants and our garage doors and our pacemakers are all migrating online. The enormous oceans of data we have today will, in a few very short years, look like little trickles of water once the IoT takes hold in full.

Public administration is already behind. Imagine what happens when the volume of data being generated every moment of every day by billions of connected things across the globe increases at an even faster rate.

Does public administration understand the power of data? Do we understand how to use it to serve public policy goals? Do we understand how to regulate it for the public good? Do we have the systems in place to capture data? Do we have the systems in place to safeguard data? Do we have the systems in place to safeguard its use by non-state actors?

These are the many questions facing public administration today. The faster we get the answers, the better public administrators will be able to serve their political decision-makers and their state populations.

Time is not our friend on these questions.

Source: Taking data seriously: A call to public administrators