This company adopted AI. Here’s what happened to its human workers

This is a really interesting study. Given that it involved call centres and customer support, IRCC, ESDC, CRA and others should be studying this example of how to improve productivity and citizen service:

Lately, it’s felt like technological change has entered warp speed. Companies like OpenAI and Google have unveiled new Artificial Intelligence systems with incredible capabilities, making what once seemed like science fiction an everyday reality. It’s an era that is posing big, existential questions for us all, about everything from literally the future of human existence to — more to the focus of Planet Money — the future of human work.

“Things are changing so fast,” says Erik Brynjolfsson, a leading, technology-focused economist based at Stanford University.

Back in 2017, Brynjolfsson published a paper in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called “What Can Machine Learning Do? Workforce Implications.” Now, Brynjolfsson says, “I have to update that paper dramatically given what’s happened in the past year or two.”

Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He’s earned a reputation as a “techno-optimist.” And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy.

Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.

The Rise Of Cyborg Customer Service Reps

The story of this study starts a few years ago, when an unnamed Fortune 500 company — Brynjolfsson and his colleagues have not gotten permission to disclose its identity — decided to adopt an earlier version of OpenAI’s ChatGPT. This AI system is an example of what computer scientists call “generative AI” and also a “Large Language Model,” systems that have crunched a ton of data — especially text — and learned word patterns that enable them to do things like answer questions and write instructions.

This company provides other companies with administrative software. Think like programs that help businesses do accounting and logistics. A big part of this company’s job is helping its customers, mostly small businesses, with technical support.

The company’s customer support agents are based primarily in the Philippines, but also the United States and other countries. And they spend their days helping small businesses tackle various kinds of technical problems with their software. Think like, “Why am I getting this error message?” or like, “Help! I can’t log in!”

Instead of talking to their customers on the phone, these customer service agents mostly communicate with them through online chat windows. These troubleshooting sessions can be quite long. The average conversation between the agents and customers lasts about 40 minutes. Agents need to know the ins and outs of their company’s software, how to solve problems, and how to deal with sometimes irate customers. It’s a stressful job, and there’s high turnover. In the broader customer service industry, up to 60 percent of reps quit each year.

Facing such high turnover rates, this software company was spending a lot of time and money training new staffers. And so, in late 2020, it decided to begin using an AI system to help its constantly churning customer support staff get better at their jobs faster. The company’s goal was to improve the performance of their workers, not replace them.

Now, when the agents look at their computer screens, they don’t only see a chat window with their customers. They also see another chat window with an AI chatbot, which is there to help them more effectively assist customers in real time. It advises them on what to potentially write to customers and also provides them with links to internal company information to help them more quickly find solutions to their customers’ technical problems.

This interactive chatbot was trained by reading through a ton of previous conversations between reps and customers. It has recognized word patterns in these conversations, identifying key phrases and common problems facing customers and how to solve them. Because the company tracks which conversations leave its customers satisfied, the AI chatbot also knows formulas that often lead to success. Think, like, interactions that customers give a 5 star rating. “I’m so sorry you’re frustrated with error message 504. All you have to do is restart your computer and then press CTRL-ALT-SHIFT. Have a blessed day!”
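To make the mechanics a little more concrete, here is a minimal, hypothetical sketch of one way such a reply-suggestion tool could work: retrieve the highly rated historical reply whose original customer message looks most similar to the new one. This is a toy illustration only, not the company’s actual system, and the example conversations are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented examples of past customer messages paired with highly rated agent replies.
history = [
    ("I keep getting error message 504 on the dashboard",
     "I'm so sorry you're seeing error 504. Please restart your computer and try again."),
    ("Help! I can't log in to my account",
     "Let's get you back in. I've just sent a password reset link to your email."),
    ("The invoice totals don't match my records",
     "I'll re-sync your accounting data now; that usually fixes mismatched totals."),
]

# Index the past customer messages so new ones can be matched against them.
vectorizer = TfidfVectorizer()
message_matrix = vectorizer.fit_transform(message for message, _ in history)

def suggest_reply(customer_message: str) -> str:
    """Return the highly rated reply attached to the most similar past message."""
    query = vectorizer.transform([customer_message])
    best_match = cosine_similarity(query, message_matrix).argmax()
    return history[best_match][1]

print(suggest_reply("Why am I getting this error message 504?"))
```

A production assistant like the one described would rely on a large language model rather than simple keyword matching, but the underlying idea of learning from top-rated past conversations is the same.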

Equipped with this new AI system, the company’s customer support representatives are now basically part human, part intelligent machine. Cyborg customer reps, if you will.

Lucky for Brynjolfsson, his colleagues, and econ nerds like us at Planet Money, this software company gave the economists inside access to rigorously evaluate what happened when customer service agents were given assistance from intelligent machines. The economists examined the performance of over 5,000 agents, comparing the outcomes of old-school customer reps without AI against the new, AI-enhanced cyborg customer reps.

What Happened When This Company Adopted AI

The economists’ big finding: after the software company adopted AI, its customer support representatives became, on average, 14 percent more productive. They were able to resolve more customer issues per hour. That’s huge. The company’s workforce is now much faster and more effective. They’re also, apparently, happier. Turnover has gone down, especially among new hires.

Not only that, the company’s customers are more satisfied. They give higher ratings to support staff. They also generally seem to be nicer in their conversations and are less likely to ask to speak to an agent’s supervisor.

So, yeah, AI seems to really help improve the work of the company’s employees. But what’s even more interesting is that not all employees gained equally from using AI. It turns out that the company’s more experienced, highly skilled customer support agents saw little or no benefit from using it. It was mainly the less experienced, lower-skilled customer service reps who saw big gains in their job performance.

“And what this system did was it took people with just two months of experience and had them performing at the level of people with six months of experience,” Brynjolfsson says. “So it got them up the learning curve a lot faster — and that led to very positive benefits for the company.”
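For readers who want to picture the comparison, here is a small, hypothetical sketch of the kind of calculation involved: resolutions per hour, broken down by whether an agent had AI assistance and by tenure. The data frame and column names are invented for illustration; they are not the study’s data.

```python
import pandas as pd

# Invented ticket logs; each row is one agent-shift.
shifts = pd.DataFrame({
    "tenure_months":   [2, 2, 2, 6, 6, 6],
    "ai_assist":       [False, True, True, False, True, False],
    "issues_resolved": [6, 8, 9, 10, 10, 11],
    "hours_worked":    [4, 4, 4, 4, 4, 4],
})

shifts["resolutions_per_hour"] = shifts["issues_resolved"] / shifts["hours_worked"]

# Average productivity by experience and AI access: the study's headline comparison.
summary = shifts.groupby(["tenure_months", "ai_assist"])["resolutions_per_hour"].mean()
print(summary)
```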

Brynjolfsson says these improvements make a lot of sense when you think about how the AI system works. The system has analyzed company records and learned from highly rated conversations between agents and customers. In effect, the AI chatbot is basically mimicking the company’s top performers, who have experience on the job. And it’s pushing newbies and low performers to act more like them. The machine has essentially figured out the recipe for the magic sauce that makes top performers so good at their jobs, and it’s offering that recipe to the workers who are less good at their jobs.

That’s great news for the company and its customers, as well as the company’s low performers, who are now better at their jobs. But, Brynjolfsson says, it also raises the question: should the company’s top performers be getting paid even more? After all, they’re now not only helping the customers they directly interact with. They’re now also, indirectly, helping all the company’s customers, by modeling what good interactions look like and providing vital source material for the AI.

“It used to be that high-skilled workers would come up with a good answer and that would only help them and their customer,” Brynjolfsson says. “Now that good answer gets amplified and used by people throughout the organization.”

The Big Picture

While Brynjolfsson is cautious, noting that this is one company in one study, he also says one of his big takeaways is that AI could make our economy much more productive in the near future. And that’s important. Productivity gains — doing more in less time — are a crucial component for rising living standards. After years of being disappointed by lackluster productivity growth, Brynjolfsson is excited by this possibility. Not only does AI seem to be delivering productivity gains, it seems to deliver them pretty fast.

“And the fact that we’re getting some really significant benefits suggests that we could have some big benefits over the next few years or decades as these systems are more widely used,” Brynjolfsson says. When machines take over more work and boost our productivity, Brynjolfsson says, that’s generally a great thing. It means that society is getting richer, that the economic pie is getting larger.

At the same time, Brynjolfsson says, there are no guarantees about how this pie will be distributed. Even when the pie gets bigger, there are people who could see their slice get smaller or even disappear. “It’s very clear that it’s not automatic that the bigger pie is evenly shared by everybody,” Brynjolfsson says. “We have to put in place policies, whether it’s in tax policy or the strategy of companies like this one, which make sure the gains are more widely shared.”

Higher productivity is a really important finding. But what’s probably most fascinating about this study is that it adds to a growing body of evidence that suggests that AI could have a much different effect on the labor market than previous waves of technological change.

For the last few decades, we’ve seen a pattern that economists have called “skill-biased technological change.” The basic idea is that so-called “high-skill” office workers have disproportionately benefited from the use of computers and the internet. Things like Microsoft Word and Excel, Google, and so on have made office workers and other high-paid professionals much better at their jobs.

Meanwhile, however, so-called “low-skill” workers, who often work in the service industry, have not benefited as much from new technology. Even worse, this body of research finds, new technology killed many “middle-skill” jobs that once offered non-college-educated workers a shot at upward mobility and a comfortable living in the middle class. In this previous technological era, the jobs that were automated away were those that focused on repetitive, “routine” tasks: tasks for which you could give a machine explicit, step-by-step instructions. It turned out that, even before AI, computer software was capable of doing a lot of secretarial work, data entry, bookkeeping, and other clerical tasks. And robots, meanwhile, were able to do many tasks in factories. This killed lots of middle-class jobs.

The MIT economist David Autor has long studied this phenomenon. He calls it “job polarization” and a “hollowing out” of the middle class. Basically, the data suggests that the technological change of the last few decades was a major contributor to increasing inequality. Technology has mostly boosted the incomes of college-educated and skilled workers while doing little for — and perhaps even hurting — the incomes of non-college-educated and low-skilled workers.

Upside Downside

But, what’s interesting is, as Brynjolfsson notes, this new wave of technological change looks like it could be pretty different. You can see it in his new study. Instead of experienced and skilled workers benefiting mostly from AI technology, it’s the opposite. It’s the less experienced and less skilled workers who benefit the most. In this customer support center, AI improved the know-how and intelligence of those who were new at the job and those who were lower performers. It suggests that AI could benefit those who were left behind in the previous technological era.

“And that might be helpful in terms of closing some of the inequality that previous technologies actually helped amplify,” Brynjolfsson says. So one benefit of intelligent machines is — maybe — they will improve the know-how and smarts of low performers, thereby reducing inequality.

But — and Brynjolfsson seemed a bit skeptical about this — it’s also possible that AI could lower the premium on being experienced, smart, or knowledgeable. If anybody off the street can now come in and — augmented by a machine — start doing work at a higher level, maybe the specialized skills and intelligence of people who were previously in the upper echelon become less valuable. So, yeah, AI could reduce inequality by bringing the bottom up. But it could also reduce inequality by bringing the top and middle down, essentially de-skilling a whole range of occupations, making them easier for anyone to do and thus lowering their wage premium.

Of course, it’s also possible that AI could end up increasing inequality even more. For one, it could make the Big AI companies, which own these powerful new systems, wildly rich. It could also empower business owners to replace more and more workers with intelligent machines. And it could kill jobs for all but the best of the best in various industries, who keep their jobs because maybe they’re superstars or because maybe they have seniority. Then, with AI, these workers could become much more productive, and so their industries might need fewer of these types of jobs than before.

The effects of AI, of course, are still very much being studied — and these systems are evolving fast — so this is all just speculation. But it does look like AI may have different effects than previous technologies, especially because machines are now more capable of doing “non-routine” tasks. Previously, as stated, it was only “routine” tasks that proved to be automatable. But, now, with AI, you don’t have to program machines with specific instructions. They are much more capable of figuring out things on the fly. And this machine intelligence could upend much of the previous thinking on which kinds of jobs will be affected by automation.

Source: This company adopted AI. Here’s what happened to its human workers

Government ‘hackathon’ to search for ways to use AI to cut asylum backlog

For all the legitimate worries about AI and algorithms, many forget that human systems have similar biases, along with the additional problem of inconsistency (see Kahneman’s Noise). Given the numbers involved, it would be irresponsible not to develop these tools, provided steps are taken to avoid bias. And I think we need to get away from the mindset that every case is unique, as many, if not most, have more commonalities than differences:

The Home Office plans to use artificial intelligence to reduce the asylum backlog, and is launching a three-day hackathon in the search for quicker ways to process the 138,052 undecided asylum cases.

The government is convening academics, tech experts, civil servants and business people to form 15 multidisciplinary teams tasked with brainstorming solutions to the backlog. Teams will be invited to compete to find the most innovative solutions, and will present their ideas to a panel of judges. The winners are expected to meet the prime minister, Rishi Sunak, in Downing Street for a prize-giving ceremony.

Inspired by Silicon Valley’s approach to problem-solving, the hackathon will take place in London and Peterborough in May. One possible method of speeding up the processing of asylum claims, discussed in preliminary talks before the event, involves establishing whether AI can be used to transcribe and analyse the Home Office’s huge existing database of thousands of hours of previous asylum interviews, to identify trends.
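As a rough illustration of what “transcribe and analyse … to identify trends” could mean in practice, the hedged sketch below transcribes interview audio with an open-source speech-to-text model and clusters the transcripts into recurring themes. The file paths, model choice and cluster count are assumptions made for illustration; nothing here reflects the Home Office’s actual plans or systems.

```python
import glob
from collections import Counter

import whisper  # open-source openai-whisper package, assumed installed
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Transcribe a folder of (hypothetical) interview recordings.
model = whisper.load_model("base")
transcripts = [model.transcribe(path)["text"] for path in glob.glob("interviews/*.mp3")]

# Turn transcripts into vectors and group them into recurring themes.
vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(transcripts)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(vectors)

# A crude "trend" signal: how many interviews fall into each theme cluster.
print(Counter(labels))
```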

Source: Government ‘hackathon’ to search for ways to use AI to cut asylum backlog

ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Of interest (I am starting to find it useful as an editor):

“Canada’s historical monuments are also symbols of Indigenous genocide.”

“Police brutality in Canada is just as real as in the U.S.”

Those seemed to me like articles that my colleague, Shree Paradkar, a Toronto Star social and racial justice columnist, could have plausibly written. They were provided by an AI chatbot in response to my request for a list of articles by Paradkar.

The problem is that they don’t exist.

“At first blush it might seem easy to associate me with these headlines. As an opinion writer, I even agree with the premise of some of them,” Paradkar wrote to me after I emailed her the list.

“But there are two major red flags. The big one: they’re false. No articles I wrote have these headlines. And two, they either bludgeon nuance (the first headline) or summarize what I quote other people saying and what I write in different articles into one piece,” she said.

Paradkar’s discomfort reflects wider concerns about the abundance of fake references dished out by popular chatbots including ChatGPT — and worry that with rapidly evolving technology, people may not know how to identify false information. 

The use of artificial intelligence chatbots to summarize large volumes of online information is now widely known, and while some school districts have banned AI-assisted research, some educators advocate for the use of AI as a learning tool.

Users may think that one way to verify information from a chatbot is to ask it to provide references. The problem? The citations look real and even come with hyperlinks. But they are usually fake.

In recent months, academics have issued multiple warnings that ChatGPT was making up academic studies, including convincing scientific research abstracts. This came to the attention of Oxford University professor David Wilkinson when a student turned in a paper with a reference to a study that he couldn’t locate, but which was similar to fake references he found on ChatGPT.

It is less well known that media sources provided by chatbots are often fabricated as well. The Guardian recently called attention to the confusion that ensued at the newspaper when a reader inquired about an article that did not appear on The Guardian’s website.

The headline was so consistent with the newspaper’s coverage that staff thought it could have been something the reporter had written in the past. Staff went deep into computer systems to try to track down the article in case it had been deleted. Luckily, before more time was wasted, the reader disclosed that the reference came from ChatGPT.

“The invention of sources is particularly troubling for trusted news organizations,” wrote Chris Moran, The Guardian’s head of editorial innovation.

“It opens up whole new questions about whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place,” said Moran.

Chatbots fabricate both media and academic references because “generative” AI chatbots are large language models. They are trained on a huge body of internet data and can reproduce human language. But the important point is that they are not search engines, said Subodha Kumar, professor of statistics, operations and data science at Temple University in Philadelphia.

“The general public using ChatGPT now doesn’t really know how it works. It creates links and references that don’t exist, because it is designed to generate content,” Kumar told me.

“That’s what makes AI beautiful, but also problematic. False references can fool the public and policymakers, so if we have a newspaper article based on a false reference, then we’ll be in big trouble,” he added.

I wondered if ChatGPT has undergone any modifications since researchers raised concerns about the prevalence of fake citations. The U.S.-based company OpenAI did not reply to my request for clarification. So, like many other curious journalists have done, I interacted with the bot itself.

“Could you give me some articles by Globe and Mail journalist André Picard?” I asked.

“As an AI language model, I do not have access to specific articles or publications by individual authors, including André Picard, as I do not have browsing capabilities or access to the internet. I can, however, provide you with a general format for citing articles,” it replied.

This response appeared to clearly explain ChatGPT’s limitations. I was pleasantly surprised.

But alas, when I asked ChatGPT for a summary of Huawei executive Meng Wanzhou’s legal battle in Canada and asked it to provide sources, the references it provided were all fake.

The hyperlinks were either broken or led to unrelated pages. A note at the bottom said that since it was trained on information up to September 2021, it may not reflect current events. But there was no disclaimer that the references themselves were fake.

“Why are your citations fake?” I asked ChatGPT.

“As an AI language model, I do not generate fake citations deliberately. However, I do not have direct access to external sources or the internet, and I do not provide real-time information … It’s always a good practice to cross-check and verify information from reputable and reliable sources independently, especially for critical or sensitive topics.”

But Kumar said it is very possible for developers to train chatbots to verify the references they generate.

“They don’t have to fundamentally change the concept to be just like Google Search, but the algorithm can be easily improved by giving it one more step: When references or links are created, check for the veracity of the content.”
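The “one more step” Kumar describes could be as simple as screening each generated reference before it is shown to the user. Here is a minimal, hypothetical sketch under that assumption: it only confirms that the link resolves and that the claimed headline appears on the page, which is far from full fact-checking but already catches broken or invented links. The citation shown is made up for illustration.

```python
import requests

def reference_looks_real(url: str, headline: str, timeout: float = 10.0) -> bool:
    """Return True only if the URL loads and the headline text appears on the page."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False
    return response.ok and headline.lower() in response.text.lower()

# Hypothetical chatbot citation to screen before display.
citation = {
    "headline": "Meng Wanzhou reaches agreement with U.S. prosecutors",
    "url": "https://example.com/possibly-invented-article",
}
print(reference_looks_real(citation["url"], citation["headline"]))
```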

Kumar added that other companies may soon offer new AI products that provide more reliable references, but as a “first mover” in the field, OpenAI has a special responsibility to address the issue.

OpenAI has said it is aware of the potential of generative AI to spread disinformation. In January, the organization partnered with Stanford University and Georgetown University to release a study forecasting potential misuses of language models for disinformation campaigns.

“For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations,” the study found.

And ChatGPT is only one of a plethora of chatbot products from different companies, including apps that purport to be based on ChatGPT’s open API. I had found the list of my colleague’s fake opinion articles on one such Android app, “AI Chat by GPT.” (ChatGPT doesn’t currently offer a mobile version.)

For Ezra Levant, a conservative Canadian media commentator, the app offered up fake headlines on hot-button issues such as a fake column alleging that global migration will “undermine Canadian sovereignty” and another that Prime Minister Justin Trudeau’s carbon tax is in fact a “wealth tax.”

Paradkar pointed out that the generation of fake stories attributed to real people is particularly dangerous during a time of increasing physical violence and online abuse against journalists worldwide.

“When AI puts out data that is incorrect but plausible, it counts as misinformation. And I fear that it offers ammunition to trolls and bad actors confirming their worst biases and giving them more reason to abuse journalists.”

Source: ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Friedman: Our New Promethean Moment

Friedman is always interesting as to where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment

Krauss: Artificially Intelligent Offense?

Of note, yet another concern and issue that needs to be addressed:

…Let’s be clear about this: Valid, empirically derived information is not, in the abstract, either harmful or offensive.

The reception of information can be offensive, and it can, depending upon the circumstances of the listener, potentially result in psychological or physical harm. But precisely because one cannot presume to know all such possible circumstances, following the OpenAI guidelines can instead sanction the censorship of almost any kind of information for fear that someone, somewhere, will be offended.

Even before ChatGPT, this was not a hypothetical worry. Recall the recent firing of a heralded NYT science reporter for using “the N-word” with a group of students in the process of explaining why the use of that word could be inappropriate or hurtful. The argument the NYT editors made was that “intent” was irrelevant. Offense is in the ear of the listener, and that overrides the intent of the speaker or the veracity of his or her argument.

A more relevant example, perhaps, involves the loony guidelines recently provided to editors and reviewers for the journals of the Royal Society of Chemistry to “minimise the risk of publishing inappropriate or otherwise offensive content.” As they describe it, “[o]ffence is a subjective matter and sensitivity to it spans a considerable range; however, we bear in mind that it is the perception of the recipient that we should consider, regardless of the author’s intention [italics mine] … Please consider whether or not any content (words, depictions or imagery) might have the potential to cause offence, referring to the guidelines as needed.”

Moreover, they define offensive content specifically as “Any content that could reasonably offend someone on the basis of their age, gender, race, sexual orientation, religious or political beliefs, marital or parental status, physical features, national origin, social status or disability.”

The mandate against offensiveness propounded by the RSC was taken to another level by the journal Nature Human Behaviour, which indicated that not only would they police language, but they would restrict the nature of scientific research they publish on the basis of social justice concerns about possible “negative social consequences for studied groups.” One can see echoes of both the RSC and Nature actions in the ChatGPT response to my questions.

The essential problem here is removing the obligation, or rather, the opportunity, all of us should have to rationally determine how we respond to potentially offensive content by instead ensuring that any such potentially offensive content may be censored. Intent and accuracy become irrelevant. Veto power in this age of potential victimization is given to the imaginary recipient of information.

Free and open access to information, even information that can cause pain or distress, is essential in a free society. As Christopher Hitchens so often stressed, freedom of speech is primarily important not because it provides an opportunity for speakers to speak out against prevailing winds but because that speech gives listeners or readers the freedom to realize they might want to change their minds.

The problem with the dialogues presented above is that ChatGPT appears to be programmed with a biased perception of what might be offensive or harmful. Moreover, it has been instructed to limit the information it provides to that which its programmers have deemed is neither. What makes this example more than an interesting—or worrying—anecdote is the emerging potential of AI chatbots to further exacerbate already disturbing trends.

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

Source: Artificially Intelligent Offense?

Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines is better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of OpenAI’s need to pay for the computing power required to make its systems work.

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first-mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

How to reduce citizen harm from automated decision systems

While more at a local level, some good basic guidelines:

For agencies that use automated systems to inform decisions about schools, social services and medical treatment, it’s imperative that they’re using technology that protects data.

A new report finds that there’s little transparency about the automated decision-making (ADM) systems that state and local agencies use for many tasks, leading to unintended, detrimental consequences for the people they’re meant to help. But agencies can take steps to ensure that their organization buys responsible products.

The findings are shared in “Screened and Scored in the District of Columbia,” a new report from the Electronic Privacy Information Center (EPIC). Researchers spent 14 months investigating 29 ADM systems at about 20 Washington, D.C., government agencies. They chose that location because it’s where EPIC is located, said Thomas McBrien, law fellow at EPIC and one of four report authors.

The agencies use such systems to inform decisions about many activities, including assigning children to schools, understanding drivers’ travel patterns and informing medical decisions about patients, so it’s imperative that they’re using technology that protects data.

“Overburdened agencies turn to tech in the hope that it can make difficult political and administrative decisions for them,” according to the report. But “agencies and tech companies block audits of their ADM tools because companies claim that allowing the public to scrutinize the tools would hurt their competitive position or lead to harmful consequences. As a result, few people know how, when, or even whether they have been subjected to automated decision-making.”

Agencies can take four steps to mitigate the problem, McBrien said. First, agencies can require data minimization through contract language. “That’s basically the principle that when a company is rendering a service for an agency using its software, the agency should really ensure that the company isn’t taking more data than it needs to render that service,” he said.

That connects to his second recommendation, which is monitoring the downstream use of this data. Some ADM system vendors might take the data, run their services with it and that’s it, but others may share the data with their parent company or a subsidiary—or sell it to third parties.

“That’s where we see a lot of leakage of people’s personal data that can be really harmful, and definitely not what people are expecting their government to do for them,” McBrien said.

A third step is to audit for accuracy and bias. Sometimes, a tool used on one population or in one area can be very accurate, but applied to a different context, that accuracy may drop off and biased results could emerge. The only way to know whether that’s happening is by auditing and validating the system using the group of people you’re serving.

“The gold standard here would be to have an external auditor do this before you implement the system,” he said. But it’s a good idea to also do audits periodically to ensure that the algorithms the system uses are still accurate “because as the real world changes, the model of the real world it uses to make predictions should also be changing.”
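A subgroup audit of this kind can be quite simple in outline. The sketch below is illustrative only: it compares a deployed model’s accuracy across the groups actually being served, with invented column names and data rather than anything from a real agency system.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Invented audit sample: each row pairs a system's prediction with the real outcome.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 1, 1, 0, 1],
    "outcome":    [1, 0, 1, 1, 0, 1, 0, 0],
})

# Accuracy by subgroup; large gaps are a signal to pause, revalidate or retrain.
for group, rows in audit.groupby("group"):
    print(f"group {group}: accuracy {accuracy_score(rows['outcome'], rows['prediction']):.2f}")
```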

Fourth, agencies should inform the public about their use of these systems, McBrien said, adding that it’s a good way to build trust. Meaningful public participation is the No. 1 recommendation to come out of a report by the Pittsburgh Task Force on Public Algorithms.

“Agencies should publish baseline information about the proposed system: what the system is, its purposes, the data on which it relies, its intended outcomes, and how it supplants or replaces existing processes, as well as likely or potential social, racial, and economic harms and privacy effects to be mitigated,” according to the report’s second recommendation.

It’s also important to share the outcome of any decision being made based on ADM systems, McBrien added. “People who are directly impacted by these systems are often the first ones to realize when there’s a problem,” he said. “I think it’s really important that when that outcome has been driven or informed by an algorithmic system, that that’s communicated to the person so they have the full picture of what happened.”

He added that privacy laws such as the California Privacy Rights Act of 2020 support transparency, as does an effort in that state to redefine state technology procurement as well as a bill in Washington state that would establish “guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”

Although he couldn’t say how prevalent such systems are among state and local agencies—in fact, EPIC’s report states that researchers couldn’t access all of the systems in D.C. because many agencies were unwilling to share information because of companies’ claims of trade secrets or other commercial protections—there are examples of their use elsewhere.

For instance, in 2019, New York City Mayor Bill de Blasio signed an executive order establishing an algorithms management and policy officer to be a central resource on algorithm policy and to develop guidelines and best practices on the city’s use of them. That move follows a 2017 law that made the city the first in the country to create a task force to study agencies’ use of algorithms. But that group’s work led to a shadow report highlighting the task force’s shortcomings.

“We definitely urge people to think of other solutions to these problems,” McBrien said. “Sometimes agencies implement that system and are locked into them for a long time and spend enormous amounts of money trying to fix them, manage the problem, ameliorate the harms of the system that could have been used to hire more caseworkers.”

Source: How to reduce citizen harm from automated decision systems

Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Interesting and significant study. A comparable study of automated decision-making systems that have succeeded in minimizing injustice and inequality would be helpful, as would recognition that automated systems can improve decision consistency, as Kahneman and others demonstrated in Noise.

As these systems continue to grow in order to manage the increasing number of decisions required, greater care in their design and impacts will of course be necessary. But it is a mistake to assume that all such systems are worse than human decision-making:

In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms. Alston was worried for good reason. Research shows that ADS can be used in ways that discriminate, exacerbate inequality, infringe upon rights, sort people into different social groups, wrongly limit access to services and intensify surveillance.

For example, families have been bankrupted and forced into crises after being falsely accused of benefit fraud. 

Researchers have identified how facial recognition systems and risk assessment tools are more likely to wrongly identify people with darker skin tones and women. These systems have already led to wrongful arrests and misinformed sentencing decisions.

Often, people only learn that they have been affected by an ADS application when one of two things happen: after things go wrong, as was the case with the A-levels scandal in the United Kingdom; or when controversies are made public, as was the case with uses of facial recognition technology in Canada and the United States.

Automated problems

Greater transparency, responsibility, accountability and public involvement in the design and use of ADS is important to protect people’s rights and privacy. There are three main reasons for this: 

  1. these systems can cause a lot of harm;
  2. they are being introduced faster than necessary protections can be implemented; and
  3. there is a lack of opportunity for those affected to make democratic decisions about whether they should be used and, if so, how they should be used.

Our latest research project, Automating Public Services: Learning from Cancelled Systems, provides findings aimed at helping prevent harm and contribute to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies. 

Researching the factors and rationales leading to cancellation of ADS systems helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.

Reasons for cancellation

There are a range of factors that influence decisions to cancel the uses of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action. 

Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.

[Table: factors influencing decisions to cancel ADS systems. (Data Justice Lab), Author provided]

Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.

In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step toward greater transparency. Some countries, such as the U.K., have stated plans to do so, while others, like Canada, have yet to move in this direction.
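As a sketch of what a minimal public register entry could record (my own illustration of the idea, not a schema proposed in the report; every field name here is an assumption):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ADSRegisterEntry:
    """One public register record for an automated decision system (illustrative only)."""
    system_name: str
    operating_agency: str
    purpose: str                  # what decisions the system informs or makes
    data_sources: List[str]       # categories of personal data used
    human_review: bool            # is there a human in the loop?
    impact_assessment_url: str    # link to the published impact assessment
    appeal_process: str           # how affected people can challenge a decision
    status: str = "in use"        # e.g. "in use", "pilot", "cancelled"

# Example record a transparency portal might publish (hypothetical values):
entry = ADSRegisterEntry(
    system_name="Benefit eligibility triage (hypothetical)",
    operating_agency="Example Department",
    purpose="Prioritize benefit applications for manual review",
    data_sources=["application form", "payment history"],
    human_review=True,
    impact_assessment_url="https://example.gov/ads/benefit-triage/aia",
    appeal_process="Written request for human re-assessment within 30 days",
)
print(entry.system_name, "-", entry.status)
```

Even a thin record like this would let affected people discover that a system exists, who runs it, and how to contest its decisions, which is the point of a register.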

Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.

Accountability and transparency

ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation. 

This engagement should include the option for community members to decide areas where they do not want ADS to be used. Examples of good government practice can include taking the time to ensure independent expert reviews and impact assessments that focus on equality and human rights are carried out. 

[Figure: recommendations for governments using ADS systems. Governments can take several different approaches to implement ADS systems in a more accountable manner. (Data Justice Lab), Author provided]

We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and redress wrong assessments. 

There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible. 

Government agencies need to have the resources and skills to thoroughly examine how they procure ADS systems.

A politics of care

All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights. 

Key questions need to be asked about how the use of ADS creates blind spots: scoring and sorting systems increase the distance between administrators and the people they are meant to serve, and can oversimplify, infer guilt, wrongly target and stereotype people through their categorizations and quantifications.

Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation and being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community review.

Source: Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Automating Public Services: Learning from Cancelled Systems

Harris: The future of malicious artificial intelligence applications is here

More on some of the more fundamental risks of AI:

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Russian state-owned facility. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. Presidential election. The multimillion dollar operation would reach 120 million people through Facebook alone. 

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia’s “troll factory” — the Russian Internet Research Agency — continues to fuel online radicalization and extremism. Thanks in no small part to their efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI — and despite some positive consequences, it’s also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers created a text generation system called GPT-3. GPT-3 can produce text that’s indistinguishable from human writing — including viral articles, tweets, and other social media posts. GPT-3 was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that AI researchers could follow to radically accelerate AI progress, and build much more capable, humanlike systems. 
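To give a sense of how low the technical bar has become, a few lines of code with an openly available model are enough to produce fluent continuations of any prompt. This sketch uses the Hugging Face transformers library and the small, older GPT-2 model rather than GPT-3 itself, purely as an illustration:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, openly available predecessor of GPT-3;
# the point is how little code fluent text generation takes.
generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Local residents say the new policy",
    max_new_tokens=40,
    do_sample=True,          # sample so the three outputs differ
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"], "\n---")
```

Larger models served through commercial APIs produce far more convincing text than this toy example, which is the scale problem the article describes.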

But it also opened a Pandora’s box of malicious AI applications. 

Text-generating AIs — or “language models” — can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and be leveraged to create automated bot armies and convincing fake news articles. 

This isn’t a distant future concern: it’s happening already. As early as 2020, Chinese efforts to interfere with Taiwan’s national election involved “the instant distribution of artificial-intelligence-generated fake news to social media platforms.”

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures based on any text prompt, have become reality this year for the first time. As AI-generated content becomes better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write better phishing emails than humans, and have code-writing capabilities that outperform human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises, rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can’t invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia. 

Our defence and security agencies could follow the lead of the U.K.’s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We’ve got a lot to rethink, and now is the time to get started.

Source: The future of malicious artificial intelligence applications is here

Trudel: Intelligence artificielle discriminatoire

A somewhat shallow analysis: the only area in which IRCC uses AI is visitor visas, not international students or other categories (unless that has changed), so Trudel’s argument may rest on a false understanding.

While concerns regarding AI are legitimate and need to be addressed, bias and noise are also common in human decision-making.

And differences in outcomes do not necessarily reflect bias and discrimination, though such differences do signal potential issues:

International Francophone students are subjected to treatment that has all the hallmarks of systemic discrimination. Africans, especially Francophones, face a disproportionate number of refusals of permits to stay in Canada for study purposes. The artificial intelligence (AI) systems used by federal immigration authorities have been blamed for these systemic biases.

MP Alexis Brunelle-Duceppe noted this month that “Francophone universities top the list […] for the number of refused study applications. It is not the universities themselves that refuse them, but the federal government. For example, applications from international students were refused at a rate of 79% at the Université du Québec à Trois-Rivières and 58% at the Université du Québec à Chicoutimi. As for McGill University, […] the figure is 9%.”

In February, the vice-president of the Université d’Ottawa, Sanni Yaya, pointed out that “in recent years, many permit applications processed by Immigration, Refugees and Citizenship Canada have been refused for often incomprehensible reasons and have taken abnormally long to process.” Yet these are students with scholarships guaranteed by their institution and strong files. The vice-president rightly asks whether there is an implicit bias on the part of the officer responsible for their assessment, convinced that they do not intend to leave Canada once their study permit expires.

In short, there is a body of evidence suggesting that the computerized decision-support tools used by federal authorities amplify systemic discrimination against Francophone students from Africa.

Flawed tools

This mess should make us take notice of the biases amplified by AI tools. Everyone is affected, because these technologies are an integral part of daily life. Phones with facial recognition, home assistants and even “smart” vacuum cleaners, not to mention the onboard systems in many vehicles, run on AI.

Professor Karine Gentelet and student Lily-Cannelle Mathieu explain, in an article published on the site of the Observatoire international sur les impacts sociétaux de l’IA et du numérique, that AI technologies, although often presented as neutral, are shaped by the social environment from which they emerge. They tend to reproduce and even amplify biases and inequitable power relations.

The researchers note that several studies have shown that, if not adequately regulated, these technologies exclude racialized populations, over-represent them within social categories deemed “problematic,” or perform poorly when applied to racialized individuals. They can reinforce discriminatory tendencies in various decision-making processes, such as policing, medical diagnoses, court decisions, hiring, school admissions, and even the calculation of mortgage rates.
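One basic check for the kind of disparity these researchers describe is to compare error rates across groups before a system is deployed. A minimal sketch with invented data (plain Python; a real audit would require far more care and context):

```python
from collections import defaultdict

# Each record: (group label, model prediction, true outcome) -- invented toy data.
records = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

# False positive rate per group: flagged (1) when the true outcome was 0.
flagged_wrongly = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in records:
    if actual == 0:
        negatives[group] += 1
        if predicted == 1:
            flagged_wrongly[group] += 1

for group in sorted(negatives):
    fpr = flagged_wrongly[group] / negatives[group]
    print(f"{group}: false positive rate {fpr:.0%}")

# A large gap between groups is a signal to investigate before deployment,
# not proof of a cause -- but it is the kind of check an impact assessment
# should require.
```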

A necessary law

Last June, the federal Minister of Innovation, Science and Industry tabled Bill C-27 to regulate the use of artificial intelligence technologies. The bill would impose transparency and accountability obligations on companies that make significant use of AI technologies.

The bill proposes to prohibit certain conduct in relation to AI systems that can cause serious harm to individuals. It contains provisions to hold companies that exploit these technologies accountable. The law would ensure appropriate governance and oversight of AI systems in order to prevent physical or psychological harm or economic loss to individuals.

It also aims to prevent biased outputs that draw an unjustified adverse distinction on one or more of the grounds of discrimination prohibited by human rights legislation. Users of AI technologies would be required to assess and mitigate the risks inherent in their systems. The bill would impose transparency obligations for systems with the potential for significant consequences for individuals. Those who make AI systems available would be required to publish clear explanations of how they operate as well as of the decisions, recommendations or predictions they make.

The discriminatory treatment experienced by many students from Francophone African countries illustrates the systemic biases that must be identified, analyzed and eliminated. It is a reminder that deploying AI technologies carries significant risks of reproducing the problematic tendencies of existing decision-making processes. To address such risks, we need legislation imposing strong transparency and accountability requirements on companies and public authorities alike. Above all, we must dispel the myth of the supposed “neutrality” of these technical tools.

Source: Intelligence artificielle discriminatoire