Friedman: Our New Promethean Moment

Friedman is always an interesting read on where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.
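As an aside that is not part of Friedman’s column: those AlphaFold predictions are openly downloadable, and a minimal sketch of fetching one from the public AlphaFold Protein Structure Database might look like the following. The endpoint pattern and JSON field names are my assumptions about the public API and may have changed, so check the current documentation at alphafold.ebi.ac.uk before relying on them.

```python
# Minimal sketch: fetch one AlphaFold-predicted protein structure.
# Endpoint and field names are assumptions about the public API, not
# anything described in the column above.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, chosen only as an example

resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}", timeout=30)
resp.raise_for_status()
entry = resp.json()[0]  # the API is assumed to return a list of prediction records

print("Protein:", entry.get("uniprotDescription"))
print("Sequence length:", len(entry.get("uniprotSequence", "")))
print("Predicted 3-D structure (PDB):", entry.get("pdbUrl"))

# Save the predicted coordinates for viewing in any standard structure viewer.
pdb_file = requests.get(entry["pdbUrl"], timeout=30)
with open(f"{UNIPROT_ID}_alphafold.pdb", "wb") as f:
    f.write(pdb_file.content)
```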

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment

Krauss: Artificially Intelligent Offense?

Of note, yet another concern and issue that needs to be addressed:

…Let’s be clear about this: Valid, empirically derived information is not, in the abstract, either harmful or offensive.

The reception of information can be offensive, and it can, depending upon the circumstances of the listener, potentially result in psychological or physical harm. But precisely because one cannot presume to know all such possible circumstances, following the OpenAI guidelines can instead sanction the censorship of almost any kind of information for fear that someone, somewhere, will be offended.

Even before ChatGPT, this was not a hypothetical worry. Recall the recent firing of a heralded NYT science reporter for using “the N-word” with a group of students in the process of explaining why the use of that word could be inappropriate or hurtful. The argument the NYT editors made was that “intent” was irrelevant. Offense is in the ear of the listener, and that overrides the intent of the speaker or the veracity of his or her argument.

A more relevant example, perhaps, involves the loony guidelines recently provided to editors and reviewers for the journals of the Royal Society of Chemistry to “minimise the risk of publishing inappropriate or otherwise offensive content.” As they describe it, “[o]ffence is a subjective matter and sensitivity to it spans a considerable range; however, we bear in mind that it is the perception of the recipient that we should consider, regardless of the author’s intention [italics mine] … Please consider whether or not any content (words, depictions or imagery) might have the potential to cause offence, referring to the guidelines as needed.”

Moreover, they define offensive content specifically as “Any content that could reasonably offend someone on the basis of their age, gender, race, sexual orientation, religious or political beliefs, marital or parental status, physical features, national origin, social status or disability.”

The mandate against offensiveness propounded by the RSC was taken to another level by the journal Nature Human Behaviour, which indicated that not only would they police language, but they would restrict the nature of scientific research they publish on the basis of social justice concerns about possible “negative social consequences for studied groups.” One can see echoes of both the RSC and Nature actions in the ChatGPT response to my questions.

The essential problem here is removing the obligation, or rather, the opportunity, all of us should have to rationally determine how we respond to potentially offensive content by instead ensuring that any such potentially offensive content may be censored. Intent and accuracy become irrelevant. Veto power in this age of potential victimization is given to the imaginary recipient of information.

Free and open access to information, even information that can cause pain or distress, is essential in a free society. As Christopher Hitchens so often stressed, freedom of speech is primarily important not because it provides an opportunity for speakers to speak out against prevailing winds but because that speech gives listeners or readers the freedom to realize they might want to change their minds.

The problem with the dialogues presented above is that ChatGPT appears to be programmed with a biased perception of what might be offensive or harmful. Moreover, it has been instructed to limit the information it provides to that which its programmers have deemed is neither. What makes this example more than an interesting—or worrying—anecdote is the emerging potential of AI chatbots to further exacerbate already disturbing trends.

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.
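To make the compounding concrete, here is a toy back-of-the-envelope simulation, not from Krauss’s essay and using invented numbers, of how a topic that a model tends to filter out could shrink in each successive training crawl once model-generated text becomes part of the training data:

```python
# Toy model of the feedback loop described above. All three numbers are
# invented assumptions for illustration; nothing here is measured.
corpus_share = 0.50        # initial share of crawled documents covering a "sensitive" topic
filter_rate = 0.30         # assumed fraction of that topic the chatbot declines to reproduce
synthetic_fraction = 0.40  # assumed share of each new crawl that is chatbot-generated

for generation in range(1, 7):
    # The model under-produces the filtered topic relative to its training data...
    model_output_share = corpus_share * (1 - filter_rate)
    # ...and the next crawl is a mix of the old corpus and the model's own output.
    corpus_share = (1 - synthetic_fraction) * corpus_share + synthetic_fraction * model_output_share
    print(f"generation {generation}: topic share of the crawl = {corpus_share:.3f}")
```

Under these made-up numbers the topic loses a fixed fraction of its presence every generation; the point is only to show how a one-time filtering choice compounds once outputs are recycled as training data.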

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

Source: Artificially Intelligent Offense?

Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines is better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of the need to pay for the computing power needed to make its systems work. 

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first-mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

How to reduce citizen harm from automated decision systems

While more at a local level, some good basic guidelines:

For agencies that use automated systems to inform decisions about schools, social services and medical treatment, it’s imperative that they’re using technology that protects data.

A new report finds that there’s little transparency about the automated decision-making (ADM) systems that state and local agencies use for many tasks, leading to unintended, detrimental consequences for the people they’re meant to help. But agencies can take steps to ensure that their organization buys responsible products.

The findings are shared in “Screened and Scored in the District of Columbia,” a new report from the Electronic Privacy Information Center (EPIC). Researchers spent 14 months investigating 29 ADM systems at about 20 Washington, D.C., government agencies. They chose that location because it’s where EPIC is located, said Thomas McBrien, law fellow at EPIC and one of four report authors.

The agencies use such systems to inform decisions about many activities, including assigning children to schools, understanding drivers’ travel patterns and informing medical decisions about patients, so it’s imperative that they’re using technology that protects data.

“Overburdened agencies turn to tech in the hope that it can make difficult political and administrative decisions for them,” according to the report. But “agencies and tech companies block audits of their ADM tools because companies claim that allowing the public to scrutinize the tools would hurt their competitive position or lead to harmful consequences. As a result, few people know how, when, or even whether they have been subjected to automated decision-making.”

Agencies can take four steps to mitigate the problem, McBrien said. First, agencies can require data minimization through contract language. “That’s basically the principle that when a company is rendering a service for an agency using its software, the agency should really ensure that the company isn’t taking more data than it needs to render that service,” he said.

That connects to his second recommendation, which is monitoring the downstream use of this data. Some ADM system vendors might take the data, run their services with it and that’s it, but others may share the data with their parent company or a subsidiary—or sell it to third parties.

“That’s where we see a lot of leakage of people’s personal data that can be really harmful, and definitely not what people are expecting their government to do for them,” McBrien said.

A third step is to audit for accuracy and bias. Sometimes, a tool used on one population or in one area can be very accurate, but applied to a different context, that accuracy may drop off and biased results could emerge. The only way to know whether that’s happening is by auditing and validating the system using the group of people you’re serving.

“The gold standard here would be to have an external auditor do this before you implement the system,” he said. But it’s a good idea to also do audits periodically to ensure that the algorithms the system uses are still accurate “because as the real world changes, the model of the real world it uses to make predictions should also be changing.”
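For illustration only, here is a minimal sketch of the kind of per-group accuracy audit described above: score a system’s past decisions against outcomes later established by caseworkers, broken out by the groups actually being served. The records and the ten-point threshold are made up for the example; the EPIC report does not prescribe this code.

```python
import pandas as pd

# Made-up audit records purely for illustration: one row per past automated
# decision, with the person's group, the system's flag (1 = flagged) and the
# true outcome later established by a caseworker.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"],
    "prediction": [ 1,   0,   0,   0,   1,   1,   0,   1,   1,   1,   0,   1 ],
    "outcome":    [ 1,   0,   0,   0,   1,   0,   0,   1,   0,   1,   0,   0 ],
})

overall_accuracy = (audit["prediction"] == audit["outcome"]).mean()

rows = []
for group, g in audit.groupby("group"):
    rows.append({
        "group": group,
        "n": len(g),
        "accuracy": (g["prediction"] == g["outcome"]).mean(),
        "flag_rate": g["prediction"].mean(),  # share of this group flagged by the system
    })
report = pd.DataFrame(rows)
print(report.to_string(index=False))

# Flag any group whose accuracy falls well below the overall rate -- the
# "accuracy drop-off in a new context" described above. The 10-point gap is
# an arbitrary illustrative threshold, not a recommendation from the report.
suspect = report[report["accuracy"] < overall_accuracy - 0.10]
print("\nGroups needing closer review:")
print(suspect.to_string(index=False))
```

A real audit would also look at error types (false flags versus missed cases), not just raw accuracy.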

Fourth, agencies should inform the public about their use of these systems, McBrien said, adding that it’s a good way to build trust. Meaningful public participation is the No. 1 recommendation to come out of a report by the Pittsburgh Task Force on Public Algorithms.

“Agencies should publish baseline information about the proposed system: what the system is, its purposes, the data on which it relies, its intended outcomes, and how it supplants or replaces existing processes, as well as likely or potential social, racial, and economic harms and privacy effects to be mitigated,” according to the report’s second recommendation.

It’s also important to share the outcome of any decision being made based on ADM systems, McBrien added. “People who are directly impacted by these systems are often the first ones to realize when there’s a problem,” he said. “I think it’s really important that when that outcome has been driven or informed by an algorithmic system, that that’s communicated to the person so they have the full picture of what happened.”

He added that privacy laws such as the California Privacy Rights Act of 2020 support transparency, as does an effort in that state to redefine state technology procurement as well as a bill in Washington state that would establish “guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”

Although he couldn’t say how prevalent such systems are among state and local agencies—in fact, EPIC’s report states that researchers couldn’t access all of the systems in D.C. because many agencies were unwilling to share information because of companies’ claims of trade secrets or other commercial protections—there are examples of their use elsewhere.

For instance, in 2019, New York City Mayor Bill de Blasio signed an executive order establishing an algorithms management and policy officer to be a central resource on algorithm policy and to develop guidelines and best practices on the city’s use of them. That move follows a 2017 law that made the city the first in the country to create a task force to study agencies’ use of algorithms. But that group’s work led to a shadow report highlighting the task force’s shortcomings.

“We definitely urge people to think of other solutions to these problems,” McBrien said. “Sometimes agencies implement that system and are locked into them for a long time and spend enormous amounts of money trying to fix them, manage the problem, ameliorate the harms of the system that could have been used to hire more caseworkers.”

Source: How to reduce citizen harm from automated decision systems

Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Interesting and significant study. A comparable study on automated decision-making systems that have been successful in minimizing injustice and inequality would be helpful, as well as recognition that automated systems can improve decision consistency as Kahneman and others demonstrated in Noise.

As these systems continue to grow to handle the increasing number of decisions required, greater care in their design and impacts will of course be necessary. But it would be a mistake to assume that all such systems are worse than human decision-making:

In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms. Alston was worried for good reason. Research shows that ADS can be used in ways that discriminate, exacerbate inequality, infringe upon rights, sort people into different social groups, wrongly limit access to services and intensify surveillance.

For example, families have been bankrupted and forced into crises after being falsely accused of benefit fraud. 

Researchers have identified how facial recognition systems and risk assessment tools are more likely to wrongly identify people with darker skin tones and women. These systems have already led to wrongful arrests and misinformed sentencing decisions.

Often, people only learn that they have been affected by an ADS application when one of two things happen: after things go wrong, as was the case with the A-levels scandal in the United Kingdom; or when controversies are made public, as was the case with uses of facial recognition technology in Canada and the United States.

Automated problems

Greater transparency, responsibility, accountability and public involvement in the design and use of ADS is important to protect people’s rights and privacy. There are three main reasons for this: 

  1. these systems can cause a lot of harm;
  2. they are being introduced faster than necessary protections can be implemented; and
  3. there is a lack of opportunity for those affected to make democratic decisions about whether they should be used and, if so, how they should be used.

Our latest research project, Automating Public Services: Learning from Cancelled Systems, provides findings aimed at helping prevent harm and contribute to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies. 

Researching the factors and rationales leading to cancellation of ADS systems helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.

Reasons for cancellation

There are a range of factors that influence decisions to cancel the uses of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action. 

Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.

There are a range of factors that influence decisions to cancel the uses of ADS systems. (Data Justice Lab, author provided)

Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.

In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step to greater transparency. Some countries, such as the U.K., have stated plans to do so, while other countries like Canada have yet to move in this direction.

Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.

Accountability and transparency

ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation. 

This engagement should include the option for community members to decide areas where they do not want ADS to be used. Examples of good government practice can include taking the time to ensure independent expert reviews and impact assessments that focus on equality and human rights are carried out. 

Governments can take several different approaches to implement ADS systems in a more accountable manner. (Data Justice Lab, author provided)

We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and redress wrong assessments. 

There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible. 

Government agencies need to have the resources and skills to thoroughly examine how they procure ADS systems.

A politics of care

All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights. 

Key questions need to be asked about how the use of ADS creates blind spots, because scoring and sorting systems that oversimplify, infer guilt, wrongly target and stereotype people through categorizations and quantifications increase the distance between administrators and the people they are meant to serve.

Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation and being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community review.

Source: Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

Automating Public Services: Learning from Cancelled Systems

Harris: The future of malicious artificial intelligence applications is here

More on some of the more fundamental risks of AI:

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Russian state-owned facility. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. Presidential election. The multimillion dollar operation would reach 120 million people through Facebook alone. 

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia’s “troll factory” — the Russian Internet Research Agency — continues to fuel online radicalization and extremism. Thanks in no small part to their efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI — and despite some positive consequences, it’s also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers created a text generation system called GPT-3. GPT-3 can produce text that’s indistinguishable from human writing — including viral articles, tweets, and other social media posts. GPT-3 was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that AI researchers could follow to radically accelerate AI progress, and build much more capable, humanlike systems. 

But it also opened a Pandora’s box of malicious AI applications. 

Text-generating AIs — or “language models” — can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and be leveraged to create automated bot armies and convincing fake news articles. 

This isn’t a distant future concern: it’s happening already. As early as 2020, Chinese efforts to interfere with Taiwan’s national election involved “the instant distribution of artificial-intelligence-generated fake news to social media platforms.”

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures based on any text prompt, have become reality this year for the first time. As AI-generated content becomes better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write better phishing emails than humans, and have code-writing capabilities that outperform human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises, rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can’t invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia. 

Our defence and security agencies could follow the lead of the U.K.’s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We’ve got a lot to rethink, and now is the time to get started.

Source: The future of malicious artificial intelligence applications is here

Trudel: Intelligence artificielle discriminatoire

Somewhat shallow analysis, as the only area where IRCC uses AI is visitor visas, not international students or other categories (unless that has changed). So Trudel’s argument may be based on a false understanding.

While concerns regarding AI are legitimate and need to be addressed, bias and noise are common to human decision making.

And differences in outcomes don’t necessarily reflect bias and discrimination, but these differences do signal potential issues:

International francophone students are being subjected to treatment that has all the hallmarks of systemic discrimination. Africans, especially francophones, receive a disproportionate number of refusals of permits to stay in Canada for study purposes. The artificial intelligence (AI) systems used by federal immigration authorities have been blamed for these systemic biases.

MP Alexis Brunelle-Duceppe noted this month that “francophone universities rank […] highest in the number of refused study applications. It is not the universities themselves that refuse them, but the federal government. For example, applications from international students were refused at a rate of 79% at the Université du Québec à Trois-Rivières and 58% at the Université du Québec à Chicoutimi. As for McGill University, […] we are talking about 9%.”

In February, the vice-rector of the Université d’Ottawa, Sanni Yaya, observed that “over the past few years, many permit applications processed by Immigration, Refugees and Citizenship Canada have been refused for reasons that are often incomprehensible, and have taken abnormally long to process.” Yet these are students with scholarships guaranteed by their institution and strong files. The vice-rector rightly wonders whether there is an implicit bias on the part of the officer responsible for their assessment, convinced that they do not intend to leave Canada once their study permit expires.

In short, there is a body of evidence pointing to the conclusion that the computerized decision-support tools used by federal authorities amplify systemic discrimination against francophone students from Africa.

Flawed tools

This mess should make us take notice of the biases amplified by AI tools. Everyone is affected, because these technologies are an integral part of daily life. Phones equipped with facial recognition, home assistants and even “smart” vacuum cleaners, not to mention the systems embedded in many vehicles, all run on AI.

Professor Karine Gentelet and student Lily-Cannelle Mathieu explain, in an article published on the website of the Observatoire international sur les impacts sociétaux de l’IA et du numérique, that AI technologies, although often presented as neutral, are shaped by the social environment from which they emerge. They tend to reproduce and even amplify biases and inequitable power relations.

The researchers note that several studies have shown that, when not adequately regulated, these technologies exclude racialized populations, overrepresent them within social categories considered “problematic,” or perform poorly when applied to racialized individuals. They can accentuate discriminatory tendencies in a range of decision-making processes, such as policing, medical diagnoses, court decisions, hiring, school admissions and even the calculation of mortgage rates.

A necessary law

Last June, the federal Minister of Innovation, Science and Industry introduced Bill C-27 to regulate the use of artificial intelligence technologies. The bill would impose transparency and accountability obligations on companies that make significant use of AI technologies.

The bill proposes to prohibit certain conduct involving AI systems that can cause serious harm to individuals. It contains provisions to hold accountable the companies that profit from these technologies. The law would guarantee appropriate governance and oversight of AI systems in order to prevent physical or psychological harm, or economic loss, to individuals.

The aim is also to prevent skewed outputs that draw an unjustified negative distinction on one or more of the grounds of discrimination prohibited by human rights legislation. Users of AI technologies would be required to assess and mitigate the risks inherent in their systems. The bill intends to establish transparency obligations for systems that could have significant consequences for individuals. Those who make AI systems available would be required to publish clear explanations of how they operate, as well as of the decisions, recommendations or predictions they make.

The discriminatory treatment experienced by many students from francophone African countries illustrates the systemic biases that must be identified, analyzed and eliminated. It is a reminder that deploying AI technologies carries significant risks of reproducing the problematic tendencies of decision-making processes. To confront such risks, we need legislation imposing strong transparency and accountability requirements on companies and public authorities alike. Above all, we must let go of the myth that these technical tools are somehow “neutral.”

Source: Intelligence artificielle discriminatoire

Roose: We Need to Talk About How Good A.I. Is Getting

Of note. World is going to become more complex, and the potential for AI in many fields will continue to grow, with these tools and programs increasingly able to replace, at least in part, professionals including government workers:

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.

OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.

Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”

DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)

What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
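For readers curious what “starts with a random series of pixels and refines it repeatedly” looks like as a loop, here is a conceptual skeleton of diffusion sampling. It is not OpenAI’s code: the denoiser below is a stand-in for the trained, text-conditioned neural network a system like DALL-E 2 actually uses, and the “prompt embedding” is faked so the loop runs end to end.

```python
# Conceptual skeleton of the reverse-diffusion loop described above.
# The denoiser is a stand-in for a learned model; everything here is
# illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def denoiser(noisy_image, prompt_embedding, step):
    # A real denoiser predicts the noise to remove, conditioned on the text
    # prompt. Here we just nudge the image toward the prompt target so the
    # loop runs end to end.
    return 0.1 * (prompt_embedding - noisy_image)

def generate(prompt_embedding, steps=50, size=(64, 64, 3)):
    image = rng.normal(size=size)          # "a random series of pixels"
    for step in reversed(range(steps)):    # refine repeatedly, high noise -> low noise
        image = image + denoiser(image, prompt_embedding, step)
        if step > 0:                       # real samplers re-inject a little noise each step
            image = image + 0.01 * rng.normal(size=size)
    return np.clip(image, -1.0, 1.0)

# The "prompt embedding" is faked as a flat color target purely for illustration.
fake_prompt = np.full((64, 64, 3), 0.5)
img = generate(fake_prompt)
print("Generated array:", img.shape, "value range:", img.min(), img.max())
```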

DALL-E 2 got a lot of attention when it was announced this year, and rightfully so. It’s an impressive piece of technology with big implications for anyone who makes a living working with images — illustrators, graphic designers, photographers and so on. It also raises important questions about what all of this A.I.-generated art will be used for, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.

But art is not the only area where artificial intelligence has been making major strides.

Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.

Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results.

But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible.

Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about.

But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

Or look at what’s happening with A.I.-generated text.

Only a few years ago, A.I. chatbots struggled even with rudimentary conversations — to say nothing of more difficult language-based tasks.

But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Then there’s Google’s LaMDA, an A.I. model that made headlines a couple of months ago when Blake Lemoine, a senior Google engineer, was fired after claiming that it had become sentient.

Google disputed Mr. Lemoine’s claims, and lots of A.I. researchers have quibbled with his conclusions. But take out the sentience part, and a weaker version of his argument — that LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows.

In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.

“It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”

In the past, A.I. progress was mostly obvious only to insiders who kept up with the latest research papers and conference presentations. But recently, Mr. Clark said, even laypeople can sense the difference.

“You used to look at A.I.-generated language and say, ‘Wow, it kind of wrote a sentence,’” Mr. Clark said. “And now you’re looking at stuff that’s A.I.-generated and saying, ‘This is really funny, I’m enjoying reading this,’ or ‘I had no idea this was even generated by A.I.’”

There is still plenty of bad, broken A.I. out there, from racist chatbots to faulty automated driving systems that result in crashes and injury. And even when A.I. improves quickly, it often takes a while to filter down into products and services that people actually use. An A.I. breakthrough at Google or OpenAI today doesn’t mean that your Roomba will be able to write novels tomorrow.

But the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036.

But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

“A.I. systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ms. Cotra told me. “People should take more seriously that A.I. could change things soon, and that could be really scary.”

There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.

There are also tech optimists who believe that A.I. progress is accelerating, and who want it to accelerate faster. Speeding A.I.’s rate of improvement, they believe, will give us new tools to cure diseases, colonize space and avert ecological disaster.

I’m not asking you to take a side in this debate. All I’m saying is: You should be paying closer attention to the real, tangible developments that are fueling it.

After all, A.I. that works doesn’t stay in a lab. It gets built into the social media apps we use every day, in the form of Facebook feed-ranking algorithms, YouTube recommendations and TikTok “For You” pages. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use A.I. to determine who’s eligible for loans, and police departments use it to investigate crimes.

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

It’s a cliché, in the A.I. world, to say things like “we need to have a societal conversation about A.I. risk.” There are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future.

What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.

I think three things could help here.

First, regulators and politicians need to get up to speed.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier.

We’ve seen a few efforts to close the gap — Stanford’s Institute for Human-Centered Artificial Intelligence recently held a three-day “A.I. boot camp” for congressional staff members, for example — but we need more politicians and regulators to take an interest in the technology. (And I don’t mean that they need to start stoking fears of an A.I. apocalypse, Andrew Yang-style. Even reading a book like Brian Christian’s “The Alignment Problem” or understanding a few basic details about how a model like GPT-3 works would represent enormous progress.)

Otherwise, we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings.

Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks. Right now, many of the biggest A.I. models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate P.R. or buried in inscrutable scientific papers.

Downplaying A.I. risks to avoid backlash may be a smart short-term strategy, but tech companies won’t survive long term if they’re seen as having a hidden A.I. agenda that’s at odds with the public interest. And if these companies won’t open up voluntarily, A.I. engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs into panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.

More broadly, most people think about A.I. narrowly as it relates to us — Will it take my job? Is it better or worse than me at Skill X or Task Y? — rather than trying to understand all of the ways A.I. is evolving, and what that might mean for our future.

I’ll do my part, by writing about A.I. in all its complexity and weirdness without resorting to hyperbole or Hollywood tropes. But we all need to start adjusting our mental models to make space for the new, incredible machines in our midst.

Source: We Need to Talk About How Good A.I. Is Getting

How VR and AI could revolutionize language training for newcomers

Innovative:

Adla Hitou is shamelessly showcasing her stellar work experience as she tries to convince the interviewer to hire her.

The Syrian newcomer answers every question that’s tossed her way.

“Give me that $3-trillion job,” she says, before bursting into laughter.

The assertive and fun-loving Hitou undertaking this mock job interview in virtual reality seems like an entirely different person from the timid mother of two who normally only whispers to others in her real-world classroom.

“I don’t feel nervous speaking in English in the virtual world because I just disappear. When I talk, I’m not afraid to make mistakes anymore. I just feel more confident,” says the 51-year-old Mississaugan, a former pharmacist who resettled in Canada in 2018 via Lebanon.

Alexander called the project’s use of VR in language learning “groundbreaking,” adding that the technology appears to help participants overcome their self-consciousness about communicating in their second language.

“At this point, VR is a great equalizer. When students are using VR, they seem to feel this freedom to want to be able to speak. I think part of it is, when they’re in the VR world of it, they’re less concerned about themselves and about making mistakes, where they are actually being represented by a cartoon character.”

On this sweltering Saturday, instructor Anthony Faulkner and volunteers coached the four female and three male adult learners on how to tackle questions at a job interview.

At first glance, there’s nothing atypical about the drill, a part of the English as a Second Language curriculum to help adult immigrants learn the language for their successful integration in their adopted country.

They talked about how to make a formal introduction of themselves, highlight their accomplishments and think on their feet when faced with the unexpected.

Sitting in a circle in the lab, Hitou — in a soft and gentle voice — told classmates in the physical world about her training as a pharmacist, her work experience as a project manager with the World Health Organization’s food and vaccination programs in Syria, and the civil war that forced her exile.

“I’m good at communication. I can take your ideas, relate the information and make a good presentation,” she said, adding personal details: “I love volunteering and do handcraft. I’ve made crochets and sold them at bazaars to raise money for charities. I am a bad seller but I have a kind heart.”

After the in-class session, with the help of a team of volunteers, the participants were invited to put on their headsets, lift the hand-held controllers and enter Faulkner’s virtual office resembling an executive suite — wood panelling, rows of bookshelves and a bronze chandelier.

In an instant, Hitou, in her white hijab and blue one-piece dress, transformed into an avatar in the virtual world, revealing her long silver hair and sporting a black business suit.

“How do you deal with failures and mistakes?” asked Faulkner, standing in the middle of the classroom and moving his hand-held controllers in the air to make his avatar do a “hands-open, palms-up” gesture.

Caught off guard by the surprise question, Hitou, behind her headset and facing a whiteboard in the physical world, confidently replied: “We are all humans. We have to learn from our mistakes and understand why we made the mistakes and failed. I do not give up.”

“This is so much more fun than learning English from books and notes in the traditional classroom,” she said later. “Now, I remember every word and thing that I see and learn in the virtual world when I go back to the real world.”

The virtual job interview is just one of many thematic VR experiences covered over the eight-week course. The research team developed the customized scenarios with help from ENGAGE, a virtual platform that simulates the way people interact in the physical world for multi-user events, collaboration, training and education. One scene involves a dinner party at a virtual highrise loft; another involves planning a Canadian road trip, where each newcomer is assigned to research a part of the country before going to different booths in a virtual conference hall to present these places and activities to their peers.

Instead of just viewing some Canadian landmarks on television as peers in a regular class might, the VR participants can take virtual tours of Niagara Falls, Toronto’s Eaton Centre and St. Lawrence Market through the VR 360 videos on YouTube VR.

Oakville’s Manar Mustafa, a computer engineer who fled war in Syria and came here in 2016, said she attended a regular English program at a newcomer settlement service agency but nothing can compare to VR learning.

“This is a perfect experience. I never used VR but everything feels so real in the VR world. I have not visited the Niagara Falls but now I have. It was right in front of me in the classroom and I didn’t even get wet,” the 36-year-old mother of four said with a chuckle.

“Initially, I felt dizzy (with motion sickness), but now I really love it. I feel very comfortable with it.” 

Her classmate, Afghan journalist Abdul Mujib Ebrahimi, who only arrived in Canada last November, said he hopes to quickly translate what he has learned from the virtual world to the real world.

“The VR experience pushes you in a real situation. You are a character and you use your imagination to communicate with others. If I don’t know a word or how to say something, I just explain it in a different way for people to understand me,” said the 27-year-old from Badakhshan. 

“I’m still trying to learn English and work with English, but I am more confident when I talk to real people.” 

Alexander cautions that it’s too early to determine the effectiveness of VR in language learning.

“We have to be careful of the novelty effect of VR. Our students are enjoying the experience maybe because they’re trying VR for the first time and they can tell their friends and family about it,” he explained.

The VR class this summer is part of a series of projects that also include participants in traditional classrooms and artificial-intelligence-assisted learning in front of a computer. The AI session will be launched in winter.

There are about 80 newcomers waiting to get in the program, according to Marwa Khobieh, executive director of the Syrian Canadian Foundation, which has been running a joint English tutoring program for newcomers with volunteers from U of T since 2017.

She concedes that VR language learning is expensive, since the required infrastructure and equipment are still in their infancy, but if it succeeds there is huge potential to use it to accelerate the learning and, ultimately, the integration of new immigrants.

“If we’re able to help newcomers learn and improve their language skills in two years instead of four, it’s worth the investment,” noted Khobieh. “Technology is our future and this can change the future of language training for newcomers.”

Source: How VR and AI could revolutionize language training for newcomers

‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Legitimate concerns about AI bias (a bias that individual human decision-makers also exhibit) also need to address “noise,” the variability in the decisions different people reach on comparable cases:

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriter Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or – in the case of lenders – assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
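To make that proxy problem concrete, here is a minimal, synthetic Python sketch (not any lender’s actual model; every variable name is hypothetical): a classifier trained only on “neutral” features such as income and postcode still reproduces a historical disparity, because postcode correlates with the group that was disfavoured in the training labels.

```python
# Minimal synthetic sketch: excluding a protected attribute does not prevent
# a correlated proxy (here, postcode) from reproducing historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical protected group membership -- never shown to the model.
group = rng.integers(0, 2, size=n)

# Postcode cluster correlates strongly with group (e.g. historic segregation).
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical labels encode past lending practice that disfavoured group 1.
income = rng.normal(50, 10, size=n)
past_approval = (income + 10 * (1 - group) + rng.normal(0, 5, n)) > 55

# Train on "neutral" features only: income and postcode, no group column.
features = np.column_stack([income, postcode])
model = LogisticRegression().fit(features, past_approval)

pred = model.predict(features)
for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2%}")
# The gap persists: the model recovers group membership from postcode.
```

Dropping the group column does not remove the disparity; the model simply infers it from the correlated postcode feature, which is exactly the proxy effect described above.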

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for and controlling for discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.
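One way to read that proposal is as a counterfactual invariance test. The Python sketch below is a hypothetical illustration of such a check, not causaLens’s actual method: flip the (binary, for simplicity) protected attribute for every applicant and confirm that no individual decision changes.

```python
# Hedged illustration of a counterfactual invariance check; all names are
# hypothetical and the model is a toy stand-in for a real credit model.
import numpy as np

class ToyModel:
    """Scores applicants on income and savings; column 2 is the protected attribute."""
    def predict(self, X):
        # Deliberately ignores column 2, so it should pass the check.
        return (X[:, 0] + 0.5 * X[:, 1] > 60).astype(int)

def counterfactually_invariant(model, X, protected_col):
    """Flip the binary protected attribute for every row and verify that
    no individual decision changes."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.array_equal(model.predict(X), model.predict(X_flipped))

rng = np.random.default_rng(1)
applicants = np.column_stack([
    rng.normal(55, 10, 1000),   # income
    rng.normal(10, 3, 1000),    # savings
    rng.integers(0, 2, 1000),   # protected attribute (included as an input)
])
print(counterfactually_invariant(ToyModel(), applicants, protected_col=2))  # True
```

A model that fails this check is changing outcomes for otherwise identical applicants based on the protected attribute, directly or through interactions with other features.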

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.

While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”

Source: ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives