ICYMI: Ottawa will prevent AI tools from discriminating against potential hires, Anand says

Of note:

The federal government will work to prevent artificial intelligence from discriminating against people applying for jobs in federal government departments, says Treasury Board President Anita Anand.

In a wide-ranging year-end interview with CBC News, Anand acknowledged concerns about the use of AI tools in hiring.

“There is no question that at all times, a person’s privacy needs to be respected in accordance with privacy laws, and that our hiring practices must be non-discriminatory and must be embedded with a sense of equality,” Anand said when asked about the government’s use of AI in its hiring process.

“Certainly, as a racialized woman, I feel this very deeply … We need to ensure that any use of AI in the workplace … has to be compliant with existing law and has to be able to stand the moral test of being non-discriminatory….”

Source: Ottawa will prevent AI tools from discriminating against potential hires, Anand says

How Canada is using AI to catch immigration fraud — and why some say it’s a problem

While I understand the worries, I also find that they are overwrought, given that the only way to manage large numbers is through AI and related IT tools.

And as Kahneman’s exhaustive survey of automated vs human systems in Noise indicates, automated systems deliver greater consistency than solely human systems.

So by all means, IRCC must make every effort to ensure that no untoward bias or discrimination is embedded in these systems, and that the discrimination inherent in any immigration or citizenship process (who gets in, who doesn’t) is evidence-based and aligned with policy objectives:

Canada is using a new artificial intelligence tool to screen would-be international students and visitors — raising questions about what role AI should be playing in determining who gets into the country.

Immigration officials say the tool improves their ability to figure out who may be trying to game Canada’s system, and insist that, at the end of the day, it’s human beings making the final decisions.

Experts, however, say we already know that AI can reinforce very human biases. One expert, in fact, said he expects some legitimate applicants to get rejected as a result.

Rolled out officially in January, the little-known Integrity Trends Analysis Tool (ITAT) — formerly called Lighthouse or Watertower — has mined the data set of 1.4 million study-permit applications and 2.9 million visitor applications.

What it’s searching for are clues of “risk and fraud patterns” — a combination of elements that, together, may be cause for additional scrutiny on a given file.

Officials say that, among study-permit applications alone, they have already identified more than 800 “unique risk patterns.”

Through ongoing updates based on fresh data, the AI-driven apparatus not only analyses these risk patterns but also flags incoming applications that match them.

It produces reports to assist officers in Immigration Risk Assessment Units, who determine whether an application warrants further scrutiny.
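
To make the mechanics a bit more concrete, here is a minimal, hypothetical sketch of how pattern-based flagging of this kind can work in principle. The pattern names, application fields and example data are all invented for illustration; IRCC has not disclosed ITAT’s actual risk patterns or implementation, and the output here is a report for human reviewers, not a decision.

```python
# Hypothetical sketch of pattern-based flagging, loosely following the ITAT
# description above. Field names, patterns and example data are invented.
from dataclasses import dataclass, field

@dataclass
class RiskPattern:
    name: str
    # Each condition maps an application field to the value that must match.
    conditions: dict = field(default_factory=dict)

def matches(application: dict, pattern: RiskPattern) -> bool:
    """True if every condition in the pattern is present in the application."""
    return all(application.get(k) == v for k, v in pattern.conditions.items())

def flag_for_review(applications: list[dict], patterns: list[RiskPattern]) -> list[dict]:
    """Produce a report of applications matching one or more known risk patterns.

    The report only assists human officers; it does not decide outcomes.
    """
    report = []
    for app in applications:
        hits = [p.name for p in patterns if matches(app, p)]
        if hits:
            report.append({"application_id": app["id"], "matched_patterns": hits})
    return report

# Example with invented data:
patterns = [RiskPattern("prior_refusal_same_representative",
                        {"prior_refusal": True, "uses_flagged_representative": True})]
apps = [{"id": "A-001", "prior_refusal": True, "uses_flagged_representative": True},
        {"id": "A-002", "prior_refusal": False, "uses_flagged_representative": False}]
print(flag_for_review(apps, patterns))
```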

“Maintaining public confidence in how our immigration system is managed is of paramount importance,” Immigration Department spokesperson Jeffrey MacDonald told the Star in an email.

“The use of ITAT has effectively allowed us to improve the way we manage risk by using technology to examine risk with a globalized lens.”

Helping with a big caseload

Each year, Canada receives millions of immigration applications — for temporary and permanent residence, as well as for citizenship — and the number has continued to grow.

The Immigration Department says the total number of decisions it renders per year increased from 4.1 million in 2018 to 5.2 million last year, with the overwhelming majority of applicants trying to obtain temporary-resident status as students, foreign workers and visitors; last year temporary-resident applications accounted for 80 per cent of the decisions the department rendered.

During the pandemic, the department was overwhelmed by skyrocketing backlogs in every single program, which spurred Ottawa to go on a hiring spree and fast-track its modernization to tackle the rising inventory of applications.

Enter: a new tool

ITAT, which was developed in-house and first piloted in the summer of 2020, is the latest instrument in the department’s tool box, one that goes beyond performing simple administrative tasks, such as responding to online inquiries, to more sophisticated functions, like detecting fraud.

MacDonald said ITAT can readily find connections across application records in immigration databases, which may include reports and dossiers produced by the Canada Border Services Agency or other law enforcement bodies. The tool, he said, helps officials identify applications that share characteristics with previously refused applications.

He said that in order to protect the integrity of the immigration system and investigative techniques, he could not disclose details of the risk patterns that are used to assess applications.

However, MacDonald stressed that “every effort is taken to ensure risk patterns do not create actual or perceived bias as it relates to Charter-protected factors, such as gender, age, race or religion.

“These are reviewed carefully before weekly reports are distributed to risk assessment units.”

A government report about ITAT released last year did make reference to the “adverse characteristics” monitored for in an application, such as inadmissibility findings (e.g. criminality and misrepresentation) and other records of immigration violations, such as overstaying or working without authorization.

The report said that in the past, risk assessment units reviewed a random sample of applications to detect fraud through various techniques, including phone calls, site visits or in-person interviews. The results of the verification activity are shared with processing officers whether or not fraudulent information was found.

The report suggested the new tool is meant to assist these investigations. MacDonald emphasized that ITAT does not recommend or make decisions on applications, and the final decisions on applications still rest with the processing officers.

Unintended influence?

However, that doesn’t mean the use of the tool won’t influence an officer’s decision-making, said Ebrahim Bagheri, director of the Natural Sciences and Engineering Research Council of Canada’s collaborative program on responsible AI development. He said he expects human staff to wrongfully flag and reject applicants out of deference to ITAT.

Bagheri, who specializes in information retrieval, social media analytics and software and knowledge engineering, said humans tend to heed such programs too much: “You’re inclined to agree with the AI at the unconscious level, thinking — again unconsciously — there may have been things that you may have missed and the machine, which is quite rigorous, has picked up.”

While the shift toward automation, AI-assisted and data-driven decision-making is part of a global trend, immigration lawyer Mario Bellissimo says the technology is not advanced enough yet for immigration processing.

“Most of the experts are pretty much saying, relying on automated statistical tools to make decisions or to predict risk is a bad idea,” said Bellissimo, who takes a personal interest in studying the use of AI in Canadian immigration.

“AI is required to achieve precision, scale and personalization. But the tools aren’t there yet to do that without discrimination.”

The shortcomings of AI

A history of multiple marriages might be a red flag to AI, suggesting a marriage of convenience, Bellissimo said. But what could’ve been omitted in the assessment of an application were the particular facts — that the person’s first spouse had passed away, for instance, or even that the second ran away because it was a forced marriage.

“You need to know what the paradigm and what the data set is. Is it all based on the Middle East, Africa? Are there different rules?” asked Bellissimo.

“To build public confidence in data, you need external audits. You need a data scientist and a couple of immigration practitioners to basically validate (it). That’s not being done now and it’s a problem.”

Bagheri said AI can reinforce its own findings and recommendations when they are acted on, creating new data on rejections and approvals that conform to its conclusions.

“Let’s think about an AI system that’s telling you who’s the risk to come to Canada. It flags a certain set of applications. The officers will look at it. They will decide on the side of caution. They flag it. That goes back to the system,” he said.

“And you just think that you’re becoming more accurate where they’re just intensifying the biases.”
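
Bagheri’s feedback-loop worry can be illustrated with a toy calculation. The sketch below is not a model of any real system: it simply assumes a scoring tool whose flags are usually confirmed by cautious officers and that is then retrained on those confirmations, and shows how flag rates drift upward round after round. All numbers are invented.

```python
# Toy illustration of the feedback loop described above: the model's flags are
# mostly confirmed by deferential officers, those refusals feed the next round
# of training, and the flag rate drifts upward. All parameters are invented.

flag_rate = {"group_A": 0.10, "group_B": 0.18}   # initial model flag rates
OFFICER_AGREEMENT = 0.95   # officers refuse 95% of flagged files (deference)
REINFORCEMENT = 1.25       # retraining over-weights the confirmed refusals

for generation in range(5):
    for group, rate in flag_rate.items():
        observed_refusal_rate = rate * OFFICER_AGREEMENT
        # The next-generation model learns from its own (confirmed) output.
        flag_rate[group] = min(1.0, observed_refusal_rate * REINFORCEMENT)
    print(generation, {g: round(r, 3) for g, r in flag_rate.items()})
```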

Bellissimo said immigration officials have been doing a poor job in communicating to the public about the tool: “There is such a worry about threat actors that they’re putting so much behind the curtain (and) the public generally has no confidence in this use.”

Bagheri said immigration officials should limit their use of AI tools to optimizing resources and administering processes, such as using robots to answer emails, screen eligibility and triage applications — freeing up officers for the actual risk assessment and decision-making.

“I think the decisions on who we welcome should be based on compassion and a welcoming approach, rather than a profiling approach,” he said.

Source: How Canada is using AI to catch immigration fraud — and why some say it’s a problem

In Reversal Because of A.I., Office Jobs Are Now More at Risk

Implications for governments are immense, given the large number of clerical and administrative jobs. However, given government inertia, bureaucracy, unions and other interests, governments will likely lag the private sector significantly, to the detriment of citizen service:

The American workers who have had their careers upended by automation in recent decades have largely been less educated, especially men working in manufacturing.

But the new kind of automation — artificial intelligence systems called large language models, like ChatGPT and Google’s Bard — is changing that. These tools can rapidly process and synthesize information and generate new content. The jobs most exposed to automation now are office jobs, those that require more cognitive skills, creativity and high levels of education. The workers affected are likelier to be highly paid, and slightly likelier to be women, a variety of research has found.

“It’s surprised most people, including me,” said Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered A.I., who had predicted that creativity and tech skills would insulate people from the effects of automation. “To be brutally honest, we had a hierarchy of things that technology could do, and we felt comfortable saying things like creative work, professional work, emotional intelligence would be hard for machines to ever do. Now that’s all been upended.”

A range of new research has analyzed the tasks of American workers, using the Labor Department’s O*Net database, and hypothesized which of them large language models could do. It has found these models could significantly help with tasks in one-fifth to one-quarter of occupations. In a majority of jobs, the models could do some of the tasks, found the analyses, including from Pew Research Center and Goldman Sachs.

For now, the models still sometimes produce incorrect information, and are more likely to assist workers than replace them, said Pamela Mishkin and Tyna Eloundou, researchers at OpenAI, the company and research lab behind ChatGPT. They did a similar study, analyzing the 19,265 tasks done in 923 occupations, and found that large language models could do some of the tasks that 80 percent of American workers do.
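
The kind of task-level tally these studies describe is simple to picture. Here is a rough sketch using a handful of invented task ratings in the spirit of O*Net; it is not the researchers’ data or their actual exposure rubric.

```python
# Rough sketch of an exposure tally: given task-level ratings per occupation
# (invented here, not O*Net data or OpenAI's rubric), compute what share of
# each occupation's listed tasks an LLM could plausibly assist with.
from collections import defaultdict

# (occupation, task, exposed?) — illustrative records only
task_ratings = [
    ("paralegal", "summarize deposition transcripts", True),
    ("paralegal", "file documents with the court clerk", False),
    ("customer support agent", "draft replies to billing questions", True),
    ("customer support agent", "escalate outages to engineering", True),
    ("roofer", "install shingles", False),
    ("roofer", "schedule customer appointments", True),
]

tasks_by_occupation = defaultdict(list)
for occupation, _task, exposed in task_ratings:
    tasks_by_occupation[occupation].append(exposed)

for occupation, flags in tasks_by_occupation.items():
    share = sum(flags) / len(flags)
    print(f"{occupation}: {share:.0%} of listed tasks exposed")
```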

Yet they also found reason for some workers to fear that large language models could displace them, in line with what Sam Altman, OpenAI’s chief executive, told The Atlantic last month: “Jobs are definitely going to go away, full stop.”

The researchers asked an advanced model of ChatGPT to analyze the O*Net data and determine which tasks large language models could do. It found that 86 jobs were entirely exposed (meaning every task could be assisted by the tool). The human researchers said 15 jobs were. The job that both the humans and the A.I. agreed was most exposed was mathematician.

Just 4 percent of jobs had zero tasks that could be assisted by the technology, the analysis found. They included athletes, dishwashers and those assisting carpenters, roofers or painters. Yet even tradespeople could use A.I. for parts of their jobs like scheduling, customer service and route optimization, said Mike Bidwell, chief executive of Neighborly, a home services company.

While OpenAI has a business interest in promoting its technology as a boon to workers, other researchers said there were still uniquely human capabilities that were not (yet) able to be automated — like social skills, teamwork, care work and the skills of tradespeople. “We’re not going to run out of things for humans to do anytime soon,” Mr. Brynjolfsson said. “But the things are different: learning how to ask the right questions, really interacting with people, physical work requiring dexterity.”

For now, large language models will probably help many workers be more productive in their existing jobs, researchers say, akin to giving office workers, even entry-level ones, a chief of staff or a research assistant (though that could signal trouble for human assistants).

Take writing code: A study of GitHub’s Copilot, an A.I. program that helps programmers by suggesting code and functions, found that those using it were 56 percent faster than those doing the same task without it.

“There’s a misconception that exposure is necessarily a bad thing,” Ms. Mishkin said. After reading descriptions of every occupation for the study, she and her colleagues learned “an important lesson,” she said: “There’s no way a model is ever going to do all of this.”

Large language models could help write legislation, for instance, but could not pass laws. They could act as therapists — people could share their thoughts, and the models could respond with ideas based on proven regimens — but they do not have human empathy or the ability to read nuanced situations.

The version of ChatGPT open to the public has risks for workers — it often gets things wrong, can reflect human biases, and is not secure enough for businesses to trust with confidential information. Companies that use it get around these obstacles with tools that tap its technology in a so-called closed domain — meaning they train the model only on certain content and keep any inputs private.

Morgan Stanley uses a version of OpenAI’s model made for its business that was fed about 100,000 internal documents, more than a million pages. Financial advisers use it to help them find information to answer client questions quickly, like whether to invest in a certain company. (Previously, this required finding and reading multiple reports.)

It leaves advisers more time to talk with clients, said Jeff McMillan, who leads data analytics and wealth management at the firm. The tool does not know about individual clients or whether a human touch might be needed, such as when they are going through a divorce or illness.
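
The “closed domain” pattern described above is, in essence, retrieval plus generation over a private document store. Here is a minimal sketch of the retrieval half; the document snippets are invented, the similarity scoring is deliberately naive, and the generation step is left as a placeholder, since neither firm’s actual stack is described in the article.

```python
# Minimal sketch of a "closed domain" setup: the model is only shown passages
# retrieved from a private, internal document store, and inputs stay in-house.
# Retrieval here is naive keyword overlap; real deployments would use
# embeddings. The generation step is a stub because the firms' stacks are
# not described in the article.

INTERNAL_DOCS = {
    "equity-research-001": "Analyst note: Acme Corp margins improved in Q2 ...",
    "policy-advice-014": "Guidance for advisers discussing concentrated positions ...",
}

def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank internal documents by crude keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:top_k]]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, INTERNAL_DOCS))
    prompt = f"Answer using ONLY the context below.\n\nContext:\n{context}\n\nQ: {question}"
    # call_private_llm() would be whatever closed-domain model the firm hosts;
    # it is a placeholder, not a real API.
    return prompt  # in a real system: return call_private_llm(prompt)

print(answer("What did the analysts say about Acme Corp margins?"))
```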

Aquent Talent, a staffing firm, is using a business version of Bard. Usually, humans read through workers’ résumés and portfolios to find a match for a job opening; the tool can do it much more efficiently. Its work still requires a human audit, though, especially in hiring, because human biases are built in, said Rohshann Pilla, president of Aquent Talent.

Harvey, which is funded by OpenAI, is a start-up selling a tool like this to law firms. Senior partners use it for strategy, like coming up with 10 questions to ask in a deposition or summarizing how the firm has negotiated similar agreements.

“It’s not, ‘Here’s the advice I’d give a client,’” said Winston Weinberg, a co-founder of Harvey. “It’s, ‘How can I filter this information quickly so I can reach the advice level?’ You still need the decision maker.”

He says it’s especially helpful for paralegals or associates. They use it to learn — asking questions like: What is this type of contract for, and why was it written like this? — or to write first drafts, like summarizing a financial statement.

“Now all of a sudden they have an assistant,” he said. “People will be able to do work that’s at a higher level faster in their career.”

Other people studying how workplaces use large language models have found a similar pattern: They help junior employees most. A study of customer support agents by Professor Brynjolfsson and colleagues found that using A.I. increased productivity 14 percent overall, and 35 percent for the lowest-skilled workers, who moved up the learning curve faster with its assistance.

“It closes gaps between entry-level workers and superstars,” said Robert Seamans of N.Y.U.’s Stern School of Business, who co-wrote a paper finding that the occupations most exposed to large language models were telemarketers and certain teachers.

The last round of automation, affecting manufacturing jobs, increased income inequality by depriving workers without college educations of high-paying jobs, research has shown.

Some scholars say large language models could do the opposite — decreasing inequality between the highest-paid workers and everyone else.

“My hope is it will actually allow people with less formal education to do more things,” said David Autor, a labor economist at M.I.T., “by lowering barriers to entry for more elite jobs that are well paid.”

Source: In Reversal Because of A.I., Office Jobs Are Now More at Risk

The Gatekeepers of Knowledge Don’t Want Us to See What They Know

Meanwhile, the Conservative focus solely on Canadian gatekeepers:

We are living through an information revolution. The traditional gatekeepers of knowledge — librarians, journalists and government officials — have largely been replaced by technological gatekeepers — search engines, artificial intelligence chatbots and social media feeds.

Whatever their flaws, the old gatekeepers were, at least on paper, beholden to the public. The new gatekeepers are fundamentally beholden only to profit and to their shareholders.

That is about to change, thanks to a bold experiment by the European Union.

With key provisions going into effect on Aug. 25, an ambitious package of E.U. rules, the Digital Services Act and Digital Markets Act, is the most extensive effort toward checking the power of Big Tech (beyond the outright bans in places like China and India). For the first time, tech platforms will have to be responsive to the public in myriad ways, including giving users the right to appeal when their content is removed, providing a choice of algorithms and banning the microtargeting of children and of adults based upon sensitive data such as religion, ethnicity and sexual orientation. The reforms also require large tech platforms to audit their algorithms to determine how they affect democracy, human rights and the physical and mental health of minors and other users.

This will be the first time that companies will be required to identify and address the harms that their platforms enable. To hold them accountable, the law also requires large tech platforms like Facebook and Twitter to provide researchers with access to real-time data from their platforms. But there is a crucial element that has yet to be decided by the European Union: whether journalists will get access to any of that data.

Journalists have traditionally been at the front lines of enforcement, pointing out harms that researchers can expand on and regulators can act upon. The Cambridge Analytica scandal, in which we learned how consultants for Donald Trump’s presidential campaign exploited the Facebook data of millions of users without their permission, was revealed by The New York Times and The Observer of London. BuzzFeed News reported on the offensive posts that detailed Facebook’s role in enabling the massacre of Rohingyas. My team when I worked at ProPublica uncovered how Facebook allows advertisers to discriminate in employment and housing ads.

But getting data from platforms is becoming harder and harder. Facebook has been particularly aggressive, shutting down the accounts of researchers at New York University in 2021 for “unauthorized means” of accessing Facebook ads. That year, it also legally threatened a European research group, AlgorithmWatch, forcing it to shut down its Instagram monitoring project. And earlier this month, Twitter began limiting all its users’ ability to view tweets in what the company described as an attempt to block automated collection of information from Twitter’s website by A.I. chatbots as well as bots, spammers and other “bad actors.”

Meanwhile, the tech companies have also been shutting down authorized access to their platforms. In 2021, Facebook disbanded the team that oversaw the analytics tool CrowdTangle, which many researchers used to analyze trends. This year, Twitter replaced its free researcher tools with a paid version that is prohibitively expensive and unreliable. As a result, the public has less visibility than ever into how our global information gatekeepers are behaving.

Last month, the U.S. senator Chris Coons introduced the Platform Accountability and Transparency Act, legislation that would require social media companies to share more data with researchers and provide immunity to journalists collecting data in the public interest with reasonable privacy protections.

But as it stands, the European Union’s transparency efforts rest on European academics who will apply to a regulatory body for access to data from the platforms and then, hopefully, issue research reports.

That is not enough. To truly hold the platforms accountable, we must support the journalists who are on the front lines of chronicling how despots, trolls, spies, marketers and hate mobs are weaponizing tech platforms or being enabled by them.

The Nobel Peace Prize winner Maria Ressa runs Rappler, a news outlet in the Philippines that has been at the forefront of analyzing how Filipino leaders have used social media to spread disinformation, hijack social media hashtags, manipulate public opinion and attack independent journalism.

Last year, for instance, Rappler revealed that the majority of Twitter accounts using certain hashtags in support of Ferdinand Marcos Jr., who was then a presidential candidate, had been created in a one-month period, making it likely that many of them were fake accounts. With the Twitter research feed that Rappler used now shuttered, and the platforms cracking down on data access, it’s not clear how Ms. Ressa and her colleagues can keep doing this type of important accountability journalism.

Ms. Ressa asked the European Commission, in public comments filed in May, to provide journalists with “access to real-time data” so they can provide “a macro view of patterns and trends that these technology companies create and the real-world harms they enable.” (I also filed comments to the European Commission, along with more than a dozen journalists, asking the commission to support access to platform data for journalists.)

As Daphne Keller, the director of the program on platform regulation at Stanford’s Cyber Policy Center, argues in her comments to the European Union, allowing journalists and researchers to use automated tools to collect publicly available data from platforms is one of the best ways to ensure transparency because it “is a rare form of transparency that does not depend on the very platforms who are being studied to generate information or act as gatekeepers.”

Of course, the tech platforms often push back against transparency requests by claiming that they must protect the privacy of their users. Which is hilarious, given that their business models are based on mining and monetizing their users’ personal data. But putting that aside, the privacy interests of users are not being implicated here: The data that journalists need is already public for anyone who has an account on these services.

What journalists lack is access to large quantities of public data from tech platforms in order to understand whether an event is an anomaly or representative of a larger trend. Without that access, we will continue to have what we have now: a lot of anecdotes about this piece of content or that user being banned, but no real sense of whether these stories are statistically significant.

Journalists write the first draft of history. If we can’t see what is happening on the biggest speech platforms in the globe, that history will be written for the benefit of platforms — not the public.

Source: The Gatekeepers of Knowledge Don’t Want Us to See What They Know

AI Makes Its Way to Immigration With New Tool to Aid Attorneys

Perhaps this may make some immigration lawyers less instinctively hostile to the use of AI by the government:

The makers of a new software platform are turning to artificial intelligence to boost immigration attorneys’ research and drafting efforts.

The American Immigration Lawyers Association is partnering with Visalaw.Ai, a platform built to aid attorneys with research, summarizing and drafting documents, to launch a product similar to OpenAI’s ChatGPT that will specialize in immigration-focused administrative and case law. AILA will allow its 16,000 members to beta test a tool—dubbed Gen—focused on research and summarization beginning this week at its annual conference outside of Orlando, Fla.

Additional tools are planned for subsequent rollouts that will aid in drafting legal documents and engaging clients.

“We think this will be a tremendous time saver for lawyers conducting research on a regular basis,” said Greg Siskind, a co-founder of Visalaw.Ai and partner at immigration firm Siskind Susser PC.

Attorneys’ use of AI tools like ChatGPT—a chatbot that searches vast tracts of information online based on human-like exchanges—can come with legal pitfalls.

One lawyer landed in hot water in federal district court in New York after filing a brief full of fictitious citations generated by the platform. And use of the open source software potentially could expose confidential client information because users submit information to train the AI platforms.

Siskind said the Visalaw platform will include a private feature, allowing members to draw on information from the platform without sending client information back. Partnering with AILA will also address quality issues by feeding the tool specific information related to immigration law that’s drawn from a huge legal library of regulations and secondary sources, he said.

“It’s set to be conservative in how it answers,” Siskind said of the platform.

Expanding use of technology could help close the gap in immigrants’ access to legal representation, said AILA Executive Director Benjamin Johnson. It’s also important for the organization to get involved in shaping new technology platforms for the immigration bar while they’re being developed, instead of reacting afterward, he said.

Much of the work of immigration law involves submitting forms and documents, rather than practicing in court. But the risks of technology being improperly used mean AILA has a responsibility to make sure any tools offered to its members are accurate and effective, Johnson said.

“We can stand on the sidelines and let somebody else shape the future for us. Or we can get engaged and determine how this should affect the immigration bar and the practice of immigration law,” he said. “In this environment, nobody can afford to stand on the sidelines.”

Access to the platform will be subscription-based, although Siskind said final pricing is still being worked out. AILA’s long-term relationship with the platform will be determined by members’ interactions with it, Johnson said.

Source: AI Makes Its Way to Immigration With New Tool to Aid Attorneys

How AI is helping Canada keep some people out of the country. And why there are those who say it’s a problem

Well, that’s what filtering does, whether human or AI-based. With volumes this high, AI is needed to maintain applicant service. Yes, more transparency and accountability are needed, but that applies to both human and artificial intelligence and decision-making:

Artificial intelligence is helping authorities keep some people out of Canada.

“Project Quantum,” as it’s been dubbed, is a largely unknown AI-assisted pilot project that’s been undertaken by the Canada Border Services Agency.

It essentially screens air travellers before they take off for Canada. In thousands of cases in recent years, it has led the CBSA to recommend a traveller be stopped before even getting on their flight.

Authorities say the program is meant to flag people who could be a threat to this country.

But just who the government is stopping at international airports — and the criteria used to select them — isn’t clear. That has led critics to question how we know the AI-assisted program is targeting the right people, and that discrimination isn’t somehow baked into its process.

Language on the CBSA’s website also says the program is meant to address the issue of irregular migration. Some are concerned it’s having the effect of making asylum — already restricted under Canada’s Safe Third Country Agreement with the U.S. — even harder to gain in this country.


Officials say the pre-departure risk-assessment matches passengers’ personal information from commercial air carriers with pre-established indicators of risk identification models that are designed by border officials.

The risk identification models have been developed, they say, based on passenger information sent from commercial carriers to CBSA.
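
For illustration only, here is a hypothetical sketch of what matching advance passenger information against pre-established risk indicators might look like in its simplest form. The indicator names, weights and threshold are invented; the CBSA has not disclosed its models, and as described above the output is a referral to a liaison officer, not a decision.

```python
# Illustrative sketch only: matching advance passenger information against
# pre-established risk indicators to produce a *referral*, not a decision.
# Indicator names, weights and the threshold are invented; the CBSA has not
# disclosed its actual models.

RISK_INDICATORS = {
    "ticket_purchased_cash_last_minute": 2,
    "itinerary_matches_known_smuggling_route": 3,
    "travel_document_previously_reported_lost": 2,
}
REFERRAL_THRESHOLD = 4

def assess_passenger(passenger: dict) -> dict:
    score = sum(w for name, w in RISK_INDICATORS.items() if passenger.get(name))
    return {
        "passenger_id": passenger["id"],
        "score": score,
        # A match only routes the file to a liaison officer for human review.
        "refer_to_liaison_officer": score >= REFERRAL_THRESHOLD,
    }

print(assess_passenger({"id": "P-123",
                        "ticket_purchased_cash_last_minute": True,
                        "itinerary_matches_known_smuggling_route": True}))
```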

Between its inception in 2019 and the end of last year, Project Quantum referred 13,863 travellers to its overseas liaison officers for further assessments. In total, the CBSA recommended to air carriers that they refuse boarding to 6,182 travellers on flights to Canada.

The program comes against the backdrop of increasing constraints on irregular migration to Canada. Earlier this year, Ottawa and Washington expanded a bilateral agreement to deny foreign nationals access to asylum across the entire Canada-U.S. border — not just at the official ports of entry. As a result, the number of irregular migrants to Canada has plummeted.

Given Canada’s unique geography and how it is buffered by the U.S., the new interdiction tool against air travellers further limits asylum seekers’ options.

“The objective of this strategy is to push the border out as far as possible, ideally outside of Canada, to where a person lives,” contends University of New Brunswick law professor Benjamin Perryman, who represents two Hungarian Roma families in fighting the CBSA cancellation of their electronic travel authorizations.

“This new technology comes with substantial risks of human rights violations. We’ve seen that in other areas. When we don’t have transparency and oversight in place, it raises some pretty big concerns.”

The CBSA, established in 2003 as the immigration enforcement arm, is the only public safety department without an outside civilian oversight body, despite the border officers’ power to carry firearms, arrest and detain — authorities similar to those of police officers.

Since being elected in 2015, the Liberal government has promised to establish a watchdog for the CBSA, but a bill has yet to be passed to set up such an infrastructure for accountability.

In an email to the Star, CBSA spokesperson Jacqueline Roby said the pilot program seeks to “detect illicit migration concerns” for air travellers at the earliest point.

“It is a targeting approach, i.e. an operational practice, that supports and guides the CBSA officers to identify high-risk and potential illegal activity,” Roby explained.

Those activities, she added, include terrorism or terror-related crimes, human-smuggling or trafficking and other serious transnational crimes.

However, advocates are concerned that the risk indicators of these models could be rife with unintended biases and that they are being used to detect and interdict undesirable travellers, including prospective asylum seekers in search of protection in Canada.

‘Based on quantum referral’

Gabor Lukacs, founder and president of Air Passenger Rights, an advocacy group for travellers, says there’s been very little public information about the pilot program. He only came across Project Quantum through a recent court case involving two Roma travellers who were refused boarding on an Air Transat flight in London.

Immigration documents showed the couple were flagged “based on quantum referral.”

The two were to visit a family member in Canada, who arrived previously and sought asylum and who is now a permanent resident. Both travellers had their valid electronic travel authorization — an entry requirement for visa-exempt visitors — cancelled as a result.

The case raises questions, in Lukacs’s view, of whether an ethnic name, such as a Roma name, or information about connections to a former refugee in Canada, was among the indicators used by the program to flag passengers. The CBSA declined to answer questions about these concerns, saying to do so could compromise the program’s integrity.

“The problem with AI is it has very high potential of unintended racial and ethnic biases. It’s far from clear to me if there’s a proper distinction in the training of this software between refugee claimants and criminals,” he noted.

Lukacs’s concern is partly based on a written presentation in 2017 by the immigration department about the application of advanced and predictive analytics to identify patterns that enable prediction of future behaviours, and how combinations of applicant characteristics correlate with application approvals, refusals and frauds.

“In the future, we aim to predict undesirable behaviours (e.g. criminality or refugee claims),” said the document released in response to an access-to-information request by Perryman.

The border agency would not say whether potential refugees are flagged, but it said Project Quantum is part of its National Targeting Program, which identifies people and goods bound for Canada that may pose a threat to the country’s security and safety.

Assisted by human intelligence, it uses automated advance information sources from carriers and importers to identify those risks.

The number of quantum referrals for assessments has grown with the number of “flights” tested with the modelling — from 18 in 2019 to 32 by February 2022. The border agency would not say if the “flight” refers to route or participating air carrier, but said the pilot project is “ongoing.”

The National Targeting Centre will alert the relevant CBSA liaison officer abroad to assess the referral if a traveller matches the criteria set out in the risk identification models. The officer, if warranted, engages with the air carrier and/or traveller before making a “board” or “no-board” recommendation.

Opened in Ottawa in 2012, the targeting centre, among other responsibilities, runs the liaison officer network, which started with more than 60 officers in 40 countries.

Roby declined to reveal how the flights or routes were selected, in what regions the modelling assessment was deployed or what indicators travellers were measured against, saying that “would compromise the integrity of the program.”

Critics call the lack of information and transparency troubling, given the complaints of alleged ethnic profiling against CBSA in recent years, including an ongoing court case by a Hungarian couple, who were denied boarding to visit family members who were former refugees.

“If CBSA considers association with refugees an indicator of the person allegedly intending to do something nefarious like overstaying or worse, that’s in and of itself a problem,” said Lukacs.

“The bigger issue is what data sets and indicators have been used for teaching and training the algorithms to decide who to flag.”

Roby of the CBSA says the agency takes these concerns seriously in developing a new targeting tool to ensure compliance with the Canadian Charter of Rights and Freedoms.

“The Agency works to eliminate any systemic racism or unconscious bias in its operations, its work and policies, which includes addressing instances where racialized Canadians and newcomers have faced additional barriers, and ensuring that minority communities are not subject to unfair treatment,” she said.

Officials must follow strict guidelines to protect the privacy of passengers and crew members and the data is stored in a secure system accessible only by authorized personnel, she added. The use of this data is subject to an audit process and users are liable for any misuse.

However, Project Quantum is not governed by the federal oversight required under the directive on automated decision-making.

The Treasury Board’s “Algorithmic Impact Assessment (AIA) would not apply here. The CBSA relies on the knowledge, training, expertise, and experience of border officers to make the final determination on what or who should be targeted,” Roby explained in an email.

“The CBSA provides advice to an air carrier, however, it is then up to the air carrier to decide whether or not to follow the recommendation.”

The initiative also raises legal questions about Canadian officials’ authority to engage in extraterritorial enforcement and their compliance with the Charter of Rights and international human rights law, said Perryman.

Perryman said he’s open to claims by law enforcement that certain aspects of the CBSA investigation techniques need to be kept confidential to be effective but said there needs to be sufficient transparency and oversight.

“This claim of racial profiling as a legitimate technique is something that we’ve seen police rely on initially in Canada. And when the full spectrum of that racial profiling became public, we decided as a society that it was not a legitimate law enforcement tool,” he said.

“We’ve taken steps to end that type of racial profiling domestically. I’m not completely hostile to that argument, but I think it’s one that has to be approached with some degree of scrutiny and care.”

The border agency said travellers who are refused boarding may file a complaint in writing using the CBSA web form or by mail to its recourse directorate.

Source: How AI is helping Canada keep some people out of the country. And why there are those who say it’s a problem

This company adopted AI. Here’s what happened to its human workers

This is a really interesting study. Given that it involved call centres and customer support, IRCC, ESDC, CRA and others should be studying this example of how to improve productivity and citizen service:

Lately, it’s felt like technological change has entered warp speed. Companies like OpenAI and Google have unveiled new Artificial Intelligence systems with incredible capabilities, making what once seemed like science fiction an everyday reality. It’s an era that is posing big, existential questions for us all, about everything from literally the future of human existence to — more to the focus of Planet Money — the future of human work.

“Things are changing so fast,” says Erik Brynjolfsson, a leading, technology-focused economist based at Stanford University.

Back in 2017, Brynjolfsson published a paper in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called “What Can Machine Learning Do? Workforce Implications.” Now, Brynjolfsson says, “I have to update that paper dramatically given what’s happened in the past year or two.”

Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He’s earned a reputation as a “techno-optimist.” And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy.

Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.

The Rise Of Cyborg Customer Service Reps

The story of this study starts a few years ago, when an unnamed Fortune 500 company — Brynjolfsson and his colleagues have not gotten permission to disclose its identity — decided to adopt an earlier version of OpenAI’s ChatGPT. This AI system is an example of what computer scientists call “generative AI” and also a “Large Language Model,” systems that have crunched a ton of data — especially text — and learned word patterns that enable them to do things like answer questions and write instructions.

This company provides other companies with administrative software. Think like programs that help businesses do accounting and logistics. A big part of this company’s job is helping its customers, mostly small businesses, with technical support.

The company’s customer support agents are based primarily in the Philippines, but also the United States and other countries. And they spend their days helping small businesses tackle various kinds of technical problems with their software. Think like, “Why am I getting this error message?” or like, “Help! I can’t log in!”

Instead of talking to their customers on the phone, these customer service agents mostly communicate with them through online chat windows. These troubleshooting sessions can be quite long. The average conversation between the agents and customers lasts about 40 minutes. Agents need to know the ins and outs of their company’s software, how to solve problems, and how to deal with sometimes irate customers. It’s a stressful job, and there’s high turnover. In the broader customer service industry, up to 60 percent of reps quit each year.

Facing such high turnover rates, this software company was spending a lot of time and money training new staffers. And so, in late 2020, it decided to begin using an AI system to help its constantly churning customer support staff get better at their jobs faster. The company’s goal was to improve the performance of their workers, not replace them.

Now, when the agents look at their computer screens, they don’t only see a chat window with their customers. They also see another chat window with an AI chatbot, which is there to help them more effectively assist customers in real time. It advises them on what to potentially write to customers and also provides them with links to internal company information to help them more quickly find solutions to their customers’ technical problems.

This interactive chatbot was trained by reading through a ton of previous conversations between reps and customers. It has recognized word patterns in these conversations, identifying key phrases and common problems facing customers and how to solve them. Because the company tracks which conversations leave its customers satisfied, the AI chatbot also knows formulas that often lead to success. Think, like, interactions that customers give a 5 star rating. “I’m so sorry you’re frustrated with error message 504. All you have to do is restart your computer and then press CTRL-ALT-SHIFT. Have a blessed day!”
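
A stripped-down sketch of that assist pattern: suggest a reply by finding the most similar highly rated past exchange. The company’s real system is a large language model rather than the crude word-overlap used here, and the example exchanges are invented.

```python
# Hedged sketch of the assist pattern described above: suggest a reply by
# retrieving the most similar *highly rated* past exchange. Similarity here is
# simple word overlap; the real system is an LLM, and these exchanges are made up.

RATED_EXCHANGES = [
    {"customer": "I keep getting error 504 when I log in", "rating": 5,
     "agent_reply": "Sorry about that! Error 504 usually clears after restarting the app."},
    {"customer": "How do I export my invoices to CSV", "rating": 4,
     "agent_reply": "You can export invoices under Reports > Export > CSV."},
    {"customer": "Why was I charged twice this month", "rating": 2,
     "agent_reply": "Please contact billing."},
]

def suggest_reply(customer_message: str, min_rating: int = 4) -> str:
    words = set(customer_message.lower().split())
    candidates = [e for e in RATED_EXCHANGES if e["rating"] >= min_rating]
    best = max(candidates,
               key=lambda e: len(words & set(e["customer"].lower().split())))
    return best["agent_reply"]

# The agent sees this as a *suggestion* in a side panel and decides what to send.
print(suggest_reply("help, error 504 when logging in"))
```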

Equipped with this new AI system, the company’s customer support representatives are now basically part human, part intelligent machine. Cyborg customer reps, if you will.

Lucky for Brynjolfsson, his colleagues, and econ nerds like us at Planet Money, this software company gave the economists inside access to rigorously evaluate what happened when customer service agents were given assistance from intelligent machines. The economists examined the performance of over 5,000 agents, comparing the outcomes of old-school customer reps without AI against new, AI-enhanced cyborg customer reps.

What Happened When This Company Adopted AI

The economists’ big finding: after the software company adopted AI, its customer support representatives became, on average, 14 percent more productive. They were able to resolve more customer issues per hour. That’s huge. The company’s workforce is now much faster and more effective. They’re also, apparently, happier. Turnover has gone down, especially among new hires.

Not only that, the company’s customers are more satisfied. They give higher ratings to support staff. They also generally seem to be nicer in their conversations and are less likely to ask to speak to an agent’s supervisor.

So, yeah, AI seems to really help improve the work of the company’s employees. But what’s even more interesting is that not all employees gained equally from using AI. It turns out that the company’s more experienced, highly skilled customer support agents saw little or no benefit from using it. It was mainly the less experienced, lower-skilled customer service reps who saw big gains in their job performance.

“And what this system did was it took people with just two months of experience and had them performing at the level of people with six months of experience,” Brynjolfsson says. “So it got them up the learning curve a lot faster — and that led to very positive benefits for the company.”

Brynjolfsson says these improvements make a lot of sense when you think about how the AI system works. The system has analyzed company records and learned from highly rated conversations between agents and customers. In effect, the AI chatbot is basically mimicking the company’s top performers, who have experience on the job. And it’s pushing newbies and low performers to act more like them. The machine has essentially figured out the recipe for the magic sauce that makes top performers so good at their jobs, and it’s offering that recipe for the workers who are less good at their jobs.

That’s great news for the company and its customers, as well as the company’s low performers, who are now better at their jobs. But, Brynjolfsson says, it also raises the question: should the company’s top performers be getting paid even more? After all, they’re now not only helping the customers they directly interact with. They’re now also, indirectly, helping all the company’s customers, by modeling what good interactions look like and providing vital source material for the AI.

“It used to be that high-skilled workers would come up with a good answer and that would only help them and their customer,” Brynjolfsson says. “Now that good answer gets amplified and used by people throughout the organization.”

The Big Picture

While Brynjolfsson is cautious, noting that this is one company in one study, he also says one of his big takeaways is that AI could make our economy much more productive in the near future. And that’s important. Productivity gains — doing more in less time — are a crucial component for rising living standards. After years of being disappointed by lackluster productivity growth, Brynjolfsson is excited by this possibility. Not only does AI seem to be delivering productivity gains, it seems to deliver them pretty fast.

“And the fact that we’re getting some really significant benefits suggests that we could have some big benefits over the next few years or decades as these systems are more widely used,” Brynjolfsson says. When machines take over more work and boost our productivity, Brynjolfsson says, that’s generally a great thing. It means that society is getting richer, that the economic pie is getting larger.

At the same time, Brynjolfsson says, there are no guarantees about how this pie will be distributed. Even when the pie gets bigger, there are people who could see their slice get smaller or even disappear. “It’s very clear that it’s not automatic that the bigger pie is evenly shared by everybody,” Brynjolfsson says. “We have to put in place policies, whether it’s in tax policy or the strategy of companies like this one, which make sure the gains are more widely shared.”

Higher productivity is a really important finding. But what’s probably most fascinating about this study is that it adds to a growing body of evidence that suggests that AI could have a much different effect on the labor market than previous waves of technological change.

For the last few decades, we’ve seen a pattern that economists have called “skill-biased technological change.” The basic idea is that so-called “high-skill” office workers have disproportionately benefited from the use of computers and the internet. Things like Microsoft Word and Excel, Google, and so on have made office workers and other high-paid professionals much better at their jobs.

Meanwhile, however, so-called “low-skill” workers, who often work in the service industry, have not benefited as much from new technology. Even worse, this body of research finds, new technology killed many “middle-skill” jobs that once offered non-college-educated workers a shot at upward mobility and a comfortable living in the middle class. In this previous technological era, the jobs that were automated away were those that focused on doing repetitive, “routine” tasks. Tasks that you could provide a machine with explicit, step-by-step instructions how to do. It turned out that, even before AI, computer software was capable of doing a lot of secretarial work, data entry, bookkeeping, and other clerical tasks. And robots, meanwhile, were able to do many tasks in factories. This killed lots of middle class jobs.

The MIT economist David Autor has long studied this phenomenon. He calls it “job polarization” and a “hollowing out” of the middle class. Basically, the data suggests that the last few decades of technological change was a major contributor to increasing inequality. Technology has mostly boosted the incomes of college-educated and skilled workers while doing little for — and perhaps even hurting — the incomes of non-college-educated and low-skilled workers.

Upside Downside

But, what’s interesting is, as Brynjolfsson notes, this new wave of technological change looks like it could be pretty different. You can see it in his new study. Instead of experienced and skilled workers benefiting mostly from AI technology, it’s the opposite. It’s the less experienced and less skilled workers who benefit the most. In this customer support center, AI improved the know-how and intelligence of those who were new at the job and those who were lower performers. It suggests that AI could benefit those who were left behind in the previous technological era.

“And that might be helpful in terms of closing some of the inequality that previous technologies actually helped amplify,” Brynjolfsson says. So one benefit of intelligent machines is — maybe — they will improve the know-how and smarts of low performers, thereby reducing inequality.

But — and Brynjolfsson seemed a bit skeptical about this — it’s also possible that AI could lower the premium on being experienced, smart, or knowledgeable. If anybody off the street can now come in and — augmented by a machine — start doing work at a higher level, maybe the specialized skills and intelligence of people who were previously in the upper echelon become less valuable. So, yeah, AI could reduce inequality by bringing the bottom up. But it could also reduce inequality by bringing the top and middle down, essentially de-skilling a whole range of occupations, making them easier for anyone to do and thus lowering their wage premium.

Of course, it’s also possible that AI could end up increasing inequality even more. For one, it could make the Big AI companies, which own these powerful new systems, wildly rich. It could also empower business owners to replace more and more workers with intelligent machines. And it could kill jobs for all but the best of the best in various industries, who keep their jobs because maybe they’re superstars or because maybe they have seniority. Then, with AI, these workers could become much more productive, and so their industries might need fewer of these types of jobs than before.

The effects of AI, of course, are still very much being studied — and these systems are evolving fast — so this is all just speculation. But it does look like AI may have different effects than previous technologies, especially because machines are now more capable of doing “non-routine” tasks. Previously, as stated, it was only “routine” tasks that proved to be automatable. But, now, with AI, you don’t have to program machines with specific instructions. They are much more capable of figuring out things on the fly. And this machine intelligence could upend much of the previous thinking on which kinds of jobs will be affected by automation.

Source: This company adopted AI. Here’s what happened to its human workers

Government ‘hackathon’ to search for ways to use AI to cut asylum backlog

For all the legitimate worries about AI and algorithms, many forget that human systems have similar biases, along with the additional problem of inconsistency (see Kahneman’s Noise). Given the numbers involved, it would be irresponsible not to develop these tools, while taking steps to avoid bias. And I think we need to get away from the mindset that every case is unique, as many, if not most, have more commonalities than differences:

The Home Office plans to use artificial intelligence to reduce the asylum backlog, and is launching a three-day hackathon in the search for quicker ways to process the 138,052 undecided asylum cases.

The government is convening academics, tech experts, civil servants and business people to form 15 multidisciplinary teams tasked with brainstorming solutions to the backlog. Teams will be invited to compete to find the most innovative solutions, and will present their ideas to a panel of judges. The winners are expected to meet the prime minister, Rishi Sunak, in Downing Street for a prize-giving ceremony.

Inspired by Silicon Valley’s approach to problem-solving, the hackathon will take place in London and Peterborough in May. One possible method of speeding up the processing of asylum claims, discussed in preliminary talks before the event, involves establishing whether AI can be used to transcribe and analyse the Home Office’s huge existing database of thousands of hours of previous asylum interviews, to identify trends.
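
As a sketch of the idea being floated: once interviews are transcribed by any speech-to-text system (stubbed out below), even simple frequency analysis across transcripts can surface recurring themes for caseworkers to review. The transcripts and keyword list here are invented, and nothing about the Home Office’s actual approach is known.

```python
# Sketch of the idea floated above: transcribe interviews (speech-to-text is
# stubbed here), then run simple frequency analysis to surface recurring themes
# for caseworkers. The transcripts and keyword list are invented.
from collections import Counter
import re

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text step; returns canned text here."""
    return "The applicant described threats from a local militia after refusing recruitment."

KEYWORDS = ["militia", "recruitment", "threats", "conversion", "eviction"]

def trend_counts(transcripts: list[str]) -> Counter:
    counts = Counter()
    for text in transcripts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t in KEYWORDS)
    return counts

transcripts = [transcribe(f"interview_{i}.wav") for i in range(3)]
print(trend_counts(transcripts).most_common())
```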

Source: Government ‘hackathon’ to search for ways to use AI to cut asylum backlog

ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Of interest (I am starting to find it useful as an editor):

“Canada’s historical monuments are also symbols of Indigenous genocide.”

“Police brutality in Canada is just as real as in the U.S.”

Those seemed to me like articles that my colleague, Shree Paradkar, a Toronto Star social and racial justice columnist, could have plausibly written. They were provided by an AI chatbot in response to my request for a list of articles by Paradkar.

The problem is that they don’t exist.

“At first blush it might seem easy to associate me with these headlines. As an opinion writer, I even agree with the premise of some of them,” Paradkar wrote to me after I emailed her the list.

“But there are two major red flags. The big one: they’re false. No articles I wrote have these headlines. And two, they either bludgeon nuance (the first headline) or summarize what I quote other people saying and what I write in different articles into one piece,” she said.

Paradkar’s discomfort reflects wider concerns about the abundance of fake references dished out by popular chatbots including ChatGPT — and worry that with rapidly evolving technology, people may not know how to identify false information. 

The use of artificial intelligence chatbots to summarize large volumes of online information is now widely known, and while some school districts have banned AI-assisted research, some educators advocate for the use of AI as a learning tool.

Users may think that one way to verify information from a chatbot is to ask it to provide references. The problem? The citations look real and even come with hyperlinks. But they are usually fake.

In recent months, academics have issued multiple warnings that ChatGPT was making up academic studies, including convincing scientific research abstracts. This came to the attention of Oxford University professor David Wilkinson when a student turned in a paper with a reference to a study that he couldn’t locate, but which was similar to fake references he found on ChatGPT.

It is less well known that media sources provided by chatbots are often fabricated as well. The Guardian recently called attention to the confusion that ensued at the newspaper when a reader inquired about an article that did not appear on The Guardian’s website.

The headline was so consistent with the newspaper’s coverage that staff thought it could have been something the reporter had written in the past. Staff went deep into computer systems to try to track down the article in case it had been deleted. Luckily, before more time was wasted, the reader disclosed that the reference came from ChatGPT.

“The invention of sources is particularly troubling for trusted news organizations,” wrote Chris Moran, The Guardian’s head of editorial innovation.

“It opens up whole new questions about whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place,” said Moran.

Fake media references and fake academic references arise from chatbots for the same reason: “generative” AI chatbots are large language models. They are trained on a huge body of internet data and can reproduce human language. But the important point is that they are not search engines, said Subodha Kumar, professor of statistics, operations and data science at Temple University in Philadelphia.

“The general public using ChatGPT now doesn’t really know how it works. It creates links and references that don’t exist, because it is designed to generate content,” Kumar told me.

“That’s what makes AI beautiful, but also problematic. False references can fool the public and policymakers, so if we have a newspaper article based on a false reference, then we’ll be in big trouble,” he added.
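The distinction Kumar draws, retrieval versus generation, is easy to picture with a deliberately toy sketch; the stored article, the templates and the function names below are all invented for illustration:

```python
import random

# Toy contrast between a search engine and a language model.
KNOWN_ARTICLES = {"meng wanzhou extradition": "An article that actually exists in the index."}

def search_engine(query: str):
    # Retrieval: either the indexed article is found, or nothing comes back.
    return KNOWN_ARTICLES.get(query.lower())

def chatbot(query: str) -> str:
    # Generation: fluent, citation-shaped text is produced even when no source exists,
    # which is how invented references arise.
    templates = [
        "'{q}: what you need to know', Toronto Star, 2021",
        "'Inside {q}', The Guardian, 2020",
    ]
    return random.choice(templates).format(q=query.title())

print(search_engine("a story that was never written"))  # None
print(chatbot("a story that was never written"))        # plausible-looking but fake citation
```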

I wondered if ChatGPT has undergone any modifications since researchers raised concerns about the prevalence of fake citations. The U.S.-based company OpenAI did not reply to my request for clarification. So, like many other curious journalists have done, I interacted with the bot itself.

“Could you give me some articles by Globe and Mail journalist André Picard?” I asked.

“As an AI language model, I do not have access to specific articles or publications by individual authors, including André Picard, as I do not have browsing capabilities or access to the internet. I can, however, provide you with a general format for citing articles,” it replied.

This response appeared to clearly explain ChatGPT’s limitations. I was pleasantly surprised.

But alas, when I asked ChatGPT for a summary of Huawei executive Meng Wanzhou’s legal battle in Canada and asked it to provide sources, the references it provided were all fake.

The hyperlinks were either broken or led to unrelated pages. A note at the bottom said that since it was trained on information up to September 2021, it may not reflect current events. But there was no disclaimer that the references themselves were fake.

“Why are your citations fake?” I asked ChatGPT.

“As an AI language model, I do not generate fake citations deliberately. However, I do not have direct access to external sources or the internet, and I do not provide real-time information … It’s always a good practice to cross-check and verify information from reputable and reliable sources independently, especially for critical or sensitive topics.”

But Kumar said it is very possible for developers to train chatbots to verify the references they generate.

“They don’t have to fundamentally change the concept to be just like Google Search, but the algorithm can be easily improved by giving it one more step: When references or links are created, check for the veracity of the content.”
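The extra step Kumar describes could look something like the following: after the model has produced its answer, pull out any URLs it cited and flag those that do not actually resolve. The regular expression and the flagging convention are my own illustrative choices, not OpenAI's implementation:

```python
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def link_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the cited URL responds successfully."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

def flag_unverified_citations(generated_text: str) -> str:
    """Mark any cited link that does not resolve, instead of presenting it as real."""
    for url in set(URL_PATTERN.findall(generated_text)):
        if not link_resolves(url):
            generated_text = generated_text.replace(url, f"[unverified citation: {url}]")
    return generated_text
```

A real deployment would have to go further and confirm that the page actually matches the claimed title and author, but even this crude pass would catch the broken or invented links described above.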

Kumar added that other companies may soon offer new AI products that provide more reliable references, but as a “first mover” in the field, OpenAI has a special responsibility to address the issue.

OpenAI has said it is aware of the potential of generative AI to spread disinformation. In January, the organization partnered with Stanford University and Georgetown University to release a study forecasting potential misuses of language models for disinformation campaigns.

“For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations,” the study found.

And ChatGPT is only one of a plethora of chatbot products from different companies, including apps that purport to be based on ChatGPT’s open API. I had found the list of my colleague’s fake opinion articles on one such Android app, “AI Chat by GPT.” (ChatGPT doesn’t currently offer a mobile version.)

For Ezra Levant, a conservative Canadian media commentator, the app offered up fake headlines on hot-button issues such as a fake column alleging that global migration will “undermine Canadian sovereignty” and another that Prime Minister Justin Trudeau’s carbon tax is in fact a “wealth tax.”

Paradkar pointed out that the generation of fake stories attributed to real people is particularly dangerous during a time of increasing physical violence and online abuse against journalists worldwide.

“When AI puts out data that is incorrect but plausible, it counts as misinformation. And I fear that it offers ammunition to trolls and bad actors confirming their worst biases and giving them more reason to abuse journalists.”

Source: ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Friedman: Our New Promethean Moment

Friedman is always worth reading for a sense of where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment