ChatGPT summary of reactions to immigration section of Budget 2025

Reasonable summary (still going through the old way as well):

Nice — here’s a compact, source-backed briefing on reactions to the immigration measures in Canada’s 2025 federal budget, organized by the sub-sections you asked for.

Quick framing: the Budget and accompanying 2026–28 Immigration Levels Plan signal sizeable reductions in temporary resident targets (including international students and temporary foreign workers) and cuts to some refugee resettlement streams. Those policy choices are driving the reactions summarized below. (Budget Canada)

Media commentary

  • Overall tone: widespread note of political risk and policy reset — most national outlets frame the changes as a significant scaling back from the high post-pandemic targets and stress both the political logic (housing, service pressure) and economic trade-offs (labour supply). (Global News)
  • Coverage highlights the scale of reductions: reporters emphasize steep drops in international student arrivals (roughly 60% lower in 2025 vs 2024) and large cuts to temporary worker targets. Coverage also flags the government’s claim that arrivals and asylum claims have already fallen this year. (Budget Canada)
  • Some outlets place the budget in an election-cycle context — discussing short-term political calculations vs longer-term economic impact. (Global News)

Business reaction

  • Broad concern from employer and industry groups that lower targets will worsen labour shortages in key sectors (hospitality, restaurants, health care, construction, seasonal work). Many business groups warn the rollbacks send negative signals to investors and could constrain growth. (HCAMag)
  • Sector example: Restaurants Canada called the reductions “incredibly concerning,” pointing to a fall in temporary resident admissions from ~673,650 (2025 baseline) to ~370,000 by 2027 and warned of staffing crises for foodservice. (Restaurants Canada)
  • Some business commentators acknowledge the government’s stated goals (relief on housing and services) but emphasize that tightening labour supply may raise costs and reduce capacity for many SMEs. (HCAMag)

Provincial reactions

  • Mixed responses across provinces: jurisdictions with tight housing/health pressures (and those with political sensitivity on immigration) publicly welcomed the “sustainable” framing, while labour-short provinces and municipalities expressed alarm about workforce impacts. Local leaders in some cities framed the budget as a cautious “sign of confidence” on fiscal matters while flagging labour impacts. (Global News)
  • Provinces that rely heavily on international students and temporary workers (e.g., provinces with big post-secondary or seasonal industries) have highlighted immediate operational concerns for employers, colleges, and municipalities. (Provincial press releases and municipal reaction pieces stress localized impacts.) (Global News)

Education sector

  • Strong, largely negative reaction from universities and college leaders: cuts to international study-permit targets are described as “alarming” and “deep” (reports indicate new international student arrivals were roughly 60% lower in 2025 vs 2024), creating immediate financial and operational stress for institutions and student supports. (University Affairs)
  • Association and sector commentaries note the budget also funds research recruitment (~$1.7B referenced in sector analysis) — so while research investment is welcomed, the near-term loss of fee revenue and campus diversity from fewer international students is a major concern. (University Affairs)

NGOs / civil society

  • Refugee and migrant advocacy groups are highly critical: NGOs (including Migrant Rights Network and the Canadian Council for Refugees) flag sharp cuts to refugee resettlement slots (e.g., big reductions in privately sponsored and government-assisted refugee allocations for 2026) and warn of humanitarian consequences. They also criticise proposed changes that would increase costs/limits on refugee healthcare supports. (Migrant Rights Network)
  • NGOs frame the budget as reversing prior resettlement commitments and argue the cuts are out of step with global resettlement needs; several are calling for immediate policy reversals or mitigations. (Canadian Council for Refugees)

Key numbers & facts (short)

  • Government/Budget document: temporary foreign worker arrivals have already fallen ~50% this year and international student arrivals are roughly 60% lower so far in 2025; asylum claims are down ~1/3. (Government emphasizes “restoring control” / “sustainable levels”.) (Budget Canada)
  • Industry claim (Restaurants Canada): temporary resident admissions projected to fall from ~673,650 in 2025 to ~370,000 by 2027 (sector-level alarm). (Restaurants Canada)
  • NGOs: significant cuts in refugee streams (e.g., privately sponsored refugees from 23,000 in 2025 → 16,000 in 2026; government-assisted refugees reduced to ~13,250 in 2026 in some NGO tallies). (Canadian Council for Refugees)
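A quick arithmetic check on the figures above (my calculation, using the numbers as quoted; the rounding is mine):

```python
# Sanity-check the percentage declines implied by the quoted figures.
baseline_2025 = 673_650   # temporary resident admissions, 2025 baseline (Restaurants Canada)
target_2027 = 370_000     # projected admissions by 2027

drop = 1 - target_2027 / baseline_2025
print(f"Implied decline in temporary resident admissions: {drop:.0%}")  # ~45%

psr_2025, psr_2026 = 23_000, 16_000   # privately sponsored refugee targets (CCR tally)
print(f"Implied cut to privately sponsored refugees: {1 - psr_2026 / psr_2025:.0%}")  # ~30%
```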

Overall read / implications

  • Short term: a mix of political kudos for addressing housing/service pressures and immediate pushback from business, post-secondary institutions, and refugee advocates about labour, revenue and humanitarian impacts. (Global News)
  • Medium term risk: continued sectoral labour shortfalls, reduced international-education revenues for campuses and communities, and reputational/humanitarian questions from refugee/NGO communities unless mitigations are introduced. (HCAMag)

Which do you want next?

Citizenship Oath Self-affirmation Canada Gazette feedback: Using LLM and ChatGPT

Prompted by Nora Sobel of Red River University’s thorough analysis of feedback to the government’s proposal to allow for self-affirmation of the citizenship oath (“citizenship on a click”), as well as my earlier, more informal analysis, it is interesting to compare the respective conclusions.

My take: remarkably good in this example at summarizing the main comments and their relative weighting. ChatGPT’s conclusion, however, reads partly like a general comment calling for “thoughtful implementation that preserves the integrity, symbolism, and inclusiveness of the citizenship oath” while presenting a hybrid option as the most broadly acceptable (i.e., a policy recommendation based on trying to please both groups rather than on the essence of the feedback):

From Sobel:

The analysis identified that a substantial portion of the online comments that supported the new regulations were reflective of specific experiences of the individuals submitting their opinions, especially related to decreasing the time of the citizenship process and increasing flexibility for participating in the citizenship ceremony.

On the other hand, a substantial portion of the online comments that opposed the new regulations were more general and connected to broader political matters, including the importance of the citizenship process, avoiding devaluing Canadian citizenship, respecting Canada, improper process for change in the regulations, improper reasons for change in the regulations, and concerns about the possibility of fraud.

The analysis also uncovered several alternative solutions proposed by participants with distinct positions about the regulations that could build a consensus about this public policy related to the Oath of Citizenship process. Some of the most common proposals were related to identifying a larger number of appropriate authorized officials to be present in the citizenship ceremonies, increasing the options of dates and times offering citizenship ceremonies, and advocating and legislating for paid work time for new Canadian citizens attending the citizenship ceremonies.

In addition, the analysis of the online comments showed another opportunity to build consensus given the common understanding between Canadian-born and not Canadian-born participants that the citizenship ceremony presents a high value for Canada, and that one of the main purposes in this ceremony should be to honour new Canadians.

Google’s LLM:

The provided text contains public feedback and commentary regarding a proposal to alter the process for taking the Canadian Oath of Citizenship, specifically suggesting allowing for self-administered online oaths without an official present. Many respondents support the proposal, citing the potential for reduced waiting times and increased convenience for applicants. Conversely, a significant number of comments strongly oppose the change, arguing it diminishes the importance and solemnity of becoming a citizen and should remain a meaningful in-person ceremony to foster a sense of community and allegiance to Canada. Concerns about potential fraud and the devaluation of Canadian citizenship were also frequently raised by those against the proposed changes.

ChatGPT (only source cited is the Canada Gazette notice, i.e., Sobel’s article not used):

Summary of Canada Gazette Feedback on Proposed Changes to Citizenship Oath Administration

The proposed changes to the Canadian citizenship oath aim to modernize the process by allowing individuals to self-affirm or declare the oath using a computer, without the physical presence of a citizenship official. This initiative is part of a broader effort to streamline administrative procedures, reduce costs, and improve accessibility.

Key Themes from Public and Stakeholder Feedback:

  1. Support for Accessibility and Efficiency: Many respondents welcomed the proposed changes as a positive step toward improving accessibility, especially for individuals in remote areas, those with disabilities, or with scheduling constraints. The digital approach was seen as a cost-effective and time-efficient solution.
  2. Concerns About Integrity and Symbolism: Some stakeholders expressed reservations about the potential erosion of the ceremonial and symbolic aspects of the citizenship oath. Concerns were raised about the importance of the shared experience and the presence of an official to lend solemnity and legitimacy to the process.
  3. Recommendations for Safeguards: Several submissions advocated for built-in safeguards to ensure the authenticity and security of self-administered oaths. Suggestions included secure digital platforms, identity verification protocols, and post-oath follow-ups.
  4. Call for Hybrid Options: There was a general consensus among some stakeholders for offering both virtual and in-person options to cater to diverse preferences and circumstances. Flexibility was emphasized as key to balancing modernization with tradition.
  5. Cultural and Emotional Significance: Feedback underscored the deep cultural and emotional significance of the oath ceremony for new Canadians. Respondents highlighted the importance of preserving a sense of community and recognition in the citizenship process.

Conclusion:

While the majority of feedback reflected support for the proposed modernization efforts, there remains a clear call for thoughtful implementation that preserves the integrity, symbolism, and inclusiveness of the citizenship oath. A hybrid model with robust safeguards may be the most broadly acceptable path forward.

This company adopted AI. Here’s what happened to its human workers

This is a really interesting study. Given that it involved call centres and customer support, IRCC, ESDC, CRA and others should be studying this example of how to improve productivity and citizen service:

Lately, it’s felt like technological change has entered warp speed. Companies like OpenAI and Google have unveiled new Artificial Intelligence systems with incredible capabilities, making what once seemed like science fiction an everyday reality. It’s an era that is posing big, existential questions for us all, about everything from literally the future of human existence to — more to the focus of Planet Money — the future of human work.

“Things are changing so fast,” says Erik Brynjolfsson, a leading, technology-focused economist based at Stanford University.

Back in 2017, Brynjolfsson published a paper in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called “What Can Machine Learning Do? Workforce Implications.” Now, Brynjolfsson says, “I have to update that paper dramatically given what’s happened in the past year or two.”

Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He’s earned a reputation as a “techno-optimist.” And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy.

Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.

The Rise Of Cyborg Customer Service Reps

The story of this study starts a few years ago, when an unnamed Fortune 500 company — Brynjolfsson and his colleagues have not gotten permission to disclose its identity — decided to adopt an earlier version of OpenAI’s ChatGPT. This AI system is an example of what computer scientists call “generative AI” and also a “Large Language Model,” systems that have crunched a ton of data — especially text — and learned word patterns that enable them to do things like answer questions and write instructions.

This company provides other companies with administrative software. Think like programs that help businesses do accounting and logistics. A big part of this company’s job is helping its customers, mostly small businesses, with technical support.

The company’s customer support agents are based primarily in the Philippines, but also the United States and other countries. And they spend their days helping small businesses tackle various kinds of technical problems with their software. Think like, “Why am I getting this error message?” or like, “Help! I can’t log in!”

Instead of talking to their customers on the phone, these customer service agents mostly communicate with them through online chat windows. These troubleshooting sessions can be quite long. The average conversation between the agents and customers lasts about 40 minutes. Agents need to know the ins and outs of their company’s software, how to solve problems, and how to deal with sometimes irate customers. It’s a stressful job, and there’s high turnover. In the broader customer service industry, up to 60 percent of reps quit each year.

Facing such high turnover rates, this software company was spending a lot of time and money training new staffers. And so, in late 2020, it decided to begin using an AI system to help its constantly churning customer support staff get better at their jobs faster. The company’s goal was to improve the performance of their workers, not replace them.

Now, when the agents look at their computer screens, they don’t only see a chat window with their customers. They also see another chat window with an AI chatbot, which is there to help them more effectively assist customers in real time. It advises them on what to potentially write to customers and also provides them with links to internal company information to help them more quickly find solutions to their customers’ technical problems.

This interactive chatbot was trained by reading through a ton of previous conversations between reps and customers. It has recognized word patterns in these conversations, identifying key phrases and common problems facing customers and how to solve them. Because the company tracks which conversations leave its customers satisfied, the AI chatbot also knows formulas that often lead to success. Think, like, interactions that customers give a 5 star rating. “I’m so sorry you’re frustrated with error message 504. All you have to do is restart your computer and then press CTRL-ALT-SHIFT. Have a blessed day!”
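To make the mechanism concrete, here is a deliberately minimal sketch of this style of suggestion system: retrieve the reply from the most similar highly rated past conversation. The data, names and bag-of-words similarity are mine, purely for illustration; the company’s actual tool is a generative model, not a simple lookup.

```python
# Minimal illustration (hypothetical data) of suggesting replies by mining
# highly rated past conversations for the most similar customer message.
from collections import Counter
import math

def bag(text: str) -> Counter:
    """Crude bag-of-words representation."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical history: (customer message, agent reply, satisfaction rating).
history = [
    ("I keep getting error 504 when I log in",
     "Sorry about error 504! Restarting the app usually clears it.", 5),
    ("Why is my invoice total wrong",
     "Let's re-sync your ledger; that recalculates the totals.", 5),
    ("The app is slow", "Have you tried clearing the cache?", 2),
]

def suggest_reply(new_message: str, min_rating: int = 5) -> str:
    """Suggest the reply from the most similar highly rated past conversation."""
    top_rated = [(msg, reply) for msg, reply, rating in history if rating >= min_rating]
    best_msg, best_reply = max(top_rated, key=lambda pair: cosine(bag(new_message), bag(pair[0])))
    return best_reply

print(suggest_reply("getting error 504 on login"))
```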

Equipped with this new AI system, the company’s customer support representatives are now basically part human, part intelligent machine. Cyborg customer reps, if you will.

Lucky for Brynjolfsson, his colleagues, and econ nerds like us at Planet Money, this software company gave the economists inside access to rigorously evaluate what happened when customer service agents were given assistance from intelligent machines. The economists examine the performance of over 5,000 agents, comparing the outcomes of old-school customer reps without AI against new, AI-enhanced cyborg customer reps.

What Happened When This Company Adopted AI

The economists’ big finding: after the software company adopted AI, the average customer support representative became 14 percent more productive. They were able to resolve more customer issues per hour. That’s huge. The company’s workforce is now much faster and more effective. They’re also, apparently, happier. Turnover has gone down, especially among new hires.
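For intuition only, here is a toy version of that comparison. The study itself exploited the staggered rollout with a difference-in-differences design; this sketch, using invented numbers, just contrasts average resolutions per hour across the two groups.

```python
# Toy comparison of resolutions per hour (numbers invented; the paper's
# actual estimate comes from a staggered-rollout difference-in-differences,
# not a raw difference in means).
from statistics import mean

with_ai = [2.2, 2.3, 2.4, 2.2, 2.3]      # hypothetical resolutions/hour, AI-assisted agents
without_ai = [2.0, 1.9, 2.2, 2.1, 1.8]   # hypothetical resolutions/hour, unassisted agents

gain = mean(with_ai) / mean(without_ai) - 1
print(f"Estimated productivity gain: {gain:.0%}")  # 14% on this toy data
```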

Not only that, the company’s customers are more satisfied. They give higher ratings to support staff. They also generally seem to be nicer in their conversations and are less likely to ask to speak to an agent’s supervisor.

So, yeah, AI seems to really help improve the work of the company’s employees. But what’s even more interesting is that not all employees gained equally from using AI. It turns out that the company’s more experienced, highly skilled customer support agents saw little or no benefit from using it. It was mainly the less experienced, lower-skilled customer service reps who saw big gains in their job performance.

“And what this system did was it took people with just two months of experience and had them performing at the level of people with six months of experience,” Brynjolfsson says. “So it got them up the learning curve a lot faster — and that led to very positive benefits for the company.”

Brynjolfsson says these improvements make a lot of sense when you think about how the AI system works. The system has analyzed company records and learned from highly rated conversations between agents and customers. In effect, the AI chatbot is basically mimicking the company’s top performers, who have experience on the job. And it’s pushing newbies and low performers to act more like them. The machine has essentially figured out the recipe for the magic sauce that makes top performers so good at their jobs, and it’s offering that recipe for the workers who are less good at their jobs.

That’s great news for the company and its customers, as well as the company’s low performers, who are now better at their jobs. But, Brynjolfsson says, it also raises the question: should the company’s top performers be getting paid even more? After all, they’re now not only helping the customers they directly interact with. They’re now also, indirectly, helping all the company’s customers, by modeling what good interactions look like and providing vital source material for the AI.

“It used to be that high-skilled workers would come up with a good answer and that would only help them and their customer,” Brynjolfsson says. “Now that good answer gets amplified and used by people throughout the organization.”

The Big Picture

While Brynjolfsson is cautious, noting that this is one company in one study, he also says one of his big takeaways is that AI could make our economy much more productive in the near future. And that’s important. Productivity gains — doing more in less time — are a crucial component for rising living standards. After years of being disappointed by lackluster productivity growth, Brynjolfsson is excited by this possibility. Not only does AI seem to be delivering productivity gains, it seems to deliver them pretty fast.

“And the fact that we’re getting some really significant benefits suggests that we could have some big benefits over the next few years or decades as these systems are more widely used,” Brynjolfsson says. When machines take over more work and boost our productivity, Brynjolfsson says, that’s generally a great thing. It means that society is getting richer, that the economic pie is getting larger.

At the same time, Brynjolfsson says, there are no guarantees about how this pie will be distributed. Even when the pie gets bigger, there are people who could see their slice get smaller or even disappear. “It’s very clear that it’s not automatic that the bigger pie is evenly shared by everybody,” Brynjolfsson says. “We have to put in place policies, whether it’s in tax policy or the strategy of companies like this one, which make sure the gains are more widely shared.”

Higher productivity is a really important finding. But what’s probably most fascinating about this study is that it adds to a growing body of evidence that suggests that AI could have a much different effect on the labor market than previous waves of technological change.

For the last few decades, we’ve seen a pattern that economists have called “skill-biased technological change.” The basic idea is that so-called “high-skill” office workers have disproportionately benefited from the use of computers and the internet. Things like Microsoft Word and Excel, Google, and so on have made office workers and other high-paid professionals much better at their jobs.

Meanwhile, however, so-called “low-skill” workers, who often work in the service industry, have not benefited as much from new technology. Even worse, this body of research finds, new technology killed many “middle-skill” jobs that once offered non-college-educated workers a shot at upward mobility and a comfortable living in the middle class. In this previous technological era, the jobs that were automated away were those that focused on doing repetitive, “routine” tasks. Tasks that you could provide a machine with explicit, step-by-step instructions for how to do. It turned out that, even before AI, computer software was capable of doing a lot of secretarial work, data entry, bookkeeping, and other clerical tasks. And robots, meanwhile, were able to do many tasks in factories. This killed lots of middle class jobs.

The MIT economist David Autor has long studied this phenomenon. He calls it “job polarization” and a “hollowing out” of the middle class. Basically, the data suggests that the last few decades of technological change was a major contributor to increasing inequality. Technology has mostly boosted the incomes of college-educated and skilled workers while doing little for — and perhaps even hurting — the incomes of non-college-educated and low-skilled workers.

Upside Downside

But, what’s interesting is, as Brynjolfsson notes, this new wave of technological change looks like it could be pretty different. You can see it in his new study. Instead of experienced and skilled workers benefiting mostly from AI technology, it’s the opposite. It’s the less experienced and less skilled workers who benefit the most. In this customer support center, AI improved the know-how and intelligence of those who were new at the job and those who were lower performers. It suggests that AI could benefit those who were left behind in the previous technological era.

“And that might be helpful in terms of closing some of the inequality that previous technologies actually helped amplify,” Brynjolfsson says. So one benefit of intelligent machines is — maybe — they will improve the know-how and smarts of low performers, thereby reducing inequality.

But — and Brynjolfsson seemed a bit skeptical about this — it’s also possible that AI could lower the premium on being experienced, smart, or knowledgeable. If anybody off the street can now come in and — augmented by a machine — start doing work at a higher level, maybe the specialized skills and intelligence of people who were previously in the upper echelon become less valuable. So, yeah, AI could reduce inequality by bringing the bottom up. But it could also reduce inequality by bringing the top and middle down, essentially de-skilling a whole range of occupations, making them easier for anyone to do and thus lowering their wage premium.

Of course, it’s also possible that AI could end up increasing inequality even more. For one, it could make the Big AI companies, which own these powerful new systems, wildly rich. It could also empower business owners to replace more and more workers with intelligent machines. And it could kill jobs for all but the best of the best in various industries, who keep their jobs because maybe they’re superstars or because maybe they have seniority. Then, with AI, these workers could become much more productive, and so their industries might need fewer of these types of jobs than before.

The effects of AI, of course, are still very much being studied — and these systems are evolving fast — so this is all just speculation. But it does look like AI may have different effects than previous technologies, especially because machines are now more capable of doing “non-routine” tasks. Previously, as stated, it was only “routine” tasks that proved to be automatable. But, now, with AI, you don’t have to program machines with specific instructions. They are much more capable of figuring out things on the fly. And this machine intelligence could upend much of the previous thinking on which kinds of jobs will be affected by automation.

Source: This company adopted AI. Here’s what happened to its human workers

ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Of interest (I am starting to find it useful as an editor):

“Canada’s historical monuments are also symbols of Indigenous genocide.”

“Police brutality in Canada is just as real as in the U.S.”

Those seemed to me like articles that my colleague, Shree Paradkar, a Toronto Star social and racial justice columnist, could have plausibly written. They were provided by an AI chatbot in response to my request for a list of articles by Paradkar.

The problem is that they don’t exist.

“At first blush it might seem easy to associate me with these headlines. As an opinion writer, I even agree with the premise of some of them,” Paradkar wrote to me after I emailed her the list.

“But there are two major red flags. The big one: they’re false. No articles I wrote have these headlines. And two, they either bludgeon nuance (the first headline) or summarize what I quote other people saying and what I write in different articles into one piece,” she said.

Paradkar’s discomfort reflects wider concerns about the abundance of fake references dished out by popular chatbots including ChatGPT — and worry that with rapidly evolving technology, people may not know how to identify false information. 

The use of artificial intelligence chatbots to summarize large volumes of online information is now widely known, and while some school districts have banned AI-assisted research, some educators advocate for the use of AI as a learning tool.

Users may think that one way to verify information from a chatbot is to ask it to provide references. The problem? The citations look real and even come with hyperlinks. But they are usually fake.

In recent months, academics have issued multiple warnings that ChatGPT was making up academic studies, including convincing scientific research abstracts. This came to the attention of Oxford University professor David Wilkinson when a student turned in a paper with a reference to a study that he couldn’t locate, but which was similar to fake references he found on ChatGPT.

It is less well known that media sources provided by chatbots are often fabricated as well. The Guardian recently called attention to the confusion that ensued at the newspaper when a reader inquired about an article that did not appear on The Guardian’s website.

The headline was so consistent with the newspaper’s coverage that staff thought it could have been something the reporter had written in the past. Staff went deep into computer systems to try to track down the article in case it was deleted. Luckily, before more time was wasted, the reader disclosed that the reference came from ChatGPT.

“The invention of sources is particularly troubling for trusted news organizations,” wrote Chris Moran, The Guardian’s head of editorial innovation.

“It opens up whole new questions about whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place,” said Moran.

In cases of both fake media and academic references from chatbots, this is because “generative” AI chatbots are large language models. They are trained on a huge body of internet data, and can reproduce human language. But the important point is that they are not search engines, said Subodha Kumar, professor of statistics, operations and data science at Temple University in Philadelphia.

“The general public using ChatGPT now doesn’t really know how it works. It creates links and references that don’t exist, because it is designed to generate content,” Kumar told me.

“That’s what makes AI beautiful, but also problematic. False references can fool the public and policymakers, so if we have a newspaper article based on a false reference, then we’ll be in big trouble,” he added.

I wondered if ChatGPT has undergone any modifications since researchers raised concerns about the prevalence of fake citations. The U.S.-based company OpenAI did not reply to my request for clarification. So like many other curious journalists have done, I interacted with the bot itself.

“Could you give me some articles by Globe and Mail journalist André Picard?” I asked.

“As an AI language model, I do not have access to specific articles or publications by individual authors, including André Picard, as I do not have browsing capabilities or access to the internet. I can, however, provide you with a general format for citing articles,” it replied.

This response appeared to clearly explain ChatGPT’s limitations. I was pleasantly surprised.

But alas, when I asked ChatGPT for a summary of Huawei executive Meng Wanzhou’s legal battle in Canada and asked it to provide sources, the references it provided were all fake.

The hyperlinks were either broken or led to unrelated pages. A note at the bottom said that since it was trained on information up to September 2021, it may not reflect current events. But there was no disclaimer that the references themselves were fake.

“Why are your citations fake?” I asked ChatGPT.

“As an AI language model, I do not generate fake citations deliberately. However, I do not have direct access to external sources or the internet, and I do not provide real-time information … It’s always a good practice to cross-check and verify information from reputable and reliable sources independently, especially for critical or sensitive topics.”

But Kumar said it is very possible for developers to train chatbots to verify the references they generate.

“They don’t have to fundamentally change the concept to be just like Google Search, but the algorithm can be easily improved by giving it one more step: When references or links are created, check for the veracity of the content.”
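As a rough illustration of the extra step Kumar describes (my sketch, not anything OpenAI has implemented), a system could at minimum check that each generated link actually resolves before surfacing it. Note the limits: a live URL still does not prove the cited article exists or says what the model claims.

```python
# Naive post-generation check on model-generated reference URLs.
import urllib.request
import urllib.error

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical model output: one real site, one fabricated reference.
generated_citations = [
    "https://www.theglobeandmail.com/",
    "https://no-such-news-site.example/meng-wanzhou-ruling",  # .example never resolves
]
for url in generated_citations:
    print(url, "->", "reachable" if url_resolves(url) else "broken or fabricated")
```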

Kumar added that other companies may soon offer new AI products that provide more reliable references, but as a “first mover” in the field, OpenAI has a special responsibility to address the issue.

OpenAI has said it is aware of the potential of generative AI to spread disinformation. In January, the organization partnered with Stanford University and Georgetown University to release a study forecasting potential misuses of language models for disinformation campaigns.

“For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations,” the study found.

And ChatGPT is only one out of a plethora of chatbot products from different companies, including apps that purport to be based on ChatGPT’s open API. I had found the list of my colleague’s fake opinion articles on one such Android app, “AI Chat by GPT.” (ChatGPT doesn’t currently offer a mobile version.)

For Ezra Levant, a conservative Canadian media commentator, the app offered up fake headlines on hot-button issues such as a fake column alleging that global migration will “undermine Canadian sovereignty” and another that Prime Minister Justin Trudeau’s carbon tax is in fact a “wealth tax.”

Paradkar pointed out that the generation of fake stories attributed to real people is particularly dangerous during a time of increasing physical violence and online abuse against journalists worldwide.

“When AI puts out data that is incorrect but plausible, it counts as misinformation. And I fear that it offers ammunition to trolls and bad actors confirming their worst biases and giving them more reason to abuse journalists.”

Source: ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Friedman: Our New Promethean Moment

Friedman is always interesting as to where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment

Krauss: Artificially Intelligent Offense?

Of note, yet another concern and issue that needs to be addressed:

…Let’s be clear about this: Valid, empirically derived information is not, in the abstract, either harmful or offensive.

The reception of information can be offensive, and it can, depending upon the circumstances of the listener, potentially result in psychological or physical harm. But precisely because one cannot presume to know all such possible circumstances, following the OpenAI guidelines can instead sanction the censorship of almost any kind of information for fear that someone, somewhere, will be offended.

Even before ChatGPT, this was not a hypothetical worry. Recall the recent firing of a heralded NYT science reporter for using “the N-word” with a group of students in the process of explaining why the use of that word could be inappropriate or hurtful. The argument the NYT editors made was that “intent” was irrelevant. Offense is in the ear of the listener, and that overrides the intent of the speaker or the veracity of his or her argument.

A more relevant example, perhaps, involves the loony guidelines recently provided to editors and reviewers for the journals of the Royal Society of Chemistry to “minimise the risk of publishing inappropriate or otherwise offensive content.” As they describe it, “[o]ffence is a subjective matter and sensitivity to it spans a considerable range; however, we bear in mind that it is the perception of the recipient that we should consider, regardless of the author’s intention [italics mine] … Please consider whether or not any content (words, depictions or imagery) might have the potential to cause offence, referring to the guidelines as needed.”

Moreover, they define offensive content specifically as “Any content that could reasonably offend someone on the basis of their age, gender, race, sexual orientation, religious or political beliefs, marital or parental status, physical features, national origin, social status or disability.”

The mandate against offensiveness propounded by the RSC was taken to another level by the journal Nature Human Behaviour, which indicated that not only would they police language, but they would restrict the nature of scientific research they publish on the basis of social justice concerns about possible “negative social consequences for studied groups.” One can see echoes of both the RSC and Nature actions in the ChatGPT response to my questions.

The essential problem here is removing the obligation, or rather, the opportunity, all of us should have to rationally determine how we respond to potentially offensive content by instead ensuring that any such potentially offensive content may be censored. Intent and accuracy become irrelevant. Veto power in this age of potential victimization is given to the imaginary recipient of information.

Free and open access to information, even information that can cause pain or distress, is essential in a free society. As Christopher Hitchens so often stressed, freedom of speech is primarily important not because it provides an opportunity for speakers to speak out against prevailing winds but because that speech gives listeners or readers the freedom to realize they might want to change their minds.

The problem with the dialogues presented above is that ChatGPT appears to be programmed with a biased perception of what might be offensive or harmful. Moreover, it has been instructed to limit the information it provides to that which its programmers have deemed is neither. What makes this example more than an interesting—or worrying—anecdote is the emerging potential of AI chatbots to further exacerbate already disturbing trends.

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

Source: Artificially Intelligent Offense?

Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines is better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of the need to pay for the computing power needed to make its systems work. 

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

ChatGPT: Remarkably good on immigration and citizenship

With all the coverage of ChatGPT and its ability to answer natural-language questions, I tried out some questions.

Remarkably comprehensive and balanced responses, suggesting that the programmers and others behind it have minimized biases and distortions in the software.

While it currently cannot search the internet (which may be a blessing!), I suspect it may supplant Wikipedia for many requests:

—–

What is multiculturalism?

Multiculturalism is the idea that different cultures and ethnic groups can coexist within a single society. It is based on the belief that diversity is a positive aspect of society and that different cultures can enrich one another. In a multicultural society, different cultural groups are allowed to maintain their own customs and traditions, and there is a general tolerance and acceptance of cultural differences. Multiculturalism is often seen as a way to promote harmony and understanding between different groups within a society.

What are the problems with multiculturalism?

There are several potential problems with multiculturalism. One is that it can lead to the segregation of different cultural groups within a society, with each group living in its own separate community. This can make it difficult for people from different backgrounds to interact and understand one another, and it can create social tensions.

Another problem is that some people argue that multiculturalism can undermine national unity and a sense of shared identity. Critics argue that when different cultural groups are allowed to maintain their own customs and traditions, it can create a fragmented society where people are more loyal to their own cultural group than to the nation as a whole.

Additionally, some people argue that the emphasis on cultural diversity can lead to a lack of recognition of the values and traditions that are common to all members of a society. This can make it difficult to build a sense of shared identity and unity among people from different cultural backgrounds.

Which countries have been most successful in implementing multiculturalism? Which countries have been the least successful?

I’m sorry, but I do not have the information needed to answer that question. As a large language model trained by OpenAI, my knowledge is limited to the text that I have been trained on, which includes a wide range of general knowledge but does not include specific, up-to-date information on current events or recent changes in the policies of different countries. Additionally, I am not able to browse the internet, so I do not have access to the most recent data and statistics on the success of different countries’ multiculturalism policies.

What are the benefits of immigration?

There are many potential benefits of immigration. One is that it can help to fill labor shortages and support economic growth. Immigrants often have different skills and experiences than the native population, and they can bring new ideas and perspectives to a country. This can help to drive innovation and support the growth of new industries.

Immigration can also help to promote cultural diversity and understanding. When people from different backgrounds and cultures come together in a new country, they can learn from one another and gain a deeper understanding of different perspectives and ways of life. This can help to create a more vibrant and dynamic society.

Additionally, immigration can help to address demographic challenges, such as an aging population. In countries with low birth rates and an aging population, immigration can help to maintain a healthy balance between the number of workers and the number of people who are retired or unable to work. This can help to ensure that there are enough workers to support social programs and maintain a strong economy.

What are the disadvantages of immigration?

There are also potential disadvantages of immigration. One is that it can put pressure on a country’s public services and infrastructure, such as schools, hospitals, and public transportation. When a large number of people move to a new country, it can be challenging for the government to provide the necessary resources to support them. This can lead to overcrowding and reduced access to public services for everyone.

Another potential disadvantage is that immigration can lead to job competition and wage stagnation for the native population. When there is a large influx of immigrants into a labor market, it can create competition for jobs, which can lead to lower wages for some workers. This can be particularly challenging for workers with lower levels of education and skills, who may have difficulty competing with immigrants for jobs.

Additionally, immigration can create social tensions and conflict, particularly if there is a perception that immigrants are taking jobs away from the native population or are not assimilating into the country’s culture. This can lead to resentment and hostility between different cultural groups within a society, which can be difficult to address.

What are the advantages of immigrants becoming citizens?

There are several advantages to immigrants becoming citizens. One is that it gives them the right to vote and participate in the political process, which allows them to have a say in the decisions that affect their lives and their communities. Citizenship also gives immigrants the right to hold public office, which can help to ensure that their voices and perspectives are represented in government.

Another advantage of citizenship is that it provides immigrants with greater security and stability. As citizens, they are less likely to be deported or separated from their families, and they have the right to live and work in the country without the threat of losing their legal status. This can give them the freedom to pursue their goals and build their lives without the constant fear of being forced to leave.

Additionally, citizenship can give immigrants access to certain benefits and protections that are not available to non-citizens. For example, in many countries, citizens have the right to access public education and healthcare, and they may be eligible for certain government programs and services. Citizenship can also make it easier for immigrants to travel internationally and to sponsor their family members to join them in the country. Overall, becoming a citizen can provide immigrants with greater opportunities and a sense of belonging in their new country.

Write an article on becoming a Canadian citizen

Becoming a Canadian citizen is a rewarding and exciting experience, but it can also be a complex and challenging process. In this article, we will provide an overview of the steps involved in becoming a Canadian citizen, as well as some tips and advice to help you navigate the process successfully.

The first step in becoming a Canadian citizen is to determine if you are eligible. To be eligible, you must be a permanent resident of Canada, have lived in Canada for at least three out of the last five years, and have a good knowledge of English or French. You must also have good moral character and be willing to swear allegiance to the Queen and to the values of Canada.

If you meet these requirements, the next step is to gather the necessary documentation. This includes proof of your permanent resident status, evidence of your time spent in Canada, and proof of your language skills. You will also need to fill out an application form and pay the required fees.

Once you have submitted your application, it will be reviewed by Immigration, Refugees and Citizenship Canada (IRCC). If your application is approved, you will be invited to attend a citizenship ceremony, where you will take the Oath of Citizenship and become a Canadian citizen.

Becoming a Canadian citizen can bring many benefits, such as the right to vote, the ability to hold public office, and access to certain government programs and services. It can also provide you with a sense of belonging and connection to your new country.

If you are interested in becoming a Canadian citizen, it is important to carefully research the process and to ensure that you have all the necessary information and documentation. You can find more information on the IRCC website, or you can contact the IRCC for assistance. With the right preparation and guidance, you can successfully navigate the process and become a proud Canadian citizen.
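One note on the eligibility test described above: in practice, IRCC counts physical presence in days (1,095 days within the five years before applying), and, as I understand the current rules, days spent in Canada as a temporary resident before becoming a permanent resident count as half-days, up to a 365-day credit. A minimal sketch of that arithmetic, using hypothetical figures:

```python
# Minimal sketch of the citizenship physical-presence test, assuming the
# current IRCC counting rules: 1,095 days of physical presence in the five
# years before applying, with each day spent in Canada as a temporary
# resident before permanent residence counting as a half-day, capped at a
# 365-day credit. The applicant's figures below are hypothetical.

REQUIRED_DAYS = 1095
PRE_PR_CREDIT_CAP = 365

def creditable_days(pr_days: int, pre_pr_days: int) -> float:
    """Total days credited toward the 1,095-day requirement."""
    pre_pr_credit = min(pre_pr_days * 0.5, PRE_PR_CREDIT_CAP)
    return pr_days + pre_pr_credit

# Hypothetical applicant: 900 days as a permanent resident, plus 500 days
# in Canada as a student or worker within the same five-year window.
total = creditable_days(pr_days=900, pre_pr_days=500)
print(f"Creditable days: {total:.0f}")                 # 900 + 250 = 1150
print(f"Meets 1,095-day test: {total >= REQUIRED_DAYS}")
```

For real cases, IRCC’s online physical presence calculator is the authoritative tool; the sketch above only illustrates the arithmetic.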

What is birth tourism?

Birth tourism is the practice of traveling to another country for the purpose of giving birth in order to obtain citizenship for the child. This is typically done in countries that have generous citizenship laws, such as the United States, Canada, and some countries in Europe, where children born on their soil are automatically granted citizenship.

Birth tourism is a controversial practice, as it can allow people to bypass the usual immigration processes and obtain citizenship for their child without having to meet the normal requirements. Critics argue that it undermines the integrity of a country’s citizenship laws and can create a backdoor route to citizenship for people who may not otherwise be eligible. Some countries have introduced measures to crack down on birth tourism, such as requiring parents to provide proof of their ties to the country or imposing additional fees on foreign-born children.

What is employment equity?

Employment equity is a policy or set of policies aimed at promoting equal opportunities and reducing discrimination in the workplace. The goal of employment equity is to ensure that all individuals, regardless of their race, gender, age, or other personal characteristics, have an equal chance to access employment opportunities and to advance in their careers.

In many countries, employment equity programs are designed to address systemic discrimination and to ensure that historically disadvantaged groups, such as women and minorities, have the same opportunities as other groups in the labor market. This can include measures such as setting targets for hiring and promoting members of these groups, providing training and support to help them succeed in the workplace, and addressing workplace policies and practices that may be biased against them.

Overall, the goal of employment equity is to create a more inclusive and diverse workplace, where all individuals have the opportunity to reach their full potential and contribute to the success of the organization.

Krugman: Does ChatGPT Mean Robots Are Coming For the Skilled Jobs?

Of interest. Much of government work is potentially vulnerable to these technologies. Hope IRCC is exploring this and comparable chat systems to address some of the service pressures:

Will robots take away our jobs?

People have been asking that question for an astonishingly long time. The Regency-era British economist David Ricardo added to the third edition of his classic “Principles of Political Economy,” published in 1821, a chapter titled “On Machinery,” in which he tried to show how the technologies of the early Industrial Revolution could, at least initially, hurt workers. Kurt Vonnegut’s 1952 novel “Player Piano” envisaged a near-future America in which automation has eliminated most employment.

At the level of the economy as a whole, the verdict is clear: So far, machines haven’t done away with the need for workers. U.S. workers are almost five times as productive as they were in the early postwar years, but there has been no long-term upward trend in unemployment:

[Chart: Higher productivity hasn’t hurt overall employment. Source: FRED]

That said, technology can eliminate particular kinds of jobs. In 1948 half a million Americans were employed mining coal; the great bulk of those jobs had disappeared by the early 21st century not because we stopped mining coal — the big decline in coal production, in favor first of natural gas and then of renewable energy, started only around 15 years ago — but because strip mining and mountaintop removal made it possible to extract an increasing amount of coal with many fewer workers:

[Chart: Some jobs have largely disappeared. Source: FRED]

It’s true that the jobs that disappear in the face of technological progress have generally been replaced by other jobs. But that doesn’t mean that the process has been painless. Individual workers may not find it easy to change jobs, especially if the new jobs are in different places. They may find their skills devalued; in some cases, as with coal, technological change can uproot communities and their way of life.

This kind of dislocation has, as I said, been a feature of modern societies for at least two centuries. But something new may be happening now.

In the past, the jobs replaced by technology tended to involve manual labor. Machines replaced muscles. On the one hand, industrial robots replaced routine assembly-line work. On the other hand, there has been ever-growing demand for knowledge workers, a term coined by the management consultant Peter Drucker in 1959 for people engaged in nonrepetitive problem solving. Many people, myself included, have said that we’re increasingly becoming a knowledge economy.

But what if machines can take over a large chunk of what we have historically thought of as knowledge work?

Last week the research company OpenAI released — to enormous buzz from tech circles — a program called ChatGPT, which can carry out what look like natural-language conversations. You can ask questions or make requests and get responses that are startlingly clear and even seem well-informed. You can also do fun things — one colleague recently asked for and received an analysis of secular stagnation in sonnet form — but let’s stick with things that might be economically useful.

ChatGPT is only the latest example of technology that seems to be able to carry out tasks that not long ago seemed to require the services not just of human beings but of humans with substantial formal education.

For example, machine translation from one language to another used to be a joke; some readers may have heard the apocryphal tale of the Russian-English translation program that took “the spirit was willing, but the flesh was weak” and ended up with “the vodka was good, but the meat was spoiled.” These days, translation programs may not produce great literature, but they’re adequate for many purposes. And the same is true in many fields.

You can argue that what we often call artificial intelligence isn’t really intelligence. Indeed, it may be a long time before machines can be truly creative or offer deep insight. But then, how much of what human beings do is truly creative or deeply insightful? (Indeed, how much of what gets published in academic journals — a field of endeavor I know pretty well — meets those criteria?)

So quite a few knowledge jobs may be eminently replaceable.

What will this mean for the economy?

It is difficult to predict exactly how A.I. will impact the demand for knowledge workers, as it will likely vary, depending on the industry and specific job tasks. However, it is possible that in some cases, A.I. and automation may be able to perform certain knowledge-based tasks more efficiently than humans, potentially reducing the need for some knowledge workers. This could include tasks such as data analysis, research and report writing. However, it is also worth noting that A.I. and automation may also create new job opportunities for knowledge workers, particularly in fields related to A.I. development and implementation.

OK, I didn’t write the paragraph you just read; ChatGPT did, in response to the question “How will A.I. affect the demand for knowledge workers?” The giveaway, to me at least, is that I still refuse to use “impact” as a verb. And it didn’t explicitly lay out exactly why we should, overall, expect no impact on aggregate employment. But it was arguably better than what many humans, including some people who imagine themselves smart, would have written.

In the long run, productivity gains in knowledge industries, like past gains in traditional industries, will make society richer and improve our lives in general (unless Skynet kills us all). But in the long run, we are all dead, and even before that, some of us may find ourselves either unemployed or earning far less than we expected, given our expensive educations.

Source: Does ChatGPT Mean Robots Are Coming For the Skilled Jobs?