ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Of interest (I am starting to find it useful as an editor):

“Canada’s historical monuments are also symbols of Indigenous genocide.”

“Police brutality in Canada is just as real as in the U.S.”

Those seemed to me like articles that my colleague, Shree Paradkar, a Toronto Star social and racial justice columnist, could have plausibly written. They were provided by an AI chatbot in response to my request for a list of articles by Paradkar.

The problem is that they don’t exist.

“At first blush it might seem easy to associate me with these headlines. As an opinion writer, I even agree with the premise of some of them,” Paradkar wrote to me after I emailed her the list.

“But there are two major red flags. The big one: they’re false. No articles I wrote have these headlines. And two, they either bludgeon nuance (the first headline) or summarize what I quote other people saying and what I write in different articles into one piece,” she said.

Paradkar’s discomfort reflects wider concerns about the abundance of fake references dished out by popular chatbots including ChatGPT — and the worry that, with rapidly evolving technology, people may not know how to identify false information.

The use of artificial intelligence chatbots to summarize large volumes of online information is now widely known, and while some school districts have banned AI-assisted research, some educators advocate for the use of AI as a learning tool.

Users may think that one way to verify information from a chatbot is to ask it to provide references. The problem? The citations look real and even come with hyperlinks. But they are usually fake.

In recent months, academics have issued multiple warnings that ChatGPT was making up academic studies, including convincing scientific research abstracts. This came to the attention of Oxford University professor David Wilkinson when a student turned in a paper with a reference to a study that he couldn’t locate, but which was similar to fake references he found on ChatGPT.

It is less well known that media sources provided by chatbots are often fabricated as well. The Guardian recently called attention to the confusion that ensued at the newspaper when a reader inquired about an article that did not appear on The Guardian’s website.

The headline was so consistent with the newspaper’s coverage that staff thought it could have been something the reporter had written in the past. Staff went deep into computer systems to try to track down the article in case it had been deleted. Luckily, before more time was wasted, the reader disclosed that the reference came from ChatGPT.

“The invention of sources is particularly troubling for trusted news organizations,” wrote Chris Moran, The Guardian’s head of editorial innovation.

“It opens up whole new questions about whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place,” said Moran.

Fake media citations and fake academic references alike arise because “generative” AI chatbots are large language models. They are trained on a huge body of internet data, and can reproduce human language. But the important point is that they are not search engines, said Subodha Kumar, professor of statistics, operations and data science at Temple University in Philadelphia.

“The general public using ChatGPT now doesn’t really know how it works. It creates links and references that don’t exist, because it is designed to generate content,” Kumar told me.

“That’s what makes AI beautiful, but also problematic. False references can fool the public and policymakers, so if we have a newspaper article based on a false reference, then we’ll be in big trouble,” he added.

I wondered if ChatGPT has undergone any modifications since researchers raised concerns about the prevalence of fake citations. The U.S.-based company OpenAI did not reply to my request for clarification. So, like many other curious journalists have done, I interacted with the bot itself.

“Could you give me some articles by Globe and Mail journalist André Picard?” I asked.

“As an AI language model, I do not have access to specific articles or publications by individual authors, including André Picard, as I do not have browsing capabilities or access to the internet. I can, however, provide you with a general format for citing articles,” it replied.

This response appeared to clearly explain ChatGPT’s limitations. I was pleasantly surprised.

But alas, when I asked ChatGPT for a summary of Huawei executive Meng Wanzhou’s legal battle in Canada and asked it to provide sources, the references it provided were all fake.

The hyperlinks were either broken or led to unrelated pages. A note at the bottom said that since it was trained on information up to September 2021, it may not reflect current events. But there was no disclaimer that the references themselves were fake.

“Why are your citations fake?” I asked ChatGPT.

“As an AI language model, I do not generate fake citations deliberately. However, I do not have direct access to external sources or the internet, and I do not provide real-time information … It’s always a good practice to cross-check and verify information from reputable and reliable sources independently, especially for critical or sensitive topics.”

But Kumar said it is very possible for developers to train chatbots to verify the references they generate.

“They don’t have to fundamentally change the concept to be just like Google Search, but the algorithm can be easily improved by giving it one more step: When references or links are created, check for the veracity of the content.”
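The extra step Kumar describes could look something like the sketch below: before a generated citation is shown to the user, check that its link actually resolves and that the page mentions the headline. This is purely illustrative — the function names are my own, and nothing here reflects how OpenAI’s systems actually work.

```python
# A minimal sketch (hypothetical, not OpenAI's pipeline) of a post-generation
# verification step for chatbot citations: cheap offline sanity checks first,
# then a network check that the linked page actually contains the headline.
from urllib.parse import urlparse
from urllib.request import Request, urlopen


def plausible_citation(headline: str, url: str) -> bool:
    """Offline sanity checks: non-empty headline, well-formed http(s) URL."""
    parts = urlparse(url)
    return bool(headline.strip()) and parts.scheme in ("http", "https") and bool(parts.netloc)


def link_supports_headline(headline: str, url: str, timeout: float = 5.0) -> bool:
    """Fetch the page and confirm the headline appears in it.

    Returns False on any network error -- a broken or dead link is exactly
    the failure mode reported for ChatGPT's fabricated references.
    """
    if not plausible_citation(headline, url):
        return False
    try:
        req = Request(url, headers={"User-Agent": "citation-checker/0.1"})
        with urlopen(req, timeout=timeout) as resp:
            body = resp.read(500_000).decode("utf-8", errors="replace")
        return headline.lower() in body.lower()
    except OSError:
        return False
```

A chatbot front end could run such a check on every generated reference and suppress, or flag, any citation whose link fails to resolve — the “one more step” Kumar suggests, without changing the underlying language model at all.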

Kumar added that other companies may soon offer new AI products that provide more reliable references, but as a “first mover” in the field, OpenAI has a special responsibility to address the issue.

OpenAI has said it is aware of the potential of generative AI to spread disinformation. In January, the organization partnered with Stanford University and Georgetown University to release a study forecasting potential misuses of language models for disinformation campaigns.

“For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations,” the study found.

And ChatGPT is only one of a plethora of chatbot products from different companies, including apps that purport to be based on ChatGPT’s open API. I had found the list of my colleague’s fake opinion articles on one such Android app, “AI Chat by GPT.” (ChatGPT doesn’t currently offer a mobile version.)

For Ezra Levant, a conservative Canadian media commentator, the app offered up fake headlines on hot-button issues such as a fake column alleging that global migration will “undermine Canadian sovereignty” and another that Prime Minister Justin Trudeau’s carbon tax is in fact a “wealth tax.”

Paradkar pointed out that the generation of fake stories attributed to real people is particularly dangerous during a time of increasing physical violence and online abuse against journalists worldwide.

“When AI puts out data that is incorrect but plausible, it counts as misinformation. And I fear that it offers ammunition to trolls and bad actors confirming their worst biases and giving them more reason to abuse journalists.”

Source: ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction

Baron: We want objective judges and doctors. Why not journalists too?

Required reading by journalists, would be journalists and j-schools, with broader application including overly activist academics:

Objectivity in journalism has attracted a lot of attention lately. It also is a subject that has suffered from confusion and an abundance of distortion.

I’m about to do something terribly unpopular in my profession these days: Defend the idea.

Let’s step back a bit. First, a dictionary definition of objectivity. This is from Merriam-Webster: “expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations.”

Source: We want objective judges and doctors. Why not journalists too?

Friedman: Our New Promethean Moment

Friedman is always interesting as to where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace,
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment

Adams and Parkin: One issue on which Canadians aren’t polarized — the U.S. 

Quite a remarkable change. I remember the free trade debates:

It is easy to list the political issues that divide Canadians today. Leaders and parties stand far apart on what to do about health care, climate change and firearms, to name but a few. But before we conclude that our politics is more polarized than ever, let’s remember it is possible to overcome even long-standing divisions and find common ground.

U.S. President Joe Biden’s visit to Canada next week brings this into focus. Canada-U.S. relations have been a political flashpoint throughout our history. Typically, one major party was accused of getting too close to the Americans, the other of ignoring the economic benefits that this closeness would bring. Originally, it was the Liberals who sought to strengthen north-south trade while the Tories championed an east-west vision. But by the mid-1980s, the roles had reversed. Views on the United States remained one of the country’s primordial political cleavages, but with the political right now seen as too pro-American.

This dynamic was most evident during the 1988 federal election, fought almost exclusively on free trade. If ever our politics were polarized, it was then. The free trade agreement (FTA) that had been negotiated by Brian Mulroney’s government was supported by 61 per cent of Progressive Conservative party supporters, but by only 21 per cent of Liberals and 17 per cent of those voting NDP. The Mulroney government won re-election despite this heated opposition, and the FTA was ratified — and soon expanded to include Mexico.

At first, acrimony intensified in the early 1990s as the country faced the twin challenges of a recession and a constitutional crisis. But as both of these faded, so did opposition to free trade. By the mid-1990s, more Canadians favoured free trade than opposed it; Liberal supporters in particular became almost as favourable to the policy as Conservatives. By 2000, seven in 10 Canadians favoured the North American Free Trade Agreement (NAFTA), including identical proportions of Liberals and Conservatives, and — for the first time — a majority of those supporting the NDP.

Today, the consensus has solidified. Support for NAFTA stands at 83 per cent, including 82 per cent of Conservatives, 89 per cent of Liberals, and (gasp) 84 per cent of NDP supporters. A similarly strong 88 per cent of Bloc Québécois supporters and 82 per cent of those voting Green favour the policy. Thirty-five years after the country squared off in an epic battle over free trade, it has become a non-issue, attracting close to unanimous support among supporters of every party in the House of Commons.

Opposition to free trade melted away in part because it was accompanied, not by the erasure of differences between the two societies, but by their enhancement. It turned out that economic integration did not lead inexorably to the loss of Canada’s cultural distinctness, as Liberal leader John Turner had warned in 1988. This, in turn, has led to a growing public self-confidence about the Canadian identity, especially among younger Canadians and those on the political left — both of whom have become much less likely than they were a generation ago to say that Canadian culture needs to be protected from outside influences.

The growing differences between the political cultures of the two countries speak to the second reason we are seeing less division in Canada about our relations with the U.S. Those on the political left can no longer accuse Conservatives of being sellouts just because they want to sell more of our products to the Americans. But at the same time, Conservatives must now be wary of criticizing the Liberals for being too anti-American. Canadians’ opinion of the U.S. soured considerably during the presidencies of George W. Bush and Donald Trump and has recovered only partially since Biden’s inauguration. Now is not the time for Pierre Poilievre to hint that Canada’s influence would improve by building closer ties with Washington once Conservatives and Republicans return to office in their respective countries.

Canadians overwhelmingly preferred Biden to Trump in the 2020 election, and he will be warmly welcomed during his visit. But the wider issue of Canada-U.S. relations no longer deeply divides us. Canadians of all political backgrounds have become increasingly wary of the direction in which Americans are headed. We now favour a pragmatic approach, keeping the bridges open to trade, countering buy-American jingoism, mounting joint defence operations to shoot down errant balloons, and otherwise being friendly with our neighbours — but not too friendly.

On some of the biggest issues we have faced, it is possible for Canadians to actually become less polarized than ever before.

Michael Adams is the founder and president of the Environics Institute for Survey Research. Andrew Parkin is the Institute’s executive director.

Source: Adams and Parkin: One issue on which Canadians aren’t polarized — the U.S.

Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines are better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of the need to pay for the computing power needed to make its systems work. 

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

Liberals resist Tory, Bloc push for Quebec language law to rule federally regulated businesses

Just as Quebec zealously guards against and protests federal incursions into areas of provincial responsibility, so should the federal government with respect to federal jurisdiction. One of the rare recent times it is doing so:
Repeated attempts in committee Tuesday by Conservatives and the Bloc Québécois to incorporate elements of Quebec’s new Charter of the French Language in the modernization of the federal Official Languages Act were met with fierce resistance from federal Liberals.

The study of C-13 in the parliamentary committee is still in its early stages, and the Bloc and the Conservatives have made it clear that they are siding with the Quebec government. They want businesses in Quebec, including federally regulated businesses, to comply with the provincial charter, which restricts the right of Quebec anglophones to work and be served in English.

More than once during Tuesday’s meeting, the Bloc and Tories introduced amendments that would lead to Quebec’s new language charter — formerly known as “Bill 96” — to prevail over federal jurisdiction, but they were defeated by the Liberals with the help of the lone NDP MP on the committee.

Source: Liberals resist Tory, Bloc push for Quebec language law to rule federally regulated businesses

Angus Reid: Canadians strongly support COVID-19 test requirement for travellers from China, but also question its efficacy

Of note. 13 percent call the policy racist, perhaps an indicator of the more activist and woke portion of the population (my understanding of the testing requirement is that it is partly due to the unavailability of credible Chinese government data):

China abandoning its COVID zero strategy has caused a ripple of concern around the globe as the world’s second-most populous country faces an unprecedented wave of infections affecting as many as four-in-five people.

In response to rising cases in China, Canada, alongside other countries, set a new requirement this month that travellers from China must produce a negative COVID-19 test prior to takeoff.

Data from the non-profit Angus Reid Institute finds a majority of Canadians supportive of this policy, but unsure if it will be effective at reducing the spread of COVID-19 in their country. Indeed, Canadians who support the policy (77%) outnumber those who are opposed (16%) by nearly five-to-one.

However, those who believe the policy will be effective at reducing COVID-19 infections in Canada (34%) are in the minority. More Canadians believe it will be ineffective (38%) or are unsure (28%). And even among Canadians who support the policy, fewer than half (44%) say they believe it will be effective at preventing the spread of COVID-19.

There are other concerns with this policy. Some, including the Chinese government, have called it “discriminatory”. Others have gone further and called it “racist”. The pandemic has produced plenty of negative side effects, including discrimination and racism experienced by Canadians of Chinese descent. Some worry this new policy of testing travellers from China will rekindle those ugly sentiments. 

One-in-eight (13%) Canadians call the policy racist. However, more (73%) believe it’s not. Canadians who identify as visible minorities are twice as likely to label the policy racist (23%) than those who don’t identify as such (10%). Still, majorities of those who identify as visible minority (62%) and those who don’t (76%) say the policy is not racist.

More Key Findings:

  • Nearly all (94%) of those who oppose the COVID-19 testing policy for travellers from China believe it won’t be effective at reducing the spread of the virus in Canada.
  • One-in-five (19%) Canadians say they are not travelling at all because they are worried about COVID-19. A further 33 per cent say they have approached their recent travel with caution. Two-in-five (41%) are less worried about the risk of COVID-19 when it comes to travel.
  • Two-in-five (37%) of those who have not travelled at all outside of their province since March 2022 say they aren’t travelling because they worry about catching COVID-19.

Source: Canadians strongly support COVID-19 test requirement for travellers from China, but also question its efficacy

Buruma: Does soccer really unite the world? Of course not

As we enter the semi-finals, a good column (I’m rooting for Morocco):

Count on the International Federation of Association Football, better known as FIFA, to come up with a fatuous slogan for the World Cup in Qatar: “Football Unites the World.” An official promotional video has Argentina’s Lionel Messi and Brazil’s Neymar mouthing the words in Spanish and Portuguese, respectively.

Is it true? Does football really unite the world?

Of course not. It does not even unite countries. In Brazil, the team’s yellow-and-green colours have been co-opted by supporters of the recently ousted president Jair Bolsonaro (backed by Neymar), which has annoyed supporters of President Luiz Inacio Lula da Silva (backed by Brazilian striker Richarlison).

The idea that sporting events unite the world is an old obsession, going back to Baron Pierre de Coubertin’s invention of the modern Olympic Games in 1896. Sports, in the Baron’s mind, ought to transcend politics, international tensions, and any other discord. FIFA, too, subscribes to the fantasy of a world without politics, where conflict is confined to the playing fields.

In fact, the choice to hold this year’s tournament in Qatar, a tiny oil-rich sheikdom with no footballing history or evidence of robust local interest in the game, is itself political. The country’s ruling emir craved the prestige of a global event, and Qatar had the money to buy it. Thick envelopes are said to have been slipped into the pockets of voting FIFA officials. And FIFA was richly rewarded for giving broadcasting rights to Al Jazeera, Qatar’s state-funded TV channel.

FIFA, evidently, was unbothered by Qatar’s poor human-rights record, abuse of immigrant workers, and laws that punish homosexuality – certainly no more than even dodgier venues of the past. After all, the last World Cup tournament was held in Russia, which was already under international sanctions.

But the fact that tiny Qatar, the first Middle Eastern country to host the World Cup tournament, wields such clout, shows how much power has shifted in recent times. FIFA, like the International Olympic Committee, always bends to the power of money, making it clear that neither the players nor visiting European dignitaries should wear armbands with the phrase “OneLove.” That expression of support for people’s right to love who and how they want was seen as a political statement, and FIFA cannot allow sports and politics to mix.

Except that they can and they do. It has been perfectly acceptable for Moroccan, Saudi or Qatari fans to express solidarity with the Palestinian cause by waving Palestine’s flag in the World Cup stadiums. Meanwhile, the Dutch Minister for Sport, Conny Helder, could do no more than wear a tiny “OneLove” pin to a match as the Qatari official sitting next to her calmly tied on a Palestinian armband.

Any criticism of human-rights violations in Qatar has been swiftly met with accusations of racism, backed by FIFA’s Swiss president, Gianni Infantino, who reminded fellow Europeans of “3,000 years” of Western imperialism. T-shirts bearing the words “woman” and “freedom” were prohibited as well, lest they irritate the Iranian theocracy, which is being challenged with those slogans at home.

Just as notable is the lack of national unity. Demonstrators in Tehran and other Iranian cities, protesting the regime’s efforts to bask in the glow of its football victories, cheered when their team lost to the United States, of all countries. Most remarkable of all was the refusal of the Iranian players themselves to sing the national anthem before their opening match. They were reportedly warned by the Iranian Revolutionary Guard Corps not to repeat this brave act of defiance.

Then there was the extraordinary defeat of the young German team. Like most national teams, the German side is multiethnic, and when they failed to proceed to the knock-out stage (only because Spain lost to Japan) conservative pundits in Germany blamed it on a lack of the traditional German fighting spirit. Even before this World Cup, the team was attacked in certain right-wing circles and accused of not being truly German.

One of the ironies of modern football is that national teams whip up passions in a kind of carnivalesque performance of patriotic partisanship. But the players themselves are mostly colleagues in club teams all over Europe who usually speak several languages and are often close friends off the field, making them unsuitable avatars for this type of chauvinism. They are members of an extremely wealthy, truly cosmopolitan elite. So, the football stars are, in a sense, united, even if the World Cup unites no one else.

Still, one can understand why FIFA chose its 2022 World Cup slogan. “Money makes the world go ‘round” would have been a little too honest.

Ian Buruma is the author, most recently, of The Churchill Complex: The Curse of Being Special, from Winston and FDR to Trump and Brexit.

Source: Does soccer really unite the world? Of course not

Csernyik: Canada’s overly educated work force is nothing to be proud of

Of note. Valid points on the imbalance, most of the labour pressures are in trades and service jobs:

Several months after receiving my second bachelor’s degree, I found myself working behind an espresso machine once again. When I graduated from high school in 2004, postsecondary education was presented as the ticket to high salaries and trappings of middle-class life such as home ownership.

Instead, my generation graduated from university into a global recession, followed by rising home and living costs and the global COVID-19 pandemic. The conventional wisdom was thrown on its head. Today, with the exception of certain professions, higher education guarantees little to workers.

This week, Statistics Canada released 2021 census results that show our nation has the G7′s most educated work force, with 57.5 per cent of Canadians aged 25 to 64 possessing college or university credentials. The number of workers with a bachelor’s degree or higher has increased by nearly one-fifth since the 2016 census, largely due to highly-credentialed recent immigrants.

While Statscan acknowledges some of this education may be underused, the milestone is presented as a feat worth celebrating. But in our current economic climate, especially when some industries suffer from outsized vacancies – the spinoff effects felt broadly by Canadians – it feels like a vanity metric.

Statscan notes this level of educated workers helps Canada meet labour market needs today and will do so in the future, and that it’s “essential to maintaining our standard of living as a country.” But shortfalls in certain job categories – including those that don’t require postsecondary education – are impacting that standard of living in tangible ways.

Reduced business hours and slower service due to a lack of staff in retail and food service businesses have been problems since the pandemic started, and show no sign of waning. Accommodation and food services, one of the leading job vacancy categories, continues to struggle to fill positions despite help wanted signs blanketing communities across the country.

This is also true in industries such as construction, which lack enough skilled tradespeople to fill roles and are necessary for building new housing and infrastructure. Working-age holders of apprenticeship certificates in fields such as repair technologies and construction and mechanical trades have “stagnated or fallen,” according to Statscan’s findings.

It’s notable that low-wage customer service work and skilled trades, despite their importance to our economy, are still given short shrift in political and public discussion. This leads to little advancement on critical issues such as wages, which can explain, at least in part, why these positions are tough to fill now. But these positions are frequently – and incorrectly – seen as roles people only do if they haven’t gone to university, as though they are jobs of last resort.

Slightly less than 25 per cent of minimum-wage employees had a postsecondary diploma or higher in 1998, but by 2018 that was slightly more than one in three. Having worked in these roles with postsecondary credentials, I’ve been one of these people and have worked with many others. Critically, some of the new immigrants contributing to this mismatch are underemployed – including in minimum-wage jobs. Statscan even acknowledges “the educational qualifications of some foreign-educated workers being underused.”

My first-hand experience has also shown me how little attention is paid to the working conditions, wages and other concerns of sub-white-collar workers in Canada. Yet people not wanting these jobs is often categorized as a failure on the part of workers, rather than a systemic one.

It feels like an offshoot of the credentialism that has been rampant in North American society for years. This has led to headline-making grade inflation in high schools, which has students entering postsecondary programs with puffed-up marks. Then, once at university, there’s a mismatch between classes and programs available and what’s needed in the work force.

Skills gaps are high in all industries – an average of 56.1 per cent of employees are not proficient enough to do their job, according to Statscan. But the gaps surge to nearly 80 per cent in accommodation and food services, and 67.8 per cent in retail trade, two categories that employ millions of Canadians, but which are often left out of the skills and training discussion in favour of more white-collar pursuits such as computer science.

For many workers in this country, the earnings power education is supposed to create isn’t the case. That’s why attention should be turned to what can be done in fields such as retail, food services and skilled trades in order to fill the positions that help keep our country running. This involves everything from living wages, to housing affordability initiatives – so workers can afford to live in the communities where they work – to shedding societal stigmas about these careers.

As COVID-19 recedes, there’s an opportunity to review our perspective on credentialism and, more critically, a need. Metrics such as being the most educated work force look good on paper. But as labour shortages disrupt day-to-day Canadian life, those metrics feel hollow and, at worst, like a distraction from finding solutions for increasing employment in industries that don’t get enough thoughtful care and consideration from policy makers and the public.

Let’s get these sorted out instead of throwing our caps in the air.

Rob Csernyik is a freelance journalist who is writing a book about minimum-wage work.

Source: Canada’s overly educated work force is nothing to be proud of

The disappeared: Ukrainians plead for answers on family members forcefully taken to Russia

Yet another series of war crimes and brutality:

It’s been nearly seven months since Anna Zaitseva and her toddler last came under bombardment by the Russian military in a shelter beneath Ukraine’s Azovstal steel plant – and her young son still cannot fall asleep until she holds her hands over his eyes.

“He’s developed a habit. When he’s trying to sleep, he takes my hands and puts them onto his face to cover it,” Ms. Zaitseva, 25, said in an interview.

The gesture mimics how she used to protect her son, Svyatoslav, as pieces of the bomb shelter’s ceiling rained down on them under the Azovstal steel complex in Mariupol in southeastern Ukraine.

Ms. Zaitseva was one of numerous civilians trapped there for 65 days before a safe-passage operation conducted by the Red Cross this spring.

Now a refugee in Berlin, she travelled to the Halifax International Security Forum this weekend to draw attention to the huge numbers of Ukrainian civilians and soldiers forcefully taken to Russia where they have all but disappeared.

Her husband, Kirillo Zaitsev, 23, was a steel worker turned Azov Regiment soldier. He was one of the last group of Ukrainian fighters holding out in the Azovstal complex until their surrender in mid-May.

Mr. Zaitsev was taken prisoner by the Russians and his wife has not heard from him since. She presumes he’s in a prison camp in Russia, where, by all accounts, Ukrainians are being mistreated and where, she fears, Moscow is failing to live up to the Geneva Convention on the treatment of prisoners of war.

She said photos of Ukrainian soldiers imprisoned in Russia show how they have lost significant amounts of weight; accounts of the conditions say the jailed troops lack access to proper food, water and medicine. “They are trying to kill them physically and kill their morale.”

Olga Stefanishyna, Ukraine’s deputy prime minister for European and Euro-Atlantic integration, told journalists at the Halifax forum that Kyiv estimates 1.5 million Ukrainian women and children have been “forcefully displaced” to Russia.

“We do not have any access to information on where they live or under what conditions,” she said. These Ukrainians are deprived of “any access to communications” that would enable them to talk to those back in Ukraine.

She could not provide an estimate on how many thousands of Ukrainian soldiers such as Kirillo Zaitsev have been taken as prisoners to Russia.

Ms. Zaitseva, who was a French teacher before the war, still copes with post-traumatic stress disorder as well as a concussion from a blast caused by Russia’s bombardment of the steel plant. She was caught in one attack while in a makeshift kitchen one floor above the bomb shelter where she was mixing baby formula for her son and heating it by candle.

Ms. Zaitseva says her breast milk stopped from the stress of the siege and she believes her son would not have lived through the ordeal if soldiers hadn’t discovered a cache of infant formula.

After leaving the steel plant in late April, she and her son and parents were taken to a Russian “filtration camp” where she says she was forced to strip naked and was interrogated by agents from Moscow’s Federal Security Service because she was the wife of an Azov Regiment soldier. The unit has a history of far-right leanings but is now part of the Ukrainian army.

“They told me to take off all my clothing and they were touching me everywhere,” Ms. Zaitseva said.

“They took our phones and downloaded all of the data. They told me to tell the truth otherwise I could be killed.”

She said she believes the only reason she was allowed to go free from the Russian filtration camp was because representatives of the Red Cross and United Nations had accompanied her there.

Ms. Zaitseva said civilians hiding in the labyrinthine steel plant were chronically short of food and forced to use rain and melted snow for water. A lack of sufficient power meant they had to live in complete darkness for 12 hours a day. The Soviet-era bomb shelter was plagued by high levels of humidity and she had bedsores from sleeping on makeshift beds.

People were hungry all the time. Some played games related to food, pretending they were in cafés or supermarkets. Many lost weight. Ms. Zaitseva lost 10 kilograms and her father lost 20. When they emerged after more than two months their skin was pale.

She worries for Ukrainian children forcefully taken to Russia. “Russians are taught to hate Ukrainians and nobody will adopt a Ukrainian child.” Ms. Zaitseva fears these parentless children will end up exploited for human trafficking or worse.

Her story is also part of a new documentary, Freedom on Fire: Ukraine’s Fight For Freedom by Israeli-American director Evgeny Afineevsky, which was screened at the Halifax forum, a gathering of Canadian, American and European leaders, as well as military and security experts from NATO and its allies.

Source: The disappeared: Ukrainians plead for answers on family members forcefully taken to Russia