Angus-Reid: Canadians strongly support COVID-19 test requirement for travellers from China, but also question its efficacy

Of note. 13 percent call the policy racist, perhaps an indicator of the more activist and woke portion of the population (my understanding of the testing requirement is that it is partly due to the unavailability of credible Chinese government data):

China abandoning its COVID zero strategy has caused a ripple of concern around the globe as the world’s second-most populous country faces an unprecedented wave of infections affecting as many as four-in-five people.

In response to rising cases in China, Canada, alongside other countries, set a new requirement this month that travellers from China must produce a negative COVID-19 test prior to takeoff.

Data from the non-profit Angus Reid Institute finds a majority of Canadians supportive of this policy, but unsure if it will be effective at reducing the spread of COVID-19 in their country. Indeed, Canadians who support the policy (77%) outnumber those who are opposed (16%) by nearly five-to-one.

However, those who believe the policy will be effective at reducing COVID-19 infections in Canada (34%) are in the minority. More Canadians believe it will be ineffective (38%) or are unsure (28%). And even among Canadians who support the policy, fewer than half (44%) say they believe it will be effective at preventing the spread of COVID-19.

There are other concerns with this policy. Some, including the Chinese government, have called it “discriminatory”. Others have gone further and called it “racist”. The pandemic has produced plenty of negative side effects, including discrimination and racism experienced by Canadians of Chinese descent. Some worry this new policy of testing travellers from China will rekindle those ugly sentiments. 

One-in-eight (13%) Canadians call the policy racist. However, more (73%) believe it’s not. Canadians who identify as visible minorities are more than twice as likely to label the policy racist (23%) as those who don’t identify as such (10%). Still, majorities of those who identify as visible minority (62%) and those who don’t (76%) say the policy is not racist.

More Key Findings:

  • Nearly all (94%) of those who oppose the COVID-19 testing policy for travellers from China believe it won’t be effective at reducing the spread of the virus in Canada.
  • One-in-five (19%) Canadians say they are not travelling at all because they are worried about COVID-19. A further 33 per cent say they have approached their recent travel with caution. Two-in-five (41%) are less worried about the risk of COVID-19 when it comes to travel.
  • Nearly two-in-five (37%) of those who have not travelled at all outside of their province since March 2022 say they aren’t travelling because they worry about catching COVID-19.

Source: Canadians strongly support COVID-19 test requirement for travellers from China, but also question its efficacy

Buruma: Does soccer really unite the world? Of course not

As we enter the semi-finals, a good column (I’m rooting for Morocco):

Count on the International Federation of Association Football, better known as FIFA, to come up with a fatuous slogan for the World Cup in Qatar: “Football Unites the World.” An official promotional video has Argentina’s Lionel Messi and Brazil’s Neymar mouthing the words in Spanish and Portuguese, respectively.

Is it true? Does football really unite the world?

Of course not. It does not even unite countries. In Brazil, the team’s yellow-and-green colours have been co-opted by supporters of the recently ousted president Jair Bolsonaro (backed by Neymar), which has annoyed supporters of President Luiz Inacio Lula da Silva (backed by Brazilian striker Richarlison).

The idea that sporting events unite the world is an old obsession, going back to Baron Pierre de Coubertin’s invention of the modern Olympic Games in 1896. Sports, in the Baron’s mind, ought to transcend politics, international tensions, and any other discord. FIFA, too, subscribes to the fantasy of a world without politics, where conflict is confined to the playing fields.

In fact, the choice to hold this year’s tournament in Qatar, a tiny oil-rich sheikdom with no footballing history or evidence of robust local interest in the game, is itself political. The country’s ruling emir craved the prestige of a global event, and Qatar had the money to buy it. Thick envelopes are said to have been slipped into the pockets of voting FIFA officials. And FIFA was richly rewarded for giving broadcasting rights to Al Jazeera, Qatar’s state-funded TV channel.

FIFA, evidently, was unbothered by Qatar’s poor human-rights record, abuse of immigrant workers, and laws that punish homosexuality – certainly no more bothered than by even dodgier venues of the past. After all, the last World Cup tournament was held in Russia, which was already under international sanctions.

But the fact that tiny Qatar, the first Middle Eastern country to host the World Cup tournament, wields such clout shows how much power has shifted in recent times. FIFA, like the International Olympic Committee, always bends to the power of money, making it clear that neither the players nor visiting European dignitaries should wear armbands with the phrase “OneLove.” That expression of support for people’s right to love who and how they want was seen as a political statement, and FIFA cannot allow sports and politics to mix.

Except that they can and they do. It has been perfectly acceptable for Moroccan, Saudi or Qatari fans to express solidarity with the Palestinian cause by waving Palestine’s flag in the World Cup stadiums. Meanwhile, the Dutch Minister for Sport, Conny Helder, could do no more than wear a tiny “OneLove” pin to a match as the Qatari official sitting next to her calmly tied on a Palestinian armband.

Any criticism of human-rights violations in Qatar has been swiftly met with accusations of racism, backed by FIFA’s Swiss president, Gianni Infantino, who reminded fellow Europeans of “3,000 years” of Western imperialism. T-shirts bearing the words “woman” and “freedom” were prohibited as well, lest they irritate the Iranian theocracy, which is being challenged with those slogans at home.

Just as notable is the lack of national unity. Demonstrators in Tehran and other Iranian cities, protesting the regime’s efforts to bask in the glow of its football victories, cheered when their team lost to the United States, of all countries. Most remarkable of all was the refusal of the Iranian players themselves to sing the national anthem before their opening match. They were reportedly warned by the Iranian Revolutionary Guard Corps not to repeat this brave act of defiance.

Then there was the extraordinary defeat of the young German team. Like most national teams, the German side is multiethnic, and when they failed to proceed to the knock-out stage (only because Spain lost to Japan) conservative pundits in Germany blamed it on a lack of the traditional German fighting spirit. Even before this World Cup, the team was attacked in certain right-wing circles and accused of not being truly German.

One of the ironies of modern football is that national teams whip up passions in a kind of carnivalesque performance of patriotic partisanship. But the players themselves are mostly colleagues in club teams all over Europe who usually speak several languages and are often close friends off the field, making them unsuitable avatars for this type of chauvinism. They are members of an extremely wealthy, truly cosmopolitan elite. So, the football stars are, in a sense, united, even if the World Cup unites no one else.

Still, one can understand why FIFA chose its 2022 World Cup slogan. “Money makes the world go ‘round” would have been a little too honest.

Ian Buruma is the author, most recently, of The Churchill Complex: The Curse of Being Special, from Winston and FDR to Trump and Brexit.

Source: Does soccer really unite the world? Of course not

Csernyik: Canada’s overly educated work force is nothing to be proud of

Of note. Valid points on the imbalance; most of the labour pressures are in trades and service jobs:

Several months after receiving my second bachelor’s degree, I found myself working behind an espresso machine once again. When I graduated from high school in 2004, postsecondary education was presented as the ticket to high salaries and trappings of middle-class life such as home ownership.

Instead, my generation graduated from university into a global recession, followed by rising home and living costs and the global COVID-19 pandemic. The conventional wisdom was thrown on its head. Today, with the exception of certain professions, higher education guarantees little to workers.

This week, Statistics Canada released 2021 census results that show our nation has the G7′s most educated work force, with 57.5 per cent of Canadians aged 25 to 64 possessing college or university credentials. The number of workers with a bachelor’s degree or higher has increased by nearly one-fifth since the 2016 census, largely due to highly credentialed recent immigrants.

While Statscan acknowledges some of this education may be underused, the milestone is presented as a feat worth celebrating. But in our current economic climate, especially when some industries suffer from outsized vacancies – with spinoff effects felt broadly by Canadians – it feels like a vanity metric.

Statscan notes this level of educated workers helps Canada meet labour market needs today and will do so in the future, and that it’s “essential to maintaining our standard of living as a country.” But shortfalls in certain job categories – including those that don’t require postsecondary education – are impacting that standard of living in tangible ways.

Reduced business hours and slower service due to a lack of staff in retail and food service businesses have been problems since the pandemic started, and show no sign of waning. Accommodation and food services, one of the leading job vacancy categories, continues to struggle to fill positions despite help wanted signs blanketing communities across the country.

This is also true in industries such as construction, which lacks enough skilled tradespeople to fill the roles necessary for building new housing and infrastructure. Working-age holders of apprenticeship certificates in fields such as repair technologies and construction and mechanical trades have “stagnated or fallen,” according to Statscan’s findings.

It’s notable that low-wage customer service work and skilled trades, despite their importance to our economy, are still given short shrift in political and public discussion. This leads to little advancement on critical issues such as wages, which can explain, at least in part, why these positions are tough to fill now. But these positions are frequently – and incorrectly – seen as roles people only do if they haven’t gone to university, as though they are jobs of last resort.

Slightly less than 25 per cent of minimum-wage employees had a postsecondary diploma or higher in 1998, but by 2018 that was slightly more than one in three. Having worked in these roles with postsecondary credentials, I’ve been one of these people and have worked with many others. Critically, some of the new immigrants contributing to this mismatch are underemployed – including in minimum-wage jobs. Statscan even acknowledges “the educational qualifications of some foreign-educated workers being underused.”

My first-hand experience has also shown me how little attention is paid to working conditions, wages and other concerns of sub-white-collar workers in Canada. Yet people not wanting these jobs is often categorized as a failure on the part of workers, rather than a systemic one.

It feels like an offshoot of the credentialism that has been rampant in North American society for years. This has led to headline-making grade inflation in high schools, which has students entering postsecondary programs with puffed-up marks. Then, once at university, there’s a mismatch between classes and programs available and what’s needed in the work force.

Skills gaps are high in all industries – an average of 56.1 per cent of employees are not proficient enough to do their job, according to Statscan. But the gaps surge to nearly 80 per cent in accommodation and food services, and 67.8 per cent in retail trade, two categories that employ millions of Canadians, but which are often left out of the skills and training discussion in favour of more white-collar pursuits such as computer science.

For many workers in this country, the earnings power education is supposed to create hasn’t materialized. That’s why attention should be turned to what can be done in fields such as retail, food services and skilled trades in order to fill the positions that help keep our country running. This involves everything from living wages, to housing affordability initiatives – so workers can afford to live in the communities where they work – to shedding societal stigmas about these careers.

As COVID-19 recedes, there’s an opportunity to review our perspective on credentialism and, more critically, a need. Metrics such as being the most educated work force look good on paper. But as labour shortages disrupt day-to-day Canadian life, those metrics feel hollow and, at worst, like a distraction from finding solutions for increasing employment in industries that don’t get enough thoughtful care and consideration from policy makers and the public.

Let’s get these sorted out instead of throwing our caps in the air.

Rob Csernyik is a freelance journalist who is writing a book about minimum-wage work.

Source: Canada’s overly educated work force is nothing to be proud of

The disappeared: Ukrainians plead for answers on family members forcefully taken to Russia

Yet another series of war crimes and brutality:

It’s been nearly seven months since Anna Zaitseva and her toddler last came under bombardment by the Russian military in a shelter beneath Ukraine’s Azovstal steel plant – and her young son still cannot fall asleep until she holds her hands over his eyes.

“He’s developed a habit. When he’s trying to sleep, he takes my hands and puts them onto his face to cover it,” Ms. Zaitseva, 25, said in an interview.

The gesture mimics how she used to protect her son, Svyatoslav, as pieces of the bomb shelter’s ceiling rained down on them under the Azovstal steel complex in Mariupol in southeastern Ukraine.

Ms. Zaitseva was one of numerous civilians trapped there for 65 days before a safe-passage operation conducted by the Red Cross this spring.

Now a refugee in Berlin, she travelled to the Halifax International Security Forum this weekend to draw attention to the huge numbers of Ukrainian civilians and soldiers forcefully taken to Russia where they have all but disappeared.

Her husband, Kirillo Zaitsev, 23, was a steel worker turned Azov Regiment soldier. He was one of the last group of Ukrainian fighters holding out in the Azovstal complex until their surrender in mid-May.

Mr. Zaitsev was taken prisoner by the Russians and his wife has not heard from him since. She presumes he’s in a prison camp in Russia, where, by all accounts, Ukrainians are being mistreated and where, she fears, Moscow is failing to live up to the Geneva Convention on the treatment of prisoners of war.

She said photos of Ukrainian soldiers imprisoned in Russia show how they have lost significant amounts of weight; accounts of the conditions say the jailed troops lack access to proper food, water and medicine. “They are trying to kill them physically and kill their morale.”

Olga Stefanishyna, Ukraine’s deputy prime minister for European and Euro-Atlantic integration, told journalists at the Halifax forum that Kyiv estimates 1.5 million Ukrainian women and children have been “forcefully displaced” to Russia.

“We do not have any access to information on where they live or under what conditions,” she said. These Ukrainians are deprived of “any access to communications” that would enable them to talk to those back in Ukraine.

She could not provide an estimate on how many thousands of Ukrainian soldiers such as Kirillo Zaitsev have been taken as prisoners to Russia.

Ms. Zaitseva, who was a French teacher before the war, still copes with post-traumatic stress disorder as well as a concussion from a blast caused by Russia’s bombardment of the steel plant. She was caught in one attack while in a makeshift kitchen one floor above the bomb shelter where she was mixing baby formula for her son and heating it by candle.

Ms. Zaitseva says her breast milk stopped from the stress of the siege and she believes her son would not have lived through the ordeal if soldiers hadn’t discovered a cache of infant formula.

After leaving the steel plant in late April, she and her son and parents were taken to a Russian “filtration camp” where she says she was forced to strip naked and was interrogated by agents from Moscow’s Federal Security Service because she was the wife of an Azov Regiment soldier. The unit has a history of far-right leanings but is now part of the Ukrainian army.

“They told me to take off all my clothing and they were touching me everywhere,” Ms. Zaitseva said.

“They took our phones and downloaded all of the data. They told me to tell the truth otherwise I could be killed.”

She said she believes the only reason she was allowed to go free from the Russian filtration camp was because representatives of the Red Cross and United Nations had accompanied her there.

Ms. Zaitseva said civilians hiding in the labyrinthine steel plant were chronically short of food and forced to use rain and melted snow for water. A lack of sufficient power meant they had to live in complete darkness for 12 hours a day. The Soviet-era bomb shelter was plagued by high levels of humidity and she had bedsores from sleeping on makeshift beds.

People were hungry all the time. Some played games related to food, pretending they were in cafés or supermarkets. Many lost weight. Ms. Zaitseva lost 10 kilograms and her father lost 20. When they emerged after more than two months their skin was pale.

She worries for Ukrainian children forcefully taken to Russia. “Russians are taught to hate Ukrainians and nobody will adopt a Ukrainian child.” Ms. Zaitseva fears these parentless children will end up exploited for human trafficking or worse.

Her story is also part of a new documentary, Freedom on Fire: Ukraine’s Fight For Freedom by Israeli-American director Evgeny Afineevsky, which was screened at the Halifax forum, a gathering of Canadian, American and European leaders, as well as military and security experts from NATO and its allies.

Source: The disappeared: Ukrainians plead for answers on family members forcefully taken to Russia

Usher: Viewpoint diversity [at universities]

Sound critique of the methodology used and resulting conclusions:

Last week, the Macdonald-Laurier Institute released a truly bad paper on “viewpoint diversity” at Canadian universities.  How bad was it, you ask?  Really bad.  Icelandic rotting shark bad.  Crystal Pepsi bad.  Final Season of Game of Thrones bad. 

The basic thrust of the paper, co-written by Christopher Dummitt and Zachary Patterson, is that

  • The Canadian professoriate is well to the left of the Canadian public
  • Within the academy, those who describe themselves as being on the right are much more likely to say they “self-censor” or find work a “hostile environment”
  • This is an attack on academic freedom
  • There should therefore, in the name of academic freedom, be a significant government bureaucracy devoted to ensuring that right-wingers are hired more often and feel more at home within the academy.

Dummitt and Patterson are not, of course, the first to note that the academy is somewhat left-leaning.  Back in 2008, MR Nakhaie and Barry D. Adam, both then at the University of Windsor, published a study in the Canadian Journal of Sociology showing that university professors were about three times as likely to have voted NDP in the 2000 general election as the general population (the NDP got about 8.5% of the vote in that election), about as likely to have voted Liberal, and less likely to have voted Bloc, Conservative, or Reform.   Being at a more prestigious institution made a professor less likely to support the NDP, as did being a professor in business or in the natural sciences. 

(This effect of discipline on faculty political beliefs is not a Canadian phenomenon but a global one.  Here is a summary of US research on the issue, and an old but still interesting article from Australia which touches on some of the same issues). 

Anyways, this new study starts out with a survey of professors.  The sample they ended up with was ludicrously biased: 30% from the humanities, 47% from the social sciences and 23% from what they call “STEM” (where are health professions?  I am going to assume they are in STEM).  In fact, humanities professors are 13% of the overall faculty, social science profs 23%, and the rest of the professoriate 64%.  Despite having read the Nakhaie/Adam paper, which explains exactly how to get the data that would allow a re-weighting of the data (you can buy it from Statscan, or you can look up table 3.15 in the CAUT Almanac, which is a couple of years out of date but hardly incorrect), the authors claim that “relatively little information was available for the population of professors in Canada so no weights were developed”.  In other words, either through incompetence or deliberate feigning of ignorance, the authors created a sample which overrepresented the known-to-be most leftist bits of the academy by a factor of two and underrepresented the known-to-be less leftist bits of the academy by a similar factor, and just blithely carried on as if nothing were amiss.

Then – this is the good one if you are familiar with conventions of Canadian political science – they divided respondents into “left-wing” and “right-wing”, partly by asking them to self-locate on a four-point Likert scale which left no space to self-identify as a centrist, and partly by asking them about their views on various issues or how they self-described on a simple left-right scale.  If they voted Green, Bloc, NDP or Liberal they were “left-wing” and if they voted Conservative or People’s Party they were “right-wing”.  Both methods came up with a similar division between “left” and “right” among professors (roughly 88-12, though again that’s a completely unadjusted figure).  Now, generally speaking, no one in Canadian political science forces such a left-right choice, because the Liberals really aren’t particularly left wing.  That’s why there is nearly always room for a “centre” option.  Certainly, Nakhaie and Adam included one – why didn’t Dummitt and Patterson? 

Anyways, having vastly exaggerated the degree of polarization and the pinkness of professorial views, and on this basis declared a “political monoculture”, the authors then go on to note that the embittered right-wing professors appear to have different feelings about the workplace than do the rest of their colleagues.  They are three times more likely, for example, to say their departmental climate is “hostile”, or to say that they “fear negative repercussions” if their political views – specifically, on social justice, gender, and Equity Diversity and Inclusion – were to become known. They are twice as likely to say they have “refrained from airing views or avoided pursuing or publishing research” (which is a hell of a conflation of things if what you’re interested in examining is academic freedom).  On the basis of this, plus a couple of other questions that conflate things like job loss with “missed professional opportunities” or that pose ludicrous hypothetical questions about prioritizing social justice versus academic freedom, they declare a “serious crisis” which has “disturbing implications for the ability of universities to continue to act as bastions of open inquiry and rational thought in modern Canada” which requires things like legislation on academic freedom, and a bunch of things which would effectively ban universities from anti-racism initiatives.

Look, this is a bad study, full stop.  The methodology and question design are so obviously terrible that it seems hard to avoid the conclusion that its main purpose was to confirm the authors’ biases, and clearly whatever editorial/peer review process the Macdonald Laurier Institute uses to oversee these publications needs major work.  But if a result is significant enough, even a bad methodology can find it: might this be such a case?

Maybe.  Part of the problem is that this paper spends a lot of effort conflating “viewpoint diversity” with “party identification diversity”, which is absurd.  I mean, there are countries which allocate academic places based on party identity, but I doubt that those are places where many Canadian academics would want to teach.  Further, on the specific issues where people apparently feel they have a need to “not share their opinions” – issues concerning race and gender – there are, in addition to a censorious left, a lot of bad-faith right-wing concern trolls too, which kind of tempers my ability to share the authors’ concern that this is necessarily a “bad thing”.  And finally, this idea that being an academic means you should be able to say whatever you want without possibility of facing criticism or social ostracism – which I think is implicitly what the authors are suggesting – is a rather significant widening of the concept of academic freedom that wouldn’t find universal acceptance.

I think the most you can say about these issues really is first that viewpoint diversity should be a concern of every department, but that to reduce it to “party identification” diversification or some notion of both-sidesism (anti-vaxxers in virology departments, anyone?) should be seen for the grotesquerie that it is.  Second, yes, society (not just universities) is more polarized around issues like gender and race and finding acceptable and constructive common language in which to talk about these concepts is difficult, but, my dudes, banging on about why someone who happens to have a teaching position is absolved from the hard work of finding that language because of some abstract notion of academic freedom is not helpful. 

And in any event, you could make such points without the necessity of publishing a methodological omnishambles of a report like this one.  Just sayin’.

Source: Viewpoint diversity

How to Change Minds? A Study Makes the Case for Talking It Out.

Interesting study, although hard to apply in practice (except to ensure there are no blowhards!):

Co-workers stuck on a Zoom call, deliberating a new strategy for a crucial project. Roommates at the kitchen table, arguing about how to split utility bills fairly. Neighbors at a city meeting, debating how to pay for street repairs.

We’ve all been there — in a group, trying our best to get everyone on the same page. It’s arguably one of the most important and common undertakings in human societies. But reaching agreement can be excruciating.

“Much of our lives seem to be in this sort of Rashomon situation — people see things in different ways and have different accounts of what’s happening,” Beau Sievers, a social neuroscientist at Dartmouth College, said.

A few years ago, Dr. Sievers devised a study to improve understanding of how exactly a group of people achieves a consensus and how their individual brains change after such discussions. The results, recently published online but not yet peer-reviewed, showed that a robust conversation that results in consensus synchronizes the talkers’ brains — not only when thinking about the topic that was explicitly discussed, but related situations that were not.

The study also revealed at least one factor that makes it harder to reach accord: a group member whose strident opinions drown out everyone else.

“Conversation is our greatest tool to align minds,” said Thalia Wheatley, a social neuroscientist at Dartmouth College who advises Dr. Sievers. “We don’t think in a vacuum, but with other people.”

Dr. Sievers designed the experiment around watching movies because he wanted to create a realistic situation in which participants could show fast and meaningful changes in their opinions. But he said it was surprisingly difficult to find films with scenes that could be viewed in different ways. “Directors of movies are very good at constraining the kinds of interpretations that you might have,” he said.

Reasoning that smash hits typically did not offer much ambiguity, Dr. Sievers focused on films that critics loved but that did not draw blockbuster audiences, including “The Master,” “Sexy Beast” and “Birth,” a 2004 drama in which a mysterious young boy shows up at a woman’s engagement party.

None of the study’s volunteers had seen any of the films before. While lying in a brain scanner, they watched scenes from the various movies without sound, including one from “Birth” in which the boy collapses in a hallway after a tense conversation with the elegantly dressed woman and her fiancé.

After watching the clips, the volunteers answered survey questions about what they thought had happened in each scene. Then, in groups of three to six people, they sat around a table and discussed their interpretations, with the goal of reaching a consensus explanation.

All of the participants were students in the same master of business administration program, and many of them knew each other to varying degrees, which made for lively conversations reflecting real-world social dynamics, the researchers said.

After their chats, the students went back into the brain scanners and watched the clips again, as well as new scenes with some of the same characters. The additional “Birth” scene, for example, showed the woman tucking the little boy into bed and crying.

The study found that the group members’ brain activity — in regions related to vision, sound, attention, language and memory, among others — became more aligned after their conversation. Intriguingly, their brains were synchronized while they watched the scenes they had discussed, as well as the novel ones.

Groups of volunteers came up with different interpretations of the same movie clip. Some groups, for example, thought the woman was the boy’s mother and had abandoned him, whereas others thought they were unrelated. Despite having watched the same clips, the brain patterns from one group to another were meaningfully different, but within each group, the activity was far more synchronized.

The results have been submitted for publication in a scientific journal and are under review.

“This is a bold and innovative study,” said Yuan Chang Leong, a cognitive neuroscientist at the University of Chicago who was not involved in the work.

The results jibe with previous research showing people who share beliefs tend to share brain responses. For example, a 2017 study presented volunteers with one of two opposite interpretations of “Pretty Mouth and Green My Eyes,” a short story by J.D. Salinger. The participants who had received the same interpretation had more aligned brain activity when listening to the story in the brain scanner.

And in 2020, Dr. Leong’s team reported that when watching news footage, brain activity in conservatives looked more like that in other conservatives than that in liberals, and vice versa.

The new study “suggests that the degree of similarity in brain responses depends not only on people’s inherent predispositions, but also the common ground created by having a conversation,” Dr. Leong said.

The experiment also underscored a dynamic familiar to anyone who has been steamrollered in a work meeting: An individual’s behavior can drastically influence a group decision. Some of the volunteers tried to persuade their groupmates of a cinematic interpretation with bluster, by barking orders and talking over their peers. But others — particularly those who were central players in the students’ real-life social networks — acted as mediators, reading the room and trying to find common ground.

The groups with blowhards were less neurally aligned than were those with mediators, the study found. Perhaps more surprising, the mediators drove consensus not by pushing their own interpretations, but by encouraging others to take the stage and then adjusting their own beliefs — and brain patterns — to match the group.

“Being willing to change your own mind, then, seems key to getting everyone on the same page,” Dr. Wheatley said.

Because the volunteers were eagerly trying to collaborate, the researchers said that the study’s results were most relevant to situations, like workplaces or jury rooms, in which people are working toward a common goal.

But what about more adversarial scenarios, in which people have a vested interest in a particular position? The study’s results might not hold for a person negotiating a raise or politicians arguing over the integrity of our elections. And for some situations, like creative brainstorming, groupthink may not be an ideal outcome.

“The topic of conversation in this study was probably pretty ‘safe,’ in that no personally or societally relevant beliefs were at stake,” said Suzanne Dikker, a cognitive neuroscientist and linguist at New York University, who was not involved in the study.

Future studies could zero in on brain activity during consensus-building conversations, she said. This would require a relatively new technique, known as hyperscanning, which can simultaneously measure multiple people’s brains. Dr. Dikker’s work in this arena has shown that personality traits and conversational dynamics like taking turns can affect brain-to-brain synchrony.

Dr. Wheatley agreed. The neuroscientist said she has long been frustrated with her field’s focus on the isolated brain.

“Our brains evolved to be social: We need frequent interaction and conversation to stay sane,” she said. “And yet, neuroscience still putters along mapping out the single brain as if that will achieve a deep understanding of the human mind. This has to, and will, change.”

Source: How to Change Minds? A Study Makes the Case for Talking It Out.

Data sharing should not be an afterthought in digital health innovation

Agree that data sharing is intrinsic to healthcare innovation, not just digital.

In Ontario, Epic provides a measure of integration for providers and patients, which I have found useful for my personal health data (and I like the fact that I see my test results sometimes earlier than my doctors!).

But it is surprising that there is no mention of CIHI (the Canadian Institute for Health Information) and its current role in compiling healthcare data, which I have used to analyse trends in birth tourism:

Within Canada and abroad, many health-care organizations and health authorities struggle to share data effectively with biomedical researchers. The pandemic has accentuated and brought more attention to the need for a better data-sharing ecosystem in biomedical sciences to enable research and innovation.

The siloed and often entirely disconnected data systems suffer from a lack of an interoperable infrastructure and a common policy framework for big data-sharing. These are required not only for rapidly responding to emergency situations such as a global pandemic, but also for addressing inefficiencies in hospitals, clinics and public health organizations. Ultimately this may result in delays in providing critical care and formulating public health interventions. An integrated framework could improve collaboration among practitioners and researchers across disciplines and yield improvements and innovations.

Significant investments and efforts are currently underway in Canada by hospitals and health authorities to modernize health data management. This includes the adoption of electronic health record systems (EHRs) and cloud computing infrastructure. However, these large-scale investments do not consider data-sharing needs to maximize secondary use of health data by research communities.

For example, the adoption of Cerner, a health information technology provider, as an EHR system in British Columbia represents the single largest investment in the history of B.C. health care. It promises improved data-sharing, and yet the framework for data-sharing is non-existent.

Operationalization of a data-sharing system is complex and costly, and runs the risk of being both too little and too much of a regulatory burden. Much can be learned from both the SARS and COVID-19 pandemics in formulating the next steps. For example, a national committee was formed after SARS to propose the creation of a centralized database to share public health data (the National Advisory Committee on SARS). A more recent example is the Pan-Canadian Health Data Strategy, which aims to support the effective creation, exchange and use of critical health data for the benefit of Canadians.

New approaches to help health-care providers and users safely share information are providing innovative solutions that deal with a growing body of data while protecting privacy. Decreasing storage costs, inexpensive processing power and the advance of platform-as-a-service (PaaS) offerings via cloud computing are democratizing and commoditizing analytics in health care. Privacy-enhancing technologies (PETs), backed by national statistical organizations, signal new possibilities for safe data-sharing.

Researchers as major data consumers recognize the importance of sound management practices. While these practices focus on the responsibilities of research institutions, they also promote sharing of biomedical data. Two examples are the National Institutes of Health’s data-sharing policy and Canada’s tri-agency research data management policy. These policies are based on an understanding of what’s needed in infrastructure modernization, in tandem with what’s needed for robust data-sharing and good management policies.

What about hospitals and health authorities as data producers? Who is forging a new structure and policy to direct them across Canada to increase data-sharing capacity?

Public health organizations operate with a heavy burden to comply with a multitude of regulations that affect data-sharing and management. This challenge is compounded by uncertainty surrounding risk quantification for open data-sharing and community-based computing. This uncertainty often translates into the perception of high risk where risk tolerance is low by necessity. As a result, there is a barrier to investing in new infrastructure and, just as importantly, investing in cultural change in management during decision-making processes related to budgeting.

Better understanding of the system is needed before taking the next steps, particularly when looking at outdated infrastructure governed by policies that never anticipated innovation and weren’t designed to accommodate rapid software deployment. Examining and assessing the current state of the Canadian health-care IT infrastructure should include an evaluation of the benefits of broad data-sharing to help foster momentum for biomedical advances. By looking at the IT infrastructure as it stands now, we can see how inaction costs society time, money and patient health.

One approach is to create a federated system. What this means is a common system capable of federated data-sharing and query processing. Federated data-sharing is defined as a series of decentralized, interconnected systems that allow data to be queried or analyzed by trusted participants. These systems require compliance with regulations, including legal compliance; system security and data protection by design; records of processing activities; encryption; managing data subject consent; managing personal data deletion; managing personal data portability; and security of personal data.
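To make the federated idea concrete, here is a minimal sketch (all names and data structures are hypothetical, not any existing Canadian system): a coordinator fans a query out to trusted sites, each site runs the analysis locally so raw records never leave its boundary, and only aggregates come back, with small counts suppressed to limit re-identification risk.

```python
# Hypothetical sketch of federated query processing: each site keeps its
# patient records locally and returns only aggregate counts to a coordinator.

def site_count(records, predicate):
    """Run a query locally; raw records never leave the site."""
    return sum(1 for r in records if predicate(r))

def federated_count(sites, predicate, min_cell_size=5):
    """Fan a query out to trusted sites and combine aggregate results.

    Counts below min_cell_size are suppressed (a common disclosure-control
    practice) to reduce re-identification risk from small cells.
    """
    total = 0
    for records in sites.values():
        count = site_count(records, predicate)
        if count >= min_cell_size:  # small-cell suppression
            total += count
    return total

# Example: three hospitals, each holding its own records locally.
sites = {
    "hospital_a": [{"age": 70, "dx": "flu"}] * 8,
    "hospital_b": [{"age": 65, "dx": "flu"}] * 3,   # suppressed (< 5)
    "hospital_c": [{"age": 72, "dx": "flu"}] * 6,
}
print(federated_count(sites, lambda r: r["dx"] == "flu"))  # → 14
```

A production system would of course layer the compliance requirements listed above (authentication, encryption, consent management, processing records) around this query path; the sketch only shows the decentralized query-and-aggregate shape.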

Because much of Canada’s IT infrastructure for health data management is obsolete, there needs to be significant investment. As well, the underlying infrastructure needs to be rebuilt to communicate externally with digital applications through a security framework for continuous authentication and authorization.

Whatever system is used must be capable of ensuring patient privacy. For example, individuals might be identified by reverse engineering data sets that are cross-referenced. The goal is to significantly minimize ambiguity in assessing the associated risk to allow compliance with privacy protections in law and practice. Widely used frameworks exist that address these issues.
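The re-identification risk mentioned above can be illustrated with a toy example (entirely invented data): two separately released data sets, neither of which pairs names with diagnoses, can be joined on shared quasi-identifiers such as birth year and postal code prefix to recover identities.

```python
# Toy illustration (invented data) of re-identification by cross-referencing:
# the health release has no names, the public list has no diagnoses, but
# joining on shared quasi-identifiers links the two.

health_release = [  # "anonymized" health data: no names
    {"birth_year": 1950, "postal_prefix": "K1A", "dx": "diabetes"},
    {"birth_year": 1988, "postal_prefix": "V6B", "dx": "asthma"},
]
public_list = [     # public data: names, no diagnoses
    {"name": "A. Smith", "birth_year": 1950, "postal_prefix": "K1A"},
    {"name": "B. Jones", "birth_year": 1988, "postal_prefix": "V6B"},
]

def reidentify(health, public):
    """Join two releases on quasi-identifiers, recovering name -> diagnosis."""
    keys = ("birth_year", "postal_prefix")
    index = {tuple(p[k] for k in keys): p["name"] for p in public}
    out = {}
    for h in health:
        key = tuple(h[k] for k in keys)
        if key in index:            # a unique match re-identifies the record
            out[index[key]] = h["dx"]
    return out

print(reidentify(health_release, public_list))
# → {'A. Smith': 'diabetes', 'B. Jones': 'asthma'}
```

This is exactly the attack that the frameworks mentioned above (k-anonymity-style generalization, small-cell suppression and the like) are designed to defeat, by ensuring no combination of quasi-identifiers is unique.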

The market now offers technologies and cost-effective methods that can enable large-scale data-sharing that meets privacy-protection criteria. What is needed is the collective will to proceed: to upgrade obsolete data infrastructure and address policy barriers. Initiatives and applications in other jurisdictions or settings face similar challenges, but our research and development can be accelerated to help enhance data-sharing and improve health outcomes.

Source: Data sharing should not be an afterthought in digital health innovation

Does the federal government fund and support racially discriminating groups and individuals?

Drawing the contrast between the relative kid glove treatment of the “Freedom” Convoy and providing them a further platform in the Rouleau Commission:

Federal funding of hateful messaging has been in the news lately after Prime Minister Justin Trudeau condemned the comments of a senior consultant who was working on a federally funded anti-racist project. As a result of Laith Marouf’s anti-French and anti-Semitic postings, the $130,000 funding of the group, the Community Media Advocacy Centre, was suspended. This happened, despite the federal government apparently knowing about Marouf’s past.

Contrast that to the multi-million dollar federal Public Order Emergency Commission inquiry into the federal government’s temporary use of the 1988 Emergencies Act this past February to remove the Freedom Convoy protesters from downtown Ottawa.

Yet the Freedom Convoy group’s antics, which paralyzed Ottawa’s downtown core for more than three weeks this past January and February, forced authorities to spend millions of dollars in policing costs. The Freedom Convoy leaders now want even more money than could be granted under Treasury Board guidelines, and are asking for $450,000 of their $5-million in donations, currently held in escrow, to be unfrozen. This request may actually be granted, even though it’s a bit rich when such participation brings with it more propaganda for their disruptive causes.

In addition to giving the Freedom Convoy legal standing and funding, the federally funded Rouleau inquiry has made a point of encouraging Freedom Convoy supporters to make written submissions to it.

The federal commission’s lead question on its website asks for those on side of the Freedom Convoy “protest” to describe their experiences. Those who were affected by the “protest” activities are asked secondarily to offer their experiences.

Rouleau is, in effect, placing the Freedom Convoy participants in an important, if not equal, position to those who were affected by the convoy, giving them more space and prominence than they deserve.

Canada’s Public Safety Minister Marco Mendicino, meanwhile, according to highly redacted cabinet minutes, saw the Freedom Convoy protesters falling into two categories: the “harmless and happy with a strong relationship to faith communities,” and the “harder extremists trying to undermine government institutions and law enforcement.” However, it seemed as if the police in Ottawa were siding with the Freedom Convoy protesters during February’s occupation.

Mendicino did not comment on the tacit mix and dynamics of the happy folks and extremists who occupied downtown Ottawa for more than three weeks.

Does this leave federal authorities less than neutral, or too easy on the illegal activities of the Freedom Convoy participants? Do we not remember the federal government’s past racist actions, from residential schools to the internment of Japanese Canadian citizens during the Second World War?

The federal government continues to mount ineffective anti-racist “campaigns” and decries anti-Semitic activities without taking action.

It’s also standing idly by when it comes to Quebec’s racist Bill 21 and its overt discrimination against minorities, including, for instance, teachers who wear hijabs being ousted from their jobs.

The Freedom Convoy protesters appear to be treated as official interveners.

So let’s call the feds for what they are: a bunch of yellow-shrinking-stand-by con artists.

Ken Rubin founded the Ottawa People’s Commission to hear from residents about the harm incurred during last February’s Freedom Convoy siege in Ottawa, though the views expressed here are his own personal ones.

Source: Does the federal government fund and support racially discriminating groups and individuals?

Harris: The future of malicious artificial intelligence applications is here

More on some of the more fundamental risks of AI:

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Russian state-owned facility. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. presidential election. The multimillion-dollar operation would reach 120 million people through Facebook alone.

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia’s “troll factory” — the Russian Internet Research Agency — continues to fuel online radicalization and extremism. Thanks in no small part to their efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI — and despite some positive consequences, it’s also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers created a text generation system called GPT-3. GPT-3 can produce text that’s indistinguishable from human writing — including viral articles, tweets, and other social media posts. GPT-3 was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that AI researchers could follow to radically accelerate AI progress, and build much more capable, humanlike systems. 

But it also opened a Pandora’s box of malicious AI applications. 

Text-generating AIs — or “language models” — can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and be leveraged to create automated bot armies and convincing fake news articles. 

This isn’t a distant future concern: it’s happening already. As early as 2020, Chinese efforts to interfere with Taiwan’s national election involved “the instant distribution of artificial-intelligence-generated fake news to social media platforms.”

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures based on any text prompt, have become reality this year for the first time. As AI-generated content becomes better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write better phishing emails than humans, and have code-writing capabilities that outperform human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises, rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can’t invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia. 

Our defence and security agencies could follow the lead of the U.K.’s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We’ve got a lot to rethink, and now is the time to get started.

Source: The future of malicious artificial intelligence applications is here

Roose: We Need to Talk About How Good A.I. Is Getting

Of note. The world is going to become more complex, and the potential for AI in many fields will continue to grow, with these tools and programs increasingly able to replace, at least in part, professionals including government workers:

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.

OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.

Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”

DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)

What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
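The refinement loop the article describes can be sketched at a very high level. This toy (my illustration, not OpenAI’s method) replaces the trained, text-conditioned neural denoiser of a real system like DALL-E 2 with a simple nudge toward a stand-in target, purely to show the start-from-noise, refine-repeatedly shape of diffusion:

```python
import random

# Toy sketch of the diffusion idea: begin with pure noise and refine it
# step by step. Real models learn `denoise_step` as a neural network
# conditioned on the text prompt; `target` here is a stand-in for
# "an image matching the prompt."

def denoise_step(pixels, target, strength=0.1):
    # Nudge each pixel a fraction of the way toward the target.
    return [p + strength * (t - p) for p, t in zip(pixels, target)]

def generate(target, steps=50, seed=0):
    rng = random.Random(seed)
    pixels = [rng.random() for _ in target]  # start from random noise
    for _ in range(steps):                   # iterative refinement
        pixels = denoise_step(pixels, target)
    return pixels

target = [0.0, 0.5, 1.0]   # stand-in "image" described by the prompt
result = generate(target)
print(all(abs(p - t) < 0.01 for p, t in zip(result, target)))  # → True
```

Each step shrinks the remaining noise by a constant factor, so after enough iterations the random starting point converges on the target; the learned version does the analogous thing guided by the text description rather than by a known answer.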

DALL-E 2 got a lot of attention when it was announced this year, and rightfully so. It’s an impressive piece of technology with big implications for anyone who makes a living working with images — illustrators, graphic designers, photographers and so on. It also raises important questions about what all of this A.I.-generated art will be used for, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.

But art is not the only area where artificial intelligence has been making major strides.

Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.

Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results.

But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible.

Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about.

But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

Or look at what’s happening with A.I.-generated text.

Only a few years ago, A.I. chatbots struggled even with rudimentary conversations — to say nothing of more difficult language-based tasks.

But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Then there’s Google’s LaMDA, an A.I. model that made headlines a couple of months ago when Blake Lemoine, a senior Google engineer, was fired after claiming that it had become sentient.

Google disputed Mr. Lemoine’s claims, and lots of A.I. researchers have quibbled with his conclusions. But take out the sentience part, and a weaker version of his argument — that LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows.

In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.

“It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”

In the past, A.I. progress was mostly obvious only to insiders who kept up with the latest research papers and conference presentations. But recently, Mr. Clark said, even laypeople can sense the difference.

“You used to look at A.I.-generated language and say, ‘Wow, it kind of wrote a sentence,’” Mr. Clark said. “And now you’re looking at stuff that’s A.I.-generated and saying, ‘This is really funny, I’m enjoying reading this,’ or ‘I had no idea this was even generated by A.I.’”

There is still plenty of bad, broken A.I. out there, from racist chatbots to faulty automated driving systems that result in crashes and injury. And even when A.I. improves quickly, it often takes a while to filter down into products and services that people actually use. An A.I. breakthrough at Google or OpenAI today doesn’t mean that your Roomba will be able to write novels tomorrow.

But the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036.

But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

“A.I. systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ms. Cotra told me. “People should take more seriously that A.I. could change things soon, and that could be really scary.”

There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.

There are also tech optimists who believe that A.I. progress is accelerating, and who want it to accelerate faster. Speeding A.I.’s rate of improvement, they believe, will give us new tools to cure diseases, colonize space and avert ecological disaster.

I’m not asking you to take a side in this debate. All I’m saying is: You should be paying closer attention to the real, tangible developments that are fueling it.

After all, A.I. that works doesn’t stay in a lab. It gets built into the social media apps we use every day, in the form of Facebook feed-ranking algorithms, YouTube recommendations and TikTok “For You” pages. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use A.I. to determine who’s eligible for loans, and police departments use it to investigate crimes.

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

It’s a cliché, in the A.I. world, to say things like “we need to have a societal conversation about A.I. risk.” There are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future.

What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.

I think three things could help here.

First, regulators and politicians need to get up to speed.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier.

We’ve seen a few efforts to close the gap — Stanford’s Institute for Human-Centered Artificial Intelligence recently held a three-day “A.I. boot camp” for congressional staff members, for example — but we need more politicians and regulators to take an interest in the technology. (And I don’t mean that they need to start stoking fears of an A.I. apocalypse, Andrew Yang-style. Even reading a book like Brian Christian’s “The Alignment Problem” or understanding a few basic details about how a model like GPT-3 works would represent enormous progress.)

Otherwise, we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings.

Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks. Right now, many of the biggest A.I. models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate P.R. or buried in inscrutable scientific papers.

Downplaying A.I. risks to avoid backlash may be a smart short-term strategy, but tech companies won’t survive long term if they’re seen as having a hidden A.I. agenda that’s at odds with the public interest. And if these companies won’t open up voluntarily, A.I. engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.

In a broad sense, most people think about A.I. narrowly as it relates to us — Will it take my job? Is it better or worse than me at Skill X or Task Y? — rather than trying to understand all of the ways A.I. is evolving, and what that might mean for our future.

I’ll do my part, by writing about A.I. in all its complexity and weirdness without resorting to hyperbole or Hollywood tropes. But we all need to start adjusting our mental models to make space for the new, incredible machines in our midst.

Source: We Need to Talk About How Good A.I. Is Getting