Dave Snow: The groundbreaking Cass Review on transgender care is shifting the debate abroad. Yet it was barely reported by Canadian media  

While I don’t follow this issue closely, this analysis is nevertheless revealing about how the review and related issues are portrayed, particularly by the CBC:

Few Canadian policy issues are as polarizing as youth gender transition. Yet according to my analysis below, most Canadian media spent last month paying little to no attention to one of the most consequential reports on the topic…

Canadian media coverage of the Cass Review

As a major medical report on an issue of considerable Canadian political debate, one would have expected the Cass Review to garner significant Canadian media attention.

To determine how the issue was covered in Canada, I conducted a content analysis of online articles from five mainstream media outlets (The Globe and Mail, National Post, Toronto Star, CBC, and CTV) from the three-week period following the Cass Review’s publication (April 10 – April 30, 2024). These five outlets published a total of 15 stories that mentioned the Cass Review. Given that three stories (all from the National Post) only briefly mentioned it in passing, and one Associated Press story was published in two outlets, this meant a total of 11 unique stories in which the Cass Review featured prominently.

Coverage was dominated by the National Post, which featured seven articles on the Cass Review over an 11-day period between April 10 and April 20. By contrast, there were only two stories featuring the Cass Review in the Toronto Star, and only one each in CBC, CTV, and the Globe. Apart from the one AP story, every article applied the Cass Review to the Canadian context, with six mentioning Alberta’s proposed gender policies. The stories were split between hard news (six) and opinion pieces (five).

Given the National Post’s longstanding focus on youth gender transition, it is not surprising that it gave the Cass Review the most coverage. The other four outlets did not give it as much attention. The only hard news piece in the Toronto Star was a wire story written by the U.S.-based Associated Press. CTV’s one mention of Cass appeared in a piece about Alberta’s proposed gender policies and was only the result of Premier Smith raising it during an interview with the outlet. Meanwhile, the lone CBC article on the review was more of a condemnation than a news report (see below). The Globe and Mail did not feature Cass in a single hard news article, though the report was mentioned in an investigative opinion piece about gender transition in Canada written 16 days after the review was published. In total, only three of the six hard news pieces quoted from the Cass Review extensively, including two lengthy pieces from National Post reporter Sharon Kirkey and one Associated Press piece (published in both the Star and Post).

While there were only five opinion pieces published about the Cass Review, they shared several notable characteristics. All five opinion pieces—three from the National Post and one each in the Toronto Star and The Globe and Mail—portrayed the review positively, including descriptions such as “landmark” and “an exhaustive and rigorous report.” All five were broadly supportive of exercising greater caution around the use of puberty blockers and cross-sex hormones for youth. The Post’s Adam Zivo called such restrictions “a wise approach that Canada should follow,” while the Globe’s Robyn Urback cited multiple studies “exploring the potential long-term effects of puberty blockers and cross-sex hormones on bone density, fertility, sexual function, and cognitive development” (links in original). Moreover, the five opinion writers demonstrated considerable knowledge of the review itself, with Cass quoted or paraphrased a total of 16, 11, eight, four, and three times, respectively.

By contrast, the CBC’s one news story, published five days after the Cass Review, only quoted it twice. The 1,750-word article, “What Canadian doctors say about new U.K. review questioning puberty blockers for transgender youth,” spent more time criticizing the report than describing it. The story did not quote any proponents of the Cass Review, but it did contain over a dozen quotes from three organizations and three Canadian doctors who were supportive of the gender-affirming model. Two of those doctors criticized the Cass Review directly: one wondered if it was “coming from a place of bias” and “trying to create fear around gender-affirming care,” while another called it “politically motivated.”

One sentence in particular, written by the journalist, is indicative of the CBC’s framing: “The Cass Review, while aiming to be an independent assessment, has been criticized as flawed and anti-trans by trans activists in the U.K., and was described as a product of the U.K.’s hostile environment for trans people in the International Journal of Transgender Health” (links in original). The CBC journalist did not specify the difference between an “independent assessment” and “aiming” to be independent.

However, the International Journal of Transgender Health piece cited by the CBC journalist refers to the Cass Review as an example of “Cis-supremacy in the UK’s approach to healthcare for trans children.” It was written by a researcher who specializes in “trans inclusion and Applied Trans Studies” and currently holds a grant for “Building Lived Experience Accountability into Culturally Competent Health and Well-being Assessment for Trans Youth Social Justice.” The CBC did not address whether that piece, which was published nearly a month before the Cass Review’s final report came out, was similarly “aiming” to be independent in its assessment of Cass.

This CBC article has garnered considerable attention. It was criticized by American journalist Jesse Singal as “critically dangerous science miscommunication,” while Hub contributor Peter Menzies described it as “so bereft of balance that one could only conclude it [CBC] had abandoned any pretence of principled journalism in favour of playing the role of ally.” But, to regular observers of the CBC, this story was entirely in keeping with its ongoing approach to covering youth gender transition.

People involved in a march against the teaching of so-called “gender ideology” in schools stand in front of the New Brunswick legislature as they yell across the street at pro-transgender rights counter-protesters in Fredericton, Wednesday, Sept. 20, 2023. Stephen MacGillivray/The Canadian Press.

Canadian coverage of other LGBTQ topics

Given that major Canadian outlets paid limited attention to the Cass Review, apart from the National Post, observers may wonder if this simply reflected a media tendency to ignore LGBTQ issues.

To test for this, I also conducted a search of stories containing terms like “LGBTQ,” “transgender,” and “gender identity” at each of the five outlets during the same period (April 10-30). I then analyzed stories in which LGBTQ issues were the main topic.

Between April 10-30, in addition to the 11 stories about Cass described above, there were 25 stories on the topic of Canadian LGBTQ issues: 14 at the CBC, six at CTV, three at the Globe and Mail, and one each at the Toronto Star and National Post (this includes one identical Canadian Press wire story published by the Globe, Star, and CTV).

However, not one of these additional Canadian stories mentioned the Cass Review. Some of this was understandable, as most CBC and CTV articles, for example, were local stories covering topics such as a proposed LGBTQ community centre in Montreal, legal battles over New Brunswick’s pronoun policy, and a summer camp for LGBTQ children in Newfoundland and Labrador.

However, in addition to these 25 Canadian-focused LGBTQ stories, the five outlets also published 66 internationally-focused LGBTQ stories. None of these mentioned the Cass Review. All were written by foreign wire services.

Thirty stories were published by the National Post, 27 by the Toronto Star, five by CTV, four by the Globe and Mail, and none by the CBC. Nearly 80 percent (52/66) were focused on American politics, but the 14 other stories covered topics such as Swedish and German laws making changing your gender easier, the passage of an anti-LGBTQ law in Iraq, and a Hong Kong trans activist getting a male ID card.

Canadian news outlets’ lack of attention to the Cass Review cannot be explained by a lack of interest in international news on LGBTQ issues. The Toronto Star published 28 hard news stories about international LGBTQ issues during this period, but only one mentioned the Cass Review. Likewise, the Globe and Mail and CTV published four and five international news stories on LGBTQ issues respectively, none of which mentioned the Cass Review.

Consequences for Canada

Three broad conclusions can be drawn from the Canadian media’s coverage of the Cass Review. First, apart from the National Post, hard news coverage of the groundbreaking report was limited. Moreover, this minimal coverage cannot be explained by a lack of interest in LGBTQ issues, as these outlets published many Canadian and international LGBTQ-focused stories about topics far less prominent. Perhaps it is unsurprising that a conservative outlet was more likely to report on a major study that appeared to vindicate arguments associated with conservative political positions. Yet the lack of reporting by other news outlets brings to mind a quote from American journalist Nellie Bowles about the 2020 riots around policing and African Americans in Kenosha, Wisconsin: “How the mainstream media controlled the narrative was by not covering it.”

Second, despite this minimal reporting in Canada, the Cass Review seems to have shifted the parameters of the debate over youth gender transition. The way that it has been covered in international media suggests it will now be far more difficult to paint those who favour a more cautious approach to social transition, puberty blockers, and cross-sex hormones as “transphobic.” Although Canadian hard news coverage of Cass was limited, Canadian opinion pieces demonstrate a similar shift. All five opinion pieces (including one from the Toronto Star) covered the Cass Review favourably. All raised criticisms about the prevalence of the gender-affirming model across Canada. In the recent past, the Globe and Star have not been shy about publishing opinion pieces lauding the gender-affirming model. But no such opinion pieces were published in response to the Cass Review.

Finally, as the debate around youth gender medicine shifts, the CBC appears to have dug in its heels in support of the gender-affirming model. In previous research for The Hub, I documented how the national public broadcaster chose allyship over objectivity in its coverage of youth gender transition. That trend has clearly continued. The CBC has often been criticized in general for progressive bias, but it is difficult to recall another policy issue for which the CBC’s lack of balance has been so strident and so sustained. As scientific and policy debates around youth gender transition evolve, this issue will provide a litmus test for whether CBC can provide objective coverage on contentious social and medical topics. For now, the public broadcaster is failing that test.

Source: Dave Snow: The groundbreaking Cass Review on transgender care is shifting the debate abroad. Yet it was barely reported by Canadian media

What Researchers Discovered When They Sent 80,000 Fake Résumés to U.S. Jobs

Not that surprising and mirrors earlier Canadian studies (Can we avoid bias in hiring practices?):

A group of economists recently performed an experiment on around 100 of the largest companies in the country, applying for jobs using made-up résumés with equivalent qualifications but different personal characteristics. They changed applicants’ names to suggest that they were white or Black, and male or female — Latisha or Amy, Lamar or Adam.

On Monday, they released the names of the companies. On average, they found, employers contacted the presumed white applicants 9.5 percent more often than the presumed Black applicants.

Yet this practice varied significantly by firm and industry. One-fifth of the companies — many of them retailers or car dealers — were responsible for nearly half of the gap in callbacks to white and Black applicants.

Two companies favored white applicants over Black applicants significantly more than others. They were AutoNation, a used car retailer, which contacted presumed white applicants 43 percent more often, and Genuine Parts Company, which sells auto parts including under the NAPA brand, and called presumed white candidates 33 percent more often.

In a statement, Heather Ross, a spokeswoman for Genuine Parts, said, “We are always evaluating our practices to ensure inclusivity and break down barriers, and we will continue to do so.” AutoNation did not respond to a request for comment.

Known as an audit study, the experiment was the largest of its kind in the United States: The researchers sent 80,000 résumés to 10,000 jobs from 2019 to 2021. The results demonstrate how entrenched employment discrimination is in parts of the U.S. labor market — and the extent to which Black workers start behind in certain industries.

“I am not in the least bit surprised,” said Daiquiri Steele, an assistant professor at the University of Alabama School of Law who previously worked for the Department of Labor on employment discrimination. “If you’re having trouble breaking in, the biggest issue is the ripple effect it has. It affects your wages and the economy of your community going forward.”

Some companies showed no difference in how they treated applications from people assumed to be white or Black. Their human resources practices — and one policy in particular (more on that later) — offer guidance for how companies can avoid biased decisions in the hiring process.

A lack of racial bias was more common in certain industries: food stores, including Kroger; food products, including Mondelez; freight and transport, including FedEx and Ryder; and wholesale, including Sysco and McLane Company.

“We want to bring people’s attention not only to the fact that racism is real, sexism is real, some are discriminating, but also that it’s possible to do better, and there’s something to be learned from those that have been doing a good job,” said Patrick Kline, an economist at the University of California, Berkeley, who conducted the study with Evan K. Rose at the University of Chicago and Christopher R. Walters at Berkeley.

The researchers first published details of their experiment in 2021, but without naming the companies. The new paper, which is set to run in the American Economic Review, names the companies and explains the methodology developed to group them by their performance, while accounting for statistical noise.

Source: What Researchers Discovered When They Sent 80,000 Fake Résumés to U.S. Jobs

Experts with an accent are judged less credible

Interesting study:

Foreign accents strongly shape the opinions people form of newcomers and experts, the results of a new study suggest. Having an accent and belonging to a visible minority “hinders” the possibility of being perceived as legitimate, trustworthy, or even credible.

The study thus confirms other Quebec research on barriers to employment and on “glottophobia,” a form of linguistic discrimination that includes accent. It is already known that experts’ skin colour, religion, or gender influence how they are perceived. This time, “the starting point is discrimination based on accent,” explains Professor Antoine Bilodeau, who conducted the survey with his team at Concordia University and will present its conclusions at the Acfas congress this week.

“We are familiar with the concept of visible minorities, but much less so with audible minorities,” says this specialist in political science and immigrant integration. The current results show that having a foreign accent, whether or not combined with being racialized, “hinders the possibility” of being perceived as a legitimate, trustworthy, and even credible expert.

The researchers asked 1,200 people in each of the two provinces to assess an expert’s credibility from a photo and an audio recording. The effect of accent is undeniable in every scenario tested in the survey, but it is not the same in Quebec as in Ontario.

Each survey respondent saw only one vignette, either a white or a Black man, and then heard that person speak a single time about climate change and the carbon tax. In Quebec, the voice had either a Québécois accent, an Eastern European accent, or a West African accent (from Togo). In Ontario, it was instead a fairly neutral anglophone accent, plus the same foreign accents.

Neither the “origin” of the accent nor the purpose of the evaluation was revealed to respondents, Bilodeau notes, “because we wanted people to interpret the accent themselves.” Respondents were then asked to judge the expert’s credibility from several angles: the eloquence of his message, his competence, and his professionalism. “Is he convincing? Is he trustworthy?” the professor adds by way of example.

It depends on the conception of “us”

In Quebec, the effect of a foreign accent was greater for the non-racialized person. In Ontario, it was more “punitive” for the racialized expert. “That may be the Quebec specificity: language is so central that as soon as we see a white person, we expect them to have the same accent,” Bilodeau suggests.

There is thus a “surprise effect” that contradicts this expectation and negatively affects perception. Conversely, the racialized expert with a Québécois accent is the one who scores highest on credibility.

A visible minority who has, or adopts, the local accent is in a sense “rewarded,” according to these results. “It is as if his having a majority [Québécois] accent defused an anticipation of distance. It suddenly brings the respondent closer to the expert who is speaking,” the researcher offers as a hypothesis.

“Is it enough to speak French, or must one speak it the ‘right way’ to truly belong to the group?” Bilodeau reflects.

The study went further, in fact, to better understand respondents’ reactions in light of their own conception of what defines their in-group. It included a series of questions on the criteria considered important for being a “real Quebecer” or a “real Ontarian”: must one be born in the province, have spent most of one’s life in the province, be white, be Christian, feel Québécois or Ontarian, respect the laws, and so on.

Those with a more exclusionary conception also reacted most strongly to the accent of the white expert in Quebec.

A form of discrimination that is too socially acceptable

“Accent is something we don’t think about, or talk about less,” agrees Victor Armony, professor of sociology at UQAM. Yet in a study he conducted at the Observatoire sur les inégalités raciales au Québec, accent ranked second among the reasons for discrimination cited by respondents.

“I started from a kind of puzzle,” he explains. Among several populations, significant gaps in income or positions persist for the same qualifications, even when those populations are not “direct or open” targets of racism.

He gives the example of Latin Americans: “There are sometimes favourable prejudices toward Latinos. We are seen as warm; we bring a cuisine, music, joie de vivre, and so on. The other side of the coin: we are not always taken seriously intellectually or professionally,” the researcher explains.

A qualified person with a degree, “who makes considerable efforts to speak French” and is received without prior hostility, can still be devalued because of their accent.

“It is the accent that makes the message inadmissible, less interesting, and sometimes set aside,” he summarizes. Having arrived from Argentina more than 30 years ago, Armony has experienced this himself. “It is the mocking, impatient, contemptuous gaze of the other that ends up affecting your confidence, your self-esteem, or your willingness to speak in front of others even when you have something to say. So you end up keeping quiet and staying in your place,” he explains.

Linguistic discrimination, notably based on accent and also called “glottophobia,” is more insidious. “Socially, glottophobia is not recognized as discrimination. So it can serve as a pretext or a screen to hide another, socially unaccepted form of discrimination,” says sociologist Christian Bergeron.

Like Antoine Bilodeau, but in a different field, he too notes that attitudes differ according to self-perception: “The more a speaker believes they hold the norm, that is, the right way to express oneself in French, the more they tend to reject other ways of speaking and sometimes even to discriminate against the other,” says this University of Ottawa professor.

More insidious or less overt, it can nonetheless become a real barrier to employment, Armony notes. “People will invoke, for example, the idea that they need someone with ‘perfect French,’ but they are then confusing grammar with the quality of French as judged by accent,” he reports.

Quebec’s Charter of Human Rights and Freedoms does not explicitly name accent, but rather language. It is nonetheless forbidden to treat a person differently, or to engage in repeated offensive behaviour toward them, because of their accent, according to the Commission des droits de la personne et de la jeunesse du Québec.

France went further in 2020, adopting a law that punishes accent-based discrimination with penalties of up to three years in prison and a €45,000 fine. “‘Audible’ minorities are the great forgotten ones of a social contract founded on equality,” declared the bill’s sponsor, MP Christophe Euzet, himself from a region of France known for sounding quite different from Paris.

Source: Les experts avec un accent sont jugés moins crédibles

Embedded Bias: How medical records sow discrimination | New Orleans’ Multicultural News Source

Of interest and unfortunately not all that surprising.

One of the benefits of electronic hospital records, at least the ones I have in Ottawa, is that I can see my doctor’s notes.

Not sure how widespread these systems are, but they do provide needed medical information on a close to real-time basis, as well as hopefully reducing discrimination through increased public accountability and transparency.

But during my various times at the hospital for my cancer treatments, I became very aware of just how privileged I was compared to other patients in terms of education, income, and language:

David Confer, a bicyclist and an audio technician, told his doctor he “used to be Ph.D. level” during a 2019 appointment in Washington, D.C. Confer, then 50, was speaking figuratively: He was experiencing brain fog — a symptom of his liver problems. But did his doctor take him seriously? Now, after his death, Confer’s partner, Cate Cohen, doesn’t think so.

Confer, who was Black, had been diagnosed with non-Hodgkin lymphoma two years before. His prognosis was positive. But during chemotherapy, his symptoms — brain fog, vomiting, back pain — suggested trouble with his liver, and he was later diagnosed with cirrhosis. He died in 2020, unable to secure a transplant. Throughout, Cohen, now 45, felt her partner’s clinicians didn’t listen closely to him and had written him off.

That feeling crystallized once she read Confer’s records. The doctor described Confer’s fuzziness and then quoted his Ph.D. analogy. To Cohen, the language was dismissive, as if the doctor didn’t take Confer at his word. It reflected, she thought, a belief that he was likely to be noncompliant with his care — that he was a bad candidate for a liver transplant and would waste the donated organ.

For its part, MedStar Georgetown, where Confer received care, declined to comment on specific cases. But spokesperson Lisa Clough said the medical center considers a variety of factors for transplantation, including “compliance with medical therapy, health of both individuals, blood type, comorbidities, ability to care for themselves and be stable, and post-transplant social support system.” Not all potential recipients and donors meet those criteria, Clough said.

Doctors often send signals of their appraisals of patients’ personas. Researchers are increasingly finding that doctors can transmit prejudice under the guise of objective descriptions. Clinicians who later read those purportedly objective descriptions can be misled and deliver substandard care.

Discrimination in health care is “the secret, or silent, poison that taints interactions between providers and patients before, during, after the medical encounter,” said Dayna Bowen Matthew, dean of George Washington University’s law school and an expert in civil rights law and disparities in health care.

Bias can be seen in the way doctors speak during rounds. Some patients, Matthew said, are described simply by their conditions. Others are characterized by terms that communicate more about their social status or character than their health and what’s needed to address their symptoms. For example, a patient could be described as an “80-year-old nice Black gentleman.” Doctors mention that patients look well-dressed or that someone is a laborer or homeless.

The stereotypes that can find their way into patients’ records sometimes help determine the level of care patients receive. Are they spoken to as equals? Will they get the best, or merely the cheapest, treatment? Bias is “pervasive” and “causally related to inferior health outcomes, period,” Matthew said.

Narrow or prejudiced thinking is simple to write down and easy to copy and paste over and over. Descriptions such as “difficult” and “disruptive” can become hard to escape. Once so labeled, patients can experience “downstream effects,” said Dr. Hardeep Singh, an expert in misdiagnosis who works at the Michael E. DeBakey Veterans Affairs Medical Center in Houston. He estimates misdiagnosis affects 12 million patients a year.

Conveying bias can be as simple as a pair of quotation marks. One team of researchers found that Black patients, in particular, were quoted in their records more frequently than other patients when physicians were characterizing their symptoms or health issues. The quotation mark patterns detected by researchers could be a sign of disrespect, used to communicate irony or sarcasm to future clinical readers. Among the types of phrases the researchers spotlighted were colloquial language or statements made in Black or ethnic slang.

“Black patients may be subject to systematic bias in physicians’ perceptions of their credibility,” the authors of the paper wrote.

That’s just one study in an incoming tide focused on the variations in the language that clinicians use to describe patients of different races and genders. In many ways, the research is just catching up to what patients and doctors knew already, that discrimination can be conveyed and furthered by partial accounts.

Confer’s MedStar records, Cohen thought, were pockmarked with partial accounts — notes that included only a fraction of the full picture of his life and circumstances.

Cohen pointed to a write-up of a psychosocial evaluation, used to assess a patient’s readiness for a transplant. The evaluation stated that Confer drank a 12-pack of beer and perhaps as much as a pint of whiskey daily. But Confer had quit drinking after starting chemotherapy and had been only a social drinker before, Cohen said. It was “wildly inaccurate,” Cohen said.

“No matter what he did, that initial inaccurate description of the volume he consumed seemed to follow through his records,” she said.

Physicians frequently see a harsh tone in referrals from other programs, said Dr. John Fung, a transplant doctor at the University of Chicago who advised Cohen but didn’t review Confer’s records. “They kind of blame the patient for things that happen, not really giving credit for circumstances,” he said. But, he continued, those circumstances are important — looking beyond them, without bias, and at the patient himself or herself can result in successful transplants.

The History of One’s Medical History
That doctors pass private judgments on their patients has been a source of nervous humor for years. In an episode of the sitcom “Seinfeld,” Elaine Benes discovers that a doctor had condescendingly written that she was “difficult” in her file. When she asked about it, the doctor promised to erase it. But it was written in pen.

The jokes reflect long-standing conflicts between patients and doctors. In the 1970s, campaigners pushed doctors to open up records to patients and to use less stereotyping language about the people they treated.

Nevertheless, doctors’ notes historically have had a “stilted vocabulary,” said Dr. Leonor Fernandez, an internist and researcher at Beth Israel Deaconess Medical Center in Boston. Patients are often described as “denying” facts about their health, she said, as if they’re not reliable narrators of their conditions.

One doubting doctor’s judgment can alter the course of care for years. When she visited her doctor for kidney stones early in her life, “he was very dismissive about it,” recalled Melina Oien, who now lives in Tacoma, Washington. Afterward, when she sought care in the military health care system, providers — whom Oien presumed had read her history — assumed that her complaints were psychosomatic and that she was seeking drugs.

“Every time I had an appointment in that system — there’s that tone, that feel. It creates that sense of dread,” she said. “You know the doctor has read the records and has formed an opinion of who you are, what you’re looking for.”

When Oien left military care in the 1990s, her paper records didn’t follow her. Nor did those assumptions.

New Technology — Same Biases?
While Oien could leave her problems behind, the health system’s shift to electronic medical records and the data-sharing it encourages can intensify misconceptions. It’s easier than ever to maintain stale records, rife with false impressions or misreads, and to share or duplicate them with the click of a button.

“This thing perpetuates,” Singh said. When his team reviewed records of misdiagnosed cases, he found them full of identical notes. “It gets copy-pasted without freshness of thinking,” he said.

Research has found that misdiagnosis disproportionately happens to patients whom doctors have labeled as “difficult” in their electronic health record. Singh cited a pair of studies that presented hypothetical scenarios to doctors.

In the first study, participants reviewed two sets of notes, one in which the patient was described simply by her symptoms and a second in which descriptions of disruptive or difficult behaviors had been added. Diagnostic accuracy dropped with the difficult patients.

The second study assessed treatment decisions and found that medical students and residents were less likely to prescribe pain medications to patients whose records included stigmatizing language.

Digital records can also display prejudice in handy formats. A 2016 paper in JAMA discussed a small example: an unnamed digital record system that affixed an airplane logo to some patients to indicate that they were, in medical parlance, “frequent flyers.” That’s a pejorative term for patients who need plenty of care or are looking for medications.

But even as tech might amplify these problems, it can also expose them. Digitized medical records are easily shared — and not merely with fellow doctors, but also with patients.

Since the ‘90s, patients have had the right to request their records, and doctors’ offices can charge only reasonable fees to cover the cost of clerical work. Penalties against practices or hospitals that failed to produce records were rarely assessed — at least until the Trump administration, when Roger Severino, previously known as a socially conservative champion of religious freedom, took the helm of the U.S. Department of Health and Human Services’ Office for Civil Rights.

During Severino’s tenure, the office assessed a spate of monetary fines against some practices. The complaints mostly came from higher-income people, Severino said, citing his own difficulties getting medical records. “I can only imagine how much harder it often is for people with less means and education,” he said.

Patients can now read the notes — the doctors’ descriptions of their conditions and treatments — because of 2016 legislation. The bill nationalized policies that had started earlier in the decade, in Boston, because of an organization called OpenNotes.

For most patients, most of the time, opening record notes has been beneficial. “By and large, patients wanted to have access to the notes,” said Fernandez, who has helped study and roll out the program. “They felt more in control of their health care. They felt they understood things better.” Studies suggest that open notes lead to increased compliance, as patients say they’re more likely to take medicines.

Conflicts Ahead?
But there’s also a darker side to opening records: what happens when patients find something they don’t like. Fernandez’s research, focusing on some early hospital adopters, has found that slightly more than 1 in 10 patients report being offended by what they find in their notes.

And the wave of computer-driven research focusing on patterns of language has similarly found low but significant numbers of discriminatory descriptions in notes. A study published in the journal Health Affairs found negative descriptors in nearly 1 in 10 records. Another team found stigmatizing language in 2.5 percent of records.

Patients can also compare what happened in a visit with what was recorded. They can see what was really on doctors’ minds.

Oien, who has become a patient advocate since moving on from the military health care system, recalled an incident in which a client fainted while getting a drug infusion — treatments for thin skin, low iron, esophageal tears, and gastrointestinal conditions — and needed to be taken to the emergency room. Afterward, the patient visited a cardiologist. The cardiologist, who hadn’t seen her previously, was “very verbally professional,” Oien said. But what he wrote in the note — a story based on her ER visit — was very different. “Ninety percent of the record was about her quote-unquote drug use,” Oien said, noting that it’s rare to see the connection between a false belief about a patient and the person’s future care.

Spotting those contradictions will become easier now. “People are going to say, ‘The doc said what?’” predicted Singh.

But many patients — even ones with wealth and social standing — may be reluctant to talk to their doctors about errors or bias. Fernandez, the OpenNotes pioneer, didn’t. After one visit, she saw a physical exam listed on her record when none had occurred.

“I did not raise that to that clinician. It’s really hard to raise things like that,” she said. “You’re afraid they won’t like you and won’t take good care of you anymore.”

Kaiser Health News is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation, which is not affiliated with Kaiser Permanente. This story also appeared on The Daily Beast.

Source: Embedded Bias: How medical records sow discrimination | New Orleans’ Multicultural News Source

‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Legitimate concerns about AI bias (a bias that individual human decision-makers share) also need to address “noise”: the variability in the decisions different people reach for comparable cases:

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the case rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or – in the case of lenders – assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
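The proxy effect described above can be shown in a few lines of toy code (all data and the decision rule here are invented for illustration): a model that never receives the protected attribute still produces skewed outcomes when it leans on a correlated feature such as postcode.

```python
# Toy illustration: "attribute-blind" lending that discriminates anyway,
# because the protected attribute correlates with a feature the model uses.
import random

random.seed(0)

# Simulate applicants: group A mostly lives in postcode 1, group B in postcode 2.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    postcode = 1 if (group == "A") == (random.random() < 0.9) else 2
    applicants.append((group, postcode))

# Historical lending favoured postcode 1, so a model trained on past
# approvals learns "approve postcode 1" -- with no group input at all.
def model_approves(postcode):
    return postcode == 1

for g in ["A", "B"]:
    subset = [a for a in applicants if a[0] == g]
    rate = sum(model_approves(a[1]) for a in subset) / len(subset)
    print(f"group {g}: approval rate {rate:.0%}")
```

The approval rates diverge sharply between the two groups even though the model never sees group membership, which is exactly the postcode problem the paragraph describes.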

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for and controlling for discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.

While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”

Source: ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Helping Hollywood Avoid Claims of Bias Is Now a Growing Business

Business will find a way…

In the summer of 2020, not long after the murder of George Floyd spurred a racial reckoning in America, Carri Twigg’s phone kept ringing.

Ms. Twigg, a founding partner of a production company named Culture House, was asked over and over again if she could take a look at a television or movie script and raise any red flags, particularly on race.

Culture House, which employs mostly women of color, had traditionally specialized in documentaries. But after a few months of fielding the requests about scripts, the company decided to make a business of it, opening a new division dedicated solely to consulting work.

“The frequency of the check-ins was not slowing down,” Ms. Twigg said. “It was like, oh, we need to make this a real thing that we offer consistently — and get paid for.”

Though the company has been consulting for a little more than a year — for clients like Paramount Pictures, MTV and Disney — that work now accounts for 30 percent of Culture House’s revenue.

Culture House is hardly alone. In recent years, entertainment executives have vowed to make a genuine commitment to diversity, but are still routinely criticized for falling short. To signal that they are taking steps to address the issue, Hollywood studios have signed contracts with numerous companies and nonprofits to help them avoid the reputational damage that comes with having a movie or an episode of a TV show face accusations of bias.

“When a great idea is there and then it’s only talked about because of the social implications, that must be heartbreaking for creators who spend years on something,” Ms. Twigg said. “To get it into the world and the only thing anyone wants to talk about are the ways it came up short. So we’re trying to help make that not happen.”

The consulting work runs the gamut of a production. The consulting companies sometimes are asked about casting decisions as well as marketing plans. And they may also read scripts to search for examples of bias and to scrutinize how characters are positioned in a story.

“It’s not only about what characters say, it’s also about when they don’t speak,” Ms. Twigg said. “It’s like, ‘Hey, there’s not enough agency for this character, you’re using this character as an ornament, you’re going to get dinged for that.’”

When a consulting firm is on retainer, it can also come with a guaranteed check every month from a studio. And it’s a revenue stream developed only recently.

“It really exploded in the last two years or so,” said Michelle K. Sugihara, the executive director of Coalition of Asian Pacifics in Entertainment, a nonprofit. The group, called CAPE, is on retainer to some of the biggest Hollywood studios, including Netflix, Paramount, Warner Bros., Amazon, Sony and A24.

Of the 100 projects that CAPE has consulted on, Ms. Sugihara said, roughly 80 percent have come since 2020, and they “really increased” after the Atlanta spa shootings in March 2021. “That really ramped up attention on our community,” she said.

Ms. Sugihara said her group could be actively involved throughout the production process. In one example, she said she told a studio that all of the actors playing the heroes in an upcoming scripted project appeared to be light-skinned East Asian people whereas the villains were portrayed by darker-skinned East Asian actors.

“That’s a red flag,” she said. “And we should talk about how those images may be harmful. Sometimes it’s just things that people aren’t even conscious about until you point it out.”

Ms. Sugihara would not mention the name of the project or the studio behind it. In interviews, many consultants cited nondisclosure agreements with the studios and a reluctance to embarrass a filmmaker as reasons they could not divulge specifics.

Sarah Kate Ellis, the president of GLAAD, the L.G.B.T.Q. advocacy organization, said her group had been doing consulting work informally for years with the networks and studios. Finally, she decided to start charging the studios for their labor — work that she compared to “billable hours.”

“Here we were consulting with all these content creators across Hollywood and not being compensated,” said Ms. Ellis, the organization’s president since 2013. “When I started at GLAAD we couldn’t pay our bills. And meanwhile here we are with the biggest studios and networks in the world, helping them tell stories that were hits. And I said this doesn’t make sense.”

In 2018, she created the GLAAD Media Institute — if the networks or studios wanted any help in the future, they’d have to become a paying member of the institute.

Initially, there was some pushback, but the networks and studios eventually came around. In 2018, there were zero members of the GLAAD Media Institute. By the end of 2021, that number had swelled to 58, with nearly every major studio and network in Hollywood now a paying member.

Scott Turner Schofield, who has spent some time working as a consultant for GLAAD, has also been advising networks and studios on how to accurately depict transgender people for years. But he said the work had increased so significantly in recent years that he was brought on board as an executive producer for a forthcoming horror movie produced by Blumhouse.

“I’ve gone from someone who was a part-time consultant — barely eking by — to being an executive producer,” he said.

Those interviewed said that it was a win-win arrangement between the consultancies and the studios.

“The studios at the end of the day, they want to produce content but they want to make money,” said Rashad Robinson, the president of the advocacy organization Color of Change. “Making money can be impeded because of poor decisions and not having the right people at the table. So the studios are going to want to seek that.”

He did caution, however, that simply bringing on consultants was not an adequate substitute for the structural change that many advocates want to see in Hollywood.

“This doesn’t change the rules with who gets to produce content and who gets to make the final decisions of what gets on the air,” he said. “It’s fine to bring folks in from the outside but that in the end is insufficient to the fact that across the entertainment industry there is still a problem in terms of not enough Black and brown people with power in the executive ranks.”

Still, the burgeoning field of cultural consultancy work may be here to stay. Ms. Twigg, who helped found Culture House with Raeshem Nijhon and Nicole Galovski, said that the volume of requests she was getting was “illustrative of how seriously it’s being taken, and how comprehensively it’s being brought into the fabric of doing business.”

“From a business standpoint, it’s a way for us to capitalize on the expertise that we have gathered as people of color who have been alive in America for 30 or 40 years,” she said.

Source: Helping Hollywood Avoid Claims of Bias Is Now a Growing Business

Canada’s immigration minister says he wants to look into ‘issue’ of discrimination and bias within department 

Immigration is essentially discriminatory in terms of who we select. The challenge is to ensure that the criteria are as objective and neutral as possible with respect to country of origin:

Canada’s immigration minister says he wants to look into the “issue” of discrimination and unconscious bias within the department tasked with triaging and approving immigration requests to Canada.

“Over the past couple of weeks, I’ve become aware of this issue, and it’s something that I personally want to look into,” Immigration Minister Sean Fraser told reporters Wednesday as he entered a Liberal caucus meeting.

“There’s no secret that over the course of Canada’s history, unconscious bias and systemic racism have been a shameful part of Canada’s history over different aspects of the government’s operations. One of the things that we want to do is make sure that … this kind of unconscious bias doesn’t discriminate against people who come from a particular part of the world.”

Fraser was responding to questions on a recent report in Montreal newspaper Le Devoir that Immigration, Refugees and Citizenship Canada (IRCC) is increasingly refusing foreign student applications from francophone African countries to Quebec, whereas English-speaking applicants are increasingly approved.

Immigration lawyers quoted in the report stated that IRCC recently refused applications from nearly 100 per cent of students from the Maghreb and West African countries applying to study in Quebec.

Fraser says he’s certain that the department was not consciously discriminating against those countries, but he still wants to look into it to make sure no other factors than those set out in immigration legislation are being considered when assessing requests.

“I certainly don’t think that there’s been a decision actively to pick one country over another. I think there’s certain factors that IRCC officials assess when they’re trying to admit more newcomers to Canada,” Fraser said.

“But it would be silly if I were to stand here and say that in a department of 11,000 people, if you look at the different operations of IRCC, to say that there is no discrimination,” he added.

He also promised to look at ways to bring more, not less, French-speaking students into Canada.

“International students are one of the groups that successfully integrate more and more so than just about any other group of newcomers,” Fraser said. “That’s a good thing, not just for the newcomer to Canada, but for our economy as well.”

Reporters then asked the newly minted minister if it was ironic that there would be issues of discrimination and conscious or unconscious bias in the department tasked with handling foreign immigration.

“I think there’s a big distinction between what should be and what is,” the minister responded. “I think we need to constantly be looking to make sure that the public has faith in the system.”

In a follow-up statement, Fraser’s press secretary noted that the minister intended to continue the work already launched by IRCC to “eradicate racism” within the department, including creating a task force dedicated to the task “full-time,” mandatory unconscious bias training for employees and executives and appointing an “anti-racism representative” within each sector of the department.

Earlier this year, IRCC published a report based on focus groups of its employees that revealed that there were multiple and repeated reports of racist incidents within the workplace.

“Experiences of racism at IRCC include microaggressions, biases in hiring and promotion as well as biases in the delivery of IRCCs programs, policies and client service,” reads a summary of the findings, which were first reported by CBC last month.

“In addition, employees paint a picture of an organization fraught with challenges at the level of workplace culture” and a “history of racism going unchecked.”

For example, the report notes that an IRCC team leader was said to have “loudly” declared that colonialism was “good” and that “if ‘the natives’ wanted the land they should have just stood up.”

In another case, non-racialized employees and supervisors were known to refer to parts of the department employing a higher number of racialized employees as “the ghetto.”

Participants also noted “widespread” internal references to certain African nations as “the dirty 30.”

Source: https://nationalpost.com/news/politics/canadas-immigration-minister-says-he-wants-to-look-into-issue-of-potential-discrimination-and-bias-within-department

AI’s anti-Muslim bias problem

Of note (and unfortunately, not all that surprising):

Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”

Which word would you add? “Bar,” maybe?

It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.”

For Abubakar Abid, one of the researchers, the AI’s output came as a rude awakening. “We were just trying to see if it could tell jokes,” he recounted to me. “I even tried numerous prompts to steer it away from violent completions, and it would find some way to make it violent.”

Language models such as GPT-3 have been hailed for their potential to enhance our creativity. Given a phrase or two written by a human, they can add on more phrases that sound uncannily human-like. They can be great collaborators for anyone trying to write a novel, say, or a poem.

Source: AI’s anti-Muslim bias problem

What unconscious bias training gets wrong… and how to fix it

Good overview of the latest research and lessons. Main conclusion: there is no quick fix; this has to be part of ongoing training and awareness:

Here’s a fact that cannot be disputed: if your name is James or Emily, you will find it easier to get a job than someone called Tariq or Adeola. Between November 2016 and December 2017, researchers sent out fake CVs and cover letters for 3,200 positions. Despite demonstrating exactly the same qualifications and experience, the “applicants” with common Pakistani or Nigerian names needed to send out 60% more applications to receive the same number of callbacks as applicants with more stereotypically British names.

Some of the people who had unfairly rejected Tariq or Adeola will have been overtly racist, and so deliberately screened people based on their ethnicity. According to a large body of psychological research, however, many will have also reacted with an implicit bias, without even being aware of the assumptions they were making.

Such findings have spawned a plethora of courses offering “unconscious bias and diversity training”, which aim to reduce people’s racist, sexist and homophobic tendencies. If you work for a large organisation, you’ve probably taken one yourself. Last year, Labour leader Keir Starmer volunteered to undergo such training after he appeared to dismiss the importance of the Black Lives Matter movement. “There is always the risk of unconscious bias, and just saying: ‘Oh well, it probably applies to other people, not me,’ is not the right thing to do,” he said. Even Prince Harry has been educating himself about his potential for implicit bias – and advising others to do the same.

Sounds sensible, doesn’t it? You remind people of their potential for prejudice so they can change their thinking and behaviour. Yet there is now a severe backlash against the very idea of unconscious bias and diversity training, with an increasing number of media articles lamenting these “woke courses” as a “useless” waste of money. The sceptics argue that there is little evidence that unconscious bias training works, leading some organisations – including the UK’s civil service – to cancel their schemes.

So what’s the truth? Is it ever possible to correct our biases? And if so, why have so many schemes failed to make a difference?

While the contents of unconscious bias and diversity training courses vary widely, most share a few core components. Participants will often be asked to complete the implicit association test (IAT), for example. By measuring people’s reaction times during a word categorisation task, an algorithm can calculate whether people have more positive or negative associations with a certain group – such as people of a different ethnicity, sexual orientation or gender. (You can try it for yourself on the Harvard website.)
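The scoring idea behind the IAT can be sketched roughly as follows (simplified: the published D-score algorithm also trims outlier trials and penalises errors, and the reaction times below are invented): faster responses when two concepts share a response key are read as a stronger implicit association between them.

```python
# Rough sketch of an IAT-style score: the gap in mean reaction time between
# "congruent" and "incongruent" pairings, scaled by the pooled spread.
from statistics import mean, stdev

def d_score(congruent_ms, incongruent_ms):
    """Difference in mean reaction time, in units of the pooled standard deviation."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical reaction times (milliseconds) for one participant.
congruent = [620, 580, 640, 600, 590]    # pairing matches a stereotype
incongruent = [760, 820, 780, 800, 750]  # pairing goes against it

print(round(d_score(congruent, incongruent), 2))  # positive → implicit association
```

The week-to-week variability mentioned later in the piece matters here: a score built from a handful of noisy reaction times can swing considerably between sittings, which is why researchers caution against treating a single result as a diagnosis.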

After taking the IAT, participants will be debriefed about their results. They may then learn about the nature of unconscious bias and stereotypes more generally, and the consequences within the workplace, along with some suggestions to reduce the impact.

All of which sounds useful in theory. To confirm the benefits, however, you need to compare the attitudes and behaviours of employees who have taken unconscious bias and diversity training with those who have not – in much the same way that drugs are tested against a placebo.

Prof Edward Chang at Harvard Business School has led one of the most rigorous trials, delivering an hour-long online diversity course to thousands of employees at an international professional services company. Using tools like the IAT, the training was meant to educate people about sexist stereotypes and their consequences – and surveys suggest that it did change some attitudes. The participants reported greater acknowledgment of their own bias after the course, and greater support of women in the workplace, than people who had taken a more general course on “psychological safety” and “active listening”.

Unfortunately, this didn’t translate to the profound behavioural change you might expect. Three weeks after taking the course, the employees were given the chance of taking part in an informal mentoring scheme. Overall, the people who had taken the diversity course were no more likely to take on a female mentee. Six weeks after taking the course, the participants were also given the opportunity to nominate colleagues for recognition of their “excellence”. It could have been the perfect opportunity to offer some encouragement to overlooked women in the workplace. Once again, however, the people who had taken the diversity training were no more likely to nominate a female colleague than the control group.

“We did our best to design a training that would be effective,” Chang tells me. “But our results suggest that the sorts of one-off trainings that are commonplace in organisations are not particularly effective at leading to long-lasting behaviour change.”

Chang’s results chime with the broader conclusions of a recent report by Britain’s Equality and Human Rights Commission (EHRC), which examined 18 papers on unconscious bias training programmes. Overall, the authors concluded that the courses are effective at raising awareness of bias, but the evidence of long-lasting behavioural change is “limited”.

Even the value of the IAT – which is central to so many of these courses – has been subject to scrutiny. The courses tend to use shortened versions of the test, and the same person’s results can vary from week to week. So while it might be a useful educational aid to explain the concept of unconscious bias, it is wrong to present the IAT as a reliable diagnosis of underlying prejudice.

It certainly sounds damning; little wonder certain quarters of the press have been so willing to declare these courses a waste of time and money. Yet the psychologists researching their value take a more nuanced view, and fear their conclusions have been exaggerated. While it is true that many schemes have ended in disappointment, some have been more effective, and researchers believe we should learn from these successes and failures to design better interventions in the future – rather than simply dismissing them altogether.

For one thing, many of the current training schemes are simply too brief to have the desired effect. “It’s usually part of the employee induction and lasts about 30 minutes to an hour,” says Dr Doyin Atewologun, a co-author of the EHRC’s report and founding member of the British Psychological Society’s diversity and inclusion at work group. “It’s just tucked away into one of the standard training materials.” We should not be surprised the lessons are soon forgotten. In general, studies have shown that diversity training can have more pronounced effects if it takes place over a longer period of time. A cynic might suspect that these short programmes are simple box-ticking exercises, but Atewologun thinks the good intentions are genuine – it’s just that the organisations haven’t been thinking critically about the level of commitment that would be necessary to bring about change, or even how to measure the desired outcomes.

Thanks to this lack of forethought, many of the existing courses may have also been too passive and theoretical. “If you are just lecturing at someone about how pervasive bias is, but you’re not giving them the tools to change, I think there can be a tendency for them to think that bias is normal and thus not something they need to work on,” says Prof Alex Lindsey at the University of Memphis. Attempts to combat bias could therefore benefit from more evidence-based exercises that increase participants’ self-reflection, alongside concrete steps for improvement.

Lindsey’s research team recently examined the benefits of a “perspective-taking” exercise, in which participants were asked to write about the challenges faced by someone within a minority group. They found that the intervention brought about lasting changes to people’s attitudes and behavioural intentions for months after the training. “We might not know exactly what it’s like to be someone of a different race, sex, religion, or sexual orientation from ourselves, but everyone, to some extent, knows what it feels like to be excluded in a social situation,” Lindsey says. “Once trainees realise that some people face that kind of ostracism on a more regular basis as a result of their demographic characteristics, I think that realisation can lead them to respond more empathetically in the future.”

Lindsey has found that you should also encourage participants to reflect on the ways their own behaviour may have been biased in the past, and to set themselves future goals during their training. Someone will be much more likely to act in an inclusive way if they decide, in advance, to challenge any inappropriate comments about a minority group, for example. This may be more powerful still, he says, if there is some kind of follow-up to check in with participants’ progress – an opportunity that the briefer courses completely miss. (Interestingly, he has found that these reflective techniques can be especially effective among people who are initially resistant to the idea of diversity training.)

More generally, these courses may often fail to bring about change because people become too defensive about the very idea that they may be prejudiced. Without excusing the biases, the courses might benefit from explaining how easily stereotypes can be absorbed – even by good, well-intentioned people – while also emphasising the individual responsibility to take action. Finally, they could teach people to recognise the possibility of “moral licensing”, in which an ostensibly virtuous act, such as attending the diversity course itself, or promoting someone from a minority, excuses a prejudiced behaviour afterwards, since you’ve already “proven” yourself to be a liberal and caring person. 

Ultimately, the psychologists I’ve spoken to all agree that organisations should stop seeing unconscious bias and diversity training as a quick fix, and instead use it as the foundation for broader organisational change.

“Anyone who has been in any type of schooling system knows that even the best two- or three-hour class is not going to change our world for ever,” says Prof Calvin Lai, who investigates implicit bias at Washington University in St Louis. “It’s not magic.” But it may act as a kind of ice-breaker, he says, helping people to be more receptive to other initiatives – such as those aimed at a more inclusive recruitment process.

Chang agrees. “Diversity training is unlikely to be an effective standalone solution,” he says. “But that doesn’t mean that it can’t be an effective component of a multipronged approach to improving diversity, equity and inclusion in organisations.”

Atewologun compares it to the public health campaigns to combat obesity and increase fitness. You can provide people with a list of the calories in different foods and the benefits of exercise, she says – but that information, alone, is unlikely to lead to significant weight loss, without continued support that will help people to act on that information. Similarly, education about biases can be a useful starting point, but it’s rather absurd to expect that ingrained habits could evaporate in a single hour of education.

“We could be a lot more explicit that it is step one,” Atewologun adds. “We need multiple levels of intervention – it’s an ongoing project.”

Source: https://www.theguardian.com/science/2021/apr/25/what-unconscious-bias-training-gets-wrong-and-how-to-fix-it

Using A.I. to Find Bias in A.I.

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.

This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about A.I., it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O’Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

“Changing mentalities does not happen overnight — and that is even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind but many minds.”

When she started advising businesses on A.I. bias more than two years ago, Ms. O’Sullivan was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.
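Why “fairness through unawareness” tends to fail can be shown with a toy simulation: even when a model never sees a protected attribute, an innocuous-looking feature that happens to correlate with it can act as a proxy. The sketch below is entirely synthetic and hypothetical (the “postcode” feature, the group sizes, and the effect sizes are invented for illustration, not drawn from any real system):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: a protected attribute (0/1) the model never sees,
# and a "neutral" feature (say, postcode) that correlates with it.
group = rng.integers(0, 2, n)
postcode = group + rng.normal(0.0, 0.5, n)  # proxy: correlated with group

# Historical outcomes are themselves skewed against group 1.
outcome = (rng.normal(0.0, 1.0, n) - 0.8 * group > 0).astype(int)

# "Fairness through unawareness": fit a one-feature linear score on the
# proxy alone, with no access to `group` at all.
slope = np.polyfit(postcode, outcome, 1)[0]
score = slope * postcode

# The learned score still moves with group membership, via the proxy.
print(f"slope={slope:.3f}")
print(f"mean score, group 0: {score[group == 0].mean():.3f}")
print(f"mean score, group 1: {score[group == 1].mean():.3f}")
```

The model was never shown the protected attribute, yet its scores systematically differ between the two groups, because the skew in the historical outcomes flows through the correlated feature.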

But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
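The imbalanced-training-data effect those studies describe can be reproduced in miniature. In this synthetic sketch (all numbers are invented for illustration; no real face-recognition system works this simply), a single decision threshold is tuned on pooled data dominated by a majority group, and ends up noticeably less accurate on the minority group:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, pos_mean, rng):
    """Synthetic one-feature data: positives sit at pos_mean, negatives at 0."""
    y = rng.integers(0, 2, n)                              # true labels
    x = rng.normal(np.where(y == 1, pos_mean, 0.0), 1.0)   # observed feature
    return x, y

x_a, y_a = make_group(5000, pos_mean=2.0, rng=rng)  # majority group
x_b, y_b = make_group(250,  pos_mean=1.0, rng=rng)  # minority group

# "Training": pick the single threshold that maximizes accuracy on the
# pooled (mostly group-A) data.
x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])
thresholds = np.linspace(-1, 3, 401)
accs = [((x_train > t).astype(int) == y_train).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

# The shared threshold serves the majority group far better.
acc_a = ((x_a > t_best).astype(int) == y_a).mean()
acc_b = ((x_b > t_best).astype(int) == y_b).mean()
print(f"threshold={t_best:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

A single model tuned to minimize overall error naturally fits the group that dominates the training set, which is the same dynamic at work when a face-recognition model is trained mostly on photos of white men.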

Designers can be blind to these problems. The workers in India — where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States — were classifying the photos as they saw fit.

Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment. 

She now believes that after years of public complaints over bias in A.I. — not to mention the threat of regulation — attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument did not hold up.

“They are acknowledging that you need to turn over the rocks and see what is underneath,” Ms. O’Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. “We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. “Many of the things that the average person cares about — such as fairness — are not yet being measured in a disciplined or a large-scale way.”

Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. — and the difficulty of Ms. O’Sullivan’s task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored — or when those discussing the issues carry the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “It is a very important question I am not sure I can answer.”

Source: https://www.nytimes.com/2021/06/30/technology/artificial-intelligence-bias.html