To combat hate in Canada, South Asians will have to move past their own divisions

Good reminder that multiculturalism is not just about white/non-white relations but also about relations within and among visible minority groups, and not just South Asians:

…It’s time that we as a diaspora have a hard conversation about how we can talk to people from different religions and backgrounds without seeing only our differences. By talking, we can break away from the ill-informed caricatures so many of us have created of one another in our heads. But for a community that prides itself on maintaining traditions, this conversation is the most difficult thing to start. Indeed, any mention of change in front of extended family instantly gets me dismissed as the “Westernized child” who’s strayed far from home.

South Asia is far from a monolith. We have dozens of different and beautiful subcultures ingrained into our land. But rather than share the best parts, we too often choose to focus on what we see as the worst. Coming to Canada gave us all a chance to start over; instead, too many of us are throwing that away to perpetuate generational wounds. That only benefits those who already hate us.

South Asian Canadians don’t have to forget our history. But we do have to work together to move past it so that it doesn’t define our life here – if not for us now, then for future generations.

Khushy Vashisht is a Toronto-based freelance journalist.

Source: To combat hate in Canada, South Asians will have to move past their own divisions

Conservatives call for investigation into CBC after journalist resigns over ‘performative diversity, tokenism’

Interesting that Dhanraj is represented by right-leaning activist lawyer Marshall:

The Conservative party is calling for a parliamentary committee to investigate the CBC after journalist Travis Dhanraj resigned over the public broadcaster’s alleged “performative diversity, tokenism, a system designed to elevate certain voices and diminish others.”

Dhanraj was the host of Canada Tonight: With Travis Dhanraj on CBC. But he resigned on Monday, involuntarily, he says, because the CBC “has made it impossible for me to continue my work with integrity.”

“I have been systematically sidelined, retaliated against, and denied the editorial access and institutional support necessary to fulfill my public service role,” he wrote in his resignation letter. “I stayed as long as I could, but CBC leadership left me with no reasonable path forward.”

On Wednesday, Rachel Thomas, an Alberta Conservative member of Parliament, wrote a letter to the chair of the House of Commons standing committee on Canadian heritage, saying that Dhanraj’s claims have “reignited concerns about the organization’s workplace culture.”

The letter calls on the chair, Ontario Liberal MP Lisa Hepfner, to recall the committee.

“It is critical that we hear testimony from Mr. Dhanraj, CBC executives and Minister of Canadian Identity and Culture, Steven Guilbeault,” the letter states.

Kathryn Marshall, Dhanraj’s lawyer, told National Post in an interview that they welcome the attempt to recall the committee for hearings….

Source: Conservatives call for investigation into CBC after journalist resigns over ‘performative diversity, tokenism’

Racial bias exists in five-star ratings for gig workers, study shows. Thumbs up/thumbs down scale would fix that

Small but significant difference and impact:

Most of us do it, sometimes daily. After ordering a ride, getting a meal delivered or hiring someone for home repairs through an app, we’re asked to rate the service – often on a five-star scale.

Ratings are intended to be a fair way to reward good work and ensure those who provide exceptional service get more business.

“We want to ensure that the [rating] system allows shoppers’ effort to shine,” John Adams, vice-president of product at Instacart, states on the delivery company’s website. 

But a study published recently in the science journal Nature suggests otherwise. It found these seemingly neutral five-star systems harbour subtle but measurable racial bias that disappears when a “thumbs-up” or “thumbs-down” rating system is used.

Using data from an unnamed home-services app operating in Canada and the United States that had been using a five-star scale, researchers found a statistically significant difference between ratings given to white and non-white workers. After analyzing tens of thousands of reviews, the study showed white workers received an average rating of 4.79 stars, while non-white workers averaged 4.72.

That 0.07-point gap may seem trivial, but co-author Katherine DeCelles, a professor of organizational behaviour at the University of Toronto’s Rotman School of Management, says it has real financial consequences. Because many apps rely on customer ratings to determine which workers are recommended most often, non-white workers were found to earn 91 cents for every dollar their white counterparts made for the same work.

The researchers attribute part of the disparity to subtle and often unconscious bias. For instance, a customer giving a racial minority worker who performs well four stars, instead of five, “does not challenge the customer’s self-image as non-prejudiced, since four stars can still be seen as a positive rating,” according to the report….
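To see how the mechanism described above can play out, here is a purely illustrative sketch: a toy simulation in which a platform steers most jobs to its highest-rated workers. The group means of 4.79 and 4.72 are taken from the figures quoted above; everything else (the noise level, pay, the 75/25 allocation rule) is my own assumption, and the sketch is not meant to reproduce the study’s 91-cent estimate.

```python
import random

# Purely illustrative toy model (my assumptions, not the study's data or
# method): when a platform steers most jobs to its highest-rated workers,
# even a small gap in average star ratings can open a larger earnings gap.

random.seed(0)

def simulate(mean_a=4.79, mean_b=4.72, workers_per_group=500,
             jobs=10_000, pay_per_job=25.0, noise=0.15):
    # Each worker's average rating is drawn around their group mean
    # (4.79 vs 4.72 are the group averages reported in the study).
    ratings = (
        [("A", min(5.0, random.gauss(mean_a, noise))) for _ in range(workers_per_group)]
        + [("B", min(5.0, random.gauss(mean_b, noise))) for _ in range(workers_per_group)]
    )
    # Assumed allocation rule: the top-rated half of workers receives 75%
    # of the jobs, a crude stand-in for being "recommended most often".
    ranked = sorted(ratings, key=lambda r: r[1], reverse=True)
    half = len(ranked) // 2
    earnings = {"A": 0.0, "B": 0.0}
    for group, _ in ranked[:half]:
        earnings[group] += 0.75 * jobs * pay_per_job / half
    for group, _ in ranked[half:]:
        earnings[group] += 0.25 * jobs * pay_per_job / half
    per_worker = {g: earnings[g] / workers_per_group for g in earnings}
    return per_worker["B"] / per_worker["A"]

print(f"Group B earns roughly {simulate():.2f} for every dollar earned by group A")
```

The point of the toy model is only that ranking-based job allocation amplifies small rating differences; the actual size of the gap depends entirely on how a given platform uses the ratings.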

Source: Racial bias exists in five-star ratings for gig workers, study shows. Thumbs up/thumbs down scale would fix that

Urback: A hard diversity quota for medical-school admissions is a terrible, counterproductive idea

Lots of (negative) commentary on the latest TMU initiative.

…All of this is in service to a genuinely noble goal. But the school’s execution – it’s practically boasting of its lax admission requirements – is clumsy, short-sighted and does a disservice to its own prospective students. The unintended consequences are obvious: Canadian patients will start Googling their physician’s educational background and wonder if the resident doctor performing their next procedure was one of the TMU students who got into med school with an art-history degree, a 3.3 GPA and a compelling personal essay. Indeed, the school’s quota system will inevitably condemn all of its graduates to public skepticism about their qualifications and capabilities, even if the physicians TMU produces are in fact very capable, qualified and skilled. It’s a bias of the school’s own making that it will have to fight to counter, and probably lose anyway….

Source: A hard diversity quota for medical-school admissions is a terrible, counterproductive idea

What is striking about most of the similar commentary I have seen is that little of it looks at what the data say about med school diversity. Both earlier studies and the most recent one I found show the issue is largely one for Black and Indigenous students; Chinese and South Asian students are over-represented, and whites under-represented. The latest analysis of diversity among medical students (English-language universities) that I found shows that:

A total of 1388 students responded to the survey, representing a response rate of 16.6%. Most respondents identified as women (63.1%) and were born after 1989 (82.1%). Respondents were less likely, compared to the Canadian Census population, to identify as black (1.7% vs 6.4%) (P < 0.001) or Aboriginal (3.5% vs. 7.4%) (P < 0.001), and have grown up in a rural area (6.4% vs. 18.7%) (P < 0.001). Respondents had higher socioeconomic status, indicated by parental education (29.0% of respondents’ parents had a master’s or doctoral degree, compared to 6.6% of Canadians aged 45–64), occupation (59.7% of respondents’ parents were high-level managers or professionals, compared to 19.2% of Canadians aged 45–64), and income (62.9% of respondents grew up in households with income >$100,000/year, compared to 32.4% of Canadians). [2016 census]

Source: Demographic and socioeconomic characteristics of Canadian medical students: a cross-sectional study

Algorithms help people see and correct their biases, study shows

Of interest:

Algorithms are a staple of modern life. People rely on algorithmic recommendations to wade through deep catalogs and find the best movies, routes, information, products, people and investments. Because people train algorithms on their decisions – for example, algorithms that make recommendations on e-commerce and social media sites – algorithms learn and codify human biases.
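As a concrete illustration of this training loop, here is a minimal sketch (assumed numbers and a generic least-squares model, not the authors’ code): if human ratings carry a small penalty tied to an irrelevant attribute, a model fit to those ratings learns and reproduces the same penalty.

```python
import numpy as np

# Minimal illustrative sketch, not the paper's code: if human ratings carry a
# small penalty tied to an irrelevant attribute, a model trained on those
# ratings learns the same penalty. All numbers here are assumptions.

rng = np.random.default_rng(42)
n = 5_000

quality = rng.normal(4.0, 0.5, n)   # legitimate signal (e.g., past performance)
minority = rng.integers(0, 2, n)    # irrelevant attribute inferred from a photo or name
human_rating = quality - 0.1 * minority + rng.normal(0, 0.3, n)  # assumed 0.1-point bias

# "Train an algorithm on the human decisions": ordinary least squares on
# an intercept, the legitimate signal, and the irrelevant attribute.
X = np.column_stack([np.ones(n), quality, minority])
coef, *_ = np.linalg.lstsq(X, human_rating, rcond=None)

print(f"learned weight on the irrelevant attribute: {coef[2]:+.3f}")
# Expected output is close to -0.1: the trained model has codified the bias.
```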

Algorithmic recommendations exhibit bias toward popular choices and information that evokes outrage, such as partisan news. At a societal level, algorithmic biases perpetuate and amplify structural racial bias in the judicial system, gender bias in the people companies hire, and wealth inequality in urban development.

Algorithmic bias can also be used to reduce human bias. Algorithms can reveal hidden structural biases in organizations. In a paper published in the Proceedings of the National Academy of Science, my colleagues and I found that algorithmic bias can help people better recognize and correct biases in themselves.

The bias in the mirror

In nine experiments, Begum Celikitutan, Romain Cadario and I had research participants rate Uber drivers or Airbnb listings on their driving skill, trustworthiness or the likelihood that they would rent the listing. We gave participants relevant details, like the number of trips a driver had completed, a description of the property, or a star rating. We also included an irrelevant biasing piece of information: a photograph revealing the age, gender and attractiveness of drivers, or a name that implied that listing hosts were white or Black.

After participants made their ratings, we showed them one of two ratings summaries: one showing their own ratings, or one showing the ratings of an algorithm that was trained on their ratings. We told participants about the biasing feature that might have influenced these ratings; for example, that Airbnb guests are less likely to rent from hosts with distinctly African American names. We then asked them to judge how much influence the bias had on the ratings in the summaries.

Whether participants assessed the biasing influence of race, age, gender or attractiveness, they saw more bias in ratings made by algorithms than themselves. This algorithmic mirror effect held whether participants judged the ratings of real algorithms or we showed participants their own ratings and deceptively told them that an algorithm made those ratings. 

Participants saw more bias in the decisions of algorithms than in their own decisions, even when we gave participants a cash bonus if their bias judgments matched the judgments made by a different participant who saw the same decisions. The algorithmic mirror effect held even if participants were in the marginalized category – for example, by identifying as a woman or as Black.

Research participants were as able to see biases in algorithms trained on their own decisions as they were able to see biases in the decisions of other people. Also, participants were more likely to see the influence of racial bias in the decisions of algorithms than in their own decisions, but they were equally likely to see the influence of defensible features, like star ratings, on the decisions of algorithms and on their own decisions.

Bias blind spot

People see more of their biases in algorithms because the algorithms remove people’s bias blind spots. It is easier to see biases in others’ decisions than in your own because you use different evidence to evaluate them.

When examining your decisions for bias, you search for evidence of conscious bias – whether you thought about race, gender, age, status or other unwarranted features when deciding. You overlook and excuse bias in your decisions because you lack access to the associative machinery that drives your intuitive judgments, where bias often plays out. You might think, “I didn’t think of their race or gender when I hired them. I hired them on merit alone.”

When examining others’ decisions for bias, you lack access to the processes they used to make the decisions. So you examine their decisions for bias, where bias is evident and harder to excuse. You might see, for example, that they only hired white men.

Algorithms remove the bias blind spot because you see algorithms more like you see other people than yourself. The decision-making processes of algorithms are a black box, similar to how other people’s thoughts are inaccessible to you. 

Participants in our study who were most likely to demonstrate the bias blind spot were most likely to see more bias in the decisions of algorithms than in their own decisions. 

People also externalize bias in algorithms. Seeing bias in algorithms is less threatening than seeing bias in yourself, even when algorithms are trained on your choices. People put the blame on algorithms. Algorithms are trained on human decisions, yet people call the reflected bias “algorithmic bias.”

Corrective lens

Our experiments show that people are also more likely to correct their biases when they are reflected in algorithms. In a final experiment, we gave participants a chance to correct the ratings they evaluated. We showed each participant their own ratings, which we attributed either to the participant or to an algorithm trained on their decisions.

Participants were more likely to correct the ratings when they were attributed to an algorithm because they believed the ratings were more biased. As a result, the final corrected ratings were less biased when they were attributed to an algorithm.

Algorithmic biases that have pernicious effects have been well documented. Our findings show that algorithmic bias can be leveraged for good. The first step to correct bias is to recognize its influence and direction. As mirrors revealing our biases, algorithms may improve our decision-making.

Source: Algorithms help people see and correct their biases, study shows

Minority status biases evaluation of both women and men professors

Of interest:

Both men and women professors in the United States may receive lower course evaluation scores in departments where the majority of professors are of the other gender. However, because women are more often in the minority, they receive a disproportionate share of lower scores.

Further, since course evaluation scores are a significant factor in promotion and tenure decisions, this disparity negatively affects women professors’ career trajectories, hampering efforts to achieve equity and gender parity in the upper levels of the professoriate, says a new study published in the journal PNAS – Proceedings of the National Academy of Sciences.

“Our key finding is that regardless of which gender is in the minority, that gender receives lower course evaluation scores than does the dominant gender. We saw the same effects for men working in female-dominated departments and women working in male-dominated departments,” says Professor Oriana R Aragón, who teaches in the department of marketing at the Carl H Lindner College of Business of the University of Cincinnati.

She is lead author of the study published in PNAS earlier this year and titled “Gender bias in teaching evaluations: the causal role of department gender composition”.

“These findings are consistent with role congruity theory, which, in the context of academe, says that when a department is majority male or female, members of the opposite gender who teach in it are not deemed to be ‘authentic’ or bona fide experts.

“Students have a sense of, ‘It’s not quite right. I didn’t get the teacher that I should have had.’ This leads them to rate the professor lower, especially in upper-level courses; this negatively affects women professors because they are more often in the minority.”

The study and some findings

There are two parts to the study conducted by Aragón; Evava S Pietri, professor of psychology and neuroscience at the University of Colorado Boulder; and Brian A Powell, Fjeld professor in nuclear environmental engineering and science at Clemson University in South Carolina.

The first part utilised course evaluations from courses in which 115,647 students were enrolled in all of Clemson University’s 51 departments. These evaluations covered 1,885 educators who taught 4,700 courses during the 2018-19 academic year.

The evaluations utilised a Likert-type scale ranging from one (strongly disagree) to five (strongly agree). Since introductory courses have much larger enrolments and therefore require fewer sections, more than 72% of the courses were upper level, that is, years three and four.

These archived evaluations revealed that in departments with gender parity, students rated male and female educators almost equally in both the lower- and upper-level courses.

By contrast, in those departments that were majority male, female educators teaching lower-level courses were rated almost 0.1 point higher than were male teachers: 4.24 to 4.15. In upper-level courses, the relative position of the genders flipped, with women scoring 4.28 and men just under 4.37.

In the lower-level courses, in departments in which women made up the majority of the instructors, female educators actually scored 0.1 points lower than did males: 4.33 to 4.43. In upper-level courses in which the teaching staff was majority female, female educators were rated 4.48 while male teachers were rated more than a tenth of a point lower (4.36), a significant difference in scores.
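For ease of comparison, the short sketch below simply tabulates the mean scores quoted in the two preceding paragraphs (the “just under 4.37” figure is rounded to 4.37, and the grouping labels are mine):

```python
# Course-evaluation means as reported above (2018-19 archival data).
# "Just under 4.37" is rounded to 4.37; the labels are assumed for readability.
means = {
    ("majority-male dept",   "lower-level"): {"women": 4.24, "men": 4.15},
    ("majority-male dept",   "upper-level"): {"women": 4.28, "men": 4.37},
    ("majority-female dept", "lower-level"): {"women": 4.33, "men": 4.43},
    ("majority-female dept", "upper-level"): {"women": 4.48, "men": 4.36},
}

for (dept, level), scores in means.items():
    gap = scores["women"] - scores["men"]
    print(f"{dept:22} {level:12} women {scores['women']:.2f}  "
          f"men {scores['men']:.2f}  gap (women - men) {gap:+.2f}")
```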

Interpreting some findings

Central to understanding the results found in the evaluations from 2018-19, Aragón explained, is role congruity theory.

Female professors are rated more highly by both male and female students in lower-level courses, she says, partially because students value the interpersonal nurturing role that female instructors either provide or are seen to provide at that level of the university.

“Role congruity theory tells us that women are seen as more communal. Women are seen as caretakers of the home and of the sick, for example. In male dominated departments, at least at the lower level, it’s consistent with stereotypes to see women in these roles and that translates into rating them more highly on evaluations.”

The lower course evaluation scores that male instructors receive when they teach lower-level courses in male dominated departments can be understood as the flipside of why male professors are rated so much higher than are female professors in the upper-level courses in these same departments.

The expectation, according to role congruity theory, is that upper-level courses will be taught by experts in their fields. Since 72.6% (or 37) of Clemson University’s programmes have majority male staff, simple maths dictates that the cadre teaching the upper-level courses will be majority male.

Male educators in the lower-level courses pay a price of approximately two-tenths of a point on their course evaluation scores because, Aragón and her co-authors aver, they are seen as fulfilling supporting (that is, stereotypically female) and not essential or agentic roles in their department’s educational and research ecosphere.

Women teachers in upper-level courses in female-dominated departments were rated more highly than those teaching lower-level courses (4.28 to 4.49). They also received higher course evaluation scores than men who teach in departments in which female instructors dominate, such as nursing.

“Because upper-level courses signal high status and require expertise, broader gender stereotypes [that is, those beyond the university itself] would imply that men should teach upper-level courses,” Aragón et al write.

“However,” Aragón further explained to University World News, “the broader stereotype is overridden in female dominated departments, such as nursing or education where women may be considered bona fide members in those fields. And, so follows too, women’s higher evaluation scores, relative to men, when teaching these upper-level courses in female-dominated departments.”

The course evaluation scores for male teachers who teach lower-level courses in majority-female departments are not only approximately 0.2 points higher than those of their male colleagues who teach in majority-male departments, they are also more than a tenth of a point higher than the course evaluation scores of women professors who teach in male-dominated departments.

Aragón and her co-authors explain why we see these biases against those in the gender minority in upper-level courses but not in lower-level courses by pointing to a societal paradox identified by role congruity theory.

“In the female-dominated domain of the family caregiver, men are evaluated negatively for filling the essential care-giver role of stay-at-home fathers or for taking extended family leave, which signals a primary caretaking position,” they write.

“Yet, men are viewed more positively than are women when they fill supportive roles in female domains, such as reducing work hours to help with the family’s needs or taking shorter leaves from work for supportive or interim caretaking. It seems that those in the gender minority are not penalised for entering gender incongruent domains when they are simply facilitating the more supposedly genuine measures of that domain.”

Shifting gender-based expectations

The second part of the “Gender bias” article reports on an experiment Aragón et al used to see if they could shift students’ gender-based expectations about professors and their ‘fit’.

In the research, 803 students were randomly assigned to departments, the descriptions of which were vague enough so that the students could not make stereotypical assumptions about whether the department was male- or female-dominated.

The students were then shown ‘faculty’ webpages that were manipulated to show male- or female-dominated departments and asked to evaluate the professors.

In the absence of classroom experience with professors, the course evaluation scores were more stratified by gender. For example, female teachers in majority-male departments who teach lower-level courses received course evaluation scores 0.17 points higher than male teachers in the experimental group, while in the archival data the difference was 0.08.

“Our manipulation via a few moments with a faculty webpage,” writes Aragón, “was most likely not powerful enough to override broader gender stereotypes, particularly because the fields of study were not specified. Thus, the gender stereotypes appeared to play a larger part in shaping biases in the experimental than in the archival study” and significantly disadvantaged women.

Some conclusions

Aragón and her co-authors conclude the “Gender bias” article with two arguments.

The first addresses the question of whether, as departments become more balanced in terms of gender, existing stereotypes go by the wayside. While they answer yes, their example, computer programming, points to the paucity of examples of fields where the achievement of gender parity has improved the perception of women.

In the 1960s, computer programming, which involved preparing computer punch cards and, thus, was not seen as being far removed from bookkeeping or secretarial work, was a majority-female job classification and was seen as a supportive role. “Once the field became male dominated” – in the mid-1970s – they write, “the characterisation of the field changed to one of cerebral analysis”.

Secondly, the authors indicate strategies that departments and universities can use until various fields reach gender parity, so that women professors are not systematically disadvantaged by the bias in course evaluation scores.

Among these strategies is one they dub “fake it until you make it”, which would de-emphasise course evaluation scores and emphasise the achievements of both men and women in their departments. To try to neutralise gender expectations and course levels, they propose that “both male and female educators should teach lower- and upper-level courses”.

Finally, they call on tenure and promotion committees to make themselves aware of the bias inherent in course evaluation scores which, their study shows, have more to do with students’ sense of ‘fit’ than with performance in the classroom.

“Promotion and tenure decisions are made,” Aragón told University World News, “on very small differences. If the department average for a certain item on the questionnaire is 4.6 and you have a 4.55, you better believe I have gotten letters from the tenure promotion review committee that say, ‘You really need to get that score up a little bit’.

“That little fraction of a point can make a huge difference. It can decide who gets promoted, who gets tenure and who doesn’t. At present, the bias in these numbers disproportionately negatively affects the trajectory of women educators in colleges and universities.”

Source: Minority status biases evaluation of both women and men professors