Who gets to use NASA’s James Webb Space Telescope? Astronomers work to fight bias

Really neat example of how to reduce bias through blind approval processes:

The scientists who eventually get to peer out at the universe with NASA’s powerful new James Webb Space Telescope will be the lucky ones whose research proposals made it through a highly competitive selection process.

But those who didn’t make the cut this time can at least know they got a fair shot, thanks to lessons learned from another famous NASA observatory.

Webb’s selection process was carefully designed to reduce the effect of unconscious biases or prejudices by forcing decision-makers to focus on the scientific merit of a proposal rather than on who submitted it.

“They assess every one of those proposals. They read them. They don’t know who wrote them,” explains Heidi Hammel, an interdisciplinary scientist with the James Webb Space Telescope. “These proposals are evaluated in a dual-anonymous way, so that all you can see is the science.”

This is a recent innovation in doling out observing time on space telescopes. And it’s a change that only came about after years of hard work done by astronomers who were concerned that not everyone who wanted to use the Hubble Space Telescope was getting equal consideration.

A bias emerges in who wins telescope time

One of their first clues came when Iain Neill Reid went looking for signs of any possible gender bias in the acceptance rate for Hubble proposals. He’s the associate director of science at the Space Telescope Science Institute, the science operations center for both Hubble and now Webb.

His results, published in 2014, were startling. Proposals that were led by women had a lower acceptance rate than proposals led by men. This discrepancy remained constant for more than a dozen years, the entire period of time he analyzed.

“I was surprised at how consistent it was,” says Reid. “There was a systematic effect.”

To try to fix this, he and his colleagues eventually developed the “blinded” proposal review process that’s now being used for Hubble, Webb, and NASA’s other major space telescopes. So far, the evidence suggests that this is working to level the playing field — even though the measure was initially opposed by much of the astronomy community.

Since any telescope in space is a rare, precious resource, NASA wants to devote its time to the most promising science. Anyone in the world can submit a proposal for where to point a space telescope, and there’s so much demand that the majority of ideas have to be rejected.

Even before the James Webb Space Telescope was launched, for example, the first call for proposals drew 1,173 ideas that would require 24,500 hours of prime observing time. But only 6,000 hours were available.

“It was a cutthroat competition. We rejected three-quarters of all the submitted proposals, and we’re taking the top-ranked quarter,” says Jane Rigby, an astrophysicist with NASA’s Goddard Space Flight Center who serves as the operations project scientist for the new telescope.
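The figures above imply demand roughly four times the available time, consistent with accepting only about the top quarter of proposals. A quick sanity check using only the numbers quoted in the article:

```python
# Back-of-the-envelope check of the Webb Cycle 1 figures quoted above.
requested_hours = 24_500   # prime observing time requested across 1,173 proposals
available_hours = 6_000    # prime observing time actually available

oversubscription = requested_hours / available_hours   # demand vs. supply
grantable_fraction = available_hours / requested_hours # share of requested time that fits

print(f"oversubscription: {oversubscription:.1f}x")    # ~4.1x
print(f"grantable fraction: {grantable_fraction:.0%}") # ~24%, i.e. roughly a quarter
```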

And even though Hubble launched more than 30 years ago, astronomers still clamor to use it. Every year they submit 1,000 or more proposals.

“Only the top 20% of those proposals will actually make it through to the telescope to get time,” says Reid.

Focusing on the science, not the scientists

After his study showing a gender discrepancy in acceptance rates for Hubble proposals, Reid and his colleagues tried different solutions. First, instead of having the lead scientist’s name on the front page of a proposal, they tried putting it on the second page. Then, they tried just using initials. Nothing worked.

“Then we got sensible and we said, ‘Let’s actually talk to some experts in social sciences,’ because they can understand this better than we do,” says Reid.

They reached out to Stefanie Johnson of the University of Colorado and her then-student, Jessica Kirk, now at the University of Memphis. The pair sat in on the meetings that evaluated and ranked proposals. And they noticed that a lot of the time, the discussion centered on who had submitted the proposal, rather than on scientific considerations.

“There might be a question about it, like, ‘Oh, you know, this seems really good but can they actually do this?'” recalls Johnson. “A lot of times, there’s someone who will speak up in the room and say, ‘I know this person … they will figure it out, because that’s who they are.'”

“There is this evaluation not just of the science and the research, but of the researchers,” adds Kirk.

This meant that astronomers who were already established and well known got an extra leg up.

“They were getting a pass,” says Reid. “They had a lower bar, in some ways, to overcome, than the scientists who were coming into the field completely fresh with no track record.”

Johnson and her colleagues recommended making the review process completely blinded and anonymous. Not only would the evaluation committees not see any names; all proposals would also have to be written in a way that made it impossible to tell who they were from.
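The mechanics of such a dual-anonymous pipeline can be sketched in a few lines of code. Everything below is hypothetical illustration (the field names, the sample proposals, the use of random IDs), not the institute’s actual software: reviewers see only the science, identities are held back under an opaque key, and names are re-attached only after ranking.

```python
import uuid

def anonymize(proposals):
    """Split each proposal into a review copy (science only) and a key
    mapping opaque IDs back to the submitting team."""
    review_copies, identity_key = [], {}
    for p in proposals:
        pid = uuid.uuid4().hex
        # Reviewers see only the science case; names and affiliations are withheld.
        review_copies.append({"id": pid, "science_case": p["science_case"]})
        identity_key[pid] = {"pi": p["pi"], "institution": p["institution"]}
    return review_copies, identity_key

def deanonymize(ranked_ids, identity_key):
    """After ranking is final, re-attach identities to the ranked list."""
    return [identity_key[pid] for pid in ranked_ids]

# Hypothetical sample data, for illustration only.
proposals = [
    {"pi": "A. Researcher", "institution": "Univ. X",
     "science_case": "Spectroscopy of exoplanet atmospheres"},
    {"pi": "B. Scientist", "institution": "Univ. Y",
     "science_case": "Deep imaging of high-redshift galaxies"},
]
copies, key = anonymize(proposals)
assert all("pi" not in c for c in copies)  # nothing identifying reaches reviewers
```

The real process goes further: proposals must also be *written* so that self-citations and project names don’t give the team away, which no amount of metadata stripping can enforce automatically.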

Some doubted this new system would work

The institute surveyed the astronomy community to see what it thought of this potential change.

“You can imagine, the knee-jerk reaction was actually pretty polar,” says Lou Strolger, deputy head of the instruments division at the Space Telescope Science Institute and chair of its working group on anonymous proposing.

He says about half of those who responded favored the idea — and those tended to be women or people who were relatively young.

“They thought that this would be a good way to make it not only more fair but to encourage new people to participate,” he says.

But lots of astronomers had objections.

“They ranged from ‘This will totally upset how good science is done’ to ‘You’ll basically fool yourself into giving time to people who don’t know what they are doing’ — all sorts of things,” recalls Strolger.

Still, the institute’s director gave the go-ahead, and they plowed forward. In 2018, astronomers did their first truly anonymous review for Hubble proposals. Priya Natarajan, a theoretical astrophysicist at Yale University, was there and chaired the process. She says occasionally someone would try to guess who had submitted a proposal.

“But the buy-in from the community was so tremendous,” she says, “that there would be other people on the panels who would say, ‘Oh no, no, come on, let’s stick to the science.'”

“I was stunned”

And sticking to the science had a real impact. That year, for the first time ever, the acceptance rate for proposals led by women was higher than the acceptance rate for proposals led by men. The gender difference had flipped.

“I was stunned,” says Natarajan. “There was an effect right away.”

And when members of the selection committees were finally allowed to see who had submitted a proposal that they had just deemed worthy of telescope time, Strolger says that they never objected that the person wasn’t up to the job, although they were often surprised.

“There were a lot of, ‘Oh, that was not at all who I thought it was’ kind of reactions,” says Strolger.

Data from the last few years suggests that this process continues to help close the gap between men and women in acceptance rates for Hubble proposals, and it may have improved fairness in other ways, too.

There’s been a dramatic rise in approvals for astronomers who have never used Hubble before, says Strolger. “It went from something like a dozen per year, to 50 per year.”

What’s more, data from the first round of proposals for Webb shows hints of similar results, with “a much closer gap in male and female acceptance rates,” says Strolger.

“This seems to be working, and it seems to be working as we anticipated it would.”

What other biases could affect telescope users?

Still, anonymizing everything doesn’t solve all the problems in making sure everyone has equal access, says Johnson, who notes that unconscious bias can affect who in astronomy gets advantages like mentors and job opportunities.

“It’s not perfect. It doesn’t wipe out systemic bias, and I don’t know of the impact that the dual-anonymization has in terms of creating greater racial equity,” she says. “But it did seem to lift some of the gender bias.”

Trying to track equity issues is complicated by the fact that the Space Telescope Science Institute has historically not gathered demographic information about those who submit research proposals.

“Partly by policy and partly by federal law, we’re not permitted to collect that information,” explains Strolger.

That’s why, when Reid did his initial study looking at gender and Hubble, the best he could do was to make assumptions about gender based on the lead scientist’s name or his knowledge of people in the field.

The researchers are now looking for ways to learn more about submitters, perhaps by allowing people to voluntarily or anonymously submit information about themselves to a third party.

“We hope that by providing ways in which we can get access to more demographic data,” says Strolger, “we can begin to see where other biases may lie.”

Source: Who gets to use NASA’s James Webb Space Telescope? Astronomers work to fight bias

How Talking To People Can Reduce Prejudice

Interesting example of how face-to-face conversations that help people understand the other’s experiences, and identify some commonalities, can make a difference:

After the dust settled [from a previously falsified study], Broockman and Kalla went on with their experiment on transgender prejudices. LaCour’s misconduct only made them more determined to do the study for real. “There were all these volunteers who gave their Saturdays [to do the experiment],” Broockman says. “We had a certain sense of responsibility.”

They sent out surveys to thousands of homes in Miami, asking people to answer questions that included how they felt about transgender people and if they would support legal protection against discrimination for transgender people. Then volunteers from SAVE, an LGBT advocacy organization based in Florida, visited half of the 501 people who responded and canvassed them about an unrelated topic, recycling. Volunteers went to the other half and started the conversations that Fleischer thinks can help change minds.

After the canvass, the study participants answered the same questions about transgender people that they had answered before the study, including how positively or negatively they felt towards transgender people on a scale of 0 to 100. Those who had discussed prejudice they’d experienced felt about 10 points more positively toward transgender people, on average.

Broockman says that public opinion about gay people has improved by 8.5 points between 1998 and 2012. “So it’s about 15 years of progress that we’ve experienced in 10 minutes at the door,” he says.

Three months after the canvass, Broockman asked participants to fill out the survey again. They still felt more positively about transgender people than those who had gotten the unrelated canvass. “[That’s] the moment I backed away from my monitor and said, ‘Wow, something’s really unique here,’ ” he says. If the effect persists, Broockman says, the technique could be used to reduce prejudice across society.

That doesn’t mean everybody came away feeling more positive about transgender rights. Kalla says some people came away from the canvass feeling very different, while others hardly changed at all. And an uptick of 10 points on a feeling scale of 0 to 100 doesn’t sound like an epiphany. There wasn’t, however, any indication that those who started out with very negative feelings about transgender people were particularly resistant to the conversation. Broockman and Kalla published the results in Science on Thursday.

It is a landmark study, according to Elizabeth Paluck, a psychologist at Princeton University who was not involved with the work. “They were very transparent about all the statistics,” she says. “It was a really ingenious test of the change. If the change was at all fragile, we should have seen people change their minds back [after three months].” There are very few tests of prejudice reduction methods, and Paluck says this suggests the Los Angeles LGBT Center’s approach is actually far more effective than previous efforts, like TV ads.

There might be a couple of reasons for that. Broockman, now an assistant professor of political economy at Stanford University, says asking someone questions face-to-face like, “What are the reasons you wouldn’t support protections for transgender people, or what does this make you think about?” gets them to begin thinking hard about the issue. “Burning the mental calories to do effortful thinking about it, that leaves a lasting imprint on your attitudes,” he says.

Empathy may also be a factor. “Canvassers asked people to talk about a time they were treated differently. Most people have been judged because of gender, race or some other issue. For many voters, they reflect on it and they realize that’s a terrible feeling they don’t want anyone to have,” Broockman says.

The study’s conclusions differ from those of LaCour’s falsified study from 2014 in one crucial way, Broockman says. LaCour claimed that the deep canvass only had an effect if it came from someone who was LGBT. “We found non-trans allies had a lasting effect as well,” Broockman says. That means canvassing is much more about conversational skill than identity.

It will take more studies and replications of this study before scientists know exactly what is influencing people’s opinions. But for now, the findings are a relief to David Fleischer. “To go into it with high hopes and then get this really bad piece of news, then to go forward anyway and have the accurate results? What a roller coaster of emotions,” he says.

The technique might be used to target any societal prejudice — or be used to increase prejudice, Broockman acknowledges. But even if that happens, he says, it at least will encourage people to think deeply about the issues they’re going to vote on.

Source: How Talking To People Can Reduce Prejudice : Shots – Health News : NPR

ICYMI: The False Promise of Meritocracy – The Atlantic

A note of caution for those who believe that hiring and other systems are based on objective meritocracy, without any influence of bias and prejudice, and a reminder of the need to be more mindful and aware of these biases:

Americans are, compared with populations of other countries, particularly enthusiastic about the idea of meritocracy, a system that rewards merit (ability + effort) with success. Americans are more likely to believe that people are rewarded for their intelligence and skills and are less likely to believe that family wealth plays a key role in getting ahead. And Americans’ support for meritocratic principles has remained stable over the last two decades despite growing economic inequality, recessions, and the fact that there is less mobility in the United States than in most other industrialized countries.

This strong commitment to meritocratic ideals can lead to suspicion of efforts that aim to support particular demographic groups. For example, initiatives designed to recruit or provide development opportunities to under-represented groups often come under attack as “reverse discrimination.” Some companies even justify not having diversity policies by highlighting their commitment to meritocracy. If a company evaluates people on their skills, abilities, and merit, without consideration of their gender, race, sexuality, etc., and managers are objective in their assessments, then there is no need for diversity policies, the thinking goes.

But is this true? Do commitments to meritocracy and objectivity lead to fairer workplaces? Emilio J. Castilla, a professor at MIT’s Sloan School of Management, has explored how meritocratic ideals and HR practices like pay-for-performance play out in organizations, and he’s come to some unexpected conclusions. In one company study, Castilla examined almost 9,000 employees who worked as support staff at a large service-sector company. The company was committed to diversity and had implemented a merit-driven compensation system intended to reward high-level performance and to reward all employees equitably.

But Castilla’s analysis revealed some very non-meritocratic outcomes. Women, ethnic minorities, and non-U.S.-born employees received smaller increases in compensation than white men, despite holding the same jobs, working in the same units, having the same supervisors and the same human capital, and, importantly, receiving the same performance scores. Although the company stated that “performance is the primary basis for all salary increases,” the reality was that women, minorities, and those born outside the U.S. needed “to work harder and obtain higher performance scores in order to receive similar salary increases to white men.”

These findings led Castilla to wonder whether organizational cultures and practices designed to promote meritocracy actually accomplished the opposite. Could it be that the pursuit of meritocracy somehow triggered bias? With his colleague Stephen Benard, a sociology professor at Indiana University, he designed a series of lab experiments to find out. Each experiment had the same outcome: when a company’s core values emphasized meritocracy, those in managerial positions awarded a larger monetary reward to a male employee than to an equally performing female employee. Castilla and Benard termed their counterintuitive result “the paradox of meritocracy.”

The paradox of meritocracy builds on other research showing that those who think they are the most objective can actually exhibit the most bias in their evaluations. When people think they are objective and unbiased then they don’t monitor and scrutinize their own behavior. They just assume that they are right and that their assessments are accurate. Yet, studies repeatedly show that stereotypes of all kinds (gender, ethnicity, age, disability etc.) are filters through which we evaluate others, often in ways that advantage dominant groups and disadvantage lower-status groups. For example, studies repeatedly find that the resumes of whites and men are evaluated more positively than are the identical resumes of minorities and women.

This dynamic is precisely why meritocracy can exacerbate inequality—because being committed to meritocratic principles makes people think that they actually are making correct evaluations and behaving fairly. Organizations that emphasize meritocratic ideals serve to reinforce an employee’s belief that they are impartial, which creates the exact conditions under which implicit and explicit biases are unleashed.

“The pursuit of meritocracy is more difficult than it appears,” Castilla said at a recent conference hosted by the Clayman Institute for Gender Research at Stanford, “but that doesn’t mean the pursuit is futile. My research provides a cautionary lesson that practices implemented to increase fairness and equity need to be carefully thought through so that potential opportunities for bias are addressed.” While companies may want to hire and promote the best and brightest, it’s easier said than done.

Source: The False Promise of Meritocracy – The Atlantic

Is ‘they all look alike to me’ pure racism or is there a scientific reason for mistaken identity?

Another aspect of how our brains work and the implications in terms of how we see others:

Scientists, pointing to decades of research, believe something else was at work. They call it the “other-race effect,” a cognitive phenomenon that makes it harder for people of one race to readily recognize or identify individuals of another.

It is not bias or bigotry, the researchers say, that makes it difficult for people to distinguish between people of another race. It is the lack of early and meaningful exposure to other groups that often makes it easier for us to quickly identify and remember people of our own ethnicity or race while we often struggle to do the same for others.

That racially loaded phrase “they all look alike to me,” turns out to be largely scientifically accurate, according to Roy S. Malpass, a professor emeritus of psychology at the University of Texas at El Paso who has studied the subject since the 1960s. “It has a lot of validity,” he said.

Looking for examples? There is no shortage — in the workplace, at schools and universities, and, of course, on the public stage.

Lucy Liu, the actress, has been mistaken for Lisa Ling, the journalist. “It’s like saying Hillary Clinton looks like Janet Reno,” Liu told USA Today.

Samuel L. Jackson, the actor, took umbrage last year when an entertainment reporter confused him with the actor Laurence Fishburne during a live television interview.

“Really? Really?” Jackson said, chiding the interviewer. “There’s more than one black guy doing a commercial. I’m the ‘What’s in your wallet?’ black guy. He’s the car black guy. Morgan Freeman is the other credit card black guy.”

And as a Washington correspondent, I managed a strained smile every time white officials and others remarked on my striking resemblance to Condoleezza Rice, then the secretary of state in the Bush administration. (No, we do not look alike.)

Psychologists say that starting when they are infants and young children, people become attuned to the key facial features and characteristics of those around them. Whites often become accustomed to focusing on differences in hair color and eye color. African-Americans grow more familiar with subtle shadings of skin color.

“It’s a product of our perceptual experience,” said Christian A. Meissner, a professor of psychology at Iowa State University, “the extent to which we spend time with, the extent to which we have close friends of another race or ethnicity.”

(Minorities tend to be better at cross-race identification than whites, Meissner said, in part because they have more extensive and meaningful exposure to whites than the other way around.)

Distinguishing between two people of a race different from your own is certainly not impossible, cognitive experts say, but it can be difficult, even for those who are keenly aware of their limitations.

Alice O’Toole, a face-recognition expert and professor of behavioral and brain sciences at the University of Texas at Dallas, admits that she often confuses two of her Chinese graduate students, despite her expertise.

“It’s embarrassing, really embarrassing,” O’Toole, director of the university’s Face Perception Research Lab, said. “I think almost everyone has experienced it.”

But as Blake’s case has demonstrated, the other-race effect can have serious consequences, particularly in policing and the criminal justice system. …


Malpass, who has trained police officers and border patrol agents, urges law enforcement agencies to make sure black or Hispanic officers are involved when creating lineups of black and Hispanic suspects. And he warns of the dangers of relying on cross-racial identifications from eyewitnesses, who can be fallible.

The good news is that we can improve our cross-racial perceptions, researchers say, particularly if there is a strong need to do so. A white woman relocating to Accra, Ghana, for instance, would heighten her ability to distinguish between black faces, just as a black man living in Shanghai would enhance his ability to recognize Asians. (Malpass believes that people who need to identify those of other races — in the workplace or elsewhere — are more likely to be successful than people who simply have meaningful experiences with members of other racial groups.)

Source: Is ‘they all look alike to me’ pure racism or is there a scientific reason for mistaken identity?

Doctors Struggle With Unconscious Bias, Same As Police

Not surprising but some good examples of how these can play out:

Even as health overall has improved in the U.S., the disparities in treatment and outcomes between white patients, and black and Latino patients, are almost as big as they were 50 years ago. A growing body of research suggests that doctors’ unconscious behaviors play a role in these statistics, and the Institute of Medicine has called for more studies looking at discrimination and prejudice.

One study found that doctors were far less likely to refer black women for advanced cardiac care than white men with identical symptoms. Other studies show that African Americans and Latino patients are often prescribed less pain medication than white patients with the same complaints.

“We know that doctors spend more time with white patients than with patients of color,” says Howard Ross, founder of management consulting firm Cook Ross.

He’s developed a new diversity training curriculum for health care professionals that focuses on the role of unconscious bias in these scenarios.

Doctors and nurses don’t mean to treat people differently, Ross says. But, just like police, they harbor stereotypes that they’re not aware they have. Everybody does.

“This is normal human behavior,” Ross says. “We can no more stop having bias than we can stop breathing.”

Unconscious biases often surface when we’re multitasking or when we’re stressed. They come up in tense situations where we don’t have time to think. Like police on the street at night who have to decide quickly if a person is reaching for a wallet, or a gun. It’s similar for doctors in the hospital.

“You’re dealing with people who are frightened, they’re reactive,” Ross says. “If you’re doing triage in the emergency room, for example, you don’t have time to sit back and contemplate, ‘Why am I thinking about this?’ You have to instantaneously react.”

Doctors are trained to think fast, and to be confident in their decisions.

“There’s almost a trained arrogance,” Ross says.

This leads to treatments prescribed based on snap judgments, which can reveal internalized stereotypes. A doctor sees one black patient who doesn’t take his medication, perhaps because he can’t afford it. Without realizing it, the doctor starts to assume that all black patients aren’t going to follow instructions.

Source: Doctors Struggle With Unconscious Bias, Same As Police | State of Health | KQED News

When Algorithms Discriminate – The New York Times

Given that people have biases, not surprising that the algorithms created reflect some of these biases:

Algorithms, which are a series of instructions written by programmers, are often described as a black box; it is hard to know why websites produce certain results. Often, algorithms and online results simply reflect people’s attitudes and behavior. Machine learning algorithms learn and evolve based on what people do online. The autocomplete feature on Google and Bing is an example. A recent Google search for “Are transgender,” for instance, suggested, “Are transgenders going to hell.”

“Even if they are not designed with the intent of discriminating against those groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination,” said David Oppenheimer, who teaches discrimination law at the University of California, Berkeley.

But there are laws that prohibit discrimination against certain groups, despite any biases people might have. Take the example of Google ads for high-paying jobs showing up for men and not women. Targeting ads is legal. Discriminating on the basis of gender is not.

The Carnegie Mellon researchers who did that study built a tool to simulate Google users that started with no search history and then visited employment websites. Later, on a third-party news site, Google showed an ad for a career coaching service advertising “$200k+” executive positions 1,852 times to men and 318 times to women.

The reason for the difference is unclear. It could have been that the advertiser requested that the ads be targeted toward men, or that the algorithm determined that men were more likely to click on the ads.
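Whatever the mechanism, the reported counts make clear the split wasn’t chance. A minimal sketch of the kind of check such an audit relies on, using only the impression counts quoted above and assuming (as the matched-agent design intended) that the simulated male and female profiles were equal in number and otherwise identical:

```python
import math

# Ad impressions reported in the Carnegie Mellon audit described above.
men, women = 1852, 318
n = men + women
p_hat = men / n  # observed share of impressions shown to male profiles

# If the ad were shown without regard to gender, each impression would be
# equally likely to land on a male or female profile: p = 0.5.
se = math.sqrt(0.25 / n)   # standard error of the share under that null
z = (p_hat - 0.5) / se     # z statistic; here it lands around 33

print(f"share shown to men: {p_hat:.1%}, z = {z:.1f}")
```

A z statistic that large is astronomically far beyond any conventional significance threshold, so the test only confirms targeting occurred; as the article notes, it cannot say whether the advertiser or the algorithm caused it.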

Google declined to say how the ad showed up, but said in a statement, “Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed.”

Anupam Datta, one of the researchers, said, “Given the big gender pay gap we’ve had between males and females, this type of targeting helps to perpetuate it.”

It would be impossible for humans to oversee every decision an algorithm makes. But companies can regularly run simulations to test the results of their algorithms. Mr. Datta suggested that algorithms “be designed from scratch to be aware of values and not discriminate.”

“The question of determining which kinds of biases we don’t want to tolerate is a policy one,” said Deirdre Mulligan, who studies these issues at the University of California, Berkeley School of Information. “It requires a lot of care and thinking about the ways we compose these technical systems.”

Silicon Valley, however, is known for pushing out new products without necessarily considering the societal or ethical implications. “There’s a huge rush to innovate,” Ms. Mulligan said, “a desire to release early and often — and then do cleanup.”

Source: When Algorithms Discriminate – The New York Times

The Science of Why Cops Shoot Young Black Men

Good in-depth article on the psychology and neurology of subconscious bias and how it is part of our automatic thinking and sorting:

Science offers an explanation for this paradox—albeit a very uncomfortable one. An impressive body of psychological research suggests that the men who killed Brown and Martin need not have been conscious, overt racists to do what they did (though they may have been). The same goes for the crowds that flock to support the shooter each time these tragedies become public, or the birthers whose racially tinged conspiracy theories paint President Obama as a usurper. These people who voice mind-boggling opinions while swearing they’re not racist at all—they make sense to science, because the paradigm for understanding prejudice has evolved. There “doesn’t need to be intent, doesn’t need to be desire; there could even be desire in the opposite direction,” explains University of Virginia psychologist Brian Nosek, a prominent IAT researcher. “But biased results can still occur.”

The IAT is the most famous demonstration of this reality, but it’s just one of many similar tools. Through them, psychologists have chased prejudice back to its lair—the human brain.

We’re not born with racial prejudices. We may never even have been “taught” them. Rather, explains Nosek, prejudice draws on “many of the same tools that help our minds figure out what’s good and what’s bad.” In evolutionary terms, it’s efficient to quickly classify a grizzly bear as “dangerous.” The trouble comes when the brain uses similar processes to form negative views about groups of people.

But here’s the good news: Research suggests that once we understand the psychological pathways that lead to prejudice, we just might be able to train our brains to go in the opposite direction.

Source: The Science of Why Cops Shoot Young Black Men | Mother Jones

And yes, I did take the Implicit Association Test (also available at UnderstandingPrejudice.org) and scored just as miserably as Chris Mooney, the author of this article. Very sobering, and I encourage all to take it.

Can You Overcome Inbuilt Bias?

Interesting psych experiment, showing that appealing to higher motives is less effective than more targeted task training at reducing implicit biases:

Interestingly, most of the successful interventions were explicit about what they were trying to achieve and why. It’s important to remove the taboos around workplace discrimination and to educate people that bias is natural – what matters is that it doesn’t influence behavior. But worryingly, the majority of the successful interventions associated black people with positive attributes and white people with negative attributes, reversing the natural direction of the white participants’ bias. Clearly, reducing workplace bias by encouraging negativity towards a different group is not a solution.

The results of this comparison also raise an interesting question about the means of change and the outcome it achieves. Interventions which appealed to participants’ moral, conscious beliefs didn’t work, while those which targeted specific task behaviors – e.g. responding faster when black was paired with good – did. Some may argue that these interventions addressed the symptoms and not the cause. But in the workplace, when the ‘symptoms’ of implicit bias include unconsciously excluding and ostracizing others, addressing these behaviors may be a more effective use of time and resources than trying and failing to change the underlying beliefs which cause them.

It’s a tricky, emotive subject, but as more organizations wake up to the damaging consequences of implicit bias in terms of workforce engagement and performance, we can only hope for more research to shed light on how best to overcome it.

Source: Can You Overcome Inbuilt Bias?

Message to Richard Dawkins: ‘Islam is not a race’ is a cop out | Nesrine Malik

A reminder that prejudice is still prejudice, no matter how framed or cloaked.

Source: Message to Richard Dawkins: ‘Islam is not a race’ is a cop out | Nesrine Malik | Comment is free | theguardian.com