How should we think about implicit biases? 

A good discussion of the strengths and limits of implicit-bias tests, and of how they should be used:

A couple of years ago, during Merrick Garland’s confirmation hearing to become attorney-general of the United States, one of the senators questioned him about implicit bias: “Does it mean that I’m a racist … but I don’t know I’m a racist?” he asked. Mr. Garland responded that, no, everyone has biases, and this doesn’t make you a racist.

This is a reasonable answer, but others would give a different one. Some people think research on implicit bias shows that, yes, in the words of the famous Avenue Q song: “Everyone’s a little bit racist.” The conclusion that everyone-is-racist (or at least every-majority-group-member-is-racist) is part of the public conversation, taught in schools, and pressed upon employees during diversity training.

Which side is right? Well, it’s complicated. We need to think about what these tests are really measuring.

The most famous implicit bias test is the Implicit Association Test, or IAT, developed by the psychologists Anthony Greenwald and Mahzarin Banaji. To get a sense of it, I encourage you to go online and try it out yourself. Here’s how it goes, taking as an example a test developed to explore implicit attitudes toward the young and the elderly: Subjects watch a screen as either words or pictures flash by. The pictures are of either old faces or young faces, and the words are either positive (like “pleasant”) or negative (like “poison”). Then, for one set of trials, subjects are asked to press one key for either a young face or a positive word and another key for either an old face or a negative word. For another set of trials, it’s reversed: one key for a young face or a negative word, and another key for an old face or a positive word.

The logic here is that if you have a positive association with youth and a negative one with the elderly, then your performance on young-positive/old-negative trials will be quicker than on young-negative/old-positive trials. And, in fact, people do find it more natural to associate young with positive and old with negative than the other way around.
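To make the scoring logic concrete, here is a minimal sketch in Python, using invented reaction times. The real IAT uses a more elaborate scoring algorithm, so treat this as an illustration of the idea rather than the actual measure:

```python
# A minimal sketch (not the official IAT algorithm) of how reaction times
# from "congruent" and "incongruent" blocks can be turned into a bias score.
# The latencies below are invented for illustration.
from statistics import mean, stdev

congruent_ms = [620, 655, 700, 640, 610, 690]    # young+positive / old+negative trials
incongruent_ms = [780, 820, 760, 850, 790, 805]  # young+negative / old+positive trials

# A D-score-like measure: the mean latency difference, scaled by the
# pooled standard deviation of all trials. Positive values mean faster
# responses on congruent trials, i.e. a young-positive association.
pooled_sd = stdev(congruent_ms + incongruent_ms)
d_score = (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

print(f"D-score-like measure: {d_score:.2f}")
```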

Such studies have been done with millions of people and have found the same pattern of negative associations when tested on attitudes toward gay, overweight and disabled people, and, most relevant to the question of racism, Black people. These effects are present even when questions about explicit attitudes find no bias and are often present even in subjects who belong to the group that is less favoured. People who take this test are often shocked, and their takeaway is often something like “I’m racist against Black people and never knew it.”

There is a lot of value to this work. It’s worth knowing that someone might not want to be biased, might wish to treat people equally, but nonetheless be influenced by psychological forces that are beyond their control.

But do measures like the IAT tap racism in a real sense? Here are three big qualifications to keep in mind:

First, these methods get a lot of play in the popular media, where they are often portrayed as racism-detectors. The worst example I ever saw was on the television show Lie to Me, where a crack team of investigators uses a muddled version of the IAT to determine which of a group of firefighters has committed a hate crime. They find that one firefighter is slower than the rest to associate positive words such as “principled” with Black faces such as Barack Obama’s, and this settles it. “I’m not a racist,” he later protests. His interrogator snaps back: “You don’t think you are.”

In fact, the test is too unreliable to be used this way. Your score on the same test taken at different times can vary, and so the same person might prefer white people when tested on Monday and show no bias when tested on Tuesday. If you take the test and don’t like the result, just take it again.

Second, it’s unclear whether your score on the IAT can predict your actual behaviour. One meta-analysis finds that a person’s IAT score provides very little insight into how they act toward people of other races. This is no surprise given the problem above – if your IAT score bounces around depending on when you take the test, how can it do a good job of predicting your behaviour in the real world?
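There is a standard psychometric way to make this point: a measure’s correlation with any outcome is capped by the square root of its reliability, so an unstable score cannot be a strong predictor. A small sketch, using an invented reliability figure:

```python
# A standard psychometric point: an unreliable measure cannot correlate
# strongly with anything. If a test's test-retest reliability is r_xx,
# its correlation with any criterion is capped at sqrt(r_xx).
# The reliability value below is illustrative, not a measured figure.
import math

test_retest_reliability = 0.5  # hypothetical r_xx for an implicit measure
max_observable_correlation = math.sqrt(test_retest_reliability)

print(f"Ceiling on correlation with behaviour: {max_observable_correlation:.2f}")
# The meta-analytic correlations described above sit well below even this ceiling.
```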

Third, these biases might be unconscious in the sense that we don’t know how or when we are influenced by them, but it’s not as though people are unaware such biases exist. When I list certain groups – Black people, the overweight and so on – nobody is surprised to hear that people (perhaps not themselves, but people in general) harbour biases against them.

So how should we think about implicit biases? One theory is that they might have nothing to do with negative attitudes toward a group – something that many people see as constitutive of racism. Instead, as the psychologists Keith Payne and Jason Hannay argue, measures such as the IAT tap our appreciation of regularities in the environment, including regularities in how people think about other people. In other words, tests like the IAT don’t measure attitudes, let alone bad attitudes – they pick up associations.

Such associations are everywhere: Given the environment I was raised in, I associate peanut butter with jelly, Ringo with George, O Canada with hockey games. I also associate airplane pilots with men and nurses with women. And I associate some groups, such as the young, with mostly good things and other groups, like the elderly, with mostly bad things. If my world were different, I would have different associations. Dr. Payne and Dr. Hannay conclude that we should think of implicit racial biases as “the natural outcome of a mind that generates associations based on statistical regularities, whenever that mind is immersed in an environment of systemic racism.”
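A toy sketch may help here (the “environment” below is invented): a learner that does nothing but tally co-occurrences ends up with exactly these sorts of associations, without holding any attitude at all.

```python
# A toy illustration of the Payne-Hannay idea: an association learner that
# simply tallies co-occurrences in its "environment". The input pairs are
# invented; the point is that the learner's associations mirror whatever
# regularities the environment contains, with no attitudes involved.
from collections import Counter

environment = [
    ("peanut butter", "jelly"), ("peanut butter", "jelly"),
    ("pilot", "man"), ("pilot", "man"), ("pilot", "woman"),
    ("nurse", "woman"), ("nurse", "woman"), ("nurse", "man"),
]

counts = Counter(environment)
category_totals = Counter(category for category, _ in environment)

# Association strength: how often an attribute co-occurs with a category,
# as a proportion of all exposures to that category.
for (category, attribute), n in sorted(counts.items()):
    print(f"{category} -> {attribute}: {n / category_totals[category]:.2f}")
```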

Regardless of whether we see this recording of statistical generalizations as racism, we are left with a problem: the tension between how we believe we should act and how we actually act. The first arises through reflection and is our considered view of how we should treat people. The second is influenced by all sorts of forces, including all the associations, explicit and implicit, that we carry around in our heads.

For some people, there is no clash at all. Consider certain findings about bias, such as that bidders on eBay tend to offer less money for a baseball card held by a Black hand than by a white one, or that judges are more likely to award a scholarship to a student who belongs to their own political party. Some people, learning that they are biased in this way, will shrug and say it’s fine. It’s okay to discriminate. But some of us are at war with ourselves. We don’t want to be swayed by our associations and stereotypes. We want to be fair, and we see this as requiring us to treat people as individuals and ignore the categories they fall into.

You might think that the solution here is to try hard to be unbiased. Perhaps learning about and thinking about implicit biases can help us override them, just through force of will. Unfortunately, the evidence suggests otherwise. We are good at self-justification. We make choices that are shaped by prejudice and bias and convince ourselves that we were being fair and impartial.

My own view is that we do better when we construct procedures that override the biases we don’t want to have. If you’re choosing whom to hire and don’t think that race should matter, set up the situation in such a way that you don’t have this information about the people you are judging. This is the logic of procedures such as blind auditions. Or, from a different moral viewpoint, set up diversity requirements that explicitly take into account factors such as race so as to override the prejudices you’re trying to overcome. These are different solutions – and people have strong views about which is preferable – but the impetus is the same: to engineer processes to eradicate bias where we think that bias is wrong.

This is how moral progress happens more generally. We don’t typically become better merely through good intentions and force of will, just as we don’t usually lose weight or give up smoking just by wanting to and trying hard. But we are smart critters, and we can use our intelligence to manage our information and constrain our options, allowing our better selves to overcome those gut feelings and associations that we believe we would be better off without.

Paul Bloom is professor of psychology at the University of Toronto, and the Brooks and Suzanne Ragen Professor Emeritus of Psychology at Yale University. His latest book is Psych: The Story of the Human Mind.

Source: How should we think about implicit biases?

