Here’s the Conversation We Really Need to Have About Bias at Google

Ongoing issue of bias in algorithms:

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is real potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for “black girls,” to offer “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” and to return pictures of black people for searches for “gorilla.”

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School for Communication and Journalism.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

So, let’s have it.

Google’s most important decisions are secret

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store. It is the world’s dominant internet advertising company, and through that business, it also shapes the market for digital news.

Google’s power alone is not damning. The important question is how it manages that power, and what checks we have on it. That’s where critics say it falls down.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there were even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Why did it do that? Google’s explanation was not at all comforting: The phrase “taught himself calculus” is a lot more popular online than “taught herself calculus,” so Google’s computers assumed that it was correct. In other words, a longstanding structural bias in society was replicated on the web, which was reflected in Google’s algorithm, which then hung out live online for who knows how long, unknown to anyone at Google, subtly undermining every female English major who wanted to teach herself calculus.

Eventually, this error was fixed. But how many other such errors are hidden in Google? We have no idea.

Google says it understands these worries, and often addresses them. In 2016, some people noticed that it listed a Holocaust-denial site as a top result for the search “Did the Holocaust happen?” That started a large effort at the company to address hate speech and misinformation online. The effort, Dr. Nayak said, shows that “when we see real-world biases making results worse than they should be, we try to get to the heart of the problem.”

Google has escaped recent scrutiny

Yet it is not just these unintended biases that we should be worried about. Researchers point to other issues: Google’s algorithms favor recency and activity, which is why they are so often vulnerable to being manipulated in favor of misinformation and rumor in the aftermath of major news events. (Google says it is working on addressing misinformation.)

Some of Google’s rivals charge that the company favors its own properties in its search results over those of third-party sites — for instance, how it highlights Google’s local reviews instead of Yelp’s in response to local search queries.

Regulators in Europe have already fined Google for this sort of search bias. In 2012, the F.T.C.’s antitrust investigators found credible evidence of unfair search practices at Google. The F.T.C.’s commissioners, however, voted unanimously against bringing charges. Google denies any wrongdoing.

The danger for Google is that Mr. Trump’s charges, however misinformed, create an opening to discuss these legitimate issues.

On Thursday, Senator Orrin Hatch, Republican of Utah, called for the F.T.C. to reopen its Google investigation. There is likely more to come. For the last few years, Facebook has weathered much of society’s skepticism regarding big tech. Now, it may be Google’s time in the spotlight.

Source: Here’s the Conversation We Really Need to Have About Bias at …

It’s Easier To Call A Fact A Fact When It’s One You Like, Study Finds

Interesting nuanced study:

Study after study has found that partisan beliefs and bias shape what we believe is factually true.

Now the Pew Research Center has released a new study that takes a step back. They wondered: How good are Americans at telling a factual statement from an opinion statement — if they don’t have to acknowledge the factual statement is true?

By factual, Pew meant an assertion that could be proven or disproven by evidence. All the factual statements used in the study were true, to keep the results more consistent, but respondents didn’t know that.

An opinion statement, in contrast, is based on values and beliefs of the speaker, and can’t be either proven or disproven.

Pew didn’t provide people with definitions of those terms — “we didn’t want to fully hold their hands,” Michael Barthel, one of the authors of the study, told NPR. “We did, at the end of the day, want respondents to make their own judgment calls.”

The study asked people to identify a statement as factual, “whether you think it’s accurate or not,” or opinion, “whether you agree with it or not.”

They found that a majority of Americans could correctly identify at least three of the five statements in each category — slightly better than you’d expect from random luck.
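As a back-of-the-envelope check (my own sketch, not part of the Pew study), the “random luck” baseline follows from a simple binomial model: a respondent who guesses on each of five statements gets at least three right exactly half the time, so a majority clearing that bar is only modestly better than chance.

```python
from math import comb

def p_at_least(k, n=5, p=0.5):
    """Probability of at least k correct out of n under random guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A coin-flip guesser gets at least 3 of 5 right half the time,
# but a perfect 5 of 5 only about 3% of the time.
print(p_at_least(3))  # 0.5
print(p_at_least(5))  # 0.03125
```

That 3% baseline also puts the perfect scores in context: far more than 3% of respondents (over a quarter of adults, per Pew) classified all five facts correctly.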

(You can see how your evaluations stack up in Pew’s quiz.)

In general they found people were better at correctly identifying a factual statement if it aligned with or supported their political beliefs.

[Chart: Republicans and Democrats are more likely to correctly identify factual news statements when they favor their side]

For instance, 89 percent of Democrats identified “President Barack Obama was born in the United States” as a factual statement, while only 63 percent of Republicans did the same.

Republicans, however, were more likely than Democrats to recognize that “Spending on Social Security, Medicare, and Medicaid make up the largest portion of the U.S. budget” is a factual statement — regardless of whether they thought it was accurate.

And opinions? Well, the opposite was true. Respondents who shared an opinion were more likely to call it a factual statement; people who disagreed with the opinion, more likely to accurately call it an opinion.

Pew was able to test that trend more precisely with a follow-up question: If someone called a statement an opinion, they asked whether the respondent agreed or disagreed with that opinion.

If the opinion was actually an opinion, responses varied.

“If it wasn’t an opinion statement — it was a factual statement that they misclassified — they generally disagreed with it,” Barthel says.

Some groups of people were also more successful, in general, than others.

The “digitally savvy” and the politically aware were more likely to correctly identify each statement as opinion or factual. People with a lot of trust in the news media were also significantly more likely to get a perfect score: While just over a quarter of all adults got all five facts right, 39 percent of people who trust news swept that category.

But, interestingly, there was much less of an effect for people who said they were very interested in news. That population was slightly more likely to identify facts as facts — but less savvy than non-news-junkies at calling an opinion an opinion.

[Chart: Political awareness, digital savviness and trust in the media all play large roles in the ability to distinguish between factual and opinion news statements]

The results suggest that confirmation bias is not just a question of people rejecting facts as false — it can involve people rejecting facts as something that could be proven or disproven at all.

But Barthel saw a silver lining: In almost all cases, he said, a majority of people did classify a statement correctly — even with the trends revealing the influence of their beliefs.

“It does make a little bit of difference,” he said. “But normally, it doesn’t cross the line of making a majority of people get this wrong.”

Source: It’s Easier To Call A Fact A Fact When It’s One You Like, Study Finds

Fighting Bias With Board Games : Code Switch : NPR

Interesting and innovative approach:

Quick, think of a physicist.

If you’re anything like me, you probably didn’t have to think very hard before the names Albert Einstein and Isaac Newton popped up.

But what if I asked you to think of a female physicist? What about a black, female physicist?

You may have to think a bit harder about that. For years, mainstream accounts of history have largely ignored or forgotten the scientific contributions of women and people of color.

This is where Buffalo — a card game designed by Dartmouth College’s Tiltfactor Lab — comes in. The rules are simple. You start with two decks of cards. One deck contains adjectives like Chinese, tall or enigmatic; the other contains nouns like wizard or dancer.

Draw one card from each deck, and place them face up. And then all the players race to shout out a real person or fictional character who fits the description.

So say you draw “dashing” and “TV show character.”

You may yell out “David Hasselhoff in Knight Rider!”

“Female” and “Olympian?”

Gabby Douglas!

Female physicist?

Hmm. If everyone is stumped, or “buffaloed,” you draw another noun and adjective pair and try again. When the decks run out, the player who has made the most matches wins.

It’s the sort of game you’d pull out at dinner parties when the conversation lulls. But the game’s creators say it’s good for something else — reducing prejudice. By forcing players to think of people who buck stereotypes, Buffalo subliminally challenges those stereotypes.

“So it starts to work on a conscious level of reminding us that we don’t really know a lot of things we might want to know about the world around us,” explains Mary Flanagan, who leads Dartmouth College’s Tiltfactor Lab, which makes games designed for social change and studies their effects.

Buffalo might nudge us to get better acquainted with the work of female physicists, “but it also unconsciously starts to open up stereotypical patterns in the way we think,” Flanagan says.

In one of many tests she conducted, Flanagan rounded up about 200 college students and assigned half to play Buffalo. After one game, the Buffalo players were slightly more likely than their peers to strongly agree with statements like, “There is potential for good and evil in all of us,” and, “I can see myself fitting into many groups.”

Students who played Buffalo also scored better on a standard psychological test for tolerance. “After 20 minutes of gameplay, you’ve got some kind of measurable transformation with a player — I think that’s pretty incredible,” Flanagan says.

Buffalo isn’t Flanagan’s only bias-busting game. Tiltfactor makes two others called “Awkward Moment” and “Awkward Moment At Work.” They’re designed to reduce gender discrimination at school and in the workplace, respectively.

“I’m really wary of saying things like, ‘Games are going to save the world,’” Flanagan says. But she adds, “it’s a serious question to look at how a little game could try to address a massive, lived social problem that affects so many individuals.”


Scientists have tried all sorts of quick-fix tactics to train away racism, sexism and homophobia. In one small study, researchers at Oxford University even looked into whether propranolol, a drug normally used to reduce blood pressure, could ease away racist attitudes. Unsurprisingly, it turns out that there is no panacea capable of curing bigotry.

There are, however, good reasons to get behind the idea that games or any other sort of entertainment can change the way we think.

“People aren’t excited about showing up to diversity trainings or listening to people lecture them. People don’t generally want to be told what to think,” explains Betsy Levy Paluck, a professor of psychology at Princeton University who studies how media can change attitudes and behaviors. “But people like entertainment. So, just on a pragmatic basis, that’s one reason to use it to teach.”

There’s a long history of using literature, music and TV shows to encourage social change. In a 2009 study, Paluck found that radio soap opera helped bridge the divides in post-genocide Rwanda. “We know that various forms of pop-culture and entertainment help reduce prejudice,” Paluck says. “In terms of other types of entertainment — there’s less research. We’re still finding out whether and how something like a game can help.”

Anthony Greenwald, a psychologist at the University of Washington who has dedicated his career to studying people’s deep-seated prejudices, is skeptical. Like Flanagan, he says, several well-intentioned researchers have proved a handful of interventions — including thought exercises, writing assignments and games — can indeed reduce prejudice for a short period of time. But, “these desired effects generally disappear rapidly. Very few studies have looked at the effects even as much as one day later.”

After all, how can 20 minutes of anything dislodge attitudes that society has pounded into our skulls over a lifetime?

Flanagan says her lab is still looking into that question, and hopes to conduct more studies in the future that track long-term effects. “We do know that people play games often. If it really is a good game, people will return to it. They’ll play it over and over again,” Flanagan says. Her philosophy: maybe a game a day can help us keep at least some of our prejudices away.

via Fighting Bias With Board Games : Code Switch : NPR

ICYMI – Black job seekers have harder time finding retail and service work than their white counterparts, study suggests | Toronto Star

Interesting study:

Black applicants may have a harder time finding an entry-level service or retail job in Toronto than white applicants with a criminal record, a new study has found.

For a city that claims to be multicultural, the results were “shocking,” said Janelle Douthwright, the study’s author, who recently graduated with a Master of Arts in Criminology and Socio-Legal Studies from the University of Toronto.

Douthwright read a similar study from Milwaukee, Wis., during her undergraduate courses and she was “floored” by the findings.

“I thought there was no way this would be true here in Toronto,” she said.

She pursued her graduate studies to find out.

Douthwright created four fictional female applicants and submitted their resumes for entry level service and retail positions in Toronto over the summer.

She gave two of the applicants Black-sounding names — Khadija Nzeogwu and Tameeka Okwabi — and gave one a criminal record. The Black applicants also listed participation in a Black or African student association on their resumes.

She gave the two other applicants white-sounding names — Beth Elliot and Katie Foster — and also gave one of them a criminal record. The candidates with criminal records indicated in their cover letters that they had been convicted of summary offences, which are often less serious crimes.


Both Black applicants applied to the same 64 jobs and the white applicants applied to another 64 jobs.

Douthwright explained that she didn’t submit all four applications to the same jobs because the applications for the two candidates with criminal records and the two applicants without criminal records were almost identical except for the elements she used to indicate race, so they might have aroused suspicions among the employers if they were all submitted for the same jobs.

Though the resumes were nearly identical — each applicant had a high school education and experience working as a hostess and retail sales associate — the white applicant who didn’t have a criminal record received the most callbacks by far.


Of the 64 applications, the white applicant with no criminal record received 20 callbacks, a callback rate of 31.3 per cent. The white applicant with a criminal record received 12 callbacks, a callback rate of 18.8 per cent.

The Black applicant with no criminal record, meanwhile, received seven callbacks, a rate of 10.9 per cent. The Black applicant with a criminal record received just one callback out of 64 applications, a rate of 1.6 per cent.
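The percentages quoted above are straightforward to reproduce from the raw counts (a quick sketch of my own, not from the study; the article rounds each figure to one decimal place):

```python
# Callback counts reported in the study, out of 64 applications per profile.
APPLICATIONS = 64
callbacks = {
    "white, no record": 20,
    "white, record": 12,
    "Black, no record": 7,
    "Black, record": 1,
}

def callback_rate(count, total=APPLICATIONS):
    """Callback rate as a percentage of applications submitted."""
    return 100 * count / total

for profile, count in callbacks.items():
    print(f"{profile}: {callback_rate(count):.2f}%")
```

Run this way, the gap is stark: the white applicant with no record was called back roughly three times as often as the Black applicant with no record, and twenty times as often as the Black applicant with a record.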

Lorne Foster, a professor in the Department of Equity Studies at York University, said Douthwright’s study bolsters the thesis that “the workplace is discriminatory on a covert level.”

“We have a number of acts that protect us against discrimination and many people think that because of that strong infrastructure discrimination is gone,” he said.

That’s not the case. “Implicit” or unconscious bias is a persistent issue.

“All of these implicit biases are automatic, they’re ambivalent, they’re ambiguous, and they’re much more dangerous than the old-fashioned prejudices and discrimination that used to exist because they go undetected but they have an equally destructive impact on people’s lives,” Foster said.

“It’s an invisible and tasteless poison and it’s difficult to eliminate.”

Individual employers, he said, should take a proactive approach to ensure their hiring practices are inclusive or at least adhering to the human rights code by testing and challenging their processes to uncover any hidden prejudices.

He pointed to the Windsor Police Service, who shifted their hiring practices when they discovered their existing process was excluding women, as an example.

They were one of the first services to do a demographic scan of who works for them, said Foster, who worked on a human rights review of the service.

Through that process they realized there was a “dearth” of female officers. They realized that the original process, which involved a number of physical tests “where there was all this male testosterone flying around,” was inhibiting women from attending the session.

In response they organized a series of targeted recruitment sessions and were able to hire five new women at the end of that process, Foster said.

“We all need to be vigilant about our thoughts about other people, our hidden biases and images of them,” he said.

via Black job seekers have harder time finding retail and service work than their white counterparts, study suggests | Toronto Star

Bias creeps into reference checks, so is it time to ditch them?

Hadn’t thought of this aspect of bias in reference checks. When hiring in government, I was always conscious of the selection bias in the references submitted – people generally do not submit negative references! When asked if I was willing to be a reference, I would flag if I had any issues that I would have to include in the reference:

As much as we’d like to think we’ve refined the hiring process over the years to carefully select the best candidate for the job, bias still creeps in.

Candidates who come from privileged backgrounds are more able to source impressive, well-connected referrers and this perpetuates the cycle of privilege. While the referrer’s reputation and personal clout make up one aspect of the recommendation, what they actually say – the content – completes the picture.

Research shows gender bias invades even the content of recommendations. In this study, female applicants for post-doctoral research positions in the field of geoscience were only half as likely as their male counterparts to receive excellent (as opposed to merely good) endorsements from their referees. Since it’s unlikely that, of the 1,200 recommendation letters analysed, the female candidates were genuinely less excellent than the male candidates, something else must be going on.

A result like this may be explained by the gender role conforming adjectives that are used to describe female versus male applicants. Women are more likely to be observed and described as “nurturing” and “helpful”, whereas men are attributed with stronger, more competence-based words like “confident” and “ambitious”. This can, in turn, lead to stronger recommendations for male candidates.

Worryingly, in another study similar patterns emerged in the way black versus white, and female versus male, medical students were described in performance evaluations. These were used as input to select residents.

In both cases the members of minority groups were described using less impressive words (like “competent” versus “exceptional”), a pattern that was observed even after controlling for licensing examination scores, an objective measure of competence.

Recommendations aren’t good predictors of performance

Let’s put the concerns about bias aside for a moment while we examine an even bigger question: are recommendations actually helpful, valid indicators of future job performance or are they based on outdated traditions that we keep enforcing?

Even back in the 90s, researchers were trying to alert hiring managers to the ineffectiveness of this as a tool, noting some major problems.

The first problem is leniency: referees are chosen by the candidate and tend to be overly positive. The second is too little knowledge of the applicant, as referees are unlikely to see all aspects of a prospective employee’s work and personal character.

Reliability is another problem. It turns out there is higher agreement between two letters written by the same referee for different candidates, than there is for two letters (written by two different referees) for the same candidate!

There is evidence that people behave in different ways when they are in different situations at work, which would reasonably lead to different recommendations from various referees. However, the fact that there is more consistency between what referees say about different candidates than between what different referees say about the same candidate remains a problem.

The alternatives to the referee

There are a few initiatives currently being used as alternatives to standard recruitment processes. One example is gamification — where candidates play spatial awareness or other job-relevant games to demonstrate their competence. Deloitte, for instance, has teamed up with software developer Arctic Shores in an attempt to move away from more traditional recruitment methods.

However, gamification is not without its flaws – these methods would certainly favour individuals who are more experienced with certain kinds of video games, and gamers are more likely to be male. So it’s a bit of a catch-22 for recruiters who are introducing bias through a process designed to try to eliminate bias.

If companies are serious about overcoming potential bias in recruitment and selection processes, they should consider addressing gender, racial, economic and other forms of inequality. One way to do this is to broaden the recruitment pool by making sure the language used in position descriptions and job ads is more inclusive. Employers can indicate that flexible work options are available and choose minority candidates when they are as qualified as the other candidates.

Another option is to increase the diversity of the selection committee to add some new perspectives to previously homogeneous committees. Diverse selectors are more likely to speak up about and consider the importance of hiring more diverse candidates.

Job seekers could even try running a letter of reference through software, such as Textio, that reports gender bias in pieces of text and provides gender-neutral alternatives. But just as crucial is the need for human resources departments to start looking for more accurate mechanisms to evaluate candidates’ competencies.

via Bias creeps into reference checks, so is it time to ditch them?

People Suffer at Work When They Can’t Discuss the Racial Bias They Face Outside of It

Interesting HBR-published study on the racial bias link between the outside and work environments by Sylvia Ann Hewlett, Melinda Marshall and Trudy Bourgeois:

Last month, in an unprecedented show of solidarity, 150 CEOs from the world’s leading companies banded together to advance diversity and inclusion in the workplace and, through an online platform, shared best practices for doing so. To drive home the urgency, the coalition’s website, CEOAction.com, directs visitors to research showing that diverse teams and inclusive leaders unleash innovation, eradicate groupthink, and spur market growth. But as Tim Ryan, U.S. Chair and senior partner at PwC and one of the organizers of the coalition, explains, what galvanized the group was widespread recognition that “we are living in a world of complex divisions and tensions that can have a significant impact on our work environment” — and they need to be openly addressed.

At the Center for Talent Innovation, we wanted to look into these suspicions. Do the political, racial, and social experiences that divide us outside of work undermine our contributions on the job? Our nationwide survey of 3,570 white-collar professionals (374 black, 2,258 white, 393 Asian, and 395 Hispanic) paints an unsettling landscape: For black, Asian, and Hispanic professionals, race-based discrimination is rampant outside the workplace. Black individuals are especially struggling, as fully 78% of black professionals say they’ve experienced discrimination or fear that they or their loved ones will — nearly three times as many as white professionals.

But 38% of black professionals also feel that it is never acceptable at their companies to speak out about their experiences of bias — a silence that makes them more than twice as vulnerable to feelings of isolation and alienation in the workplace. Black employees who feel muzzled are nearly three times as likely as those who don’t to have one foot out the door, and they’re 13 times as likely to be disengaged.

The response, at most organizations, is no response. Leaders don’t inquire about coworkers’ life experiences; they stay quiet when headlines blare reports of racial violence or videos capture acts of blatant discrimination. Their silence is often born of a conviction that race, like politics, is best discussed elsewhere.

But as evidenced by the formation of the coalition and the initiatives we captured in our report, that attitude is shifting. Conscious that breaking the silence begins with their own example, captains of industry are talking about race, both internally with their employees and externally with the public. After a spate of shootings of unarmed black men last summer, Ryan initiated a series of discussion days to ensure that all employees at PwC better understood the experiences of their black colleagues. Michael Roth, CEO of Interpublic Group, issued an enterprise-wide email imploring coworkers to “connect, affirm our commitment to one another, and acknowledge the pain being felt in so many of our communities.” Bernard Tyson, CEO of Kaiser Permanente, published an essay in which, in a plea for empathy, he shared his own experiences of discrimination. And in an emotional recounting of his black friend’s experience outside the office that went viral on YouTube, AT&T chairman Randall Stephenson encouraged employees to get to know each other better.

Leaders who display this kind of courage don’t always see immediate rewards, but in the long term, our research suggests that the payoff could be extraordinary. Of those who are aware of companies responding to societal incidents of racial discrimination, robust majorities of black (77%), white (65%), Hispanic (67%), and Asian (83%) professionals say they view those companies in a more positive way. Interviews with employees at firms like Ernst & Young point to stronger bonds forged between team leaders and members as a result of guidelines disseminated to managers on how to have a trust-building conversation. Town halls at New York Life with members of the C-suite and black executives have likewise paved pathways for greater understanding across racial and political divides.

Source: People Suffer at Work When They Can’t Discuss the Racial Bias They Face Outside of It

Babies show racial bias at nine months, U of T study suggests

A pair of interesting studies, with some caveats by other researchers:

Two new University of Toronto studies suggest racial bias can develop in babies at an early age — before they’ve even started walking.

Led by the school’s Ontario Institute of Child Study professor Kang Lee, in partnership with researchers from the U.S., U.K., France, and China, the studies examined how infants react to individuals of their own race, compared to individuals of another race.

“The goal of the study was to find out at which age infants begin to show racial bias,” Lee said. “With existing studies, the evidence shows that kids show bias around 3 or 4 years of age. We wanted to look younger.”

The first study looked at 193 Chinese infants from three to nine months old, recruited from a hospital in China, who hadn’t had direct contact with people of other races. The babies were then shown videos of six Asian women and six African women, paired with either happy or sad music.

The study found that infants from three to six months old didn’t associate sad or happy music with people of the same race or of other races, which indicates they “are not biologically predisposed to associate own- and other-race faces with music of different emotional valence.”

However, at around nine months old, the reactions were different.

According to the study, nine-month-old babies looked at their own-race faces paired with happy music for a longer period of time, as well as other-race faces paired with sad music. Lee says this supports the hypothesis that infants associate people of the same race with happy music, and other races with sad music.

That’s not to say parents are teaching their children how to discriminate against other-raced individuals, Lee says.

“We are very confident that the cause of this early racial bias is actually the lack of exposure to other raced individuals,” he said. “It tells us that in Canada, if we introduce our kids to other-raced individuals, then we are likely to have less racial bias in our kids against other-raced people.”

Andrew Baron, an associate professor of psychology at the University of British Columbia, said while the goal of the study is “terrific,” there are many reasons infants would look for longer amounts of time at faces of different races. For example, he says an infant could spend more time looking at an own-race face because it is familiar, or at an other-race face because it is different and unexpected.

“It’s impossible to draw that conclusion about association from a single experiment when you could have half a dozen reasons why you would look longer that don’t support the conclusion that was made in that paper,” said Baron, who was not involved in the studies, but specializes in a similar field — the development of implicit associations among infants.

“There’s multiple reasons — and contradictory reasons — why we look longer at things. We look longer at things we fear, we look longer at things we like. That’s an inherent tension in how you choose to interpret the data.”

The second study took a closer look at that bias and how it affects children’s learning skills.

Researchers showed babies videos of own-race and other-race adults who either looked toward the location where photos of animals then appeared (indicating they were reliable) or looked in the wrong direction (indicating they were unreliable).

The study found that when adults were reliable and looking in the direction of the animals, the infants followed both own- and other-raced individuals equally. The same results occurred when the adults were unreliable and looking in the wrong direction.

However, when the adults’ gaze was only sometimes correct, the children were more likely to take cues provided by adults of their own race.

“In this situation, very interestingly, kids treated their own-raced individuals — who are only 50 per cent correct — as if they were 100 per cent correct,” Lee said.

“There is discrimination, but only when there is uncertainty.”

The first study was published in Developmental Science and the second was in Child Development.

The study was conducted in China, Lee says, because the researchers were able to control the exposure to other-raced individuals.

Lee said he has been trying for nearly 10 years to organize a study looking at babies born into mixed-race families. He suspects infants born into mixed-race families would show less racial bias.

When it comes to parents who want to try to eliminate racial bias from a young age, Lee says exposure is key.

“If parents want to prevent racial biases from emerging, the best thing to do is expose their kids to TV programs, books, and friends from different races,” he said.

“And the important message is they have to know them by name . . . it’s extremely important to know them as individuals.”

Source: Babies show racial bias at nine months, U of T study suggests | Toronto Star

A black woman in tech makes $79,000 for every $100,000 a white man makes – Recode

Impressive large-scale data analysis showing the extent of bias in the hiring process:

It’s no secret that the technology field can be brutal to anyone who isn’t a white male. New data shows just how those inequalities play out in today’s tech workers’ paychecks.

Nearly two in three women receive lower salary offers than men for the same job at the same company, according to Hired, a job website that focuses on placing people in tech jobs such as software engineer, product manager or data scientist. That’s slightly better than last year, when 69 percent of women received lower offers.

Women, on average, were paid 4 percent less than men for the same kind of job, the study found.

For the study, Hired mined data from 120,000 salary offers to 27,000 candidates at 4,000 companies. In general, applicants to these tech fields skew male (75 percent), but that doesn’t account for the disparity in who gets interviewed.

Companies interviewed only men for a position 53 percent of the time; 6 percent of the time, they interviewed only women.

“Not only are women getting lower offers when they actually get offers, but a large amount of time, companies have openings and they’re not interviewing women at all,” said Jessica Kirkpatrick, Hired’s data scientist.

Hired’s data also breaks down offer salaries by race, compared with a white man in the same job. The effects of race are even more dramatic:

  • Black women are offered 79 cents to every dollar offered to a white man.
  • Black men make 88 cents.
  • Latina women make 83 cents.
  • White women make 90 cents.

Additionally, LGBTQ women and men are offered less money than their non-LGBTQ counterparts.

There are numerous reasons for this pay inequity. Part of the problem is that women, minorities and LGBTQ people ask for less than white males for the same position.

According to Kirkpatrick, these groups ask for less because people base their salary expectations on what they’re already making. For these groups, their lower pay often reflects a lot of historical inequities accrued over their careers, like being denied raises or promotions.

By not offering people comparable wages, Kirkpatrick said that companies are jeopardizing their job retention. “When people figure out what their teammates are making, it’s ultimately not good for maintaining talent and creating a collegial environment,” she said.

It also makes Silicon Valley’s already tight talent pool even smaller.

Source: A black woman in tech makes $79,000 for every $100,000 a white man makes – Recode

Applying for a job in Canada with an Asian name: Policy Options

More good work on implicit biases and their effect on discrimination in hiring by Jeffrey G. Reitz, Philip Oreopoulos, and Rupa Banerjee:

Our most recent study analyzed factors that might affect discriminatory hiring practices: the size of an employer, the skill level of the posted job and the educational level of the applicant.

First, we divided the employers into large (500 or more employees), medium-sized (50 to 499 employees) and small (less than 50 employees). We expected that large employers might treat applicants more fairly because they have greater resources devoted to recruitment and often have a more professionalized recruitment process. They also tend to have more experience with ethno-racial diversity in their workforces.

Asian-named applicants’ relative callback rates were indeed the lowest in small and medium-sized organizations, and somewhat higher in the largest employers. Compared with applicants with Anglo names, the Asian-named applicants with all-Canadian qualifications got 20 percent fewer calls from the largest organizations, but 39 percent fewer from the medium-sized organizations and 37 percent fewer from the smallest organizations. So, the disadvantage of having an Asian name is less for applicants to the large organizations, although it is still evident.

Looking at treatment of Asian-named applicants with some foreign qualifications, we found the largest organizations are generally the most likely to call these applicants for interviews. Large employers called these applicants 35 percent less often than Anglo-Canadian applicants with Canadian education and experience; medium-sized employers called 60 percent less often, and the smaller employers called 66 percent less often.

We were also interested in whether the skill level of the job affected discriminatory hiring practices and, in particular, whether Asian-named applicants faced greater barriers in higher-skill jobs, which are likely to be better paid. We found that the extent of discrimination against Asian-named applicants with all-Canadian qualifications is virtually the same for both high-skill jobs and lower-skill jobs. For the high-skill jobs, these applicants were 33 percent less likely to get a call; for the low-skill jobs, 31 percent less likely.

Skill level matters much more when Asian-named applicants have some foreign qualifications. Overall, these applicants had about a 53 percent lower chance of receiving a callback than comparable Anglo-Canadian applicants. But their rate of receiving calls is significantly lower at higher skill levels: they receive 59 percent fewer callbacks for high-skill jobs, 46 percent fewer for low-skill jobs. Employers may respond less favourably to Asian-named and foreign-qualified applicants for higher-skill positions because in those jobs, more is at stake, and assessing foreign credentials is more difficult than checking local sources. Avoiding the issue by not calling applicants to an interview is apparently viewed as the safer option.

Finally, we asked whether having a higher level of education than Anglo-Canadian-named applicants would lessen the negative effect of having an Asian name. We found that Asian-named applicants with Canadian education including a Canadian master’s degree were 19 percent less likely to be called in for an interview than their Anglo-Canadian counterparts holding only a Bachelor’s degree. For Asian applicants with foreign qualifications and a Canadian master’s degree, the likelihood of a callback was 54 percent lower than the rate for less-educated Anglo-Canadian-named applicants. Acquiring a higher level of education in Canada did not seem to give Asian-named applicants much of an edge.
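The “X percent fewer callbacks” figures throughout this study are relative gaps: one minus the ratio of the two groups’ callback rates. A minimal sketch of the calculation, using made-up callback rates rather than the study’s data:

```python
def relative_callback_gap(minority_rate: float, majority_rate: float) -> int:
    """Percent fewer callbacks the first group received relative to the
    second: 100 * (1 - minority_rate / majority_rate), rounded.
    Rates are fractions of applications that led to an interview call."""
    return round(100 * (1 - minority_rate / majority_rate))

# Hypothetical rates for illustration only: if Anglo-named applicants are
# called back 15% of the time and Asian-named applicants 10% of the time,
# the Asian-named group received about 33 percent fewer callbacks.
anglo_rate = 0.15
asian_rate = 0.10
print(relative_callback_gap(asian_rate, anglo_rate))  # → 33
```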

Overall, we found that employers both large and small discriminate in assessing Asian-named applicants, even when the applicants have Canadian qualifications; and they show even more reluctance to consider Asian-named applicants with foreign qualifications. These biases are particularly evident in hiring for jobs with the highest skill levels. However, there is a substantial difference between larger and smaller organizations. Larger organizations are more receptive to Asian-named applicants than smaller ones, whether or not the applicants have Canadian qualifications.

In order to fully understand the disadvantages that racial minorities experience in the Canadian labour market, it is crucial to go beyond surveys, in which discrimination may be hidden and difficult to identify. Audit studies like ours capture “direct discrimination” by observing actual employer responses to simulated resumés. This form of discrimination is particularly significant since the inability to get an interview may prevent potentially qualified job-seekers from finding appropriate work. Its effect may be compounded in promotions and other stages of the career process and in turn exacerbate ethno-racial income inequality in Canada.

Meanwhile, employers might also be unwittingly disadvantaged, because this discrimination can prevent them from finding the best-qualified applicants. Small employers are particularly disadvantaged since they may lack the resources and expertise to fully tap more diverse segments of the workforce.

A number of measures may help to reduce name-based discrimination in the hiring process. First, a relatively low-cost measure would be for employers to introduce anonymized resumés. They could simply mask the names of applicants during the initial screening, and then track whether this results in more diverse hiring. Second, employers should ensure that more than one person is involved in the screening and interview process and that the process of resumé evaluation is open and transparent. Lastly, hiring managers should receive training on implicit bias and how to recognize and mitigate their own biases when recruiting job applicants.
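The first measure above (masking names during initial screening) can be sketched as follows; the record fields and masking scheme here are illustrative assumptions, not any specific HR system’s schema:

```python
import uuid

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of a resume record with identifying fields removed
    and replaced by a random screening ID, so that initial reviewers see
    qualifications only. Field names are hypothetical."""
    masked = dict(resume)
    for field in ("name", "email", "photo"):
        masked.pop(field, None)  # drop identifiers if present
    masked["screening_id"] = uuid.uuid4().hex[:8]
    return masked

# Illustrative candidate record.
candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "education": "B.Comm, University of Toronto",
    "experience_years": 6,
}
print(anonymize_resume(candidate))
```

The screening ID lets the organization link an anonymized record back to the full application after the shortlist is drawn, and to track (as the authors suggest) whether masking changes who gets called in.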

Source: Applying for a job in Canada with an Asian name

No simple fix to weed out racial bias in the sharing economy

Two options to combat implicit bias and discrimination: less information (the blind-CV approach) or more information (expanded online reviews). The first has empirical evidence behind it; the second is exploratory at this stage:

One of the underlying flaws of any workplace is the assumption that the cream rises to the top, meaning that the best people get promoted and are given opportunities to shine.

While it’s tempting to be lulled into believing in a meritocracy, years of research on women and minorities in the work force demonstrate this is rarely the case. Fortunately, in most corporate settings, protocols exist to try to weed out discriminatory practices.

The same cannot necessarily be said for the sharing economy. While companies such as Uber and Airbnb boast transparency and even mutual reviews, they remain far from immune to discriminatory practices.

In 2014, Benjamin Edelman and Michael Luca, both associate professors of business administration at Harvard Business School, uncovered that non-black hosts can charge 12 per cent more than black hosts for a similar property. In this new economy, that simply means non-white hosts earn less for a similar service. This sounds painfully familiar to those who continue to fight this battle in the corporate world – although in this case, it occurs without the watchful eye of a human-resources division.

In the corporate world, companies have moved from focusing on overt to subconscious bias, according to Mr. Edelman and Mr. Luca, but the nature of the bias in the sharing economy remains unclear.

It’s either statistical, meaning users infer that the property remains inferior based on the owner’s profile, or “taste-based,” suggesting the decision to rent comes down to user preference. To curb discriminatory practices, the authors recommend concealing basic information, such as photos and names, until a transaction is complete, as on Craigslist.

Reached by e-mail this week, Mr. Edelman stands by that approach.

“Broadly, my instinct is to conceal information that might give rise to discrimination. If we think hosts might reject guests of [a] disfavoured race, let’s not tell hosts the race of a guest when they’re deciding whether to accept. If we think drivers might reject passengers of [a] disfavoured race, again, don’t reveal the race in advance,” he advised.

While Mr. Edelman feels those really bent on discrimination will continue to do so, other, more casual discriminators will realize it’s too costly.

An Uber driver who only notices a passenger’s race at the pickup point might reason that he has already driven about five kilometres. If he cancels, not only will he be without a fare, but Uber might also notice and become suspicious, Mr. Edelman surmised.

Not everyone agrees that less information is the best route to take to combat discrimination in the sharing economy. In fact, more information may be the fix, according to recent research conducted by Ruomeng Cui, an assistant professor at Indiana University’s Kelley School of Business, Jun Li, an assistant professor at the University of Michigan’s Stephen M. Ross School of Business, and Dennis Zhang, an assistant professor at the John M. Olin Business School at Washington University in Saint Louis.

The trio of academics argues that rental decisions on platforms such as Airbnb are based on racial preferences only when not enough information is available. When more information is shared, specifically through peer reviews, discriminatory practices are reduced or even eliminated.

“We recommend platforms take advantage of the online reputation system to fight discrimination. This includes creating and maintaining an easy-to-use online review system, as well as encouraging users to write reviews after transactions. For example, sending multiple e-mail reminders or offering monetary incentives such as discounts or credits, especially for those relatively new users,” Dr. Li said.

“Eventually, sharing-economy platforms have to figure [out] how to better signal user quality; nevertheless, whatever they do, concealing information will not help,” she added.

Still, others, such as Sara Green Brodersen, founder and chief executive of Copenhagen-based Deemly, which launched last October, believe technology itself can offer a solution to the incidents of bias in the sharing economy. The company’s mission is to build trust in the sharing economy through social ID verification and reputation software, which enables users to take their reputation with them across platforms. For example, if users have ratings on Airbnb, they can combine them with their reviews on Upwork.

“Recent studies in this area suggest that ratings and reviews are what creates most trust between peers. [For example] when a user on Airbnb looks at a host, they put the most emphasis on the previous reviews from other guests more than anything else on the profile. Essentially, this means platforms could present anonymous profiles showing only the user’s reputation, but not gender, profile picture, ethnicity, name and age and, in this way, we can avoid the bias which has been presented,” Ms. Brodersen said.

Regardless of the solution, platforms and their users need to recognize that combatting discriminatory practices is their responsibility and the sharing economy, like the traditional work force, is no meritocracy.

“This issue is not going to be smaller on its own,” Ms. Brodersen warned.

Source: No simple fix to weed out racial bias in the sharing economy – The Globe and Mail