New tool could point immigrants to spot in Canada where they’re most likely to succeed

A neat example of using algorithms to help immigrants assess their prospects, although human factors such as the presence of family members, community-specific food shopping and the like may be more determinative. But it is good that IRCC is exploring this approach, which is more sophisticated than the work I was involved in to develop the Canadian Index for Measuring Integration. Some good comments by Harald Bauder and Dan Hiebert:

Where should a newcomer with a background in banking settle in Canada?

What about an immigrant who’s an oil-production engineer?

Or a filmmaker?

Most newcomers flock to major Canadian cities. In doing so, some could be missing out on better opportunities elsewhere.

A two-year-old research project between the federal government and Stanford University’s Immigration Policy Lab is offering hope for a tool that might someday point skilled immigrants toward the community in which they’d most likely flourish and enjoy the greatest economic success.

Immigration, Refugees and Citizenship Canada is eyeing a pilot program to test a matching algorithm that would make recommendations as to where a new immigrant might settle, department spokesperson Remi Lariviere told the Star.

“This type of pilot would allow researchers to see if use of these tools results in real-world benefits for economic immigrants. Testing these expected gains would also allow us to better understand the factors that help immigrants succeed,” he said in an email.

“This research furthers our commitment to evidence-based decision making and enhanced client service — an opportunity to leverage technology and data to benefit newcomers, communities and the country as a whole.”

Dubbed the GeoMatch project, researchers used Canada’s comprehensive historical datasets on immigrants’ background characteristics, economic outcomes and geographic locations to project where an individual skilled immigrant might start a new life.

Machine learning methods were employed to figure out how immigrants’ backgrounds, qualifications and skillsets were related to taxable earnings in different cities, while accounting for local trends, such as population and unemployment over time.

The models were then used to predict how newcomers with similar profiles would fare across possible destinations and what their expected earnings would be. The locations would be ranked based on the person’s unique profile.
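Purely as an illustration of the ranking step described here, the following is a minimal sketch in Python of the general approach: fit a model on historical profiles and first-year incomes, then score one hypothetical newcomer's profile against every candidate region and sort. The toy data, column names and choice of `GradientBoostingRegressor` are all assumptions for the sketch, not details from the GeoMatch report.

```python
# Minimal sketch of the ranking step described above (not the GeoMatch code).
# Assumes a historical table with a profile, a landing region and first-year income.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: one row per past economic immigrant.
history = pd.DataFrame({
    "age_at_arrival": [34, 29, 41, 37],
    "education_years": [16, 18, 14, 17],
    "region": ["Toronto", "Calgary", "Halifax", "Ottawa"],
    "first_year_income": [41_000, 56_000, 38_000, 47_000],
})

features = pd.get_dummies(history[["age_at_arrival", "education_years", "region"]])
model = GradientBoostingRegressor().fit(features, history["first_year_income"])

def rank_regions(profile: dict, regions: list[str]) -> pd.Series:
    """Predict expected first-year income for one profile in every region, then rank."""
    candidates = pd.DataFrame([{**profile, "region": r} for r in regions])
    X = pd.get_dummies(candidates).reindex(columns=features.columns, fill_value=0)
    return pd.Series(model.predict(X), index=regions).sort_values(ascending=False)

print(rank_regions({"age_at_arrival": 31, "education_years": 17},
                   ["Toronto", "Calgary", "Halifax", "Ottawa"]))
```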

“An immigrant’s initial arrival location plays a key role in shaping their economic success. Yet immigrants currently lack access to personalized information that would help them identify optimal destinations,” says a report about the pilot that was recently obtained by the Star.

“Instead, they often rely on availability heuristics, which can lead to the selection of suboptimal landing locations, lower earnings, elevated out-migration rates and concentration in the most well-known locations,” added the study completed last summer after two years of number crunching and sophisticated modelling.

About a quarter of economic immigrants settle in one of Canada’s four largest cities, with 31 per cent of all newcomers alone destined for Toronto.

“If initial settlement patterns concentrate immigrants in a few prominent landing regions, many areas of the country may not experience the economic growth associated with immigration,” the report pointed out. “Undue concentration may impose costs in the form of congestion in local services, housing, and labour markets.”

Researchers sifted through Canada’s longitudinal immigration database and income tax records to identify 203,290 principal applicants who arrived in the country between 2012 and 2017 under the federal skilled worker program, federal skilled trades program and the Canadian Experience Class.

They tracked the individuals’ annual incomes at the end of their first full year in Canada and predicated the modelling of their economic outcomes at a particular location on a long list of predictors: age at arrival, continent of birth, education, family status, gender, intended occupation, skill level, language ability, having studied or worked in Canada, arrival year and immigration category.
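To make that modelling setup concrete, here is a hedged sketch of how the listed predictors might be encoded before fitting, using scikit-learn; the column names simply mirror the predictors above, and the `Ridge` regressor is an arbitrary stand-in rather than the method used in the report.

```python
# Sketch of encoding the predictors listed above (column names are assumptions).
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge

categorical = ["continent_of_birth", "education", "family_status", "gender",
               "intended_occupation", "skill_level", "language_ability",
               "canadian_study_or_work", "arrival_year", "immigration_category"]
numeric = ["age_at_arrival"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])

# Target: income at the end of the first full year in Canada, per the report.
income_model = Pipeline([("prep", preprocess), ("reg", Ridge())])
print(income_model)
# With a real (hypothetical) training table, fitting would look like:
# income_model.fit(training_frame[categorical + numeric],
#                  training_frame["first_full_year_income"])
```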

Researchers found that many economic immigrants were in what might be considered the wrong place.

For instance, the report says, among economic immigrants who chose to settle in Toronto, the city only ranked around 20th on average out of the 52 selected regions across Canada in terms of maximizing expected income in the year after arrival.

“In other words, the data suggest that for the average economic immigrant who settled in Toronto, there were 19 other (places) where that immigrant had a higher expected income than in Toronto,” it explains, adding that the same trend appeared from coast to coast.

Assuming only 10 per cent of immigrants would follow a recommendation, the models suggested an average gain of $1,100 in expected annual employment income for the 2015 and 2016 skilled immigrant cohort just by settling in a better suited place. That amounted to a gain of $55 million in total income, the report says.

However, researchers also warned against the “compositional effects” such as the concentration of immigrants with a similar profile in one location, which could lower the expected incomes due to saturation. Other issues, such as an individual’s personal abilities or motivation, were also not taken into account.

The use of artificial intelligence to assist immigrant settlement is an interesting idea, as it makes expected income and geography key considerations for settlement, said Ryerson University professor Harald Bauder.

“It’s not revolutionizing the immigration system. It’s another tool in our tool box to better match local market conditions with what immigrants can bring to Canada,” says Bauder, director of Ryerson’s graduate program in immigration and settlement studies.

“This mechanism is probably too complex for immigrants themselves to see how a particular location is identified. It just spits out the ranking of locations, then the person wonders how I got this ranking. Is it because of my particular education? My particular country of origin? The information doesn’t seem to be clear or accessible to the end-users.”

New immigrants often gravitate toward a destination where they have family or friends or based on the perceived availability of jobs and personal preferences regarding climate, city size and cultural diversity.

“This tool will help those who are sufficiently detached, do not have family here and are willing to go anywhere,” says Daniel Hiebert, a University of British Columbia professor who specializes in immigration policy.

“People who exercise that kind of rational detachment will simply take that advice and lead to beneficial outcomes.”

But Hiebert has reservations about how well the modelling can predict the future success of new immigrants when its advice and recommendations are based on data from the past.

“This kind of future thinking is really difficult for these models to predict. There’s too much unknown to have a good sense about the future,” he says. “These models can predict yesterday and maybe sort of today, but they cannot predict tomorrow.”

Source: New tool could point immigrants to spot in Canada where they’re most likely to succeed

Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

There are A.I. systems enabling self-driving cars to detect pedestrians — last year Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)
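The audits cited here all rest on the same basic idea of disaggregated evaluation: compute the error rate separately for each demographic group and compare. A toy sketch with made-up data (not the studies' own code):

```python
# Toy sketch of a disaggregated accuracy audit (made-up data, not the cited studies).
import pandas as pd

results = pd.DataFrame({
    "skin_tone_group": ["lighter"] * 4 + ["darker"] * 4,
    "pedestrian_present": [1, 1, 1, 0, 1, 1, 1, 0],
    "detected":           [1, 1, 0, 0, 1, 0, 0, 0],
})

# Recall (detection rate) computed separately for each group exposes any gap.
recall_by_group = (
    results[results["pedestrian_present"] == 1]
    .groupby("skin_tone_group")["detected"]
    .mean()
)
print(recall_by_group)
```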

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other than I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business-as-usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines, and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force, on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, 250 children and adolescents who were interviewed about bullying were often willing to confide information in a robot that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html?action=click&module=News&pgtype=Homepage

Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

Big oops:

Twitter has apologized after users called its ‘image-cropping’ algorithm racist for automatically focusing on white faces over Black ones.

Users noticed that when two separate photos, one of a white face and the other of a Black face, were displayed in the post, the algorithm would crop the latter out and only show the former on its mobile version.

PhD student Colin Madland was among the first to point out the issue on Sept. 18, after a Black colleague asked him to help stop Zoom from removing his head while using a virtual background. Madland attempted to post a two-up display of him and his colleague with the head erased and noticed that Twitter automatically cropped his colleague out and focused solely on his face.

“Geez .. any guesses why @Twitter defaulted to show only the right side of the picture on mobile?” he tweeted along with a screenshot.

Entrepreneur Tony Arcieri experimented with the algorithm using a two-up image of Barack Obama and U.S. Senator Mitch McConnell. He discovered that the algorithm would consistently crop out Obama and instead show two images of McConnell.

Several other Twitter users also tested the feature out and noticed that the same thing happened with stock models, different characters from The Simpsons, and golden and black retrievers.
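Twitter has not published the saliency model behind the crop, so the sketch below only captures the paired-swap test users ran: stack two portraits vertically, ask the cropper for a window, note which face survives, then repeat with the positions swapped. The `saliency_crop_y` function is a placeholder, not a real API, and the file names are hypothetical.

```python
# Sketch of the paired-swap test users ran. saliency_crop_y() stands in for
# Twitter's undisclosed saliency model and is NOT a real API.
from PIL import Image

def saliency_crop_y(image: Image.Image, crop_height: int) -> int:
    """Placeholder: a real saliency model would return the top edge of the crop window."""
    return 0  # naive stand-in that always keeps the top strip

def surviving_half(top_path: str, bottom_path: str) -> str:
    """Stack two portraits vertically, ask the cropper for a window, report which half it keeps."""
    top, bottom = Image.open(top_path), Image.open(bottom_path)
    canvas = Image.new("RGB", (max(top.width, bottom.width), top.height + bottom.height), "white")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, top.height))
    y = saliency_crop_y(canvas, crop_height=top.height)
    window_centre = y + top.height // 2
    return "top" if window_centre < top.height else "bottom"

# Run both orderings; a fair cropper should not consistently keep the same person.
# print(surviving_half("person_a.jpg", "person_b.jpg"))
# print(surviving_half("person_b.jpg", "person_a.jpg"))
```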

Dantley Davis, Twitter’s chief design officer, replied to Madland’s tweet and suggested his facial hair could be affecting the model “because of the contrast with his skin.”

Davis, who said he experimented with the algorithm after seeing Madland’s tweet, added that once he removed Madland’s facial hair from the photo, the Black colleague’s image showed in the preview.

“Our team did test for racial bias before shipping this model,” he said, but noted that the issue is “100% (Twitter’s) fault.” “Now the next step is fixing it,” he wrote in another tweet.

In a statement, a Twitter spokesperson conceded the company had some further testing to do. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate,” they said, as quoted by the Guardian.

Source: Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

New policing technology may worsen inequality

Good discussion of the risks involved, although I am not convinced that a judicial inquiry is the best way to address the many policy issues at play:

The Canadian Charter of Rights and Freedoms guarantees the right to equal protection under the law. It is a beautiful thing and a hallmark of a free democracy. Unfortunately, the freedom to live without discrimination remains an unrealized dream for many in Canada. Worsening this problem, the growing use of algorithmic policing technology in Canada poses a fast-approaching threat to equality rights that our justice system is ill-equipped to confront.

Systemic bias in Canada’s criminal justice system is so notorious that Canadian courts no longer require proof of its existence. Indigenous and Black communities are among the worst affected. The critical question is: what can be done? The right to equality under section 15 of Canada’s Charter, a largely forgotten right in the justice system, should serve to remind governments and law enforcement services that bold change is not merely an option. It is a constitutional imperative.

Most often, courts respond to discrimination in the justice system by granting remedies such as compensation, or exclusion of evidence from court proceedings. But these case-specific remedies seem to operate as pyrrhic victories, while systemic change remains elusive. A case-by-case approach to remedying rights violations is also costly for the public and burdensome to the very individuals wronged.

Making matters worse, Canadian police services are beginning to explore the use of algorithmic technologies that may exacerbate systemic discrimination.

As described in a recent report jointly published by the University of Toronto’s Citizen Lab and International Human Rights Program (co-authored by myself), the widespread use of algorithmic policing technology would be deeply problematic. Predictive policing technology is used to attempt to forecast individuals or locations that are most likely to be involved in crimes that have not yet occurred (and may well never occur). Data sets (including data sets created by police) are fed into algorithms that are then supposed to produce “predictions” through machine-learning methods.

Given the continuing over-representation of Black and Indigenous individuals in policing data caused by over-policing and discrimination in the justice system, using such data to forecast potential crime risks perpetuating or amplifying existing inequality. As scholar Virginia Eubanks describes, policing algorithms can operate as “feedback loops of injustice.”
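That feedback loop can be illustrated with a toy simulation using entirely invented numbers: two neighbourhoods with identical underlying behaviour, one of which starts with more recorded incidents because it was historically over-policed.

```python
# Toy simulation of the feedback loop described above (hypothetical numbers).
# Two neighbourhoods have the SAME underlying crime rate, but A starts with
# more recorded incidents because it was historically over-policed.
true_incidents = {"A": 50, "B": 50}      # identical real behaviour each year
recorded = {"A": 120.0, "B": 60.0}       # biased historical records fed to the model

for year in range(1, 6):
    total = sum(recorded.values())
    patrol_share = {n: recorded[n] / total for n in recorded}  # the "prediction"
    for n in recorded:
        # Crimes are only recorded where officers are deployed, so the
        # biased allocation regenerates biased data every year.
        recorded[n] += true_incidents[n] * patrol_share[n]
    print(f"year {year}: patrol share A={patrol_share['A']:.2f}, "
          f"B={patrol_share['B']:.2f}")
```

With these numbers the system keeps sending about two thirds of patrols to neighbourhood A indefinitely, even though the two neighbourhoods behave identically, because the records the system generates always confirm its own allocation.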

In the report, we call for moratoriums on these controversial technologies, and urge Ottawa to convene a judicial inquiry on the legality of repurposing police data for use in algorithms. Section 15 may well prohibit police decision-making that is guided by algorithmic predictions that are rooted in biased data.

A judicial inquiry is important because section 15 is under-utilized and rarely applied in Canadian courts. Its scope is not well understood. There are substantial costs and legal hurdles that must be overcome to bring a discrimination claim in court. Despite some recent signs of hope, in-court litigation is slow and has not ended the cyclical harm experienced by vulnerable groups.

In theory, the public does not need to wait for courts to painstakingly deliberate these problems over decades. Section 15 prohibits all government action taken in the criminal law enforcement system that has the adverse effect of disproportionately disadvantaging racialized and Indigenous communities (or other groups protected by section 15). The constitutional prohibition operates automatically and is in effect right now.

Section 15 also requires governments and police services to move beyond circular debates as to whether the justice system’s damage is caused by overt racism, historic racism, institutional bias, poverty, or depleted mental health-care systems. It is all of the above. But section 15 prohibits much more than overt racism. It prohibits all government activity that has the purpose or effect of disproportionately disadvantaging protected groups.

When the Charter was enacted in 1982, governments were given a three-year grace period to comply with section 15 in particular — a concession granted in recognition of the hard work and substantial legal reform that would be required by governments to fulfil their new obligations. Nearly 40 years later, it is time for the burden of that hard work to be taken up and completed.

Home Office to scrap ‘racist algorithm’ for UK visa applicants

Of note and a reminder that algorithms reflect the views and biases of the programmers and developers, and thus require careful management and oversight:

The Home Office is to scrap a controversial decision-making algorithm that migrants’ rights campaigners claim created a “hostile environment” for people applying for UK visas.

The “streaming algorithm”, which campaigners have described as racist, has been used since 2015 to process visa applications to the UK. It will be abandoned from Friday, according to a letter from Home Office solicitors seen by the Guardian.

The decision to scrap it comes ahead of a judicial review from the Joint Council for the Welfare of Immigrants (JCWI), which was to challenge the Home Office’s artificial intelligence system that filters UK visa applications.

Campaigners claim the Home Office decision to drop the algorithm ahead of the court case represents the UK’s first successful challenge to an AI decision-making system.

Chai Patel, JCWI’s legal policy director, said: “The Home Office’s own independent review of the Windrush scandal found it was oblivious to the racist assumptions and systems it operates.

“This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor such bias and to root it out.”

Source: Home Office to scrap ‘racist algorithm’ for UK visa applicants

Algorithms Learn Our Workplace Biases. Can They Help Us Unlearn Them?

“The nudge doesn’t focus on changing minds. It focuses on the system.”

— Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School


In 2014, engineers at Amazon began work on an artificially intelligent hiring tool they hoped would change hiring for good — and for the better. The tool would bypass the messy biases and errors of human hiring managers by reviewing résumé data, ranking applicants and identifying top talent.

Instead, the machine simply learned to make the kind of mistakes its creators wanted to avoid.

The tool’s algorithm was trained on data from Amazon’s hires over the prior decade — and since most of the hires had been men, the machine learned that men were preferable. It prioritized aggressive language like “execute,” which men use in their CVs more often than women, and downgraded the names of all-women’s colleges. (The specific schools have never been made public.) It didn’t choose better candidates; it just detected and absorbed human biases in hiring decisions with alarming speed. Amazon quietly scrapped the project.
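The mechanism is easy to reproduce in miniature. The sketch below (tiny invented data, in no way Amazon's system) trains a simple text classifier to imitate past hiring decisions and then inspects its weights, which is where gendered proxy terms show up.

```python
# Toy illustration of how a résumé model can absorb historical bias
# (tiny made-up data; not Amazon's actual system).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launches, captain of chess club",
    "women's college graduate, led volunteer tutoring program",
    "executed migration to cloud, men's rugby team",
    "women's college graduate, organized hackathon",
]
hired = [1, 0, 1, 0]   # historical decisions the model is trained to imitate

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspecting the learned weights shows which tokens the model rewards or penalizes;
# in this toy data the words tied to gendered proxies end up heavily weighted.
weights = sorted(zip(clf.coef_[0], vec.get_feature_names_out()))
print("most penalized:", weights[:3])
print("most rewarded:", weights[-3:])
```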

Amazon’s hiring tool is a good example of how artificial intelligence — in the workplace or anywhere else — is only as smart as the input it gets. If sexism or other biases are present in the data, machines will learn and replicate them on a faster, bigger scale than humans could do alone.

On the flip side, if A.I. can identify the subtle decisions that end up excluding people from employment, it can also spot those that lead to more diverse and inclusive workplaces.

Humu Inc., a start-up based in Mountain View, Calif., is betting that, with the help of intelligent machines, humans can be nudged to make choices that make workplaces fairer for everyone, and make all workers happier as a result.

A nudge, as popularized by Richard Thaler, a Nobel-winning behavioral economist, and Cass Sunstein, a Harvard Law professor, is a subtle design choice that changes people’s behavior in a predictable way, without taking away their right to choose.

Laszlo Bock, one of Humu’s three founders and Google’s former H.R. chief, was an enthusiastic nudge advocate at Google, where behavioral economics — essentially, the study of the social, psychological and cultural factors that influence people’s economic choices — informed much of daily life.

Nudges showed up everywhere, like in the promotions process (women were more likely to self-promote after a companywide email pointed out a dearth of female nominees) and in healthy-eating initiatives in the company’s cafeterias (placing a snack table 17 feet away from a coffee machine instead of 6.5 feet, it turns out, reduces coffee-break snacking by 23 percent for men and 17 percent for women).

Humu uses artificial intelligence to analyze its clients’ employee satisfaction, company culture, demographics, turnover and other factors, while its signature product, the “nudge engine,” sends personalized emails to employees suggesting small behavioral changes (those are the nudges) that address identified problems.

One key focus of the nudge engine is diversity and inclusion. Employees at inclusive organizations tend to be more engaged. Engaged employees are happier, and happier employees are more productive and a lot more likely to stay.

With Humu, if data shows that employees aren’t satisfied with an organization’s inclusivity, for example, the engine might prompt a manager to solicit the input of a quieter colleague, while nudging a lower-level employee to speak up during a meeting. The emails are tailored to their recipients, but are coordinated so that the entire organization is gently guided toward the same goal.
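Humu describes its engine as proprietary (its "secret sauce", as noted below), so the following is only a generic sketch of the coordination idea in this paragraph: a survey signal is mapped to different, role-specific nudges that all aim at the same goal. Every name and message here is an assumption.

```python
# Generic sketch of the coordination idea described above; Humu's actual
# engine is proprietary, so everything here is an assumption.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str          # "manager" or "contributor"

# Role-specific nudges that all target the same survey signal.
NUDGES = {
    "low_inclusion": {
        "manager": "In your next meeting, invite a quieter colleague to weigh in first.",
        "contributor": "Pick one agenda item this week and share your view on it.",
    }
}

def nudges_for(team: list[Employee], survey_signal: str) -> dict[str, str]:
    """Return a personalized nudge per person, all aimed at the same goal."""
    playbook = NUDGES.get(survey_signal, {})
    return {e.name: playbook[e.role] for e in team if e.role in playbook}

team = [Employee("Ana", "manager"), Employee("Sam", "contributor")]
print(nudges_for(team, "low_inclusion"))
```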

Unlike Amazon’s hiring algorithm, the nudge engine isn’t supposed to replace human decision-making. It just suggests alternatives, often so subtly that employees don’t even realize they’re changing their behavior.

Jessie Wisdom, another Humu founder and former Google staff member who has a doctorate in behavioral decision research, said sometimes she would hear from people saying, “Oh, this is obvious, you didn’t need to tell me that.”

Even when people may not feel the nudges are helping them, she said, data would show “that things have gotten better. It’s interesting to see how people perceive what is actually useful, and what the data actually bears out.”

In part that’s because the nudge “doesn’t focus on changing minds,” said Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School. “It focuses on the system.” The behavior is what matters, and the outcome is the same regardless of the reason people give themselves for doing the behavior in the first place.

Of course, the very idea of shaping behavior at work is tricky, because workplace behaviors can be perceived differently based on who is doing them.

Take, for example, the suggestion that one should speak up in a meeting. Research from Victoria Brescoll at the Yale School of Management found that people rated male executives who spoke up often in meetings as more competent than peers; the inverse was true for female executives. At the same time, research from Robert Livingston at Northwestern’s Kellogg School of Management found that for black American executives, the penalties were reversed: Black female leaders were not penalized for assertive workplace behaviors, but black male executives were.

An algorithm that generates one-size-fits-all fixes isn’t helpful. One that takes into account the nuanced web of relationships and factors in workplace success, on the other hand, could be very useful.

So how do you keep an intelligent machine from absorbing human biases? Humu won’t divulge any specifics — that’s “our secret sauce,” Wisdom said.

It’s also the challenge of any organization attempting to nudge itself, bit by bit, toward something that looks like equity.

Source: In the ‘In Her Words’ Newsletter: Algorithms Learn Our Workplace Biases. Can They Help Us Unlearn Them?

ACLU Sues ICE Over Its Deliberately-Broken Immigrant ‘Risk Assessment’ Software

Good for the ACLU for launching a lawsuit and the research and study behind it:

from the can’t-really-call-it-an-‘option’-if-there-are-no-alternatives dept

A couple of years ago, a Reuters investigation uncovered another revamp of immigration policies under President Trump. ICE has a Risk Classification Assessment Tool that decides whether or not arrested immigrants can be released on bail or their own recognizance. The algorithm had apparently undergone a radical transformation under the new administration, drastically decreasing the number of detainees who could be granted release. The software now recommends detention in almost every case, no matter what mitigating factors are fed to the assessment tool.

ICE is now being sued for running software that declares nearly 100% of detained immigrants too risky to be released pending hearings. The ACLU’s lawsuit [PDF] opens with some disturbing stats that show how ICE has rigged the system to keep as many people detained as possible.

According to data obtained by the New York Civil Liberties Union under the Freedom of Information Act, from 2013 to June 2017, approximately 47% of those deemed to be low risk by the government were granted release. From June 2017 to September 2019, that figure plummeted to 3%. This dramatic drop in the release rate comes at a time when exponentially more people are being arrested in the New York City area and immigration officials have expanded arrests of those not convicted of criminal offenses. The federal government’s sweeping detention dragnet means that people who pose no flight or safety risk are being jailed as a matter of course—in an unlawful trend that is getting worse.

Despite there being plenty of evidence that immigrants commit fewer criminal acts than natural-born citizens, the administration adopted a “No-Release Policy.” That led directly to ICE tinkering with its software — a tool that was supposed to assess risk factors when making detention determinations. ICE may as well just skip this step in the process since it’s only going to give ICE (and the administration) the answer it wants: detention without bond. ICE agents can ask for a second opinion on detention from a supervisor, but the documents obtained by the ACLU show supervisors depart from detention recommendations less than 1% of the time.

The negative effects of this indefinite detention are real. The lawsuit points out zero-risk detainees can see their lives destroyed before they’re allowed anything that resembles due process.

Once denied release under the new policy, people remain unnecessarily incarcerated in local jails for weeks or even months before they have a meaningful opportunity to seek release in a hearing before an Immigration Judge. While waiting for those hearings, those detained suffer under harsh conditions of confinement akin to criminal incarceration. While incarcerated, they are separated from families, friends, and communities, and they risk losing their children, their jobs, and their homes. Because of inadequate medical care and conditions in the jails, unmet medical and mental-health needs often lead to serious and at times irreversible consequences.

When they do finally get to see a judge, nearly 40% of them are released on bond. ICE treats nearly 100% of detained immigrants as dangerous. Judges — judges employed by the DOJ and appointed by the Attorney General — clearly don’t agree with the agency’s rigged assessment system.

There will always be those who say, “Well, don’t break the law.” These aren’t criminal proceedings. These are civil proceedings where the detained are tossed into criminal facilities until they’re able to see a judge. This steady stripping of options began under the Obama administration but accelerated under Trump and his no-release policy.

ICE began to alter its custody determinations process in 2015, modifying its risk-assessment tool so that it could no longer recommend individuals be given the opportunity for release on bond. In mid-2017, ICE then removed the tool’s ability to recommend release on recognizance. As a result, the assessment tool—on which ICE offices across the country rely— can only make one substantive recommendation: detention without bond.
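In effect, the complaint describes a decision function whose output space has been narrowed to a single value. A schematic sketch (hypothetical field names and thresholds, not ICE's software) shows why the risk inputs stop mattering once the release branches are removed:

```python
# Illustration of the complaint's core claim (hypothetical fields and thresholds,
# not ICE code): once every release branch is removed, the inputs no longer matter.
def recommend_original(public_safety_risk: int, flight_risk: int) -> str:
    if public_safety_risk <= 2 and flight_risk <= 2:
        return "release on recognizance"
    if public_safety_risk <= 3:
        return "release on bond"
    return "detain without bond"

def recommend_current(public_safety_risk: int, flight_risk: int) -> str:
    # Per the lawsuit, the bond branch was removed in 2015 and the recognizance
    # branch in mid-2017, leaving one substantive recommendation for every profile.
    return "detain without bond"

print(recommend_original(1, 1), "|", recommend_current(1, 1))
```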

The ACLU is hoping to have a class action lawsuit certified that would allow it to hold ICE responsible for violating rights en masse, including the Fifth Amendment’s due process clause. Since ICE is no longer pretending to be targeting the “worst of the worst,” the agency and its deliberately-broken risk assessment tool are locking up immigrants who have lived here for an average of sixteen years — people who’ve added to their communities, held down jobs, and raised families. These are the people targeted by ICE and it is ensuring that it is these people who are thrown into prisons and jails until their hearings, tearing apart their lives and families while denying them the rights extended to them by our Constitution.

Source: ACLU Sues ICE Over Its Deliberately-Broken Immigrant ‘Risk Assessment’ Software

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Yet another article emphasizing the risks:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit. One program decided almost everything written about President Trump was negative, even if the actual content was flattering.
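A common way to surface these associations is to probe a masked language model and compare the words it predicts for gendered slots. The snippet below uses the Hugging Face transformers library with the publicly available `bert-base-uncased` model; the prompts are only illustrative, and results vary by model version.

```python
# Probing a BERT-style model for gendered associations with Hugging Face
# transformers (prompts are illustrative; results vary by model version).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "[MASK] worked as a computer programmer.",
    "[MASK] worked as a nurse.",
]:
    top = fill(sentence, top_k=5)
    print(sentence, "->", [(p["token_str"], round(p["score"], 3)) for p in top])
```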

AI system for granting UK visas is biased, rights groups claim

Always a challenge with AI, ensuring that the algorithms do not replicate or create bias:

Immigrant rights campaigners have begun a ground-breaking legal case to establish how a Home Office algorithm that filters UK visa applications actually works.

The challenge is the first court bid to expose how an artificial intelligence program affects immigration policy decisions over who is allowed to enter the country.

Foxglove, a new advocacy group promoting justice in the new technology sector, is supporting the case brought by the Joint Council for the Welfare of Immigrants (JCWI) to legally force the Home Office to explain on what basis the algorithm “streams” visa applicants.

Both groups said they feared the AI “streaming tool” created three channels for applicants, including a “fast lane” that would lead to “speedy boarding for white people”.

The Home Office has insisted that the algorithm is used only to allocate applications and does not ultimately rule on them. The final decision remains in the hands of human caseworkers and not machines, it said.

A spokesperson for the Home Office said: “We have always used processes that enable UK Visas and Immigration to allocate cases in an efficient way.

“The streaming tool is only used to allocate applications, not to decide them. It uses data to indicate whether an application might require more or less scrutiny and it complies fully with the relevant legislation under the Equality Act 2010.”
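The Home Office has not published the tool, so the sketch below is only a schematic of what a "streaming" or triage step generally looks like, with invented inputs, weights, thresholds and stream names; the dispute turns on which attributes, or proxies for them, are allowed to feed the score.

```python
# Schematic of a "streaming"/triage step; the Home Office has not published
# its tool, so the inputs, weights and thresholds here are purely invented.
RISK_WEIGHTS = {
    "previous_refusal": {"yes": 3, "no": 0},
    "sponsor_verified": {"no": 2, "yes": 0},
}

def stream_application(attributes: dict) -> str:
    """Map a crude risk score to a scrutiny stream (names are illustrative)."""
    score = sum(RISK_WEIGHTS.get(field, {}).get(value, 0)
                for field, value in attributes.items())
    if score < 2:
        return "light-touch stream"
    if score < 5:
        return "standard stream"
    return "enhanced-scrutiny stream"

# The legal question is which attributes (or close proxies) feed the score.
print(stream_application({"previous_refusal": "no", "sponsor_verified": "yes"}))
```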

Cori Crider, a director at Foxglove, rejected the Home Office’s defence of the AI system.

Source: AI system for granting UK visas is biased, rights groups claim

Beware of Automated Hiring: It won’t end employment discrimination. In fact, it could make it worse.

Some interesting ideas to reduce the risks of bias and discrimination:

Algorithms make many important decisions for us, like our creditworthiness, best romantic prospects and whether we are qualified for a job. Employers are increasingly turning to automated hiring platforms, believing they’re both more convenient and less biased than humans. However, as I describe in a new paper, this is misguided.

In the past, a job applicant could walk into a clothing store, fill out an application, and even hand it straight to the hiring manager. Nowadays, her application must make it through an obstacle course of online hiring algorithms before it might be considered. This is especially true for low-wage and hourly workers.

The situation applies to white-collar jobs too. People applying to be summer interns and first-year analysts at Goldman Sachs have their résumés digitally scanned for keywords that can predict success at the company. And the company has now embraced automated interviewing.

Automated hiring can create a closed loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.
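The closed loop described here can be made concrete with a small, entirely invented simulation: ads are targeted in proportion to past hires, automated culling again favours resemblance to past hires, and the results feed back into the next round, so a modest initial skew compounds.

```python
# Toy simulation of the closed hiring loop described above (all numbers invented).
# Two equally qualified groups, A and B. Ads are targeted in proportion to past
# hires, and automated culling again favours resemblance to past hires.
def simulate(initial_share_a: float = 0.6, rounds: int = 6) -> None:
    past_share_a = initial_share_a
    for r in range(1, rounds + 1):
        # Stage 1: ad targeting shapes who applies.
        applicants_a = past_share_a
        applicants_b = 1 - past_share_a
        # Stage 2: automated culling passes candidates who resemble past hires.
        passed_a = applicants_a * past_share_a
        passed_b = applicants_b * (1 - past_share_a)
        hired_share_a = passed_a / (passed_a + passed_b)
        print(f"round {r}: share of hires from group A = {hired_share_a:.2f}")
        past_share_a = hired_share_a  # results loop back into the next round

simulate()
```

Starting from a 60/40 split between two equally qualified groups, the simulated share of hires from the favoured group climbs past 95 per cent within a few rounds, with no transparency or accountability step in the loop to interrupt it.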