‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Legitimate concerns about AI bias (a bias that individual human decision-makers also exhibit) also need to address “noise,” the variability in the decisions different people reach on comparable cases:

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, the act is expected to introduce strict rules around how AI is used to filter job, university or welfare applications, or, in the case of lenders, to assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
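
The mechanism is easy to reproduce on synthetic data. The sketch below (invented numbers and feature names, not any lender’s actual system) trains a “blind” model on income and postcode only, using historical approvals that were skewed against one group; because postcode correlates with group membership, the model still approves the two groups at very different rates.

```python
# Minimal sketch, synthetic data only: a model that never sees the protected
# attribute can still reproduce biased outcomes through a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical population: group 1 is the historically disadvantaged group.
group = rng.integers(0, 2, n)
# Postcode acts as a proxy: strongly correlated with group membership.
postcode = (rng.random(n) < 0.2 + 0.6 * group).astype(float)
income = rng.normal(50 - 8 * group, 10, n)

# Historical approvals were biased against the group, on top of income.
past_approved = (income - 12 * group + rng.normal(0, 5, n) > 38).astype(int)

# Train a "blind" model: income and postcode only, never the group itself.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The rates still diverge sharply, because postcode stands in for the group.
```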

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.
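
One way to pry open such a black box, at least crudely, is to probe it: change one input at a time and see whether the decision flips. The sketch below is a minimal illustration with a synthetic stand-in model and hypothetical features, not a description of any bank’s actual tooling.

```python
# Minimal sketch with a stand-in "black box" and hypothetical feature names:
# probe a single decision by changing one input at a time to see what, if
# anything, the applicant could have done differently.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5_000
# Hypothetical features: [income, existing_debt, years_at_address]
X = np.column_stack([rng.normal(50, 15, n),
                     rng.normal(20, 10, n),
                     rng.integers(0, 20, n).astype(float)])
y = ((X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 5, n)) > 35).astype(int)
model = GradientBoostingClassifier().fit(X, y)   # opaque model standing in for the lender's

applicant = np.array([[42.0, 30.0, 1.0]])
print("baseline decision:", model.predict(applicant)[0])

probes = {
    "income +10k": applicant + [[10, 0, 0]],
    "debt -10k":   applicant + [[0, -10, 0]],
    "both changes": applicant + [[10, -10, 0]],
}
for label, x in probes.items():
    print(f"{label:>12}: decision = {model.predict(x)[0]}")
# Probing like this gives a crude answer to "what would I have had to change?",
# something an opaque scoring system rarely offers applicants on its own.
```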

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for, and controlling for, discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.
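
Matovski’s guarantee can be phrased as a simple test. The sketch below assumes a binary protected attribute and a generic fitted classifier (the function and column layout are hypothetical, not causaLens’s implementation): flip the protected attribute for every applicant and confirm that no decision changes.

```python
# Minimal sketch of a counterfactual-invariance check: include the protected
# attribute as an input, then verify that flipping it never changes a decision.
import numpy as np

def counterfactual_invariance_rate(predict, X, protected_col):
    """Share of rows whose decision is unchanged when the (binary 0/1)
    protected attribute in column `protected_col` is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.mean(predict(X) == predict(X_flipped))

# Hypothetical usage with any fitted classifier `clf` whose column 2 encodes
# the protected attribute:
#   rate = counterfactual_invariance_rate(clf.predict, X_test, protected_col=2)
#   assert rate == 1.0, "decisions change when only the protected attribute changes"
```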

While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”

Source: ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

‘Racism plays a role in immigration decisions,’ House Immigration Committee hears

While it is always important to recognize that bias and discrimination can influence decisions, different acceptance rates can also reflect other factors, including that misrepresentation may be more prevalent in some regions than in others.

Training guides and materials need to provide illustrations and examples. Meurrens is one of the few lawyers who regularly looks at the data, but his challenge to the training guide’s statement that “Kids in India are not back-packers as they are in Canada” is odd, given that the data likely confirms it.

Moreover, the call for more transparency, welcome and needed, may provide opportunities for the more unscrupulous to “game the system.”

“Kids in India are not back-packers as they are in Canada” reads a note appended to a slide in a presentation used to train Canadian immigration officials in mid-2019.

Immigration lawyer Steven Meurrens said he received, through a recent access to information request, a copy of the presentation, dated April 2019 and titled “India [Temporary Resident Visa]s: A quick introduction,” which was used in a training session by Immigration, Refugees and Citizenship Canada (IRCC) officials. He shared the full results of the request with The Hill Times.

The slides, which detail the reasons why Indians may apply for a Temporary Resident Visa (TRV) and what officials should look for in applications, have notes appended to them, as if they were speaking notes for the person giving the presentation. On one slide detailing potential reasons for travel to Canada, the notes read: “Kids in India are not back-packers as they are in Canada.”

In an interview, Meurrens spoke to an apparent double standard for Indian people looking to travel to Canada.

“It drives me nuts, because I’ve often thought that, as a Canadian, a broke university student, I could hop on a plane, go anywhere, apply for visas, and no one would be like, ‘That’s not what Canadians do,’” Meurrens said, adding that he’s representing people from India who did in fact intend to come to Canada to backpack through the country.

A screenshot of the page wherein an IRCC presentation notes that ‘Kids in India are not back-packers as they are in Canada.’ Image courtesy of IRCC

“To learn that people are trained specifically that Indian people don’t backpack” was “over the top,” he said. It reminded him of another instance of generalizations made within IRCC about different nationalities, when an ATIP he received in 2015 showed that training materials within the department stated that a Chinese person marrying a non-Chinese person was a likely indicator of marriage fraud.

At the time, the department said that document was more than five years old, and no longer in use.

“[I’d like us] to get to a state where someone’s country of origin doesn’t dictate the level of procedural fairness that they’ll get and how they’re assessed,” he said.

Systemic racism within Immigration, Refugees and Citizenship Canada (IRCC) is not new; evidence of it was uncovered through what is colloquially known as the Pollara report. This report, conducted by Pollara Strategic Insights and released in 2021, was the result of focus groups conducted with IRCC employees to better understand “current experiences of racism within the department.”

The report found that within the department, the phrase “the dirty 30” was widely used to refer to certain African nations, and that Nigerians in particular were stereotyped as “particularly corrupt or untrustworthy.”

As the House Immigration Committee heard last week, there remains much work to be done to combat systemic racism within IRCC.

On March 22, the House Committee on Immigration and Citizenship began its study on differential outcomes in immigration decisions at IRCC, and Immigration Minister Sean Fraser (Central Nova, N.S.) appeared at the committee on March 24. Other issues brought up by witnesses included a lack of transparency from the department, as well as concerns about systemic racism and bias being embedded in any artificial intelligence (AI) the department uses to assess applications.

From students in Nigeria being subjected to English-language proficiency tests when they hail from an English-speaking country, to the differential treatment of some groups of refugees versus others, to which groups are eligible for resettlement support and which are not, the committee heard several examples of differential treatment of potential immigrants to Canada due to systemic racism and bias within IRCC.

“I know it’s very uncomfortable raising the issue of racism,” said Dr. Gideon Christian, president of the African Scholars Initiative and an assistant professor of AI and law at the University of Calgary.

“But the fact is that we need to call racism for what it is—as uncomfortable as it might be. … Yes, this is a clear case of racism. And we should call it that. We should actually be having conversations around this problem with a clear framework as to how to address it,” he said.

According to Christian, Nigerian students looking to come to Canada to study through the Nigerian Study Express program are subjected to an English-language proficiency test, despite the fact that the official language in Nigeria is English, that English is the language used in all official academic institutions there, and that academic institutions in Canada do not require a language test from Nigerian students for their admission.

A spokesperson for IRCC said the department does not single out Nigeria in its requirement for a language test.

“IRCC is committed to a fair and non-discriminatory application process,” reads the written statement.

“While language testing is not a requirement to be eligible for a study permit, individual visa offices may require them as part of their review of whether the applicant is a bona fide student. This includes many applicants from English-speaking countries, including a large number from India and Pakistan, two nations where English is widely taught and top countries for international students in Canada.”

“Nigeria is not singled out by the requirement of language tests for the Nigeria Student Express initiative,” the spokesperson said.

Systemic racism embedded in AI

Christian, who has spent the last three years researching algorithmic racism, expressed concern that the “advanced analytics” IRCC uses to triage its immigration applications—including the Microsoft Excel-based software system called Chinook—has systemic racism and bias embedded within it.

“IRCC has in its possession a great deal of historical data that can enable it to train AI and automate its visa application processes,” Christian told the committee. As revealed by the Pollara report, systemic bias, racism and discrimination does account for differential treatment of immigration applications, particularly when it comes to study visa refusals for those applying from Sub-Saharan Africa, he said.

“External [studies] of IRCC—especially the Pollara report—have revealed systemic bias, racism and discrimination in IRCC processing of immigration applications. Inevitably, this historical data [in the possession] of IRCC is tainted by the same systemic bias, racism and discrimination. Now the problem is that the use of these tainted data to train any AI algorithm will inevitably result in algorithmic racism. Racist AI, making immigration decisions,” he said.

The Pollara report echoed these concerns in a section that laid out a few ways processes and procedures adopted for expediency’s sake “have taken on discriminatory undertones.” This included “concern that increased automation of processing will embed racially discriminatory practices in a way that will be harder to see over time.”

Meurrens, who also appeared at committee on March 22, said a lack of transparency from the government impedes the public’s ability to assess whether it is indeed making progress on the issue of addressing systemic racism or not.

He said he’d like to see the department publish Access to Information results pertaining to internal manuals, visa office specific training guides, and other similar documents as downloadable PDFs on its website, pointing out this is how the provincial government of B.C. releases its ATIP responses. He also said he thinks IRCC should publish “detailed explanations and reports of how its artificial intelligence triaging and new processing tools work in practice.”

“Almost everything public today [about the AI programs] has been obtained through access to information results that are heavily redacted and which I don’t believe present the whole picture,” he said.

Whether the concerns were actually reflected in the AI itself, Meurrens said, could not be known without more transparency from the department.

“In the absence of increased transparency, concerns like this are only growing,” he said.

Fraser: racism is a ‘sickness’

On Thursday, Fraser told the committee that he agrees that racism is a problem within the department, calling it a “sickness in our society.”

“There are examples of racism not just in one department but across different levels of government. It’s a sickness in our society that limits the productivity of human beings who want to fully participate in our communities. IRCC is not immune from that social phenomenon that hampers our success as a nation, and we have to do everything we can to eradicate racism, not just from our department,” he said.

Fraser said there is “zero tolerance for racism, discrimination, or harassment of any kind,” but acknowledged those problems do exist within the department.

The minister pointed towards the anti-racism task force which was created in 2020 and “guides the department’s strategy to eliminate racism and applies an anti-racism lens” to the department’s work. He also said IRCC has been “actively reviewing its human resource systems so that Indigenous, Black, racialized peoples and persons with disabilities are better represented across IRCC at every level.”

Fraser also referenced a three-year anti-racism strategy for the department, which includes plans to implement mandatory bias training, anti-racist work and training objectives, and trauma coaching sessions for Black employees and managers to recognize the impacts of racism on mental health, among other things.

“It’s not lost on me that there have been certain very serious issues that have pertained to IRCC,” he said.

These measures are different from the ones witnesses and opposition MPs are calling for, however.

NDP MP Jenny Kwan (Vancouver East, B.C.) said her top priority on this topic is to convince the government to put an independent ombudsperson in place whose job it would be to assess IRCC policies and the application of said policies as they relate to differential treatment, systemic racism, and gender biases.

“Let’s dig deep. Have an officer of the House do this work completely independent from the government,” she said in an interview with The Hill Times.

At the March 22 meeting, Kwan asked all six witnesses to state for the record if they agreed that the government should put such an ombudsperson in place. All six witnesses agreed.

Kwan questioned the ability of the department to conduct its own internal reviews.

“As the minister said [at committee], he’s undertaking a variety of measures to address these issues and to see how they can rectify it. … But how deeply is it embedded? And if it’s done internally, then how independent is it?” she wondered.

Fraser said the implementation of an ombudsperson was something he would consider after reading the committee’s report.

Conservative MP Jasraj Singh Hallan (Calgary Forest Lawn, Alta.), his party’s immigration critic and the vice-chair of the committee, agreed with Meurrens’ calls for increased transparency. “We need more evidence that the government is serious about this,” he said in an interview.

Hallan also said he wants to see consequences for those within the department who participated in the racism documented by the Pollara report.

“[Fraser] should start by approaching those employees of IRCC that made these complaints from that Pollara report and find out who is making these remarks. Reprimand them, fire them if they need to be,” he said.

Source: ‘Racism plays a role in immigration decisions,’ House Immigration Committee hears

New tool could help immigrants decide where to live in Canada

Of interest. A useful experiment, and it will be important to see how much it is used and the extent to which it improves immigrant outcomes:

Researchers are working on a new tool that will help newcomers identify which Canadian city they are most likely to be successful in.

Most immigrants end up choosing to live in one of Canada’s major cities. In fact, more than half of all immigrants and recent immigrants to Canada currently live in Toronto, Montreal or Vancouver, according to Statistics Canada.

However, there may be better opportunities for these immigrants elsewhere. Perhaps a film director or a tech worker may be suited to Toronto, but a petroleum engineer may not.

Since 2018, Immigration, Refugees and Citizenship Canada (IRCC) has been working on a research project alongside the Immigration Policy Lab (IPL) at Stanford University that may pave the way for this tool, dubbed GeoMatch, to come to fruition.

The project attempts to repurpose an algorithm that is used in resettlement efforts to work for Canadian immigration. It uses historical data to help immigrants choose where they might thrive the most, IRCC spokesperson Isabelle Dubois told CIC News.

“The study suggested that prospective economic immigrants who followed the GeoMatch recommendation would be more likely to find a well-paying job after they arrived,” Dubois said in an email.

“Currently, newcomers tend to gravitate to cities they’ve heard of, which tend to be the largest. Yet such a tool could help change that by promoting different localities across Canada, beyond major urban centres like Toronto and Vancouver.”

According to their website, GeoMatch uses machine learning capabilities to make its predictions. It considers factors such as previous immigrants’ work history and education, as well as personal characteristics. It then finds patterns in the data by focusing on how these factors were related to economic success in different locations.

GeoMatch may then be able to predict an immigrant’s likelihood of success in various locations across Canada.
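
A rough sketch of that idea follows, with entirely synthetic data, a made-up feature set and a handful of region names chosen for illustration; GeoMatch’s actual features and models are more elaborate. The pattern is the same: learn how profiles map to income in each region from historical records, then rank regions for a new profile.

```python
# Minimal sketch of region ranking from historical outcomes (synthetic data,
# illustrative region names, not IRCC's or Stanford's actual model).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
regions = ["Toronto", "Calgary", "Halifax", "Waterloo"]

# Hypothetical historical records per region:
# features = [years_of_experience, works_in_tech], target = first-year income.
models = {}
for i, region in enumerate(regions):
    X = np.column_stack([rng.normal(8, 3, 2000), rng.integers(0, 2, 2000)])
    # Each synthetic region rewards experience and tech skills differently.
    y = 40 + 2 * X[:, 0] + (5 + 4 * i) * X[:, 1] + rng.normal(0, 8, 2000)
    models[region] = GradientBoostingRegressor().fit(X, y)

# Rank regions for one newcomer profile by predicted income.
newcomer = np.array([[10, 1]])     # 10 years' experience, tech occupation
ranked = sorted(regions, key=lambda r: -models[r].predict(newcomer)[0])
for rank, region in enumerate(ranked, 1):
    print(rank, region, round(float(models[region].predict(newcomer)[0]), 1))
```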

“Research suggests that an immigrant’s initial arrival location plays a key role in shaping their economic success. Yet immigrants currently lack access to personalized information that would help them identify optimal destinations,” said a report published by the IPL.

The report reiterates that the approach is motivated by data that show an immigrant’s first landing location is influential in their outcomes.

“We find that for many economic immigrants the chosen [first] location is far from optimal in terms of expected income,” the report adds.

The report suggests that many economic immigrants choose Toronto simply because that is all they know of Canada, but they may be in “the wrong place” for their skillset. For example, Toronto is ranked number 20 out of 52 regions in terms of maximizing income in the year after arrival. This means that for many immigrants, there were 19 other regions where they would have likely made a higher income.

Immigrants may, of course, choose not to use the tool. However, it is worth mentioning that GeoMatch takes into consideration not just “data-driven predictions” but immigrants’ location preferences as well.

Source: New tool could help immigrants decide where to live in Canada

New tool could point immigrants to spot in Canada where they’re most likely to succeed

A neat example of algorithms helping immigrants assess their prospects, although human factors such as the presence of family members, community-specific food shopping and the like may be more determinative. But it is good that IRCC is exploring this approach. It is more sophisticated than the work I was involved in to develop the Canadian Index for Measuring Integration. Some good comments by Harald Bauder and Dan Hiebert:

Where should a newcomer with a background in banking settle in Canada?

What about an immigrant who’s an oil-production engineer?

Or a filmmaker?

Most newcomers flock to major Canadian cities. In doing so, some could be missing out on better opportunities elsewhere.

A two-year-old research project between the federal government and Stanford University’s Immigration Policy Lab is offering hope for a tool that might someday point skilled immigrants toward the community in which they’d most likely flourish and enjoy the greatest economic success.

Immigration, Refugees and Citizenship Canada is eyeing a pilot program to test a matching algorithm that would make recommendations as to where a new immigrant might settle, department spokesperson Remi Lariviere told the Star.

“This type of pilot would allow researchers to see if use of these tools results in real-world benefits for economic immigrants. Testing these expected gains would also allow us to better understand the factors that help immigrants succeed,” he said in an email.

“This research furthers our commitment to evidence-based decision making and enhanced client service — an opportunity to leverage technology and data to benefit newcomers, communities and the country as a whole.”

Dubbed the GeoMatch project, researchers used Canada’s comprehensive historical datasets on immigrants’ background characteristics, economic outcomes and geographic locations to project where an individual skilled immigrant might start a new life.

Machine learning methods were employed to figure out how immigrants’ backgrounds, qualifications and skillsets were related to taxable earnings in different cities, while accounting for local trends, such as population and unemployment over time.

The models were then used to predict how newcomers with similar profiles would fare across possible destinations and what their expected earnings would be. The locations would be ranked based on the person’s unique profile.

“An immigrant’s initial arrival location plays a key role in shaping their economic success. Yet immigrants currently lack access to personalized information that would help them identify optimal destinations,” says a report about the pilot that was recently obtained by the Star.

“Instead, they often rely on availability heuristics, which can lead to the selection of suboptimal landing locations, lower earnings, elevated out-migration rates and concentration in the most well-known locations,” added the study completed last summer after two years of number crunching and sophisticated modelling.

About a quarter of economic immigrants settle in one of Canada’s four largest cities, with 31 per cent of all newcomers alone destined for Toronto.

“If initial settlement patterns concentrate immigrants in a few prominent landing regions, many areas of the country may not experience the economic growth associated with immigration,” the report pointed out. “Undue concentration may impose costs in the form of congestion in local services, housing, and labour markets.”

Researchers sifted through Canada’s longitudinal immigration database and income tax records to identify 203,290 principal applicants who arrived in the country between 2012 and 2017 under the federal skilled worker program, federal skilled trades program and the Canadian Experience Class.

They tracked the individuals’ annual incomes at the end of their first full year in Canada and predicated the modelling of their economic outcomes at a particular location on a long list of predictors: age at arrival, continent of birth, education, family status, gender, intended occupation, skill level, language ability, having studied or worked in Canada, arrival year and immigration category.

Researchers found that many economic immigrants were in what might be considered the wrong place.

For instance, the report says, among economic immigrants who chose to settle in Toronto, the city only ranked around 20th on average out of the 52 selected regions across Canada in terms of maximizing expected income in the year after arrival.

“In other words, the data suggest that for the average economic immigrant who settled in Toronto, there were 19 other (places) where that immigrant had a higher expected income than in Toronto,” it explains, adding that the same trend appeared from coast to coast.

Assuming only 10 per cent of immigrants would follow a recommendation, the models suggested an average gain of $1,100 in expected annual employment income for the 2015 and 2016 skilled immigrant cohort just by settling in a better suited place. That amounted to a gain of $55 million in total income, the report says.

However, researchers also warned of “compositional effects,” such as the concentration of immigrants with a similar profile in one location, which could lower the expected incomes due to saturation. Other issues, such as an individual’s personal abilities or motivation, were also not taken into account.

The use of artificial intelligence to assist immigrant settlement is an interesting idea, as it treats expected income and geography as key considerations for settlement, said Ryerson University professor Harald Bauder.

“It’s not revolutionizing the immigration system. It’s another tool in our tool box to better match local market conditions with what immigrants can bring to Canada,” says Bauder, director of Ryerson’s graduate program in immigration and settlement studies.

“This mechanism is probably too complex for immigrants themselves to see how a particular location is identified. It just spits out the ranking of locations, then the person wonders how I got this ranking. Is it because of my particular education? My particular country of origin? The information doesn’t seem to be clear or accessible to the end-users.”

New immigrants often gravitate toward a destination where they have family or friends or based on the perceived availability of jobs and personal preferences regarding climate, city size and cultural diversity.

“This tool will help those who are sufficiently detached, do not have family here and are willing to go anywhere,” says Daniel Hiebert, a University of British Columbia professor who specializes in immigration policy.

“People who exercise that kind of rational detachment will simply take that advice and lead to beneficial outcomes.”

But Hiebert has reservations as to how well the modelling can predict the future success of new immigrants when they are basing the advice and recommendations on the data of the past.

“This kind of future thinking is really difficult for these models to predict. There’s too much unknown to have a good sense about the future,” he says. “These models can predict yesterday and maybe sort of today, but they cannot predict tomorrow.”

Source: New tool could point immigrants to spot in Canada where they’re most likely to succeed

Can We Make Our Robots Less Biased Than Us? A.I. developers are committing to end the injustices in how their technology is often made and used.

Important read:

On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot.

Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled. Sending the machine was safer than sending in human officers, Mr. Brown said.

But some robotics researchers were troubled. “Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

While Mr. Johnson’s death resulted from a human decision, in the future such a decision might be made by a robot — one created by humans, with their flaws in judgment baked in.

“Given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge,” Dr. Howard, a leader of the organization Black in Robotics, and Dr. Borenstein wrote, “it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to refusing to work with or for law enforcement agencies.

Over the past decade, evidence has accumulated that “bias is the original sin of A.I.,” Dr. Howard notes in her 2020 audiobook, “Sex, Race and Robots.” Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a Black man with no connection to the crime.)

There are A.I. systems enabling self-driving cars to detect pedestrians — last year Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)
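
Findings like these rest on disaggregated evaluation, that is, reporting accuracy separately for each demographic group rather than as one headline number. A minimal sketch with toy labels and hypothetical group tags:

```python
# Minimal sketch of a disaggregated evaluation (toy values only): report
# accuracy per group so gaps like those described above become visible.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical example: six faces that should all be detected (label 1).
print(accuracy_by_group(
    y_true=[1, 1, 1, 1, 1, 1],
    y_pred=[1, 1, 1, 1, 0, 0],
    groups=["lighter", "lighter", "lighter", "darker", "darker", "darker"],
))  # {'lighter': 1.0, 'darker': 0.33...}
```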

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work in face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other than I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.” He said he hoped that the conversation would continue and result in action, rather than dissipate with a return to business-as-usual.

Dr. Jenkins was one of the lead organizers and writers of one of the summer manifestoes, produced by Black in Computing. Signed by nearly 200 Black scientists in computing and more than 400 allies (either Black scholars in other fields or non-Black people working in related areas), the document describes Black scholars’ personal experience of “the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries.”

The statement calls for reforms, including ending the harassment of Black students by campus police officers, and addressing the fact that Black people get constant reminders that others don’t think they belong. (Dr. Jenkins, an associate director of the Michigan Robotics Institute, said the most common question he hears on campus is, “Are you on the football team?”) All the nonwhite, non-male researchers interviewed for this article recalled such moments. In her book, Dr. Howard recalls walking into a room to lead a meeting about navigational A.I. for a Mars rover and being told she was in the wrong place because secretaries were working down the hall.

The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in A.I.

As the Black in Computing open letter addressed how robots and A.I. are made, another manifesto appeared around the same time, focusing on how robots are used by society. Entitled “No Justice, No Robots,” the open letter pledges its signers to keep robots and robot research away from law enforcement agencies. Because many such agencies “have actively demonstrated brutality and racism toward our communities,” the statement says, “we cannot in good faith trust these police forces with the types of robotic technologies we are responsible for researching and developing.”

Last summer, distressed by police officers’ treatment of protesters in Denver, two Colorado roboticists — Tom Williams, of the Colorado School of Mines and Kerstin Haring, of the University of Denver — started drafting “No Justice, No Robots.” So far, 104 people have signed on, including leading researchers at Yale and M.I.T., and younger scientists at institutions around the country.

“The question is: Do we as roboticists want to make it easier for the police to do what they’re doing now?” Dr. Williams asked. “I live in Denver, and this summer during protests I saw police tear-gassing people a few blocks away from me. The combination of seeing police brutality on the news and then seeing it in Denver was the catalyst.”

Dr. Williams is not opposed to working with government authorities. He has conducted research for the Army, Navy and Air Force, on subjects like whether humans would accept instructions and corrections from robots. (His studies have found that they would.) The military, he said, is a part of every modern state, while American policing has its origins in racist institutions, such as slave patrols — “problematic origins that continue to infuse the way policing is performed,” he said in an email.

“No Justice, No Robots” proved controversial in the small world of robotics labs, since some researchers felt that it wasn’t socially responsible to shun contact with the police.

“I was dismayed by it,” said Cindy Bethel, director of the Social, Therapeutic and Robotic Systems Lab at Mississippi State University. “It’s such a blanket statement,” she said. “I think it’s naïve and not well-informed.” Dr. Bethel has worked with local and state police forces on robot projects for a decade, she said, because she thinks robots can make police work safer for both officers and civilians.

One robot that Dr. Bethel is developing with her local police department is equipped with night-vision cameras that would allow officers to scope out a room before they enter it. “Everyone is safer when there isn’t the element of surprise, when police have time to think,” she said.

Adhering to the declaration would prohibit researchers from working on robots that conduct search-and-rescue operations, or in the new field of “social robotics.” One of Dr. Bethel’s research projects is developing technology that would use small, humanlike robots to interview children who have been abused, sexually assaulted, trafficked or otherwise traumatized. In one of her recent studies, 250 children and adolescents who were interviewed about bullying were often willing to confide information in a robot that they would not disclose to an adult.

Having an investigator “drive” a robot in another room thus could yield less painful, more informative interviews of child survivors, said Dr. Bethel, who is a trained forensic interviewer.

“You have to understand the problem space before you can talk about robotics and police work,” she said. “They’re making a lot of generalizations without a lot of information.”

Dr. Crawford is among the signers of both “No Justice, No Robots” and the Black in Computing open letter. “And you know, anytime something like this happens, or awareness is made, especially in the community that I function in, I try to make sure that I support it,” he said.

Dr. Jenkins declined to sign the “No Justice” statement. “I thought it was worth consideration,” he said. “But in the end, I thought the bigger issue is, really, representation in the room — in the research lab, in the classroom, and the development team, the executive board.” Ethics discussions should be rooted in that first fundamental civil-rights question, he said.

Dr. Howard has not signed either statement. She reiterated her point that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software.

“If external people who have ethical values aren’t working with these law enforcement entities, then who is?” she said. “When you say ‘no,’ others are going to say ‘yes.’ It’s not good if there’s no one in the room to say, ‘Um, I don’t believe the robot should kill.’”

Source: https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html

Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

Big oops:

Twitter has apologized after users called its ‘image-cropping’ algorithm racist for automatically focusing on white faces over Black ones.

Users noticed that when two separate photos, one of a white face and the other of a Black face, were displayed in the post, the algorithm would crop the latter out and only show the former on its mobile version.

PhD student Colin Madland was among the first to point out the issue on Sept. 18, after a Black colleague asked him to help stop Zoom from removing his head while using a virtual background. Madland attempted to post a two-up display of him and his colleague with the head erased and noticed that Twitter automatically cropped his colleague out and focused solely on his face.

“Geez .. any guesses why @Twitter defaulted to show only the right side of the picture on mobile?” he tweeted along with a screenshot.

Entrepreneur Tony Arcieri experimented with the algorithm using a two-up image of Barack Obama and U.S. Senator Mitch McConnell. He discovered that the algorithm would consistently crop out Obama and instead show two images of McConnell.

Several other Twitter users also tested the feature out and noticed that the same thing happened with stock models, different characters from The Simpsons, and golden and black retrievers.
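
The informal experiments users ran amount to a paired test: show the cropper many two-face images and count which face it keeps. A minimal sketch of such a tally, using hypothetical trial counts rather than anyone’s real results, follows.

```python
# Minimal sketch of a paired crop-bias tally (hypothetical results, not
# Twitter's data): count which face the cropper keeps across many two-face
# images and compare the split to a fair 50/50 coin.
from math import comb

def two_sided_binom_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum of outcomes as unlikely as k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

crops = ["white"] * 38 + ["black"] * 2     # hypothetical: 40 paired trials
k = crops.count("white")
print(f"kept the white face in {k}/{len(crops)} trials, "
      f"p = {two_sided_binom_p(k, len(crops)):.2g}")
# A split this lopsided would be wildly unlikely if the cropper were neutral.
```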

Dantley Davis, Twitter’s chief design officer, replied to Madland’s tweet and suggested his facial hair could be affecting the model “because of the contrast with his skin.”

Davis, who said he experimented with the algorithm after seeing Madland’s tweet, added that once he removed Madland’s facial hair from the photo, the Black colleague’s image showed in the preview.

“Our team did test for racial bias before shipping this model,” he said, but noted that the issue is “100% (Twitter’s) fault.” “Now the next step is fixing it,” he wrote in another tweet.

In a statement, a Twitter spokesperson conceded the company had some further testing to do. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate,” they said, as quoted by the Guardian.

Source: Twitter apologizes after users notice image-cropping algorithm favours white faces over Black

New policing technology may worsen inequality

Good discussion of the risks involved, although I am not convinced that a judicial inquiry is the best way to address the many policy issues involved:

The Canadian Charter of Rights and Freedoms guarantees the right to equal protection under the law. It is a beautiful thing and a hallmark of a free democracy. Unfortunately, the freedom to live without discrimination remains an unrealized dream for many in Canada. Worsening this problem, the growing use of algorithmic policing technology in Canada poses a fast-approaching threat to equality rights that our justice system is ill-equipped to confront.

Systemic bias in Canada’s criminal justice system is so notorious that Canadian courts no longer require proof of its existence. Indigenous and Black communities are among the worst affected. The critical question is: what can be done? The right to equality under section 15 of Canada’s Charter, a largely forgotten right in the justice system, should serve to remind governments and law enforcement services that bold change is not merely an option. It is a constitutional imperative.

Most often, courts respond to discrimination in the justice system by granting remedies such as compensation, or exclusion of evidence from court proceedings. But these case-specific remedies seem to operate as pyrrhic victories, while systemic change remains elusive. A case-by-case approach to remedying rights violations is also costly for the public and burdensome to the very individuals wronged.

Making matters worse, Canadian police services are beginning to explore the use of algorithmic technologies that may exacerbate systemic discrimination.

As described in a recent report jointly published by the University of Toronto’s Citizen Lab and International Human Rights Program (which I co-authored), the widespread use of algorithmic policing technology would be deeply problematic. Predictive policing technology is used to attempt to forecast individuals or locations that are most likely to be involved in crimes that have not yet occurred (and may well never occur). Data sets (including data sets created by police) are fed into algorithms that are then supposed to produce “predictions” through machine-learning methods.

Given the continuing over-representation of Black and Indigenous individuals in policing data caused by over-policing and discrimination in the justice system, using such data to forecast potential crime risks perpetuating or amplifying existing inequality. As scholar Virginia Eubanks describes, policing algorithms can operate as “feedback loops of injustice.”
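
Eubanks’s phrase can be made concrete with a toy simulation (invented numbers, not real crime or policing data): two neighbourhoods with identical underlying crime rates, where patrols are allocated each year in proportion to recorded incidents, and incidents are only recorded where patrols go.

```python
# Minimal sketch of a "feedback loop of injustice" with toy numbers.
import numpy as np

rng = np.random.default_rng(7)
true_rate = np.array([0.05, 0.05])     # identical underlying crime rates
recorded = np.array([30.0, 10.0])      # history skewed by past over-policing
patrols_total = 100

for year in range(1, 6):
    # "Data-driven" allocation: patrol in proportion to recorded incidents.
    patrols = patrols_total * recorded / recorded.sum()
    # Crime is only recorded where police are looking.
    recorded += rng.poisson(true_rate * patrols * 10)
    print(f"year {year}: patrol split {patrols.round(1)}")
# The historical 3-to-1 skew is locked in and re-validated every year, even
# though the underlying crime rates never differed: the algorithm keeps
# "confirming" the data it generated itself.
```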

In the report, we call for moratoriums on these controversial technologies, and urge Ottawa to convene a judicial inquiry on the legality of repurposing police data for use in algorithms. Section 15 may well prohibit police decision-making that is guided by algorithmic predictions that are rooted in biased data.

A judicial inquiry is important because section 15 is under-utilized and rarely applied in Canadian courts. Its scope is not well understood. There are substantial costs and legal hurdles that must be overcome to bring a discrimination claim in court. Despite some recent signs of hope, in-court litigation is slow and has not ended the cyclical harm experienced by vulnerable groups.

In theory, the public does not need to wait for courts to painstakingly deliberate these problems over decades. Section 15 prohibits all government action taken in the criminal law enforcement system that has the adverse effect of disproportionately disadvantaging racialized and Indigenous communities (or other groups protected by section 15). The constitutional prohibition operates automatically and is in effect right now.

Section 15 also requires governments and police services to move beyond circular debates as to whether the justice system’s damage is caused by overt racism, historic racism, institutional bias, poverty, or depleted mental health-care systems. It is all of the above. But section 15 prohibits much more than overt racism. It prohibits all government activity that has the purpose or effect of disproportionately disadvantaging protected groups.

When the Charter was enacted in 1982, governments were given a three-year grace period to comply with section 15 in particular — a concession granted in recognition of the hard work and substantial legal reform that would be required by governments to fulfil their new obligations. Nearly 40 years later, it is time for the burden of that hard work to be taken up and completed.

 

Home Office to scrap ‘racist algorithm’ for UK visa applicants

Of note and a reminder that algorithms reflect the views and biases of the programmers and developers, and thus require careful management and oversight:

The Home Office is to scrap a controversial decision-making algorithm that migrants’ rights campaigners claim created a “hostile environment” for people applying for UK visas.

The “streaming algorithm”, which campaigners have described as racist, has been used since 2015 to process visa applications to the UK. It will be abandoned from Friday, according to a letter from Home Office solicitors seen by the Guardian.

The decision to scrap it comes ahead of a judicial review from the Joint Council for the Welfare of Immigrants (JCWI), which was to challenge the Home Office’s artificial intelligence system that filters UK visa applications.

Campaigners claim the Home Office decision to drop the algorithm ahead of the court case represents the UK’s first successful challenge to an AI decision-making system.

Chai Patel, JCWI’s legal policy director, said: “The Home Office’s own independent review of the Windrush scandal found it was oblivious to the racist assumptions and systems it operates.

“This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor such bias and to root it out.”

Source: Home Office to scrap ‘racist algorithm’ for UK visa applicants

Algorithms Learn Our Workplace Biases. Can They Help Us Unlearn Them?

“The nudge doesn’t focus on changing minds. It focuses on the system.”

— Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School


In 2014, engineers at Amazon began work on an artificially intelligent hiring tool they hoped would change hiring for good — and for the better. The tool would bypass the messy biases and errors of human hiring managers by reviewing résumé data, ranking applicants and identifying top talent.

Instead, the machine simply learned to make the kind of mistakes its creators wanted to avoid.

The tool’s algorithm was trained on data from Amazon’s hires over the prior decade — and since most of the hires had been men, the machine learned that men were preferable. It prioritized aggressive language like “execute,” which men use in their CVs more often than women, and downgraded the names of all-women’s colleges. (The specific schools have never been made public.) It didn’t choose better candidates; it just detected and absorbed human biases in hiring decisions with alarming speed. Amazon quietly scrapped the project.

Amazon’s hiring tool is a good example of how artificial intelligence — in the workplace or anywhere else — is only as smart as the input it gets. If sexism or other biases are present in the data, machines will learn and replicate them on a faster, bigger scale than humans could do alone.
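
The dynamic is easy to demonstrate in miniature. The sketch below uses a handful of invented résumé snippets and deliberately skewed historical labels, nothing like Amazon’s actual data or model, to show a classifier rewarding and penalizing words that merely track who was hired in the past.

```python
# Minimal sketch of the failure mode described above (synthetic resumes and
# invented vocabulary): train on skewed hiring labels, then inspect weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed roadmap captain chess club", "executed launch rugby team lead",
    "led launch and delivery", "managed delivery of roadmap",
    "womens college graduate led outreach", "womens chess society captain",
    "community outreach and mentoring lead", "graduate mentoring coordinator",
]
# Historical labels: the first four resumes were all hired; of the last four,
# only one was. This is the skew we do NOT want a model to learn.
hired = [1, 1, 1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

weights = sorted(zip(clf.coef_[0], vec.get_feature_names_out()))
print("most penalized words:", [w for _, w in weights[:3]])
print("most rewarded words: ", [w for _, w in weights[-3:]])
# Tokens that only ever appear in rejected resumes, including the gender proxy
# "womens", pick up negative weight: the model memorizes the historical skew
# rather than learning anything about who can do the job.
```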

On the flip side, if A.I. can identify the subtle decisions that end up excluding people from employment, it can also spot those that lead to more diverse and inclusive workplaces.

Humu Inc., a start-up based in Mountain View, Calif., is betting that, with the help of intelligent machines, humans can be nudged to make choices that make workplaces fairer for everyone, and make all workers happier as a result.

A nudge, as popularized by Richard Thaler, a Nobel-winning behavioral economist, and Cass Sunstein, a Harvard Law professor, is a subtle design choice that changes people’s behavior in a predictable way, without taking away their right to choose.

Laszlo Bock, one of Humu’s three founders and Google’s former H.R. chief, was an enthusiastic nudge advocate at Google, where behavioral economics — essentially, the study of the social, psychological and cultural factors that influence people’s economic choices — informed much of daily life.

Nudges showed up everywhere, like in the promotions process (women were more likely to self-promote after a companywide email pointed out a dearth of female nominees) and in healthy-eating initiatives in the company’s cafeterias (placing a snack table 17 feet away from a coffee machine instead of 6.5 feet, it turns out, reduces coffee-break snacking by 23 percent for men and 17 percent for women).

Humu uses artificial intelligence to analyze its clients’ employee satisfaction, company culture, demographics, turnover and other factors, while its signature product, the “nudge engine,” sends personalized emails to employees suggesting small behavioral changes (those are the nudges) that address identified problems.

One key focus of the nudge engine is diversity and inclusion. Employees at inclusive organizations tend to be more engaged. Engaged employees are happier, and happier employees are more productive and a lot more likely to stay.

With Humu, if data shows that employees aren’t satisfied with an organization’s inclusivity, for example, the engine might prompt a manager to solicit the input of a quieter colleague, while nudging a lower-level employee to speak up during a meeting. The emails are tailored to their recipients, but are coordinated so that the entire organization is gently guided toward the same goal.
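Humu treats its method as a trade secret, so the following is a purely hypothetical sketch of the general idea: a simple rule-based selector that reads a team-level survey signal and sends different, coordinated suggestions to a manager and a more junior colleague. Every field name, threshold, and message here is invented for illustration.

```python
# Purely illustrative, rule-based "nudge" selector. Humu has not disclosed how its
# engine works, so every rule, threshold, and message below is a hypothetical
# stand-in for the coordinated, role-aware nudges described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    name: str
    role: str               # e.g. "manager" or "individual_contributor"
    inclusion_score: float  # hypothetical survey signal, 0-1

def pick_nudge(e: Employee, team_inclusion: float) -> Optional[str]:
    # Only nudge when the team-level signal indicates a problem.
    if team_inclusion >= 0.7:
        return None
    # Different roles get different suggestions, coordinated toward the same goal.
    if e.role == "manager":
        return f"{e.name}: in your next meeting, ask a quieter colleague for their view."
    return f"{e.name}: consider sharing one idea in your next team meeting."

team = [Employee("Asha", "manager", 0.6), Employee("Ben", "individual_contributor", 0.5)]
team_score = sum(p.inclusion_score for p in team) / len(team)
for person in team:
    nudge = pick_nudge(person, team_score)
    if nudge:
        print(nudge)
```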

Unlike Amazon’s hiring algorithm, the nudge engine isn’t supposed to replace human decision-making. It just suggests alternatives, often so subtly that employees don’t even realize they’re changing their behavior.

Jessie Wisdom, another Humu founder and former Google staff member who has a doctorate in behavioral decision research, said sometimes she would hear from people saying, “Oh, this is obvious, you didn’t need to tell me that.”

Even when people may not feel the nudges are helping them, she said, data would show “that things have gotten better. It’s interesting to see how people perceive what is actually useful, and what the data actually bears out.”

In part that’s because the nudge “doesn’t focus on changing minds,” said Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School. “It focuses on the system.” The behavior is what matters, and the outcome is the same regardless of the reason people give themselves for doing the behavior in the first place.

Of course, the very idea of shaping behavior at work is tricky, because workplace behaviors can be perceived differently based on who is doing them.

Take, for example, the suggestion that one should speak up in a meeting. Research from Victoria Brescoll at the Yale School of Management found that people rated male executives who spoke up often in meetings as more competent than peers; the inverse was true for female executives. At the same time, research from Robert Livingston at Northwestern’s Kellogg School of Management found that for Black American executives, the penalties were reversed: Black female leaders were not penalized for assertive workplace behaviors, but Black male executives were.

An algorithm that generates one-size-fits-all fixes isn’t helpful. One that takes into account the nuanced web of relationships and factors in workplace success, on the other hand, could be very useful.

So how do you keep an intelligent machine from absorbing human biases? Humu won’t divulge any specifics — that’s “our secret sauce,” Wisdom said.

It’s also the challenge of any organization attempting to nudge itself, bit by bit, toward something that looks like equity.

Source: In the ‘In Her Words’ newsletter: Algorithms learn our workplace biases. Can they help us unlearn them?

ACLU Sues ICE Over Its Deliberately-Broken Immigrant ‘Risk Assessment’ Software

Good for the ACLU for launching this lawsuit, and for the research and study behind it:

from the can’t-really-call-it-an-‘option’-if-there-are-no-alternatives dept

A couple of years ago, a Reuters investigation uncovered another revamp of immigration policies under President Trump. ICE has a Risk Classification Assessment Tool that decides whether or not arrested immigrants can be released on bail or their own recognizance. The algorithm had apparently undergone a radical transformation under the new administration, drastically decreasing the number of detainees who could be granted release. The software now recommends detention in almost every case, no matter what mitigating factors are fed to the assessment tool.

ICE is now being sued for running software that declares nearly 100% of detained immigrants too risky to be released pending hearings. The ACLU’s lawsuit [PDF] opens with some disturbing stats that show how ICE has rigged the system to keep as many people detained as possible.

According to data obtained by the New York Civil Liberties Union under the Freedom of Information Act, from 2013 to June 2017, approximately 47% of those deemed to be low risk by the government were granted release. From June 2017 to September 2019, that figure plummeted to 3%. This dramatic drop in the release rate comes at a time when exponentially more people are being arrested in the New York City area and immigration officials have expanded arrests of those not convicted of criminal offenses. The federal government’s sweeping detention dragnet means that people who pose no flight or safety risk are being jailed as a matter of course—in an unlawful trend that is getting worse.

Despite there being plenty of evidence that immigrants commit fewer criminal acts than natural-born citizens, the administration adopted a “No-Release Policy.” That led directly to ICE tinkering with its software — a tool that was supposed to assess risk factors when making detention determinations. ICE may as well just skip this step in the process, since it will only give ICE (and the administration) the answer it wants: detention without bond. ICE agents can ask for a second opinion on detention from a supervisor, but the documents obtained by the ACLU show supervisors depart from detention recommendations less than 1% of the time.

The negative effects of this indefinite detention are real. The lawsuit points out zero-risk detainees can see their lives destroyed before they’re allowed anything that resembles due process.

Once denied release under the new policy, people remain unnecessarily incarcerated in local jails for weeks or even months before they have a meaningful opportunity to seek release in a hearing before an Immigration Judge. While waiting for those hearings, those detained suffer under harsh conditions of confinement akin to criminal incarceration. While incarcerated, they are separated from families, friends, and communities, and they risk losing their children, their jobs, and their homes. Because of inadequate medical care and conditions in the jails, unmet medical and mental-health needs often lead to serious and at times irreversible consequences.

When they do finally get to see a judge, nearly 40% of them are released on bond. ICE treats nearly 100% of detained immigrants as dangerous. Judges — judges employed by the DOJ and appointed by the Attorney General — clearly don’t agree with the agency’s rigged assessment system.

There will always be those who say, “Well, don’t break the law.” These aren’t criminal proceedings. These are civil proceedings where the detained are tossed into criminal facilities until they’re able to see a judge. This steady stripping of options began under the Obama administration but accelerated under Trump and his no-release policy.

ICE began to alter its custody determinations process in 2015, modifying its risk-assessment tool so that it could no longer recommend individuals be given the opportunity for release on bond. In mid-2017, ICE then removed the tool’s ability to recommend release on recognizance. As a result, the assessment tool—on which ICE offices across the country rely— can only make one substantive recommendation: detention without bond.
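As a rough illustration of what the lawsuit describes (and emphatically not ICE’s actual code), the sketch below shows how switching off outcome options in a decision tool guarantees the same recommendation no matter what risk score is fed in. The flag names and thresholds are invented for the example.

```python
# Illustrative sketch (not ICE's actual software) of how removing outcomes from a
# decision tool forces a single recommendation regardless of the inputs.
from enum import Enum

class Recommendation(Enum):
    RELEASE_ON_RECOGNIZANCE = "release on recognizance"
    RELEASE_ON_BOND = "release on bond"
    DETAIN_WITHOUT_BOND = "detention without bond"

# Hypothetical configuration flags standing in for the 2015 and mid-2017 changes
# described in the lawsuit: each change strips an option from the tool.
ALLOW_BOND = False           # bond recommendation removed (2015, per the ACLU's account)
ALLOW_RECOGNIZANCE = False   # recognizance recommendation removed (mid-2017)

def assess(risk_score: float) -> Recommendation:
    # However low the computed risk, a disabled option can never be returned.
    if risk_score < 0.2 and ALLOW_RECOGNIZANCE:
        return Recommendation.RELEASE_ON_RECOGNIZANCE
    if risk_score < 0.5 and ALLOW_BOND:
        return Recommendation.RELEASE_ON_BOND
    return Recommendation.DETAIN_WITHOUT_BOND

# Every input maps to the same outcome once the alternatives are switched off.
print({score: assess(score).value for score in (0.05, 0.3, 0.9)})
```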

The ACLU is hoping to have a class action lawsuit certified that would allow it to hold ICE responsible for violating rights en masse, including the Fifth Amendment’s due process clause. Since ICE is no longer pretending to be targeting the “worst of the worst,” the agency and its deliberately-broken risk assessment tool are locking up immigrants who have lived here for an average of sixteen years — people who’ve added to their communities, held down jobs, and raised families. These are the people targeted by ICE and it is ensuring that it is these people who are thrown into prisons and jails until their hearings, tearing apart their lives and families while denying them the rights extended to them by our Constitution.

Source: ACLU Sues ICE Over Its Deliberately-Broken Immigrant ‘Risk Assessment’ Software