New Zealand: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Of note. As always, one needs to ensure that AI systems are as free of bias as possible, while remembering that human decision-making is not perfect either. But any large-scale immigration system will likely have to rely on AI to manage the workload:

Immigration officials are being accused of using data analytics and algorithms in visa processing – and leaving applicants in the dark about why they are being rejected.

One immigration adviser described how applicants unaware of risk profiling were like unwitting swimmers in crocodile-infested waters.

The automatic ‘triage’ system places tourists, overseas students or immigrants into high, medium or low risk categories.

The factors which raise a red flag on high-risk applications are not made publicly available; Official Information Act requests are redacted on the grounds of international relations.

But an immigration manager has told RNZ that staff identify patterns, such as overstaying and asylum claim rates of certain nationalities or visa types, and feed that data into the triage system.
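Neither the triage rules nor the underlying data model are public, but the description above – group-level overstay and asylum-claim rates fed into business rules that bucket applications – maps onto a simple weighted-score triage. The sketch below is purely illustrative: the nationalities, rates, weights and thresholds are all invented, not Immigration New Zealand’s.

```python
# Illustrative sketch only: a rules-based triage scorer of the kind described
# above. All factor names, weights and thresholds are hypothetical, not
# Immigration New Zealand's actual business rules.

from dataclasses import dataclass

@dataclass
class Application:
    nationality: str
    visa_type: str

# Hypothetical historical rates (overstay / asylum claim) per group,
# of the sort a manager might "feed into the triage system".
OVERSTAY_RATE = {("CountryA", "visitor"): 0.12, ("CountryB", "student"): 0.03}
ASYLUM_RATE = {("CountryA", "visitor"): 0.05, ("CountryB", "student"): 0.01}

def risk_score(app: Application) -> float:
    key = (app.nationality, app.visa_type)
    # Weighted sum of group-level rates; the weights are made up.
    return 0.7 * OVERSTAY_RATE.get(key, 0.02) + 0.3 * ASYLUM_RATE.get(key, 0.01)

def triage(app: Application) -> str:
    score = risk_score(app)
    if score >= 0.08:
        return "high"      # routed to an officer for close scrutiny
    if score >= 0.04:
        return "medium"
    return "low"           # eligible for streamlined processing

print(triage(Application("CountryA", "visitor")))  # high
print(triage(Application("CountryB", "student")))  # low
```

The point of the sketch is that an applicant’s bucket is driven entirely by group-level statistics, which is precisely what the advisers quoted below object to.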

On a recent visit to a visa processing centre in Auckland, Immigration New Zealand assistant general manager Jeannie Melville acknowledged that it now ran an automated system that triages applications, but said it was humans who make the decisions.

“There is an automatic triage that’s done – but to be honest, the most important thing is the work that our immigration officers do in actually determining how the application should be processed,” she said.

“And we do have immigration officers that have the skills and the experience to be able to determine whether there are further risk factors or no risk factors in a particular application.

“The triage system is something that we work on all the time because as you would expect, things change all the time. And we try and make sure that it’s a dynamic system that takes into account a whole range of factors, whether that be things that have happened in the past or things that are going on at the present time.”

When asked what ‘things that have happened in the past’ might mean in the context of deciding what risk category an applicant would be assigned to, another manager filled the silence.

“Immigration outcomes, application outcomes, things that we measure – overstaying rates or asylum claim rates from certain sources,” she said. “Nationality or visa type patterns that may have trended, so we do some data analytics that feed into some of those business rules.”

Humans defer to machines – professor

Professor Colin Gavaghan, of Otago University, said studies on human interactions with technology suggested people found it hard to ignore computerised judgments.

“What they’ve found is if you’re not very, very careful, you get a kind of situation where the human tends just to defer to whatever the machine recommends,” said Prof Gavaghan, director of the New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies.

“It’s very hard to stay in a position where you’re actually critiquing and making your own independent decision – humans who are going to get to see these cases, they’ll be told that the machine, the system has already flagged them up as being high risk.

“It’s hard not to think that that will influence their decision. The idea they’re going to make a completely fresh call on those cases, I think, if we’re not careful, could be a bit unrealistic.”

Oversight and transparency were needed to check the accuracy of calls made by the algorithmic system and to ensure people could challenge decisions, he added.

Best practice guidelines tended to be high level and vague, he added.

“There’s also questions and concerns about bias,” he said. “It can be biased because the training data that’s been used to prepare it is itself the product of user bias decisions – if you have a body of data that’s been used to train the system that’s informed by let’s say, for the sake of argument, racist assumptions about particular groups, then that’s going to come through in the system’s recommendations as well.

“We haven’t had what we would like to see, which is one body with responsibility to look across all of government and all of these uses.”

The concerns follow questions around another Immigration New Zealand programme in 2018 which was used to prioritise deportations.

A compliance manager told RNZ it was using data, including nationality, of former immigrants to determine which future overstayers to target.

It subsequently denied that nationality was one of the factors but axed the programme.

Don’t make assumptions on raw data – immigration adviser

Immigration adviser Katy Armstrong said Immigration New Zealand had to fight its own ‘jaundice’ that was based on profiling and presumptions.

“Just because you’re a 23-year-old, let’s say, Brazilian coming in, wanting to have a holiday experience in New Zealand, doesn’t make you an enemy of the state.

“And you’re being lumped in maybe with a whole bunch of statistics that might say that young male Brazilians have a particular pattern of behaviour.

“So you then have to prove a negative against you, but you’re not being told transparently what that negative is.”

It would be unacceptable if the police were arresting people based on the previous offending rates of a certain nationality, she said, and immigration rules were likewise supposed to be based on fairness and natural justice.

“That means not discriminating, not being presumptuous about the way people may behave just purely based on assumptions from raw data,” she said.

“And that’s the area of real concern. If you have profiling and an unsophisticated workforce, with an organisation that is constantly in churn, with people coming on board to make decisions about people’s lives with very little training, then what do you end up with?

“Well, I can tell you – you end up with decisions that are basically unfair, and often biased.

“I think people go in very trusting of the system and not realising that there is this almighty wall between them and a visa over issues that they would have no inkling about.

“And then they get turned down, they don’t even give you a chance very often to respond to any doubts that immigration might have around you.

“People come and say: ‘I got declined’ and you look at it and you think ‘oh my God, it was like they literally went swimming in the crocodile waters without any protection’.”

Source: ‘Like swimming in crocodile waters’ – Immigration officials’ data analytics use

Concerns raised after facial recognition software found to have racial bias

Legitimate concerns:

In 2015, two undercover police officers in Jacksonville, Fla., bought $50 worth of crack cocaine from a man on the street. One of the cops surreptitiously snapped a cellphone photo of the man and sent it to a crime analyst, who ran the photo through facial recognition software.

The facial recognition algorithm produced several matches, and the analyst chose the first one: a mug shot of a man named Willie Allen Lynch. Lynch was convicted of selling drugs and sentenced to eight years in prison.

Civil liberties lawyers jumped on the case, flagging a litany of concerns to fight the conviction. Matches of other possible perpetrators generated by the tool were never disclosed to Lynch, hampering his ability to argue for his innocence. The use of the technology statewide had been poorly regulated and shrouded in secrecy.

But also, Willie Allen Lynch is a Black man.

Multiple studies have shown facial recognition technology makes more errors on Black faces. For mug shots in particular, researchers have found that algorithms generate the highest rates of false matches for African American, Asian and Indigenous people.

After more than two dozen police services, government agencies and private businesses across Canada recently admitted to testing the divisive facial recognition app Clearview AI, experts and advocates say it’s vital that lawmakers and politicians understand how the emerging technology could impact racialized citizens.

“Technologies have their bias as well,” said Nasma Ahmed, director of Toronto-based non-profit Digital Justice Lab, who is advocating for a pause on the use of facial recognition technology until proper oversight is established.

“If they don’t wake up, they’re just going to be on the wrong side of trying to fight this battle … because they didn’t realize how significant the threat or the danger of this technology is,” says Toronto-born Toni Morgan, managing director of the Center for Law, Innovation and Creativity at Northeastern University School of Law in Boston.

“It feels like Toronto is a little bit behind the curve in understanding the implications of what it means for law enforcement to access this technology.”

Last month, the Star revealed that officers at more than 20 police forces across Canada have used Clearview AI, a facial recognition tool that has been described as “dystopian” and “reckless” for its broad search powers. It relies on what the U.S. company has said is a database of three billion photos scraped from the web, including social media.

Almost all police forces that confirmed use of the tool said officers had accessed a free trial version without the knowledge or authorization of police leadership and have been told to stop; the RCMP is the only police service that has paid to access the technology.

Multiple forces say the tool was used by investigators within child exploitation units, but it was also used to probe lesser crimes, including in an auto theft investigation and by a Rexall employee seeking to stop shoplifters.

While a handful of American cities and states have moved to limit or outright ban police use of facial recognition technology, the response from Canadian lawmakers has been muted.

According to client data obtained by BuzzFeed News and shared exclusively with the Star, the Toronto Police Service was the most prolific user of Clearview AI in Canada. (Clearview AI has not responded to multiple requests for comment from the Star but told BuzzFeed there are “numerous inaccuracies” in the client data information, which they allege was “illegally obtained.”)

Toronto police ran more than 3,400 searches since October, according to the BuzzFeed data.

A Toronto police spokesperson has said officers were “informally testing” the technology, but said the force could not verify the Star’s data about officers’ use or “comment on it with any certainty.” Toronto police Chief Mark Saunders directed officers to stop using the tool after he became aware they were using it, and a review is underway.

But Toronto police are still using a different facial recognition tool, one made by NEC Corp. of America and purchased in 2018. The NEC facial recognition tool searches the Toronto police database of approximately 1.5 million mug shot photos.

The National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, has been testing the accuracy of facial recognition technology since 2002. Companies that sell the tools voluntarily submit their algorithms to NIST for testing; government agencies sponsor the research to help inform policy.

In a report released in December that tested 189 algorithms from 99 developers, NIST found dramatic variations in accuracy across different demographic groups. For one type of matching, the team discovered the systems had error rates between 10 and 100 times higher for African American and Asian faces compared to images of white faces.

For the type of facial recognition matching most likely to be used by law enforcement, African American women had higher error rates.
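The NIST comparison comes down to computing a false match rate separately for each demographic group on impostor pairs (images of two different people) and comparing the rates. A minimal sketch of that calculation, using fabricated outcomes rather than NIST data:

```python
# Minimal sketch of a per-group false match rate (FMR) comparison of the kind
# NIST reports. The records below are fabricated for illustration only.

from collections import defaultdict

# Each record: (demographic_group, is_genuine_pair, algorithm_said_match)
trials = [
    ("group_a", False, True), ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, True), ("group_b", False, True),  ("group_b", False, False),
]

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)

for group, genuine, matched in trials:
    if not genuine:                 # impostor comparison: two different people
        impostor_pairs[group] += 1
        if matched:                 # algorithm wrongly declared a match
            false_matches[group] += 1

for group in impostor_pairs:
    fmr = false_matches[group] / impostor_pairs[group]
    print(f"{group}: FMR = {fmr:.2f}")
# A gap between groups (here 0.33 vs 0.67) is the kind of disparity the NIST
# study quantified at 10x-100x for some demographics.
```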

“Law enforcement, they probably have one of the most difficult cases. Because if they miss someone … and that person commits a crime, they’re going to look bad. If they finger the wrong person, they’re going to look bad,” said Craig Watson, manager of the group that runs NIST’s testing program.

Clearview AI has not been tested by NIST. The company has claimed its tool is “100% accurate” in a report written by an “independent review panel.” The panel said it relied on the same methodology the American Civil Liberties Union used to assess a facial recognition algorithm sold by Amazon.

The American Civil Liberties Union slammed the report, calling the claim “misleading” and the tool “dystopian.”

Clearview AI did not respond to a request for comment about its accuracy claims.

Before purchasing the NEC facial recognition technology, Toronto police conducted a privacy impact assessment. Asked if this examined potential racial bias within the NEC’s algorithms, spokesperson Meaghan Gray said in an email the contents of the report are not public.

But she said TPS “has not experienced racial or gender bias when utilizing the NEC Facial Recognition System.”

“While not a means of undisputable positive identification like fingerprint identification, this technology provides ‘potential candidates’ as investigative leads,” she said. “Consequently, one race or gender has not been disproportionally identified nor has the TPS made any false identifications.”

The revelations about Toronto police’s use of Clearview AI have coincided with the planned installation of additional CCTV cameras in communities across the city, including in the Jane Street and Finch Avenue West area. The provincially funded additional cameras come after the Toronto police board approved increasing the number placed around the city.

The combination of facial recognition technology and additional CCTV cameras in a neighbourhood home to many racialized Torontonians is a “recipe for disaster,” said Sam Tecle, a community worker with Jane and Finch’s Success Beyond Limits youth support program.

“One technology feeds the other,” Tecle said. “Together, I don’t know how that doesn’t result in surveillance — more intensified surveillance — of Black and racialized folks.”

Tecle said the plan to install more cameras was asking for a lot of trust from a community that already has a fraught relationship with the police. That’s in large part due to the legacy of carding, he said — when police stop, question and document people not suspected of a crime, a practice that disproportionately impacts Black and brown men.

“This is just a digital form of doing the same thing,” Tecle told the Star. “If we’re misrecognized and misidentified through these facial recognition algorithms, then I’m very apprehensive about them using any kind of facial recognition software.”

Others pointed out that false positives — incorrect matches — could have particularly grave consequences in the context of police use of force: Black people are “grossly over-represented” in cases where Toronto police used force, according to a 2018 report by the Ontario Human Rights Commission.

Saunders has said residents in high-crime areas have repeatedly asked for more CCTV cameras in public spaces. At last month’s Toronto police board meeting, Mayor John Tory passed a motion requiring that police engage in a public community consultation process before installing more cameras.

Gray said many residents and business owners want increased safety measures, and this feedback alongside an analysis of crime trends led the force to identify “selected areas that are most susceptible to firearm-related offences.”

“The cameras are not used for surveillance. The cameras will be used for investigation purposes, post-reported offences or incidents, to help identify potential suspects, and if needed during major events to aid in public safety,” Gray said.

Akwasi Owusu-Bempah, an assistant professor of criminology at the University of Toronto, said when cameras are placed in neighbourhoods with high proportions of racialized people, then used in tandem with facial recognition technology, “it could be problematic, because of false positives and false negatives.”

“What this gets at is the need for continued discussion, debate, and certainly oversight,” Owusu-Bempah said.

Source: Concerns raised after facial recognition software found to have racial bias

Canada must look beyond STEM and diversify its AI workforce

From a visible minority perspective, based on STEM graduates, representation is reasonably good (as per the chart above) except in engineering, and it is particularly strong in math and computer sciences, the fields of study closest to AI.

With respect to gender, the percentage of visible minority women is generally equivalent to, or stronger than, that of non-visible minority women (but women are under-represented in engineering and math/computer sciences):

Artificial intelligence (AI) is expected to add US$15.7 trillion to the global economy by 2030, according to a recent report from PwC, representing a 14 percent boost to global GDP. Countries around the world are scrambling for a piece of the pie, as evidenced by the proliferation of national and regional AI strategies aimed at capturing the promise of AI for future value generation.

Canada has benefited from an early lead in AI, which is often attributed to the Canadian Institute for Advanced Research (CIFAR) having had the foresight to invest in Geoffrey Hinton’s research on deep learning shortly after the turn of the century. As a result, Canada can now tout Montreal as having the highest concentration of researchers and students of deep learning in the world and Toronto as being home to the highest concentration of AI start-ups in the world.

But the market for AI is approaching maturity. A report from McKinsey & Co. suggests that the public and private sectors together have captured only between 10 and 40 percent of the potential value of advances in machine learning. If Canada hopes to maintain a competitive advantage, it must both broaden the range of disciplines and diversify the workforce in the AI sector.

Looking beyond STEM

Strategies aimed at capturing the expected future value of AI have been concentrated on innovation in fundamental research, which is conducted largely in the STEM disciplines: science, technology, engineering and mathematics. But it is the application of this research that will grow market share and multiply value. In order to capitalize on what fundamental research discovers, the AI sector must deepen its ties with the social sciences.

To date the role of social scientists in Canada’s strategy on AI has been largely limited to areas of ethics and public policy. While these are endeavours to which social scientists are particularly well suited, they could be engaged much more broadly with AI. Social scientists are well positioned to identify and exploit potential applications of this research that will generate both social and economic returns on Canada’s investment in AI.

Social scientists take a unique approach to data analysis by drawing on social theory to critically interpret both the inputs and outputs of a given model. They ask what a given model is really telling us about the world and how it arrived at that result. They see potential opportunities in data and digital technology that STEM researchers are not trained to look for.

A recent OECD report looks at the skills that most distinguish innovative from non-innovative workers; chief among them are creativity, critical thinking and communication skills. While these skills are by no means exclusively the domain of the social sciences, they are perhaps more central to social scientific training than to any other discipline.

The social science perspective can serve as a defence mechanism against the potential folly of certain applications of AI. If social scientists had been more involved in early applications of computer vision, for example, Google might have been spared the shame of image recognition algorithms that classify people of colour as animals (they certainly would have come up with a better solution). In the same vein, Microsoft’s AI chatbots would have been less likely to spew racist slurs shortly after launch.

Social scientists can also help meet a labour shortage: there are not enough STEM graduates to meet future demand for AI talent. Meanwhile, social science graduates are often underemployed, in part because they do not have the skills necessary to participate in a future of work that privileges expertise in AI. As a consequence, many of the opportunities associated with AI are passing Canada’s social science graduates by. Excluding social science students from Canada’s AI strategy not only narrows their career paths but restricts their opportunities to contribute to fulfilling the societal and economic promise of AI.

Realizing the potential of the social sciences within Canada’s AI ecosystem requires innovative thinking by both governments and universities. Federal and provincial governments should relax restrictions on funding for AI-related research that prohibit applications from social scientists or make them eligible only within interdisciplinary teams that include STEM researchers. This policy has the effect of subordinating social scientific approaches to AI to those of STEM disciplines. In fact, social scientists are just as capable of independent research, and a growing number are already engaged in sophisticated applications of machine learning to address some of the most pressing societal challenges of our time.

Governments must also invest in the development of undergraduate and graduate training opportunities that are specific to the application of AI in the social sciences, using pedagogical approaches that are appropriate for them.

Social science faculties in universities across Canada can also play a crucial role by supporting the development of AI-related skills within their undergraduate and graduate curriculums. At McMaster University, for example, the Faculty of Social Sciences is developing a new degree: master of public policy in digital society. Alongside graduate training in the fundamentals of public policy, the 12-month program will include rigorous training in data science as well as technical training in key digital technologies that are revolutionizing contemporary society. The program, which is expected to launch in 2021, is intended to provide students with a command of digital technologies such as AI necessary to enable them to think creatively and critically about its application to the social world. In addition to the obvious benefit of producing a new generation of policy leadership in AI, the training provided by this program will ensure that its graduates are well positioned for a broader range of leadership opportunities across the public and private sectors.

Increasing workplace diversity

A report released in 2019 by New York University’s AI Now Institute declared that there is a diversity crisis in the AI workforce. This has implications for the sector itself but also for society more broadly, in that the systemic biases within the AI sector are being perpetuated via the myriad touch points that AI has with our everyday lives: it is organizing our online search results and social media news feeds and supporting hiring decisions, and it may even render decisions in some court cases in future.

One of the main findings of the AI Now report was that the widespread strategy of focusing on “women in tech” is too narrow to counter the diversity crisis. In Canada, efforts to diversify AI generally translate to providing advancement opportunities for women in the STEM disciplines. Although the focus of policy-makers on STEM is critical and necessary, it is short-sighted. Disciplinary diversity in AI research not only broadens the horizons for research and commercialization; it also creates opportunities for groups who are underrepresented in STEM to benefit from and contribute to innovations in AI.

As it happens, equity-seeking groups are better represented in the social sciences. According to Statistics Canada, the social sciences and adjacent fields have the highest enrolment of visible minorities. And as of 2017, only 23.7 percent of those enrolled in STEM programs at Canadian universities were women, whereas women were 69.1 percent of participants in the social sciences.

So, engaging the social sciences more substantively in research and training related to AI will itself lead to greater diversity. While advancing this engagement, universities should be careful not to import training approaches directly from statistics or computer science, as these will bring with them some of the cultural context and biases that have resulted in a lack of diversity in those fields to begin with.

Bringing the social sciences into Canada’s AI strategy is a concrete way to demonstrate the strength of diversity, in disciplines as well as demographics. Not only would many social science students benefit from training in AI, but so too would Canada’s competitive advantage in AI benefit from enabling social scientists to effectively translate research into action.

Source: Canada must look beyond STEM and diversify its AI workforce

Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says

Ongoing story, raising legitimate questions regarding the quality and possible bias of the algorithms used. That being said, human decision-making is not bias-free, and using AI, at least in the more straightforward cases, makes sense from an efficiency and timeliness-of-service perspective.

It will be important to ensure appropriate oversight, and there may be a need for an external body to review the algorithms to reduce risks, if one is not already in place:

Tens of thousands of would-be guest workers and international students from China and India are having their fates determined by Canadian computers that are making visa decisions using artificial intelligence.

Even though Immigration Department officials recognize the public is wary about substituting robotic algorithms for human visa officers, the Liberal government plans to greatly expand “automated decision-making” in April of this year, according to an internal report.

“There is significant public anxiety over fairness and privacy associated with Big Data and Artificial Intelligence,” said the 2019 Immigration Department report, obtained under an access to information request. Nevertheless, Ottawa still plans to broaden the automated approval system far beyond the pilot programs it began operating in 2018 to process applicants from India and China.

At a time when Canada is approving more guest workers and foreign students than ever before, immigration lawyers have expressed worry about a lack of transparency in having machines make life-changing decisions about many of the more than 200,000 temporary visas that Canada issues each year.

The internal report reveals departmental reservations about shifting more fully to an automated system — in particular, wondering if machines could be “gamed” by high-risk applicants making false claims about their banking, job, marriage, educational or travel history.

“A system that approves applications without sufficient vetting would raise risks to Canadians, and it is understandable for Canadians to be more concerned about mistakenly approving risky individuals than about mistakenly refusing bona fide candidates,” says the document.

The 25-page report also flags how having robots stand in for humans will have an impact on thousands of visa officers. The new system “will fundamentally change the day-to-day work of decision-makers.”

Immigration Department officials did not respond to questions about the automated visa program.

Vancouver immigration lawyer Richard Kurland says Ottawa’s sweeping plan “to process huge numbers of visas fast and cheap” raises questions about whether an automated “Big Brother” system will be open to scrutiny, or whether it will lead to “Wizard of Oz” decision-making, in which it will be hard to determine who is accountable.

The publisher of the Lexbase immigration newsletter, which uncovered the internal document, was especially concerned that a single official has already “falsely” signed his or her name to countless visa decisions affecting migrants from India and China, without ever having reviewed their specific applications.

“The internal memo shows tens of thousands of visa decisions were signed-off under the name of one employee. If someone pulled that stunt on a visa application, they would be banned from Canada for five years for misrepresentation. It hides the fact it was really a machine that made the call,” said Kurland.

The policy report itself acknowledges that the upcoming shift to “hard-wiring” the visa decision-making process “at a tremendous scale” significantly raises legal risks for the Immigration Department, which it says is already “one of the most heavily litigated in the government of Canada.”

The population of Canada jumped by 560,000 people last year, or 1.5 per cent, the fastest rate of increase in three decades. About 470,000 of that total was made up of immigrants or newcomers arriving on 10-year multiple-entry visas, work visas or study visas.

The senior immigration officials who wrote the internal report repeatedly warn departmental staff that Canadians will be suspicious when they learn about the increasingly automated visa system.

“Keeping a human in the loop is important for public confidence. While human decision making may not be superior to algorithmic systems,” the report said, “human in-the-loop systems currently represent a form of transparency and personal accountability that is more familiar to the public than automated processes.”

In an effort to sell the automated system to a wary populace, the report emphasizes making people aware that the algorithm that decides whether an applicant receives a visa is not random. It’s a computer program governed by certain rules regarding what constitutes a valid visa application.

“A system that provides no real opportunity for officers to reflect is a de facto automated decision-making system, even when officers click the last button,” says the report, which states that flesh-and-blood women and men should still make the rulings on complex or difficult cases — and will also be able to review appeals.

“When a client challenges a decision that was made in full or in part by an automated system, a human officer will review the application. However, the (department) should not proactively offer clients the choice to have a human officer review and decide on their case at the beginning of the application process.”

George Lee, a veteran immigration lawyer in Burnaby, said he had not heard that machines are increasingly taking over from humans in deciding Canadian visa cases. He doesn’t think the public will like it when they learn of it.

“People will say, ‘What are we doing here? Where are the human beings? You can’t do this.’ People are afraid of change. We want to keep the status quo.”

However, Lee said society’s transition towards replacing human workers with robots is “unstoppable. We’re seeing it everywhere.”

Lee believes people will eventually get used to the idea that machines are making vitally important decisions about human lives, including about people’s dreams of migrating to a new country.

“I think the use of robots will become more acceptable down the road,” he said. “Until the robots screw up.”

Source: Douglas Todd: Robots replacing Canadian visa officers, Ottawa report says

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Yet another article emphasizing the risks:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit. One program decided almost everything written about President Trump was negative, even if the actual content was flattering.

AI system for granting UK visas is biased, rights groups claim

Always a challenge with AI, ensuring that the algorithms do not replicate or create bias:

Immigrant rights campaigners have begun a ground-breaking legal case to establish how a Home Office algorithm that filters UK visa applications actually works.

The challenge is the first court bid to expose how an artificial intelligence program affects immigration policy decisions over who is allowed to enter the country.

Foxglove, a new advocacy group promoting justice in the new technology sector, is supporting the case brought by the Joint Council for the Welfare of Immigrants (JCWI) to legally force the Home Office to explain on what basis the algorithm “streams” visa applicants.

The two groups both said they feared the AI “streaming tool” created three channels for applicants including a “fast lane” that would lead to “speedy boarding for white people”.

The Home Office has insisted that the algorithm is used only to allocate applications and does not ultimately rule on them. The final decision remains in the hands of human caseworkers and not machines, it said.

A spokesperson for the Home Office said: “We have always used processes that enable UK Visas and Immigration to allocate cases in an efficient way.

“The streaming tool is only used to allocate applications, not to decide them. It uses data to indicate whether an application might require more or less scrutiny and it complies fully with the relevant legislation under the Equality Act 2010.”

Cori Crider, a director at Foxglove, rejected the Home Office’s defence of the AI system.

Source: AI system for granting UK visas is biased, rights groups claim

Beware of Automated Hiring: It won’t end employment discrimination. In fact, it could make it worse.

Some interesting ideas to reduce the risks of bias and discrimination:

Algorithms make many important decisions for us, like our creditworthiness, best romantic prospects and whether we are qualified for a job. Employers are increasingly turning to automated hiring platforms, believing they’re both more convenient and less biased than humans. However, as I describe in a new paper, this is misguided.

In the past, a job applicant could walk into a clothing store, fill out an application, and even hand it straight to the hiring manager. Nowadays, her application must make it through an obstacle course of online hiring algorithms before it might be considered. This is especially true for low-wage and hourly workers.

The situation applies to white-collar jobs too. People applying to be summer interns and first-year analysts at Goldman Sachs have their résumés digitally scanned for keywords that can predict success at the company. And the company has now embraced automated interviewing.

Automated hiring can create a closed loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.
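The closed loop described above can be made concrete with a toy simulation: if the keywords used to cull résumés are re-derived each round from whoever was just hired, the criteria converge on the profile of past hires and some applicants never surface. The names, skills and scoring below are hypothetical, not any real hiring platform’s logic.

```python
# Toy illustration of the closed-loop dynamic described above: hiring criteria
# re-derived from past hires gradually narrow who gets through. Hypothetical
# data and scoring; not any real hiring platform.

from collections import Counter

def cull(resumes, keywords):
    """Keep only resumes containing at least one current keyword."""
    return [r for r in resumes if keywords & r["skills"]]

def update_keywords(hired):
    """Rebuild the keyword list from whoever was just hired."""
    counts = Counter(skill for r in hired for skill in r["skills"])
    return {skill for skill, _ in counts.most_common(2)}

applicants = [
    {"name": "A", "skills": {"python", "sql"}},
    {"name": "B", "skills": {"python", "golf"}},
    {"name": "C", "skills": {"care_work", "scheduling"}},
]

keywords = {"python"}                     # initial criteria
for round_no in range(2):
    shortlist = cull(applicants, keywords)
    hired = shortlist[:2]                 # pretend the top of the shortlist is hired
    keywords = update_keywords(hired)     # loop: hires define next round's criteria
    print(round_no, [r["name"] for r in shortlist], keywords)
# Applicant C never surfaces, and the criteria converge on the skills of
# whoever happened to be hired first -- with no external check on fairness.
```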

The language gives it away: How an algorithm can help us detect fake news

Interesting:

Have you ever read something online and shared it among your networks, only to find out it was false?

As a software engineer and computational linguist who spends most of her work and even leisure hours in front of a computer screen, I am concerned about what I read online. In the age of social media, many of us consume unreliable news sources. We’re exposed to a wild flow of information in our social networks — especially if we spend a lot of time scanning our friends’ random posts on Twitter and Facebook.

My colleagues and I at the Discourse Processing Lab at Simon Fraser University have conducted research on the linguistic characteristics of fake news.

The effects of fake news

A study in the United Kingdom found that about two-thirds of the adults surveyed regularly read news on Facebook, and that half of those had the experience of initially believing a fake news story. Another study, conducted by researchers at the Massachusetts Institute of Technology, focused on the cognitive aspects of exposure to fake news and found that, on average, newsreaders believe a false news headline at least 20 percent of the time.

False stories are now spreading 10 times faster than real news and the problem of fake news seriously threatens our society.

For example, during the 2016 election in the United States, an astounding number of U.S. citizens believed and shared a patently false conspiracy claiming that Hillary Clinton was connected to a human trafficking ring run out of a pizza restaurant. The owner of the restaurant received death threats, and one believer showed up in the restaurant with a gun. This — and a number of other fake news stories distributed during the election season — had an undeniable impact on people’s votes.

It’s often difficult to find the origin of a story after partisan groups, social media bots and friends of friends have shared it thousands of times. Fact-checking websites such as Snopes and Buzzfeed can only address a small portion of the most popular rumors.

The technology behind the internet and social media has enabled this spread of misinformation; maybe it’s time to ask what this technology has to offer in addressing the problem.

In an interview, Hillary Clinton discusses ‘Pizzagate’ and the problem of fake news online.

Giveaways in writing style

Recent advances in machine learning have made it possible for computers to instantaneously complete tasks that would have taken humans much longer. For example, there are computer programs that help police identify criminal faces in a matter of seconds. This kind of artificial intelligence trains algorithms to classify, detect and make decisions.

When machine learning is applied to natural language processing, it is possible to build text classification systems that recognize one type of text from another.

During the past few years, natural language processing scientists have become more active in building algorithms to detect misinformation; this helps us to understand the characteristics of fake news and develop technology to help readers.

One approach finds relevant sources of information, assigns each source a credibility score and then integrates them to confirm or debunk a given claim. This approach is heavily dependent on tracking down the original source of news and scoring its credibility based on a variety of factors.

A second approach examines the writing style of a news article rather than its origin. The linguistic characteristics of a written piece can tell us a lot about the authors and their motives. For example, specific words and phrases tend to occur more frequently in a deceptive text compared to one written honestly.

Spotting fake news

Our research identifies linguistic characteristics to detect fake news using machine learning and natural language processing technology. Our analysis of a large collection of fact-checked news articles on a variety of topics shows that, on average, fake news articles use more expressions that are common in hate speech, as well as words related to sex, death and anxiety. Genuine news, on the other hand, contains a larger proportion of words related to work (business) and money (economy).
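Those word-category findings translate directly into a simple lexicon feature: count how often a text uses terms from categories over-represented in fake versus genuine news. The word lists in the sketch below are invented placeholders, not the Discourse Processing Lab’s actual lexicons.

```python
# Sketch of the lexicon-based cue described above: count words from categories
# the study found over-represented in fake vs. genuine news. The word lists
# are illustrative placeholders, not the Discourse Processing Lab's.

import re

FAKE_LEANING = {"death", "fear", "panic", "shocking", "anxiety"}
GENUINE_LEANING = {"economy", "business", "market", "employment", "budget"}

def cue_counts(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "fake_cues": sum(t in FAKE_LEANING for t in tokens),
        "genuine_cues": sum(t in GENUINE_LEANING for t in tokens),
        "total_tokens": len(tokens),
    }

print(cue_counts("Shocking report spreads panic and fear across the city"))
print(cue_counts("The budget update points to steady employment and a growing economy"))
```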

This suggests that a stylistic approach combined with machine learning might be useful in detecting suspicious news.

Our fake news detector is built based on linguistic characteristics extracted from a large body of news articles. It takes a piece of text and shows how similar it is to the fake news and real news items that it has seen before. (Try it out!)

The main challenge, however, is to build a system that can handle the vast variety of news topics and the quick change of headlines online, because computer algorithms learn from samples, and if these samples are not sufficiently representative of online news, the model’s predictions will not be reliable.

One option is to have human experts collect and label a large quantity of fake and real news articles. This data enables a machine-learning algorithm to find common features that keep occurring in each collection regardless of other varieties. Ultimately, the algorithm will be able to distinguish with confidence between previously unseen real or fake news articles.
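That label-and-train step is a standard supervised text-classification pipeline. A minimal sketch along those lines, using scikit-learn and a toy hand-labelled corpus rather than the authors’ data, might look like this:

```python
# Minimal supervised text-classification sketch of the approach described
# above: experts label articles, a model learns recurring features, then
# scores unseen text. Toy data only; requires scikit-learn.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus standing in for the expert-labelled collection.
texts = [
    "shocking secret cure doctors don't want you to know",
    "celebrity death hoax spreads panic online",
    "central bank holds interest rates steady this quarter",
    "city council approves budget for new transit line",
]
labels = ["fake", "fake", "real", "real"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

unseen = "miracle pill melts fat overnight, experts stunned"
probs = dict(zip(model.classes_, model.predict_proba([unseen])[0]))
print(probs)  # how similar the unseen text is to the fake vs. real training items
```

With a sufficiently large and representative labelled corpus, the predicted probabilities give exactly the kind of “how similar is this to fake versus real news” score the article describes.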

Source: The language gives it away: How an algorithm can help us detect fake news

Will Your Job Still Exist In 2030?

More on the expected impact of automation and AI:

Automation is already here. Robots helped build your car and pack your latest online shopping order. A chatbot might help you figure out your credit card balance. A computer program might scan and process your résumé when you apply for work.

What will work in America look like a decade from now? A team of economists at the McKinsey Global Institute set out to figure that out in a new report released Thursday.

The research finds automation widening the gap between urban and rural areas and dramatically affecting people who didn’t go to college or didn’t finish high school. It also projects some occupations poised for massive growth or growing enough to offset displaced jobs.

Below are some of the key takeaways from McKinsey’s forecast.

Most jobs will change; some will decline

“Intelligent machines are going to become more prevalent in every business. All of our jobs are going to change,” said Susan Lund, co-author of the report. Almost 40% of U.S. jobs are in occupations that are likely to shrink — though not necessarily disappear — by 2030, the researchers found.

Employing almost 21 million Americans, office support is by far the most common U.S. occupation that’s most at risk of losing jobs to digital services, according to McKinsey. Food service is another heavily affected category, as hotel, fast-food and other kitchens automate the work of cooks, dishwashers and others.

At the same time, “the economy is adding jobs that make use of new technologies,” McKinsey economists wrote. Those jobs include software developers and information security specialists — who are constantly in short supply — but also solar panel installers and wind turbine technicians.

Health care jobs, including hearing aid specialists and home health aides, will stay in high demand for the next decade, as baby boomers age. McKinsey also forecast growth for jobs that tap into human creativity or “socioemotional skills” or provide personal service for the wealthy, like interior designers, psychologists, massage therapists, dietitians and landscape architects.

In some occupations, even as jobs disappear, new ones might offset the losses. For example, digital assistants might replace counter attendants and clerks who help with rentals, but more workers might be needed to help shoppers in stores or staff distribution centers, McKinsey economists wrote.

Similarly, enough new jobs will be created in transportation or customer service and sales to offset ones lost by 2030.

Employers and communities could do more to match workers in waning fields to other compatible jobs with less risk of automation. For instance, 900,000 bookkeepers, accountants and auditing clerks nationwide might see their jobs phased out but could be retrained to become loan officers, claims adjusters or insurance underwriters, the McKinsey report said.

Automation is likely to continue widening the gap between job growth in urban and rural areas

By 2030, the majority of job growth may be concentrated in just 25 megacities and their peripheries, while large swaths of the country see slower job creation and even lose jobs, the researchers found. This gap has already widened in the past decade, as Federal Reserve Chairman Jerome Powell noted in his remarks on Wednesday.

Source: Will Your Job Still Exist In 2030?

Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us…:

There has been significant coverage of bias problems in the use of machine learning in the analysis of people. There has also been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower approach to recognition, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institute points to a projection that the US will become a majority-minority nation by 2045. That means that the non-white population will be over 50% of the population. Even before then, the growing demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know if messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. The following chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there’s the obvious need to be more inclusive in the images, in particular the people, appearing in ads, the picture is only part of the equation. The right words must also be used to interest different demographics. Of course, that a marketing team thinks it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported-on use of facial recognition, that of identifying people. While social media and surveys can catch some sentiment, analysis of facial features is even more detailed. That identification is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking changes as someone watches or reads an advertisement, can catch not only a smile but also the “strength” of that smile. Other types of sentiment capture can also be scaled.

Then, without having to identify the individual people, information about their demographics can build a picture of how sentiment varies between groups of people. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older black men or from East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received by the target audiences.
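Setting aside how an individual smile “strength” is estimated, the aggregation step described here is straightforward: pool anonymous per-viewer sentiment scores by demographic segment and compare the segment averages. A sketch with fabricated scores and generic segment labels:

```python
# Sketch of the aggregation step described above: per-viewer smile/sentiment
# scores (0-1), tagged only with a demographic segment, pooled per segment.
# Scores and segment labels are fabricated for illustration.

from collections import defaultdict
from statistics import mean

# (segment, sentiment_score) pairs -- no individual identification needed.
observations = [
    ("segment_1", 0.82), ("segment_1", 0.74), ("segment_1", 0.90),
    ("segment_2", 0.41), ("segment_2", 0.55),
    ("segment_3", 0.63), ("segment_3", 0.70), ("segment_3", 0.58),
]

by_segment = defaultdict(list)
for segment, score in observations:
    by_segment[segment].append(score)

for segment, scores in sorted(by_segment.items()):
    print(f"{segment}: mean sentiment {mean(scores):.2f} over {len(scores)} viewers")
# Differences between segment means are what an advertiser would read as
# "the same ad lands differently across demographic groups".
```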

The use of AI to rapidly provide feedback on sentiment analysis will help advertisers to better tune messages, whether aiming at a general message that attracts an audience across the US marketing landscape, or finding appropriately focused messages to attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies to better understand and improve messaging to minority communities. Collage Group has recently rolled out AdRate, a process for evaluating ads that integrates AI vision to analyze the sentiment of viewers.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their [messaging] to the best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to better message to the diversity of the market is a clear benefit. Visual AI to enhance the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness