China’s high-tech repression of Uyghurs is more sinister — and lucrative — than it seems, anthropologist says

Keeps on getting worse:

When people started to disappear in China’s northwestern region of Xinjiang in 2014, then-PhD student Darren Byler was living there, with a rare, ground-level view of events that would eventually be labelled by some as a modern-day genocide.

The American anthropologist, who learned the Chinese and Uyghur languages, witnessed a digital police state rise up around him as mass detention and surveillance became features of life in Xinjiang. He spent years experiencing the impact and gathering testimony about it.

“It’s affected all of society,” he told CBC’s Ideas.

Since those early days of Chinese President Xi Jinping’s so-called “People’s War on Terror,” Human Rights Watch says at least one million Uyghurs and other Muslims in Xinjiang have been arbitrarily detained in what China calls “re-education” or “vocational training” camps, in prisons or “pre-trial detention” facilities.

Survivors have recounted being tortured and raped in the camps, scrutinized by the gaze of cameras 24/7 and, perhaps most crucially, forced to learn how to be Chinese and unlearn what it is to be Uyghur.

Untold numbers of their children, says HRW, are forced to do the same in residential boarding schools.

China — currently in the Olympic spotlight and steering clear of such topics — routinely denies accusations, including from Canada’s House of Commons, that its treatment of Uyghurs amounts to genocide. 

China declared its campaign in 2014 after a series of violent attacks that it blamed on Uyghur extremists or separatists. 

But what all Uyghurs are now facing is more sinister and lucrative than that, said Byler, now an assistant professor of international studies at Simon Fraser University in British Columbia.

It is, he said, a modern-day colonial project that operates at the nexus of state surveillance, mass detention and huge profits, enabled by high-tech companies using ideas and technology first developed in the West.

Byler calls it “terror capitalism,” a new frontier of global capitalism fuelled by labelling a people as dangerous and then using their labour and most private personal data to generate wealth.

“When we’re talking about a frontier of capitalism, you’re talking about turning something that previously was not a commodity into a commodity,” he said. 

“So in this context, it’s Uyghur social life, Uyghur behaviour, Uyghur digital histories that are being extracted and then quantified, measured and assessed and turned into this pattern data that is then made predictable.”

The process Byler describes involves the forced harvesting of people’s data, which is then used to improve predictive artificial intelligence technology. It also involves using the same population as test subjects for companies developing new products; in other words, Xinjiang serves as an incubator for new tech.

Also critical is using those populations as unpaid or cheap labour in a resource-rich area considered a strategic corridor for China’s economic ambitions.

“As I started to think more about the technology systems that were being built and understand the money that was flowing into this space, I started to think about it as more of a kind of security industrial complex that was funding technology development and research in the region,” Byler told CBC’s Ideas.

Byler said research shows that tech companies working with Chinese state security tend to flourish and innovate, thanks largely to access to the huge troves of data collected by various levels of government.

David Yang, an assistant professor of economics at Harvard University, conducted such research using thousands of publicly available contracts for facial recognition technology, procured mostly by municipal governments across China.

A contracted firm with access to government data “steadily increased its product innovation not just for the government, but also for the commercial market” over the following two years, said Yang.

‘Health check’

Surveillance is a feature of everyday life in Xinjiang, so the personal data crucial to the profits is constantly being collected.

Central to the harvesting is a biometric ID system introduced there in 2017 requiring citizens to provide fingerprints, facial imagery, iris scans and DNA samples.

There are also turnstiles, checkpoints and cameras everywhere, and citizens are required to carry smartphones with specific apps.

“It’s the technology that really pervades all moments of life,” said Byler. “It’s so intimate. There’s no real outside to it.”

It was in 2017 that Alim (not his real name) returned to Xinjiang from abroad to see his ailing mother. His arrest upon landing in China was the start of what he said was a descent into powerlessness — and the involuntary harvesting of his data. 

Alim, now in his 30s, spoke to Ideas on condition of anonymity out of fear of reprisals against family remaining in Xinjiang.

At the police station at home, as part of what he was told was a “health check,” Alim had a DNA sample taken and “multiple pictures of my face from different sides … they made me read a passage from a book” to record his voice.

“Right before the voice recording, I had an anxiety attack, realizing that I’m possibly going to be detained for a very long time,” Alim said.

The warrant for Alim’s arrest said he was “under suspicion of disrupting the societal order.” 

In a crowded and airless pre-trial detention facility, he said he was forced to march and chant Communist Party slogans. 

“I was just a student visiting home, but in the eyes of the Chinese government, my sheer identity, being a male Uyghur born after the 1980s, is enough for them to detain me.” 

Once released through the help of a relative, Alim found that his data haunted him wherever he went, setting off police alarms whenever he swiped his ID. 

“I basically realized I was in a form of house arrest. I felt trapped.”

Global connections

While the Xinjiang example is extreme, it is an extension of surveillance that has become the norm in the West too, where consent is at least implicitly given when we shop online or use social media.

And just as the artificial intelligence technology used for surveillance in Xinjiang or elsewhere in China has roots in the computer labs of Silicon Valley and Big Tech companies in the West, new Chinese iterations of such technology are also being exported back into the world, selling in countries like Zimbabwe and the Philippines, said Byler.

China may be the site of “some of the sharpest, most egregious manifestations of tech oppression, but it’s by no means the only place in the world,” said lawyer and anthropologist Petra Molnar, who is associate director with the Refugee Law Lab at York University in Toronto.

One such place is the modern international border, not only in the U.S. but also in Europe, where Molnar is studying how surveillance technology affects migrant crossings.

Molnar said China’s avid investment in artificial intelligence is creating an “arms race” that carries risks of “normalizing surveillance” in competing countries with stricter human rights laws. 

“How is this going to then impact average individuals who are concerned about the growing role of Big Tech in our society?” she said from Athens.

“It seems like we’ve skipped a few steps in terms of the kind of conversations that we need to have as a public, as a society, and especially including the perspectives of communities and groups who are the ones experiencing this.” 

‘A lot more nuance to this story’

Despite human rights concerns, other countries are loath to condemn China over Xinjiang because it is such an important part of the global economy, said Byler.

But he points out that he focuses on the economics of Xinjiang partly “to destabilize this easy binary of ‘China is bad and the West is good.’”

China’s “People’s War on Terror” should be seen as an extension of the “war on terror” that originated in the U.S. following the 9/11 attacks and is now a global phenomenon, said Byler.

“If we want to criticize China, we also have to criticize the ‘war on terror.’ We have to criticize or think carefully about capitalism and how it exploits people in multiple contexts,” he said. 

“There’s actually a lot more nuance to the story.”

The West’s complicity, he said, begins with “building these kinds of technologies without really thinking about the consequences.” 

Byler’s observations on the ground form the basis of two books he has authored on the situation in Xinjiang, and of his policy suggestions to lawmakers, including Canadian MPs, about the repression there.

He’s called on lawmakers to demand China’s leaders immediately abolish the re-education detention system and release all detainees. He’s also called for economic sanctions on Chinese authorities and technology companies that benefit from that process and for expediting asylum for Uyghur and Kazakh Muslims from China.

“I am a scholar at the end of the day,” said the Vancouver-based anthropologist.

“Maybe I can nudge people to think in ways that advocate for change. It takes many, many voices and I’m just trying to do my best with what I know how.”

Source: China’s high-tech repression of Uyghurs is more sinister — and lucrative — than it seems, anthropologist says

What happens when artificial intelligence comes to Ottawa

More on the need for caution in government adoption of AI for decision-making (Ottawa’s use of AI in immigration system has profound implications for human rights):

There is a notion that the choices a computer algorithm makes on our behalf are neutral and somehow more reliable than our notoriously faulty human decision-making.

But, as a new report presented on Parliament Hill Wednesday points out, artificial intelligence isn’t pristine, absolute wisdom downloaded from the clouds. Rather, it’s shaped by the ideas and priorities of the human beings who build it and by the database of examples those architects feed into the machine’s “brain” to help it “learn” and build rules on which to operate.

Much like a child is a product of her family environment—what her parents teach her, what they read to her and show her of the world—artificial intelligence sees the world through the lens we provide for it. This new report, entitled “Bots at the Gate,” contemplates how decisions rendered by artificial intelligence (AI) in Canada’s immigration and refugee systems could impact the human rights, safety and privacy of people who are by definition among the most vulnerable and least able to advocate for themselves.

The report says the federal government has been “experimenting” with AI in limited immigration and refugee applications since at least 2014, including with “predictive analytics” meant to automate certain activities normally conducted by immigration officials. “The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others,” the document warns. “These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Citing ample evidence of how biased and confused—how human—artificial intelligence can be, the report from the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy makes the case for a very deliberate sort of caution.

The authors mention how a search engine coughs up ads for criminal record checks when presented with a name it associates with a black identity. A woman searching for jobs sees lower-paying opportunities than a man doing the same search. Image recognition software matches a photo of a woman with another of a kitchen. An app store suggests a sex offender search as related to a dating app for gay men.

“You have this huge dataset, you just feed it into the algorithm and trust it to pick out the patterns,” says Cynthia Khoo, a research fellow at the Citizen Lab and a lawyer specializing in technology. “If that dataset is based on a pre-existing set of human decisions, and human decisions are also faulty and biased—if humans have been traditionally racist, for example, or biased in other ways—then that pattern will simply get embedded into the algorithm and it will say, ‘This is the pattern. This is what they want, so I’m going to keep replicating that.’”
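To make Khoo’s point concrete, here is a minimal, entirely invented Python sketch, not drawn from any real government system or dataset, of how a naive model that “just picks out the patterns” in past decisions ends up encoding past bias as its rule:

```python
# Purely illustrative sketch (invented data, no real system): how a model that
# "just picks out the patterns" in past decisions ends up encoding past bias.
from collections import defaultdict

# Hypothetical historical decisions: (applicant_group, outcome).
# Past decision-makers approved group "A" far more often than group "B".
history = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10 +
    [("B", "approve")] * 40 + [("B", "reject")] * 60
)

# "Training": tally approval rates per group -- the pattern hidden in the data.
counts = defaultdict(lambda: {"approve": 0, "total": 0})
for group, outcome in history:
    counts[group]["total"] += 1
    if outcome == "approve":
        counts[group]["approve"] += 1

def predict(group: str) -> str:
    """Learned rule: approve if the group's historical approval rate exceeds 50%."""
    rate = counts[group]["approve"] / counts[group]["total"]
    return "approve" if rate > 0.5 else "reject"

# Two otherwise identical applications now get different outcomes purely because
# of group membership: the old bias has become the algorithm's rule.
print(predict("A"))  # approve
print(predict("B"))  # reject
```

Real systems are vastly more complex, but the dynamic is the same: whatever regularities sit in the historical record, defensible or not, become the rules the system replicates.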

Immigration, Refugees and Citizenship Canada says the department launched two pilot projects in 2018 using computer analytics to identify straightforward and routine Temporary Resident Visa applications from China and India for faster processing. “The use of computer analytics is not intended to replace people,” the department said. “It is another tool to support officers and others in managing our ever-increasing volume of applications. Officers will always remain central to IRCC’s processing.”

This week, the report’s authors made the rounds on the Hill, presenting their findings and concerns to policy-makers. “It does now sound like it’s a measured approach,” says Petra Molnar, a lawyer and technology and human rights researcher with the IHRP. “Which is great.”

Other countries offer cautionary tales rather than best practices. “The algorithm that was used [to determine] whether or not someone was detained at the U.S.-Mexico border was actually set to detain everyone and used as a corroboration for the extension of the detention practices of the Trump administration,” says Molnar.

And in 2016, the U.K. government revoked the visas of 36,000 foreign students after automated voice analysis of their English language equivalency exams suggested they may have cheated and sent someone else to the exam in their place. When the automated voice analysis was compared to human analysis, however, it was found to be wrong over 20 per cent of the time—meaning the U.K. may have ejected 7,000 foreign students who had done nothing wrong.
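As a quick back-of-the-envelope check of those figures (the 36,000 revocations and the 20-plus per cent error rate are as reported; the arithmetic below is only an illustration):

```python
# Rough arithmetic behind the U.K. example above (reported figures, illustrative math).
revoked = 36_000        # visas revoked after automated voice analysis
error_rate = 0.20       # analysis wrong "over 20 per cent of the time"
wrongly_flagged = revoked * error_rate
print(wrongly_flagged)  # 7200.0 -- roughly the "7,000 students" cited above
```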

The European Union’s General Data Protection Regulation, which took effect in May 2018, is, on the other hand, the gold standard, enshrining such concepts as “the right to an explanation,” or the legal certainty that if your data was processed by an automated tool, you have the right to know how it was done.

Immigration and refugee decisions are both opaque and highly discretionary even when rendered by human beings, argues Molnar, pointing out that two different immigration officers may look at the same file and reach different decisions. The report argues that lack of transparency reaches a different level when you introduce AI into the equation, outlining three distinct reasons.

First, automated decision-making solutions are often created by outside entities that sell them to government agencies, so the source code, training data and other information would be proprietary and hidden from public view.

Second, full disclosure of the guts of these programs might be a bad idea anyway because it could allow people to “game” the system.

“Third, as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers,” the authors explain. “In these cases, even when all aspects of a system are reviewable and superficially ‘transparent,’ the precise rationale for a given output may remain uninterpretable and unexplainable.” Many of these systems end up inscrutable black boxes that could spit out determinations on the futures of vulnerable people, the report argues.
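A minimal, invented illustration of that third point: even when every parameter of a model is open to inspection and therefore “superficially transparent,” the reason behind any single output can resist a concise explanation. Nothing below corresponds to a real deployed system.

```python
# Toy two-layer scoring model with every weight visible, yet no concise rationale
# for any individual score: the output depends on many non-linear interactions.
import math
import random

random.seed(0)
N_INPUTS, N_HIDDEN = 50, 20

# All parameters are printable and "reviewable" in full...
hidden_w = [[random.uniform(-1, 1) for _ in range(N_INPUTS)] for _ in range(N_HIDDEN)]
output_w = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]

def score(features):
    """Risk-style score in [0, 1] from 50 inputs passed through a non-linear layer."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features))) for row in hidden_w]
    z = sum(w * h for w, h in zip(output_w, hidden))
    return 1 / (1 + math.exp(-z))

applicant = [random.uniform(0, 1) for _ in range(N_INPUTS)]
print(round(score(applicant), 3))
# ...but "why did this applicant score what they did?" has no short answer a
# decision subject could meaningfully contest -- the black-box problem in miniature.
```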

Her group aims to use a “carrot-and-stick approach,” Khoo says, urging the federal government to make Canada a world leader on this in both a human rights and high-tech context. It’s a message that may find a receptive audience with a government that has been eager to make both halves of that equation central to its brand at home and abroad.

But they’ll have to move fast: If AI is currently in a nascent state in policy decisions that shape real people’s lives, it’s growing fast and won’t stay there for long.

“This is happening everywhere,” Khoo says.

Source: What happens when artificial intelligence comes to Ottawa

Ottawa’s use of AI in immigration system has profound implications for human rights

Good discussion of the main issues and the need for care and accountability frameworks in the development of AI and its algorithms.

The authors also note that “Human decision-making is also riddled with bias and error” (unfortunately, we don’t have any comparable analysis to that of Sean Rehaag with respect to IRB and Federal Court immigration-related decisions – Getting refugee decisions appealed in court ‘the luck of the draw,’ study shows):

How would you feel if an algorithm made a decision about your application for a Canadian work permit, or determined how much money you can bring in as an investor? What if it decided whether your marriage is “genuine?” Or if it trawled through your Tweets or Facebook posts to determine if you are “suspicious” and therefore a “risk,” without ever revealing any of the categories it used to make this decision?

While seemingly futuristic, these types of questions will soon be put to everyone who interacts with Canada’s immigration system.

A report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that algorithms and artificial intelligence are augmenting and replacing human decision makers in Canada’s immigration and refugee system, with profound implications for fundamental human rights.

We know that Canada has been introducing automated decision-making experiments into the immigration determination process since at least 2014. These automated techniques support the evaluation of immigrant and visitor applications such as Express Entry for permanent residence. Recent announcements signal an expansion of these technologies to a wider variety of applications and immigration decisions in the coming years.

Exploring new technologies and innovations is exciting and necessary, particularly in an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes. However, without proper oversight mechanisms and accountability measures, the use of AI threatens to create a laboratory for high-risk experiments.

The system is already opaque. The ramifications of using AI in immigration and refugee decisions are far-reaching. Vulnerable and under-resourced communities such as those without citizenship often have access to less-robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may serve only to exacerbate these disparities and can result in severe rights violations, such as discrimination and threats to life and liberty.

Without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion or travel patterns, and thus entrench bias in the technology. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies. This could lead to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, and due process and procedural fairness issues, such as the right to a fair and impartial decision-maker and the ability to appeal a decision. These rights are protected internationally by instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Economic, Social and Cultural Rights, among others, and domestically by the Canadian Charter of Rights and Freedoms and accompanying provincial human rights legislation.

At this point, there are more questions than answers.

If an algorithm makes a decision about your fate, can it be considered fair and impartial if it relies on biased data that is not made public? What happens to your data during the course of these decisions, and can it be shared with other departments, or even with the government of your country, potentially putting you at risk? The use of AI has already been criticized in the predictive policing context, where algorithms have linked race with the likelihood of re-offending, linked women with lower-paying jobs, or purported to discern sexual orientation from photos.

Given the already limited safeguards and procedural justice protections in immigration and refugee decisions, the use of discriminatory and biased algorithms has profound ramifications for a person’s safety, life, liberty, security and mobility. Before exploring how these technologies will be used, we need to create a framework for transparency and accountability that addresses bias and error in automated decision-making.

Our report recommends that Ottawa establish an independent, arm’s-length body with the power to oversee and review all automated decision-making systems used by the federal government, and that it publish all current and future uses of AI by the government. We also advocate for the creation of a task force that brings together key government stakeholders, alongside academia and civil society, to better understand the current and prospective impacts of automated decision-system technologies on human rights and the public interest more broadly.

Without these frameworks and mechanisms, we risk creating a system that – while innovative and efficient – could ultimately result in human rights violations. Canada is exploring the use of this technology in high-risk contexts within an accountability vacuum. Human decision-making is also riddled with bias and error, and AI may in fact have positive impacts in terms of fairness and efficiency. We need a new framework of accountability that builds on the safeguards and review processes we have in place for the frailties in human decision-making. AI is not inherently objective or immune to bias and must be implemented only after a broad and critical look at the very real impacts these technologies will have on human lives.

Source: Ottawa’s use of AI in immigration system has profound implications for human rights